
Savitribai Phule Pune University

Faculty of Information Technology

414467: Computer Laboratory - X

BEIT (2015 Course)


Semester - II

Teaching Scheme: Practical: 2 Hrs. / Week
Examination Scheme: Term Work: 25 Marks; Oral: 25 Marks

LABORATORY MANUAL (Version 1.0)

ACADEMIC YEAR 2018-19



VISION

To provide excellent Information Technology education by building a strong teaching and research environment.

MISSION

1) To transform the students into innovative, competent and high quality IT professionals to meet the growing global challenges.
2) To achieve and impart quality education with an emphasis on practical skills and social relevance.
3) To endeavor for continuous up-gradation of technical expertise of students to cater to the needs of the society.
4) To achieve an effective interaction with industry for mutual benefits.
PROGRAM EDUCATIONAL OBJECTIVES

Graduates of the Information Technology program will:

1. Possess strong fundamental concepts in mathematics, science, engineering and technology to address technological challenges with emerging trends.

2. Possess knowledge and skills in the field of Computer Science & Engineering and Information Technology for analyzing, designing and implementing multifaceted engineering problems of any domain with innovative and efficient approaches.

3. Acquire an attitude and aptitude for research, entrepreneurship and higher studies in the field of Computer Science & Engineering and Information Technology.

4. Learn commitment to ethical practices, societal contributions through communities and life-long intellect.

5. Attain better communication, presentation, time management and teamwork skills, leading to responsible and competent professionals who are able to address challenges in the field of IT at the global level.
PROGRAM OUTCOMES
The students in the Information Technology course will attain:

a. An ability to apply knowledge of computing, mathematics including discrete mathematics as well as probability and statistics, science, engineering and technology.

b. An ability to define a problem and provide a systematic solution with the help of conducting experiments, as well as analyzing and interpreting the data.

c. An ability to design, implement, and evaluate a software or a software/hardware co-system, component, or process to meet desired needs within realistic constraints.

d. An ability to identify, formulate, and provide systematic solutions to complex engineering problems.

e. An ability to use the techniques, skills, modern engineering tools and standard processes necessary for practice as an IT professional.

f. An ability to apply mathematical foundations, algorithmic principles, and Information Technology theory in the modeling and design of computer-based systems with necessary constraints and assumptions.

g. An ability to analyze the local and global impact of computing on individuals, organizations and society.

h. An ability to understand professional, ethical, legal, security and social issues and responsibilities.

i. An ability to function effectively as an individual or as a team member to accomplish a desired goal(s).

j. An ability to engage in life-long learning and continuing professional development to cope with fast changes in technologies and tools with the help of electives, professional organizations and extra-curricular activities.

k. An ability to communicate effectively with the engineering community at large by means of effective presentations, report writing, paper publications and demonstrations.

l. An ability to understand engineering, management and financial aspects, performance, optimizations and time complexity necessary for professional practice.

m. An ability to apply design and development principles in the construction of software systems of varying complexity.
Sinhgad College of Engineering, Pune

Department of Information Technology

Compliance
Document Control

Reference Code: SCOE-IT / Lab Manual Procedures
Version No: 1.0
Compliance Status: Complete
Revision Date: 18 Feb 2019
Academic Year: 2018-19
Security Classification: Department Specific
Document Status: Definitive
Review Period: Yearly

Authors

Name of Faculty Name of Institute


Prof. S.C. Badwaik Sinhgad College of Engineering, Pune
Prof. C.D. Kokane Sinhgad College of Engineering, Pune
Prof. Jaydeep Patil AISSMS's Institute of Information Technology, Pune
Prof. Supriya Patil Sinhgad College of Engineering, Pune
Savitribai Phule Pune University
FACULTY OF INFORMATION TECHNOLOGY
Syllabus

414467: COMPUTER LABORATORY-X


(2015 Course)

Teaching Scheme: Practical: 2 Hours/Week; Credit: 01
Examination Scheme: Term Work: 25 Marks; Practical: 25 Marks

Prerequisites:

1. Computer Network Technology.

2. Human Computer Interface.

Course Objectives:

1. To design and implement user interfaces for performing database operations.

2. To design applications for accessing smart devices and data generated through sensors
and services.

3. To implement authentication protocols for providing security.

Course Outcomes:

Upon successful completion of this course, students will be able to:

1. Set up the Android environment and explain the evolution of cellular networks.

2. Develop user interfaces using pre-built Android UI components.

3. Create applications for performing CRUD operations on an SQLite database using Android.

4. Create smart Android applications using the data captured through sensors.

5. Implement authentication protocols between two mobile devices for providing security.

6. Analyze the data collected through Android sensors using any machine learning algorithm.

Guidelines:
This Computer Laboratory-X course has ubiquitous computing as a core subject. The problem
statements should be framed based on the first six assignments mentioned in the syllabus. The teachers will
frame the problem statements with due consideration that students have three hours to complete
them. The practical examination will comprise implementation and related theory. All assignments are to
be performed in Java 9.

Tools Required: Android SDK / Android Studio, SQLite, sensors, Arduino kit.

Laboratory Assignments

Assignment 1

Android development environment. Installing and setting up the environment. Hello world
application. Running the emulator. Inserting debug messages.

Assignment 2

Android UI Design: Design a User Interface using pre-built UI components such as structured layout
objects, UI controls and special interfaces such as dialogs, notifications, and menus. Also make this
UI attractive using Android graphics platform OpenGL.

Assignment 3

Android-database Connectivity: Create a SQLite Database for an Android Application and perform
CRUD (Create, Read, Update and Delete) database operations.

Assignment 4

Sensors for building Smart Applications: Use any sensors on the device to add rich location and
motion capabilities to your app, from GPS or network location to accelerometer, gyroscope,
temperature, barometer, and more.

Assignment 5

Develop a Smart Light System (a light that automatically switches on in the evening and off in the
morning) using an open source hardware platform like Arduino, a sensor (light dependent
resistor) and an actuator (an LED).

Assignment 6

Design and Develop a GUI for FAN regulator that uses Android platform.

Assignment 7

Develop an Android based FAN regulator using open source Hardware platform like NodeMcu and
actuator (a SERVO Motor).
Assignment 8

Android and Machine Learning: Mobile multimodal sensing- Draw inferences over the data coming
from phone’s sensing hardware (e.g. accelerometer, GPS, microphone), and processing these
samples with the help of machine learning. (Any Application: Healthcare, Smart City, Agriculture,
etc).

Assignment 9

Android API: Implement an application that uses Android APIs like Google Map, recording and
playing audio and video, using the built-in camera as an input device.

Assignment 10

Wireless Network: Develop an app for a rolling display program of news on computer display. The
input strings are supplied by the mobile phone/ by another computer connected through wireless
networks.

Assignment 11

Android Security: Authentication of two mobile devices.

Assignment 12

Case Study: Evolution of cellular networks all the way up to 7G.

Links for Laboratory Assignments

1. https://developer.android.com/
2. https://www.androidhive.info/2011/11/android-sqlite-database-tutorial/
3. https://developers.google.com/android/guides/api-client
4. https://developer.android.com/guide/topics/sensors/sensors_overview
INDEX

Sr. No.  Title  (Page No.)

1. Android development environment. Installing and setting up the environment. Hello world application. Running the emulator. Inserting debug messages. (14)
2. Android UI Design: Design a User Interface using pre-built UI components such as structured layout objects, UI controls and special interfaces such as dialogs, notifications, and menus. Also, make this UI attractive using Android graphics platform OpenGL. (31)
3. Android-database Connectivity: Create a SQLite Database for an Android Application and perform CRUD (Create, Read, Update and Delete) database operations. (52)
4. Sensors for building Smart Applications: Use any sensors on the device to add rich location and motion capabilities to your app, from GPS or network location to accelerometer, gyroscope, temperature, barometer, and more. (69)
5. Develop a Smart Light System (a light that automatically switches on in the evening and off in the morning) using an open source hardware platform like Arduino, a sensor (light dependent resistor) and an actuator (an LED). (70)
6. Design and Develop a GUI for FAN regulator that uses Android platform. (74)
7. Develop an Android based FAN regulator using open source Hardware platform like NodeMcu and actuator (a SERVO Motor). (78)
8. Android and Machine Learning: Mobile multimodal sensing - Draw inferences over the data coming from phone's sensing hardware (e.g. accelerometer, GPS, microphone), and process these samples with the help of machine learning. (Any Application: Healthcare, Smart City, Agriculture, etc.) (-)
9. Android API: Implement an application that uses Android APIs like Google Map, recording and playing audio and video, using the built-in camera as an input device. (-)
10. Wireless Network: Develop an app for a rolling display program of news on computer display. The input strings are supplied by the mobile phone / by another computer connected through wireless networks. (-)
11. Android Security: Authentication of two mobile devices. (91)
12. Case Study: Evolution of cellular networks all the way up to 7G. (98)


Lab Planning (Scheduling)

Sr. No.  Title  (No. of Hrs. / Week)

1. Android development environment. Installing and setting up the environment. Hello world application. Running the emulator. Inserting debug messages. (2 Hrs. / Week 1)
2. Android UI Design: Design a User Interface using pre-built UI components such as structured layout objects, UI controls and special interfaces such as dialogs, notifications, and menus. Also, make this UI attractive using Android graphics platform OpenGL. (2 Hrs. / Week 2)
3. Android-database Connectivity: Create a SQLite Database for an Android Application and perform CRUD (Create, Read, Update and Delete) database operations. (2 Hrs. / Week 3)
4. Sensors for building Smart Applications: Use any sensors on the device to add rich location and motion capabilities to your app, from GPS or network location to accelerometer, gyroscope, temperature, barometer, and more. (2 Hrs. / Week 4)
5. Develop a Smart Light System (a light that automatically switches on in the evening and off in the morning) using an open source hardware platform like Arduino, a sensor (light dependent resistor) and an actuator (an LED). (2 Hrs. / Week 5)
6. Design and Develop a GUI for FAN regulator that uses Android platform. (2 Hrs. / Week 6)
7. Develop an Android based FAN regulator using open source Hardware platform like NodeMcu and actuator (a SERVO Motor). (2 Hrs. / Week 7)
8. Android and Machine Learning: Mobile multimodal sensing - Draw inferences over the data coming from phone's sensing hardware (e.g. accelerometer, GPS, microphone), and process these samples with the help of machine learning. (Any Application: Healthcare, Smart City, Agriculture, etc.) (2 Hrs. / Week 8)
9. Android API: Implement an application that uses Android APIs like Google Map, recording and playing audio and video, using the built-in camera as an input device. (2 Hrs. / Week 9)
10. Wireless Network: Develop an app for a rolling display program of news on computer display. The input strings are supplied by the mobile phone / by another computer connected through wireless networks. (2 Hrs. / Week 10)
11. Android Security: Authentication of two mobile devices. (2 Hrs. / Week 11)
12. Case Study: Evolution of cellular networks all the way up to 7G. (2 Hrs. / Week 12)
Assignment No-1 Date:

Title: Android development environment, Installing and setting up the environment.


Assignment No-1
Aim: Android development environment. Installing and setting up the environment. Hello world
application. Running the emulator. Inserting debug messages.

Objective:

• Install and use the Android IDE.
• Understand the development process for building Android apps.
• Create an Android project from a basic app template.

Theory:
Android Studio is Google's IDE for Android apps. Android Studio gives you an advanced code
editor and a set of app templates. In addition, it contains tools for development, debugging,
testing, and performance that make it faster and easier to develop apps. You can test your
apps with a large range of preconfigured emulators or on your own mobile device, and build
production APKs for publication.

To get up and running with Android Studio:

• You may need to install the Java Development Kit - Java 7 or better.
• Install Android Studio.

Task 1. Install Android Studio

Android Studio is available for Windows, Mac, and Linux computers. The installation is similar
for all platforms. Any differences will be noted in the sections below.

Installing the Java Development Kit

1. On your computer, open a terminal window.


2. Type java -version

The output includes a line:

Java(TM) SE Runtime Environment (build 1.X.0_05-b13)

X is the version number to look at.

- If this is 7 or greater, you can move on to installing Android Studio.
- If the Java SE version is below 7, or if Java is not installed, you need to install the latest
version of the Java SE Development Kit before installing Android Studio.

Installing Android Studio

1. Navigate to the Android developers site and follow the instructions to download and install
Android Studio.

o Accept the default configurations for all steps.


o Make sure that all components are selected for installation.

2. After finishing the install, the Setup Wizard will download and install some additional
components.
3. When the download completes, Android Studio will start, and you are ready to create your
first project.

Task 2: Create "Hello World" app

In this task, you will implement the "Hello World" app to verify that Android studio is correctly
installed and learn the basics of developing with Android Studio.

2.1 Create the "Hello World" app

1. Launch Android Studio.


2. In the main Welcome to Android Studio window, click "Start a new Android Studio project".
3. In the New Project window, give your application an Application Name, such as "Hello
World".
4. Verify the Project location, or choose a different directory for storing your project.
5. Choose a unique Company Domain.

o Apps published to the Google Play Store must have a unique package name. Since
domains are unique, prepending your app's name with your or your company's
domain name is going to result in a unique package name.
o If you are not planning to publish your app, you can accept the default example
domain. Be aware that changing the package name of your app later is extra work.

6. Verify that the default Project location is where you want to store your Hello World app and
other Android Studio projects, or change it to your preferred directory. Click Next.
7. On the Target Android Devices screen, "Phone and Tablet" should be selected.
8. Click Next.
9. If your project requires additional components for your chosen target SDK, Android Studio
will install them automatically. Click Next.
10. Customize the Activity window. Every app needs at least one activity. An activity represents
a single screen with a user interface and Android Studio provides templates to help you get
started. For the Hello World project, choose the simplest template (as of this writing, the
"Empty Activity" project template is the simplest template) available.
11. It is a common practice to call your main activity MainActivity. This is not a requirement.
12. Make sure the Generate Layout file box is checked (if visible).
13. Make sure the Backwards Compatibility (App Compat) box is checked.
14. Leave the Layout Name as activity_main. It is customary to name layouts after the activity
they belong to. Accept the defaults and click Finish.

After these steps, Android Studio:

• Creates a folder for your Android Studio Projects.
• Builds your project with Gradle (this may take a few moments). Android Studio uses Gradle as its build system. See the Configure your build developer page for more information.
• Opens the code editor with your project.
• Displays a tip of the day.
  o Android Studio offers many keyboard shortcuts, and reading the tips is a great way to learn them over time.

The Android Studio window should look similar to the following diagram:
You can look at the hierarchy of the files for your app in multiple ways.

1. Click on the Hello World folder to expand the hierarchy of files (1),
2. Click on Project (2).
3. Click on the Android menu (3).
4. Explore the different view options for your project.

Task 3: Explore the project structure (Optional)

In this practical, you will explore how the project files are organized in Android Studio.

These steps assume that your Hello World project starts out as shown in the diagram above.

3.1 Explore the project structure and layout

In the Project > Android view of your previous task, there are three top-level folders below
your app folder: manifests, java, and res.
1. Expand the manifests folder.

This folder contains AndroidManifest.xml. This file describes all of the components of your
Android app and is read by the Android run-time system when your program is executed.

2. Expand the java folder. All your Java language files are organized in this folder.
The java folder contains three subfolders:
o com.example.hello.helloworld (or the domain name you have specified): All the files for a
package are in a folder named after the package. For your Hello World application, there is
one package and it only contains MainActivity.java (the file extension may be omitted in the
Project view).
o com.example.hello.helloworld(androidTest): This folder is for your instrumented tests, and
starts out with a skeleton test file.
o com.example.hello.helloworld(test): This folder is for your unit tests and starts out with an
automatically created skeleton unit test file.
3. Expand the res folder. This folder contains all the resources for your app, including images,
layout files, strings, icons, and styling. It includes these subfolders:
o drawable. Store all your app's images in this folder.
o layout. Every activity has at least one layout file that describes the UI in XML. For Hello World,
this folder contains activity_main.xml.
o mipmap. Store your launcher icons in this folder. There is a sub-folder for each supported
screen density. Android uses the screen density, that is, the number of pixels per inch to
determine the required image resolution. Android groups all actual screen densities into
generalized densities, such as medium (mdpi), high (hdpi), or extra-extra-extra-high
(xxxhdpi). The ic_launcher.png files contain the default launcher icons for all the densities
supported by your app.
o values. Instead of hardcoding values like strings, dimensions, and colors in your XML and Java
files, it is best practice to define them in their respective values file. This makes it easier to
change and be consistent across your app.
4. Expand the values subfolder within the res folder. It includes these subfolders:
o colors.xml. Shows the default colors for your chosen theme, and you can add your own colors
or change them based on your app's requirements.
o dimens.xml. Store the sizes of views and objects for different resolutions.
o strings.xml. Create resources for all your strings. This makes it easy to translate them to other
languages.
o styles.xml. All the styles for your app and theme go here. Styles help give your app a
consistent look for all UI elements.
3.2 The Gradle build system

Android Studio uses Gradle as its build system. As you progress through these practicals, you
will learn more about gradle and what you need to build and run your apps.

1. Expand the Gradle Scripts folder. This folder contains all the files needed by the build system.
2. Look for the build.gradle(Module:app) file. When you are adding app-specific dependencies,
such as using additional libraries, they go into this file.

Task 4: Create a virtual device (emulator)

In this task, you will use the Android Virtual Device (AVD) manager to create a virtual device
or emulator that simulates the configuration for a particular type of Android device.

Using the AVD Manager, you define the hardware characteristics of a device and its API level,
and save it as a virtual device configuration.

When you start the Android emulator, it reads a specified configuration and creates an
emulated device that behaves exactly like a physical version of that device, but it resides on
your computer.

Why: With virtual devices, you can test your apps on different devices (tablets, phones) with
different API levels to make sure it looks good and works for most users. You do not need to
depend on having a physical device available for app development.

4.1 Create a virtual device

In order to run an emulator on your computer, you have to create a configuration that
describes the virtual device.

1. In Android Studio, select Tools > Android > AVD Manager, or click the AVD Manager

icon in the toolbar.


2. Click the +Create Virtual Device…. (If you have created a virtual device before, the window
shows all of your existing devices and the button is at the bottom.)

The Select Hardware screen appears showing a list of preconfigured hardware devices. For
each device, the table shows its diagonal display size (Size), screen resolution in pixels
(Resolution), and pixel density (Density).

For the Nexus 5 device, the pixel density is xxhdpi, which means your app uses the launcher
icons in the xxhdpi folder of the mipmap folder. Likewise, your app will use layouts and
drawables from folders defined for that density as well.

3. Choose the Nexus 5 hardware device and click Next.


4. On the System Image screen, from the recommended tab, choose which version of the
Android system to run on the virtual device. You can select the latest system image.

There are many more versions available than shown in the recommended tab. Look at
the x86 Images and Other Images tabs to see them.

5. If a Download link is visible next to a system image version, it is not installed yet, and you
need to download it. If necessary, click the link to start the download, and click Finish when
it's done.
6. On System Image screen, choose a system image and click next.
7. Verify your configuration, and click Finish. (If the Your Android Devices AVD Manager
window stays open, you can go ahead and close it.)

Task 5. Run your app on an emulator

In this task, you will finally run your Hello World app.

5.1 Run your app on an emulator

1. In Android Studio, select Run > Run app or click the Run icon in the toolbar.
2. In the Select Deployment Target window, under Available Emulators, select Nexus 5 API
23 and click OK.

The emulator starts and boots just like a physical device. Depending on the speed of your
computer, this may take a while. Your app builds, and once the emulator is ready, Android
Studio will upload the app to the emulator and run it.

You should see the Hello World app as shown in the following screenshot.
OUTPUT
Note: When testing on an emulator, it is a good practice to start it up once, at the very
beginning of your session. You should not close the emulator until you are done testing your
app, so that your app doesn't have to go through the boot process again.

Task 6. Add log statements to your app

In this practical, you will add log statements to your app, which are displayed in the logging
window of the Android Monitor.

Why: Log messages are a powerful debugging tool that you can use to check on values,
execution paths, and report exceptions.

The Android Monitor displays information about your app.

1. Click the Android Monitor button at the bottom of Android Studio to open the Android
Monitor.

By default, this opens to the logcat tab, which displays information about your app as it is
running. If you add log statements to your app, they are printed here as well.

You can also monitor the Memory, CPU, GPU, and Network performance of your app from
the other tabs of the Android Monitor. This can be helpful for debugging and performance
tuning your code.

2. The default log level is Verbose. In the drop-down menu, change the log level to Debug.
Log statements that you add to your app code print a message specified by you in the logcat
tab of the Android Monitor. For example:

Log.d("MainActivity", "Hello World");


The parts of the message are:

• Log – The Log class. API for sending log messages.
• d – The log level. Used to filter log message display in logcat. "d" is for debug. Other log levels are "e" for error, "w" for warning, and "i" for info.
• "MainActivity" – The first argument is a tag which can be used to filter messages in logcat. This is commonly the name of the activity from which the message originates. However, you can make this anything that is useful to you for debugging.

By convention, log tags are defined as constants:

private static final String LOG_TAG = MainActivity.class.getSimpleName();

• "Hello World" – The second argument is the actual message.

6.1 Add log statements to your app

1. Open your Hello World app in Android Studio, and open the MainActivity file.
2. Choose File > Settings > Editor > General > Auto Import (on Mac: Android Studio > Preferences > Editor
> General > Auto Import). Select all check boxes and set Insert imports on paste to All.
Unambiguous imports are now added automatically to your files. Note that the "Add unambiguous
imports on the fly" option is important for some Android features such as NumberFormat. If
it is not checked, NumberFormat shows an error. Click Apply and then OK.
3. In the onCreate method, add the following log statement:

   Log.d("MainActivity", "Hello World");

4. If the Android Monitor is not already open, click the Android Monitor tab at the bottom of
Android Studio to open it.
5. Make sure that the Log level in the Android Monitor logcat is set to Debug or Verbose
(default).
6. Run your app.

Solution Code:

package com.example.hello.helloworld;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

public class MainActivity extends AppCompatActivity


{ @Override
protected void onCreate(Bundle savedInstanceState)
{ super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Log.d("MainActivity", "Hello World");
}
}
Output Log Message

03-18 12:20:23.184 2983-2983/com.example.hello.helloworld D/MainActivity: Hello World
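Following the constant-tag convention described above, a minimal sketch of the same activity (illustrative only; the solution code above uses a string literal tag instead) might look like this:

package com.example.hello.helloworld;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

public class MainActivity extends AppCompatActivity {

    // Conventional constant tag, derived once from the class name.
    private static final String LOG_TAG = MainActivity.class.getSimpleName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // The constant makes it easy to filter this activity's messages in logcat.
        Log.d(LOG_TAG, "Hello World");
    }
}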


Task 7: Explore the AndroidManifest.xml file

Every app includes an Android Manifest file (AndroidManifest.xml). The manifest file contains
essential information about your app and presents this information to the Android runtime
system. Android must have this information before it can run any of your app's code.
In this practical you will find and read the AndroidManifest.xml file for the Hello World app.

7.1 Explore the AndroidManifest.xml file

1. Open your Hello World app in Android Studio, and in the manifests folder,
open AndroidManifest.xml.
2. Read the file and consider what each line of code indicates. The code below is annotated to
give you some hints.

Annotated code:
<!-- XML version and character encoding -->
<?xml version="1.0" encoding="utf-8"?>
<!-- Required starting tag for the manifest -->
<manifest
    <!-- Defines the android namespace. Do not change. -->
    xmlns:android="http://schemas.android.com/apk/res/android"
    <!-- Unique package name of your app. Do not change once app is
         published. -->
    package="com.example.hello.helloworld">
    <!-- Required application tag -->
    <application
        <!-- Allow the application to be backed up and restored. -->
        android:allowBackup="true"
        <!-- Icon for the application as a whole,
             and default icon for application components. -->
        android:icon="@mipmap/ic_launcher"
        <!-- User-readable label for the application as a whole,
             and default label for application components. Notice that Android
             Studio first shows the actual label "Hello World".
             Click on it, and you will see that the code actually refers to a string
             resource. Ctrl-click @string/app_name to see where the resource is
             specified. This will be covered in a later practical. -->
        android:label="@string/app_name"
        <!-- Whether the app is willing to support right-to-left layouts. -->
        android:supportsRtl="true"
        <!-- Default theme for styling all activities. -->
        android:theme="@style/AppTheme">
        <!-- Declares an activity. One is required.
             All activities must be declared,
             otherwise the system cannot see and run them. -->
        <activity
            <!-- Name of the class that implements the activity;
                 subclass of Activity. -->
            android:name=".MainActivity">
            <!-- Specifies the intents that this activity can respond to. -->
            <intent-filter>
                <!-- The action and category together determine what
                     happens when the activity is launched. -->
                <!-- Start activity as the main entry point.
                     Does not receive data. -->
                <action android:name="android.intent.action.MAIN" />
                <!-- Start this activity as a top-level activity in
                     the launcher. -->
                <category android:name="android.intent.category.LAUNCHER" />
                <!-- Closing tags -->
            </intent-filter>
        </activity>
    </application>
</manifest>

Task 8. Explore the build.gradle file

Android Studio uses a build system called Gradle. Gradle does incremental builds, which allows
for shorter edit-test cycles.

In this task, you will explore the build.gradle file.


Why: When you add new libraries to your Android project, you may also have to update
your build.gradle file. It's useful to know where it is and its basic structure.

8.1 Explore the build.gradle (Module: app) file

1. In your project hierarchy, find Gradle Scripts and expand it. There are several build.gradle files:
one with directives for your whole project, and one for each app module. The module for
your app is called "app". In the Project view, it is represented by the app folder at the top
level of the Project view.
2. Open build.gradle (Module: app).
3. Read the file and learn what each line of code indicates.

Solution:

// Add Android-specific build tasks


apply plugin: 'com.android.application'
// Configure Android specific build options.
android {
// Specify the target SDK version for the build.
compileSdkVersion 23
// The version of the build tools to use.
buildToolsVersion "23.0.2"
// Core settings and entries. Overrides manifest settings!
defaultConfig {
applicationId "com.example.hello.helloworld"
minSdkVersion 15
targetSdkVersion 23
versionCode 1
versionName "1.0"
}
// Controls how app is built and packaged.
buildTypes {
// Another common option is debug, which is not signed by default.
release {
// Code shrinker. Turn this on for production along with
// shrinkResources.
minifyEnabled false
// Use ProGuard, a Java optimizer.
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
// This is the part you are most likely to change as you start using
// other libraries.
dependencies {
// Local binary dependency. Include any JAR file inside app/libs.
compile fileTree(dir: 'libs', include: ['*.jar'])
// Configuration for unit tests.
testCompile 'junit:junit:4.12'
// Remote binary dependency. Specify Maven coordinates of the Support
// Library needed. Use the SDK Manager to download and install such
// packages.
compile 'com.android.support:appcompat-v7:23.2.1'
}

Task 9. [Optional] Run your app on a device

In this final task, you will run your app on a physical mobile device such as a phone or tablet.

Why: Your users will run your app on physical devices. You should always test your apps on
both virtual and physical devices.

What you need:

• An Android device such as a phone or tablet.
• A data cable to connect your Android device to your computer via the USB port.
• If you are using a Linux or Windows OS, you may need to perform additional steps to run on a hardware device. Check the Using Hardware Devices documentation. On Windows, you may need to install the appropriate USB driver for your device. See OEM USB Drivers.

9.1 [Optional] Run your app on a device

To let Android Studio communicate with your device, you must turn on USB Debugging on
your Android device. This is enabled in the Developer options settings of your device. Note this
is not the same as rooting your device.

On Android 4.2 and higher, the Developer options screen is hidden by default. To show
Developer options and enable USB Debugging:
1. On your device, open Settings > About phone and tap Build number seven times.
2. Return to the previous screen (Settings). Developer options appears at the bottom of the list.
Click Developer options.
3. Choose USB Debugging.

Now you can connect your device and run the app from Android Studio.

1. Connect your device to your development machine with a USB cable.


2. In Android Studio, at the bottom of the window, click the Android Monitor tab. You should
see your device listed in the top-left drop-down menu.
3. Click the Run button in the toolbar. The Select Deployment Target window opens with
the list of available emulators and connected devices.
4. Select your device, and click OK.

Android Studio should install and run the app on your device.

Conclusion:

Thus, we have learned how to install and use the Android IDE, understood the development
process for building Android apps, and created an Android project from a basic app template.

FAQs:-

What devices are supported for Google Play Instant?

Do developers need to build two different Android apps?

Can users choose to install the app permanently?

How do permissions work in Google Play Instant?

Which permissions are available to an instant app?


Assignment No. : 2

Title: Android UI Design: Design a User Interface using pre-built UI components



Back to Index
Lab. Assignment No – 2
Aim: Android UI Design: Design a User Interface using pre-built UI components such as
structured layout objects, UI controls and special interfaces such as dialogs, notifications, and
menus. Also make this UI attractive using Android graphics platform OpenGL.

Objective: To design user interfaces using pre-built Android UI components, and to use dialogs and menus to enhance them.

Theory:
Your app's user interface is everything that the user can see and interact with. Android
provides a variety of pre-built UI components such as structured layout objects and UI
controls that allow you to build the graphical user interface for your app. Android also
provides other UI modules for special interfaces such as dialogs, notifications, and menus.

Layouts

A layout defines the structure for a user interface in your app, such as in an activity. All
elements in the layout are built using a hierarchy of View and ViewGroup objects. A View
usually draws something the user can see and interact with. Whereas a ViewGroup is an
invisible container that defines the layout structure for View and other ViewGroup objects, as
shown in figure 1.

Figure 1. Illustration of a view hierarchy, which defines a UI layout

The View objects are usually called "widgets" and can be one of many subclasses, such
as Button or TextView. The ViewGroup objects are usually called "layouts" and can be one of many
types that provide a different layout structure, such as LinearLayout or ConstraintLayout.

You can declare a layout in two ways:

• Declare UI elements in XML. Android provides a straightforward XML vocabulary that corresponds to the View classes and subclasses, such as those for widgets and layouts. You can also use Android Studio's Layout Editor to build your XML layout using a drag-and-drop interface.
• Instantiate layout elements at runtime. Your app can create View and ViewGroup objects (and manipulate their properties) programmatically.

Load the XML Resource

When you compile your app, each XML layout file is compiled into a View resource. You should
load the layout resource from your app code, in your Activity.onCreate() callback
implementation. Do so by calling setContentView(), passing it the reference to your layout
resource in the form of: R.layout.layout_file_name. For example, if your XML layout is saved
as main_layout.xml, you would load it for your Activity like so:

JAVA
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main_layout);
}

The onCreate() callback method in your Activity is called by the Android framework when your
Activity is launched (see the discussion about lifecycles, in the Activities document).

Attributes

Every View and ViewGroup object supports their own variety of XML attributes. Some
attributes are specific to a View object (for example, TextView supports the textSize attribute),
but these attributes are also inherited by any View objects that may extend this class. Some
are common to all View objects, because they are inherited from the root View class (like the
id attribute). And, other attributes are considered "layout parameters," which are attributes
that describe certain layout orientations of the View object, as defined by that object's parent
ViewGroup object.

ID

Any View object may have an integer ID associated with it, to uniquely identify the View within
the tree. When the app is compiled, this ID is referenced as an integer, but the ID is typically
assigned in the layout XML file as a string, in the id attribute. This is an XML attribute common
to all View objects (defined by the View class) and you will use it very often. The syntax for an
ID, inside an XML tag is:

android:id="@+id/my_button"

The at-symbol (@) at the beginning of the string indicates that the XML parser should parse
and expand the rest of the ID string and identify it as an ID resource. The plus-symbol (+)
means that this is a new resource name that must be created and added to our resources (in
the R.java file). There are a number of other ID resources that are offered by the Android
framework. When referencing an Android resource ID, you do not need the plus-symbol,
but must add the android package namespace, like so:

android:id="@android:id/empty"

With the android package namespace in place, we're now referencing an ID from
the android.R resources class, rather than the local resources class.

In order to create views and reference them from the app, a common pattern is to:

1. Define a view/widget in the layout file and assign it a unique ID:

<Button android:id="@+id/my_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/my_button_text"/>

2. Then create an instance of the view object and capture it from the layout (typically in
the onCreate() method):

JAVA
Button myButton = (Button) findViewById(R.id.my_button);

Defining IDs for view objects is important when creating a RelativeLayout. In a relative layout,
sibling views can define their layout relative to another sibling view, which is referenced by the
unique ID.

An ID need not be unique throughout the entire tree, but it should be unique within the part
of the tree you are searching (which may often be the entire tree, so it's best to be completely
unique when possible).
Layout Parameters

XML layout attributes named layout_something define layout parameters for the View that are
appropriate for the ViewGroup in which it resides.

Every ViewGroup class implements a nested class that extends ViewGroup.LayoutParams. This
subclass contains property types that define the size and position for each child view, as
appropriate for the view group. As you can see in figure 2, the parent view group defines
layout parameters for each child view (including the child view group).

Figure 2. Visualization of a view hierarchy with layout parameters associated with each view

Note that every LayoutParams subclass has its own syntax for setting values. Each child
element must define LayoutParams that are appropriate for its parent, though it may also
define different LayoutParams for its own children.

All view groups include a width and height (layout_width and layout_height), and each view is
required to define them. Many LayoutParams also include optional margins and borders.

You can specify width and height with exact measurements, though you probably won't want
to do this often. More often, you will use one of these constants to set the width or height:

• wrap_content tells your view to size itself to the dimensions required by its content.
• match_parent tells your view to become as big as its parent view group will allow.

In general, specifying a layout width and height using absolute units such as pixels is not
recommended. Instead, using relative measurements such as density-independent pixel units
(dp), wrap_content, or match_parent, is a better approach, because it helps ensure that your
app will display properly across a variety of device screen sizes. The accepted measurement
types are defined in the Available Resources document.
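The same constants are available programmatically, as mentioned in the "Instantiate layout elements at runtime" option above. The following is a hedged sketch (the class name and view contents are illustrative, assuming the support-library AppCompatActivity used elsewhere in this manual) of building a small layout in code:

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.LinearLayout;
import android.widget.TextView;

public class ProgrammaticLayoutActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build the layout in code instead of inflating an XML file.
        LinearLayout container = new LinearLayout(this);
        container.setOrientation(LinearLayout.VERTICAL);

        TextView label = new TextView(this);
        label.setText("Hello");

        // MATCH_PARENT / WRAP_CONTENT are the programmatic equivalents of
        // match_parent / wrap_content in an XML layout file.
        LinearLayout.LayoutParams params = new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT,   // width
                LinearLayout.LayoutParams.WRAP_CONTENT);  // height
        container.addView(label, params);

        setContentView(container);
    }
}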
Layout Position

The geometry of a view is that of a rectangle. A view has a location, expressed as a pair
of left and top coordinates, and two dimensions, expressed as a width and a height. The unit for
location and dimensions is the pixel.

It is possible to retrieve the location of a view by invoking the methods getLeft() and getTop().
The former returns the left, or X, coordinate of the rectangle representing the view. The latter
returns the top, or Y, coordinate of the rectangle representing the view. These methods both
return the location of the view relative to its parent. For instance, when getLeft() returns 20,
that means the view is located 20 pixels to the right of the left edge of its direct parent.

In addition, several convenience methods are offered to avoid unnecessary computations,
namely getRight() and getBottom(). These methods return the coordinates of the right and
bottom edges of the rectangle representing the view. For instance, calling getRight() is similar
to the following computation: getLeft() + getWidth().
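As an illustrative sketch (the my_button ID reuses the earlier Button example; the log tag is arbitrary), note that these getters only return meaningful values after the layout pass has happened, so one common pattern is to query them from a posted Runnable:

// Inside an Activity, after setContentView(...):
final View myView = findViewById(R.id.my_button);
myView.post(new Runnable() {
    @Override
    public void run() {
        // Coordinates are in pixels, relative to the view's direct parent.
        int left = myView.getLeft();
        int top = myView.getTop();
        int right = myView.getRight();     // equals getLeft() + getWidth()
        int bottom = myView.getBottom();   // equals getTop() + getHeight()
        Log.d("ViewPosition", "left=" + left + " top=" + top
                + " right=" + right + " bottom=" + bottom);
    }
});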

Size, Padding and Margins

The size of a view is expressed with a width and a height. A view actually possesses two pairs
of width and height values.

The first pair is known as measured width and measured height. These dimensions define how
big a view wants to be within its parent. The measured dimensions can be obtained by
calling getMeasuredWidth() and getMeasuredHeight().

The second pair is simply known as width and height, or sometimes drawing width and
drawing height. These dimensions define the actual size of the view on screen, at drawing time
and after layout. These values may, but do not have to, be different from the measured width
and height. The width and height can be obtained by calling getWidth() and getHeight().

To measure its dimensions, a view takes into account its padding. The padding is expressed in
pixels for the left, top, right and bottom parts of the view. Padding can be used to offset the
content of the view by a specific number of pixels. For instance, a left padding of 2 will push
the view's content by 2 pixels to the right of the left edge. Padding can be set using
the setPadding(int, int, int, int) method and queried by
calling getPaddingLeft(), getPaddingTop(), getPaddingRight() and getPaddingBottom().

Even though a view can define a padding, it does not provide any support for margins.
However, view groups provide such support. Refer
to ViewGroup and ViewGroup.MarginLayoutParams for further information.
For more information about dimensions, see Dimension Values.
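A short sketch of both ideas (the view ID and numeric values are placeholders): padding is set on the view itself, while margins go through the parent's MarginLayoutParams:

// Inside an Activity, after setContentView(...); my_button is the Button defined earlier.
View myButton = findViewById(R.id.my_button);

// Padding belongs to the view itself: left, top, right, bottom (in pixels).
myButton.setPadding(16, 8, 16, 8);
Log.d("PaddingDemo", "left padding = " + myButton.getPaddingLeft());

// Margins belong to the parent's layout parameters.
ViewGroup.MarginLayoutParams lp =
        (ViewGroup.MarginLayoutParams) myButton.getLayoutParams();
lp.setMargins(24, 12, 24, 12);   // left, top, right, bottom
myButton.setLayoutParams(lp);    // re-apply so the new margins take effect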

Common Layouts

Each subclass of the ViewGroup class provides a unique way to display the views you nest
within it. Below are some of the more common layout types that are built into the Android
platform.

Note: Although you can nest one or more layouts within another layout to achieve your UI
design, you should strive to keep your layout hierarchy as shallow as possible. Your layout
draws faster if it has fewer nested layouts (a wide view hierarchy is better than a deep view hierarchy).

Linear Layout

A layout that organizes its children into a single horizontal or vertical row. It creates a scrollbar
if the length of the window exceeds the length of the screen.

Relative Layout

Enables you to specify the location of child objects relative to each other (child A to the left of
child B) or to the parent (aligned to the top of the parent).
Web View

Displays web pages.

You can use built-in views to design your activity as per requirement: just drag and drop the
views in the Layout Editor and set their attributes.
Refer to the following image for an example activity design.

Dialogs

A dialog is a small window that prompts the user to make a decision or enter additional
information. A dialog does not fill the screen and is normally used for modal events that
require users to take an action before they can proceed.
Dialogs inform users about a task and can contain critical information, require decisions, or
involve multiple tasks.
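As a hedged sketch (the strings and button behavior are illustrative, using the support-library AlertDialog), a simple confirmation dialog can be built with AlertDialog.Builder:

import android.content.DialogInterface;
import android.support.v7.app.AlertDialog;
import android.view.View;

// Inside an Activity; could be wired to a button via android:onClick="confirmDelete".
public void confirmDelete(View v) {
    new AlertDialog.Builder(this)
            .setTitle("Delete entry")
            .setMessage("Are you sure you want to delete this entry?")
            .setPositiveButton("Delete", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    // Perform the confirmed action here.
                }
            })
            .setNegativeButton("Cancel", null)   // a null listener simply dismisses the dialog
            .show();
}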
Menus

Menus are a common user interface component in many types of applications. To provide a
familiar and consistent user experience, you should use the Menu APIs to present user actions
and other options in your activities.
Here, we are inflating the menu by calling the inflate() method of MenuInflater class. To
perform event handling on menu items, you need to override onOptionsItemSelected()
method of Activity class.
There are 3 types of menus in Android:
1. Option Menu: The options menu is the primary collection of menu items for an activity.
It's where you should place actions that have an overall impact on the app, such as
Search, Compose Email and Settings.

2. Context Menu: A context menu is a floating menu that appears when the user performs
a long-click on an element. It provides actions that affect the selected content or
context frame.

3. Pop-up Menu: A popup menu displays a list of items in a vertical list that is
anchored to the view that invoked the menu. It's good for providing an
overflow of actions that relate to specific content or to provide options for a second
part of a command.
How to create a Menu?
For all menu types mentioned above, Android provides a standard XML format to define menu
items. Instead of building a menu in your activity's code, you should define a menu and all its
items in an XML menu resource. You can then inflate the menu resource, i.e. load the XML file
as a Menu object in your activity.

Why to use a separate menu resource?


Using a menu resource is a good practice for a few reasons:

• It's easier to visualize the menu structure in XML.
• It separates the content for the menu from your application's behavioral code.
• It allows you to create alternative menu configurations for different platform versions, screen sizes, and other configurations by leveraging the app resources framework.

How to create a menu_file.xml file in menu directory?


To define the menu_file.xml file, first create a menu directory under res folder. This is done by
right clicking on res --> new --> Android resource directory.
Then a new window will appear. Type menu in the directory name and choose menu in the
Resource type. Then, click on OK.
A new menu directory would be made under res directory. Add menu_file.xml file in menu
directory by right clicking on menu --> New --> Menu resource file.
Give the name as menu_file.xml and click on OK. The menu_file.xml file contains the following
tags:

• <menu>
  It defines a Menu, which is a container for menu items. A <menu> element must be the
  root node for the file and can hold one or more <item> and <group> elements.

• <item>
  It creates a MenuItem, which represents a single item in a menu. This element may contain
  a nested <menu> element in order to create a submenu.

• <group>
  It is an optional, invisible container for <item> elements. It allows you to categorize menu
  items so they share properties such as active state and visibility.
menu_file.xml
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">

<item android:id="@+id/i1"
android:title="item"
>

<!-- "item" submenu -->


<menu>
<item android:id="@+id/i2"
android:title="subitem a"
/>
<item android:id="@+id/i3"
android:title="subitem b"
/>
</menu>
</item>
</menu>

The <item> element supports several attributes you can use to define an item's appearance and
behavior. The items in the above menu include the following attributes:

• android:id
  A resource ID that's unique to the item, which allows the application to recognize the item
  when the user selects it.

• android:icon
  A reference to a drawable to use as the item's icon.

• android:title
  A reference to a string to use as the item's title.


activity_main2.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="comjdjaydeeppatil.trialscoe.Main2Activity">

<TextView
android:id="@+id/textView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="51dp"
android:textSize="35sp"
android:text="This is new activity"
android:layout_alignParentTop="true"
android:layout_alignParentStart="true" />
<TextView
android:id="@+id/t1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="I am context menu"
android:paddingBottom="30dp"
android:textAllCaps="true"
android:textSize="20sp"
android:layout_marginTop="11dp"
android:layout_below="@+id/textView"
android:layout_centerHorizontal="true" />

<Button
android:id="@+id/button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:onClick="pop"
android:text="I am Pop Menu"
android:textAllCaps="true"
android:layout_below="@+id/textView"
android:layout_centerHorizontal="true"
android:layout_marginTop="68dp" />
</RelativeLayout>

Main2Activity.java
package comjdjaydeeppatil.trialscoe;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.support.v7.widget.PopupMenu;
import android.view.ContextMenu;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;
import android.view.View;
import android.widget.TextView;
import android.widget.Toast;

public class Main2Activity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main2);

        // Register the TextView so a long-click on it opens the context menu.
        registerForContextMenu((TextView) findViewById(R.id.t1));
    }

    // Options menu (app bar overflow).
    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        MenuInflater mi = getMenuInflater();
        mi.inflate(R.menu.menu_file, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.i1:
                Toast.makeText(this, "Clicked Main Menu", Toast.LENGTH_SHORT).show();
                break;
            case R.id.i2:
                Toast.makeText(this, "I am sub-menu 1", Toast.LENGTH_SHORT).show();
                break;
            case R.id.i3:
                Toast.makeText(this, "I am sub-menu 2", Toast.LENGTH_SHORT).show();
                break;
        }
        return true;
    }

    // Context menu (shown on long-click of the registered view).
    @Override
    public void onCreateContextMenu(ContextMenu menu, View v,
                                    ContextMenu.ContextMenuInfo menuInfo) {
        super.onCreateContextMenu(menu, v, menuInfo);
        MenuInflater mi = getMenuInflater();
        mi.inflate(R.menu.menu_file, menu);
    }

    @Override
    public boolean onContextItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.i1:
                Toast.makeText(this, "Clicked Main menu", Toast.LENGTH_SHORT).show();
                break;
            case R.id.i2:
                Toast.makeText(this, "I am sub-menu 1", Toast.LENGTH_SHORT).show();
                break;
            case R.id.i3:
                Toast.makeText(this, "I am sub-menu 2", Toast.LENGTH_SHORT).show();
                break;
        }
        return true;
    }

    // Pop-up menu, anchored to the Button (android:onClick="pop" in the layout).
    public void pop(View v) {
        PopupMenu popup = new PopupMenu(this, v);
        MenuInflater mi = getMenuInflater();
        mi.inflate(R.menu.menu_file, popup.getMenu());
        popup.setOnMenuItemClickListener(new PopupMenu.OnMenuItemClickListener() {
            @Override
            public boolean onMenuItemClick(MenuItem item) {
                switch (item.getItemId()) {
                    case R.id.i1:
                        Toast.makeText(Main2Activity.this, "Clicked Main menu", Toast.LENGTH_SHORT).show();
                        break;
                    case R.id.i2:
                        Toast.makeText(Main2Activity.this, "I am sub-menu 1", Toast.LENGTH_SHORT).show();
                        break;
                    case R.id.i3:
                        Toast.makeText(Main2Activity.this, "I am sub-menu 2", Toast.LENGTH_SHORT).show();
                        break;
                }
                return true;
            }
        });
        popup.show();
    }
}
Conclusion:
Thus, we studied how to make a simple UI design using built-in views. We also studied menus and
dialog boxes to make the app more attractive.
FAQs:-
1. What’s the difference between an implicit and an explicit intent?
2. When should you use a Fragment, rather than an Activity?
3. You’re replacing one Fragment with another — how do you ensure that the user
can return to the previous Fragment, by pressing the Back button?
4. How would you create a multi-threaded Android app without using the Thread
class?
5. What is a ThreadPool? And is it more effective than using several separate Threads?
6. What is the relationship between the lifecycle of an AsyncTask and the lifecycle of
an Activity? What problems can this result in, and how can these problems be
avoided?
Assignment No.: 3

Title: Android-database Connectivity

Back to Index

Lab. Assignment No – 3
Aim: Android-database Connectivity: Create a SQLite Database for an Android Application and
perform CRUD (Create, Read, Update and Delete) database operations.

Objective: To implement a stand-alone database (SQLite) as a back end.

Theory:

What is SQLite?

SQLite is an SQL database, so data is stored in tables. A table is a structure for
storing data, consisting of rows and columns.

What is CRUD?

As the heading tells you here, we are going to learn the CRUD operation in SQLite Database.
But what is CRUD? CRUD is nothing but an abbreviation for the basic operations that we
perform in any database. And the operations are:

• Create
• Read
• Update
• Delete

Android SQLite

Android SQLite is a very lightweight database which comes with Android OS. Android SQLite
combines a clean SQL interface with a very small memory footprint and decent speed. For
Android, SQLite is “baked into” the Android runtime, so every Android application can create
its own SQLite databases.

Android SQLite's native API is not JDBC, as JDBC might be too much overhead for a memory-
limited smartphone. Once a database is created successfully, it is located
in data/data/<package_name>/databases/, accessible from the Android Device Monitor.

SQLite is a typical relational database, containing tables (which consist of rows and columns),
indexes, etc. We can create our own tables to hold the data accordingly. This structure is
referred to as a schema.
Android SQLite SQLiteOpenHelper

Android has features available to handle changing database schemas, which mostly depend on
using the SQLiteOpenHelper class.

SQLiteOpenHelper is designed to get rid of two very common problems.

1. When the application runs the first time – At this point, we do not yet have a database.
   So we will have to create the tables, indexes, starter data, and so on.
2. When the application is upgraded to a newer schema – Our database will still be on the
   old schema from the older edition of the app. We will have the option to alter the database
   schema to match the needs of the rest of the app.

SQLiteOpenHelper wraps up this logic to create and upgrade a database as per our
specifications. For that we'll need to create a custom subclass
of SQLiteOpenHelper implementing at least the following three methods.

1. Constructor: This takes the Context (e.g., an Activity), the name of the database, an
   optional cursor factory (we'll discuss this later), and an integer representing the version
   of the database schema you are using (typically starting from 1 and incremented later).

   public DatabaseHelper(Context context) {
       super(context, DB_NAME, null, DB_VERSION);
   }

2. onCreate(SQLiteDatabase db): It's called when there is no database and the app needs
   one. It passes us a SQLiteDatabase object, pointing to a newly-created database, that we
   can populate with tables and initial data.
3. onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion): It's called when the
   schema version we need does not match the schema version of the database. It passes
   us a SQLiteDatabase object and the old and new version numbers. Hence we can figure
   out the best way to convert the database from the old schema to the new one.

We define a DBManager class to perform all database CRUD(Create, Read, Update and Delete)
operations.

Opening and Closing Android SQLite Database Connection

Before performing any database operations like insert, update, delete records in a table, first
open the database connection by calling getWritableDatabase() method as shown below:
public DBManager open() throws SQLException {
    dbHelper = new DatabaseHelper(context);
    database = dbHelper.getWritableDatabase();
    return this;
}
The dbHelper is an instance of the subclass of SQLiteOpenHelper.

To close a database connection the following method is invoked.

public void close() {
    dbHelper.close();
}

Inserting new Record into Android SQLite database table

The following code snippet shows how to insert a new record in the android SQLite database.

public void insert(String name, String desc) {
    ContentValues contentValue = new ContentValues();
    contentValue.put(DatabaseHelper.SUBJECT, name);
    contentValue.put(DatabaseHelper.DESC, desc);
    database.insert(DatabaseHelper.TABLE_NAME, null, contentValue);
}

ContentValues creates an empty set of values using the given initial size. We’ll discuss the other
instance values when we jump into the coding part.
Updating Record in Android SQLite database table

The following snippet shows how to update a single record.

public int update(long _id, String name, String desc) {
    ContentValues contentValues = new ContentValues();
    contentValues.put(DatabaseHelper.SUBJECT, name);
    contentValues.put(DatabaseHelper.DESC, desc);
    int i = database.update(DatabaseHelper.TABLE_NAME, contentValues,
            DatabaseHelper._ID + " = " + _id, null);
    return i;
}

Android SQLite – Deleting a Record

We just need to pass the id of the record to be deleted as shown below.

public void delete(long _id) {
    database.delete(DatabaseHelper.TABLE_NAME, DatabaseHelper._ID + "=" + _id, null);
}

Android SQLite Cursor

A Cursor represents the entire result set of the query. Once the query is fetched, a call
to cursor.moveToFirst() is made. Calling moveToFirst() does two things:

 It allows us to test whether the query returned an empty set (by testing the return value)
 It moves the cursor to the first result (when the set is not empty)

The following code is used to fetch all records:

public Cursor fetch() {
    String[] columns = new String[] { DatabaseHelper._ID,
            DatabaseHelper.SUBJECT, DatabaseHelper.DESC };
    Cursor cursor = database.query(DatabaseHelper.TABLE_NAME, columns, null, null, null,
            null, null);
    if (cursor != null) {
        cursor.moveToFirst();
    }
    return cursor;
}

Another way to use a Cursor is to wrap it in a CursorAdapter. Just as ArrayAdapter adapts
arrays, CursorAdapter adapts Cursor objects, making their data available to an AdapterView
like a ListView.
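For example, the result of fetch() could be shown in a ListView through a SimpleCursorAdapter,
roughly as sketched below (an illustration only, not part of this assignment’s code: CursorAdapter
requires the result set to contain a column named _id, so the ID column is aliased in the query,
the ListView id used here is hypothetical, and android.widget.SimpleCursorAdapter and
android.widget.ListView need to be imported):

String[] columns = new String[] { DatabaseHelper._ID + " AS _id",
        DatabaseHelper.SUBJECT, DatabaseHelper.DESC };
Cursor cursor = database.query(DatabaseHelper.TABLE_NAME, columns,
        null, null, null, null, null);

SimpleCursorAdapter adapter = new SimpleCursorAdapter(
        this,                                  // the Activity acting as Context
        android.R.layout.simple_list_item_2,   // built-in two-line row layout
        cursor,
        new String[] { DatabaseHelper.SUBJECT, DatabaseHelper.DESC },  // columns to display
        new int[] { android.R.id.text1, android.R.id.text2 },          // views that display them
        0);

ListView listView = (ListView) findViewById(R.id.listView);  // hypothetical ListView in the layout
listView.setAdapter(adapter);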

Implementation:

Step 1 – Create new Android project.

Step 2 – Add components in the main activity as shown in the picture below.
activity_main.xml

<?xml version="1.0" encoding="utf-8"?>


<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Name"
        android:id="@+id/textView"
        android:layout_alignParentTop="true"
        android:layout_marginTop="44dp" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="SurName"
        android:id="@+id/textView2"
        android:layout_below="@+id/textView"
        android:layout_marginTop="44dp" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Marks"
        android:id="@+id/textView3"
        android:layout_below="@+id/textView2"
        android:layout_marginTop="44dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="View All"
        android:id="@+id/button2"
        android:layout_marginTop="46dp"
        android:layout_below="@+id/button"
        android:layout_alignStart="@+id/button" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Update"
        android:id="@+id/button_update"
        android:layout_below="@+id/button"
        android:layout_alignStart="@+id/button" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Add"
        android:id="@+id/button"
        android:layout_marginTop="13dp"
        android:layout_below="@+id/textView3"
        android:layout_centerHorizontal="true" />

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/editText2"
        android:layout_alignBaseline="@+id/textView2"
        android:layout_alignBottom="@+id/textView2"
        android:layout_toRightOf="@+id/textView2"
        android:layout_toEndOf="@+id/textView2"
        android:layout_marginLeft="18dp"
        android:layout_marginStart="18dp" />

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/editText3"
        android:layout_alignBaseline="@+id/textView3"
        android:layout_alignBottom="@+id/textView3"
        android:layout_alignParentRight="true"
        android:layout_alignParentEnd="true"
        android:layout_alignLeft="@+id/editText"
        android:layout_alignStart="@+id/editText" />

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/editTextId"
        android:layout_alignParentTop="true"
        android:layout_toRightOf="@+id/textView2"
        android:layout_toEndOf="@+id/textView2" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="ID"
        android:id="@+id/textView4"
        android:layout_alignBaseline="@+id/editTextId"
        android:layout_alignBottom="@+id/editTextId"
        android:layout_alignRight="@+id/textView"
        android:layout_alignEnd="@+id/textView" />

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/editText"
        android:layout_below="@+id/textView4"
        android:layout_alignLeft="@+id/editTextId"
        android:layout_alignStart="@+id/editTextId" />

</RelativeLayout>

Step 3 – Now create two new Java classes called Student.java and MyHelper.java.

You can see the project structure in the image above to know where to add these Java classes.

Add the following code in class Student.java

Student.java
package com.myapplication;

/**
* Created by jd on 22-Jan-19.
*/

public class Student {

private Integer id;


private String fname;
private String lname;

public Student(String fname, String lname) {


this.fname = fname;
this.lname = lname;
}

public String getFname() {


return fname;
}

public String getLname() {


return lname;
}
}

Add the following code in class MyHelper.java

MyHelper.java

package com.myapplication;

/**
* Created by jaydeep on 27-Sep-17.
*/
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class MyHelper extends SQLiteOpenHelper
{
    public static final String DATABASE_NAME = "Student.db";   // DB Name
    public static final String TABLE_NAME = "student_table";   // Table Name
    public static final String COL_1 = "ID";       // Column 1.
    public static final String COl_2 = "Name";     // Column 2.
    public static final String COL_3 = "SurName";  // Column 3.
    public static final String COL_4 = "Marks";    // Column 4.

public MyHelper(Context context)


{
super(context,DATABASE_NAME, null, 1);
// SQLiteDatabase db = this.getWritableDatabase(); // it will create DB & Table.
} // So whenever the Constructor will be called , the Database will be created.

@Override
public void onCreate(SQLiteDatabase db)
{
db.execSQL("Create Table " + TABLE_NAME + " (ID INTEGER PRIMARY
KEY AUTOINCREMENT , NAME TEXT, SURNAME TEXT, MARKS
INTEGER) ");
}

@Override
public void onUpgrade(SQLiteDatabase sqLiteDatabase, int i, int i1)
{
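        // Typical upgrade strategy (intentionally left empty in this assignment): drop and
        // recreate the table, e.g. db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME); onCreate(db);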

}
    public boolean insertData(String name, String surname, String marks)
    {
        SQLiteDatabase db = this.getWritableDatabase();      // it will create DB & Table.
        ContentValues contentValues = new ContentValues();   // It is used to put the values in the Column.
        contentValues.put(COl_2, name);
        contentValues.put(COL_3, surname);
        contentValues.put(COL_4, marks);
        long result = db.insert(TABLE_NAME, null, contentValues);
        if (result == -1)
            return false;
        else
            return true;
    }

public Cursor getAllData()


{
SQLiteDatabase db = this.getWritableDatabase();
Cursor res = db.rawQuery("select * from " + TABLE_NAME,null);
return res;
}

    public boolean updateData(String id, String name, String surname, String marks)
    {
        SQLiteDatabase db = this.getWritableDatabase();      // it will create DB & Table.
        ContentValues contentValues = new ContentValues();   // It is used to put the values in the Column.
        contentValues.put(COl_2, name);
        contentValues.put(COL_3, surname);
        contentValues.put(COL_4, marks);
        db.update(TABLE_NAME, contentValues, "ID = ?", new String[]{id});
        return true;
    }
}
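The class above implements create (insertData), read (getAllData) and update (updateData). To
complete the CRUD set, a delete method can be added inside MyHelper, before its closing brace,
for example (a minimal sketch using the same table and ID column):

public Integer deleteData(String id)
{
    SQLiteDatabase db = this.getWritableDatabase();
    // delete() returns the number of rows removed; 0 means no row matched the given ID
    return db.delete(TABLE_NAME, "ID = ?", new String[]{id});
}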

Finally add this code in MainActivity.java file.

MainActivity.java

package com.myapplication;

import android.database.Cursor;
import android.os.Bundle;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity
{
    MyHelper mh;
    EditText editName, editSurname, editMarks, editTextId;
    Button btn;
    Button btnViewAll;
    Button btnviewUpdate;

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mh = new MyHelper(this); // It is going to call the Constructor of this class.

        editName = (EditText) findViewById(R.id.editText);
        editSurname = (EditText) findViewById(R.id.editText2);
        editMarks = (EditText) findViewById(R.id.editText3);
        editTextId = (EditText) findViewById(R.id.editTextId);
        btn = (Button) findViewById(R.id.button);
        btnViewAll = (Button) findViewById(R.id.button2);
        btnviewUpdate = (Button) findViewById(R.id.button_update);

        addData();
        viewAll();
        updateData();
    }

    public void updateData()
    {
        btnviewUpdate.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view)
            {
                boolean isUpdate = mh.updateData(editTextId.getText().toString(),
                        editName.getText().toString(),
                        editSurname.getText().toString(),
                        editMarks.getText().toString());
                if (isUpdate == true)
                    Toast.makeText(MainActivity.this, "Data Updated", Toast.LENGTH_LONG).show();
                else
                    Toast.makeText(MainActivity.this, "Data Not Updated", Toast.LENGTH_LONG).show();
            }
        });
    }

    public void addData()
    {
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view)
            {
                boolean isInserted = mh.insertData(editName.getText().toString(),
                        editSurname.getText().toString(),
                        editMarks.getText().toString());
                if (isInserted == true)
                    Toast.makeText(MainActivity.this, "Data Inserted", Toast.LENGTH_LONG).show();
                else
                    Toast.makeText(MainActivity.this, "Data Not Inserted", Toast.LENGTH_LONG).show();
            }
        });
    }

    public void viewAll()
    {
        btnViewAll.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view)
            {
                Cursor res = mh.getAllData();
                if (res.getCount() == 0)
                {
                    // show msg
                    showMessage("Error", "Nothing Found");
                    return;
                }

                StringBuffer buffer = new StringBuffer();
                while (res.moveToNext())
                {
                    buffer.append("Id : " + res.getString(0) + "\n");
                    buffer.append("Name : " + res.getString(1) + "\n");
                    buffer.append("SurName : " + res.getString(2) + "\n");
                    buffer.append("Marks : " + res.getString(3) + "\n");
                }

                // show all data
                showMessage("Data", buffer.toString());
            }
        });
    }

    public void showMessage(String title, String message)
    {
        AlertDialog.Builder builder = new AlertDialog.Builder(this);
        builder.setCancelable(true);
        builder.setTitle(title);
        builder.setMessage(message);
        builder.show();
    }
}
OUTPUT: Screenshots of application
Conclusion:

Thus we implemented an SQLite application to add, view and update records.

FAQs:-

Explain what is SQLite transactions?

List out the areas where SQLite works well?

What is the difference between SQL and SQLite?

Mention what is .dump command is used for?


Explain how Boolean values in SQLite are stored?

what is the maximum size of a VARCHAR in SQLite?

List out the advantages of SQLite?

 It does not require a separate server process or system to operate
 No setup or administration is required; SQLite comes with zero configuration
 An SQLite database can be stored in a single cross-platform disk file
 SQLite is very compact, less than 400 KiB
 SQLite is self-contained, which means no external dependencies
 It supports almost all types of OS
 It is written in ANSI-C and provides an easy to use API
Assignment No. : 04

Title: Sensors for building Smart Applications: Use any sensors


on the device to add rich location and motion capabilities to
your app, from GPS or network location to accelerometer,
gyroscope, temperature, barometer, and more.
Lab Assignment:-04
Aim: To use any one sensor on the device to add rich location and motion capabilities to an
Android application.

Objective: To describe the functioning of location and motion sensors.
To describe typical methods of these sensors.

Theory:

Overview:
Have you ever visited a place and had some task to complete the next time you visit it? While
travelling by bus or train, have you needed to find an ATM or a hospital based on your location?
The application “Advanced GPS location finder to identify hospital location and ATM
location” solves these problems. It offers the services below:
 Retrieves the user’s current geographical coordinates.
 Once the user is near a location, the nearest places are searched and can be viewed.
 The user can edit/delete/update/enable/disable the nearest places.
 The user can see the locations on a map to find out how far he is from the expected location.
MODULE DESCRIPTION
 Google Map & Searching Location
 Google Places search
 User Interface
 Database

1. Google Map and Searching Locations:

Google Map: This module displays the Google map on your device with its options. Google map
and searching locations is the main module of the application. This module has two sub-modules:
 Satellite view – shows the satellite-image Google Map to the user
 Street view – shows the Street View Google Map to the user
It also locates the nearest hospitals and ATMs at the surrounding locations with marker
identification.

2. Google Places Search: The Google provider offers world-wide search options for Google
Places. A Google search keyword is used to find Google Map locations anywhere in the world.
The search is implemented by adding the Google Places JARs listed below:
 google-api-client-1.10.3-beta.jar
 google-api-client-android2-1.10.3-beta.jar
 google-http-client-1.10.3-beta.jar
 google-oauth-client-1.10.1-beta.jar
 google-http-client-android2-1.10.3-beta.jar
 gson-2.1.jar
 guava-11.0.1.jar
 jackson-core-asl-1.9.4.jar
 jsr305-1.3.9.jar
 protobuf-java-2.2.0.jar

- A Google Places API key, e.g. “AIzaSyCig25fzYJ6gcwZyrluEXCtZuNtgh1wcwk”
- The Places search URL
https://maps.googleapis.com/maps/api/place/search/json?

These are used to search for Google Places on the Google Map.
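As an illustration only (separate from the JAR-based approach above), such a search URL can be
called from plain Java with HttpURLConnection; the query parameters and key below are
placeholders, java.net and java.io imports are required, and on Android the call must run off
the main thread:

// Build the request URL; the location, radius, type and key values are placeholders.
String url = "https://maps.googleapis.com/maps/api/place/search/json?"
        + "location=18.5204,73.8567&radius=1000&types=hospital&sensor=true&key=YOUR_API_KEY";

HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
StringBuilder json = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    json.append(line);   // accumulate the JSON response describing nearby places
}
reader.close();
conn.disconnect();
// json.toString() can now be parsed (e.g. with org.json.JSONObject) for place names and coordinates.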

3. User Interface: The user interface is an important part of an Android application. In this
application, the user interface is designed using Android XML. Many user interface designs are
used in this application for user interaction and ease of handling.
4. The SQLite Database: The SQLite database is one of the main parts of this application,
because the searched location details are stored using the SQLite database in Android. The
location details are stored in the SQLite database for future reference.
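To make the coordinate-retrieval part concrete, a minimal sketch using the framework
LocationManager API is shown below (illustrative only: it assumes the ACCESS_FINE_LOCATION
permission is already granted, android.location imports are in place, and the update interval and
distance values are arbitrary):

LocationManager locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);

LocationListener listener = new LocationListener() {
    @Override
    public void onLocationChanged(Location location) {
        double latitude = location.getLatitude();    // current latitude
        double longitude = location.getLongitude();  // current longitude
        // use these coordinates to search nearby hospitals/ATMs and store them in SQLite
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
};

// Request GPS updates every 5 seconds or 10 metres (values chosen only for illustration).
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, listener);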

SOFTWARE SPECIFICATIONS
 Eclipse IDE for Java Developers - Eclipse 3.6.2 (Helios) or greater
 Eclipse JDT plug-in (included in most Eclipse IDE packages)
 JDK 6 (JRE alone is not sufficient)
 Android Development Tools plug-in (recommended)
OPERATING SYSTEMS
 Windows XP (32-bit)
 Vista (32- or 64-bit)
 Windows 7 (32- or 64-bit)
HARDWARE SPECIFICATIONS
 Hard disk - 40 GB
 Processor - Pentium IV 2.4 GHz
 Ram - 1 GB

Conclusion:

Thus, we have studied and implemented sensors for building smart applications, using a sensor
on the device to add rich location and motion capabilities to an app, from GPS or network
location.

FAQ’s

1. What is the Google Maps Platform?


2. Which API do I need?
3. What countries does the Google Maps Platform cover?
4. Can I put Google Maps on my site without using Google Maps Platform products?
5. How do I deliver Maps applications on mobile devices?
6. What is the purpose of a motion sensor?
7. What is a driveway motion detector?
8. Why is my motion sensor giving false detections?
9. Is there such a thing as a motion detector guard dog?
10. Can pets trigger motion sensors?
11. What is a motion flood light? Can I use a motion sensor besides for security purposes?
Assignment No.: 05

Title: Develop a Smart Light System (Light that automatically


switched on in evening and gets off in morning) using open
source Hardware platform like Arduino and some sensors
(Light dependent resistor) and control a LED.

Back to Index

Lab. Assignment No – 5
AIM :
Design a smart light system which automatically switches an LED on in the evening and off in
the morning, using an Arduino, an LED and an LDR interface.

OBJECTIVES:
To design a smart light system which automatically switches an LED on in the evening and off
in the morning by programmatically comparing against the dark and bright resistance of an
LDR (light dependent resistor), using the Arduino IDE and an Arduino UNO board.

THEORY :

An LDR is a component whose resistance changes with the light intensity that falls upon it.
See Fig 1 and Fig 2 for the LDR's appearance and circuit symbol. This allows it to be used in
light-sensing circuits.
Fig 1: A typical LDR Fig 2 : LDR Circuit Symbol

The negative terminal (cathode) of an LED is indicated by the flat side of the housing and the
shorter leg; see Fig 3 and observe the LED which you are connecting in this experiment.

Fig 3 : LED Anode and Cathode identification

Variation in resistance with changing light intensity


Fig 4 : Typical LDR resistance vs light intensity graph

The most common type of LDR has a resistance that falls with an increase in the light intensity
falling upon the device (as shown in the graph above). For example, one may observe the
following resistances (this also depends on the size of the LDR and may vary in your case):

Daylight = 5000Ω
Dark = 20000000Ω

You can therefore see that there is a large variation between these figures. If you plotted this
variation on a graph you would get something similar to that shown by the graph shown above.

An LDR shows different resistance values in darkness and in bright light; Fig 4 indicates the
same thing as a graph. This can be observed simply by measuring the LDR resistance on a
multimeter, first exposing the LDR to bright light and recording the value, and then holding it
in the dark. These two readings decide the dark and bright resistance thresholds. So if we want
the LED to switch on in the evening, the pin to which the LED is connected must be driven HIGH
whenever the reading on the analog pin to which the LDR is connected reaches the dark-resistance
value or more. In the else part, or by explicitly testing against the bright-resistance value, the
LED pin can be driven LOW. One can also observe the values on the serial port with appropriate
statements. The connections are shown in Fig 5.
1. First connect the LDR to any analog pin (out of the six) of the Arduino UNO board.
2. Connect the LED to any digital pin (out of the available digital pins) of the Arduino UNO board.
3. Observe the values on the serial port.
4. Write statements in the Arduino IDE specifying the dark and bright resistance values at which to
turn the LED ON/OFF.
5. Hold the LDR in your palm and cover it with all fingers; this time the LED should glow. Now expose
the LDR to light; the LED should turn off automatically.
6. The same automatic LED ON/OFF effect can be seen with room lighting, by turning a bulb or tube
light ON and OFF.

INPUT : LED (1 qty.)
LDR (1 qty.)
One Arduino UNO board
Wires for connection
OUTPUT : Dark and bright resistance values shown on the serial monitor of the Arduino IDE.
Actual LED ON / OFF based on the specified dark and bright resistance values.
Connection

Fig 5: Interfacing LDR and LED on Arduino board

CONCLUSION : In this way we can use open source platforms like the Arduino Uno and its
IDE to build a smart lighting system based on the properties of an LDR sensor, controlling the
LED ON/OFF operation automatically by turning it ON in the dark and OFF in bright light.
FAQ :
1. What is LDR?
2. How LDR functions?
3. What is NTC and PTC and what is its significance in real world? Is it possible to manage this
from software to do vice versa operation of LDR?
4. Is it possible to control different 2 different LEDs connected to different digital pins of Arduino
on the basis of different dark and bright conditions of LDR? If ‘yes’ how? If ‘no’ why?
Assignment No.: 06

Title: Develop an Android based FAN regulator using open


source Hardware platform like NodeMcu and actuator (a DC
Motor).

Back to Index

Lab. Assignment No – 6
AIM :
Design an Android based fan regulator which controls fan speed dynamically using a
NodeMCU ESP8266 (wireless transceiver).
OBJECTIVES :
To design an Android based fan regulator which controls fan speed dynamically and
automatically from the Android app (Blynk), by selecting a Slider in the Widget Box and
programming it using the Arduino IDE and a NodeMCU board.
THEORY :
A motor is an electrical machine which converts electrical energy into mechanical energy.
The principle of working of a DC motor is that "whenever a current carrying conductor is
placed in a magnetic field, it experiences a mechanical force". The direction of this force is
given by Fleming's left hand rule and its magnitude is given by F = BIL. Where, B = magnetic
flux density, I = current and L = length of the conductor within the magnetic field.
Fig 1 : Working of DC Motor

Fleming's left hand rule: If we stretch the first finger, second finger and thumb of our left hand
to be perpendicular to each other AND direction of magnetic field is represented by the first
finger, direction of the current is represented by second finger then the thumb represents the
direction of the force experienced by the current carrying conductor.
Fig 1 helps in understanding the working principle of a DC motor. When the armature windings
are connected to a DC supply, current is set up in the winding. The magnetic field may be provided
by a field winding (electromagnetism) or by using permanent magnets. In this case, the
current-carrying armature conductors experience a force due to the magnetic field, according to
the principle stated above. The commutator is segmented to achieve unidirectional torque.
Otherwise, the direction of the force would reverse every time the direction of movement of the
conductor in the magnetic field reverses.
INPUT : DC motor (1 qty.)
One NodeMCU board
Wires for connection
OUTPUT : Actual fan speed is controlled by adjusting the slider in the Blynk app.
Connection : Fig 2 shows the interfacing of the DC motor with the NodeMCU. The Blynk app screens
are shown in Fig 3; they are used to control the fan, or any output connected to the NodeMCU, remotely.

Fig 2 : Interfacing DC motor on NodeMCU board

/* Comment this out to disable prints and save space */


#define BLYNK_PRINT Serial
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "24b895ca4e4645d29f0f19b3047b33c6";
// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "Navjyot";
char pass[] = "waheguru";
void setup()
{
// Debug console
Serial.begin(9600);
Blynk.begin(auth, ssid, pass);
// You can also specify server:
//Blynk.begin(auth, ssid, pass, "blynk-cloud.com", 80);
//Blynk.begin(auth, ssid, pass, IPAddress(192.168.1.100), 8080);
}
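// Illustrative addition (not required if the Slider widget writes directly to a physical pin):
// if the Slider in the Blynk app is attached to virtual pin V1, its value can be applied to the
// motor pin as PWM. Virtual pin V1 and output pin D1 are assumptions; adjust to your wiring.
BLYNK_WRITE(V1)
{
  int speed = param.asInt();   // slider value (e.g. 0-1023) received from the app
  analogWrite(D1, speed);      // drive the motor pin with the requested PWM duty cycle
}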
void loop()
{
Blynk.run();
}
Fig 3 : Blynk App GUI

CONCLUSION : In this experiment we interfaced a DC motor with the NodeMCU and observed that
the DC motor can be controlled from the wireless domain with an authenticated IP address.

FAQ :
1] Differentiate NodeMCU vs Arduino UNO.
2] What is the purpose of the Rx and Tx pins on the NodeMCU?
3] Is it possible to control a 230V, 50Hz operated fan? How?
4] How does a relay operate?
Assignment No.: 07

Title: Arduino and Machine Learning: wireless multimodal sensing-


Draw inferences over the data coming from sensing hardware (e.g.
Temperature, Humidity, Luminosity, etc.,) and processing these
samples with the help of machine learning using open source
programming tools such as Python. (Any Application: Healthcare,
Smart City, Agriculture, etc.).

.Back to Index

Lab. Assignment No – 7
AIM:
Design a system which acquires data from multiple sensors and transmits it through a wireless
module via an Arduino, to be received by another Arduino and given to a PC for analysis using
machine learning.

OBJECTIVES:
To Acquire temperature values from 4 LM35 sensors and 1 DHT11 temperature+humidity
sensor in real time and transmit that data to the remote Arduino wirelessly using NRF24l01+
transceiver modules and further give that data to PC via serial communication and analyze it
using Machine Learning in Python.

THEORY:
A] LM35 Sensor:

Fig 1.1 LM35 sensor Fig 1.2 LM35 Pinout


The LM35 series are precision integrated-circuit temperature devices with an output voltage
linearly proportional to the Centigrade temperature. The LM35 device has an advantage over
linear temperature sensors calibrated in Kelvin, as the user is not required to subtract a large
constant voltage from the output to obtain convenient Centigrade scaling. The LM35 device
does not require any external calibration or trimming to provide typical accuracies of ±¼°C at
room temperature and ±¾°C, over a full −55°C to 150°C temperature range. Lower cost is
assured by trimming and calibration at the wafer level. The low-output impedance, linear
output, and precise inherent calibration of the LM35 device makes interfacing to readout or
control circuitry especially easy. The device is used with single power supplies, or with plus and
minus supplies. As the LM35 device draws only 60 μA from the supply, it has very low self-
heating of less than 0.1°C in still air. The LM35 device is rated to operate over a −55°C to 150°C
temperature range, while the LM35C device is rated for a −40°C to 110°C range (−10° with
improved accuracy).
Features of LM35 are as follows:
 Calibrated Directly in Celsius (Centigrade)
 Linear + 10-mV/°C Scale Factor
 0.5°C Ensured Accuracy (at 25°C)
 Rated for Full −55°C to 150°C Range
 Suitable for Remote Applications
 Low-Cost Due to Wafer-Level Trimming
 Operates From 4 V to 30 V
 Less Than 60-μA Current Drain
 Low Self-Heating, 0.08°C in Still Air
 Non-Linearity Only ±¼°C Typical
 Low-Impedance Output, 0.1 Ω for 1-mA Load

Applications:
 Power Supplies
 Battery Management
 HVAC
 Appliances

For a detailed description please refer to the datasheet - http://www.ti.com/lit/ds/symlink/lm35.pdf

B] DHT11 Sensor:

Fig. 2.1 DHT11 Fig 2.2 DHT11 pinout

DHT11 Temperature & Humidity Sensor features a temperature & humidity sensor complex
with a calibrated digital signal output. By using the exclusive digital-signal-acquisition
technique and temperature & humidity sensing technology, it ensures high reliability and
excellent long-term stability. This sensor includes a resistive-type humidity measurement
component and an NTC temperature measurement component, and connects to a high
performance 8-bit microcontroller, offering excellent quality, fast response, anti-interference
ability and cost- effectiveness.
Features of DHT11 are as follows:
 Operating Voltage: 3.5V to 5.5V
 Operating current: 0.3mA (measuring) 60uA (standby)
 Output: Serial data
 Temperature Range: 0°C to 50°C
 Humidity Range: 20% to 90%
 Resolution: Temperature and Humidity both are 16-bit
 Accuracy: ±1°C and ±1%

Applications:
 Measure temperature and humidity
 Local Weather station
 Automatic climate control
 Environment monitoring

For a detailed description please refer to the datasheet -
https://www.mouser.com/ds/2/758/DHT11-Technical-Data-Sheet-Translated-Version-1143054.pdf

C] nRF24L01+ wireless transceiver:

Fig 3.1 nRF24L01+ Fig 3.2 nRF24L01+ pinout

The nRF24L01+ is a single chip 2.4GHz transceiver with an embedded baseband protocol
engine, suitable for ultra-low power wireless applications. The nRF24L01+ is designed for
operations in the world-wide ISM frequency band at 2.4-2.4835GHz. To design a radio system
with the nRF24L01+, you simply need an MCU and a few external passive components. One
can operate and configure the NRF24L01+ through a Serial Peripheral Interface (SPI). The
register map, which is accessible through the SPI, contains all configuration registers in the
nRF24L01+ and is accessible in all operation modes of the chip.
Features of nRF24L01+ are as follows:
 World Wide 2.4GHz ISM band Operation
 1 to 32 bytes dynamic payload size per packet
 Integrated voltage regulator
 1.9V to 3.6V Supply Range
 250kbps, 1 and 2 Mbps air data rate
For a detailed description please refer to the datasheet -
https://www.sparkfun.com/datasheets/Components/SMD/nRF24L01Pluss_Preliminary_Product_Specification_v1_0.pdf

D] Machine Learning:
Machine learning is an application of artificial intelligence (AI) that provides systems the ability
to automatically learn and improve from experience without being explicitly programmed.
Machine learning focuses on the development of computer programs that can access data and
use it to learn for themselves.
The process of learning begins with observations or data, such as examples, direct experience,
or instruction, in order to look for patterns in data and make better decisions in the future
based on the examples that we provide. The primary aim is to allow computers to learn
automatically without human intervention or assistance and adjust actions accordingly.
Machine learning enables analysis of massive quantities of data. While it generally delivers
faster, more accurate results in order to identify profitable opportunities or dangerous risks, it
may also require additional time and resources to train it properly. Combining machine
learning with AI and cognitive technologies can make it even more effective in processing large
volumes of information.
Support Vector Machine:
A Support Vector Machine models the situation by creating a feature space, which is a finite-
dimensional vector space, each dimension of which represents a "feature" of a particular
object. In the context of spam or document classification, each "feature" is the prevalence or
importance of a particular word. The goal of the SVM is to train a model that assigns new
unseen objects into a particular category. It achieves this by creating a linear partition of the
feature space into two categories. Based on the features in the new unseen objects (e.g.
documents/emails), it places an object "above" or "below" the separation plane, leading to a
categorisation (e.g. spam or non-spam). This makes it an example of a non-probabilistic linear
classifier. It is non-probabilistic, because the features in the new objects fully determine its
location in feature space and there is no stochastic element involved.

Fig 4 Example of Binary Support Vector Classifier

Procedure:
1. Transmitter – Connect the 4 LM35 sensors to the A0-A3 pins of the Arduino, the DHT11 to the
D7 pin, and the nRF24L01+ to the respective pins from D9 to D13 as given in the official library.
2. Receiver – Connect the nRF24L01+ exactly the same as on the transmitter.
3. Download the libraries for the DHT11 and nRF24L01+ from 'Manage Libraries' in the Arduino IDE.
4. Upload the respective code into the transmitter and receiver Arduino boards and check the
output on the serial monitor of the Arduino IDE.
5. The same output is given to the machine learning algorithm in Python, which has been
trained to classify the atmosphere in real time.
6. We use a simple Support Vector Classifier, a widely used algorithm in machine
learning. It can classify the atmosphere with high accuracy.

Connection with Arduino UNO Board:

Fig 5 Transmitter Connections


Fig 6 Receiver Connections
INPUT : LM35 analog temperature sensors – 4 Nos.
DHT11 module – 1 Nos.
NRF24L01+ -- 2 Nos.
Arduino UNO Boards – 2 Nos

OUTPUT : Arduino IDE Serial Monitor


Python programming platform(Prediction using SVM)

Fig 7 Serial Monitor Arduino - Comma separated 5 consecutive


Temperature Values in *Celcius and 1 Humidity Value in Percentage

CONCLUSION :
In this way we can use open source platforms like Arduino and Python to generate our
own data and use it for machine learning applications in real time without any limitations. The
sensors can be varied and different machine learning training algorithms can be implemented
for higher accuracy and performance.
FAQ :
1. What is the drawback of the LM35 sensor?
2. What is the limitation of the DHT11 sensor?
3. Why use the nRF24L01+ when we have Bluetooth and XBEE modules? What are the
advantages and disadvantages?
4. What is machine learning? Why do we use it to solve real world problems? Compare with
the classical approach.
Assignment No.: 11

Title: Android Security: Authentication of two mobile devices

Back to Index

Lab. Assignment No – 11
AIM:
Design a system which connects hardware to an Android smartphone with a unique identifier for
security, i.e. an Authentication Token.
OBJECTIVES:
To design a system which connects an Arduino microcontroller to an Android smartphone with a
unique authentication identifier for security.

Authentication

Authentication is the process of proving that people and organizations are who or what they
claim to be. For wireless networks, this is often done at two layers: the network layer and the
application layer. The network requires the user to be authenticated before that person is
granted access. This can be done implicitly, based on the device or modem being used, or
explicitly, using a variety of mechanisms. At the application layer, authentication is important
at two levels: the client and the enterprise server. To gain access to enterprise data, the client
has to prove to the server that it is what it says it is. At the same time, before a client allows an
outside server to connect to it—for example, to push some content—the server has to
authenticate itself to the client application.

Data Integrity

Data integrity is assurance that the data in question has not been altered or corrupted in any
way during the transmission from the sender to the receiver. This can be accomplished by
using data encryption in combination with a cryptographic checksum or Message
Authentication Code (MAC). This information is encoded into the message itself by applying an
algorithm to the message. When recipients receive the message, they compute the MAC and
compare it with the MAC encoded in the message to see if the codes are the same. If they are,
recipients can be confident that the message has not been tampered with. If the codes are
different, recipients can discard the data as inaccurate.
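As a small illustration of the MAC idea (separate from the Blynk setup used later in this
assignment), the sketch below computes an HMAC-SHA256 tag over a message with the standard
javax.crypto API; the key and message are placeholders:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class MacDemo {
    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret-key".getBytes("UTF-8");     // placeholder shared secret
        byte[] message = "temperature=27.5".getBytes("UTF-8");  // placeholder message

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal(message);                      // MAC sent along with the message

        // The receiver recomputes the MAC with the same shared key and compares it with the
        // received tag; if the two match, the message has not been tampered with.
        System.out.println("MAC length in bytes: " + tag.length);
    }
}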

Confidentiality

Confidentiality is one of the most important aspects of security, and certainly the most talked
about. Confidentiality is about maintaining data privacy, making sure it cannot be viewed by
unwanted parties. Most often, when people are worried about the security of a system, they
are concerned that sensitive information, such as a credit card number or health records, can
be viewed by parties with malicious intent. The most common way of preventing this intrusion
is by encrypting the data. This process involves encoding the content of a message into a form
that is unreadable by anyone other than the intended recipient.

Authorization

Authorization is the process of determining the user’s level of access—whether a user has the
right to perform certain actions. Authorization is often closely tied to authentication. Once a
user is authenticated, the system can determine what that party is permitted to do. Access
control lists (ACLs) are often used to help determine this. For example, all users may have
read- only access to a set of data, while the administrator, or another trusted source, may also
have write access to the data.

Nonrepudiation

Nonrepudiation is about making parties accountable for transactions in which they have
participated. It involves identifying the parties in such a way that they cannot at a later time
deny their involvement in the transaction. In essence, it means that both the sender and the
recipient of a message can prove to a third party that the sender did indeed send the message
and the recipient received the identical message. To accomplish this, each transaction has to
be signed with a digital signature that can be verified and time-stamped by a trusted third
party.

THEORY:
1. Blynk

Blynk was designed for the Internet of Things. It can control hardware remotely, it can
display sensor data, and it can store data, visualize it and do many other cool things.
There are three major components in the platform:
a) Blynk App - allows to you create amazing interfaces for your projects using various widgets we
provide.
b) Blynk Server - responsible for all the communications between the smartphone and hardware.
You can use the Blynk Cloud or run your private Blynk server locally. It is open-source, can
easily handle thousands of devices and can even be launched on a Raspberry Pi.
c) Blynk Libraries - for all the popular hardware platforms - enable communication with the
server and process all the incoming and outgoing commands.

Now imagine: every time you press a Button in the Blynk app, the message travels to
the Blynk Cloud, where it magically finds its way to your hardware. It works the same in
the opposite direction and everything happens in a Blynk of an eye, as shown in fig 1.
Fig 1: Blynk server authentication security
Features:
 Similar API & UI for all supported hardware & devices
 Connection to the cloud using:
- Wi-Fi
- Bluetooth and BLE
- Ethernet
- USB (Serial)
- GSM
 Set of easy-to-use Widgets
 Direct pin manipulation with no code writing
 Easy to integrate and add new functionality using virtual pins
 History data monitoring via Super Chart widget
 Device-to-Device communication using Bridge Widget
 Sending emails, tweets, push notifications, etc.

2. Light Sensor of the Android smartphone

Light is a kind of environment sensor that allows you to measure the level of light (it measures
the ambient light level (illumination) in lux). In phones it is used to control screen brightness.
In order to accept data from it you need:
BLYNK_WRITE(V1)
{
int lx = param.asInt();
}
The light sensor does not work in the background.
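For comparison, the same ambient light sensor can also be read natively inside an Android app
with SensorManager (a minimal sketch, independent of Blynk; the registration call would normally
go in an Activity’s onCreate or onResume, and android.hardware imports are required):

SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);

SensorEventListener lightListener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        float lux = event.values[0];   // ambient light level in lux
        // e.g. display the value, or forward it to the NodeMCU over the network
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};

sensorManager.registerListener(lightListener, lightSensor, SensorManager.SENSOR_DELAY_NORMAL);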
3. Experimental setup:

INPUT : - Light sensor of the Android smartphone.

OUTPUT : - LED connected on the D4 pin of the NodeMCU.
- Luminance (lux) value received from the Android smartphone, displayed on the serial
monitor of the Arduino IDE.

Connection
We will switch on an LED connected to the NodeMCU using the Blynk app on your
smartphone.
Connect an LED as shown here:

Fig 2: Interfacing LED with NodeMCU board


Getting Started With the Blynk App
1. Create a Blynk Account
After you download the Blynk App, you will need to create a New Blynk account as shown in
figure 3. An account is needed to save your projects and have access to them from multiple
devices from anywhere. It is also a security measure. You can always set up your own Private
Blynk Server and have full control.
2. Create a New Project
After you have successfully logged into your account, start by creating a new project as shown
in figure 4.
3. Choose Your Hardware
Select the NodeMCU hardware model as shown in figure 5.
4. Authentication Token
Authentication Token is a unique identifier, which is needed to connect your hardware to your
smartphone. Every new project you create will have its own Authentication Token. You will get
Authentication Token automatically on your email after project creation. You can also copy it
manually. Click on devices section and selected required device.
Do not share your Authentication Token with anyone, unless you want someone to have
access to your hardware. It is very convenient to send it over e-mail. Press the e-mail button
and the token will be sent to the e-mail address you used for registration. You can also tap on
the Token line and it will be copied to the clipboard.
Now press the “Create” button as shown in figure 6.
5. Add a Widget
Your project canvas is empty; let us add a light sensor to control our LED. Tap anywhere on the
canvas to open the widget box. All the available widgets are located here as shown in figure 7.
Now pick a light sensor. Each Widget has its own settings. Tap on the widget to get to them.
The most important parameter to set is PIN as shown in figure 8. The list of pins reflects
physical and virtual.
6. Run the Project
When you are done with the settings, press the PLAY button as shown in figure 9. This will
switch you from EDIT mode to PLAY mode, where you can interact with the hardware. While in
PLAY mode you will not be able to drag or set up new widgets; press STOP to get back to
EDIT mode.

Fig 3 Fig 4 Fig 5 Fig 6


Fig 7 Fig 8 Fig 9
One can also observe the values on the serial port with appropriate statements. The connections
are shown in Fig 2.
7. First connect the LED to digital pin number 4 (D4) of the NodeMCU board.
8. Observe the values on the serial port.
9. Write statements in the Arduino IDE specifying the luminance value at which to turn the LED ON/OFF.
10. Cover the smartphone light sensor; this time the LED should glow. Now expose the light sensor to
light; the LED should turn off automatically.
11. The same automatic LED ON/OFF effect can be seen with room lighting, by turning a bulb / tube
light ON and OFF.

CONCLUSION : In this way we can use open source platforms like the NodeMCU and its
IDE to build a smart lighting system based on the Android smartphone's light sensor, controlling
the LED ON/OFF operation automatically by turning it ON in the dark and OFF in bright light.
FAQ:
1. What is the Blynk server?
2. What are the features of the Blynk server and how does it provide security?
3. What is an Authentication Token?
4. Is it possible to control onboard digital and analog pins directly without using a virtual variable? If
‘yes’ how? If ‘no’ why?
Assignment No. : 12

Title: Generations of Wireless Communication Technology

Back to Index
Lab. Assignment No – 12

Aim: To study and understand Generations of Wireless Communication Technology

Objective: To describe wireless communication methods.


To describe typical methods of wireless communication methods.

Theory:

 INTRODUCTION
Wireless communication is the transfer of information over a distance without the use of
electrical conductors or “wires”. The distances involved may be short (a few meters
as in television remote control) or long (thousands or millions of kilometers for radio
communications). When the context is clear, the term is often shortened to "wireless". It
encompasses various types of fixed, mobile, and portable two-way radios, cellular telephones,
Personal Digital Assistants (PDAs), and wireless networking.
In 1895, Guglielmo Marconi opened the way for modern wireless communications by
transmitting the three-dot Morse code for the letter ‘S’ over a distance of three kilometers
using electromagnetic waves. From this beginning, wireless communications has developed
into a key element of modern society. Wireless communications have some special
characteristics that have motivated specialized studies. First, wireless communications relies
on a scarce resource – namely, the radio spectrum. In order to foster the development of
wireless communications (including telephony and broadcasting), those assets were privatized.
Second, use of spectrum for wireless communications required the development of key
complementary technologies; especially those that allowed higher frequencies to be utilized
more efficiently. Finally, because of its special nature, the efficient use of spectrum required
the coordinated development of standards.
The term is used to describe modern wireless connections such as those in cellular networks
and wireless broadband internet, mainly using radio waves. The Mobile wireless industry has
started its technology creation, revolution & evolution since early 1970s. In the past few
decades, mobile wireless technologies have been classified according to their generation,
which largely specifies the type of services and the data transfer speeds of each class of
technologies.
i. ZERO GENERATION TECHNOLOGY (0G – 0.5G)

0G refers to pre-cellular mobile telephony technology in 1970s. These mobile telephones were
usually mounted in cars or trucks, though briefcase models were also made. Mobile radio
telephone systems preceded modern cellular mobile telephony technology. Since they were
the predecessors of the first generation of cellular telephones, these systems are sometimes
referred to as 0G (zero generation) systems. Technologies used in 0G systems included PTT
(Push to Talk), MTS (Mobile Telephone System), IMTS (Improved Mobile Telephone Service),
AMTS (Advanced Mobile Telephone System), OLT (Norwegian for Offentlig Landmobil Telefoni,
Public Land Mobile Telephony) and MTD. 0.5G is a group of technologies with improved features over
the basic 0G technologies. These early mobile telephone systems can be distinguished from
earlier closed radiotelephone systems in that they were available as a commercial service that
was part of the public switched telephone network, with their own telephone numbers, rather
than part of a closed network such as a police radio or taxi dispatch system. These mobile
telephones were usually mounted in cars or trucks,
though briefcase models were also made. Typically, the transceiver (transmitter-receiver) was
mounted in the vehicle trunk and attached to the "head" (dial, display, and handset) mounted
near the driver seat. They were sold through various outlets, including two-way radio dealers.
The primary users were loggers, construction foremen, realtors, and celebrities, for basic voice
communication.
Early examples for this technology are:
1. The Autoradiopuhelin (ARP) launched in 1971 in Finland as the country's first public
commercial mobile phone network.
2. The B-Netz launched 1972 in Germany as the countries second public commercial mobile
phone network (but the first one that did not require human operators anymore to connect
calls).
ii. FIRST GENERATION TECHNOLOGY (1G)

In 1980 the mobile cellular era had started, and since then mobile communications have
undergone significant changes and experienced enormous growth. First-generation mobile
systems used analog transmission for speech services. In 1979, the first cellular system in the
world became operational by Nippon Telephone and Telegraph (NTT) in Tokyo, Japan. Two
years later, the cellular epoch reached Europe. The two most popular analogue systems were
Nordic Mobile Telephones (NMT) and Total Access Communication Systems (TACS). Other than
NMT and TACS, some other analog systems were also introduced in 1980s across the Europe.
All of these systems offered handover and roaming capabilities but the cellular networks were
unable to interoperate between countries. This was one of the inevitable disadvantages of
first- generation mobile networks.
In the United States, the Advanced Mobile Phone System (AMPS) was launched in 1982. The
system was allocated a 40-MHz bandwidth within the 800 to 900 MHz frequency range by the
Federal Communications Commission (FCC) for AMPS. In 1988, an additional 10 MHz
bandwidth, called Expanded Spectrum (ES) was allocated to AMPS. It was first deployed in
Chicago, with a service area of 2,100 square miles. AMPS offered 832 channels, with a data
rate of 10 kbps. Although Omni directional antennas were used in the earlier AMPS
implementation, it was realized that using directional antennas would yield better cell reuse. In
fact, the smallest reuse factor that would fulfill the 18db signal-to-interference ratio (SIR) using
120-degree directional antennas was found to be 7.
Hence, a 7-cell reuse pattern was adopted for AMPS. Transmissions from the base stations to
mobiles occur over the forward channel using frequencies between 869-894 MHz. The reverse
channel is used for transmissions from mobiles to base station, using frequencies between
824- 849 MHz. AMPS and TACS use the frequency modulation (FM) technique for radio
transmission. Traffic is multiplexed onto an FDMA (frequency division multiple access) system.
iii. SECOND GENERATION TECHNOLOGY (2G - 2.75G)

By the late 1980s, it was clear that the first generation cellular systems—based on analog
signaling techniques—were becoming obsolete. Advances in integrated circuit (IC) technology
had made digital communications not only practical, but, actually more economical than
analog technology. Digital communication enables advanced source coding techniques to be
utilized. This allows the spectrum to be used much more efficiently and, thereby, reduces the
amount of bandwidth required for voice and video. In addition, we can use error correction
coding to provide a degree of resistance to interference and fading that plagues analog
systems, and to allow a lower transmit power. Also, with digital systems, control information is
more efficiently handled, which facilitates network control. Second generation digital systems
can be classified by their multiple access techniques as either Frequency Division Multiple
Access (FDMA), Time Division Multiple Access (TDMA) or Code Division Multiple Access
(CDMA).
In FDMA, the radio spectrum is divided into a set of frequency slots and each user is assigned a
separate frequency to transmit. In TDMA, several users transmit at the same frequency but in
different time slots. CDMA uses the principle of direct sequence spread-spectrum: the signals
are modulated with high bandwidth spreading waveforms called signature waveforms or
codes. Although the users transmit at both the same frequency and time, separation of signals
is achieved because the signature waveforms have very low cross correlation.
In practice, the TDMA and CDMA schemes are combined with FDMA. Thus the term “TDMA” is
used to describe systems that first divide the channel into frequency slots and then divide each
frequency slot into multiple time slots. Similarly, CDMA is actually a hybrid of CDMA and FDMA
where the channel is first divided into frequency slots. Each slot is shared by multiple users
who each use a different code.
2.5G – GPRS (General Packet Radio Service)
2.5G, which stands for "second and a half generation," is a cellular wireless technology
developed in between its predecessor, 2G, and its successor, 3G. The term "second and a half
generation" is used to describe 2G-systems that have implemented a packet switched domain
in addition to the circuit switched domain. "2.5G" is an informal term, invented solely for
marketing purposes, unlike "2G" or "3G" which are officially defined standards based on those
defined by the International Telecommunication (ITU). GPRS could provide data rates from 56
kbit/s up to 115 kbit/s. It can be used for services such as Wireless Application Protocol (WAP)
access, Multimedia Messaging Service (MMS), and for Internet communication services such as
email and World Wide Web access. GPRS data transfer is typically charged per megabyte of
traffic transferred, while data communication via traditional circuit switching is billed per
minute of connection time, independent of whether the user actually is utilizing the capacity
or is in an idle state.
2.5G networks may support services such as WAP, MMS, SMS mobile games, and search and
directory.
2.75 – EDGE (Enhanced Data rates for GSM Evolution)
EDGE (EGPRS), an abbreviation for Enhanced Data rates for GSM Evolution, is a digital mobile
phone technology which acts as a bolt-on enhancement to 2G and 2.5G General Packet Radio
Service (GPRS) networks. This technology works in GSM networks. EDGE is a superset to GPRS

and can function on any network with GPRS deployed on it, provided the carrier implements the
necessary upgrades. EDGE technology is an extended version of GSM. It allows the clear and
fast transmission of data and information. It is also termed as IMT-SC or single carrier. EDGE
technology was invented and introduced by Cingular, which is now known as AT&T. EDGE is
radio technology and is a part of third generation technologies. EDGE technology is preferred
over GSM due to its flexibility to carry packet switch data and circuit switch data.
The use of EDGE technology has augmented the use of black berry, N97 and N95 mobile
phones. EDGE transfers data in fewer seconds if we compare it with GPRS Technology. For
example a typical text file of 40KB is transferred in only 2 seconds as compared to the transfer
from GPRS technology, which is 6 seconds. The biggest advantage of using EDGE technology is
one does not need to install any additional hardware and software in order to make use of
EDGE Technology. There are no additional charges for exploiting this technology. If a person is
an ex GPRS Technology user he can utilize this technology without paying any additional
charges.

iv. THIRD GENERATION TECHNOLOGY (3G – 3.75G)

3G refers to the third generation of mobile telephony (that is, cellular) technology. The third
generation, as the name suggests, follows two earlier generations. The first generation (1G)
began in the early 80's with commercial deployment of Advanced Mobile Phone Service
(AMPS) cellular networks. Early AMPS networks used Frequency Division Multiplexing Access
(FDMA) to carry analog voice over channels in the 800 MHz frequency band.
3G technologies enable network operators to offer users a wider range of more advanced
services while achieving greater network capacity through improved spectral efficiency.
Services include wide area wireless voice telephony, video calls, and broadband wireless data,
all in a mobile environment. Additional features also include HSPA data transmission
capabilities able to deliver speeds up to 14.4Mbit/s on the downlink and 5.8Mbit/s on the
uplink. Spectral efficiency or spectrum efficiency refers to the amount of information that can
be transmitted over a given bandwidth in a specific digital communication system. High-Speed
Packet Access (HSPA) is a collection of mobile telephony protocols that extend and improve
the performance of existing UMTS protocols.
3G technologies make use of TDMA and CDMA. 3G (Third Generation Technology)
technologies make use of value added services like mobile television, GPS (global positioning
system) and video conferencing. The basic feature of 3G Technology is fast data transfer rates.
3G technology is much flexible, because it is able to support the 5 major radio technologies.
These radio technologies operate under CDMA, TDMA and FDMA.
3.5G – HSDPA (High-Speed Downlink Packet Access)
High-Speed Downlink Packet Access (HSDPA) is a mobile telephony protocol, also called 3.5G
(or "3½G"), which provides a smooth evolutionary path for UMTS-based 3G networks, allowing
higher data transfer speeds. HSDPA is a packet-based data service on the W-CDMA downlink
with data transmission of up to 8-10 Mbit/s (and up to 20 Mbit/s for MIMO systems) over a
5 MHz bandwidth. HSDPA implementations include Adaptive Modulation and Coding (AMC),
Multiple-Input Multiple-Output (MIMO), Hybrid Automatic Repeat Request (HARQ), fast cell
search, and advanced receiver design.
3.75G – HSUPA (High-Speed Uplink Packet Access)
3.75G refers to technologies beyond the well-defined 3G wireless/mobile technologies.
High-Speed Uplink Packet Access (HSUPA) is a UMTS / W-CDMA uplink evolution technology.
HSUPA is directly related to HSDPA, and the two are complementary to one another. HSUPA
enhances person-to-person data applications that need higher and more symmetric data
rates, such as mobile e-mail and real-time person-to-person gaming, and many traditional and
consumer applications benefit from the improved uplink speed. HSUPA initially boosted the
UMTS / W-CDMA uplink to 1.4 Mbit/s and, in later releases, up to 5.8 Mbit/s.
v. FOURTH GENERATION (4G)
4G refers to the fourth generation of cellular wireless standards and is the successor to the 3G
and 2G families of standards. The naming of the generations generally marks a change in the
fundamental nature of the service, a non-backwards-compatible transmission technology and
new frequency bands. The first change was the move from analogue (1G, 1981) to digital (2G)
transmission in 1992. This was followed, in 2002, by 3G with multimedia support, spread-
spectrum transmission and a minimum of 200 kbit/s, expected to be followed by 4G, which
refers to all-IP packet-switched networks, mobile ultra-broadband (gigabit-speed) access and
multi-carrier transmission. Pre-4G technologies such as mobile WiMAX and first-release 3G
Long Term Evolution (LTE) have been available on the market since 2006 and 2009 respectively.
4G is essentially an extension of 3G, with more bandwidth and more services than 3G offers.
The expectation for 4G is high-quality audio/video streaming over end-to-end Internet
Protocol. If the IP Multimedia Subsystem (IMS) movement achieves its goals, the underlying
access technology may matter less: WiMAX and other mobile architectures will become
progressively more transparent, and the adoption of several architectures by a single network
operator will become ever more common.
Some companies have trialled 4G communication at 100 Mbit/s for mobile users and up to
1 Gbit/s for fixed stations, planning to launch their first commercial wireless networks publicly
around 2010, while competing mobile communication companies worked on 4G technology
even more aggressively. Sprint Nextel, for example, planned to launch WiMAX-based 4G
broadband mobile networks in the United States, and other developed countries such as the
United Kingdom announced plans a few years back to auction 4G wireless frequencies. The
word "MAGIC" is also used for 4G wireless technology, standing for Mobile multimedia,
Anywhere, Global mobility solutions, Integrated wireless and Customized services.

vi. FIFTH GENERATION (5G)

5G (5th generation mobile networks or 5th generation wireless systems) is a name used in
some research papers and projects to denote the next major phase of mobile
telecommunications standards beyond the upcoming 4G standards, which are expected to be
finalized between approximately 2011 and 2013. At present, 5G is not a term officially used
for any particular specification or in any official document made public by telecommunication
companies or by standardization bodies such as 3GPP, the WiMAX Forum or ITU-R. New 3GPP
standard releases beyond 4G and LTE Advanced are in progress, but they are not considered
new mobile generations. 5G stands for fifth-generation mobile technology; it is expected to
change the way cell phones are used by providing very high bandwidth that users have not
experienced before. Mobile users today are far more aware of cell phone technology, and 5G
is intended to include all types of advanced features, which should make it very powerful and
in huge demand in the near future.
The array of innovative technology being built into new cell phones is striking: a 5G handset is
expected to offer far more power and features than earlier generations of devices. A user will
also be able to connect a 5G phone to a laptop to obtain broadband Internet access. Expected
features include cameras, MP3 recording, video players, large phone memory, fast dialing,
audio players and much more, along with Bluetooth and piconet capabilities.
5G is expected to be a new revolution in the mobile market. With 5G, worldwide cellular
phones become practical: the technology is expected to reach markets such as China, and a
roaming user could be reached on a German number as if on a local phone. With PDA-like cell
phones, the whole office is effectively at the user's fingertips. 5G promises extraordinary data
capability, with the ability to handle very large call volumes and extensive data broadcast
within the latest mobile operating systems. 5G has a bright future because it can incorporate
the best available technologies and offer capable handsets to customers, and it may well come
to dominate the world market in the coming years.
5G technologies are also expected to support a wide range of software and consultancy
services. The router and switch technology used in 5G networks provides high connectivity;
5G can distribute Internet access to nodes within a building and can be deployed over a
combination of wired and wireless network connections. The current trend of 5G technology
points to a promising future.

GSM (Global System for Mobile Communication)
GSM, the Global System for Mobile Communication, is a digital cellular system. It originated in
Europe, with the first network launched in Finland, and is now used throughout the world.
GSM accounts for about 80% of the mobile phone technology market, with more than
3 billion users. GSM gained its popularity as people used it to talk to friends and relatives.
The use of GSM is made possible by the SIM (Subscriber Identity Module). GSM is easy to use,
affordable and lets you carry your cell phone everywhere. GSM is a 2G technology and
operates in several frequency bands, of which the 900 MHz and 1800 MHz bands are the
most widely used.
GSM offers moderate security: it allows encryption between the end user and the serving
base station, and various cryptographic modules form part of the GSM technology.
EDGE Technology (Enhanced Data Rates for GSM Evolution Technology)
EDGE technology is an extended version of GSM. It allows clearer and faster transmission of
data and information, and is also termed IMT-SC (single carrier). EDGE was first deployed
commercially by Cingular, which is now known as AT&T. It is a radio interface technology and
is counted among the third-generation technologies. EDGE is preferred over plain GSM
because it can carry both packet-switched and circuit-switched data. EDGE is a backward-
compatible technology, meaning that it continues to work with older-generation devices.
EDGE is maintained by the Third Generation Partnership Project (3GPP), the association that
supports the upgrading of GSM, EDGE and related technologies. The capacity and performance
of EDGE exceed those of 2G GSM, since EDGE uses more sophisticated coding and modulation
for data transmission, and it can be used to connect to the Internet.

EDGE supports packet switching and gives its users a faster Internet connection, allowing them
to exploit multimedia services. Deploying EDGE does not involve the expense of additional
end-user hardware or software; it only requires the operator to install an EDGE-capable
transceiver at the base station. EDGE is supported by almost all network vendors; all they have
to do is upgrade their base stations. EDGE has its edge because it can make use of both
circuit-switched and packet-switched technology. EDGE is also referred to as EGPRS (Enhanced
General Packet Radio Service). A GPRS network is required in order to use EDGE, because
EDGE cannot work without the underlying GSM/GPRS infrastructure; it is, in effect, an
extended version of GSM.

Conclusion:

Thus we have studied wireless communication technologies.

FAQs:-

1. Question 1. How Frequency Hopping Is Used For Security In Bluetooth?


Answer:
Bluetooth uses Adaptive Frequency Hopping (AFH) and is able to reduce interference
between wireless technologies sharing the 2.4 GHz spectrum. To take advantage of the
available frequencies, AFH works within the spectrum by detecting other devices and
avoiding the frequencies they are using. Efficient transmission with a high degree of
interference immunity is achieved by hopping among 79 frequencies spaced at 1 MHz
intervals. (A small illustrative simulation of this idea follows this answer.)
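The idea behind adaptive frequency hopping can be illustrated with a small simulation: choose
a pseudo-random hop sequence over the 79 one-MHz channels, but skip channels that have
been detected as occupied. This is only a conceptual Python sketch; the real Bluetooth
hop-selection kernel defined in the specification is considerably more involved, and the "bad"
channel set below is hypothetical.

    import random

    ALL_CHANNELS = list(range(79))        # 79 channels, 1 MHz apart, in the 2.4 GHz band
    bad_channels = {5, 6, 7, 38}          # hypothetical channels detected as occupied
    good_channels = [ch for ch in ALL_CHANNELS if ch not in bad_channels]

    def hop_sequence(n_hops, seed=42):
        """Pseudo-random hop sequence restricted to channels judged to be clear."""
        rng = random.Random(seed)
        return [rng.choice(good_channels) for _ in range(n_hops)]

    print(hop_sequence(10))               # e.g. the next 10 hop channels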
2. Question 2. Why Is Bluetooth 2.0 Better Than Previous Versions?
Answer:
Bluetooth 2.0 is better than its predecessors because:
o Bluetooth 2.0 is 3 times faster than 1.2
o An additional modulation scheme is used
o It is backwards compatible
o It supports more concurrent connections
o It is better at recovering from errors, and
o Power consumption is lower.
3. Question 3. What Do You Mean By The Term Frequency-hopping Spread Spectrum (fhss)?
Answer:
o Flexibility and mobility are the main reasons for using wireless LANs, which use
radio frequencies to transmit data and let devices communicate with one another
while on the move.
o In FHSS, data is transmitted on one frequency for a limited time, then the
transmitter hops pseudo-randomly to another frequency and transmits again. The
RF circuits can use efficient, non-linear class C amplification with a typical 1 MHz
bandwidth.
o FHSS systems are well suited to indoor use and to severe multipath environments,
because the frequency-hopping scheme can defeat multipath fading by hopping to
a new frequency.
4. Question 4. Explain The Term Airport In Bluetooth?
Answer:
AirPort is based on the 802.11 standard and has a faster transfer rate. Technologically it is
more advanced, more rugged, smaller and usable almost anywhere: roughly 10 Mbit/s, a
50-metre range and up to 10 active devices. Bluetooth, by comparison, offers about
1 Mbit/s, a 10-metre range and up to 7 active devices.
5. Question 5. What Method Is Used For Voice Transfer? Brief About The Method Used?
Answer:
For Voice transfer, SCO (Synchronous Connection Oriented) links are used for good
synchronization and reliability.
6. Question 6. Explain The Following Terms: Icmp, Arp, Multicast, and Broadcast?
Answer:
Internet Control Message Protocol (ICMP): used for control and error reporting; for
example, the ping command uses ICMP echo request/reply to check connectivity (see the
sketch after this answer).
Address Resolution Protocol (ARP): used to resolve an IP address into the corresponding
MAC (hardware) address of another system on the local network.
Multicast: communication between a single sender and a selected group of recipients in a
network.
Broadcast: sending a message to all recipients in a network simultaneously.
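A simple way to see ICMP in action without writing raw sockets is to invoke the system ping
utility. The sketch below is a minimal Python example and assumes a Unix-like ping that
accepts the -c (count) option; on Windows the equivalent flag is -n. The local ARP cache
(IP-to-MAC mappings) can similarly be inspected with the OS command arp -a.

    import subprocess

    def is_reachable(host: str, count: int = 2) -> bool:
        """Connectivity check using ICMP echo via the system ping utility."""
        result = subprocess.run(["ping", "-c", str(count), host],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0     # exit code 0 means replies were received

    print(is_reachable("127.0.0.1"))      # normally True on any machine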
7. Question 7. What Is Tcp Connection Establishment And Tear Down?
Answer:
A TCP connection proceeds as follows:
o A connection is established by synchronizing sender and receiver (the three-way
handshake), and the operating system is informed that the connection has been
established.
o The sender starts sending data and receives acknowledgements; each time data is
sent, a retransmission timer is started.
o If the sender receives no acknowledgement before the timer expires, the data is
retransmitted.
o If windowing shows that the receiver's buffer is full, the receiver signals the sender
to stop, and the sender pauses transmission.
o Once the receiver has processed the buffered data, it signals the sender to resume,
and transmission of data continues.
Tear-down is the reverse: each side sends a FIN segment, which the other side
acknowledges, and the connection is closed. (A minimal socket sketch follows this answer.)
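The handshake and acknowledgements described above are handled by the operating system's
TCP stack: connect() on the client triggers the SYN / SYN-ACK / ACK exchange, accept() on the
server returns once the connection is established, and retransmission and windowing happen
automatically. A minimal sketch using only the Python standard library (loopback address and
port 5000 assumed):

    import socket, threading, time

    HOST, PORT = "127.0.0.1", 5000        # assumed loopback address and free port

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()        # returns after the three-way handshake completes
            with conn:
                conn.sendall(conn.recv(1024))   # echo the data back

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)                       # give the server a moment to start listening

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))         # SYN, SYN-ACK, ACK happen here
        cli.sendall(b"hello")
        print(cli.recv(1024))             # b'hello'; closing the sockets tears the connection down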
8. Question 8. What Is Compulsory Tunnel?
Answer:
In a compulsory tunnel, the tunnel is created without any action from the user and without
giving the user any choice. The Internet service provider's access concentrator receives
Point-to-Point Protocol (PPP) packets from the user, encapsulates them in L2TP and sends
them through a tunnel to the L2TP Network Server (LNS). The ISP must therefore be
L2TP-capable.
9. Question 9. What Is Voluntary Tunnel?
Answer:
The voluntary tunnel model is created by the user, typically using an L2TP-enabled client.
The user sends L2TP packets to the Internet Service Provider, which forwards them on to
the LNS. The ISP does not need to support L2TP, because the L2TP tunnel initiator resides
on the user's own system, which effectively acts as the remote client.
10. Question 10. What Is Point-to-point Tunneling Protocol, Pptp?
Answer:
PPTP is one of the methods used to implement Virtual Private Networks. The PPTP
specification itself does not provide confidentiality or encryption; it relies on the protocol
being tunneled (PPP) to provide privacy. PPTP works by sending Point-to-Point Protocol
frames to the peer encapsulated in the Generic Routing Encapsulation (GRE) protocol.
Because it is easy to configure, it became one of the most popular VPN protocols; it was the
first VPN protocol supported by Microsoft Dial-up Networking.
11. Question 11. What Is Chap (challenge-handshake Authentication Protocol)?
Answer:
The Challenge-Handshake Authentication Protocol (CHAP) authenticates a user or network
host to an authenticating entity, which may, for example, be an Internet access provider.
CHAP is used by Point-to-Point Protocol (PPP) servers to validate the identity of remote
clients. CHAP verifies the identity of the client periodically using a three-way handshake;
this happens when the initial link is established and may be repeated at any time afterwards.
Verification is based on a shared secret. CHAP protects against replay (playback) attacks by
the peer through an incrementally changing identifier and a variable challenge value. Both
client and server must know the plaintext of the secret, although it is never sent over the
link. (A short computation sketch follows this answer.)
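According to RFC 1994, the CHAP response value is the MD5 hash of the one-octet identifier,
the shared secret and the challenge, concatenated in that order. The sketch below shows the
computation with hypothetical values; in a real exchange the authenticator chooses the
challenge and compares the returned hash.

    import hashlib, os

    secret = b"shared-secret"             # known to both client and authenticator (hypothetical)
    identifier = bytes([0x01])            # one-octet identifier from the Challenge packet
    challenge = os.urandom(16)            # random challenge chosen by the authenticator

    response = hashlib.md5(identifier + secret + challenge).digest()   # sent by the client
    expected = hashlib.md5(identifier + secret + challenge).digest()   # recomputed by the authenticator

    print(response == expected)           # True -> the client proved knowledge of the secret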
12. Question 12. Explain The Concept Of Pki, Public Key Infrastructure?
Answer:
Public Key Infrastructure (PKI): a set of hardware, software, people, policies and procedures
used to create, manage, distribute, store and revoke digital certificates. In a PKI, public keys
are bound to the identities of users by means of a Certificate Authority (CA); each user
identity must be unique for that CA. The binding is established through a registration and
issuance process which, depending on the level of assurance, may be carried out by software
at the CA or under human supervision. The PKI role that assures this binding is called the
Registration Authority (RA). The public key certificates issued by the CA comprise an
unforgeable binding of the user identity to the public key, together with validity conditions
and other attributes.
13. Question 13. Explain The Concepts Of Digital Certificates?
Answer:
A digital certificate is a credential that validates the certificate owner's identity. The identity
information carried by the certificate is known as the 'subject distinguished name'. A
Certificate Authority issues digital certificates to users or organizations, and the trust placed
in that Certificate Authority is the foundation for trusting the certificate as a valid credential.
(An inspection sketch follows this answer.)
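A certificate's subject, issuer (the CA) and validity period can be inspected directly. The
standard-library Python sketch below fetches and verifies a server certificate over TLS; the
hostname is only an example.

    import socket, ssl

    hostname = "www.python.org"                     # example host; any HTTPS server will do
    context = ssl.create_default_context()          # validates the chain against trusted CAs

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()                # dict form of the validated certificate

    print("Subject:", cert["subject"])              # the 'subject distinguished name'
    print("Issuer: ", cert["issuer"])               # the Certificate Authority that signed it
    print("Valid:  ", cert["notBefore"], "to", cert["notAfter"])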
14. Question 14. Explain Disadvantages Of Symmetric Cryptosystems?
Answer:
The following are the disadvantages of symmetric cryptosystems:
o Key transport: the secret key must be delivered from the sending system to the
receiving system before the actual message can be transmitted. Every means of
electronic communication is potentially insecure, since there is no guarantee that
the communication channel cannot be tapped, so exchanging the key in person
(or over some other secure channel) may be the only safe option (see the toy
example after this answer).
o Non-repudiation cannot be provided: digital signatures that prevent a sender from
repudiating a message require asymmetric (public-key) cryptography.
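The key-transport problem is easy to see in code: with a symmetric cipher, exactly the same
key must reach the receiver over some secure channel before any ciphertext can be
exchanged. The toy XOR cipher below is not secure and is used purely to make the shared-key
requirement explicit.

    from itertools import cycle

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        """Toy XOR 'cipher' -- insecure, for illustration only."""
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    shared_key = b"transport-me-securely"                  # must somehow reach the receiver first
    ciphertext = xor_cipher(b"meet at noon", shared_key)   # sender encrypts
    plaintext = xor_cipher(ciphertext, shared_key)         # receiver must hold the identical key
    print(plaintext)                                       # b'meet at noon'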
15. Question 15. What Is Wireless Communication Concept?
Answer:
Wireless communication is the transfer of information between two or more points that
are not connected by any physical medium. Wireless communication can be via:
o Radio communication
o Microwave communication
o Light communication (visible and infrared).
16. Question 16. What Do You Mean By Handoff?
Answer:
When a mobile moves into a different cell while a conversation is in progress, the Mobile
Switching Center automatically transfers the call to a new channel belonging to the new
Base Station.
Types of handoff:-
o Hard Handoff
o Soft Handoff.
17. Question 17. What Do You Mean By Mobile Station Subsystem?
Answer:
It includes the mobile equipment, i.e. a physical terminal such as a telephone, which contains
the radio transceiver, the signal processor and the Subscriber Identity Module (SIM).
18. Question 18. What Do You Mean By Base Station Subsystem?
Answer:
It consists of one or more Base Transceiver Stations (BTS) and a Base Station Controller (BSC).
Each BTS serves one cell and includes an antenna, a radio transceiver and a link to the BSC.
The BSC controls multiple BTS units, manages the handoffs of mobiles and controls paging.
19. Question 19. What Do You Mean By Network And Switching Subsystem?
Answer:
It controls hand offs between cells in different BSSs, authenticates users, validates and
maintains their accounts.
It is mainly supported by four databases:-
o Home Location Register.
o Visitor Location Register.
o Authentication Center.
o Equipment Identity Register.
20. Question 20. What Are The Different Types Of Transmission Impairment?
Answer:
When the received signal is not the same as the transmitted signal, this is known as
transmission impairment (a worked attenuation example follows this answer).
Three different types of transmission impairment are:-
o Attenuation.
o Noise.
o Delay Distortion.
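Attenuation is normally expressed in decibels as 10·log10(P_in / P_out). As a quick worked
example with assumed power values, a signal that drops from 10 mW to 5 mW has lost about
3 dB:

    import math

    p_in_mw, p_out_mw = 10.0, 5.0                  # assumed transmitted and received powers
    loss_db = 10 * math.log10(p_in_mw / p_out_mw)  # attenuation in dB
    print(f"Attenuation: {loss_db:.2f} dB")        # ~3.01 dB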
21. Question 21. What Is the Difference between 3g and 4g?
Answer:
Following are the differences between 3G and 4G:
o 3G stands for third generation, marking that stage in the evolutionary path of the
mobile phone industry; 4G means fourth generation, a set of standards developed
as the successor to 3G in the near future.
o 4G speeds are meant to exceed those of 3G.
o 3G relies primarily on circuit switching (particularly for voice), while 4G uses
packet switching throughout.
