Gesture Recognition System: Submitted in Partial Fulfillment of Requirement For The Award of The Degree of
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
ARTIFICIAL NEURAL NETWORK
ABSTRACT
The history of ANNs stems from the 1940s, the decade of the first electronic
computer. However, the first important step took place in 1957, when Rosenblatt
introduced the first concrete neural model, the perceptron. Rosenblatt also took
part in constructing the first successful neurocomputer, the Mark I Perceptron.
The application area of MLP networks remained rather limited until the
breakthrough of 1986, when a general backpropagation algorithm for the
multi-layered perceptron was introduced by Rumelhart and McClelland.
In 1982, Hopfield brought out his idea of a neural network. Unlike the neurons
in MLP, the Hopfield network consists of only one layer whose neurons are
fully connected with each other.
Examinations of humans' central nervous systems inspired the concept of
artificial neural networks. In an artificial neural network, simple artificial nodes,
known as "neurons", "neurodes", "processing elements" or "units", are
connected together to form a network which mimics a biological neural
network.
There is no single formal definition of what an artificial neural network is.
However, a class of statistical models may commonly be called "neural" if it
possesses the following characteristics:
1. It contains sets of adaptive weights, i.e. numerical parameters that are
tuned by a learning algorithm, and
2. It is capable of approximating non-linear functions of its inputs.
In most cases, neurons are generated by special types of stem cells. A type of
glial cell, called astrocytes (named for being somewhat star-shaped), have also
been observed to turn into neurons by virtue of the stem cell characteristic
pluripotency. In humans, neurogenesis largely ceases during adulthood but in
two brain areas, the hippocampus and olfactory bulb, there is strong evidence
for generation of substantial numbers of new neurons.
ANN BASIC STRUCTURE
The idea of ANNs is based on the belief that the working of the human brain,
which makes the right connections, can be imitated using silicon and wires in
place of living neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons.
Each neuron is connected to thousands of other cells by axons. Stimuli from the
external environment, or inputs from sensory organs, are accepted by dendrites.
These inputs create electric impulses, which quickly travel through the neural
network. A neuron can then either send the message on to other neurons to
handle the issue, or not send it forward at all.
ANNs are composed of multiple nodes, which imitate the biological neurons of
the human brain. The neurons are connected by links and interact with each
other. Each node can take input data and perform simple operations on it; the
result of these operations is passed on to other neurons. The output at each
node is called its activation or node value.
Each link is associated with a weight. ANNs are capable of learning, which
takes place by altering these weight values. The basic artificial neuron
computes a weighted sum of its inputs and passes it through an activation
function.
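As an illustration of this computation, a single artificial neuron can be sketched in a few lines; the weights, bias, and input values below are arbitrary assumptions, not taken from the text:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z into (0, 1)

# Example: two inputs with illustrative weights and a bias
activation = neuron([1.0, 0.5], [0.4, 0.6], bias=-0.5)  # z = 0.2, output ≈ 0.55
```

Learning would then consist of adjusting `weights` and `bias` so that the activation moves toward a desired value.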
TYPES OF ANN
FeedForward ANN :
Here, the information flow is unidirectional: a unit sends information to other
units from which it does not receive any information, and there are no feedback
loops. In a fully connected network, the number of weights between the input
and hidden layers is n × nh, where n is the number of inputs and nh is the
number of neurons in the hidden layer.
FeedBack ANN :
Here, feedback loops are allowed. Such networks are used in content
addressable memories.
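A forward pass through a feedforward network with one hidden layer can be sketched as follows; the weight values and the choice of n = 3 inputs and nh = 2 hidden neurons are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, W_hidden, W_out):
    """One unidirectional pass: input -> hidden layer -> single output."""
    hidden = [sigmoid(sum(xi * w for xi, w in zip(x, row))) for row in W_hidden]
    return sigmoid(sum(h * w for h, w in zip(hidden, W_out)))

n, nh = 3, 2
W_hidden = [[0.1, 0.2, 0.3],   # weights into hidden neuron 1
            [0.4, 0.5, 0.6]]   # weights into hidden neuron 2
W_out = [0.7, -0.7]            # hidden -> output weights

assert sum(len(row) for row in W_hidden) == n * nh  # n x nh input-to-hidden weights
y = feedforward([1.0, 0.0, 1.0], W_hidden, W_out)
```

No value ever flows backward during this pass, which is what distinguishes the feedforward case from the feedback one.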
Working of ANNs :
For example, pattern recognizing. The ANN comes up with guesses while
recognizing. Then the teacher provides the ANN with the answers. The network
then compares it guesses with the teacher’s “correct” answers and makes
adjustments according to errors.
In supervised training, both the inputs and the outputs are provided. The
network processes the inputs and compares its resulting outputs against the
desired outputs. Errors are then propagated back through the system, causing
the system to adjust the weights which control the network. This process occurs
over and over as the weights are continually tweaked. The set of data which
enables the training is called the "training set." During the training of a
network, the same set of data is processed many times as the connection weights
are continually refined.
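The supervised loop described above can be sketched with a single perceptron-style neuron; the AND-function training set, learning rate, and epoch count are illustrative assumptions:

```python
# A single neuron learns the logical AND function by repeatedly comparing
# its guess to the teacher's answer and nudging the weights by the error.
training_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):                    # the same data is processed many times
    for inputs, target in training_set:
        guess = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
        error = target - guess             # teacher's answer minus the guess
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

predictions = [1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
               for inputs, _ in training_set]
# predictions == [0, 0, 0, 1]: the network has learned the training set
```

Each weight update is driven purely by the mismatch between the network's output and the desired output, which is the essence of supervised training.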
At the present time, unsupervised learning is not well understood. This
adaptation to the environment is the promise which would enable science-fiction
types of robots to continually learn on their own as they encounter new
situations and new environments. Life is filled with situations where exact
training sets do not exist; some of these situations involve military action,
where new combat techniques and new weapons might be encountered. Because of
this unexpected aspect of life, and the human desire to be prepared, there
continues to be research into, and hope for, this field. Yet the vast bulk of
neural network work is still in systems with supervised learning, and
supervised learning is achieving results. This is also called adaptive
learning.
Computational power
The multilayer perceptron is a universal function approximator, as proven by
the universal approximation theorem. However, the proof is not constructive
regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a
specific recurrent architecture with rational-valued weights (as opposed to
full-precision real-number-valued weights) has the full power of a Universal
Turing Machine [54], using a finite number of neurons and standard linear
connections. Further, it has been shown that the use of irrational values for
weights results in a machine with super-Turing power.
Capacity
Artificial neural network models have a property called 'capacity', which
roughly corresponds to their ability to model any given function. It is related to
the amount of information that can be stored in the network and to the notion of
complexity.
Convergence
Nothing can be said in general about convergence since it depends on a number
of factors. Firstly, there may exist many local minima. This depends on the cost
function and the model. Secondly, the optimization method used might not be
guaranteed to converge when far away from a local minimum. Thirdly, for a
very large amount of data or parameters, some methods become impractical. In
general, it has been found that theoretical guarantees regarding convergence are
an unreliable guide to practical application.
Generalization and statistics
In applications where the goal is to create a system that generalizes well in
unseen examples, the problem of over-training has emerged. This arises in
convoluted or over-specified systems when the capacity of the network
significantly exceeds the needed free parameters. There are two schools of
thought for avoiding this problem: The first is to use cross-validation and
similar techniques to check for the presence of overtraining and optimally select
hyper parameters such as to minimize the generalization error. The second is to
use some form of regularization. This is a concept that emerges naturally in a
probabilistic (Bayesian) framework, where the regularization can be performed
by selecting a larger prior probability over simpler models; but also in statistical
learning theory, where the goal is to minimize over two quantities: the
'empirical risk' and the 'structural risk', which roughly corresponds to the error
over the training set and the predicted error in unseen data due to overfitting.
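As a minimal sketch of the regularization idea (the data points and penalty value here are invented for illustration), one-dimensional ridge regression shows how adding a structural penalty to the empirical risk shrinks the fitted weight:

```python
# Fit y ≈ w*x by minimizing  sum((y - w*x)^2) + lam*w^2.
# The first term is the empirical risk; lam*w^2 is the structural penalty.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8]

def ridge_weight(lam):
    # The 1-D minimizer has the closed form  w = sum(x*y) / (sum(x^2) + lam)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

w_unregularized = ridge_weight(0.0)    # pure empirical-risk minimization
w_regularized = ridge_weight(10.0)     # larger penalty -> smaller weight
```

The penalty biases the solution toward a simpler (smaller-weight) model, trading a little training-set error for better expected behavior on unseen data.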
CHARACTERISTICS OF ANN
Basically Computers are good in calculations that basically takes inputs process
then and after that gives the result on the basis of calculations which are done at
particular Algorithm which are programmed in the softwares but ANN improve
their own rules, the more decisions they make, the better decisions may
become.The Characteristics are basically those which should be present in
intelligent System like robots and other Artificial Intelligence Based
Applications.
Commonly used activation functions include:
Threshold
Sigmoid
Gaussian
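These three activation functions can be sketched directly; this is a minimal illustration, and the exact forms used in any given network may differ:

```python
import math

def threshold(z):
    """Hard step: fires (1) only when the input exceeds zero."""
    return 1.0 if z > 0 else 0.0

def sigmoid(z):
    """Smooth S-curve mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def gaussian(z):
    """Bell curve: maximal response at z = 0, fading toward 0 on either side."""
    return math.exp(-z * z)

# threshold(0.3) -> 1.0, sigmoid(0.0) -> 0.5, gaussian(0.0) -> 1.0
```

The threshold gives binary decisions, while the sigmoid and Gaussian are differentiable, which is what makes gradient-based training such as backpropagation possible.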
APPLICATIONS OF ANN
They can perform tasks that are easy for a human but difficult for a machine −
Aerospace − Aircraft autopilots, aircraft fault detection.
Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design,
transplant time optimizer.
Control − ANNs are often used to make steering decisions of physical vehicles.
However, ANNs also have limitations:
The neural network needs training to operate.
The architecture of a neural network is different from the architecture of
microprocessors, and therefore needs to be emulated.
Large neural networks require high processing time.
CONCLUSION:
The computing world has a lot to gain from neural networks. Their ability to
learn by example makes them very flexible and powerful. Furthermore, there is
no need to devise an algorithm in order to perform a specific task, i.e. there
is no need to understand the internal mechanisms of that task. They are also
very well suited for real-time systems because of their fast response and
computational times, which are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology
and psychology. They are regularly used to model parts of living organisms and
to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that
someday 'conscious' networks might be produced. A number of scientists argue
that consciousness is a 'mechanical' property and that 'conscious' neural
networks are a realistic possibility.
Finally, we can say that even though neural networks have a huge potential we
will only get the best of them when they are integrated with computing, AI,
fuzzy logic and related subjects.