
Artificial Neural Network (ANN)

Abstract

An Artificial Neural Network (ANN) is an information-processing
paradigm that is inspired by the way biological nervous systems, such as the
brain, process information. The key element of this paradigm is the novel
structure of the information processing system. It is composed of a large
number of highly interconnected processing elements (neurons) working in
unison to solve specific problems. ANNs, like people, learn by example.

An ANN is configured for a specific application, such as pattern
recognition or data classification, through a learning process. Learning in
biological systems involves adjustments to the synaptic connections that exist
between the neurons. This is true of ANNs as well.

Neural network simulations appear to be a recent development. However,
this field was established before the advent of computers, and has survived
several eras. Many important advances have been boosted by the use of
inexpensive computer emulations. The first artificial neuron was produced in
1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts.

There were some initial simulations using formal logic. McCulloch and
Pitts (1943) developed models of neural networks based on their understanding
of neurology. These models made several assumptions about how neurons
worked. Their networks were based on simple neurons, which were considered
to be binary devices with a fixed threshold.
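A McCulloch-Pitts neuron of this kind can be sketched in a few lines of Python. This is an illustrative sketch, not code from the 1943 paper; the unit weights and the specific thresholds chosen for AND and OR are standard textbook assumptions:

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Binary threshold unit: fires (returns 1) only when the weighted
    sum of its binary inputs meets or exceeds a fixed threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 realizes logical AND of two
# binary inputs, while a threshold of 1 realizes logical OR.
AND = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=1)
```

Because the threshold is fixed and the inputs are binary, such a unit computes a simple logic function of its inputs; learning, in later models, amounted to adjusting the weights.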

Not only neuroscientists but also psychologists and engineers
contributed to the progress of neural network simulations. Rosenblatt (1958)
stirred considerable interest and activity in the field when he designed and
developed the Perceptron. The Perceptron had three layers with the middle layer
known as the association layer. This system could learn to connect or associate a
given input to a random output unit.
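As a rough illustration of perceptron-style learning, here is a minimal single-unit sketch in Python. Note that Rosenblatt's Perceptron was a three-layer system as described above; the rule below is the commonly taught single-unit simplification, and the learning rate, epoch count, and OR training data are arbitrary choices for the example:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Single-unit perceptron: after each prediction, weights are
    nudged toward inputs whose target output was missed."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR, a linearly separable association of input to output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The update only fires when the prediction is wrong, which is why the rule converges on linearly separable problems such as this one.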

Another system was the ADALINE (Adaptive Linear Element) which was
developed in 1960 by Widrow and Hoff (of Stanford University). The
ADALINE was an analogue electronic device made from simple components.
The method used for learning was different from that of the Perceptron: it
employed the Least-Mean-Squares (LMS) learning rule.
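The LMS rule can be sketched in the same style. The key difference from the perceptron update is that the error is measured on the raw linear output rather than on the thresholded one, so the weights keep adjusting even when the classification is already correct. The ±1 targets, learning rate, and epoch count here are illustrative assumptions, not details from Widrow and Hoff's hardware:

```python
def train_adaline(samples, lr=0.1, epochs=100):
    """ADALINE-style training: the Least-Mean-Squares rule updates
    weights in proportion to the error of the linear output."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
            err = target - y  # real-valued error, not just -1/0/+1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Same OR problem, with +/-1 targets as is conventional for ADALINE.
data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_adaline(data)
```

Minimizing the squared error of the linear output drives the weights toward the least-squares fit, whose sign then gives the classification.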

Progress during the late 1970s and early 1980s was important to the re-
emergence of interest in the neural network field. Significant progress has been
made in the field of neural networks, enough to attract a great deal of attention
and funding for further research.

Neurally based chips are emerging, and applications to complex problems
are developing. Clearly, today is a period of transition for neural network
technology.

Neural networks, with their remarkable ability to derive meaning from
complicated or imprecise data, can be used to extract patterns and detect trends
that are too complex to be noticed by either humans or other computer
techniques. A trained neural network can be thought of as an "expert" in the
category of information it has been given to analyze. This expert can then be
used to provide projections given new situations of interest and answer "what
if" questions.
