
Term End Examination - November 2011

Course : EEE408 - Neural Networks and Fuzzy Control
Time : Three Hours
Slot : G2
Max. Marks : 100

Answer any FIVE Questions                                (5 X 20 = 100 Marks)
1. a) List the comparisons between artificial and biological neural networks. [5]

   b) Perform two training steps for the network with learning constant c = 0.25,
      training with the Widrow-Hoff learning rule on the following inputs:
      X1 = [2 0 -1]t, d1 = -1 ; X2 = [1 -2 -1]t, d2 = 1.
      Find the final weights for bipolar binary f(net) and bipolar continuous
      neurons f(net), lambda = 1. The initial weight vector is [1 0 1]t. [5]
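A minimal Python sketch of the two training steps asked for here, assuming the reconstructed data above and the standard bipolar continuous activation f(net) = 2/(1 + e^(-lambda*net)) - 1; for the strict Widrow-Hoff (LMS) form, the error term (d - o)*f'(net) would be replaced by (d - net):

```python
import numpy as np

def f(net, lam=1.0):
    """Bipolar continuous activation: f(net) = 2 / (1 + exp(-lam*net)) - 1."""
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def delta_step(w, x, d, c=0.25, lam=1.0):
    """One delta-rule step for a bipolar continuous neuron:
    f'(net) = (lam/2) * (1 - o**2), and w <- w + c*(d - o)*f'(net)*x."""
    net = w @ x
    o = f(net, lam)
    fprime = 0.5 * lam * (1.0 - o ** 2)
    return w + c * (d - o) * fprime * x

# Training pairs as reconstructed above (minus signs assumed restored).
w = np.array([1.0, 0.0, 1.0])
pairs = [(np.array([2.0, 0.0, -1.0]), -1.0),
         (np.array([1.0, -2.0, -1.0]), 1.0)]
for x, d in pairs:
    w = delta_step(w, x, d)
    print("w =", w)
```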

   c) Realize the Exclusive-OR function using McCulloch-Pitts neurons. [10]
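For reference, a small sketch of one standard McCulloch-Pitts realization of XOR, using two hidden units (x1 AND NOT x2, x2 AND NOT x1) feeding an OR unit; the weights and thresholds below are one conventional choice, not the only one:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor(x1, x2):
    # Hidden layer: z1 = x1 AND NOT x2, z2 = x2 AND NOT x1.
    z1 = mp_neuron([x1, x2], [2, -1], 2)
    z2 = mp_neuron([x1, x2], [-1, 2], 2)
    # Output layer: y = z1 OR z2.
    return mp_neuron([z1, z2], [2, 2], 2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```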

2. a) With a neat architecture, explain the MADALINE algorithm. [10]

   b) Find the weight matrix to store the following input-output patterns using the
      Hebb rule: [10]
      A1 = [1 0 1] ; B1 = [1 0]
      A2 = [0 1 0] ; B2 = [0 1]
      (i) Using the binary step function (with threshold 0) as the activation function
      for both layers, test the response of your network in both directions for each
      binary training pattern. The initial activation of the other layer is set to zero.
      (ii) Using the bipolar step function (with threshold 0) as the activation function
      for both layers, convert the training patterns to bipolar form and test the
      network response in both directions again. Initial activations are as in part (i).
      (iii) Test the response of your network on each of the following noisy versions
      of the bipolar form of the training patterns: [0 -1 1], [0 0 1] and [-1 0 -1].
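A brief sketch of the Hebb-rule computation this part asks for, on the bipolar form of the patterns; the forward test shown is one direction only, and the zero-net handling is simplified to illustrate the idea:

```python
import numpy as np

def to_bipolar(v):
    """Map binary {0,1} patterns to bipolar {-1,+1}."""
    return 2 * np.asarray(v) - 1

# Training pairs from the question.
A = [[1, 0, 1], [0, 1, 0]]
B = [[1, 0],    [0, 1]]

# Hebb rule for a heteroassociative (BAM-style) net: W = sum_p s(p)^T t(p),
# computed here on the bipolar form of the patterns.
W = sum(np.outer(to_bipolar(a), to_bipolar(b)) for a, b in zip(A, B))
print("W =\n", W)

def step(x):
    """Bipolar step activation with threshold 0 (ties resolved to +1 here)."""
    return np.where(x >= 0, 1, -1)

# Forward test (A layer -> B layer) on each bipolar training pattern.
for a in A:
    print(to_bipolar(a), "->", step(to_bipolar(a) @ W))
```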

3. a) Draw and explain the architectures of the Maxnet and the Hamming net with the
      necessary equations. [10]

   b) The following four-dimensional vectors are assigned to two classes. The four
      vectors and their associated classes are: [10]

      Vector        Class
      [1,0,1,0]
      [0,0,1,1]
      [1,1,0,0]
      [1,0,0,1]

      Use the first two input vectors to initialize the network weights, and
      initialize the learning-rate parameter to 0.1. Perform one epoch of training
      using the LVQ algorithm. Also draw the neural architecture of this LVQ net and
      indicate the weights obtained after one training epoch.
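A sketch of a single LVQ update step under the usual winner-take-all rule; because the class labels in the table above did not survive extraction, the labels used here are placeholders only:

```python
import numpy as np

def lvq_step(weights, classes, x, target, alpha=0.1):
    """One LVQ update: find the closest reference vector; pull it toward x
    if its class matches the target class, otherwise push it away."""
    d = np.linalg.norm(weights - x, axis=1)   # Euclidean distances
    j = int(np.argmin(d))                     # winning unit
    if classes[j] == target:
        weights[j] += alpha * (x - weights[j])
    else:
        weights[j] -= alpha * (x - weights[j])
    return weights

# First two input vectors initialize the reference vectors, as the question
# specifies; the class labels are hypothetical placeholders.
W = np.array([[1.0, 0.0, 1.0, 0.0],    # assumed class 1
              [0.0, 0.0, 1.0, 1.0]])   # assumed class 2
remaining = [(np.array([1.0, 1.0, 0.0, 0.0]), 1),   # labels assumed
             (np.array([1.0, 0.0, 0.0, 1.0]), 2)]
for x, t in remaining:
    W = lvq_step(W, [1, 2], x, t, alpha=0.1)
print("weights after one epoch:\n", W)
```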

4. a) Use the Hebb rule to store the vectors [1, 1, 1, 1] and [1, 1, -1, -1] in an
      autoassociative neural net. [10]
      (i) Find the weight matrix. (Do not set the diagonal terms to zero.)
      (ii) Test the net, using the vector [1, 1, 1, 1] as input.
      (iii) Test the net, using [1, 1, -1, -1] as input.
      (iv) Test the net, using [1, 1, 1, 0] as input.
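A short sketch of the autoassociative Hebb computation, keeping the diagonal terms as the question requires and testing with a sign activation:

```python
import numpy as np

patterns = np.array([[1, 1, 1, 1],
                     [1, 1, -1, -1]], dtype=float)

# Hebb rule for an autoassociative net: W = sum_p s(p)^T s(p).
# The diagonal is deliberately NOT zeroed, per the question.
W = patterns.T @ patterns
print("W =\n", W)

def recall(x):
    """One synchronous pass through the net with a sign activation."""
    return np.sign(x @ W)

for x in ([1, 1, 1, 1], [1, 1, -1, -1], [1, 1, 1, 0]):
    print(x, "->", recall(np.array(x, dtype=float)))
```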
   b) Design a Hopfield network to store [1 1 1 1] and [-1 1 -1 1]. Test the
      performance of the network with the input vectors [-1 1 -1 -1] and [1 -1 1 1]
      using synchronous and asynchronous updating. Also find the variation of the
      energy for the input [-1 1 -1 -1]. [10]
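A sketch of the Hopfield construction and an asynchronous recall pass that also tracks the energy E = -1/2 x^T W x after each sweep; the unit ordering and zero-net convention are assumptions:

```python
import numpy as np

stored = np.array([[1, 1, 1, 1],
                   [-1, 1, -1, 1]], dtype=float)

# Hopfield weights: sum of outer products with the diagonal zeroed.
W = stored.T @ stored
np.fill_diagonal(W, 0)

def energy(x):
    """Hopfield energy E = -1/2 * x^T W x (no external bias assumed)."""
    return -0.5 * x @ W @ x

def async_recall(x, sweeps=3):
    """Asynchronous updating: revisit units one at a time, printing the
    energy after each sweep so its variation can be tracked."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            net = W[i] @ x
            if net != 0:                 # keep previous state when net == 0
                x[i] = 1 if net > 0 else -1
        print("state", x, "energy", energy(x))
    return x

async_recall(np.array([-1, 1, -1, -1], dtype=float))
```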

5. a) Explain the training algorithm used in self-organizing feature maps, with a
      neat architecture. [10]
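As a supplement to the explanation asked for here, a minimal sketch of one epoch of the Kohonen SOM weight update on a 1-D map, with an assumed learning rate and neighbourhood radius:

```python
import numpy as np

def som_epoch(weights, data, alpha=0.5, radius=0):
    """One epoch of Kohonen SOM training on a 1-D map: for each input, find
    the best-matching unit and pull it (and its neighbours within `radius`)
    toward the input."""
    n = len(weights)
    for x in data:
        j = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # BMU
        lo, hi = max(0, j - radius), min(n - 1, j + radius)
        for k in range(lo, hi + 1):
            weights[k] += alpha * (x - weights[k])
    return weights

# Toy usage with two map units and hypothetical 2-D data.
W = np.array([[0.2, 0.8], [0.9, 0.1]])
data = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
print(som_epoch(W, data))
```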
   b) For the given exemplar vectors e(1) = [-1 1 1 1] and e(2) = [1 -1 1 -1], use a
      Hamming net to find the exemplar that is closest to each of the bipolar input
      patterns [1 1 -1 -1], [-1 1 1 -1], [-1 -1 -1 -1] and [-1 -1 1 1]. Assume the
      initial weight matrix
          [-0.5  0.5  0.5  0.5]
          [ 0.5 -0.5  0.5 -0.5]
      [10]
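A sketch of the Hamming-net matching score, assuming the sign-restored weight matrix W = E/2 above and the usual bias of n/2, so each unit reports how many components the input shares with its exemplar:

```python
import numpy as np

exemplars = np.array([[-1, 1, 1, 1],
                      [1, -1, 1, -1]], dtype=float)
n = exemplars.shape[1]

# Hamming-net lower layer: W = E/2 (matching the assumed initial weight
# matrix above) and bias n/2, so each unit's output is the number of
# components its exemplar has in common with the input.
W = exemplars / 2.0
b = n / 2.0

inputs = [[1, 1, -1, -1], [-1, 1, 1, -1], [-1, -1, -1, -1], [-1, -1, 1, 1]]
for x in inputs:
    scores = W @ np.asarray(x, dtype=float) + b   # agreements per exemplar
    print(x, "-> closest exemplar: e(%d)" % (int(np.argmax(scores)) + 1), scores)
```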

6. a) Write the properties of classical sets. [5]

   b) The fuzzy sets A and B are given by [5]

      A = 1/2 + 0.5/3 + 0.3/4 + 0.2/5  and  B = 0.5/2 + 0.7/3 + 0.2/4 + 0.4/5.

      Find (i) A ∪ B (ii) A ∩ B (iii) the complements of A and B.
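A minimal sketch of the standard fuzzy operations involved, using the membership values reconstructed above (union = max, intersection = min, complement = 1 - membership):

```python
# Fuzzy sets as {element: membership grade}.
A = {2: 1.0, 3: 0.5, 4: 0.3, 5: 0.2}
B = {2: 0.5, 3: 0.7, 4: 0.2, 5: 0.4}

union        = {x: max(A[x], B[x]) for x in A}      # A ∪ B: max of grades
intersection = {x: min(A[x], B[x]) for x in A}      # A ∩ B: min of grades
A_complement = {x: round(1 - A[x], 2) for x in A}   # 1 - membership
B_complement = {x: round(1 - B[x], 2) for x in B}

print(union, intersection, A_complement, B_complement, sep="\n")
```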
c)

Explain the different methods of defuzzification with neat diagram.

[10]
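As an illustration of two common defuzzification methods, a short sketch of the centroid (centre of gravity) and mean-of-maxima rules on a hypothetical aggregated membership function:

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """Centre-of-gravity defuzzification: x* = sum(mu(x)*x) / sum(mu(x))."""
    x, mu = np.asarray(x, dtype=float), np.asarray(mu, dtype=float)
    return float((mu * x).sum() / mu.sum())

def mom_defuzzify(x, mu):
    """Mean-of-maxima defuzzification: average of the points where the
    membership function attains its maximum."""
    x, mu = np.asarray(x, dtype=float), np.asarray(mu, dtype=float)
    return float(x[mu == mu.max()].mean())

# Toy aggregated output membership function (illustrative values only).
x  = [0, 1, 2, 3, 4, 5]
mu = [0.0, 0.3, 0.7, 1.0, 0.7, 0.2]
print(centroid_defuzzify(x, mu), mom_defuzzify(x, mu))
```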
