
CHAPTER 11

Problem 11.1 :
(a)
$$F(z) = \frac{4}{5} + \frac{3}{5}\,z^{-1}$$
$$X(z) = F(z)\,F^*(z^{-1}) = 1 + \frac{12}{25}\left(z + z^{-1}\right)$$
Hence:
$$\Gamma = \begin{bmatrix} 1 & \frac{12}{25} & 0 \\ \frac{12}{25} & 1 & \frac{12}{25} \\ 0 & \frac{12}{25} & 1 \end{bmatrix}, \qquad \xi = \begin{bmatrix} 3/5 \\ 4/5 \\ 0 \end{bmatrix}$$
and:
$$C_{\mathrm{opt}} = \begin{bmatrix} c_{-1} \\ c_0 \\ c_1 \end{bmatrix} = \Gamma^{-1}\xi = \frac{1}{\det\Gamma}\begin{bmatrix} 1-a^2 & -a & a^2 \\ -a & 1 & -a \\ a^2 & -a & 1-a^2 \end{bmatrix}\begin{bmatrix} 3/5 \\ 4/5 \\ 0 \end{bmatrix}$$
where $a = 12/25 = 0.48$ and $\det\Gamma = 1 - 2a^2 = 0.539$. Hence:
$$C_{\mathrm{opt}} = \begin{bmatrix} 0.145 \\ 0.95 \\ -0.456 \end{bmatrix}$$
(b) The eigenvalues of the matrix are given by :
$$|\Gamma - \lambda I| = 0 \;\Rightarrow\; \begin{vmatrix} 1-\lambda & 0.48 & 0 \\ 0.48 & 1-\lambda & 0.48 \\ 0 & 0.48 & 1-\lambda \end{vmatrix} = 0 \;\Rightarrow\; \lambda = 1,\; 1 \pm 0.48\sqrt{2} = 1,\; 0.3212,\; 1.6788$$
The step size should range between:
$$0 < \Delta < \frac{2}{\lambda_{\max}} = 1.19$$
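These results are easy to verify numerically; the following sketch (using numpy) solves for the taps of part (a) and checks the eigenvalues and step-size bound of part (b):

```python
import numpy as np

a = 0.48
Gamma = np.array([[1, a, 0],
                  [a, 1, a],
                  [0, a, 1]])
xi = np.array([3 / 5, 4 / 5, 0])

# (a) Optimum equalizer taps: C_opt = Gamma^{-1} xi
C_opt = np.linalg.solve(Gamma, xi)
print(C_opt)                 # [ 0.1442  0.9499 -0.4559]

# (b) Eigenvalues of Gamma and the step-size bound 0 < D < 2/lambda_max
lam = np.linalg.eigvalsh(Gamma)
print(lam)                   # [0.3212 1.     1.6788]
print(2 / lam.max())         # 1.1913
```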
(c) Following equations (10-3-3)-(10-3-4) we have :
$$\Gamma = \begin{bmatrix} 1 & 0.48 \\ 0.48 & 0.64 \end{bmatrix}, \qquad \Gamma\begin{bmatrix} c_{-1} \\ c_0 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} c_{-1} \\ c_0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1.25 \end{bmatrix}$$
and the feedback tap is:
$$c_1 = -c_0 f_1 = -0.75$$
Problem 11.2 :
(a)
$$\Delta_{\max} = \frac{2}{\lambda_{\max}} = \frac{2}{1 + \frac{1}{\sqrt{2}} + N_0} = \frac{2}{1.707 + N_0}$$
(b) From (11-1-31) :
$$J_\Delta = \Delta^2 J_{\min} \sum_{k=1}^{3} \frac{\lambda_k^2}{1 - (1 - \Delta\lambda_k)^2} \approx \frac{1}{2}\,\Delta\, J_{\min} \sum_{k=1}^{3} \lambda_k$$
Since $J_\Delta / J_{\min} = 0.01$ and $\sum_{k=1}^{3}\lambda_k = \mathrm{tr}\,\Gamma = 3(1 + N_0)$:
$$\Delta \approx \frac{0.02}{3(1 + N_0)} = \frac{0.0067}{1 + N_0}$$
(c) Let $C' = V^t C$ and $\xi' = V^t \xi$, where $V$ is the matrix whose columns form the eigenvectors of the covariance matrix $\Gamma$ (note that $V^t = V^{-1}$). Then:
$$\begin{aligned}
C_{(n+1)} &= (I - \Delta\Gamma)\, C_{(n)} + \Delta\xi \\
C_{(n+1)} &= \left(I - \Delta V\Lambda V^{-1}\right) C_{(n)} + \Delta\xi \\
V^{-1} C_{(n+1)} &= V^{-1}\left(I - \Delta V\Lambda V^{-1}\right) C_{(n)} + \Delta V^{-1}\xi \\
C'_{(n+1)} &= (I - \Delta\Lambda)\, C'_{(n)} + \Delta\xi'
\end{aligned}$$
which is a set of three de-coupled difference equations (de-coupled because $\Lambda$ is a diagonal matrix). Hence, we can write:
$$c'_{k,(n+1)} = (1 - \Delta\lambda_k)\, c'_{k,(n)} + \Delta\xi'_k, \qquad k = -1, 0, 1$$
The steady-state solution is obtained when $c'_{k,(n+1)} = c'_{k,(n)} = c'_k$, which gives:
$$c'_k = \frac{\xi'_k}{\lambda_k}, \qquad k = -1, 0, 1$$
or going back to matrix form :
$$C' = \Lambda^{-1}\xi' \;\Rightarrow\; C = VC' = V\Lambda^{-1}V^{-1}\xi = \left(V\Lambda V^{-1}\right)^{-1}\xi = \Gamma^{-1}\xi$$
which agrees with the result in Probl. 10.18(a).
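As a numerical sanity check, the sketch below iterates the gradient recursion directly, reusing the $\Gamma$ and $\xi$ of Problem 11.1 with an arbitrary in-range step size $\Delta = 0.5$, and confirms convergence to $\Gamma^{-1}\xi$:

```python
import numpy as np

a = 0.48
Gamma = np.array([[1, a, 0], [a, 1, a], [0, a, 1]])
xi = np.array([3 / 5, 4 / 5, 0])

delta = 0.5                  # inside the stable range (0, 2/lambda_max)
C = np.zeros(3)
for _ in range(200):
    C = (np.eye(3) - delta * Gamma) @ C + delta * xi

print(C)                     # -> Gamma^{-1} xi = [0.145, 0.95, -0.456]
```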
Problem 11.3 :
Suppose that we have a discrete-time system with frequency response $H(\omega)$; this may be equalized by use of the DFT as shown below:
[Block diagram: input $a_n$ (spectrum $A(\omega)$) $\rightarrow$ system $H(\omega)$ (channel) $\rightarrow$ $y_n$ (spectrum $Y(\omega)$) $\rightarrow$ equalizer $E(\omega)$ $\rightarrow$ output]
$$A(\omega) = \sum_{n=0}^{N-1} a_n e^{-j\omega n}, \qquad Y(\omega) = \sum_{n=0}^{N-1} y_n e^{-j\omega n} = A(\omega)H(\omega)$$
Let:
$$E(\omega) = \frac{A(\omega)\,Y^*(\omega)}{|Y(\omega)|^2}$$
Then by direct substitution of $Y(\omega)$ we obtain:
$$E(\omega) = \frac{A(\omega)A^*(\omega)H^*(\omega)}{|A(\omega)|^2\,|H(\omega)|^2} = \frac{1}{H(\omega)}$$
If the sequence $\{a_n\}$ is sufficiently padded with zeros, the $N$-point DFT simply represents the values of $E(\omega)$ and $H(\omega)$ at $\omega = \frac{2\pi}{N}k = \omega_k$, for $k = 0, 1, \ldots, N-1$, without frequency aliasing. Therefore the use of the DFT as specified in this problem yields $E(\omega_k) = 1/H(\omega_k)$, independent of the properties of the sequence $\{a_n\}$. Since $H(\omega)$ is the spectrum of the discrete-time system, we know that this is equivalent to the folded spectrum of the continuous-time system (i.e. the system which was sampled). For further details on the use of a pseudo-random periodic sequence to perform equalization we refer to the paper by Qureshi (1985).
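A minimal sketch of this procedure, assuming an illustrative FIR channel $h = [1, 0.5, 0.2]$ and a zero-padded $\pm 1$ training block (both hypothetical choices, not from the problem statement):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
a = np.concatenate([rng.choice([-1.0, 1.0], 16), np.zeros(N - 16)])  # padded
h = np.array([1.0, 0.5, 0.2])             # channel impulse response (assumed)
y = np.convolve(a, h)[:N]                 # channel output (noise-free)

A, Y = np.fft.fft(a), np.fft.fft(y)
E = A * np.conj(Y) / np.abs(Y) ** 2       # equalizer E(w_k) as derived above
H = np.fft.fft(h, N)
print(np.allclose(E, 1 / H))              # True: E(w_k) = 1/H(w_k)
```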
Problem 11.4 :
The MSE performance index at the time instant k is
$$J(c_k) = E\left[\,\left|\sum_{n=-N}^{N} c_{k,n}\, v_{k-n} - I_k\right|^2\,\right]$$
If we define the gradient vector $G_k$ as
$$G_k = \frac{\partial J(c_k)}{2\,\partial c_k}$$
then its $l$-th element is
$$G_{k,l} = \frac{\partial J(c_k)}{2\,\partial c_{k,l}} = \frac{1}{2}\,E\left[2\left(\sum_{n=-N}^{N} c_{k,n} v_{k-n} - I_k\right) v^*_{k-l}\right] = -E\left[\varepsilon_k\, v^*_{k-l}\right]$$
where $\varepsilon_k = I_k - \sum_{n=-N}^{N} c_{k,n} v_{k-n}$ is the error at time $k$.
Thus, the vector $G_k$ is
$$G_k = -\begin{bmatrix} E[\varepsilon_k v^*_{k+N}] \\ \vdots \\ E[\varepsilon_k v^*_{k-N}] \end{bmatrix} = -E\left[\varepsilon_k V^*_k\right]$$
where $V_k$ is the vector $V_k = [v_{k+N} \cdots v_{k-N}]^T$. Since $\hat{G}_k = -\varepsilon_k V^*_k$, its expected value is
$$E[\hat{G}_k] = -E[\varepsilon_k V^*_k] = G_k$$
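i.e. the single-sample estimate is unbiased. A Monte Carlo sketch of this, under assumed statistics (three taps, real white data $v$, and the desired symbol modeled as the center sample — all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([0.1, 0.8, 0.2])        # fixed equalizer taps (N = 1, 3 taps)
trials = 100_000

G_hat = np.zeros(3)
for _ in range(trials):
    v = rng.standard_normal(3)       # real white data, so v* = v
    I = v[1]                         # desired symbol: center sample (assumed model)
    eps = I - c @ v                  # error eps_k
    G_hat += -eps * v                # single-sample gradient estimate
G_hat /= trials

# Exact gradient: Gamma c - xi, with Gamma = I (white v) and xi = (0, 1, 0)^T
G_exact = c - np.array([0.0, 1.0, 0.0])
print(G_hat.round(2), G_exact)       # the average matches the true gradient
```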
Problem 11.5 :
The tap-leakage LMS algorithm is :
$$C(n+1) = wC(n) + \Delta\varepsilon(n)V^*(n) = wC(n) + \Delta\left(\xi - \Gamma C(n)\right) = (wI - \Delta\Gamma)\,C(n) + \Delta\xi$$
Following the same diagonalization procedure as in Problem 11.2 or Section (11-1-3) of the book,
we obtain :
$$C'(n+1) = (wI - \Delta\Lambda)\,C'(n) + \Delta\xi'$$
where $\Lambda$ is the diagonal matrix containing the eigenvalues of the correlation matrix $\Gamma$. The algorithm converges if the roots of the homogeneous equation lie inside the unit circle:
$$|w - \Delta\lambda_k| < 1, \qquad k = -N, \ldots, -1, 0, 1, \ldots, N$$
and since $\Delta > 0$, the convergence criterion is:
$$\Delta < \frac{1 + w}{\lambda_{\max}}$$
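A sketch of the recursion in its deterministic (averaged) form, reusing the $\Gamma$ and $\xi$ of Problem 11.1 with assumed values $w = 0.99$ and $\Delta = 0.5$; note that the leakage slightly biases the steady-state solution away from $\Gamma^{-1}\xi$:

```python
import numpy as np

a = 0.48
Gamma = np.array([[1, a, 0], [a, 1, a], [0, a, 1]])
xi = np.array([3 / 5, 4 / 5, 0])

w, delta = 0.99, 0.5
assert delta < (1 + w) / np.linalg.eigvalsh(Gamma).max()   # convergence criterion

C = np.zeros(3)
for _ in range(500):
    C = (w * np.eye(3) - delta * Gamma) @ C + delta * xi

# Steady state solves ((1-w)/delta * I + Gamma) C = xi -- biased by the leakage
print(C)
print(np.linalg.solve(Gamma + (1 - w) / delta * np.eye(3), xi))
```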
Problem 11.6 :
The estimate of $g$ can be written as $\hat{g} = h_0 x_0 + \cdots + h_{M-1} x_{M-1} = x^T h$, where $x$, $h$ are column vectors containing the respective coefficients. Then, using the orthogonality principle, we obtain the optimum linear estimator $h$:
$$E[\varepsilon x] = 0 \;\Rightarrow\; E\left[x\left(g - x^T h\right)\right] = 0 \;\Rightarrow\; E[xg] = E\left[xx^T\right]h$$
or :
$$h_{\mathrm{opt}} = R_{xx}^{-1}\,c$$
where the $M \times M$ correlation matrix $R_{xx}$ has elements:
$$R(m,n) = E[x(m)x(n)] = E\left[g^2\right]u(m)u(n) + \sigma_w^2\,\delta_{nm} = G\,u(m)u(n) + \sigma_w^2\,\delta_{nm}$$
where we have used the fact that $g$ and $w$ are independent and that $E[g] = 0$. Also, the column vector $c = E[xg]$ has elements:
$$c(n) = E[x(n)g] = G\,u(n)$$
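A numerical sketch with assumed values for $u$, $G = E[g^2]$, and $\sigma_w^2$ (all illustrative):

```python
import numpy as np

M = 4
u = np.array([1.0, 0.8, 0.5, 0.2])   # known signal samples u(0..M-1) (assumed)
G, sigma_w2 = 2.0, 0.1               # E[g^2] and noise variance (assumed)

# R_xx(m,n) = G u(m)u(n) + sigma_w^2 delta_{mn};  c(n) = G u(n)
R = G * np.outer(u, u) + sigma_w2 * np.eye(M)
c = G * u

print(np.linalg.solve(R, c))         # h_opt = R_xx^{-1} c
```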
Problem 11.7 :
(a) The time-update equation for the parameters $\{H_k\}$ is:
$$H_k^{(n+1)} = H_k^{(n)} + \Delta\,\varepsilon^{(n)}\, y_k^{(n)}$$
where $n$ is the time index, $k$ is the filter index, and $y_k^{(n)}$ is the output of the $k$-th filter, whose transfer function is
$$\frac{1 - z^{-M}}{1 - e^{j2\pi k/M}\, z^{-1}}$$
as shown in the figure below:
[Figure: the comb filter $\frac{1}{M}\left(1 - z^{-M}\right)$ acting on the input $x(n)$, followed by a parallel bank of single-pole filters with poles at $e^{j2\pi k/M}$, $k = 0, 1, \ldots, M-1$; each branch output $y_k(n)$ is weighted by the adjustable coefficient $H_k$ and the weighted outputs are summed to form $y(n)$.]
The error $\varepsilon(n)$ is calculated as $\varepsilon(n) = I_n - y(n)$, and it is then fed back to the adaptive part of the equalizer, together with the quantities $y_k^{(n)}$, to update the equalizer parameters $H_k$.
(b) It is straightforward to prove that the transfer function of the $k$-th filter in the parallel bank has a resonant frequency at $\omega_k = 2\pi k/M$ and is zero at the resonant frequencies of the other filters, $\omega_m = 2\pi m/M$, $m \neq k$. Hence, if we choose as a test signal sinusoids whose frequencies coincide with the resonant frequencies of the tuned circuits, each coefficient $H_k$ can be adjusted independently, without any interaction from the other filters.
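The claim is easy to check numerically by evaluating the $k$-th branch's transfer function on the resonant-frequency grid ($M$ and $k$ below are illustrative):

```python
import numpy as np

M, k = 8, 3                          # illustrative values

def Hk(w):
    """k-th branch: (1 - z^{-M}) / (1 - e^{j 2 pi k/M} z^{-1}) at z = e^{jw}."""
    z = np.exp(1j * np.asarray(w))
    return (1 - z ** (-M)) / (1 - np.exp(2j * np.pi * k / M) / z)

w_others = 2 * np.pi * np.array([m for m in range(M) if m != k]) / M
print(np.abs(Hk(w_others)).round(12))      # zeros at every other resonance
print(abs(Hk(2 * np.pi * k / M + 1e-9)))   # ~ M just off its own resonance
```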
Problem 11.8 :
(a) The gradient of the performance index $J$ with respect to $h$ is $\frac{dJ}{dh} = 2h + 40$. Hence, the time-update equation becomes:
$$h_{n+1} = h_n - \Delta\,\frac{1}{2}\,(2h_n + 40) = h_n(1 - \Delta) - 20\Delta$$
This system will converge if the homogeneous part vanishes as $n \to \infty$, or equivalently if $|1 - \Delta| < 1 \Leftrightarrow 0 < \Delta < 2$.
(b) We note that $J$ has a minimum at $h = -20$, with corresponding value $J_{\min} = -372$. To illustrate the convergence of the algorithm, let's choose $\Delta = 1/2$. Then $h_{n+1} = h_n/2 - 10$ and, using induction, we can prove that:
$$h_n = \left(\frac{1}{2}\right)^n h_0 - 10\sum_{k=0}^{n-1}\left(\frac{1}{2}\right)^k$$
where $h_0$ is the initial value for $h$. Then, as $n \to \infty$, the dependence on the initial condition $h_0$ vanishes and $h_n \to -10\cdot\frac{1}{1 - 1/2} = -20$, which is the desired value. The following plot shows $J$ as a function of $n$, for $\Delta = 1/2$ and various initial values $h_0$.
[Figure: $J(n)$ versus iteration $n$, for $\Delta = 1/2$ and initial values $h_0 = 25$, $h_0 = 30$, $h_0 = 0$; all three curves converge to $J_{\min} = -372$.]
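The recursion is easy to reproduce; the sketch below assumes $J(h) = h^2 + 40h + 28$, which is consistent with $dJ/dh = 2h + 40$ and $J_{\min} = J(-20) = -372$:

```python
delta = 0.5

def J(h):
    # Consistent with dJ/dh = 2h + 40 and J(-20) = -372 (constant term assumed)
    return h ** 2 + 40 * h + 28

for h0 in (25.0, 30.0, 0.0):
    h = h0
    for _ in range(20):
        h = h * (1 - delta) - 20 * delta    # h_{n+1} = h_n(1 - D) - 20 D
    print(h0, round(h, 6), round(J(h), 6))  # h -> -20, J -> -372
```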
Problem 11.9 :
The linear estimator for $x$ can be written as $\hat{x}(n) = a_1 x(n-1) + a_2 x(n-2) = [x(n-1)\;\; x(n-2)]\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$.
Using the orthogonality principle we obtain :
$$E\left[\begin{bmatrix} x(n-1) \\ x(n-2) \end{bmatrix}\varepsilon\right] = 0 \;\Rightarrow\; E\left[\begin{bmatrix} x(n-1) \\ x(n-2) \end{bmatrix}\left(x(n) - [x(n-1)\;\; x(n-2)]\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}\right)\right] = 0$$
or :
$$\begin{bmatrix} \gamma_{xx}(1) \\ \gamma_{xx}(2) \end{bmatrix} = \begin{bmatrix} \gamma_{xx}(0) & \gamma_{xx}(1) \\ \gamma_{xx}(1) & \gamma_{xx}(0) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 1 & b \\ b & 1 \end{bmatrix}^{-1}\begin{bmatrix} b \\ b^2 \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix}$$
This is a well-known fact from statistical signal processing theory: a first-order AR process (which has autocorrelation function $\gamma(m) = b^{|m|}$) has a first-order optimum (MSE) linear estimator: $\hat{x}_n = b\,x_{n-1}$.
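A numerical sketch of the normal equations above, with an illustrative value $b = 0.6$:

```python
import numpy as np

b = 0.6                                    # AR(1) parameter (illustrative)
gamma = lambda m: b ** abs(m)              # gamma_xx(m) = b^{|m|}

R = np.array([[gamma(0), gamma(1)],
              [gamma(1), gamma(0)]])
r = np.array([gamma(1), gamma(2)])

print(np.linalg.solve(R, r))               # [a1, a2] = [0.6, 0.0] = [b, 0]
```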
Problem 11.10 :
In Probl. 11.9 we found that the optimum (MSE) linear predictor for $x(n)$ is $\hat{x}(n) = b\,x(n-1)$. Since it is a first-order predictor, the corresponding lattice implementation consists of a single stage with reflection coefficient $a_{11}$. This coefficient can be found using (11-4-28):
$$a_{11} = \frac{\gamma_{xx}(1)}{\gamma_{xx}(0)} = b$$
Then we verify that the residual $f_1(n)$ is indeed the first-order prediction error:
$$f_1(n) = x(n) - b\,x(n-1) = x(n) - \hat{x}(n) = e(n)$$
Problem 11.11 :
The system $C(z) = \frac{1}{1 - 0.9z^{-1}}$ has an impulse response $c(n) = (0.9)^n$, $n \geq 0$. Then the input $y(n)$ to the adaptive FIR filter is:
$$y(n) = \sum_{k=0}^{\infty} c(k)\,x(n-k) + w(n)$$
Since the sequence $\{x(n)\}$ corresponds to the information sequence transmitted through a channel, we will assume that it is uncorrelated with zero mean and unit variance. Then the optimum (according to the MSE criterion) estimator of $x(n)$ is $\hat{x}(n) = [y(n)\;\; y(n-1)]\begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$.
Using the orthogonality criterion we obtain the optimum coefficients $\{b_i\}$:
$$E\left[\begin{bmatrix} y(n) \\ y(n-1) \end{bmatrix}\varepsilon\right] = 0 \;\Rightarrow\; E\left[\begin{bmatrix} y(n) \\ y(n-1) \end{bmatrix}\left(x(n) - [y(n)\;\; y(n-1)]\begin{bmatrix} b_0 \\ b_1 \end{bmatrix}\right)\right] = 0$$
$$\Rightarrow\; \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} = \left(E\begin{bmatrix} y(n)y(n) & y(n)y(n-1) \\ y(n-1)y(n) & y(n-1)y(n-1) \end{bmatrix}\right)^{-1} E\begin{bmatrix} y(n)x(n) \\ y(n-1)x(n) \end{bmatrix}$$
The various correlations are as follows :
$$E[y(n)x(n)] = E\left[\sum_{k=0}^{\infty} c(k)x(n-k)x(n) + w(n)x(n)\right] = c(0) = 1$$
where we have used the fact that $E[x(n-k)x(n)] = \delta_k$ and that $\{w(n)\}$, $\{x(n)\}$ are independent. Similarly:
$$E[y(n-1)x(n)] = E\left[\sum_{k=0}^{\infty} c(k)x(n-k-1)x(n) + w(n-1)x(n)\right] = 0$$
$$E[y(n)y(n)] = E\left[\sum_{k=0}^{\infty}\sum_{j=0}^{\infty} c(k)c(j)x(n-k)x(n-j)\right] + \sigma_w^2 = \sum_{j=0}^{\infty} c^2(j) + \sigma_w^2 = \sum_{j=0}^{\infty}(0.9)^{2j} + \sigma_w^2 = \frac{1}{1 - 0.81} + \sigma_w^2 = \frac{1}{0.19} + \sigma_w^2$$
and :
$$E[y(n)y(n-1)] = E\left[\sum_{k=0}^{\infty}\sum_{j=0}^{\infty} c(k)c(j)x(n-k)x(n-1-j)\right] = \sum_{j=0}^{\infty} c(j)c(j+1) = \sum_{j=0}^{\infty}(0.9)^{2j+1} = 0.9\,\frac{1}{1 - 0.81} = \frac{0.9}{0.19}$$
Hence :
$$\begin{bmatrix} b_0 \\ b_1 \end{bmatrix} = \begin{bmatrix} \frac{1}{0.19} + 0.1 & \frac{0.9}{0.19} \\ \frac{0.9}{0.19} & \frac{1}{0.19} + 0.1 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.85 \\ -0.75 \end{bmatrix}$$
It is interesting to note that in the absence of noise (i.e. when the term $\sigma_w^2 = 0.1$ is missing from the diagonal of the correlation matrix), the optimum coefficients are $B(z) = b_0 + b_1 z^{-1} = 1 - 0.9z^{-1}$, i.e. the equalizer function is the inverse of the channel function (in this case the MSE criterion coincides with the zero-forcing criterion). However, we see that, in the presence of noise, the MSE criterion gives a slightly different result from the inverse channel function, in order to prevent excessive noise enhancement.
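The sketch below reproduces both results directly from the closed-form correlations derived above:

```python
import numpy as np

sigma_w2 = 0.1
Ryy = np.array([[1 / 0.19 + sigma_w2, 0.9 / 0.19],
                [0.9 / 0.19, 1 / 0.19 + sigma_w2]])
ryx = np.array([1.0, 0.0])

print(np.linalg.solve(Ryy, ryx))                          # [ 0.85 -0.75]
print(np.linalg.solve(Ryy - sigma_w2 * np.eye(2), ryx))   # [ 1.  -0.9] (no noise)
```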
Problem 11.12 :
(a) If we denote by $V$ the matrix whose columns are the eigenvectors $\{v_i\}$:
$$V = [v_1\,|\,v_2\,|\,\cdots\,|\,v_N]$$
then its conjugate transpose matrix is :
$$V^t = \begin{bmatrix} v_1^t \\ v_2^t \\ \vdots \\ v_N^t \end{bmatrix}$$
and $\Gamma$ can be written as:
$$\Gamma = \sum_{i=1}^{N} \lambda_i\, v_i v_i^t = V\Lambda V^t$$
where $\Lambda$ is a diagonal matrix containing the eigenvalues of $\Gamma$. Then, if we name $X = V\Lambda^{1/2}V^t$, we see that:
$$XX = V\Lambda^{1/2}V^t\, V\Lambda^{1/2}V^t = V\Lambda^{1/2}\Lambda^{1/2}V^t = V\Lambda V^t = \Gamma$$
where we have used the fact that the matrix $V$ is unitary: $VV^t = I$. Hence, since $XX = \Gamma$, this shows that the matrix $X = V\Lambda^{1/2}V^t = \sum_{i=1}^{N}\lambda_i^{1/2}\, v_i v_i^t$ is indeed the square root of $\Gamma$.
(b) To compute $\Gamma^{1/2}$, we first determine $V$ and $\Lambda$ (i.e. the eigenvalues and eigenvectors of the correlation matrix). Then:
$$\Gamma^{1/2} = \sum_{i=1}^{N} \lambda_i^{1/2}\, v_i v_i^t = V\Lambda^{1/2}V^t$$
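A sketch of this computation with numpy's `eigh` (the example $\Gamma$ is an arbitrary symmetric positive-definite matrix, chosen only for illustration):

```python
import numpy as np

Gamma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])        # example correlation matrix (assumed)

lam, V = np.linalg.eigh(Gamma)             # Gamma = V diag(lam) V^t
X = V @ np.diag(np.sqrt(lam)) @ V.T        # Gamma^{1/2} = V Lambda^{1/2} V^t

print(np.allclose(X @ X, Gamma))           # True
```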