
Lecture 11:

Kalman Filters
CS 344R: Robotics
Benjamin Kuipers
Up To Higher Dimensions
• Our previous Kalman Filter discussion was
of a simple one-dimensional model.
• Now we go up to higher dimensions:
– State vector: $x \in \Re^n$
– Sense vector: $z \in \Re^m$
– Motor vector: $u \in \Re^l$
• First, a little statistics.
Expectations
• Let x be a random variable.
• The expected value E[x] is the mean:
$E[x] = \int x \, p(x) \, dx \;\approx\; \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$
– The probability-weighted mean of all possible
values. The sample mean approaches it.
• The expected value of a vector x is taken component by component:
  $E[\mathbf{x}] = \bar{\mathbf{x}} = [\bar{x}_1, \ldots, \bar{x}_n]^T$
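As a quick illustration (not from the lecture), a numpy sketch showing the sample mean approaching the expected value; mu, sigma, and N are arbitrary values chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# mu, sigma, and N are arbitrary illustrative values.
mu, sigma, N = 2.0, 1.5, 100_000
x = rng.normal(mu, sigma, size=N)

# The sample mean (1/N) * sum(x_i) approximates E[x] = mu.
print(x.mean())  # prints a value close to 2.0
```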
Variance and Covariance
• The variance is $E[(x - E[x])^2]$:
  $\sigma^2 = E[(x - \bar{x})^2] = \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2$
• The covariance matrix is $E[(\mathbf{x} - E[\mathbf{x}])(\mathbf{x} - E[\mathbf{x}])^T]$:
  $C_{ij} = \frac{1}{N}\sum_{k=1}^{N} (x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)$

– Divide by N1 to make the sample variance an


unbiased estimator for the population variance.
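A small numpy sketch (my addition) that computes the sample covariance matrix directly from the $C_{ij}$ formula above and checks it against numpy's built-in; the data are arbitrary illustrative draws:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
# Two illustrative variables stacked as rows: shape (2, N).
x = rng.normal(0.0, 1.0, size=(2, N))

# Sample covariance from the definition:
# C_ij = (1/N) * sum_k (x_ik - mean_i)(x_jk - mean_j)
d = x - x.mean(axis=1, keepdims=True)   # subtract each row's mean
C = (d @ d.T) / N

# numpy's built-in agrees (bias=True also divides by N).
print(np.allclose(C, np.cov(x, bias=True)))  # True
```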
Biased and Unbiased Estimators
• Strictly speaking, the sample variance
$\sigma^2 = E[(x - \bar{x})^2] = \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2$
is a biased estimate of the population
variance. An unbiased estimator is:
$s^2 = \frac{1}{N-1}\sum_{i=1}^{N} (x_i - \bar{x})^2$
• But: “If the difference between N and N−1 ever matters to you, then you are probably up to no good anyway …” [Press, et al.]
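In numpy the two estimators differ only in the ddof argument; a quick illustration (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 3.0, size=50)

biased   = np.var(x)           # divides by N   (ddof=0, the default)
unbiased = np.var(x, ddof=1)   # divides by N-1 (sample variance s^2)
print(biased, unbiased)
```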

Covariance Matrix
• Along the diagonal, $C_{ii}$ are variances.
• Off-diagonal $C_{ij}$ are essentially correlations.

$C = \begin{bmatrix} C_{1,1} = \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\ C_{2,1} & C_{2,2} = \sigma_2^2 & & \vdots \\ \vdots & & \ddots & \\ C_{N,1} & \cdots & & C_{N,N} = \sigma_N^2 \end{bmatrix}$
Independent Variation
• x and y are Gaussian random variables (N=100)
• Generated with $\sigma_x = 1$, $\sigma_y = 3$
• Covariance matrix:
  $C_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}$
Dependent Variation
• c and d are random variables.
• Generated with $c = x + y$, $d = x - y$
• Covariance matrix:
  $C_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}$
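A sketch reproducing this pair of experiments (my construction; the random seed, and hence the exact numbers, will differ from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
x = rng.normal(0.0, 1.0, size=N)   # sigma_x = 1
y = rng.normal(0.0, 3.0, size=N)   # sigma_y = 3

# Independent variables: off-diagonal terms near zero.
print(np.cov(x, y, bias=True))

# Dependent variables built from x and y:
c, d = x + y, x - y
# Cov(c, d) = Var(x) - Var(y) = 1 - 9 = -8 in expectation,
# so strong off-diagonal structure appears.
print(np.cov(c, d, bias=True))
```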
Discrete Kalman Filter
• Estimate the state $x \in \Re^n$ of a linear stochastic difference equation
  $x_k = A x_{k-1} + B u_k + w_{k-1}$
– process noise w is drawn from N(0,Q), with
covariance matrix Q.
• with a measurement $z \in \Re^m$:
  $z_k = H x_k + v_k$
– measurement noise v is drawn from N(0,R), with
covariance matrix R.
• A, Q are n×n. B is n×l. R is m×m. H is m×n.
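To make the dimensions concrete, here is a hypothetical 1-D constant-velocity model, with position and velocity as state and position as the measurement; every numeric value below is an illustrative assumption, not from the lecture:

```python
import numpy as np

dt = 0.1                      # time step (assumed)

# State x = [position, velocity]^T, so n = 2.
A = np.array([[1.0, dt],
              [0.0, 1.0]])    # n x n state transition
B = np.array([[0.0],
              [dt]])          # n x l, control is an acceleration (l = 1)
H = np.array([[1.0, 0.0]])    # m x n, we only sense position (m = 1)

Q = 0.01 * np.eye(2)          # n x n process-noise covariance (assumed)
R = np.array([[0.25]])        # m x m measurement-noise covariance (assumed)
```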
Estimates and Errors
• $\hat{x}_k \in \Re^n$ is the estimated state at time-step k.
• $\hat{x}_k^- \in \Re^n$ is the estimate after prediction, before observation.
• Errors:
  $e_k^- = x_k - \hat{x}_k^-$
  $e_k = x_k - \hat{x}_k$
• Error covariance matrices:
  $P_k^- = E[e_k^- (e_k^-)^T]$
  $P_k = E[e_k e_k^T]$
• The Kalman Filter’s task is to update $\hat{x}_k$ and $P_k$.
Time Update (Predictor)
• Update expected value of x:
  $\hat{x}_k^- = A \hat{x}_{k-1} + B u_k$
• Update error covariance matrix P:
  $P_k^- = A P_{k-1} A^T + Q$
• Previous statements were simplified versions of the same idea:
  $\hat{x}^-(t_3) = \hat{x}(t_2) + u[t_3 - t_2]$
  $\sigma^{2-}(t_3) = \sigma^2(t_2) + \sigma_\varepsilon^2 [t_3 - t_2]$
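A sketch of this predictor step in code (mine, following the two equations above; it assumes the numpy matrices defined in the earlier model sketch):

```python
def time_update(x_hat, P, u, A, B, Q):
    """Predictor: propagate the state estimate and its covariance."""
    x_pred = A @ x_hat + B @ u     # x^-_k = A x_{k-1} + B u_k
    P_pred = A @ P @ A.T + Q       # P^-_k = A P_{k-1} A^T + Q
    return x_pred, P_pred
```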
Measurement Update (Corrector)
• Update expected value:
  $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$
  – the innovation is $z_k - H \hat{x}_k^-$
• Update error covariance matrix:
  $P_k = (I - K_k H) P_k^-$
• Compare with the previous form:
  $\hat{x}(t_3) = \hat{x}^-(t_3) + K(t_3)(z_3 - \hat{x}^-(t_3))$
  $\sigma^2(t_3) = (1 - K(t_3))\, \sigma^{2-}(t_3)$
The Kalman Gain
• The optimal Kalman gain $K_k$ is
  $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} = \dfrac{P_k^- H^T}{H P_k^- H^T + R}$
• Compare with the previous form:
  $K(t_3) = \dfrac{\sigma^{2-}(t_3)}{\sigma^{2-}(t_3) + \sigma_3^2}$
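A sketch (mine) of the gain computation and corrector step together, completing the cycle begun by time_update above:

```python
import numpy as np

def measurement_update(x_pred, P_pred, z, H, R):
    """Corrector: fold the measurement z into the prediction."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
    innovation = z - H @ x_pred              # z_k - H x^-_k
    x_hat = x_pred + K @ innovation
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_hat, P
```

Calling time_update and then measurement_update once per time-step gives a complete linear Kalman filter loop.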
Extended Kalman Filter
• Suppose the state-evolution and
measurement equations are non-linear:
$x_k = f(x_{k-1}, u_k) + w_{k-1}$
$z_k = h(x_k) + v_k$
– process noise w is drawn from N(0,Q), with
covariance matrix Q.
– measurement noise v is drawn from N(0,R),
with covariance matrix R.
The Jacobian Matrix
• For a scalar function y = f(x),
  $\Delta y = f'(x)\, \Delta x$
• For a vector function y = f(x),
  $\Delta y = J \Delta x = \begin{bmatrix} \Delta y_1 \\ \vdots \\ \Delta y_n \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\ \vdots & & \vdots \\ \frac{\partial f_n}{\partial x_1}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x) \end{bmatrix} \cdot \begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}$
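When f or h is awkward to differentiate by hand, the Jacobian can be approximated by finite differences; a sketch of that idea (my addition, not part of the lecture):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate J_ij = df_i/dx_j by central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J
```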
Linearize the Non-Linear
• Let A be the Jacobian of f with respect to x:
  $A_{ij} = \frac{\partial f_i}{\partial x_j}(x_{k-1}, u_k)$
• Let H be the Jacobian of h with respect to x:
  $H_{ij} = \frac{\partial h_i}{\partial x_j}(x_k)$
• Then the Kalman Filter equations are
almost the same as before!
EKF Update Equations
• Predictor step:
  $\hat{x}_k^- = f(\hat{x}_{k-1}, u_k)$
  $P_k^- = A P_{k-1} A^T + Q$
• Kalman gain:
  $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$
• Corrector step:
  $\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-))$
  $P_k = (I - K_k H) P_k^-$
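Putting the pieces together, a minimal EKF cycle (my sketch; it reuses the numerical_jacobian helper above, though analytic Jacobians would normally be preferred for speed and accuracy):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, Q, R):
    """One EKF cycle: linearize, predict, then correct."""
    A = numerical_jacobian(lambda x: f(x, u), x_hat)   # df/dx at the estimate
    x_pred = f(x_hat, u)                               # non-linear prediction
    P_pred = A @ P @ A.T + Q

    H = numerical_jacobian(h, x_pred)                  # dh/dx at the prediction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ (z - h(x_pred))               # innovation uses h, not H
    P = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_hat, P
```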
“Catch The Ball” Assignment
• State evolution is linear (almost).
– What is A?
– B=0.
• Sensor equation is non-linear.
– What is y=h(x)?
– What is the Jacobian H(x) of h with respect to x?
• Errors are treated as additive. Is this OK?
– What are the covariance matrices Q and R?
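Since these are the assignment's own questions, the following is only one hypothetical answer for orientation: for a ball state $[x, y, v_x, v_y]^T$ in the vertical plane, a common discretization treats gravity as a constant additive term, which is the "almost" in "almost linear":

```python
import numpy as np

dt = 0.05            # frame interval: an assumed value
g = 9.8              # gravitational acceleration, m/s^2

# Hypothetical state [x, y, vx, vy]^T: constant velocity plus gravity.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

# Gravity enters as a constant offset c, so x_k = A x_{k-1} + c,
# which is affine rather than strictly linear.
c = np.array([0.0, -0.5 * g * dt**2, 0.0, -g * dt])
```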
TTD

• Intuitive explanations for $A P A^T$ and $H P H^T$ in the update equations.
