
The Unscented Kalman Filter

(UKF)

Chapter 7a
Rudolf E. Kalman

Hao Liu
Ke Zhang

κ Introduction to Kalman filtering


TRAIL Course on Kalman filtering
Contents of Chapter 7
Part A:
• 7.1 Introduction
• 7.2 Optimal Recursive Estimation & the EKF
• 7.3 The Unscented Kalman Filter
• 7.4 UKF Parameter Estimation
• 7.5 UKF Dual Estimation
Part B:
• 7.6 The Unscented Particle Filter
(Presented by another group next week)

Wednesday, 1 March, 2006

Main Issues in Chapter 7A
Why the UKF is needed
What the UKF can do
Who contributed to the UKF
How the UKF works

The UKF algorithms

The applications of the UKF

Outline
Introduction & Methodology (presented by Ke)
 Why the UKF is needed
 What the UKF can do
 Who contributed to the UKF
 How the UKF works
Algorithms & Applications (presented by Hao)
 Algorithms
  UKF state estimation
  UKF parameter estimation
  UKF dual estimation
 Applications
  Two examples realized in MATLAB

Quick Review of the Previous Presentations

Kalman Filter (KF) – Chapter 1
 Linear dynamical system (linear evolution functions)
  x_{k+1} = F_{k+1,k} x_k + w_k
  y_k = H_k x_k + v_k
Extended Kalman Filter (EKF) – Chapter 1
 Non-linear dynamical system (non-linear evolution functions)
  x_{k+1} = f(k, x_k) + w_k
  y_k = h(k, x_k) + v_k

Extended Kalman filter (Chapter 1)
• Consider the following non-linear system:
  x_{k+1} = f_k(x_k) + G_k w_k
  z_k = m_k(x_k) + v_k
• Assume that we can somehow determine a reference trajectory x̄_k
• Then:
  x_{k+1} = f_k(x_k) − f_k(x̄_k) + f_k(x̄_k) + G_k w_k
          ≈ F_k x_k − F_k x̄_k + f_k(x̄_k) + G_k w_k
• where F_k = [ ∂(f_k)_i / ∂(x_k)_j ] evaluated at x̄_k. The EKF uses first-order terms of the Taylor series expansion of the nonlinear functions.

Extended Kalman filter (Chapter 1)
• For the measurement equation, we have:
  z_k ≈ M_k x_k − M_k x̄_k + m_k(x̄_k) + v_k
• We can then apply the standard Kalman filter to the linearized model
• How to choose the reference trajectory?
• The idea of the extended Kalman filter is to re-linearize the model around the most recent state estimate, i.e.
  x̄_{k+1} = f_k(x̄_k),   x̄_k = x̂_{k|k}

Extended Kalman Filter (Chapter 1)
• Time propagation (prediction):
  x̂_{k|k−1} = f_{k−1}( x̂_{k−1|k−1} )
  P_{k|k−1} = F_{k−1} P_{k−1|k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T,   where F_{k−1} = ∂f_{k−1}/∂x evaluated at x = x̂_{k−1|k−1}
• Measurement adaptation (correction):
  x̂_{k|k} = x̂_{k|k−1} + K_k ( z_k − m_k( x̂_{k|k−1} ) )
  P_{k|k} = ( I − K_k M_k ) P_{k|k−1} ( I − K_k M_k )^T + K_k R_k K_k^T
• Kalman gain:
  K_k = P_{k|k−1} M_k^T ( M_k P_{k|k−1} M_k^T + R_k )^{−1},   where M_k = ∂m_k/∂x evaluated at x = x̂_{k|k−1}

Use of the Extended Kalman Filter
The Extended Kalman Filter (EKF) has become a standard
technique used in a number of nonlinear estimation and
machine learning applications:
State estimation (Chapter 1)
estimating the state of a nonlinear dynamic system
Parameter estimation (Chapter 2)
estimating parameters for nonlinear system identification
e.g., learning the weights of a neural network
Dual estimation (Chapter 5)
both states and parameters are estimated simultaneously
e.g., the Expectation Maximization (EM) algorithm

Comments on EKF
The Extended Kalman Filter (EKF) has long been the de-facto
standard for nonlinear state-space estimation, thanks to its
 simplicity, robustness, and suitability for real-time implementations

A Basic Flaw of the Extended Kalman Filter
The state distribution is approximated by a Gaussian random
variable (GRV), which is then propagated analytically through a
"first-order" linearization of a nonlinear system.
These approximations can introduce large errors in the true
posterior mean and covariance of the transformed (Gaussian)
random variable, especially when the models are highly non-
linear, which may lead to
 suboptimal performance
 sometimes divergence of the filter
The local linearity assumption breaks down when the higher-order
terms become significant.

Introduction to the Alternative Approach:
the Unscented Kalman Filter (UKF)
The UKF is an improvement on the EKF
A central and vital operation performed in the Kalman Filter is the
propagation of a Gaussian random variable (GRV) through the system
dynamics
EKF
the state distribution is approximated by a GRV
which is then propagated analytically through the first-order linearization
of the nonlinear system.
UKF
The UKF addresses the EKF's problem by using a deterministic sampling
approach
This filter claims both higher accuracy and robustness for nonlinear
models
Remarkably, the computational complexity of the UKF is of the same
order as that of the EKF

Idea of the UKF
The state distribution is again approximated by a
GRV, but is represented using a minimal set of
carefully chosen sample points.
These sample points completely capture the true mean and
covariance of the GRV,
and, when propagated through the true nonlinear system,
capture the posterior mean and covariance
accurately to the 3rd order (Taylor series
expansion) for any nonlinearity

The development of the UKF
S. J. Julier and J. K. Uhlmann, A New Extension of the Kalman
Filter to Nonlinear Systems, 1997
 demonstrated the substantial performance gains of the UKF in the
 context of state estimation for nonlinear control
 Machine learning problems were not considered
Eric A. Wan and Rudolph van der Merwe, The Unscented
Kalman Filter for Nonlinear Estimation, 2000
 extended the use of the UKF to a broader class of nonlinear
 estimation problems, including
  nonlinear system identification
  training of neural networks
  and dual estimation problems

What is the UKF
The UKF is a straightforward application of the unscented
transformation

Unscented Transformation
The unscented transformation (UT) is a method for calculating the
statistics of a random variable which undergoes a nonlinear
transformation
It builds on the principle that it is easier to approximate a
probability distribution than an arbitrary nonlinear function

Unscented Transformation

Example of the UT for mean and covariance propagation

Only five sigma points are required


Definition of Sigma Points
Assume x has mean x̄ and covariance P_x.
A set of 2L + 1 weighted samples or sigma points is chosen as follows:
  χ_0 = x̄,   i = 0
  χ_i = x̄ + ( √((L + λ) P_x) )_i,   i = 1, …, L
  χ_i = x̄ − ( √((L + λ) P_x) )_{i−L},   i = L + 1, …, 2L
where
  λ = α²(L + κ) − L is a scaling parameter.
The constant α determines the spread of the sigma points around x̄ and is
usually set to a small positive value (e.g., 10⁻⁴ ≤ α ≤ 1).
The constant κ is a secondary scaling parameter, usually set to 3 − L.
β is used to incorporate prior knowledge of the distribution of x (for Gaussian
distributions, β = 2 is optimal).
( √((L + λ) P_x) )_i is the i-th column of the matrix square root of (L + λ) P_x.
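A minimal NumPy sketch of this construction, using the standard UT weights. The 2-D mean and covariance below are made-up illustration values; with these weights, the weighted sample mean and covariance of the sigma points reproduce x̄ and P_x exactly:

```python
import numpy as np

def sigma_points(x_bar, Px, alpha=1e-3, kappa=0.0, beta=2.0):
    """Return the 2L+1 sigma points and their mean/covariance weights."""
    L = x_bar.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * Px)       # one valid matrix square root
    chi = np.vstack([x_bar,
                     x_bar + S.T,                # rows built from columns of S
                     x_bar - S.T])
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    return chi, Wm, Wc

x_bar = np.array([1.0, -2.0])                    # illustrative mean
Px = np.array([[2.0, 0.5], [0.5, 1.0]])          # illustrative covariance
chi, Wm, Wc = sigma_points(x_bar, Px)

mean = Wm @ chi                                  # weighted sample mean
cov = (chi - mean).T @ (Wc[:, None] * (chi - mean))
```

Note that for L = 2 only 2L + 1 = 5 points are needed, matching the earlier slide.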

Propagation of Sigma Points

Each sigma point is propagated through the nonlinear function
  Y_i = f( χ_i ),   i = 0, …, 2L
and the estimated mean and covariance of y are computed as follows:
  ȳ ≈ Σ_{i=0}^{2L} W_i^{(m)} Y_i
  P_y ≈ Σ_{i=0}^{2L} W_i^{(c)} ( Y_i − ȳ )( Y_i − ȳ )^T
with weights
  W_0^{(m)} = λ / (L + λ)
  W_0^{(c)} = λ / (L + λ) + (1 − α² + β)
  W_i^{(m)} = W_i^{(c)} = 1 / ( 2(L + λ) ),   i = 1, …, 2L
These estimates of the mean and covariance are accurate to the
second order of the Taylor series expansion of f(x) for any
nonlinear function.
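Combining the sigma points and weights gives the full unscented transformation. The sketch below uses illustrative parameter defaults; as a sanity check, for a linear function the UT mean and covariance are exact:

```python
import numpy as np

def unscented_transform(x_bar, Px, f, alpha=1.0, kappa=0.0, beta=2.0):
    """Propagate mean x_bar and covariance Px through f via the UT."""
    L = x_bar.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * Px)
    chi = np.vstack([x_bar, x_bar + S.T, x_bar - S.T])   # 2L+1 sigma points
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(c) for c in chi])                    # Y_i = f(chi_i)
    y_bar = Wm @ Y                                       # estimated mean
    d = Y - y_bar
    Py = d.T @ (Wc[:, None] * d)                         # estimated covariance
    return y_bar, Py

# Sanity check with an arbitrary *linear* f, where the UT must be exact:
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([0.5, -0.5])
x_bar = np.array([1.0, 2.0])
Px = np.array([[1.0, 0.2], [0.2, 0.5]])
y_bar, Py = unscented_transform(x_bar, Px, lambda x: A @ x + b)
```

For a genuinely nonlinear `f` the same call returns the second-order-accurate estimates described above.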

What the UKF can do
State estimation of a discrete-time nonlinear dynamic system:
  x_{k+1} = F(x_k, v_k)
  y_k = H(x_k, n_k)
Parameter estimation in a nonlinear mapping y_k = G(x_k, w):
  w_k = w_{k−1} + u_k
  y_k = G(x_k, w_k) + e_k
Dual estimation of a discrete-time nonlinear dynamic system:
  x_{k+1} = F(x_k, v_k, w)
  y_k = H(x_k, n_k, w)

Advantages of the Unscented Kalman Filter
 Approximates the distribution rather than the nonlinearity
 Accurate to at least the 2nd order (3rd for Gaussian inputs)
 No Jacobians or Hessians are calculated; same computational complexity as the EKF
 Efficient "sampling" approach
 Weights β and α can be modified to capture higher-order statistics

UKF Algorithm
• UKF Algorithm
• State estimation
• Parameter estimation
• Dual UKF
• An interesting topic
• Questions for discussion

Step 1 of UKF for state estimation
• Initialize with
 the original state (mean and covariance)
 the process and measurement noise covariances

Step 2 of UKF for state estimation

Weighted State Weighted Output

Step 2 of UKF for state estimation
• Weight covariance with respect to states

Step 3 of UKF for state estimation
• Weight covariance with respect to outputs

• Weight covariance with respect to states and outputs

Step 4 of UKF for state estimation
• Update state and state covariance
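The four steps can be assembled into a compact additive-noise UKF. The scalar system, noise levels, and run length below are illustrative assumptions, not the slides' example:

```python
import numpy as np

def ut_points(x, P, lam):
    L = x.size
    S = np.linalg.cholesky((L + lam) * P)
    return np.vstack([x, x + S.T, x - S.T])

def ukf_step(x, P, z, f, h, Q, R, alpha=1.0, beta=2.0, kappa=0.0):
    """One predict/update cycle of the additive-noise UKF."""
    L = x.size
    lam = alpha**2 * (L + kappa) - L
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    # Step 2: propagate sigma points through the process model
    X = np.array([f(c) for c in ut_points(x, P, lam)])
    x_pred = Wm @ X
    dX = X - x_pred
    P_pred = dX.T @ (Wc[:, None] * dX) + Q
    # Step 3: propagate through the measurement model, form covariances
    Xp = ut_points(x_pred, P_pred, lam)        # redraw around the prediction
    Z = np.array([h(c) for c in Xp])
    z_pred = Wm @ Z
    dZ = Z - z_pred
    Pzz = dZ.T @ (Wc[:, None] * dZ) + R        # output covariance
    Pxz = (Xp - x_pred).T @ (Wc[:, None] * dZ) # state-output cross covariance
    # Step 4: Kalman gain, state and covariance update
    K = Pxz @ np.linalg.inv(Pzz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new

f = lambda x: 0.5 * x + 2.0 * np.tanh(x)       # hypothetical process model
h = lambda x: x                                 # linear measurement for simplicity
Q = np.array([[1e-4]]); R = np.array([[1e-6]])
x_true = np.array([1.0]); x_hat = np.array([0.0]); P = np.array([[1.0]])
for _ in range(10):
    x_true = f(x_true)                          # noiseless truth for the sketch
    x_hat, P = ukf_step(x_hat, P, h(x_true), f, h, Q, R)
```

No Jacobian of `f` or `h` is needed anywhere, which is the practical payoff over the EKF sketch earlier.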

Implementation Variations
• UKF for the additive (zero-mean) noise case

Example

  [ x1(k) ]   [  0.10  0.25 ] [ x1(k−1) ]   [ 0.25  0.15 ] [ u1(k−1) ]
  [ x2(k) ] = [ −0.25  0.10 ] [ x2(k−1) ] + [ 0.15  0.10 ] [ u2(k−1) ]

  y(k) = [ 0.5  0.5 ] [ x1(k) ; x2(k) ]
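A short simulation of this system. The constant input sequence is made up for illustration, and the minus sign on the (2,1) entry of the state matrix is reconstructed from the garbled slide, so treat it as an assumption:

```python
import numpy as np

# State-space matrices as reconstructed from the slide (the minus sign on
# A[1, 0] is an assumption; the scanned slide is ambiguous there).
A = np.array([[0.10, 0.25],
              [-0.25, 0.10]])
B = np.array([[0.25, 0.15],
              [0.15, 0.10]])
C = np.array([0.5, 0.5])

x = np.zeros(2)                 # zero initial state
u = np.array([1.0, 1.0])        # constant input, purely illustrative
ys = []
for k in range(50):
    x = A @ x + B @ u           # state update
    ys.append(C @ x)            # scalar output y(k)
```

With this reconstruction the spectral radius of A is about 0.27, so the simulated output settles quickly.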
Inputs and Outputs
Figure: input profile (u1, u2), state profile (x1, x2), and output (y) over 50 time steps.
Scenario 1
• UKF: process noise covariance Qh=[1 0;0 2]; measurement noise covariance Rh=0.2;
• EKF: process noise covariance EQ=[1 0;0 2]; measurement noise covariance ER=0.2;

Scenario 2

• UKF: process noise covariance Qh=[10 0;0 20]; measurement noise covariance Rh=0.2;
• EKF: process noise covariance EQ=[10 0;0 20]; measurement noise covariance ER=0.2;

Wednesday, 1 March, 2006 34

κ Introduction to Kalman filtering


TRAIL Course on Kalman filtering
Scenario 3

• UKF: process noise covariance Qh=[100 0;0 200]; measurement noise covariance Rh=0.2;
• EKF: process noise covariance EQ=[100 0;0 200]; measurement noise covariance ER=0.2;

Results of Scenario 1
Figure: performance of UKF and EKF (real value, prediction by UKF, prediction by EKF) and absolute error (UKF vs. EKF) over 50 time steps.

Results of Scenario 2
Figure: performance of UKF and EKF (real value, prediction by UKF, prediction by EKF) and absolute error (UKF vs. EKF) over 50 time steps.

Results of Scenario 3
Figure: performance of UKF and EKF (real value, prediction by UKF, prediction by EKF) and absolute error (UKF vs. EKF) over 50 time steps.
Step 1 of UKF for parameter estimation
• Initialize with

• Calculation of sigma points

Step 2 of UKF for parameter estimation

Step 3 of UKF for parameter estimation
• Weight covariance with respect to outputs

• Weight covariance with respect to states and outputs

Step 4 of UKF for parameter estimation

• Update state and state covariance
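As a sketch of the parameter-estimation form above (random-walk parameter model, observation through a mapping G): the scalar weight `w`, the mapping `G`, the regressor sequence, and the noiseless data below are all illustrative assumptions:

```python
import numpy as np

# Parameter model:  w_k = w_{k-1} + u_k  (random walk),  y_k = G(x_k, w_k) + e_k
G = lambda x, w: w * x                 # hypothetical mapping, linear in w here
w_true = 2.0                           # parameter to recover (illustrative)

w_hat, P = 0.0, 1.0                    # initial guess and covariance
Ru, Re = 1e-6, 1e-6                    # "process" and observation noise variances
alpha, beta, kappa, L = 1.0, 2.0, 0.0, 1
lam = alpha**2 * (L + kappa) - L       # = 0 with these choices
Wm = np.array([lam / (L + lam), 0.5 / (L + lam), 0.5 / (L + lam)])
Wc = Wm.copy(); Wc[0] += 1 - alpha**2 + beta

for k in range(1, 11):
    x_k = float(k)                     # deterministic regressor sequence
    y_k = G(x_k, w_true)               # noiseless observation
    P = P + Ru                         # time update of the parameter covariance
    s = np.sqrt((L + lam) * P)
    W = np.array([w_hat, w_hat + s, w_hat - s])   # sigma points over w
    Y = G(x_k, W)                      # propagate through the mapping
    y_pred = Wm @ Y
    Pyy = Wc @ (Y - y_pred) ** 2 + Re
    Pwy = Wc @ ((W - w_hat) * (Y - y_pred))
    K = Pwy / Pyy                      # gain
    w_hat = w_hat + K * (y_k - y_pred)
    P = P - K * Pyy * K
```

The same loop works unchanged when `G` is nonlinear in `w`, which is the point of using the UKF here.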

An interesting topic
• The UKF is a sort of Linear Regression
Kalman Filter (LRKF)
Tine Lefebvre et al., Comment on "A New Method for the
Nonlinear Transformation of Means and Covariances in Filters
and Estimators", IEEE Transactions on Automatic Control,
Vol. 47, No. 8, August 2002

Statistical Linear Regression of a
Nonlinear Function
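The LRKF view can be illustrated with a small sketch: given weighted regression points and their images under a function, fit y ≈ A x + b by weighted least squares; for a linear function the fit recovers it exactly. The points, weights, and test matrix below are illustrative assumptions:

```python
import numpy as np

def statistical_linear_regression(X, Y, w):
    """Weighted least-squares fit Y ~ A X + b over regression points.

    X: (n, dx) points, Y: (n, dy) images, w: (n,) weights summing to 1.
    """
    x_bar = w @ X
    y_bar = w @ Y
    dX, dY = X - x_bar, Y - y_bar
    Pxx = dX.T @ (w[:, None] * dX)     # weighted input covariance
    Pyx = dY.T @ (w[:, None] * dX)     # weighted cross covariance
    A = Pyx @ np.linalg.inv(Pxx)
    b = y_bar - A @ x_bar
    return A, b

# Regression points (e.g. sigma points) and an arbitrary linear test function:
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
w = np.full(5, 0.2)
F_true = np.array([[2.0, 1.0], [0.0, 3.0]])
Y = X @ F_true.T + np.array([1.0, -1.0])
A, b = statistical_linear_regression(X, Y, w)
```

Feeding sigma points through this regression yields exactly the cross-covariance and gain structure of the UKF, which is the sense in which the UKF is an LRKF.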

The UKF is a LRKF: Process Update
UKF LRKF

Proposed Adaptation to the UKF
• The weights can be chosen as real numbers; the UKF then
performs a weighted linear regression.
• Sometimes the UKF is used with κ < 0 (resulting in negative
weights); see the aforementioned paper and [1]. In this
case, the calculated covariance matrices can fail to be positive
semidefinite. To overcome this problem, the covariances are
artificially increased.

Questions for discussion
The constant α determines the spread of the sigma
points around x̄, and is usually set to a small positive
value.
  λ = α²(L + κ) − L
Given κ = 3 − L, β = 2, and 0.0001 ≤ α ≤ 1, then
  λ = α²(L + κ) − L = 3α² − L
and
  χ_i = x̄ + ( √((L + λ) P_x) )_i = x̄ + √3 · α · ( √P_x )_i
Weights
Since 3·(0.0001)² − L < λ < 3 − L, λ is negative for L > 3.
  W_0^{(m)} = λ / (L + λ) = 1 − L / (3α²)
  W_0^{(c)} = λ / (L + λ) + (1 − α² + β) = 4 − α² − L / (3α²)
  W_i^{(m)} = W_i^{(c)} = 1 / ( 2(L + λ) ) = 1 / (6α²),   i = 1, …, 2L
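These expressions can be checked numerically; the dimension and α below are arbitrary illustration values, chosen so that L > 3 makes the zeroth weight negative:

```python
L, alpha, beta = 4, 0.1, 2.0
kappa = 3 - L                          # the usual choice from the slides
lam = alpha**2 * (L + kappa) - L       # = 3*alpha**2 - L, negative for L > 3

W0_m = lam / (L + lam)                 # = 1 - L / (3 * alpha**2)
W0_c = W0_m + (1 - alpha**2 + beta)    # = 4 - alpha**2 - L / (3 * alpha**2)
Wi = 1.0 / (2 * (L + lam))             # = 1 / (6 * alpha**2), for i = 1..2L

print(W0_m, W0_c, Wi)
```

The mean weights still sum to one, but the large-magnitude negative W_0 is exactly what can push the reconstructed covariance away from positive semidefiniteness.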

Questions for discussion

How to guarantee the matrix square root?

  χ_0 = x̄
  χ_i = x̄ + ( √((L + λ) P_k) )_i,   i = 1, …, L
  χ_i = x̄ − ( √((L + λ) P_k) )_{i−L},   i = L + 1, …, 2L

  K_k = P_{x_k y_k} P_{ỹ_k ỹ_k}^{−1}

Positive definite?

  P_k = P_k^− − K_k P_{ỹ_k ỹ_k} K_k^T
      = P_k^− − ( P_{x_k y_k} P_{ỹ_k ỹ_k}^{−1} ) P_{ỹ_k ỹ_k} ( P_{x_k y_k} P_{ỹ_k ỹ_k}^{−1} )^T
      = P_k^− − P_{x_k y_k} ( P_{ỹ_k ỹ_k}^{−1} )^T P_{x_k y_k}^T