ACM 116: The Kalman filter
Outline
- Example
- General setup
- Derivation
- Numerical examples: estimating a constant voltage, 1D tracking, 2D tracking
Example: Navigation Problem
- A truck on frictionless straight rails; initial position X_0 = 0
- The movement is buffeted by random accelerations
- We measure the position every \Delta t seconds
- State variables (X_k, V_k): position and velocity at time k\Delta t

Dynamics:

X_k = X_{k-1} + V_{k-1} \Delta t + a_{k-1} \Delta t^2 / 2
V_k = V_{k-1} + a_{k-1} \Delta t

where a_{k-1} is a random acceleration.

Observations:

Y_k = X_k + Z_k

where Z_k is a noise term.

The goal is to estimate the position and velocity at all times.
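As an illustration (not from the original slides), here is a minimal MATLAB sketch that simulates this truck model; the time step dt and the noise levels sigma_a and sigma_z are assumed values chosen only for illustration.

dt = 1; sigma_a = 0.2; sigma_z = 1; N = 50;      % assumed sampling step, noise levels, horizon
X = zeros(N,1); V = zeros(N,1); Y = zeros(N,1);  % position, velocity, measurements (X_0 = 0)
Y(1) = X(1) + sigma_z*randn;                     % first noisy position measurement
for k = 2:N
    a = sigma_a*randn;                           % random acceleration a_{k-1}
    X(k) = X(k-1) + V(k-1)*dt + a*dt^2/2;        % position update
    V(k) = V(k-1) + a*dt;                        % velocity update
    Y(k) = X(k) + sigma_z*randn;                 % noisy position measurement Y_k = X_k + Z_k
end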
General Setup
Estimation of a stochastic dynamic system.

Dynamics:

X_k = F_{k-1} X_{k-1} + B_{k-1} u_{k-1} + W_{k-1}

- X_k: state of the system at time k
- u_{k-1}: control input
- W_{k-1}: noise

Observations:

Y_k = H_k X_k + Z_k

- Y_k is observed
- Z_k is noise

The noise realizations are all independent.
Goal: predict the state X_k from the past data Y_0, Y_1, ..., Y_{k-1}.
Derivation
Derivation in the simpler model where the dynamics are of the form

X_k = a_{k-1} X_{k-1} + W_{k-1}

and the observations are

Y_k = X_k + Z_k.

The objective is to find, for each time k, the minimum-MSE filter based on Y_0, Y_1, ..., Y_{k-1}:

\hat{X}_k = \sum_{j=1}^{k} h_j^{(k-1)} Y_{k-j}

To find the filter, we apply the orthogonality principle:

E\Big( \big( X_k - \sum_{j=1}^{k} h_j^{(k-1)} Y_{k-j} \big) Y_\ell \Big) = 0,   \ell = 0, 1, ..., k-1.
Recursion
The beautiful thing about the Kalman filter is that one can almost deduce the optimal filter predicting X_{k+1} from the one predicting X_k:

h_{j+1}^{(k)} = (a_k - h_1^{(k)}) h_j^{(k-1)},   j = 1, ..., k.

Given the filter h^{(k-1)}, we only need to find h_1^{(k)} to get the filter at the next time step.
How to find h_1^{(k)}?
Observe that the next prediction is equal to

\hat{X}_{k+1} = h_1^{(k)} Y_k + \sum_{j=1}^{k} (a_k - h_1^{(k)}) h_j^{(k-1)} Y_{k-j}
            = a_k \hat{X}_k + h_1^{(k)} (Y_k - \hat{X}_k)

Interpretation:

\hat{X}_{k+1} = a_k \hat{X}_k + h_1^{(k)} I_k

- a_k \hat{X}_k is the prediction based on the estimate at time k
- h_1^{(k)} I_k is a corrective term, available since we now see Y_k
- h_1^{(k)} is called the gain
- I_k = Y_k - \hat{X}_k is called the innovation
Error of Prediction
To find h_1^{(k)}, we look at the error of prediction

\varepsilon_k = X_k - \hat{X}_k

We have the recursion

\varepsilon_{k+1} = (a_k - h_1^{(k)}) \varepsilon_k + W_k - h_1^{(k)} Z_k,   \varepsilon_0 = -Z_0

so that E(\varepsilon_k) = 0 and

E(\varepsilon_{k+1}^2) = (a_k - h_1^{(k)})^2 E(\varepsilon_k^2) + E(W_k^2) + (h_1^{(k)})^2 E(Z_k^2)
To minimize the MSE E(\varepsilon_{k+1}^2), we adjust h_1^{(k)} so that

\frac{\partial}{\partial h_1^{(k)}} E(\varepsilon_{k+1}^2) = 0 = -2 (a_k - h_1^{(k)}) E(\varepsilon_k^2) + 2 h_1^{(k)} E(Z_k^2),

whose solution is

h_1^{(k)} = \frac{a_k E(\varepsilon_k^2)}{E(\varepsilon_k^2) + E(Z_k^2)}

Note that this gives the recurrence relation

E(\varepsilon_{k+1}^2) = a_k (a_k - h_1^{(k)}) E(\varepsilon_k^2) + E(W_k^2)
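To see why, write P_k = E(\varepsilon_k^2) and R_k = E(Z_k^2) as shorthand (not used elsewhere in these slides) and substitute the optimal gain into the expression for E(\varepsilon_{k+1}^2):

(a_k - h_1^{(k)})^2 P_k + (h_1^{(k)})^2 R_k
  = \frac{a_k^2 R_k^2 P_k + a_k^2 P_k^2 R_k}{(P_k + R_k)^2}
  = \frac{a_k^2 P_k R_k}{P_k + R_k}
  = a_k (a_k - h_1^{(k)}) P_k,

and adding the process-noise term E(W_k^2) gives the recurrence above.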
The Kalman Filter Algorithm
Initialization: \hat{X}_0 = 0, E(\varepsilon_0^2) = E(Z_0^2)

Loop: for k = 0, 1, ...

h_1^{(k)} = \frac{a_k E(\varepsilon_k^2)}{E(\varepsilon_k^2) + E(Z_k^2)}

\hat{X}_{k+1} = a_k \hat{X}_k + h_1^{(k)} (Y_k - \hat{X}_k)

E(\varepsilon_{k+1}^2) = a_k (a_k - h_1^{(k)}) E(\varepsilon_k^2) + E(W_k^2)
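A minimal MATLAB sketch of this scalar algorithm (not from the original slides); the coefficient a and the variances varW = E(W_k^2) and varZ = E(Z_k^2) are held constant and set to assumed illustrative values.

a = 0.95; varW = 1e-2; varZ = 1e-1; N = 50;     % assumed model constants and horizon
% Simulate X_k = a X_{k-1} + W_{k-1} and Y_k = X_k + Z_k
X = zeros(N,1); Y = zeros(N,1);
Y(1) = X(1) + sqrt(varZ)*randn;
for k = 2:N
    X(k) = a*X(k-1) + sqrt(varW)*randn;
    Y(k) = X(k) + sqrt(varZ)*randn;
end
% Kalman recursions; MATLAB indices are shifted by one (entry k+1 corresponds to time k)
Xhat = zeros(N,1); P = zeros(N,1);
Xhat(1) = 0;  P(1) = varZ;                      % initialization: Xhat_0 = 0, E(eps_0^2) = E(Z_0^2)
for k = 1:N-1
    h = a*P(k)/(P(k) + varZ);                   % gain h_1^(k)
    Xhat(k+1) = a*Xhat(k) + h*(Y(k) - Xhat(k)); % Xhat_{k+1} = a Xhat_k + h (Y_k - Xhat_k)
    P(k+1) = a*(a - h)*P(k) + varW;             % E(eps_{k+1}^2)
end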
Benefits
- Requires no knowledge about the structure of W_k and Z_k (only their variances)
- Easy implementation
- Many applications: inertial guidance systems, autopilot, satellite navigation systems, and many others
General Formulation

X_k = F_{k-1} X_{k-1} + W_{k-1}
Y_k = H_k X_k + Z_k

The covariance of W_k is Q_k and that of Z_k is R_k.

Two variables:
- X_{k|k}: estimate of the state at time k based upon Y_0, ..., Y_k
- E_{k|k}: error covariance matrix, E_{k|k} = Cov(X_k - X_{k|k})

(X_{k|k-1} and E_{k|k-1} denote the same quantities when only Y_0, ..., Y_{k-1} is used.)
Prediction

X_{k|k-1} = F_{k-1} X_{k-1|k-1}
E_{k|k-1} = F_{k-1} E_{k-1|k-1} F_{k-1}^T + Q_{k-1}

Update

I_k = Y_k - H_k X_{k|k-1}            (Innovation)
S_k = H_k E_{k|k-1} H_k^T + R_k      (Innovation covariance)
K_k = E_{k|k-1} H_k^T S_k^{-1}       (Kalman Gain)
X_{k|k} = X_{k|k-1} + K_k I_k        (Updated state estimate)
E_{k|k} = (Id - K_k H_k) E_{k|k-1}   (Updated error covariance)
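As a sketch of one predict/update cycle in this matrix notation (illustrative, not the original course code; save it as kf_step.m), assuming constant F, H, Q, R:

function [xf, Ef] = kf_step(xf_prev, Ef_prev, y, F, H, Q, R)
    % xf_prev, Ef_prev: X_{k-1|k-1} and E_{k-1|k-1};  y: the new observation Y_k
    xp = F*xf_prev;                       % prediction X_{k|k-1}
    Ep = F*Ef_prev*F' + Q;                % predicted error covariance E_{k|k-1}
    I  = y - H*xp;                        % innovation
    S  = H*Ep*H' + R;                     % innovation covariance
    K  = (Ep*H')/S;                       % Kalman gain E_{k|k-1} H^T S^{-1}
    xf = xp + K*I;                        % updated state estimate X_{k|k}
    Ef = (eye(size(Ep,1)) - K*H)*Ep;      % updated error covariance E_{k|k}
end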
Estimating a Constant Voltage
We wish to estimate a voltage which is almost constant, except for small random fluctuations. Our measuring device is imperfect (e.g., because of a poor A/D conversion). The process is governed by

X_k = X_0 + W_k,   k = 1, 2, ...

with X_0 = 0.5 V, and the measurements are

Z_k = X_k + V_k,   k = 1, 2, ...

where W_k and V_k are uncorrelated Gaussian white noise processes, with R := Var(V_k) = 0.01 and Var(W_k) = 10^{-5}.
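A short MATLAB sketch of this experiment (illustrative, not the original course code); the filter models the voltage as a random walk with process variance Q = 10^{-5}, and the initial guess xhat(1) = 0 and initial error variance P(1) = 1 are assumed values.

N = 50; Q = 1e-5; R = 0.01;                       % process and measurement noise variances
x = 0.5 + sqrt(Q)*randn(N,1);                     % almost-constant voltage X_k = X_0 + W_k
z = x + sqrt(R)*randn(N,1);                       % imperfect measurements Z_k = X_k + V_k
xhat = zeros(N,1); P = zeros(N,1);
xhat(1) = 0; P(1) = 1;                            % assumed initial estimate and error variance
for k = 2:N
    Pp = P(k-1) + Q;                              % predict (identity dynamics)
    K  = Pp/(Pp + R);                             % gain
    xhat(k) = xhat(k-1) + K*(z(k) - xhat(k-1));   % correct with the new measurement
    P(k) = (1 - K)*Pp;
end
plot(1:N, z, 'g*', 1:N, x, 'b-', 1:N, xhat, 'r-')
xlabel('Iteration'); ylabel('Voltage'); legend('measurements', 'exact process', 'estimate')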
[Figure: Voltage estimation with an accurate estimate of the measurement variance, R_est = R = 0.01; estimate, exact process, and measurements vs. iteration.]
[Figure: Voltage estimation with an optimistic estimate of the measurement variance, R_est = 0.0001; estimate, exact process, and measurements vs. iteration.]
[Figure: Voltage estimation with a pessimistic estimate of the measurement variance, R_est = 1; estimate, exact process, and measurements vs. iteration.]
1D Tracking
Estimation of the position of a vehicle.

Let X be the state variable (position and speed), and A the transition matrix

A = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}.

The process is governed by

X_{n+1} = A X_n + W_n

where W_n is a zero-mean Gaussian white noise process. The observation is

Y_n = C X_n + Z_n

where the matrix C only picks up the position, and Z_n is another zero-mean Gaussian white noise process independent of W_n.
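For concreteness, a small MATLAB sketch of this 1D model (illustrative; the time step dt, the noise variances q and r, and the initial state are assumed values):

dt = 1; q = 0.01; r = 1; N = 25;                % assumed step size, noise variances, horizon
A = [1 dt; 0 1];                                % transition matrix acting on (position, speed)
C = [1 0];                                      % observation picks up the position only
X = zeros(2,N); Y = zeros(1,N);
X(:,1) = [0; 1];                                % assumed initial position and speed
Y(1) = C*X(:,1) + sqrt(r)*randn;
for n = 2:N
    X(:,n) = A*X(:,n-1) + sqrt(q)*randn(2,1);   % W_n ~ N(0, q I)
    Y(n)   = C*X(:,n) + sqrt(r)*randn;          % Z_n ~ N(0, r)
end
% The filtered trajectory can then be computed with kf_step (sketched earlier)
% or with the kalman_filter routine used in the 2D example below.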
[Figure: Estimation of a moving vehicle in 1D; exact, measured, and estimated position vs. time.]
2D Example
General setup:

X(t+1) = F X(t) + W(t),   W ~ N(0, Q)
Y(t) = H X(t) + V(t),     V ~ N(0, R)

A particle moves at constant velocity, subject to random perturbations of its trajectory. The new position (x_1, x_2) is the old position plus the velocity (dx_1, dx_2) plus noise w:

\begin{pmatrix} x_1(t) \\ x_2(t) \\ dx_1(t) \\ dx_2(t) \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_1(t-1) \\ x_2(t-1) \\ dx_1(t-1) \\ dx_2(t-1) \end{pmatrix}
+ \begin{pmatrix} w_1(t-1) \\ w_2(t-1) \\ dw_1(t-1) \\ dw_2(t-1) \end{pmatrix}
Observations
We only observe the position of the particle:

\begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} x_1(t) \\ x_2(t) \\ dx_1(t) \\ dx_2(t) \end{pmatrix}
+ \begin{pmatrix} v_1(t) \\ v_2(t) \end{pmatrix}

Source: http://www.cs.ubc.ca/~murphyk/Software/Kalman/kalman.html
Implementation

% Make a point move in the 2D plane
% State = (x y xdot ydot). We only observe (x y).

% This code was used to generate Figure 17.9 of
% "Artificial Intelligence: a Modern Approach",
% Russell and Norvig, 2nd edition, Prentice Hall, in preparation.

% X(t+1) = F X(t) + noise(Q)
% Y(t) = H X(t) + noise(R)

ss = 4; % state size
os = 2; % observation size
F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1];
H = [1 0 0 0; 0 1 0 0];
Q = 1*eye(ss);
R = 10*eye(os);
initx = [10 10 1 0]';
initV = 10*eye(ss);
seed = 8;
rand('state', seed);
randn('state', seed);
T = 50;
[x, y] = sample_lds(F, H, Q, R, initx, T);
Apply Kalman Filter

[xfilt, Vfilt] = kalman_filter(y, F, H, Q, R, initx, initV);
dfilt = x([1 2],:) - xfilt([1 2],:);
mse_filt = sqrt(sum(sum(dfilt.^2)))

figure;
plot(x(1,:), x(2,:), 'ks-');
hold on
plot(y(1,:), y(2,:), 'g*');
plot(xfilt(1,:), xfilt(2,:), 'rx:');
hold off
legend('true', 'observed', 'filtered', 0)
xlabel('X1'), ylabel('X2')
[Figure: 2D tracking with the Kalman filter; true, observed, and filtered trajectories in the (X1, X2) plane.]
[Figure: 2D tracking with the Kalman filter (second trajectory); true, observed, and filtered trajectories in the (X1, X2) plane.]
Apply Kalman Smoother

[xsmooth, Vsmooth] = kalman_smoother(y, F, H, Q, R, initx, initV);
dsmooth = x([1 2],:) - xsmooth([1 2],:);
mse_smooth = sqrt(sum(sum(dsmooth.^2)))

figure;
hold on
plot(x(1,:), x(2,:), 'ks-');
plot(y(1,:), y(2,:), 'g*');
plot(xsmooth(1,:), xsmooth(2,:), 'rx:');
hold off
legend('true', 'observed', 'smoothed', 0)
xlabel('X1'), ylabel('X2')
[Figure: 2D tracking with the Kalman smoother; true, observed, and smoothed trajectories in the (X1, X2) plane.]
[Figure: 2D tracking with the Kalman smoother (second trajectory); true, observed, and smoothed trajectories in the (X1, X2) plane.]