State Space Models


Let $\{\mathbf{x}_t : t \in T\}$ and $\{\mathbf{y}_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations:

$$\mathbf{y}_t = A_t \mathbf{x}_t + \mathbf{v}_t \quad \text{(the observation equation)}$$

$$\mathbf{x}_t = B_t \mathbf{x}_{t-1} + \mathbf{u}_t \quad \text{(the state equation)}$$

The time series $\{\mathbf{y}_t : t \in T\}$ is said to have a state-space representation.

Note: $\{\mathbf{u}_t : t \in T\}$ and $\{\mathbf{v}_t : t \in T\}$ denote two vector-valued time series satisfying:

1. $E(\mathbf{u}_t) = E(\mathbf{v}_t) = \mathbf{0}$.

2. $E(\mathbf{u}_t\mathbf{u}_s') = E(\mathbf{v}_t\mathbf{v}_s') = 0$ if $t \neq s$.

3. $E(\mathbf{u}_t\mathbf{u}_t') = \Sigma_u$ and $E(\mathbf{v}_t\mathbf{v}_t') = \Sigma_v$.

4. $E(\mathbf{u}_t\mathbf{v}_s') = E(\mathbf{v}_t\mathbf{u}_s') = 0$ for all $t$ and $s$.

Example: One might be tracking an object with several radar stations. The process $\{\mathbf{x}_t : t \in T\}$ gives the position of the object at time $t$. The process $\{\mathbf{y}_t : t \in T\}$ denotes the observations at time $t$ made by the several radar stations.

As in the Hidden Markov Model, we will be interested in determining the position of the object, $\{\mathbf{x}_t : t \in T\}$, from the observations, $\{\mathbf{y}_t : t \in T\}$, made by the several radar stations.

Example: Many of the models we have considered to date can be thought of as state-space models.

Autoregressive model of order p:

$$y_t = \beta_1 y_{t-1} + \beta_2 y_{t-2} + \cdots + \beta_p y_{t-p} + u_t$$

Define

$$\mathbf{x}_t = \begin{bmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{bmatrix}$$

Then

$$y_t = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \mathbf{x}_t \quad \text{(the observation equation)}$$

and

$$\mathbf{x}_t = B \mathbf{x}_{t-1} + \mathbf{u}_t \quad \text{(the state equation)}$$

where

$$B = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_{p-1} & \beta_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \quad \text{and} \quad \mathbf{u}_t = \begin{bmatrix} u_t \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
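The companion-matrix construction is easy to mechanize. Here is a minimal sketch in Python with NumPy (the function name `ar_to_state_space` and the coefficient values are ours, for illustration only):

```python
import numpy as np

def ar_to_state_space(beta):
    """Companion-form state matrix B and observation row A for an AR(p) model."""
    p = len(beta)
    B = np.zeros((p, p))
    B[0, :] = beta              # first row holds the AR coefficients
    B[1:, :-1] = np.eye(p - 1)  # subdiagonal identity shifts the state down
    A = np.zeros(p)
    A[0] = 1.0                  # y_t picks off the first component of x_t
    return B, A

# Simulate an AR(2) through its state-space form (coefficients illustrative)
B, A = ar_to_state_space([0.5, 0.3])
rng = np.random.default_rng(0)
x = np.zeros(2)
ys = []
for _ in range(200):
    u = np.array([rng.normal(), 0.0])  # noise enters only the first component
    x = B @ x + u                      # state equation
    ys.append(A @ x)                   # observation equation
```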

Hidden Markov Model: Assume that there are $m$ states, and that the observations $Y_t$ are discrete, taking on $n$ possible values.

Suppose that the $m$ states are denoted by the vectors:

$$\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad \mathbf{e}_m = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

Suppose that the $n$ possible observations taken at each state are denoted by the vectors:

$$\mathbf{f}_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \mathbf{f}_2 = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad \mathbf{f}_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

Let

$$\Pi = (\pi_{ij})_{m \times m} \quad \text{where} \quad \pi_{ij} = P[X_t = \mathbf{e}_j \mid X_{t-1} = \mathbf{e}_i]$$

and

$$B = (\beta_{ij})_{m \times n} \quad \text{where} \quad \beta_{ij} = P[Y_t = \mathbf{f}_j \mid X_t = \mathbf{e}_i]$$

Note

$$E[X_t \mid X_{t-1} = \mathbf{e}_i] = \begin{bmatrix} \pi_{i1} \\ \pi_{i2} \\ \vdots \\ \pi_{im} \end{bmatrix} = \Pi' \mathbf{e}_i$$

Let

$$\mathbf{u}_t = X_t - E[X_t \mid X_{t-1}]$$

So that, since $E[X_t \mid X_{t-1} = \mathbf{e}_i] = \Pi'\mathbf{e}_i$ for each $i$,

$$E[X_t \mid X_{t-1}] = \Pi' X_{t-1}$$

and

$$X_t = \Pi' X_{t-1} + \mathbf{u}_t \quad \text{(the state equation)}$$

with

$$E[\mathbf{u}_t \mid X_{t-1}] = E[X_t \mid X_{t-1}] - \Pi' X_{t-1} = \mathbf{0}$$

Also

$$X_t X_t' = (\Pi' X_{t-1} + \mathbf{u}_t)(\Pi' X_{t-1} + \mathbf{u}_t)' = \Pi' X_{t-1} X_{t-1}' \Pi + \Pi' X_{t-1} \mathbf{u}_t' + \mathbf{u}_t X_{t-1}' \Pi + \mathbf{u}_t \mathbf{u}_t'$$

Hence

$$\Sigma_u = E[\mathbf{u}_t \mathbf{u}_t' \mid X_{t-1}] = E[X_t X_t' \mid X_{t-1}] - \Pi' X_{t-1} X_{t-1}' \Pi$$

where $\operatorname{diag}(\mathbf{v})$ = the diagonal matrix with the components of the vector $\mathbf{v}$ along the diagonal. Since $X_t$ is a unit vector, $X_t X_t' = \operatorname{diag}(X_t)$, and

$$E[\operatorname{diag}(X_t) \mid X_{t-1}] = \operatorname{diag}(E[X_t \mid X_{t-1}]) = \operatorname{diag}(\Pi' X_{t-1})$$

Thus

$$\Sigma_u = \operatorname{diag}(\Pi' X_{t-1}) - \Pi' \operatorname{diag}(X_{t-1})\,\Pi$$

We have defined

$$B = (\beta_{ij})_{m \times n} \quad \text{where} \quad \beta_{ij} = P[Y_t = \mathbf{f}_j \mid X_t = \mathbf{e}_i]$$

Hence

$$E[Y_t \mid X_t = \mathbf{e}_i] = \begin{bmatrix} \beta_{i1} \\ \beta_{i2} \\ \vdots \\ \beta_{in} \end{bmatrix} = B' \mathbf{e}_i$$

Let

$$\mathbf{v}_t = Y_t - E[Y_t \mid X_t] = Y_t - B' X_t$$

Then

$$Y_t = B' X_t + \mathbf{v}_t \quad \text{(the observation equation)}$$

with

$$E[\mathbf{v}_t \mid X_t] = \mathbf{0}$$

and

$$\Sigma_v = E[\mathbf{v}_t \mathbf{v}_t' \mid X_t] = \operatorname{diag}(B' X_t) - B' \operatorname{diag}(X_t)\,B$$

Hence with these definitions the state sequence of a Hidden Markov Model satisfies:

$$X_t = \Pi' X_{t-1} + \mathbf{u}_t \quad \text{(the state equation)}$$

with $E[\mathbf{u}_t \mid X_{t-1}] = \mathbf{0}$ and $\Sigma_u = \operatorname{diag}(\Pi' X_{t-1}) - \Pi' \operatorname{diag}(X_{t-1})\,\Pi$.

The observation sequence satisfies:

$$Y_t = B' X_t + \mathbf{v}_t \quad \text{(the observation equation)}$$

with $E[\mathbf{v}_t \mid X_t] = \mathbf{0}$ and $\Sigma_v = \operatorname{diag}(B' X_t) - B' \operatorname{diag}(X_t)\,B$.
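To make the unit-vector representation concrete, here is a minimal simulation sketch (Python/NumPy; the matrices `Pi` and `Bmat` and all values are illustrative, not from the source), checking empirically that $E[X_t \mid X_{t-1}] = \Pi' X_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
Pi = np.array([[0.9, 0.1],
               [0.2, 0.8]])          # pi_ij = P[X_t = e_j | X_{t-1} = e_i]
Bmat = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.3, 0.6]])   # beta_ij = P[Y_t = f_j | X_t = e_i]

def step(x):
    """One transition: x is a unit vector e_i; sample next state and observation."""
    i = np.argmax(x)
    j = rng.choice(2, p=Pi[i])       # next state index
    k = rng.choice(3, p=Bmat[j])     # observation index
    return np.eye(2)[j], np.eye(3)[k]

x = np.eye(2)[0]
draws = np.array([step(x)[0] for _ in range(10000)])
print(draws.mean(axis=0), Pi.T @ x)  # empirical mean of X_t vs. Pi' X_{t-1}
```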

Kalman Filtering

We are now interested in determining the state vector $\mathbf{x}_t$ in terms of some or all of the observation vectors $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \ldots, \mathbf{y}_T$. We will consider finding the "best" linear predictor. We can include a constant term if, in addition, one of the observations ($\mathbf{y}_0$ say) is the vector of 1's.

We will consider estimation of $\mathbf{x}_t$ in terms of:

1. $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \ldots, \mathbf{y}_{t-1}$ (the prediction problem)

2. $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \ldots, \mathbf{y}_t$ (the filtering problem)

3. $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \ldots, \mathbf{y}_T$ ($t < T$, the smoothing problem)

For any vector $\mathbf{x}$ define:

$$\hat{\mathbf{x}}(s) = \begin{bmatrix} \hat{x}_1(s) & \hat{x}_2(s) & \cdots & \hat{x}_p(s) \end{bmatrix}'$$

where $\hat{x}_i(s)$ is the best linear predictor of $x_i$, the $i$th component of $\mathbf{x}$, based on $\mathbf{y}_0, \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_s$. The best linear predictor of $x_i$ is the linear function of $\mathbf{y}_0, \mathbf{y}_1, \ldots, \mathbf{y}_s$ that minimizes

$$E\big[(x_i - \hat{x}_i(s))^2\big]$$

Remark: The best predictor is the unique vector of the form:

$$\hat{\mathbf{x}}(s) = C_0 \mathbf{y}_0 + C_1 \mathbf{y}_1 + \cdots + C_s \mathbf{y}_s$$

where $C_0, C_1, C_2, \ldots, C_s$ are selected so that:

$$\mathbf{x} - \hat{\mathbf{x}}(s) \perp \mathbf{y}_i, \quad i = 0, 1, 2, \ldots, s$$

i.e. $E\big[(\mathbf{x} - \hat{\mathbf{x}}(s))\,\mathbf{y}_i'\big] = 0$ for $i = 0, 1, 2, \ldots, s$.

Remark: If $\mathbf{x}, \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_s$ are normally distributed then:

$$\hat{\mathbf{x}}(s) = E[\mathbf{x} \mid \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_s]$$

Remark: Let $\mathbf{u}$ and $\mathbf{v}$ be two random vectors. Then

$$\hat{\mathbf{u}} = C \mathbf{v}$$

is the optimal linear predictor of $\mathbf{u}$ based on $\mathbf{v}$ if

$$C = E(\mathbf{u}\mathbf{v}')\,[E(\mathbf{v}\mathbf{v}')]^{-1}$$
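A small numerical illustration of this remark (Python/NumPy; the variable names and coefficient values are ours): estimating $C = E(\mathbf{u}\mathbf{v}')[E(\mathbf{v}\mathbf{v}')]^{-1}$ from samples recovers the underlying linear relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
v = rng.normal(size=(n, 3))
u = v @ np.array([[1.0], [0.5], [-0.2]]) + 0.1 * rng.normal(size=(n, 1))

# Sample versions of E(uv') and E(vv') (zero-mean case)
Euv = u.T @ v / n
Evv = v.T @ v / n
C = Euv @ np.linalg.inv(Evv)   # optimal linear predictor coefficients
print(C)                        # close to [1.0, 0.5, -0.2]
```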


Kalman Filtering:

Let $\{\mathbf{x}_t : t \in T\}$ and $\{\mathbf{y}_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations:

$$\mathbf{y}_t = A_t \mathbf{x}_t + \mathbf{v}_t$$

$$\mathbf{x}_t = B \mathbf{x}_{t-1} + \mathbf{u}_t$$

Let

$$\hat{\mathbf{x}}_t(s) = E^*[\mathbf{x}_t \mid \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_s],$$

the best linear predictor of $\mathbf{x}_t$ based on $\mathbf{y}_1, \ldots, \mathbf{y}_s$, and

$$\Sigma_{t,u}(s) = E\big[(\mathbf{x}_t - \hat{\mathbf{x}}_t(s))(\mathbf{x}_u - \hat{\mathbf{x}}_u(s))'\big]$$

Then

$$\hat{\mathbf{x}}_t(t-1) = B\,\hat{\mathbf{x}}_{t-1}(t-1)$$

$$\hat{\mathbf{x}}_t(t) = \hat{\mathbf{x}}_t(t-1) + K_t\big[\mathbf{y}_t - A_t\,\hat{\mathbf{x}}_t(t-1)\big]$$

where

$$K_t = \Sigma_{t,t}(t-1)\,A_t'\,\big[A_t\,\Sigma_{t,t}(t-1)\,A_t' + \Sigma_v\big]^{-1}$$

One also assumes that the initial vector $\mathbf{x}_0$ has mean $\boldsymbol{\mu}$ and covariance matrix $\Sigma_0$, and that

$$\hat{\mathbf{x}}_0(0) = \boldsymbol{\mu}$$

The covariance matrices are updated by

$$\Sigma_{t,t}(t-1) = B\,\Sigma_{t-1,t-1}(t-1)\,B' + \Sigma_u$$

with

$$\Sigma_{t,t}(t) = \Sigma_{t,t}(t-1) - K_t A_t\,\Sigma_{t,t}(t-1)$$

and

$$\Sigma_{0,0}(0) = \Sigma_0$$

Summary: The Kalman equations

1. $\Sigma_{t,t}(t-1) = B\,\Sigma_{t-1,t-1}(t-1)\,B' + \Sigma_u$

2. $K_t = \Sigma_{t,t}(t-1)\,A_t'\,\big[A_t\,\Sigma_{t,t}(t-1)\,A_t' + \Sigma_v\big]^{-1}$

3. $\Sigma_{t,t}(t) = \Sigma_{t,t}(t-1) - K_t A_t\,\Sigma_{t,t}(t-1)$

4. $\hat{\mathbf{x}}_t(t-1) = B\,\hat{\mathbf{x}}_{t-1}(t-1)$

5. $\hat{\mathbf{x}}_t(t) = \hat{\mathbf{x}}_t(t-1) + K_t\big[\mathbf{y}_t - A_t\,\hat{\mathbf{x}}_t(t-1)\big]$

with $\hat{\mathbf{x}}_0(0) = \boldsymbol{\mu}$ and $\Sigma_{0,0}(0) = \Sigma_0$.
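The five equations translate directly into code. A minimal sketch of one forward pass (Python/NumPy; the function `kalman_filter` and its argument names are ours), assuming constant $A$ and $B$ for simplicity:

```python
import numpy as np

def kalman_filter(ys, A, B, Sigma_u, Sigma_v, mu0, Sigma0):
    """Forward Kalman recursions; returns one-step-ahead predictions and
    filtered means/covariances (the predictions are needed for smoothing)."""
    x_filt, P_filt = mu0, Sigma0          # x̂_0(0), Σ_{0,0}(0)
    xs_pred, Ps_pred, xs_filt, Ps_filt = [], [], [], []
    for y in ys:
        x_pred = B @ x_filt                               # (4)
        P_pred = B @ P_filt @ B.T + Sigma_u               # (1)
        S = A @ P_pred @ A.T + Sigma_v
        K = P_pred @ A.T @ np.linalg.inv(S)               # (2)
        x_filt = x_pred + K @ (y - A @ x_pred)            # (5)
        P_filt = P_pred - K @ A @ P_pred                  # (3)
        xs_pred.append(x_pred); Ps_pred.append(P_pred)
        xs_filt.append(x_filt); Ps_filt.append(P_filt)
    return xs_pred, Ps_pred, xs_filt, Ps_filt
```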

Proof: Now $\hat{\mathbf{x}}_t(s) = E^*[\mathbf{x}_t \mid \mathbf{y}_1, \ldots, \mathbf{y}_s]$, hence

$$\hat{\mathbf{x}}_t(t-1) = E^*[\mathbf{x}_t \mid \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}] = E^*[B\mathbf{x}_{t-1} + \mathbf{u}_t \mid \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}]$$

$$= B\,E^*[\mathbf{x}_{t-1} \mid \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}] = B\,\hat{\mathbf{x}}_{t-1}(t-1),$$

proving (4). Note also

$$\hat{\mathbf{y}}_t(t-1) = E^*[\mathbf{y}_t \mid \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}] = E^*[A_t\mathbf{x}_t + \mathbf{v}_t \mid \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}] = A_t\,\hat{\mathbf{x}}_t(t-1)$$

Let

$$\mathbf{e}_t = \mathbf{y}_t - \hat{\mathbf{y}}_t(t-1) = \mathbf{y}_t - A_t\,\hat{\mathbf{x}}_t(t-1) = A_t\big[\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big] + \mathbf{v}_t$$

and let

$$\mathbf{d}_t = \mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)$$

Given $\mathbf{y}_0, \mathbf{y}_1, \ldots, \mathbf{y}_{t-1}$, the best linear predictor of $\mathbf{d}_t$ using $\mathbf{e}_t$ is:

$$E^*[\mathbf{d}_t \mid \mathbf{y}_0, \ldots, \mathbf{y}_{t-1}, \mathbf{y}_t] = E^*[\mathbf{d}_t \mid \mathbf{y}_0, \ldots, \mathbf{y}_{t-1}, \mathbf{e}_t] = E[\mathbf{d}_t\mathbf{e}_t']\,\big(E[\mathbf{e}_t\mathbf{e}_t']\big)^{-1}\mathbf{e}_t$$

Since $\hat{\mathbf{x}}_t(t) - \hat{\mathbf{x}}_t(t-1) = E^*[\mathbf{d}_t \mid \mathbf{y}_0, \ldots, \mathbf{y}_t]$, we have

$$\hat{\mathbf{x}}_t(t) = \hat{\mathbf{x}}_t(t-1) + K_t\mathbf{e}_t$$

where

$$\mathbf{e}_t = \mathbf{y}_t - A_t\,\hat{\mathbf{x}}_t(t-1) \quad \text{and} \quad K_t = E[\mathbf{d}_t\mathbf{e}_t']\,\big(E[\mathbf{e}_t\mathbf{e}_t']\big)^{-1}$$

Now

$$E[\mathbf{d}_t\mathbf{e}_t'] = E\Big[\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)\big(A_t[\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)] + \mathbf{v}_t\big)'\Big]$$

$$= E\Big[\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)'\Big]A_t' = \Sigma_{t,t}(t-1)\,A_t'$$

Also

$$E[\mathbf{e}_t\mathbf{e}_t'] = E\Big[\big(A_t[\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)] + \mathbf{v}_t\big)\big(A_t[\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)] + \mathbf{v}_t\big)'\Big]$$

$$= A_t\,E\Big[\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)'\Big]A_t' + E[\mathbf{v}_t\mathbf{v}_t'] = A_t\,\Sigma_{t,t}(t-1)\,A_t' + \Sigma_v,$$

hence

$$K_t = \Sigma_{t,t}(t-1)\,A_t'\,\big[A_t\,\Sigma_{t,t}(t-1)\,A_t' + \Sigma_v\big]^{-1},$$

proving (2).

Thus

$$\hat{\mathbf{x}}_t(t-1) = B\,\hat{\mathbf{x}}_{t-1}(t-1) \quad (4), \qquad \hat{\mathbf{x}}_t(t) = \hat{\mathbf{x}}_t(t-1) + K_t\big[\mathbf{y}_t - A_t\,\hat{\mathbf{x}}_t(t-1)\big] \quad (5),$$

where

$$K_t = \Sigma_{t,t}(t-1)\,A_t'\,\big[A_t\,\Sigma_{t,t}(t-1)\,A_t' + \Sigma_v\big]^{-1} \quad (2).$$

Also

$$\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1) = B\mathbf{x}_{t-1} + \mathbf{u}_t - B\,\hat{\mathbf{x}}_{t-1}(t-1) = B\big[\mathbf{x}_{t-1} - \hat{\mathbf{x}}_{t-1}(t-1)\big] + \mathbf{u}_t,$$

so

$$\Sigma_{t,t}(t-1) = E\Big[\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)\big(\mathbf{x}_t - \hat{\mathbf{x}}_t(t-1)\big)' \,\Big|\, \mathbf{y}_0, \ldots, \mathbf{y}_{t-1}\Big] = B\,\Sigma_{t-1,t-1}(t-1)\,B' + \Sigma_u,$$

proving (1). The proof that

$$\Sigma_{t,t}(t) = \Sigma_{t,t}(t-1) - K_t A_t\,\Sigma_{t,t}(t-1) \quad (3)$$

will be left as an exercise.
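A Monte Carlo sanity check of the identity $K_t = E[\mathbf{d}_t\mathbf{e}_t'](E[\mathbf{e}_t\mathbf{e}_t'])^{-1}$ for a scalar model (all names and parameter values are ours; we take the first step from a prior with mean 0 and variance 1, so no past observations enter):

```python
import numpy as np

# Scalar model: x_t = b x_{t-1} + u_t,  y_t = x_t + v_t  (values illustrative)
rng = np.random.default_rng(4)
b, var_u, var_v = 0.8, 1.0, 0.5
n = 200_000

x_prev = rng.normal(0, 1, n)             # x_{t-1} with prior mean 0, variance 1
x = b * x_prev + rng.normal(0, np.sqrt(var_u), n)
y = x + rng.normal(0, np.sqrt(var_v), n)

x_pred = b * 0.0                          # x̂_t(t-1) with x̂_{t-1}(t-1) = 0
d = x - x_pred                            # d_t
e = y - x_pred                            # e_t (here A = 1)
K_mc = np.mean(d * e) / np.mean(e * e)    # E[d e'] (E[e e'])^{-1}

S = b**2 * 1.0 + var_u                    # Σ_{t,t}(t-1) via equation (1)
print(K_mc, S / (S + var_v))              # agrees with formula (2)
```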

Example:

Suppose we have an AR(2) time series

$$x_t = \beta_1 x_{t-1} + \beta_2 x_{t-2} + u_t$$

What we observe is the time series

$$y_t = x_t + v_t$$

where $\{u_t : t \in T\}$ and $\{v_t : t \in T\}$ are white noise time series with standard deviations $\sigma_u$ and $\sigma_v$.

This model can be expressed as a state-space model by defining:

$$\mathbf{x}_t = \begin{bmatrix} x_t \\ x_{t-1} \end{bmatrix}, \qquad \mathbf{u}_t = \begin{bmatrix} u_t \\ 0 \end{bmatrix}, \qquad B = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix}$$

Then the equation

$$x_t = \beta_1 x_{t-1} + \beta_2 x_{t-2} + u_t$$

can be written

$$\mathbf{x}_t = B \mathbf{x}_{t-1} + \mathbf{u}_t$$

The equation

$$y_t = x_t + v_t$$

can be written

$$y_t = \begin{bmatrix} 1 & 0 \end{bmatrix} \mathbf{x}_t + v_t = A \mathbf{x}_t + v_t \quad \text{with } A = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

Note:

$$\Sigma_v = \sigma_v^2 \qquad \text{and} \qquad \Sigma_u = \begin{bmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{bmatrix}$$
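Continuing the sketch given after the Kalman-equation summary (it reuses the assumed `kalman_filter` function from that sketch; the $\beta$ and $\sigma$ values are illustrative only), the matrices for this model are:

```python
import numpy as np

beta1, beta2 = 0.6, 0.3
sigma_u, sigma_v = 1.0, 0.5

B = np.array([[beta1, beta2],
              [1.0,   0.0 ]])
A = np.array([[1.0, 0.0]])
Sigma_u = np.array([[sigma_u**2, 0.0],
                    [0.0,        0.0]])
Sigma_v = np.array([[sigma_v**2]])

# Simulate the AR(2)-plus-noise model, then filter it
rng = np.random.default_rng(3)
x = np.zeros(2)
ys = []
for _ in range(500):
    x = B @ x + np.array([rng.normal(0, sigma_u), 0.0])
    ys.append(A @ x + rng.normal(0, sigma_v, size=1))

out = kalman_filter(ys, A, B, Sigma_u, Sigma_v,
                    mu0=np.zeros(2), Sigma0=np.eye(2))
```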

The Kalman equations

In the example, let

$$\Sigma_{t,t}(t-1) = \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix} \qquad \text{and} \qquad \Sigma_{t,t}(t) = \begin{bmatrix} r_{11}^t & r_{12}^t \\ r_{12}^t & r_{22}^t \end{bmatrix}$$

We now work through the five Kalman equations in turn.

1. $\Sigma_{t,t}(t-1) = B\,\Sigma_{t-1,t-1}(t-1)\,B' + \Sigma_u$:

$$\begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix} = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} r_{11}^{t-1} & r_{12}^{t-1} \\ r_{12}^{t-1} & r_{22}^{t-1} \end{bmatrix} \begin{bmatrix} \beta_1 & 1 \\ \beta_2 & 0 \end{bmatrix} + \begin{bmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{bmatrix}$$

giving

$$s_{11}^t = \beta_1^2 r_{11}^{t-1} + 2\beta_1\beta_2 r_{12}^{t-1} + \beta_2^2 r_{22}^{t-1} + \sigma_u^2,$$

$$s_{12}^t = \beta_1 r_{11}^{t-1} + \beta_2 r_{12}^{t-1},$$

$$s_{22}^t = r_{11}^{t-1}.$$

2. $K_t = \Sigma_{t,t}(t-1)\,A'\,[A\,\Sigma_{t,t}(t-1)\,A' + \Sigma_v]^{-1}$:

$$K_t = \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \left( \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \sigma_v^2 \right)^{-1} = \begin{bmatrix} \dfrac{s_{11}^t}{s_{11}^t + \sigma_v^2} \\[2ex] \dfrac{s_{12}^t}{s_{11}^t + \sigma_v^2} \end{bmatrix}$$

3. $\Sigma_{t,t}(t) = \Sigma_{t,t}(t-1) - K_t A\,\Sigma_{t,t}(t-1)$:

$$\begin{bmatrix} r_{11}^t & r_{12}^t \\ r_{12}^t & r_{22}^t \end{bmatrix} = \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix} - K_t \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix}$$

giving

$$r_{11}^t = s_{11}^t - \frac{(s_{11}^t)^2}{s_{11}^t + \sigma_v^2}, \qquad r_{12}^t = s_{12}^t - \frac{s_{11}^t\,s_{12}^t}{s_{11}^t + \sigma_v^2}, \qquad r_{22}^t = s_{22}^t - \frac{(s_{12}^t)^2}{s_{11}^t + \sigma_v^2}.$$

4. $\hat{\mathbf{x}}_t(t-1) = B\,\hat{\mathbf{x}}_{t-1}(t-1)$:

$$\begin{bmatrix} \hat{x}_t(t-1) \\ \hat{x}_{t-1}(t-1) \end{bmatrix} = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \hat{x}_{t-1}(t-1) \\ \hat{x}_{t-2}(t-1) \end{bmatrix}$$

so that

$$\hat{x}_t(t-1) = \beta_1\,\hat{x}_{t-1}(t-1) + \beta_2\,\hat{x}_{t-2}(t-1).$$

5. $\hat{\mathbf{x}}_t(t) = \hat{\mathbf{x}}_t(t-1) + K_t\big[y_t - A\,\hat{\mathbf{x}}_t(t-1)\big]$:

$$\begin{bmatrix} \hat{x}_t(t) \\ \hat{x}_{t-1}(t) \end{bmatrix} = \begin{bmatrix} \hat{x}_t(t-1) \\ \hat{x}_{t-1}(t-1) \end{bmatrix} + K_t\left( y_t - \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \hat{x}_t(t-1) \\ \hat{x}_{t-1}(t-1) \end{bmatrix} \right)$$

so that

$$\hat{x}_t(t) = \hat{x}_t(t-1) + \frac{s_{11}^t}{s_{11}^t + \sigma_v^2}\,\big[y_t - \hat{x}_t(t-1)\big],$$

$$\hat{x}_{t-1}(t) = \hat{x}_{t-1}(t-1) + \frac{s_{12}^t}{s_{11}^t + \sigma_v^2}\,\big[y_t - \hat{x}_t(t-1)\big].$$
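The scalar recursions above can be coded directly. A sketch of one complete filtering step (Python; names ours; on the next call, pass `x_pred_next` as `x_pred` and `x_filt` as `x_lag`):

```python
def ar2_kalman_step(x_pred, x_lag, s11, s12, s22, y,
                    beta1, beta2, var_u, var_v):
    """One step of the scalar recursions for the AR(2)-plus-noise model.
    Inputs: x̂_t(t-1), x̂_{t-1}(t-1), entries of Σ_{t,t}(t-1), and y_t."""
    d = s11 + var_v
    # Equation 5: measurement update of the state estimates
    x_filt = x_pred + (s11 / d) * (y - x_pred)
    x_lag_filt = x_lag + (s12 / d) * (y - x_pred)
    # Equation 3: measurement update of the covariance entries
    r11 = s11 - s11**2 / d
    r12 = s12 - s11 * s12 / d
    r22 = s22 - s12**2 / d
    # Equations 4 and 1: time update for the next step
    x_pred_next = beta1 * x_filt + beta2 * x_lag_filt
    s11n = beta1**2 * r11 + 2 * beta1 * beta2 * r12 + beta2**2 * r22 + var_u
    s12n = beta1 * r11 + beta2 * r12
    s22n = r11
    return x_filt, x_lag_filt, x_pred_next, s11n, s12n, s22n
```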

Kalman Filtering (smoothing):

Now consider finding

$$\hat{\mathbf{x}}_t(T) = E^*[\mathbf{x}_t \mid \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_T]$$

where, as before,

$$\Sigma_{t,u}(s) = E\big[(\mathbf{x}_t - \hat{\mathbf{x}}_t(s))(\mathbf{x}_u - \hat{\mathbf{x}}_u(s))'\big]$$

These can be found by successive backward recursions for $t = T, T-1, \ldots, 2, 1$:

$$\hat{\mathbf{x}}_{t-1}(T) = \hat{\mathbf{x}}_{t-1}(t-1) + J_{t-1}\big[\hat{\mathbf{x}}_t(T) - \hat{\mathbf{x}}_t(t-1)\big]$$

where

$$J_{t-1} = \Sigma_{t-1,t-1}(t-1)\,B'\,\big[\Sigma_{t,t}(t-1)\big]^{-1}$$

The covariance matrices satisfy the recursion

$$\Sigma_{t-1,t-1}(T) = \Sigma_{t-1,t-1}(t-1) + J_{t-1}\big[\Sigma_{t,t}(T) - \Sigma_{t,t}(t-1)\big]\,J_{t-1}'$$

The backward recursions

1. $\hat{\mathbf{x}}_{t-1}(T) = \hat{\mathbf{x}}_{t-1}(t-1) + J_{t-1}\big[\hat{\mathbf{x}}_t(T) - \hat{\mathbf{x}}_t(t-1)\big]$

2. $J_{t-1} = \Sigma_{t-1,t-1}(t-1)\,B'\,\big[\Sigma_{t,t}(t-1)\big]^{-1}$

3. $\Sigma_{t-1,t-1}(T) = \Sigma_{t-1,t-1}(t-1) + J_{t-1}\big[\Sigma_{t,t}(T) - \Sigma_{t,t}(t-1)\big]\,J_{t-1}'$

In the example:

$$B = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix}, \qquad \Sigma_{t,t}(t-1) = \begin{bmatrix} s_{11}^t & s_{12}^t \\ s_{12}^t & s_{22}^t \end{bmatrix}, \qquad \Sigma_{t,t}(t) = \begin{bmatrix} r_{11}^t & r_{12}^t \\ r_{12}^t & r_{22}^t \end{bmatrix}$$

The quantities $\hat{\mathbf{x}}_t(t)$, $\hat{\mathbf{x}}_t(t-1)$, $\Sigma_{t,t}(t-1)$, and $\Sigma_{t,t}(t)$ are calculated in the forward (filtering) recursion.
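A backward-pass sketch (Python/NumPy; the function name `kalman_smoother` is ours) that consumes the arrays produced by the `kalman_filter` sketch given earlier:

```python
import numpy as np

def kalman_smoother(B, xs_pred, Ps_pred, xs_filt, Ps_filt):
    """Backward recursions: smoothed means x̂_t(T) and covariances Σ_{t,t}(T).
    List index i holds the quantities for time i+1 from the forward pass."""
    T = len(xs_filt)
    xs_smooth = [None] * T
    Ps_smooth = [None] * T
    xs_smooth[-1], Ps_smooth[-1] = xs_filt[-1], Ps_filt[-1]
    for t in range(T - 1, 0, -1):
        # J_{t-1} = Σ_{t-1,t-1}(t-1) B' [Σ_{t,t}(t-1)]^{-1}
        J = Ps_filt[t - 1] @ B.T @ np.linalg.inv(Ps_pred[t])
        xs_smooth[t - 1] = xs_filt[t - 1] + J @ (xs_smooth[t] - xs_pred[t])
        Ps_smooth[t - 1] = Ps_filt[t - 1] + J @ (Ps_smooth[t] - Ps_pred[t]) @ J.T
    return xs_smooth, Ps_smooth
```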