HMMs and Particle Filters
Page 1

HMMs and Particle Filters

Page 2

Observations and Latent States

Markov models don’t get used much in AI.

The reason is that Markov models assume that you know exactly which state you are in at each time step.

This is rarely true for AI agents.

Instead, we will say that the agent has a set of possible latent states: states that are not directly observed or known to the agent.

In addition, the agent has sensors that allow it to sense some aspects of the environment by taking measurements, or observations.

Page 3

Hidden Markov Models

Suppose you are the parent of a college student, and would like to know how studious your child is.

You can't observe them at all times, but you can call periodically and see whether your child answers.

[Figure: the HMM for this scenario. Hidden states H1, H2, H3 each take the value Sleep or Study, with transition probabilities Sleep→Sleep 0.6, Sleep→Study 0.4, Study→Sleep 0.5, Study→Study 0.5. Each hidden state Hi produces an observation Oi: does the child answer the call or not?]

Page 4

Hidden Markov Models

[Figure: the HMM drawn as a Bayes net: H1 → H2 → H3 → …, with an observation arc Hi → Oi at each step.]

Transition probabilities (the Study rows are the complements of the Sleep rows shown):

H1     H2     P(H2|H1)
Sleep  Sleep  0.6
Study  Sleep  0.5

H2     H3     P(H3|H2)
Sleep  Sleep  0.6
Study  Sleep  0.5

H3     H4     P(H4|H3)
Sleep  Sleep  0.6
Study  Sleep  0.5

Emission probabilities:

H1     O1     P(O1|H1)
Sleep  Ans    0.1
Study  Ans    0.8

H2     O2     P(O2|H2)
Sleep  Ans    0.1
Study  Ans    0.8

H3     O3     P(O3|H3)
Sleep  Ans    0.1
Study  Ans    0.8

Prior:

H1     P(H1)
Sleep  0.5
Study  0.5

Here’s the same model, with probabilities in tables.
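The tables can also be written down directly as a small data structure. A minimal sketch in Python (the dictionary names are my own; the unlisted Study and NoAns rows are filled in as complements):

```python
# The Sleep/Study HMM from the tables above, as plain Python dictionaries.
# Rows not shown on the slide (Study, NoAns) are the complements of the shown ones.
PRIOR = {"Sleep": 0.5, "Study": 0.5}                 # P(H1)
TRANS = {"Sleep": {"Sleep": 0.6, "Study": 0.4},      # P(H_t+1 | H_t)
         "Study": {"Sleep": 0.5, "Study": 0.5}}
EMIT = {"Sleep": {"Ans": 0.1, "NoAns": 0.9},         # P(O_t | H_t)
        "Study": {"Ans": 0.8, "NoAns": 0.2}}
```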

Page 5

Hidden Markov Models

HMMs (and Markov models) are a special type of Bayes Net. Everything you have learned about BNs applies here.

[The same Bayes net and probability tables as on Page 4.]

Page 6

Quick Review of BNs for HMMs

[Figure: the two building blocks of an HMM as a Bayes net: an emission arc H1 → O1, and a transition arc H1 → H2.]

Page 7

Hidden Markov Models

Suppose a parent calls and gets an answer at time step 1. What is P(H1=Sleep|O1=Ans)?

Notice: before the observation, the prior P(Sleep) was 0.5. By making a call and getting an answer, the parent's belief in Sleep drops to P(Sleep | O1=Ans) = 0.111.

[Figure: the H1 → O1 fragment, with the prior, transition, and emission tables from Page 4.]
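As a quick check of that 0.111, here is the Bayes' rule arithmetic (a sketch in Python; the variable names are my own):

```python
# P(H1=Sleep | O1=Ans) via Bayes' rule, with numbers from the tables above.
p_sleep, p_study = 0.5, 0.5          # P(H1)
p_ans_sleep, p_ans_study = 0.1, 0.8  # P(O1=Ans | H1)

joint_sleep = p_sleep * p_ans_sleep              # P(Sleep, Ans) = 0.05
evidence = joint_sleep + p_study * p_ans_study   # P(Ans)        = 0.45
print(joint_sleep / evidence)                    # 0.111...
```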

Page 8

Hidden Markov Models

Suppose a parent calls and gets an answer at time step 2. What is P(H2=Sleep|O2=Ans)?

[Figure: the H1 → H2 chain with observations O1 and O2, and the tables from Page 4.]
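One way to work this out is to first predict H2 from the transition model, then update on the observation. A sketch of the arithmetic (not from the slides):

```python
# P(H2=Sleep | O2=Ans): first predict H2, then condition on O2.
p_h2_sleep = 0.5 * 0.6 + 0.5 * 0.5   # prediction step: P(H2=Sleep) = 0.55
p_h2_study = 1.0 - p_h2_sleep        #                  P(H2=Study) = 0.45

joint_sleep = p_h2_sleep * 0.1                # P(H2=Sleep, O2=Ans) = 0.055
evidence = joint_sleep + p_h2_study * 0.8     # P(O2=Ans)           = 0.415
print(joint_sleep / evidence)                 # approx 0.133
```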

Page 9

Quiz: Hidden Markov Models

[Figure: the H1 → H2 chain with observations O1 and O2, and the tables from Page 4.]

Suppose a parent calls twice, once at time step 1 and once at time step 2. The first time, the child does not answer, and the second time the child does.

Now what is P(H2=Sleep), given these two observations?

Page 10

Answer: Hidden Markov Models

[Figure: the H1 → H2 chain with observations O1 and O2, and the tables from Page 4.]

Suppose a parent calls twice, once at time step 1 and once at time step 2. The first time, the child does not answer, and the second time the child does.

Now what is P(H2=Sleep), given these two observations?

Numerator: P(O1=NoAns, O2=Ans, H2=Sleep)
= [P(H1=Sleep) P(NoAns|Sleep) P(Sleep|Sleep) + P(H1=Study) P(NoAns|Study) P(Sleep|Study)] × P(Ans|Sleep)
= (0.5 × 0.9 × 0.6 + 0.5 × 0.2 × 0.5) × 0.1 = 0.032

Denominator: P(O1=NoAns, O2=Ans), the same sum taken over both values of H2
= 0.032 + (0.5 × 0.9 × 0.4 + 0.5 × 0.2 × 0.5) × 0.8 = 0.032 + 0.184 = 0.216

So P(H2=Sleep | O1=NoAns, O2=Ans) = 0.032 / 0.216 ≈ 0.148.

It's a pain to calculate by enumeration.
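The same answer can be checked by brute-force enumeration over all (H1, H2) assignments; a short Python sketch (the dictionary names are my own):

```python
# Enumerate all (H1, H2) assignments to get P(H2=Sleep | O1=NoAns, O2=Ans).
PRIOR = {"Sleep": 0.5, "Study": 0.5}
TRANS = {"Sleep": {"Sleep": 0.6, "Study": 0.4},
         "Study": {"Sleep": 0.5, "Study": 0.5}}
EMIT = {"Sleep": {"Ans": 0.1, "NoAns": 0.9},
        "Study": {"Ans": 0.8, "NoAns": 0.2}}

numerator = denominator = 0.0
for h1 in ("Sleep", "Study"):
    for h2 in ("Sleep", "Study"):
        p = PRIOR[h1] * EMIT[h1]["NoAns"] * TRANS[h1][h2] * EMIT[h2]["Ans"]
        denominator += p          # every assignment contributes to the evidence
        if h2 == "Sleep":
            numerator += p        # only H2=Sleep assignments contribute on top
print(numerator / denominator)    # 0.032 / 0.216, approx 0.148
```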

Page 11

Quiz: Complexity of Enumeration for HMMs

Suppose we have an HMM with T time steps.

To compute any query like P(Hi|O1, …, OT), we need to compute P(O1, …, OT).

How many terms are in this sum, if there are 2 possible values for each Hi?

Page 12

Answer: Complexity of Enumeration for HMMs

Suppose we have an HMM with T time steps.

To compute any query like P(Hi|O1, …, OT), we need to compute P(O1, …, OT).

How many terms are in this sum, if there are 2 possible values for each Hi?

There are 2^T terms in this sum, so regular enumeration is an O(2^T) algorithm.

This makes it intractable, for example, for sentences with 20 words (2^20 is about a million terms), or for DNA sequences with hundreds of millions of base pairs.

Page 13

Specialized Inference Algorithm: Dynamic Programming

There is a fairly simple way to compute this sum exactly in O(T), or linear time, using dynamic programming.

Essentially, this works by computing partial sums, storing them, and re-using them during calculations of the sums for longer sequences.

This is called the forward algorithm.

We won’t cover this here, but you can see the book or online tutorials if you are interested.
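For the curious, here is a minimal sketch of the forward algorithm in Python for the Sleep/Study model from the earlier slides (the function and variable names are my own):

```python
# A minimal forward-algorithm sketch for the Sleep/Study HMM. It computes
# P(O1, ..., OT) in O(T) time by carrying the partial sums
# alpha[h] = P(H_t = h, O_1, ..., O_t) from one step to the next.

STATES = ("Sleep", "Study")
PRIOR = {"Sleep": 0.5, "Study": 0.5}                 # P(H1)
TRANS = {"Sleep": {"Sleep": 0.6, "Study": 0.4},      # P(H_t+1 | H_t)
         "Study": {"Sleep": 0.5, "Study": 0.5}}
EMIT = {"Sleep": {"Ans": 0.1, "NoAns": 0.9},         # P(O_t | H_t)
        "Study": {"Ans": 0.8, "NoAns": 0.2}}

def forward(observations):
    """Return P(O1, ..., OT) for a sequence of observations."""
    # Base case: alpha[h] = P(H1 = h) * P(O1 | H1 = h)
    alpha = {h: PRIOR[h] * EMIT[h][observations[0]] for h in STATES}
    # Recursive case: alpha'[h] = (sum_g alpha[g] * P(h | g)) * P(O_t | h)
    for obs in observations[1:]:
        alpha = {h: sum(alpha[g] * TRANS[g][h] for g in STATES) * EMIT[h][obs]
                 for h in STATES}
    return sum(alpha.values())

# The quiz from Pages 9-10: no answer at step 1, then an answer at step 2.
print(forward(["NoAns", "Ans"]))   # 0.216, matching the enumeration's denominator
```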

Page 15

Particle Filter Demos

Real robot localization with particle filter: https://www.youtube.com/watch?v=H0G1yslM5rc&feature=player_embedded

1-dimensional case: https://www.youtube.com/watch?v=qQQYkvS5CzU&feature=player_embedded

Page 16

Particle Filter Algorithm

Inputs:
– Set of particles S, each with location si.loc and weight si.w
– Control vector u (where the robot should move next)
– Measurement vector z (sensor readings)

Outputs:
– New set of particles S', for the next iteration

Page 17

Particle Filter Algorithm

Init: S' = {} (the empty set)

For i = 1 to N (N is the number of particles in S):
– Pick a particle sj from S randomly (with replacement), in proportion to the weights s.w
– Create a new particle s'
– Sample a new location x from the motion model P(x | sj.loc, u)
– Set s'.loc = x
– Set s'.w = P(z | s'.loc), the likelihood of the sensor readings at the new location
– Set S' = S' ∪ {s'}
End For

For each particle s' in S':
– Set s'.w = s'.w / W, where W is the sum of the weights in S' (normalize)
End For
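Here is a minimal runnable sketch of one iteration of this loop in Python, for a robot on a line. The Gaussian motion and measurement models, the landmark position, and the noise parameters are all illustrative assumptions, since the slides leave those models abstract:

```python
import math
import random

# One particle filter iteration for a 1-D robot. The Gaussian motion and
# measurement models below are illustrative assumptions, not from the slides.

MOTION_NOISE = 0.5    # std. dev. of movement error (assumed)
SENSOR_NOISE = 1.0    # std. dev. of range-sensor error (assumed)
LANDMARK = 10.0       # the sensor reads the distance to this landmark (assumed)

def gaussian(x, mu, sigma):
    """Density of a Normal(mu, sigma^2) distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def particle_filter_step(particles, u, z):
    """particles: list of (loc, w) pairs; u: commanded move; z: measured
    distance to LANDMARK. Returns the new particle set S'."""
    weights = [w for _, w in particles]
    new_particles = []
    for _ in range(len(particles)):
        # Pick a particle randomly, with replacement, in proportion to its weight.
        loc, _ = random.choices(particles, weights=weights, k=1)[0]
        # Sample the new location from the motion model P(loc' | loc, u).
        new_loc = loc + u + random.gauss(0.0, MOTION_NOISE)
        # Set the new weight to the measurement likelihood P(z | loc').
        w = gaussian(z, abs(LANDMARK - new_loc), SENSOR_NOISE)
        new_particles.append((new_loc, w))
    # Normalize the weights so they sum to 1.
    total = sum(w for _, w in new_particles)
    return [(loc, w / total) for loc, w in new_particles]

# Example: 1000 particles spread uniformly, then one move-and-sense iteration.
particles = [(random.uniform(0.0, 10.0), 1.0) for _ in range(1000)]
particles = particle_filter_step(particles, u=1.0, z=4.0)
```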

