CSE-573 Artificial Intelligence: Partially-Observable MDPs (POMDPs)
Transcript
Page 1: CSE-573  Artificial Intelligence

CSE-573 Artificial Intelligence

Partially-Observable MDPs (POMDPs)

Page 2: CSE-573  Artificial Intelligence

Todo

Key slides don’t have Y axis labeled – NOT value

Page 3: CSE-573  Artificial Intelligence

Classical Planning

What action next?

[Diagram: an agent receives Percepts from and sends Actions to the Environment. Labels: Static, Fully Observable, Deterministic, Perfect (percepts), Predictable, Discrete.]

Page 4: CSE-573  Artificial Intelligence

Stochastic Planning (MDPs, Reinforcement Learning)

What action next?

[Diagram: the same agent/environment loop. Labels: Static, Fully Observable, Stochastic, Perfect (percepts), Unpredictable, Discrete.]

Page 5: CSE-573  Artificial Intelligence

Partially-Observable MDPs

What action next?

[Diagram: the same agent/environment loop. Labels: Static, Partially Observable, Stochastic, Noisy (percepts), Unpredictable, Discrete.]

Page 6: CSE-573  Artificial Intelligence

6

Classical Planning

[Figure: grid world with two terminal squares, heaven (reward +100) and hell (reward -100).]

• World deterministic
• State observable
• Sequential plan

Page 7: CSE-573  Artificial Intelligence

7

MDP-Style Planning

[Figure: the same heaven/hell grid world.]

• World stochastic
• State observable
• Policy

Page 8: CSE-573  Artificial Intelligence

8

Stochastic, Partially Observable

[Figure: grid world with a sign square; it is uncertain which side is heaven and which is hell.]

Page 9: CSE-573  Artificial Intelligence

Markov Decision Process (MDP)

• S: set of states
• A: set of actions
• Pr(s'|s,a): transition model
• R(s,a,s'): reward model
• γ: discount factor
• s0: start state

Page 10: CSE-573  Artificial Intelligence

Partially-Observable MDP

• S: set of states
• A: set of actions
• Pr(s'|s,a): transition model
• R(s,a,s'): reward model
• γ: discount factor
• s0: start state
• E: set of possible evidence (observations)
• Pr(e|s): observation model
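For concreteness, the tuple above can be written out as plain data; a minimal sketch in Python (the class and field names are illustrative, not from the course code):

from typing import Dict, List, NamedTuple, Tuple

class POMDP(NamedTuple):
    states: List[str]                              # S
    actions: List[str]                             # A
    transition: Dict[Tuple[str, str, str], float]  # Pr(s'|s,a), keyed by (s, a, s')
    reward: Dict[Tuple[str, str, str], float]      # R(s, a, s')
    gamma: float                                   # discount factor
    start: str                                     # s0
    evidence: List[str]                            # E, the possible observations
    observe: Dict[Tuple[str, str], float]          # Pr(e|s), keyed by (s, e)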

Page 11: CSE-573  Artificial Intelligence

11

Belief State

[Figure: two candidate worlds, each containing the sign, with probability 50% each.]

State of agent's mind, not just of the world

Note (POMDP): the belief probabilities sum to 1

Page 12: CSE-573  Artificial Intelligence

Planning in Belief Space

[Figure: belief-space transitions. From the initial 50%/50% belief, moving N, W, or E leads to other 50%/50% beliefs; expected reward 0.]

For now, assume movement is deterministic

Page 13: CSE-573  Artificial Intelligence

Partially-Observable MDP

• S: set of states
• A: set of actions
• Pr(s'|s,a): transition model
• R(s,a,s'): reward model
• γ: discount factor
• s0: start state
• E: set of possible evidence (observations)
• Pr(e|s): observation model

Page 14: CSE-573  Artificial Intelligence

Evidence Model

S = {swb, seb, swm, sem, swul, seul, swur, seur}
E = {heat}
Pr(e|s):
  Pr(heat | seb) = 1.0
  Pr(heat | swb) = 0.2
  Pr(heat | sother) = 0.0

[Figure: grid world with the sign, marking the states seb and swb.]

Page 15: CSE-573  Artificial Intelligence

Planning in Belief Space

[Figure: belief update. Starting from the 50%/50% belief, the agent moves S and then senses. Observing heat yields the belief 17%/83%; observing no heat yields 100%/0%.]

Pr(heat | seb) = 1.0
Pr(heat | swb) = 0.2
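A minimal sketch of this update in Python, using the Pr(heat | s) values above and a 50%/50% prior over the two candidate positions swb and seb (the helper name is illustrative):

def bayes_update(prior, likelihood):
    # Posterior over states: P(s | e) is proportional to P(e | s) * P(s).
    unnormalized = {s: likelihood[s] * p for s, p in prior.items()}
    total = sum(unnormalized.values())        # this is P(e)
    return {s: p / total for s, p in unnormalized.items()}

prior = {"swb": 0.5, "seb": 0.5}
p_heat = {"swb": 0.2, "seb": 1.0}             # Pr(heat | s) from the slide

print(bayes_update(prior, p_heat))            # {'swb': ~0.17, 'seb': ~0.83}
p_no_heat = {s: 1.0 - p for s, p in p_heat.items()}
print(bayes_update(prior, p_no_heat))         # {'swb': 1.0, 'seb': 0.0}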

Page 16: CSE-573  Artificial Intelligence

Objective of a Fully Observable MDP

Find a policy π: S → A

which maximizes expected discounted reward

• given an infinite horizon

• assuming full observability

Page 17: CSE-573  Artificial Intelligence

Objective of a POMDP

Find a policy π: BeliefStates(S) → A
(A belief state is a probability distribution over states.)

which maximizes expected discounted reward

• given an infinite horizon

• assuming partial & noisy observability

Page 18: CSE-573  Artificial Intelligence

Planning in HW 4
• MAP estimate
• Now “know” the state
• Solve the MDP

18

Page 19: CSE-573  Artificial Intelligence

Best plan to eat final food?

Page 20: CSE-573  Artificial Intelligence

Best plan to eat final food?

Page 21: CSE-573  Artificial Intelligence

Problem with Planning from MAP Estimate

Best action for belief state over k worlds may not be the best action in any one of those worlds

[Figure: two belief states, 49%/51% and 10%/90%.]
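A tiny hypothetical illustration of this point (the action names and payoffs below are invented for the example, not taken from the homework):

# Two possible worlds with belief 49% / 51%, as in the figure.
belief = {"world_A": 0.49, "world_B": 0.51}

# Hypothetical payoffs of each action in each world.
payoff = {
    "go_left":  {"world_A": +100, "world_B": -100},  # best if world_A is true
    "go_right": {"world_A": -100, "world_B": +100},  # best if world_B is true
    "wait":     {"world_A": +10,  "world_B": +10},   # best in neither world alone
}

expected = {a: sum(belief[w] * r for w, r in v.items()) for a, v in payoff.items()}
print(expected)                          # go_left: -2.0, go_right: 2.0, wait: 10.0
print(max(expected, key=expected.get))   # 'wait': optimal for the belief,
                                         # though optimal in neither single world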

Page 22: CSE-573  Artificial Intelligence

22

POMDPs
In POMDPs we apply the very same idea as in MDPs. Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states.

Let b be the belief of the agent about the state under consideration.

POMDPs compute a value function over belief space:
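The slide's equation is not in the transcript; the standard belief-space backup it refers to has the form (with r(b,u) the expected payoff defined later, on slide 27, and p(b' | u, b) the induced belief-transition probability):

V_T(b) = \max_u \Big[ r(b,u) + \gamma \int V_{T-1}(b')\, p(b' \mid u, b)\, db' \Big]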

Page 23: CSE-573  Artificial Intelligence

23

Problems
Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution.
This is problematic, since probability distributions are continuous.
How many belief states are there?
For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.
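Concretely, such a piecewise-linear convex value function can be stored as a finite set of linear components (one vector of per-state values per piece); a minimal sketch in Python (names are illustrative, not from the slides):

import numpy as np

def value(belief, alpha_vectors):
    # V(b) = max over linear components of alpha · b  (piecewise linear and convex)
    return max(float(np.dot(alpha, belief)) for alpha in alpha_vectors)

# Two-state illustration: a belief is (p1, 1 - p1).
alphas = [np.array([-100.0, 100.0]),
          np.array([ 100.0, -50.0])]
print(value(np.array([0.4, 0.6]), alphas))   # max(20.0, 10.0) = 20.0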

Page 24: CSE-573  Artificial Intelligence

24

An Illustrative Example

[Figure: the two-state example. States x1, x2; terminal actions u1, u2; sensing action u3; measurements z1, z2.
Payoffs: r(x1, u1) = -100, r(x2, u1) = +100, r(x1, u2) = +100, r(x2, u2) = -50.
Action u3 switches the state with probability 0.8 and leaves it unchanged with probability 0.2.
Measurement model: p(z1 | x1) = 0.7, p(z2 | x1) = 0.3, p(z1 | x2) = 0.3, p(z2 | x2) = 0.7.]

Page 25: CSE-573  Artificial Intelligence

25

What is Belief Space?

[Figure: the same example parameters as slide 24, plus a plot of Value against P(state = x1): the belief space of this two-state problem is the single number p1 = P(state = x1).]

Page 26: CSE-573  Artificial Intelligence

26

The Parameters of the Example
The actions u1 and u2 are terminal actions.
The action u3 is a sensing action that potentially leads to a state transition.
The horizon is finite and γ = 1.

Page 27: CSE-573  Artificial Intelligence

27

Payoff in POMDPs
In MDPs, the payoff (or return) depends on the state of the system.
In POMDPs, however, the true state is not exactly known.
Therefore, we compute the expected payoff by integrating over all states:
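The expected-payoff equation referred to here is not in the transcript; its standard form is:

r(b, u) = E_x[ r(x, u) ] = \int r(x, u)\, b(x)\, dx = \sum_j r(x_j, u)\, p_j   (for a finite state space)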

Page 28: CSE-573  Artificial Intelligence

28

Payoffs in Our Example
If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100.
If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100.
In between, it is the linear combination of the extreme values, weighted by the probabilities:

[Figure: example parameters, as on slide 24.]

r(b, u1) = -100 p1 + 100 (1 - p1) = 100 - 200 p1

Page 29: CSE-573  Artificial Intelligence

29

Payoffs in Our Example
If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100.
If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100.
In between, it is the linear combination of the extreme values, weighted by the probabilities:

[Figure: example parameters, as on slide 24.]

r(b, u1) = -100 p1 + 100 (1 - p1) = 100 - 200 p1
r(b, u2) = +100 p1 - 50 (1 - p1) = 150 p1 - 50

Page 30: CSE-573  Artificial Intelligence

30

Payoffs in Our Example (2)

Page 31: CSE-573  Artificial Intelligence

31

The Resulting Policy for T=1
Given a finite POMDP with time horizon T = 1, use V1(b) to determine the optimal policy.
Corresponding value:
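The "corresponding value" itself is missing from the transcript; using the expected payoffs derived above, and assuming the sensing action u3 contributes no competitive immediate payoff (its component is pruned on slide 33), it takes the form:

V_1(b) = \max_u r(b, u) = \max\{\, 100 - 200\,p_1,\; 150\,p_1 - 50 \,\}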

Page 32: CSE-573  Artificial Intelligence

32

Piecewise Linearity, Convexity
The resulting value function V1(b) is the maximum of the three functions at each point.
It is piecewise linear and convex.

Page 33: CSE-573  Artificial Intelligence

33

Pruning

With V1(b), note that only the first two components contribute.

The third component can be safely pruned

Page 34: CSE-573  Artificial Intelligence

34

Increasing the Time Horizon
Assume the robot can make an observation before deciding on an action.

[Figure: plot of V1(b).]

Page 35: CSE-573  Artificial Intelligence

35

Increasing the Time Horizon
What if the robot can observe before acting?
Suppose it perceives z1: p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3.
Given the observation z1, we update the belief using Bayes rule:

p1' = p(z1 | x1) p1 / p(z1) = 0.7 p1 / p(z1),  where  p(z1) = 0.7 p1 + 0.3 (1 - p1) = 0.4 p1 + 0.3

Now V1(b | z1) is obtained by evaluating V1 at the updated belief.

Page 36: CSE-573  Artificial Intelligence

36

Value Function

[Figure: V1(b) and V1(b | z1) plotted over the belief; b'(b | z1) denotes the updated belief.]

Page 37: CSE-573  Artificial Intelligence

37

Expected Value after Measuring

But, we do not know in advance what the next measurement will be,

So we must compute the expected belief

\bar{V}_1(b) = E_z[ V_1(b \mid z) ]
            = \sum_{i=1}^{2} p(z_i)\, V_1(b \mid z_i)
            = \sum_{i=1}^{2} p(z_i)\, V_1\!\left( \frac{p(z_i \mid x_1)\, p_1}{p(z_i)} \right)
            = \sum_{i=1}^{2} V_1\big( p(z_i \mid x_1)\, p_1 \big)
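A small numeric sketch of this expectation in Python, using the measurement model p(z1 | x1) = 0.7, p(z1 | x2) = 0.3 and the two linear payoff pieces derived earlier (function names are illustrative):

def V1(p1):
    # T=1 value at belief p1 = P(x1): maximum of the two linear payoff pieces.
    return max(100 - 200 * p1, 150 * p1 - 50)

def expected_value_after_measuring(p1):
    # bar{V}_1(b) = sum_i p(z_i) * V1(b | z_i)
    p_z_given_x1 = {"z1": 0.7, "z2": 0.3}
    p_z_given_x2 = {"z1": 0.3, "z2": 0.7}
    total = 0.0
    for z in ("z1", "z2"):
        p_z = p_z_given_x1[z] * p1 + p_z_given_x2[z] * (1 - p1)   # p(z_i)
        p1_posterior = p_z_given_x1[z] * p1 / p_z                 # Bayes update
        total += p_z * V1(p1_posterior)
    return total

print(expected_value_after_measuring(0.6))   # roughly 46.0 for this belief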

Page 38: CSE-573  Artificial Intelligence

38

Expected Value after Measuring

But, we do not know in advance what the next measurement will be,

So we must compute the expected belief

Page 39: CSE-573  Artificial Intelligence

39

Resulting Value Function The four possible combinations yield the

following function which then can be simplified and pruned.

Page 40: CSE-573  Artificial Intelligence

40

Value Function

[Figure: plots of p(z1) V1(b | z1), p(z2) V1(b | z2), and the expected value \bar{V}_1(b) over the updated belief b'(b | z1).]

Page 41: CSE-573  Artificial Intelligence

41

State Transitions (Prediction) When the agent selects u3 its state may change. When computing the value function, we have to take

these potential state changes into account.

Page 42: CSE-573  Artificial Intelligence

42

Resulting Value Function after executing u3

Taking the state transitions into account, we finally obtain.

Page 43: CSE-573  Artificial Intelligence

43

Value Function after executing u3

[Figure: plots of \bar{V}_1(b) and \bar{V}_1(b | u3).]

Page 44: CSE-573  Artificial Intelligence

44

Value Function for T=2 Taking into account that the agent can either

directly perform u1 or u2 or first u3 and then u1 or u2, we obtain (after pruning)

Page 45: CSE-573  Artificial Intelligence

45

Graphical Representation of V2(b)

[Figure: V2(b) over the belief, with a region where u1 is optimal, a region where u2 is optimal, and an unclear region in between where the outcome of measuring is important.]

Page 46: CSE-573  Artificial Intelligence

46

Deep Horizons We have now completed a full backup in belief space. This process can be applied recursively. The value functions for T=10 and T=20 are

Page 47: CSE-573  Artificial Intelligence

47

Deep Horizons and Pruning

Page 48: CSE-573  Artificial Intelligence

48

Why Pruning is Essential
Each update introduces additional linear components to V.
Each measurement squares the number of linear components.
Thus, an unpruned value function for T=20 includes more than 10^547,864 linear functions; at T=30 we have 10^561,012,337 linear functions.
The pruned value function at T=20, in comparison, contains only 12 linear components.
The combinatorial explosion of linear components in the value function is the major reason why exact solution of POMDPs is usually impractical.
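A simplified illustration of pruning in Python: this removes only pointwise-dominated linear components (exact pruning of a component set generally requires solving small linear programs, which the slides do not detail):

import numpy as np

def prune_pointwise_dominated(alphas):
    # Drop any component that is <= some other component at every state.
    kept = []
    for i, a in enumerate(alphas):
        dominated = any(j != i and np.all(b >= a) and np.any(b > a)
                        for j, b in enumerate(alphas))
        if not dominated:
            kept.append(a)
    return kept

alphas = [np.array([ 100.0, -50.0]),
          np.array([-100.0, 100.0]),
          np.array([  90.0, -60.0])]   # pointwise below the first component
print(len(prune_pointwise_dominated(alphas)))   # 2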

Page 49: CSE-573  Artificial Intelligence

50

POMDP Approximations Point-based value iteration

QMDPs

AMDPs

Page 50: CSE-573  Artificial Intelligence

51

Point-based Value Iteration
Maintains a set of example beliefs
Only considers constraints that maximize the value function for at least one of the example beliefs
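A sketch of that selection step in Python (illustrative only, not the published PBVI algorithm): among candidate linear components, keep those that achieve the maximum at at least one of the maintained example beliefs.

import numpy as np

def keep_useful_components(alphas, belief_points):
    # Keep components that maximize the value at one or more example beliefs.
    useful = set()
    for b in belief_points:
        values = [float(np.dot(a, b)) for a in alphas]
        useful.add(int(np.argmax(values)))
    return [alphas[i] for i in sorted(useful)]

beliefs = [np.array([p1, 1 - p1]) for p1 in (0.1, 0.5, 0.9)]   # example beliefs
alphas = [np.array([100.0, -50.0]),
          np.array([-100.0, 100.0]),
          np.array([0.0, 0.0])]
print(len(keep_useful_components(alphas, beliefs)))   # 2: the flat component is dropped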

Page 51: CSE-573  Artificial Intelligence

52

Point-based Value Iteration

[Figure: value functions for T=30, exact value iteration vs. PBVI.]

Page 52: CSE-573  Artificial Intelligence

53

QMDPs
QMDPs only consider state uncertainty in the first step.
After that, the world is assumed to become fully observable.
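A minimal QMDP-style action selection sketch in Python, under the usual formulation (state uncertainty only at the first step): compute Q(s, a) for the underlying fully observable MDP, e.g. by value iteration, then pick the action that maximizes the belief-weighted Q value. Names and numbers here are illustrative.

def qmdp_action(belief, Q):
    # belief: dict state -> probability; Q: dict (state, action) -> MDP Q-value.
    actions = {a for (_, a) in Q}
    score = {a: sum(belief[s] * Q[(s, a)] for s in belief) for a in actions}
    return max(score, key=score.get)

# Toy Q-values, as if obtained by solving the underlying MDP.
Q = {("x1", "u1"): -100.0, ("x2", "u1"): 100.0,
     ("x1", "u2"):  100.0, ("x2", "u2"): -50.0}
print(qmdp_action({"x1": 0.4, "x2": 0.6}, Q))   # 'u1' (expected 20 vs. 10)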

Page 53: CSE-573  Artificial Intelligence

56

POMDP Summary
POMDPs compute the optimal action in partially observable, stochastic domains.
For finite-horizon problems, the resulting value functions are piecewise linear and convex.
In each iteration the number of linear constraints grows exponentially.
Until recently, POMDP methods applied only to very small state spaces with small numbers of possible observations and actions. But with PBVI, |S| can be in the millions.

