
Markov Decision Processes - MIT - Massachusetts Institute of Technology

Page 1: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes: Reactive Planning to Maximize Reward

Brian C. Williams, 16.410, November 8th, 2004

Slides adapted from: Manuela Veloso, Reid Simmons, & Tom Mitchell, CMU

1

Page 2: Markov Decision Processes - MIT - Massachusetts Institute of

Reading and Assignments

• Markov Decision Processes
• Read AIMA Chapter 17, Sections 1–3.

This lecture is based on the development in:

“Machine Learning” by Tom Mitchell, Chapter 13: Reinforcement Learning

2

Page 3: Markov Decision Processes - MIT - Massachusetts Institute of

How Might a Mouse Search a Maze for Cheese?

Cheese

• State Space Search?
• As a Constraint Satisfaction Problem?
• Goal-directed Planning?
• Linear Programming?

What is missing?

3

Page 4: Markov Decision Processes - MIT - Massachusetts Institute of

Ideas in this lecture

• Problem is to accumulate rewards, rather than to achieve goal states.

• Approach is to generate reactive policies for how to act in all situations, rather than plans for a single starting situation.

• Policies fall out of value functions, which describe the greatest lifetime reward achievable at every state.

• Value functions are iteratively approximated.

4

Page 5: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Examples: TD-Gammon [Tesauro, 1995]
Learning Through Reinforcement

Learns to play Backgammon

States:
• Board configurations (~10^20)

Actions:
• Moves

Rewards:
• +100 if win
• -100 if lose
• 0 for all other states

• Trained by playing 1.5 million games against itself.

Currently, roughly equal to the best human player.

5

Page 6: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Examples: Aerial Robotics [Feron et al.]
Computing a Solution from a Continuous Model

6

Page 7: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• What are Markov Decision Processes (MDPs)?
  • Models
  • Lifetime Reward
  • Policies
• Computing Policies From a Model
• Summary

7

Page 8: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem

[Figure: agent-environment loop. The agent observes the state and reward and chooses an action, generating the sequence s0, a0, r0, s1, a1, r1, s2, a2, r2, s3, …]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.

8

Page 9: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem: Model

[Figure: agent-environment loop, as on slide 8]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.

9

Page 10: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes (MDPs)

Model:
• Finite set of states, S
• Finite set of actions, A
• (Probabilistic) state transitions, δ(s,a)
• Reward for each state and action, R(s,a)

Process:
• Observe state st in S
• Choose action at in A
• Receive immediate reward rt
• State changes to st+1

Example:
[Figure: grid world with goal state G and reward labels on some transitions; agent-environment sequence s0, a0, r0, s1, a1, r1, s2, a2, r2, s3, …; example step s1, a1]
• Legal transitions shown
• Reward on unlabeled transitions is 0.

10
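To make the model concrete, one simple representation keeps S and A as Python lists and δ and R as dictionaries keyed by (state, action) pairs. The sketch below is illustrative only: the state names, action names, rewards, and the dictionary encoding (`delta`, `r`) are assumed conventions, not anything prescribed by the slides beyond the symbols S, A, δ, R.

```python
# Deterministic MDP model: S, A, delta: S x A -> S, r: S x A -> reals.
states = ["s0", "s1", "s2", "G"]           # hypothetical state names
actions = ["a0", "a1"]

# delta[(s, a)] is the successor state; r[(s, a)] is the immediate reward.
delta = {("s0", "a0"): "s1", ("s1", "a0"): "s2", ("s2", "a1"): "G"}
r     = {("s0", "a0"): 0,    ("s1", "a0"): 0,    ("s2", "a1"): 10}

def step(s, a):
    """One step of the process: observe s, choose a, receive reward, move on."""
    return r[(s, a)], delta[(s, a)]

reward, s_next = step("s0", "a0")          # -> (0, "s1")
```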

Page 11: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Environment Assumptions

• Markov Assumption: The next state and reward are a function only of the current state and action:
  • st+1 = δ(st, at)
  • rt = r(st, at)

• Uncertain and Unknown Environment:
  δ and r may be nondeterministic and unknown

11

Page 12: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Nondeterministic Example

12

[Figure: four-state career MDP with states S1 Unemployed, S2 Industry, S3 Grad School, S4 Academia; actions R and D; transition probabilities of 0.1, 0.9, and 1.0 on the arcs]

R – Research, D – Development

Today we only consider the deterministic case.
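In the nondeterministic case sketched above, δ maps a state and action to a probability distribution over successor states rather than a single successor. One plausible way to encode such a model is a nested dictionary of probabilities, as in the sketch below; the state and action names mirror the career example, but the specific arcs and probabilities are illustrative assumptions and are not read off the figure.

```python
# Hypothetical encoding of a stochastic MDP transition model:
# T[state][action] is a dict mapping next_state -> probability.
# The arcs and probabilities below are illustrative only.
T = {
    "Unemployed":  {"D": {"Industry": 0.9, "Unemployed": 0.1},
                    "R": {"Grad School": 0.9, "Unemployed": 0.1}},
    "Industry":    {"D": {"Industry": 1.0}},
    "Grad School": {"R": {"Academia": 0.9, "Grad School": 0.1},
                    "D": {"Industry": 1.0}},
    "Academia":    {"R": {"Academia": 1.0}},
}

# Sanity check: each action's outgoing probabilities sum to 1.
for s, action_dists in T.items():
    for a, dist in action_dists.items():
        assert abs(sum(dist.values()) - 1.0) < 1e-9, (s, a)
```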

Page 13: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem: Model

[Figure: agent-environment loop, as on slide 8]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.

13

Page 14: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem: Lifetime Reward

[Figure: agent-environment loop, as on slide 8]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.

14

Page 15: Markov Decision Processes - MIT - Massachusetts Institute of

Lifetime Reward

• Finite horizon:
  • Rewards accumulate for a fixed period.
  • $100K + $100K + $100K = $300K

• Infinite horizon:
  • Assume rewards accumulate forever.
  • $100K + $100K + . . . = infinity

• Discounting:
  • Future rewards are not worth as much (a bird in the hand …)
  • Introduce a discount factor γ:
    $100K + γ $100K + γ² $100K + . . . converges
  • Will make the math work

15
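The convergence claim follows because, for 0 ≤ γ < 1, the discounted sum is a geometric series: $100K · (1 + γ + γ² + …) = $100K / (1 − γ). A quick numerical check, assuming γ = 0.9 purely for illustration:

```python
gamma = 0.9          # assumed discount factor, for illustration only
reward = 100_000     # constant per-step reward ($100K)

# Partial sums of the discounted series approach the closed form 100K / (1 - gamma).
partial = sum(reward * gamma**t for t in range(1000))
closed_form = reward / (1 - gamma)

print(partial, closed_form)   # both approximately 1,000,000
```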

Page 16: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem: Lifetime Reward

[Figure: agent-environment loop, as on slide 8]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward:

V = r0 + γ r1 + γ² r2 + . . .

16

Page 17: Markov Decision Processes - MIT - Massachusetts Institute of

MDP Problem: Policy

[Figure: agent-environment loop, as on slide 8]

Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward:

V = r0 + γ r1 + γ² r2 + . . .

17

Page 18: Markov Decision Processes - MIT - Massachusetts Institute of

18

Assume a deterministic world.

Policy π : S → A
• Selects an action for each state.

[Figure: grid world with goal G and a policy π shown as arrows]

Optimal policy π∗ : S → A
• Selects the action for each state that maximizes lifetime reward.

[Figure: the same grid world with the optimal policy π∗ shown as arrows]

Page 19: Markov Decision Processes - MIT - Massachusetts Institute of

• There are many policies; not all are necessarily optimal.
• There may be several optimal policies.

[Figure: three copies of the example grid world with goal G, each showing a different policy]

19

Page 20: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• What are Markov Decision Processes (MDPs)?
  • Models
  • Lifetime Reward
  • Policies
• Computing Policies From a Model
• Summary

20

Page 21: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• Markov Decision Processes
• Computing Policies From a Model
  • Value Functions
  • Mapping Value Functions to Policies
  • Computing Value Functions through Value Iteration
  • An Alternative: Policy Iteration (appendix)
• Summary

21

Page 22: Markov Decision Processes - MIT - Massachusetts Institute of

Value Function Vπ for a Given Policy π

• Vπ(st) is the accumulated lifetime reward resulting from starting in state st and repeatedly executing policy π:

Vπ(st) = rt + γ rt+1 + γ² rt+2 + . . .
Vπ(st) = Σi γ^i rt+i

where rt, rt+1 , rt+2 . . . are generated by following π, starting at st .

[Figure: grid world with goal G, a policy π shown as arrows, and the resulting value Vπ(s) written at each state; values shown include 10, 99, and 100]

Assume γ = 0.9

22
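For a deterministic model, Vπ(st) can be approximated directly by rolling the policy forward and summing discounted rewards. A minimal sketch, assuming the model is stored in dictionaries `delta[(s, a)]` and `r[(s, a)]` as in the earlier sketch (hypothetical names, not notation from the slides):

```python
def policy_value(s, policy, delta, r, gamma=0.9, horizon=100):
    """Approximate V_pi(s) by rolling out a deterministic policy.

    delta[(s, a)] -> next state, r[(s, a)] -> immediate reward.
    Truncating at `horizon` steps leaves an error of at most
    gamma**horizon * max_reward / (1 - gamma).
    """
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        a = policy[s]
        total += discount * r[(s, a)]
        s = delta[(s, a)]
        discount *= gamma
    return total
```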

Page 23: Markov Decision Processes - MIT - Massachusetts Institute of

An Optimal Policy π* Given Value Function V*

Idea: Given state s,
1. Examine all possible actions ai in state s.
2. Select the action ai with the greatest lifetime reward.

[Figure: grid world with goal G, a policy π shown as arrows, and the resulting value at each state]

The lifetime reward Q(s, ai) is:
• the immediate reward for taking the action, r(s,a), …
• plus the lifetime reward starting in the target state, V(δ(s, a)), …
• discounted by γ.

π*(s) = argmaxa [r(s,a) + γ V∗(δ(s, a))]

Must Know:
• Value function
• Environment model:
  • δ : S x A → S
  • r : S x A → ℜ

23
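The argmax above translates almost literally into code. A sketch under the same assumptions as before (deterministic model in `delta`/`r` dictionaries, value table `V` as a dictionary):

```python
def greedy_policy(states, actions, delta, r, V, gamma=0.9):
    """Extract pi*(s) = argmax_a [ r(s,a) + gamma * V(delta(s,a)) ].

    delta[(s, a)] -> next state, r[(s, a)] -> reward; only (s, a) pairs
    present in delta are treated as legal actions.
    """
    policy = {}
    for s in states:
        legal = [a for a in actions if (s, a) in delta]
        if not legal:                # e.g. an absorbing goal state
            continue
        policy[s] = max(legal,
                        key=lambda a: r[(s, a)] + gamma * V[delta[(s, a)]])
    return policy
```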

Page 24: Markov Decision Processes - MIT - Massachusetts Institute of

Example: Mapping Value Function to Policy

• Agent selects the optimal action from V:
  π(s) = argmaxa [r(s,a) + γ V(δ(s, a))]

Model + V (γ = 0.9):
[Figure: grid world with goal G; each cell is labeled with its value under V (values shown: 90, 81, 100, 90, 0, and 100), and the transitions into G carry reward 100]

24

Page 25: Markov Decision Processes - MIT - Massachusetts Institute of

Example: Mapping Value Function to Policy

• Agent selects the optimal action from V:
  π(s) = argmaxa [r(s,a) + γ V(δ(s, a))]

[Figure: the same grid world and values as on slide 24]

Model + V (γ = 0.9): candidate actions a and b from one state
• a: 0 + 0.9 x 100 = 90
• b: 0 + 0.9 x 81 = 72.9
select a

π: [Figure: the policy so far, with the selected action drawn as an arrow toward G]

25

Page 26: Markov Decision Processes - MIT - Massachusetts Institute of

Example: Mapping Value Function to Policy

• Agent selects the optimal action from V:
  π(s) = argmaxa [r(s,a) + γ V(δ(s, a))]

[Figure: the same grid world and values as on slide 24]

Model + V (γ = 0.9): candidate actions a and b from one state
• a: 100 + 0.9 x 0 = 100
• b: 0 + 0.9 x 90 = 81
select a

π: [Figure: the policy so far, with the selected actions drawn as arrows toward G]

26

Page 27: Markov Decision Processes - MIT - Massachusetts Institute of

Example: Mapping Value Function to Policy

• Agent selects the optimal action from V:
  π(s) = argmaxa [r(s,a) + γ V(δ(s, a))]

[Figure: the same grid world and values as on slide 24]

Model + V (γ = 0.9): candidate actions a, b, and c from one state
• a: ?
• b: ?
• c: ?
select ?

π: [Figure: the policy so far, with goal G]

27

Page 28: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• Markov Decision Processes
• Computing Policies From a Model
  • Value Functions
  • Mapping Value Functions to Policies
  • Computing Value Functions through Value Iteration
  • An Alternative: Policy Iteration
• Summary

28

Page 29: Markov Decision Processes - MIT - Massachusetts Institute of

Value Function V∗ for an optimal policy π∗

Example

[Figure: two-state example with states SA and SB; actions A and B with rewards RA and RB]

• Optimal value function for a one-step horizon:
  V*1(s) = maxai [r(s,ai)]

[Figure: one-step lookahead trees for SA and SB; taking the max over actions A and B gives V*1(SA) and V*1(SB)]

29

Page 30: Markov Decision Processes - MIT - Massachusetts Institute of

Value Function V∗ for an optimal policy π∗

Example

• Optimal value function for a one-step horizon:
  V*1(s) = maxai [r(s,ai)]

• Optimal value function for a two-step horizon:
  V*2(s) = maxai [r(s,ai) + γ V*1(δ(s, ai))]

[Figure: two-state example (SA, SB, actions A and B, rewards RA, RB); the two-step lookahead trees reuse the one-step values, taking the max over RA + γ V*1(·) and RB + γ V*1(·) to give V*2(SA) and V*2(SB)]

Instance of the Dynamic Programming Principle:
• Reuse shared sub-results
• Exponential savings

30

Page 31: Markov Decision Processes - MIT - Massachusetts Institute of

Value Function V∗ for an optimal policy π∗

Example

[Figure: two-state example with states SA and SB; actions A and B with rewards RA and RB]

• Optimal value function for a one-step horizon:
  V*1(s) = maxai [r(s,ai)]

• Optimal value function for a two-step horizon:
  V*2(s) = maxai [r(s,ai) + γ V*1(δ(s, ai))]

• Optimal value function for an n-step horizon:
  V*n(s) = maxai [r(s,ai) + γ V*n-1(δ(s, ai))]

31

Page 32: Markov Decision Processes - MIT - Massachusetts Institute of

Value Function V∗ for an optimal policy π∗

Example

[Figure: two-state example with states SA and SB; actions A and B with rewards RA and RB]

• Optimal value function for a one-step horizon:
  V*1(s) = maxai [r(s,ai)]

• Optimal value function for a two-step horizon:
  V*2(s) = maxai [r(s,ai) + γ V*1(δ(s, ai))]

• Optimal value function for an n-step horizon:
  V*n(s) = maxai [r(s,ai) + γ V*n-1(δ(s, ai))]

• Optimal value function for an infinite horizon:
  V*(s) = maxai [r(s,ai) + γ V∗(δ(s, ai))]

32

Page 33: Markov Decision Processes - MIT - Massachusetts Institute of

Solving MDPs by Value Iteration

Insight: Can calculate optimal values iteratively using Dynamic Programming.

Algorithm:
• Iteratively calculate values using Bellman’s Equation:
  V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]
• Terminate when values are “close enough”:
  |V*t+1(s) - V∗t(s)| < ε
• Agent selects the optimal action by one-step lookahead on V∗:
  π*(s) = argmaxa [r(s,a) + γ V∗(δ(s, a))]

33
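A compact sketch of this algorithm for the deterministic case, again using the hypothetical `delta`/`r` dictionary representation of the model introduced earlier:

```python
def value_iteration(states, actions, delta, r, gamma=0.9, eps=1e-6):
    """Iterate V_{t+1}(s) = max_a [ r(s,a) + gamma * V_t(delta(s,a)) ]
    until no state's value changes by more than eps."""
    V = {s: 0.0 for s in states}
    while True:
        V_next = {}
        for s in states:
            legal = [a for a in actions if (s, a) in delta]
            if not legal:            # absorbing state: value stays 0
                V_next[s] = 0.0
                continue
            V_next[s] = max(r[(s, a)] + gamma * V[delta[(s, a)]] for a in legal)
        if max(abs(V_next[s] - V[s]) for s in states) < eps:
            return V_next
        V = V_next
```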

Page 34: Markov Decision Processes - MIT - Massachusetts Institute of

Convergence of Value Iteration

• If we terminate when values are “close enough”:
  |Vt+1(s) - Vt(s)| < ε
  then:
  maxs in S |Vt+1(s) - V∗(s)| < 2εγ / (1 - γ)

• Converges in polynomial time.
• Convergence is guaranteed even if updates are performed infinitely often, but asynchronously and in any order.

34

Page 35: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t (all zeros) and V∗t+1; candidate actions a and b for the state being updated]

• a: 0 + 0.9 x 0 = 0
• b: 0 + 0.9 x 0 = 0
Max = 0

35

Page 36: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t (all zeros) and the partially computed V∗t+1; candidate actions a, b, and c for the state being updated]

• a: 100 + 0.9 x 0 = 100
• b: 0 + 0.9 x 0 = 0
• c: 0 + 0.9 x 0 = 0
Max = 100

36

Page 37: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t (all zeros) and the partially computed V∗t+1; candidate action a for the state being updated]

• a: 0 + 0.9 x 0 = 0
Max = 0

37

Page 38: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t and the partially computed V∗t+1 at this step of the iteration]

38

Page 39: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t and the partially computed V∗t+1 at this step of the iteration]

39

Page 40: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t and the completed V∗t+1 after the first sweep]

40

Page 41: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t and V∗t+1 for the next sweep; values shown include 90, 100, 0 and 0, 90, 100]

41

Page 42: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the value grids V∗t and V∗t+1 for the next sweep; values shown include 90, 100, 0 and 81, 90, 100]

42

Page 43: Markov Decision Processes - MIT - Massachusetts Institute of

Example of Value Iteration

V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]   (γ = 0.9)

[Figure: grid world with goal G; the iteration has converged, with V∗t = V∗t+1; values shown include 90, 100, 0 and 81, 90, 100]

43
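The worked example can be reproduced end to end. The sketch below encodes one plausible reading of the slides' grid world: a 2x3 grid with an absorbing goal G in one corner, reward 100 on the two moves that enter G, 0 everywhere else, and γ = 0.9. That layout is an assumption (only fragments of the figures survive), but running the same Bellman updates on it converges to 100 at the two states adjacent to G, 90 one step further out, and 81 in the far corner, matching the numbers on the value-to-policy slides.

```python
# Hypothetical reconstruction of the slides' grid world: states are (row, col)
# cells of a 2x3 grid, the goal G = (0, 2) is absorbing, and the only nonzero
# rewards (100) are on the two moves that enter G.  The layout is assumed.
GOAL = (0, 2)
states = [(row, col) for row in range(2) for col in range(3)]
moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

delta, r = {}, {}
for s in states:
    if s == GOAL:
        continue                                   # no actions out of the goal
    for a, (dr, dc) in moves.items():
        nxt = (s[0] + dr, s[1] + dc)
        if nxt in states:
            delta[(s, a)] = nxt
            r[(s, a)] = 100 if nxt == GOAL else 0

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(100):                               # more than enough sweeps
    V = {s: max((r[(s, a)] + gamma * V[delta[(s, a)]]
                 for a in moves if (s, a) in delta), default=0.0)
         for s in states}

print(V)   # expected: 100 next to G, 90 one step further, 81 in the far corner
```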

Page 44: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• Markov Decision Processes
• Computing Policies From a Model
  • Value Functions
  • Mapping Value Functions to Policies
  • Computing Value Functions through Value Iteration
  • An Alternative: Policy Iteration (appendix)
• Summary

44

Page 45: Markov Decision Processes - MIT - Massachusetts Institute of

Appendix: Policy Iteration

Idea: Iteratively improve the policy.
1. Policy Evaluation: Given a policy πi, calculate Vi = Vπi, the utility of each state if πi were to be executed.
2. Policy Improvement: Calculate a new maximum-expected-utility policy πi+1 using one-step lookahead based on Vi.

• πi improves at every step, converging when πi = πi+1.
• Computing Vi is simpler than for Value Iteration (no max):
  Vt+1(s) ← r(s, πi(s)) + γ Vt(δ(s, πi(s)))
  • Solve the linear equations in O(N³), or
  • Solve iteratively, similar to value iteration.

45
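A sketch of the two alternating steps for the deterministic case, reusing the hypothetical `delta`/`r` dictionary model and the iterative form of policy evaluation rather than the O(N³) linear solve:

```python
def policy_iteration(states, actions, delta, r, gamma=0.9, eval_eps=1e-6):
    """Alternate policy evaluation and greedy policy improvement."""
    acts = lambda s: [a for a in actions if (s, a) in delta]
    policy = {s: acts(s)[0] for s in states if acts(s)}   # arbitrary start

    while True:
        # 1. Policy evaluation: V(s) <- r(s, pi(s)) + gamma * V(delta(s, pi(s)))
        V = {s: 0.0 for s in states}
        while True:
            V_new = {s: (r[(s, policy[s])] + gamma * V[delta[(s, policy[s])]]
                         if s in policy else 0.0)
                     for s in states}
            if max(abs(V_new[s] - V[s]) for s in states) < eval_eps:
                break
            V = V_new

        # 2. Policy improvement: greedy one-step lookahead on V.
        new_policy = {s: max(acts(s),
                             key=lambda a: r[(s, a)] + gamma * V[delta[(s, a)]])
                      for s in states if acts(s)}
        if new_policy == policy:
            return policy, V
        policy = new_policy
```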

Page 46: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes

• Motivation
• Markov Decision Processes
• Computing Policies From a Model
  • Value Iteration
  • Policy Iteration
• Summary

• Summary

46

Page 47: Markov Decision Processes - MIT - Massachusetts Institute of

Markov Decision Processes (MDPs)

Model:
• Finite set of states, S
• Finite set of actions, A
• Probabilistic state transitions, δ(s,a)
• Reward for each state and action, R(s,a)

Process:
• Observe state st in S
• Choose action at in A
• Receive immediate reward rt
• State changes to st+1

[Figure: agent-environment sequence s0, a0, r0, s1, a1, r1, s2, a2, r2, s3, …]

Deterministic Example:
[Figure: grid world with goal state G and reward labels; example step s1, a1]

47

Page 48: Markov Decision Processes - MIT - Massachusetts Institute of

Crib Sheet: MDPs by Value Iteration

48

Insight: Can calculate optimal values iteratively using Dynamic Programming.

Algorithm:
• Iteratively calculate values using Bellman’s Equation:
  V*t+1(s) ← maxa [r(s,a) + γ V∗t(δ(s, a))]
• Terminate when values are “close enough”:
  |V*t+1(s) - V∗t(s)| < ε
• Agent selects the optimal action by one-step lookahead on V∗:
  π*(s) = argmaxa [r(s,a) + γ V∗(δ(s, a))]

Page 49: Markov Decision Processes - MIT - Massachusetts Institute of

Ideas in this lecture

• The objective is to accumulate rewards, rather than to achieve goal states.

• Objectives are achieved along the way, rather than at the end.

• Task is to generate policies for how to act in all situations, rather than a plan for a single starting situation.

• Policies fall out of value functions, which describe the greatest lifetime reward achievable at every state.

• Value functions are iteratively approximated.

49

Page 50: Markov Decision Processes - MIT - Massachusetts Institute of

How Might a Mouse Search a Maze for Cheese?

Cheese

• By Value Iteration?
• What is missing?

50

