Approximate Dynamic Programming

Page 1: Approximate Dynamic Programming

• Recall our general DP formulation for problems with disturbances:

• And the (backwards) DP algorithm:
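
• For reference, the standard form of this model and recursion (a sketch assuming the usual additive-cost, finite-horizon setting with disturbances 𝑤𝑘):

```latex
% System dynamics and additive cost over N stages, with disturbances w_k:
x_{k+1} = f_k(x_k, u_k, w_k), \qquad
J_\pi(x_0) = \mathbb{E}\Big[ g_N(x_N) + \sum_{k=0}^{N-1} g_k\big(x_k, \mu_k(x_k), w_k\big) \Big]

% Backwards DP recursion:
J_N^*(x_N) = g_N(x_N), \qquad
J_k^*(x_k) = \min_{u_k \in U_k(x_k)} \mathbb{E}_{w_k}\Big[ g_k(x_k, u_k, w_k)
             + J_{k+1}^*\big(f_k(x_k, u_k, w_k)\big) \Big]
```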

Page 2: Approximate Dynamic Programming

• Our goal, as always, is to compute the optimal closed-loop policies:

• The key challenges we observed were:

• How to compute the expectation w.r.t. 𝑤𝑘?

• How to perform the optimization on the right-hand side?

• How to overcome the fact that the above has to be done for every possible state?

• But how hard are these challenges?

Page 3: Example: Traveling Salesman Problem

[Figure: a four-city TSP on nodes A, B, C, D with pairwise distances, and the DP enumeration tree of partial tours starting at A: A → {AB, AC, AD} → {ABC, ABD, ACB, ACD, ADB, ADC} → {ABCD, ABDC, ACBD, ACDB, ADBC, ADCB}, each branch labeled with its travel cost (the surviving distance labels are 3, 4, 5, 11, 15, 20).]

• The number of states in the DP grows exponentially with the number of nodes!
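
• To make the blow-up concrete, here is a minimal Python sketch of the exact DP (Held-Karp) behind the enumeration above; the distance matrix is an assumed placeholder (loosely matching the surviving labels, but the edge assignment is a guess), and the number of (visited-set, last-city) states grows like 𝑛·2^𝑛:

```python
from itertools import combinations

# Assumed 4-city example (cities A, B, C, D mapped to indices 0-3);
# the distance matrix below is a placeholder, not taken from the slide.
dist = [[0, 5, 20, 4],
        [5, 0, 3, 15],
        [20, 3, 0, 11],
        [4, 15, 11, 0]]


def held_karp(dist):
    """Exact DP for the TSP starting and ending at city 0.

    The DP state is (set of visited cities, last city), so there are on the
    order of n * 2^n states -- the exponential blow-up discussed above.
    """
    n = len(dist)
    # cost[(S, j)]: cheapest path that starts at 0, visits exactly the
    # cities in frozenset S (which contains 0 and j), and ends at j.
    cost = {(frozenset([0, j]), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for j in subset:
                cost[(S, j)] = min(cost[(S - {j}, i)] + dist[i][j]
                                   for i in subset if i != j)
    full = frozenset(range(n))
    return min(cost[(full, j)] + dist[j][0] for j in range(1, n))


print(held_karp(dist))  # length of an optimal closed tour
```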

Page 4: Example: Traveling Salesman Problem

• If the curse of dimensionality is present and the number of states explodes with the problem size even in the deterministic case, then how can such problems be solved in practice?

Page 5: Markov Decision Processes (MDP)

• Before we address approximation in DP, let's recall the MDP formulation, where we highlight the direct relation between the MDP and the DP framework:

[Graphical model: state chain 𝑥0 → 𝑥1 → 𝑥2 → 𝑥3, with controls 𝑢0 = 𝜇0(𝑥0), 𝑢1 = 𝜇1(𝑥1), 𝑢2 = 𝜇2(𝑥2) applied at each transition.]

• What elements can be approximated in the graphical model above?

Page 6: Q-factor reformulation

• Consider the DP recursion:

• Let’s define the Q-function (or Q-factors):

• Where 𝐽∗𝑘+1(𝑥𝑘+1) is the optimal cost-to-go function at stage 𝑘 + 1. Then we can write:
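
• In the standard notation (a hedged reconstruction of the definition the slide points to):

```latex
Q_k^*(x_k, u_k) = \mathbb{E}_{w_k}\Big[ g_k(x_k, u_k, w_k)
                  + J_{k+1}^*\big(f_k(x_k, u_k, w_k)\big) \Big],
\qquad
J_k^*(x_k) = \min_{u_k \in U_k(x_k)} Q_k^*(x_k, u_k)
```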

Page 7: Approximation in Value Space

• In addition, we re-write the DP recursion in terms of the Q-factors:

• Suppose we had a function 𝐽̃𝑘+1(𝑥𝑘+1) that approximates the cost-to-go for each stage 𝑘. And for each stage we compute the following minimization (see the sketch below):

• Note that the policy 𝜋̃ = (𝜇̃0, … , 𝜇̃𝑁−1) is admissible and sub-optimal.
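
• The per-stage minimization referred to above can be written, in a sketch consistent with the notation used so far, as the one-step lookahead rule:

```latex
\tilde{\mu}_k(x_k) \in \arg\min_{u_k \in U_k(x_k)}
\mathbb{E}_{w_k}\Big[ g_k(x_k, u_k, w_k)
+ \tilde{J}_{k+1}\big(f_k(x_k, u_k, w_k)\big) \Big],
\qquad k = 0, \dots, N-1
```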

Page 8: Approximation in Value Space

• We can write the same sub-optimal policy in terms of the now approximate Q-factors:

• How to obtain a good approximation 𝐽̃𝑘(𝑥𝑘) is the central focus of the first family of Reinforcement Learning algorithms we will study: the Value Space approximation methods.
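
• As a concrete illustration (not the slide's own code), a minimal Python sketch of the one-step lookahead policy; the dynamics 𝑓, stage cost 𝑔, disturbance distribution, and the approximation J_tilde below are all assumed placeholders:

```python
import numpy as np

# Assumed example problem (placeholders, not from the slides): scalar system
# x_{k+1} = x_k + u_k + w_k, quadratic stage cost, and a small grid of controls.
controls = np.linspace(-1.0, 1.0, 21)                    # candidate u_k values
w_values = np.array([-0.1, 0.0, 0.1])                    # disturbance support
w_probs = np.array([0.25, 0.5, 0.25])                    # disturbance probabilities


def f(x, u, w):          # dynamics f_k (assumed)
    return x + u + w


def g(x, u, w):          # stage cost g_k (assumed)
    return x ** 2 + 0.1 * u ** 2


def J_tilde(x):          # assumed cost-to-go approximation for stage k+1
    return 2.0 * x ** 2


def one_step_lookahead(x):
    """mu_tilde_k(x): minimize E_w[ g(x,u,w) + J_tilde(f(x,u,w)) ] over the control grid."""
    q_values = [np.dot(w_probs, [g(x, u, w) + J_tilde(f(x, u, w)) for w in w_values])
                for u in controls]
    return controls[int(np.argmin(q_values))]


print(one_step_lookahead(1.5))   # admissible but (in general) suboptimal control at x = 1.5
```

• Swapping J_tilde for a better approximation changes only one function; the lookahead minimization itself stays the same.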

Page 9: Example: Multistep Lookahead

• A simple but very important example is the case where 𝐽̃𝑘+1(𝑥𝑘+1) is itself given by a one-stage DP recursion:

• Where 𝐽̃𝑘+2(𝑥𝑘+2) is yet another approximation of the cost-to-go, now from stage 𝑘 + 2.

• For the 𝑙-step lookahead, 𝐽̃𝑘+1(𝑥𝑘+1) is given by:
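
• A hedged reconstruction of the two-step and, more generally, the 𝑙-step lookahead expressions:

```latex
% Two-step lookahead: the approximation at stage k+1 is itself a one-stage DP,
\tilde{J}_{k+1}(x_{k+1}) = \min_{u_{k+1}} \mathbb{E}_{w_{k+1}}\Big[
    g_{k+1}(x_{k+1}, u_{k+1}, w_{k+1})
    + \tilde{J}_{k+2}\big(f_{k+1}(x_{k+1}, u_{k+1}, w_{k+1})\big) \Big]

% l-step lookahead: solve an (l-1)-stage DP from stage k+1 to k+l,
% with terminal approximation \tilde{J}_{k+l}(x_{k+l}),
\tilde{J}_{k+1}(x_{k+1}) = \min_{\mu_{k+1}, \dots, \mu_{k+l-1}}
\mathbb{E}\Big[ \sum_{i=k+1}^{k+l-1} g_i\big(x_i, \mu_i(x_i), w_i\big)
+ \tilde{J}_{k+l}(x_{k+l}) \Big]
```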

Page 10: Example: Multistep Lookahead

• For problems with a very large (or infinite) horizon, we can use a "large-enough" lookahead 𝑙 and let the final approximation 𝐽̃𝑘+𝑙(𝑥𝑘+𝑙) be very simple (for example, equal to zero).

• Recall our last example about doing LQR with a horizon 𝑀 ≪ 𝑁, where instead of using the limiting algebraic Riccati Equation we solved a sub-problem with truncated horizon:

• After solving the M-stage discrete Riccati Equation we applied:
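
• For reference, a sketch of the truncated-horizon recipe in standard LQR notation (the matrices 𝐴, 𝐵, 𝑄, 𝑅, 𝑄𝑓 are the usual LQR data, not values taken from the earlier lecture):

```latex
% Discrete Riccati recursion over the truncated horizon, starting from K_M = Q_f:
K_{j-1} = Q + A^{\top} K_j A
          - A^{\top} K_j B \,\big(R + B^{\top} K_j B\big)^{-1} B^{\top} K_j A,
\qquad j = M, M-1, \dots, 1,

% then apply the resulting (stationary) linear feedback at every stage k:
u_k = -\big(R + B^{\top} K_0 B\big)^{-1} B^{\top} K_0 A \, x_k
```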

Page 11: Approximation in Policy Space

• Consider again the MDP:

• Suppose we now approximate the policy functions by some parametric function, like a Neural Network:

[Graphical model: the same chain 𝑥0 → 𝑥1 → 𝑥2 → 𝑥3 with 𝑢0 = 𝜇0(𝑥0), 𝑢1 = 𝜇1(𝑥1), 𝑢2 = 𝜇2(𝑥2), where each policy block is now a parametric function that maps (or samples) 𝑢𝑘 from 𝑥𝑘.]

Page 12: Example: Randomized Policy

• Let's define 𝜋𝜃(𝑢𝑘|𝑥𝑘) as the probability distribution of the controls/actions 𝑢𝑘 given the state 𝑥𝑘, and let 𝜃 be the parameters of the Neural Network.

• Like we did in the HMM case, we can write the probability of a whole trajectory as:

• Then we can optimize over all possible sequences (Policy Gradient):

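• In a sketch using the usual notation, with 𝜏 = (𝑥0, 𝑢0, 𝑥1, … , 𝑥𝑁) denoting a trajectory and 𝐶(𝜏) its total cost:

```latex
% Probability of a whole trajectory \tau under the randomized policy \pi_\theta:
p_\theta(\tau) = p(x_0) \prod_{k=0}^{N-1} \pi_\theta(u_k \mid x_k)\, p(x_{k+1} \mid x_k, u_k),

% Policy-gradient idea: optimize the expected total cost C(\tau) over \theta,
\min_\theta \ \mathbb{E}_{\tau \sim p_\theta}\big[ C(\tau) \big],
\qquad
\nabla_\theta \, \mathbb{E}_{\tau \sim p_\theta}\big[ C(\tau) \big]
 = \mathbb{E}_{\tau \sim p_\theta}\Big[ C(\tau) \sum_{k=0}^{N-1}
   \nabla_\theta \log \pi_\theta(u_k \mid x_k) \Big]
```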

Page 13: Model-based vs. Model-free

• Let’s now address the expectation issue:

• How are the probabilities computed? Do we have the distributions?

• Model-based case: we know the distributions in closed form. That is, we have 𝑝(𝑤𝑘|𝑥𝑘, 𝑢𝑘) for every triplet (𝑥𝑘, 𝑢𝑘, 𝑤𝑘). Moreover, the functions 𝑓𝑘 and 𝑔𝑘 are known. Expectations are computed via algebraic calculations.

• Model-free case: we need to rely on Monte-Carlo simulations to compute expectations. Moreover, we may not know the functions 𝑓𝑘 and 𝑔𝑘, and we also have to rely on simulations to obtain the system transitions and costs/rewards.
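
• A minimal Python sketch contrasting the two cases for a single expectation 𝐸𝑤[𝑔(𝑥, 𝑢, 𝑤)]; the distribution, cost, and "simulator" below are assumed placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: finite disturbance distribution p(w | x, u) and stage cost g.
w_values = np.array([-1.0, 0.0, 1.0])
w_probs = np.array([0.2, 0.5, 0.3])


def g(x, u, w):
    return (x + u + w) ** 2


# Model-based: the distribution is known in closed form, so the expectation
# is an exact (algebraic) computation.
def expected_cost_model_based(x, u):
    return float(np.dot(w_probs, g(x, u, w_values)))


# Model-free: we can only *sample* transitions/costs from a simulator, so the
# expectation is estimated by Monte-Carlo averaging.
def expected_cost_model_free(x, u, n_samples=10_000):
    w_samples = rng.choice(w_values, size=n_samples, p=w_probs)  # stands in for a simulator
    return float(np.mean(g(x, u, w_samples)))


x, u = 1.0, -0.5
print(expected_cost_model_based(x, u))   # exact value
print(expected_cost_model_free(x, u))    # noisy Monte-Carlo estimate
```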

Page 14: Model-based vs. Model-free

[Figure: two block diagrams. Model-based case: the policy (control law) 𝜇𝑘(𝑥𝑘) sends 𝑢𝑘 to the dynamics model 𝑥𝑘+1 = 𝑓𝑘(𝑥𝑘, 𝑢𝑘, 𝑤𝑘) with disturbance 𝑤𝑘 ∼ 𝑝(𝑤𝑘|𝑥𝑘, 𝑢𝑘), which returns the state 𝑥𝑘 and stage cost 𝑔𝑘(𝑥𝑘, 𝑢𝑘, 𝑤𝑘) to the system (agent). Model-free case: the agent sends 𝑢𝑘 to the environment (simulation software/hardware) and receives only the observed 𝑥𝑘+1 and 𝑔𝑘.]

Page 15: Imperfect Information Case: POMDPs

• Like in the HMM case, most often in practical applications we do not have access to perfect state information. Hence the graphical model can be adapted to:

• Notice now that the policy 𝜇𝑘(𝑧𝑘) is given the observation 𝑧𝑘 and not the state 𝑥𝑘!

• Can the policy depend on the whole history? That is:

[Graphical model: hidden state chain 𝑥0 → 𝑥1 → 𝑥2 → 𝑥3; each state emits an observation 𝑧𝑘, and the controls are 𝑢0 = 𝜇0(𝑧0), 𝑢1 = 𝜇1(𝑧1), 𝑢2 = 𝜇2(𝑧2).]

Page 16: DP with Imperfect Information

• We will present here the most general form of the DP formulation, where the closed-loop policy 𝜇𝑘(⋅) depends on the whole history 𝐼𝑘 = (𝑧0, 𝑧1, … , 𝑧𝑘, 𝑢0, 𝑢1, … , 𝑢𝑘−1):

• Notice that the 𝑣𝑘's can be seen as observation noise, and we can draw a direct relationship between the DP formulation above and POMDPs (we leave it as an exercise).

• And we assume that:
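
• A hedged reconstruction of what the two colons above typically point to (measurement model and the DP recursion over the information vector), in the standard notation with observation maps ℎ𝑘:

```latex
% Measurement model (v_k plays the role of observation noise):
z_0 = h_0(x_0, v_0), \qquad z_k = h_k(x_k, u_{k-1}, v_k), \quad k = 1, \dots, N-1,

% DP recursion over the information vector I_k = (z_0,\dots,z_k,\,u_0,\dots,u_{k-1}):
J_k(I_k) = \min_{u_k} \ \mathbb{E}_{x_k, w_k, z_{k+1}}\Big[
    g_k(x_k, u_k, w_k) + J_{k+1}\big(I_k, z_{k+1}, u_k\big) \,\Big|\, I_k, u_k \Big],

% with the assumption that the noise distributions are given, e.g.
% p(w_k \mid x_k, u_k) and p(v_k \mid x_k, u_{k-1}), independent of earlier noises.
```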

Page 17: DP with Imperfect Information

• Recall the idea of sufficient statistics, which for the HMM were the counts of transitions. Suppose we are able to find a sufficient statistic function 𝑆𝑘(𝐼𝑘) for every information vector 𝐼𝑘.

• The intuition is that 𝑆𝑘 contains all the relevant information about 𝐼𝑘. So we would be able to write the optimal policy as:

• For some functions 𝜇̄𝑘.

• Like we did in the EM Algorithm, let's consider here the conditional probability of the state 𝑥𝑘 given the history 𝐼𝑘 (there, this probability could be seen as a belief!)

Page 18: DP with Imperfect Information

• Namely let 𝑏𝑘 be the belief state:

• Suppose we had in hand a way of computing the beliefs ("The E-Step"), via some recursive formula:

• Then we re-write the (backwards) recursion as a Perfect Information DP:
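
• A hedged reconstruction of the three expressions on this slide (belief definition, recursive update, and the resulting perfect-information DP over beliefs):

```latex
% Belief state: the conditional distribution of the state given the history,
b_k = p(x_k \mid I_k).

% Recursive belief update ("the E-step"), for some filter map \Phi_k:
b_{k+1} = \Phi_k\big(b_k, u_k, z_{k+1}\big).

% Backwards recursion as a perfect-information DP over beliefs:
J_k(b_k) = \min_{u_k} \ \hat{g}_k(b_k, u_k)
  + \mathbb{E}_{z_{k+1}}\Big[ J_{k+1}\big(\Phi_k(b_k, u_k, z_{k+1})\big) \,\Big|\, b_k, u_k \Big],
\qquad
\hat{g}_k(b_k, u_k) = \mathbb{E}_{x_k, w_k}\big[ g_k(x_k, u_k, w_k) \mid b_k, u_k \big]
```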

Page 19: DP with Imperfect Information

• And it follows that:

• Note how nice this formulation is!

• The “states” now are the beliefs 𝑏𝑘. The dynamics are given by the forward recursion:

• The controls are the same. Lastly, 𝑧𝑘+1 plays the role of the "disturbance".

• It makes sense, since from stage 𝑘 we only have knowledge of the history 𝐼𝑘, hence the future observations (𝑧𝑘+1, … , 𝑧𝑁) are considered in expectation.
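
• To make the forward recursion concrete, a minimal Python sketch of the belief update for a finite-state POMDP; the transition and observation matrices below are assumed placeholders:

```python
import numpy as np

# Assumed 2-state, 2-observation example (placeholders, not from the slides).
# T[u][i, j] = p(x_{k+1} = j | x_k = i, u_k = u)
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]]),
     1: np.array([[0.5, 0.5],
                  [0.5, 0.5]])}
# O[j, z] = p(z_{k+1} = z | x_{k+1} = j)
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])


def belief_update(b, u, z):
    """Forward recursion b_{k+1} = Phi_k(b_k, u_k, z_{k+1}) (Bayes filter).

    Predict through the dynamics, correct with the observation likelihood,
    then normalize; the new observation acts as the 'disturbance' of the belief MDP.
    """
    predicted = b @ T[u]              # p(x_{k+1} | I_k, u_k)
    unnormalized = predicted * O[:, z]
    return unnormalized / unnormalized.sum()


b = np.array([0.5, 0.5])              # initial belief b_0
print(belief_update(b, u=0, z=1))     # belief after applying u_0 = 0 and observing z_1 = 1
```

• This update is exactly the estimator part discussed on the next slide; the actuator then maps the resulting belief to a control.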

Page 20: DP with Imperfect Information

• This reformulation is called the Belief MDP reduction of POMDPs.

• Lastly, as we run the DP forward, our task is decomposed into two parts as well (!)

• First, we have the estimator part, which computes the belief:

• Given the history 𝐼𝑘 gathered so far. Then, we have the actuator part which computes:

• This separation leads to yet another family of approximation methods, which work on the beliefs instead of the actual system states.

Page 21: Other dimensions for approximations

• We saw three main types of approximations that can be done:
  • Approximations in the Value Space
  • Approximations in the Policy Space
  • Approximations in computing expectations (simulations)

• Other aspects of approximations are:
  • Offline vs. Online methods: multi-parametric programs and online querying
  • Problem Decomposition: Benders Decomposition, Lagrange Relaxations
  • Aggregation methods: feature extraction, state reduction

• There are a huge number of algorithms and ideas in all the areas above, as this is a very active area of research. We will explore the main algorithms, as they are often the basis for the more sophisticated ideas.

