Page 1:

Calculus of Variations and Optimal Control:

Continuous Systems (various problem conditions)

• x(0) & tf given (reminder)

• continuous time LQR

• tf given, x(0) & x(tf) [partly] given

• tf unconstrained, x(0), x(tf) [partly] given (example: min time)

• Integral constraint over the path (e.g. fixed path length)

• Equality constraints over functions of the state/control

Page 2:

Reminder: when x(0) and tf are known

• The Lagrangian (augmented cost) is:

• Its variation is

• Zeroing it out (+ keeping dynamics requirements) leads to the Euler Lagrange equations:

2

(Slide equations for Pages 1-2)

The system and cost:

\dot{x}(t) = f(x(t), u(t), t), \qquad x(t_0) \ \text{given}, \qquad t \in [t_0, t_f]

Find the control u that minimizes

J = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} L(x(t), u(t), t)\, dt

Adjoining the dynamics to J with Lagrange multipliers \lambda(t) gives the augmented cost

J_A = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} L(x(t), u(t), t)\, dt + \int_{t_0}^{t_f} \lambda^T(t)\, [f(x(t), u(t), t) - \dot{x}(t)]\, dt \qquad (1)

Defining the Hamiltonian

H = H(x(t), u(t), t) = L(x(t), u(t), t) + \lambda^T(t)\, f(x(t), u(t), t)

and integrating the \lambda^T(t)\dot{x}(t) term by parts,

J_A = \phi(x(t_f), t_f) - \lambda^T(t_f) x(t_f) + \lambda^T(t_0) x(t_0) + \int_{t_0}^{t_f} \left[ H(x(t), u(t), t) + \dot{\lambda}^T x(t) \right] dt \qquad (2)

The variation is

\delta J = \left[ \left( \frac{\partial \phi}{\partial x} - \lambda^T \right) \delta x \right]_{t = t_f} + \left[ \lambda^T \delta x \right]_{t = t_0} + \int_{t_0}^{t_f} \left[ \left( \frac{\partial H}{\partial x} + \dot{\lambda}^T \right) \delta x + \frac{\partial H}{\partial u}\, \delta u \right] dt \qquad (3)

Choosing \lambda(t) so that the \delta x terms vanish (\delta x at time t depends on \delta u up to time t):

\lambda^T(t_f) = \frac{\partial \phi}{\partial x}(t_f), \qquad \dot{\lambda}^T(t) = -\frac{\partial H(t)}{\partial x(t)} = -\frac{\partial L}{\partial x} - \lambda^T(t) \frac{\partial f}{\partial x}

leaves

\delta J = \lambda^T(t_0)\, \delta x(t_0) + \int_{t_0}^{t_f} \frac{\partial H}{\partial u}\, \delta u\, dt

Since x(0) is given, \delta x(t_0) = 0, so zeroing \delta J for every admissible \delta u requires

\frac{\partial H}{\partial u} = 0, \qquad t \in [t_0, t_f]

The Euler Lagrange equations are therefore

\dot{x}(t) = f(x(t), u(t), t)

\dot{\lambda}^T(t) = -\frac{\partial L}{\partial x} - \lambda^T(t) \frac{\partial f}{\partial x} \qquad (4)

\lambda^T(t_f) = \frac{\partial \phi}{\partial x}(t_f) \qquad (5)

\frac{\partial H}{\partial u} = 0 \qquad (6)

x_0 \ \text{given}

x is propagated forward from x_0, \lambda backward from t_f, and the two are coupled through u: a two point boundary-value problem.

Page 3:

• If L and f are not explicit functions of t (which happens often) then the Hamiltonian is constant (i.e. invariant) on the system's trajectory!

• On the extremal path both terms (the ∂H/∂u term and the ∂H/∂x + λ̇^T term) zero out, so dH/dt = ∂H/∂t = 0, meaning that H(t) = constant (a short sketch follows below).

3
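A short sketch of the standard argument, in the deck's notation: along a trajectory of the system,

\frac{dH}{dt} = \frac{\partial H}{\partial x} \dot{x} + \frac{\partial H}{\partial u} \dot{u} + \dot{\lambda}^T f + \frac{\partial H}{\partial t}

On an extremal path \partial H/\partial u = 0 and \dot{\lambda}^T = -\partial H/\partial x, so the first three terms cancel (since \dot{x} = f) and dH/dt = \partial H/\partial t, which vanishes when L and f have no explicit dependence on t.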

Page 4:

Continuous Time LQR

• Given x(0), tf and a linear system ẋ = Ax + Bu:

• Find a path that brings (some of) x close to zero without allowing it to vary too much on the way and without too much control energy. I.e. find u that minimizes

J = (1/2) x^T(tf) Sf x(tf) + (1/2) ∫_{t0}^{tf} [ x^T(t) Q x(t) + u^T(t) R u(t) ] dt

where Sf, Q and R are positive definite.

• Note: this can easily be generalized to time varying systems and costs.

• Solution will be very similar to the discrete case.

• The resulting u turns out to be a continuous state feedback rule.

4

Page 5:

Solution:

• The Hamiltonian is H = (1/2) x^T Q x + (1/2) u^T R u + λ^T (Ax + Bu)

• The end constraint on λ is λ(tf) = Sf x(tf)

• λ's diff. eq. is λ̇ = -Qx - A^T λ

• The third Euler Lagrange condition gives ∂H/∂u = u^T R + λ^T B = 0

Rearranging: u = -R^{-1} B^T λ

• This is a two point boundary value problem with x known at t0 and λ known at tf. The two linear differential equations are coupled through u.
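To make the two point boundary-value structure concrete, here is a minimal Python sketch that hands the coupled x/λ equations to a generic BVP solver (scipy.integrate.solve_bvp). The double-integrator system and weights are made-up examples, not from the slides.

import numpy as np
from scipy.integrate import solve_bvp

# Made-up example system and weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Sf = np.zeros((2, 2))                 # terminal weight, lambda(tf) = Sf x(tf)
x0 = np.array([1.0, 0.0])
t0, tf = 0.0, 10.0

def odes(t, y):
    # y stacks x (first 2 rows) and lambda (last 2 rows), vectorized over time points
    x, lam = y[:2], y[2:]
    u = -np.linalg.solve(R, B.T @ lam)        # u = -R^{-1} B^T lambda
    dx = A @ x + B @ u                        # state equation
    dlam = -Q @ x - A.T @ lam                 # costate equation
    return np.vstack([dx, dlam])

def bc(ya, yb):
    # x(t0) = x0 (known start), lambda(tf) = Sf x(tf) (known end condition on lambda)
    return np.hstack([ya[:2] - x0, yb[2:] - Sf @ yb[:2]])

mesh = np.linspace(t0, tf, 50)
sol = solve_bvp(odes, bc, mesh, np.zeros((4, mesh.size)))
print("converged:", sol.status == 0, " x(tf) =", sol.y[:2, -1])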

Page 6:

• Plugging u = -R^{-1} B^T λ into the state dynamics gives

• One can show (see Bryson & Ho, sec. 5.2) that there exists a time varying matrix S(t) that provides a linear relation between λ and x

Plugging this into the above state dynamics gives

i.e. u = -R^{-1} B^T S(t) x(t) is a linear state feedback. Similarly to the discrete case, to know the control rule we simply need to find S(t).

• We plug λ(t) = S(t)x(t) into λ's differential equation and get

• Next we plug (✽) into the above equation and after a bit of rearranging get

• Since x(t) ≠ 0 (otherwise there is no need for regulation) we can drop x and get...

Page 7:

...

• This is a quadratic differential equation in S with a boundary condition

• This equation is known as a matrix Riccati equation. It can be solved (e.g. by numeric integration from Sf backwards) to get S(t).

• Usually, this dynamic system is stable and reaches a steady state S∞ as the horizon tf - t → ∞. The Matlab function care solves for S∞ (if a real valued solution exists).

• S∞ is good for regulating the system for a long duration (i.e. forever).

• Since this differential equation is quadratic, it may have more than one solution. The desired solution is PSD. Starting from S = 0 (instead of S = Sf) and numerically integrating until convergence (i.e. till Ṡ ≈ 0) will give the PSD solution (see Bryson & Ho, sec. 5.4).

• We will see (in a future lecture) that the HJB equations show that J = x^T(t0) S(t0) x(t0). (This is also true for the discrete case, where we used the notation P instead of S.)

7
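For illustration (not from the slides): a small Python sketch that integrates the matrix Riccati equation backwards from S(tf) = Sf and compares the result with the steady-state solution; scipy.linalg.solve_continuous_are plays the role of Matlab's care here, and the system matrices are made-up examples.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Made-up example: double integrator with quadratic weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Sf = np.zeros((2, 2))      # terminal weight S(tf)
t0, tf = 0.0, 10.0

def riccati_rhs(t, s_flat):
    # dS/dt = -S A - A^T S + S B R^{-1} B^T S - Q   (the slide's Riccati equation)
    S = s_flat.reshape(2, 2)
    dS = -S @ A - A.T @ S + S @ B @ np.linalg.solve(R, B.T) @ S - Q
    return dS.ravel()

# Integrate backwards in time, from tf down to t0
sol = solve_ivp(riccati_rhs, (tf, t0), Sf.ravel())
S_t0 = sol.y[:, -1].reshape(2, 2)

# Infinite-horizon steady state (the analogue of Matlab's care)
S_inf = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ S_inf)   # feedback gain: u = -K x

print("S(t0) from backward integration:\n", S_t0)
print("Steady-state S_inf:\n", S_inf)
print("Infinite-horizon gain K:", K)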

Page 8:

tf given, x(0) & x(tf) partly or fully given

• Same variation of J holds:

• Suppose xk(tf) is given; then δxk(tf) = 0 and therefore we do not require λk(tf) = ∂φ/∂xk(tf) in order to zero out the variation.

• Similarly, if xk(t0) is not given, then δxk(t0) ≠ 0 and to zero out δJ we require that λk(t0) = 0.

• Note that if the system is not "controllable" the condition may be impossible to satisfy.

(Slide equations for Pages 6-8)

\dot{x} = A x - B R^{-1} B^T \lambda \qquad (10)

There exists a time varying matrix S(t) (Bryson & Ho, p. 150) such that

\lambda(t) = S(t)\, x(t), \qquad S(t_f) = S_f \qquad (11)

\dot{x} = \left[ A - B R^{-1} B^T S(t) \right] x(t) \qquad (12) \ (✽)

u = -R^{-1} B^T S(t)\, x(t)

\dot{S} x + S \dot{x} = -Q x - A^T S x

\left[ \dot{S} + S A + A^T S - S B R^{-1} B^T S + Q \right] x = 0

x(t) \neq 0 \ \Rightarrow \ \dot{S} = -S A - A^T S + S B R^{-1} B^T S - Q, \qquad S(t_f) = S_f

(The matrix Riccati equation; for the steady state S_\infty and the PSD solution see Bryson & Ho, pp. 167-168.)

For Page 8, the variation of J is the same as in (3):

\delta J = \left[ \left( \frac{\partial \phi}{\partial x} - \lambda^T \right) \delta x \right]_{t = t_f} + \left[ \lambda^T \delta x \right]_{t = t_0} + \int_{t_0}^{t_f} \left[ \left( \frac{\partial H}{\partial x} + \dot{\lambda}^T \right) \delta x + \frac{\partial H}{\partial u}\, \delta u \right] dt \qquad (13)

If x_k(t_f) is given then \delta x_k(t_f) = 0 and the condition \frac{\partial \phi}{\partial x_k(t_f)} - \lambda_k = 0 is not needed; if x_k(t_0) is not given then \delta x_k(t_0) \neq 0 and zeroing \delta J requires \lambda_k(t_0) = 0.

Page 9:

• The Euler Lagrange equations are now

9

(Slide equations for Page 9)

\dot{x}(t) = f(x(t), u(t), t) \qquad (14)

\dot{\lambda}^T(t) = -\frac{\partial H}{\partial x} = -\frac{\partial L}{\partial x} - \lambda^T(t) \frac{\partial f}{\partial x} \qquad (15)

\lambda_k(t_f) = \frac{\partial \phi}{\partial x_k(t_f)} \quad \text{or} \quad x_k(t_f) \ \text{given} \qquad (16)

\frac{\partial H}{\partial u} = 0 \qquad (17)

\forall k: \quad \lambda_k(0) = 0 \quad \text{or} \quad x_k(0) \ \text{given} \qquad (18)

Page 10: Minimum Jerk

• Find a path x(t), starting at rest (zero velocity & acceleration) from x0 at time t0 and ending at rest at xf at time tf, so that the cumulative squared rate of change of acceleration (the jerk) along the path is minimal.

• Define the state to be x = [x, v, a] (position, velocity, acceleration) and the control signal to be u(t) = ȧ, the jerk.

• The cost is J = (1/2) ∫_{t0}^{tf} u²(t) dt, i.e. L = (1/2) u² and φ = 0.

• We solve for the one dimensional (scalar position) case but solution holds for the multidimensional case as well.

10


Page 11:

• The dynamics equations are

• The Jacobian is therefore

• and (remembering that ∂L/∂x = 0)

• so, by (15), we get

11

(Slide equations for Page 11)

The dynamics:

f_1 = \dot{x} = v, \qquad f_2 = \dot{v} = a, \qquad f_3 = \dot{a} = u

The required gradients:

\frac{\partial f}{\partial x} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \frac{\partial L}{\partial x} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \frac{\partial f}{\partial u} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}

The costate equation:

\dot{\lambda}^T(t) = -\frac{\partial L}{\partial x} - \lambda^T(t) \frac{\partial f}{\partial x} = -[\lambda_1, \lambda_2, \lambda_3] \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} = [0, -\lambda_1, -\lambda_2]

Page 12:

• We can solve these differential equations:

• The Hamiltonian ( H = L+λTf ) is

• so

12

(Slide equations for Page 12)

Solving the costate differential equations:

\lambda_1 = c_1, \qquad \lambda_2 = -c_1 t + c_2, \qquad \lambda_3 = \tfrac{1}{2} c_1 t^2 - c_2 t + c_3

The Hamiltonian:

H = \tfrac{1}{2} u^2(t) + \lambda^T f = \tfrac{1}{2} u^2(t) + \lambda_1 v + \lambda_2 a + \lambda_3 u

\frac{\partial H}{\partial u} = 0 \ \Rightarrow \ 0 = u + \lambda_3 \ \Rightarrow \ u = -\tfrac{1}{2} c_1 t^2 + c_2 t - c_3

Page 13:

• Plugging u into the dynamics equations:

• Using the initial conditions x(0) = x0 and v(0) = a(0) = 0 we see that c4 = c5 =0 and c6 = x0 leaving us with

13

(Slide equations for Page 13)

\dot{a} = u \ \Rightarrow \ a = -\tfrac{1}{6} c_1 t^3 + \tfrac{1}{2} c_2 t^2 - c_3 t + c_4

\dot{v} = a \ \Rightarrow \ v = -\tfrac{1}{24} c_1 t^4 + \tfrac{1}{6} c_2 t^3 - \tfrac{1}{2} c_3 t^2 + c_4 t + c_5

\dot{x} = v \ \Rightarrow \ x = -\tfrac{1}{120} c_1 t^5 + \tfrac{1}{24} c_2 t^4 - \tfrac{1}{6} c_3 t^3 + \tfrac{1}{2} c_4 t^2 + c_5 t + c_6

With x(0) = x_0 and v(0) = a(0) = 0 this gives c_4 = c_5 = 0 and c_6 = x_0, leaving

a = -\tfrac{1}{6} c_1 t^3 + \tfrac{1}{2} c_2 t^2 - c_3 t

v = -\tfrac{1}{24} c_1 t^4 + \tfrac{1}{6} c_2 t^3 - \tfrac{1}{2} c_3 t^2

x = -\tfrac{1}{120} c_1 t^5 + \tfrac{1}{24} c_2 t^4 - \tfrac{1}{6} c_3 t^3 + x_0

Page 14:

• Using the final conditions, x(tf) = xf and v(tf) = a(tf) = 0 we get 3 equations in 3 unknowns (with tf as a parameter):

• The solution of the above equations yields:

• Resulting in

where

14

(Slide equations for Page 14)

\begin{pmatrix} 0 \\ 0 \\ x_f - x_0 \end{pmatrix} = \begin{pmatrix} -\tfrac{1}{6} t_f^3 & \tfrac{1}{2} t_f^2 & -t_f \\ -\tfrac{1}{24} t_f^4 & \tfrac{1}{6} t_f^3 & -\tfrac{1}{2} t_f^2 \\ -\tfrac{1}{120} t_f^5 & \tfrac{1}{24} t_f^4 & -\tfrac{1}{6} t_f^3 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}

c_1 = -\frac{720}{t_f^5}\,(x_f - x_0), \qquad c_2 = -\frac{360}{t_f^4}\,(x_f - x_0), \qquad c_3 = -\frac{60}{t_f^3}\,(x_f - x_0)

x(t) = x_0 + (x_f - x_0)\left( 6\tau^5 - 15\tau^4 + 10\tau^3 \right), \qquad \tau = \frac{t}{t_f}

Page 15:

15

[Figure: plots of the minimum jerk position x(t), velocity v(t), acceleration a(t) and control (jerk) u(t) over the movement duration.]
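As an added check (not part of the slides): a short Python sketch that evaluates the closed-form minimum jerk trajectory above and verifies the rest-to-rest boundary conditions; the numerical values are arbitrary examples.

import numpy as np

def min_jerk(t, x0, xf, tf):
    """Minimum-jerk position, velocity, acceleration and jerk at time t."""
    tau = t / tf
    d = xf - x0
    x = x0 + d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    v = d * (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / tf
    a = d * (60 * tau - 180 * tau**2 + 120 * tau**3) / tf**2
    u = d * (60 - 360 * tau + 360 * tau**2) / tf**3   # jerk, u = da/dt
    return x, v, a, u

x0, xf, tf = 0.0, 1.0, 100.0        # arbitrary example values
for t in (0.0, tf / 2, tf):
    print(t, np.round(min_jerk(t, x0, xf, tf), 6))
# At t = 0 and t = tf the velocity and acceleration vanish and the
# position equals x0 and xf respectively, as required.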

Page 16: What If tf Is Not Given?

• When tf is not constrained it becomes a parameter of the variation problem and affects the cost of the solution. All the previous optimality conditions still apply and an extra condition is added:
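In Bryson & Ho's treatment (sec. 2.7), which the slide cites, the extra condition is the free-final-time (transversality) condition:

\left[ \frac{\partial \phi}{\partial t} + H \right]_{t = t_f} = 0

i.e. \frac{\partial \phi}{\partial t}(x(t_f), t_f) + L(x(t_f), u(t_f), t_f) + \lambda^T(t_f)\, f(x(t_f), u(t_f), t_f) = 0.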

Proof sketch (see Bryson & Ho, sec 2.7 for real proof):

• The Lagrange multipliers augmented cost is, as before

• The variation is now also in tf. To simplify the proof we assume that x(tf) is unchanged by the variation in tf, that is x(t)=x(tf) for t∈[tf , tf + δt].

16


Page 17:

• The resulting variation is

where δJ is the variation given tf
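Written out (an added sketch consistent with the statement above and with Bryson & Ho, sec. 2.7; the bar notation is not from the slides): writing \delta \bar{J} for the total variation,

\delta \bar{J} = \delta J + \left[ \frac{\partial \phi}{\partial t} + H \right]_{t = t_f} \delta t_f

where \delta J is the fixed-t_f variation of (3). Requiring \delta \bar{J} = 0 for an arbitrary \delta t_f adds the extra condition stated on the previous page.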

• Note that the same result is reached without the assumption on x not varying.

17

Page 18: Minimum Time Problems

• This is an example of a common case where tf is not given.

• Problem setting: given a dynamical system ẋ(t) = f(x(t), u(t), t)

and some of the start/end state values xk(t0) and xk(tf), find the control that minimizes the following simple cost: the elapsed time.

• That is, J = ∫_{t0}^{tf} 1 dt = tf - t0, i.e. one may take L = 1 and φ = 0.

18


Page 19:

• The Euler Lagrange equations are as before, now specialized to this cost (a sketch follows after the next bullet).

• We have 2n boundary constraints for 2n differential equations, m optimality constraints for m control variables and one constraint for tf.
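A sketch of the specialization (an added note, taking L = 1 and φ = 0 as above):

\dot{x}(t) = f(x(t), u(t), t), \qquad \dot{\lambda}^T(t) = -\lambda^T(t) \frac{\partial f}{\partial x}, \qquad \frac{\partial H}{\partial u} = \lambda^T \frac{\partial f}{\partial u} = 0, \qquad H = 1 + \lambda^T f

with, for each k, either x_k(t_0) given or \lambda_k(t_0) = 0, either x_k(t_f) given or \lambda_k(t_f) = 0, and the free-t_f condition H(t_f) = 0, which is the one scalar constraint that determines t_f.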

19

Page 20: Integral Constraint On Path

• Suppose we have an extra integral requirement on the optimal trajectory: ∫_{t0}^{tf} N(x, u, t) dt = c.

• E.g. given a fixed amount of fuel, reach the destination with an empty tank (while obeying some optimality criteria). N(x, u, t) would be the fuel consumption rate and c the given fuel amount.

• Solution (by reduction to a regular problem with an extra state variable that has given boundary values): add a new state variable, xn+1, with the dynamics given below.
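Concretely, the standard augmentation (matching the description on the next page) is

\dot{x}_{n+1}(t) = N(x(t), u(t), t), \qquad x_{n+1}(t_0) = 0

so that x_{n+1}(t_f) = \int_{t_0}^{t_f} N(x, u, t)\, dt, and the integral constraint becomes the terminal condition x_{n+1}(t_f) = c.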

20


Page 21:

• It is clear that xn+1(tf) = ∫_{t0}^{tf} N(x, u, t) dt.

• Now require that xn+1(tf) = c (a given boundary value for the new state).

• Obviously, the augmented system obeys the integral constraint.

• (see Bryson & Ho Sec 3.1)

21

Page 22: Integral Constraint Example

• Maximum area under a curve of given length:

• We formulate it as a control problem: A particle moves with constant velocity (1) along the horizontal axis, starting at time 0 and ending at time a. The position along the x1 axis changes as a function of the angle θ, which is the control signal. The particle's path length (in the x1-t plane) must be p, and the area under the particle's path is maximized.

• We assume that -π/2 < θ < π/2 (true if p < πa).


Page 23:

• The corresponding equations are

• The cost is

23

Page 24:

• Solution:

• Add another state variable, x2, and replace the integral constraint with

• The Hamiltonian is

we know that it is a constant because L and f are not explicit functions of time (see previous slide).

• The Lagrange multipliers obey

24

Page 25:

• The Lagrange equation for u gives

so

i.e

• This means that under this control scheme the sine of θ (t) is linear.

• One can then use the given boundary conditions to find H, c1, c2, solve for x1, and show that the optimal path is an arc of a circle whose center is at , its radius is p/2 and α obeys

25

(Slide equations for Pages 22-25)

For Page 23, the control problem:

u = \theta, \qquad f_1 = \dot{x}_1 = \tan\theta, \qquad x_1(0) = 0, \quad x_1(a) = 0, \qquad t_f = a

p = \int_0^a \frac{1}{\cos\theta}\, dt \qquad (\text{the path length constraint, with } p < \pi a)

J = -\int_0^a x_1\, dt, \qquad \text{i.e.} \quad L = -x_1, \quad \phi = 0

For Page 24, the augmented problem:

f_2 = \dot{x}_2 = \frac{1}{\cos\theta}, \qquad x_2(0) = 0, \quad x_2(a) = p

H = L + \lambda^T f = -x_1 + \lambda_1 \tan\theta + \frac{\lambda_2}{\cos\theta} = \text{const} \qquad (20)

\dot{\lambda}_1 = -\frac{\partial H}{\partial x_1} = 1, \qquad \dot{\lambda}_2 = -\frac{\partial H}{\partial x_2} = 0 \quad \Rightarrow \quad \lambda_1 = t + c_1, \quad \lambda_2 = c_2

For Page 25, the condition on the control:

\frac{\partial H}{\partial u} = \frac{\partial H}{\partial \theta} = \frac{\lambda_1}{\cos^2\theta} + \lambda_2 \frac{\sin\theta}{\cos^2\theta} = 0

\lambda_1 = -\lambda_2 \sin\theta = -c_2 \sin\theta \quad \Rightarrow \quad \sin\theta(t) = -\frac{t + c_1}{c_2}, \ \text{linear in } t

Page 26: Equality Constraints Over Functions of the State/Control

• Suppose we wish to find a trajectory that obeys an extra set of equality constraints, c(x,u,t)=0.

• This is no different from requiring that f(x, u, t) - ẋ = 0.

• We therefore treat the new constraint in a similar manner. We rewrite the Hamiltonian to be H = L + λ^T f + ν^T c,

where ν is an extra vector of Lagrange multipliers that gets the exact same treatment as λ.

• The rest of the Euler Lagrange constraint derivation is unchanged.

• (see Bryson & Ho, sec. 3.3)

26


Page 27: Some Other Variation Variants

• Functions of the state variables given at t0 and/or given or unconstrained terminal time tf (e.g. manipulate a robotic hand from one curved wall to another along the shortest path).

• Via point problems (the trajectory is constrained to obey some rule at a certain time or position along its path).

• Problems where the Lagrange equations do not produce a constraint on the control signal (although it is obviously constrained), e.g. bang-bang control.

• ...

27


