Page 1: Lecture 09 EAILC

Lecture 9: Stochastic / Predictive Self-Tuning Regulators.

• Minimum Variance Control.

• Moving Average Controller.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 1/14

Pages 2–3: Lecture 09 EAILC

The stochastic model

Assume the plant is represented by the special ARMAX model

A(q) y(t) = B(q) u(t) + C(q) e(t)

where {e(t)} is white noise,

A(q) = q^n + a_1 q^(n−1) + · · · + a_n,   deg{A} = n

B(q) = b_{d0} q^(n−d0) + · · · + b_n,   deg{B} = n − d0

C(q) = q^n + c_1 q^(n−1) + · · · + c_n,   deg{C} = n

Equivalently,

y(t) = −a_1 y(t − 1) − · · · − a_n y(t − n)
       + b_{d0} u(t − d0) + · · · + b_n u(t − n)
       + e(t) + c_1 e(t − 1) + · · · + c_n e(t − n)

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 2/14
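
Since the whole lecture works with this difference-equation form, simulating it directly is a useful sanity check. The sketch below is only an illustration, not part of the lecture; the function name simulate_armax and the convention of passing b as the list [b_d0, ..., b_n] are my own assumptions.

    import numpy as np

    def simulate_armax(a, b, c, u, d0, sigma=1.0, rng=None):
        # Simulate y(t) = -a_1 y(t-1) - ... - a_n y(t-n)
        #               + b_d0 u(t-d0) + ... + b_n u(t-n)
        #               + e(t) + c_1 e(t-1) + ... + c_n e(t-n)
        # with a = [a_1..a_n], c = [c_1..c_n], b = [b_d0..b_n], u the input sequence.
        rng = np.random.default_rng() if rng is None else rng
        n, N = len(a), len(u)
        e = sigma * rng.standard_normal(N)
        y = np.zeros(N)
        for t in range(N):
            acc = e[t]
            for i in range(1, n + 1):
                if t - i >= 0:
                    acc += -a[i - 1] * y[t - i] + c[i - 1] * e[t - i]
            for k, bk in enumerate(b, start=d0):
                if t - k >= 0:
                    acc += bk * u[t - k]
            y[t] = acc
        return y, e

    # example: first-order plant y(t) = -0.9 y(t-1) + u(t-1) + e(t) + 0.5 e(t-1)
    y, e = simulate_armax(a=[0.9], b=[1.0], c=[0.5], u=np.zeros(1000), d0=1)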

Pages 4–10: Lecture 09 EAILC

Remarks on modeling noise

Assuming that C is stable is not restrictive: typically, such a model is obtained experimentally from spectrum characteristics, and such a model does not change if the unstable roots are replaced by stable ones that are symmetric to them with respect to the unit circle.

Apparently, one can also make deg{C} = deg{A} = n by renaming signals. Let us see it on the example

y(t + 1) + a y(t) = b u(t) + c e(t)

It can be rewritten as

(q + a) y(t) = b u(t) + c e(t) = b u(t) + c q e(t − 1) = b u(t) + q e_new(t)

introducing e_new(t) = c e(t − 1), that is,

y(t + 1) + a y(t) = b u(t) + e_new(t + 1)

with e_new(t) = c e(t − 1) being white noise with var{e_new(t)} = c^2 var{e(t)}.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 3/14
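
A small numeric illustration of the mirroring argument (my own example, not from the slides): the unstable factor q + 2 and the mirrored stable factor q + 0.5, rescaled by the magnitude 2 of the removed root, give the same spectrum on the unit circle, which is why the identified noise model can always be taken with a stable C.

    import numpy as np

    w = np.linspace(0.0, np.pi, 7)
    z = np.exp(1j * w)                                # points on the unit circle
    spec_unstable = np.abs(z + 2.0) ** 2              # |C(e^{jw})|^2 for C(q) = q + 2 (root at -2)
    spec_mirrored = np.abs(2.0 * (z + 0.5)) ** 2      # mirrored root -0.5, gain rescaled by 2
    print(np.allclose(spec_unstable, spec_mirrored))  # True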

Pages 11–14: Lecture 09 EAILC

Minimum-variance control: Example

Consider the model

y(t) = −a y(t − 1) + b u(t − 1) + c e(t − 1) + e(t)

where |c| < 1 and {e(t)} is a sequence of random variables with E{e(t)} = 0 and var{e(t)} = σ^2.

Our goal is to regulate y(t) as close to 0 as possible, i.e. with minimal variance.

In the absence of noise,

y(t) = −a y(t − 1) + b u(t − 1),

the best strategy (dead-beat design) is

u(t) = (a/b) y(t)   =⇒   y(t + 1) = 0

Similarly, when c = 0,

y(t) = −a y(t − 1) + b u(t − 1) + e(t)

and the same dead-beat controller gives

u(t) = (a/b) y(t)   =⇒   y(t + 1) = e(t + 1)

What should we do when noise dynamics are present?

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 4/14

Pages 15–21: Lecture 09 EAILC

Minimum-variance control: Example (cont’d)

We have the process

y(t + 1) + a y(t) = b u(t) + e(t + 1) + c e(t)

Let us try to compute e(t) symbolically. In operator form,

(q + a) y(t) = b u(t) + (q + c) e(t)

so that

e(t) = ((q + a)/(q + c)) y(t) − (b/(q + c)) u(t)

Substituting back into the model,

y(t + 1) = −a y(t) + b u(t) + e(t + 1) + c [ ((q + a)/(q + c)) y(t) − (b/(q + c)) u(t) ]

which simplifies to

y(t + 1) = ((c − a) q/(q + c)) y(t) + (b q/(q + c)) u(t) + e(t + 1)

where the first two terms are known at time t and independent of e(t + 1).

Clearly, var{y(t + 1)} ≥ var{e(t + 1)} = σ^2.

The minimal variance is achieved with

u(t) = −((c − a)/b) y(t),

which cancels the known part and gives y(t + 1) = e(t + 1).

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 5/14
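
A Monte Carlo check of this example (a sketch with arbitrary numbers a = 0.9, b = 1, c = 0.5, σ = 1 of my own choosing): the minimum-variance gain should give an output variance close to σ^2, while the dead-beat gain a/b leaves the noise dynamics in the loop and gives roughly (1 + c^2) σ^2.

    import numpy as np

    a, b, c, sigma, N = 0.9, 1.0, 0.5, 1.0, 100_000
    rng = np.random.default_rng(0)
    e = sigma * rng.standard_normal(N + 1)

    def output_variance(gain):
        # simulate y(t+1) = -a y(t) + b u(t) + e(t+1) + c e(t) with u(t) = gain * y(t)
        y = np.zeros(N + 1)
        for t in range(N):
            y[t + 1] = (-a + b * gain) * y[t] + e[t + 1] + c * e[t]
        return np.var(y[1000:])                        # discard the transient

    print(output_variance(-(c - a) / b))               # minimum variance: ~ 1.0  (= sigma^2)
    print(output_variance(a / b))                      # dead-beat gain:   ~ 1.25 (= (1 + c^2) sigma^2)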

Pages 22–27: Lecture 09 EAILC

Minimum-variance control: General case

Assume the plant is represented by the special ARMAX model

A(q) y(t) = B(q) u(t) + C(q) e(t)

where {e(t)} is white noise with var{e(t)} = σ^2.

Additional assumptions:

• The polynomials A and B are monic (after appropriate rescaling).

• deg{A} = deg{C} = n ≥ 1 (can always be arranged).

• deg{A} − deg{B} = d0 ≥ 1 – a known relative degree.

• The polynomial C is stable (can always be arranged).

• The polynomial B is stable – a serious restriction.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 6/14
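
These assumptions are easy to check numerically for a given coefficient set. The helper below is my own sketch, not from the lecture; it treats polynomials as coefficient arrays with the highest power first.

    import numpy as np

    def check_assumptions(A, B, C):
        n, d0 = len(A) - 1, (len(A) - 1) - (len(B) - 1)
        return {
            "deg A = deg C = n >= 1": len(A) == len(C) and n >= 1,
            "relative degree d0 >= 1": d0 >= 1,
            "C stable": bool(np.all(np.abs(np.roots(C)) < 1)),
            "B stable": bool(np.all(np.abs(np.roots(B)) < 1)),
        }

    print(check_assumptions(A=[1.0, -1.5, 0.7], B=[1.0, 0.5], C=[1.0, -1.0, 0.2]))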

Pages 28–32: Lecture 09 EAILC

Minimum-variance control: General case (cont’d)

We have the process

A(q) y(t) = B(q) u(t) + C(q) e(t)

Let us solve it for y(t):

y(t) = (B(q)/A(q)) u(t) + (C(q)/A(q)) e(t)

Shifting by the relative degree,

y(t + d0) = (B(q)/A(q)) u(t + d0) + (C(q)/A(q)) e(t + d0)

and rewriting,

y(t + d0) = (B(q)/A(q)) u(t + d0) + (C(q) q^(d0−1)/A(q)) e(t + 1)

Using polynomial long division,

C(q) q^(d0−1)/A(q) = F(q) + G(q)/A(q)

where G(q)/A(q) is strictly proper and

deg{F} = deg{C(q) q^(d0−1)} − deg{A} = d0 − 1.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 7/14
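
The long division can be carried out numerically with numpy's polydiv (coefficients listed highest power first). The coefficient values below are placeholders of my own choosing (they happen to match the homework data, with d0 = 1); this is a sketch, not part of the slide.

    import numpy as np

    a1, a2, c1, c2, d0 = -1.5, 0.7, -1.0, 0.2, 1
    A = np.array([1.0, a1, a2])                                 # A(q) = q^2 + a1 q + a2
    C = np.array([1.0, c1, c2])                                 # C(q) = q^2 + c1 q + c2
    dividend = np.polymul(C, np.r_[1.0, np.zeros(d0 - 1)])      # q^(d0-1) C(q)

    F, G = np.polydiv(dividend, A)
    print(F)                                                    # [1.]        -> F(q) = 1, deg F = d0 - 1
    print(G)                                                    # [ 0.5 -0.5] -> G(q) = (c1 - a1) q + (c2 - a2)
    print(np.allclose(np.polyadd(np.polymul(A, F), G), dividend))   # division identity q^(d0-1) C = A F + G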

Pages 33–39: Lecture 09 EAILC

Minimum-variance control: General case (cont’d)

We have rewritten the process as

y(t + d0) = (B(q)/A(q)) u(t + d0) + ( F(q) + G(q)/A(q) ) e(t + 1)

Finally, the process can be represented by

y(t + d0) = (q^d0 B(q)/A(q)) u(t) + (q G(q)/A(q)) e(t) + F(q) e(t + 1)

where the first two terms are proper fractions.

Solving the model for e(t), we have

e(t) = (A(q)/C(q)) y(t) − (B(q)/C(q)) u(t)

After substituting it back,

y(t + d0) = (q^d0 B/A) u(t) + (q G/A) [ (A/C) y(t) − (B/C) u(t) ] + F e(t + 1)

and collecting terms,

y(t + d0) = (q G/C) y(t) + (q B/(A C)) (q^(d0−1) C − G) u(t) + F e(t + 1)

Substituting F(q) from C(q) q^(d0−1)/A(q) = F(q) + G(q)/A(q), i.e. q^(d0−1) C = A F + G,

y(t + d0) = (q G(q)/C(q)) y(t) + (q B(q) F(q)/C(q)) u(t) + F(q) e(t + 1)

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 8/14

Pages 40–47: Lecture 09 EAILC

Minimum-variance control: General case (cont’d)

We have rewritten the process as

y(t + d0) = (q G(q)/C(q)) y(t) + (q B(q) F(q)/C(q)) u(t) + F(q) e(t + 1)

where the first two terms are known at time t and independent of F(q) e(t + 1).

Clearly,

var{y(t + d0)} ≥ var{F(q) e(t + 1)} = (1 + f_1^2 + · · · + f_{d0−1}^2) σ^2

where F(q) = q^(d0−1) + f_1 q^(d0−2) + · · · + f_{d0−1}.

The minimal variance is achieved with

u(t) = −(G(q)/(B(q) F(q))) y(t).

The closed-loop system is then

A(q) y(t) = B(q) ( −(G(q)/(B(q) F(q))) y(t) ) + C(q) e(t)

that is,

B(q) (A(q) F(q) + G(q)) y(t) = B(q) F(q) C(q) e(t)

or, using q^(d0−1) C = A F + G,

q^(d0−1) B(q) C(q) y(t) = B(q) F(q) C(q) e(t)

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 9/14
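
To make the general formulas concrete, here is a closed-loop simulation of a toy plant with relative degree d0 = 2 (my own example, not from the lecture): A(q) = q^2, B(q) = 1, C(q) = q^2 + c1 q + c2, for which the long division gives F(q) = q + c1 and G(q) = c2 q, so the minimum-variance law is (q + c1) u(t) = −c2 q y(t). The sample variance should approach (1 + c1^2) σ^2, and the closed-loop output should behave as a moving average of order d0 − 1 = 1.

    import numpy as np

    c1, c2, sigma, N = 0.6, 0.2, 1.0, 200_000
    rng = np.random.default_rng(1)
    e = sigma * rng.standard_normal(N)

    y, u = np.zeros(N), np.zeros(N)
    for t in range(N):
        # plant: y(t) = u(t-2) + e(t) + c1 e(t-1) + c2 e(t-2)
        y[t] = ((u[t - 2] if t >= 2 else 0.0) + e[t]
                + (c1 * e[t - 1] if t >= 1 else 0.0)
                + (c2 * e[t - 2] if t >= 2 else 0.0))
        # minimum-variance controller: u(t) = -c1 u(t-1) - c2 y(t)
        u[t] = -c1 * (u[t - 1] if t >= 1 else 0.0) - c2 * y[t]

    yy = y[100:]
    print(np.var(yy), (1 + c1**2) * sigma**2)      # both close to 1.36
    print(np.corrcoef(yy[:-2], yy[2:])[0, 1])      # ~ 0: no correlation beyond lag d0 - 1 = 1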

Pages 48–53: Lecture 09 EAILC

Minimum-variance control: Remarks

(1) The closed-loop system

q^(d0−1) B(q) C(q) y(t) = B(q) F(q) C(q) e(t)

defines the noise-to-output relation

y(t) = (B(q) F(q) C(q))/(q^(d0−1) B(q) C(q)) e(t)
     = (F(q)/q^(d0−1)) e(t)
     = (1 + f_1 q^(−1) + · · · + f_{d0−1} q^(−d0+1)) e(t)

that is,

y(t) = e(t) + f_1 e(t − 1) + · · · + f_{d0−1} e(t − d0 + 1)

which is a moving average of order d0 − 1.

(2) The closed-loop poles consist of (a) the zeros of B(q), (b) the zeros of C(q), and (c) d0 − 1 poles at the origin.

(3) The minimum-variance controller

u(t) = −(G(q)/(B(q) F(q))) y(t)

can be interpreted as a pole-placement controller.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 10/14

Pages 54–56: Lecture 09 EAILC

Minimum-variance control as pole-placement

Since

q^(d0−1) C(q) = A(q) F(q) + G(q),

the closed-loop characteristic polynomial is

Ac(q) = q^(d0−1) B(q) C(q) = A(q) R(q) + B(q) S(q),   with R(q) = B(q) F(q) and S(q) = G(q).

Note that for the Diophantine equation Ac = A R + B S

deg{S(q)} = deg{G(q)} = n − 1

and S/R is proper, since

deg{R} = deg{B} + deg{F} = (n − d0) + (d0 − 1) = n − 1.

Can we use an analogous pole-placement technique for the case when B(q) has unstable zeros?

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 11/14
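
A quick numeric spot-check of the identity Ac = A R + B S with R = B F and S = G (placeholder coefficients of my own choosing, with n = 2 and d0 = 1):

    import numpy as np

    A = np.array([1.0, -1.5, 0.7])                  # q^2 - 1.5 q + 0.7
    B = np.array([1.0, 0.5])                        # q + 0.5 (stable and monic)
    C = np.array([1.0, -1.0, 0.2])                  # q^2 - q + 0.2
    d0 = len(A) - len(B)                            # relative degree = 1

    F, G = np.polydiv(np.polymul(C, np.r_[1.0, np.zeros(d0 - 1)]), A)
    R, S = np.polymul(B, F), G

    Ac_left = np.polymul(np.r_[1.0, np.zeros(d0 - 1)], np.polymul(B, C))   # q^(d0-1) B C
    Ac_right = np.polyadd(np.polymul(A, R), np.polymul(B, S))              # A R + B S
    print(np.allclose(Ac_left, Ac_right))                                  # True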

Pages 57–61: Lecture 09 EAILC

Moving Average Controller

Design the controller

u(t) = −(S(q)/R(q)) y(t)   for   A(q) y(t) = B(q) u(t) + C(q) e(t)

as follows.

• Factor B as B = B+ B−, where B+ is monic and stable.

• Set d = deg{A} − deg{B+}.

• Find Rp and S solving the Diophantine equation

q^(d−1) C = A Rp + B− S,   d ≥ d0

• Let R = Rp B+, where deg{Rp} = d − 1.

The obtained transfer function for the closed-loop system is

y(t) = q^(1−d) Rp(q) e(t) = e(t) + r_1 e(t − 1) + · · · + r_{d−1} e(t − d + 1)

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 12/14
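
Step one of the design, the factorization B = B+ B−, can be read off the roots of B. The sketch below is my own illustration with an arbitrary B(q) = 2 (q − 0.5)(q − 3): the stable root goes into the monic B+, while the unstable root and the leading coefficient go into B−.

    import numpy as np

    B = np.array([2.0, -7.0, 3.0])                         # B(q) = 2 q^2 - 7 q + 3 = 2 (q - 0.5)(q - 3)
    roots = np.roots(B)
    B_plus = np.poly(roots[np.abs(roots) < 1])             # monic, stable:  q - 0.5
    B_minus = B[0] * np.poly(roots[np.abs(roots) >= 1])    # the rest:       2 q - 6
    print(B_plus, B_minus)
    print(np.allclose(np.polymul(B_plus, B_minus), B))     # True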

Pages 62–64: Lecture 09 EAILC

Example 4.3

Consider the plant described by

(q^2 + a_1 q + a_2) y(t) = (b_0 q + b_1) u(t) + (q^2 + c_1 q + c_2) e(t)

1. If |b_1/b_0| < 1, a minimum-variance (MV) controller can be designed by noticing that

q^(1−1) (q^2 + c_1 q + c_2)/(q^2 + a_1 q + a_2) = 1 + ((c_1 − a_1) q + (c_2 − a_2))/(q^2 + a_1 q + a_2),

as follows:

u(t) = −(G(q)/(B(q) F(q))) y(t) = −((c_1 − a_1) q + (c_2 − a_2))/((b_0 q + b_1) · 1) y(t)

2. If |b_1/b_0| > 1, a minimum-variance controller cannot be designed, but we can apply the Moving Average (MA) controller with d = 2, using the solution of the Diophantine equation

q (q^2 + c_1 q + c_2) = (q^2 + a_1 q + a_2)(q + r_1) + (b_0 q + b_1)(s_0 q + s_1)

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 13/14
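
For case 2, matching the coefficients of q^2, q^1 and q^0 turns the Diophantine equation into a 3-by-3 linear system for r_1, s_0, s_1. A sketch (the values a1 = −1.5, a2 = 0.7, c1 = −1, c2 = 0.2 follow the homework data on the next slide; b0 = 1 and b1 = 2 are my own choice with |b1/b0| > 1):

    import numpy as np

    a1, a2, c1, c2 = -1.5, 0.7, -1.0, 0.2
    b0, b1 = 1.0, 2.0

    # q^2:  r1 + b0 s0            = c1 - a1
    # q^1:  a1 r1 + b1 s0 + b0 s1 = c2 - a2
    # q^0:  a2 r1 + b1 s1         = 0
    M = np.array([[1.0, b0, 0.0],
                  [a1, b1, b0],
                  [a2, 0.0, b1]])
    r1, s0, s1 = np.linalg.solve(M, [c1 - a1, c2 - a2, 0.0])
    print(r1, s0, s1)

    # verify: q (q^2 + c1 q + c2) = (q^2 + a1 q + a2)(q + r1) + (b0 q + b1)(s0 q + s1)
    lhs = np.polymul([1.0, 0.0], [1.0, c1, c2])
    rhs = np.polyadd(np.polymul([1.0, a1, a2], [1.0, r1]),
                     np.polymul([b0, b1], [s0, s1]))
    print(np.allclose(lhs, rhs))                            # True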

Page 65: Lecture 09 EAILC

Next Lecture / Assignments:

Next meeting (April 28, 10:00–12:00, in A206Tekn): Indirect Minimum-Variance and Stochastic Self-Tuning Regulators.

Homework problems: Consider the process in Example 4.3 with a_1 = −1.5, a_2 = 0.7, b_0 = 1, c_1 = −1, and c_2 = 0.2.

Determine the variance of the output in the closed-loop system as a function of b_1 when the Moving Average controller is used. Compare it with the lowest achievable variance.

© Leonid Freidovich. April 25, 2008. Elements of Iterative Learning and Adaptive Control: Lecture 9 – p. 14/14

