SYSTEM IDENTIFICATION

The system identification problem is to estimate a model of a system based on input-output data.

Basic Configuration

[Figure: block diagrams of the basic configuration. Continuous case: the input u(t) (observed) drives the System, a disturbance v(t) (not observed) also acts on it, and the output y(t) is observed. Discrete case: the same structure with the sequences {u(k)}, {v(k)} and {y(k)}.]
We observe an input sequence (a sampled signal)

  {u(k)} = {u(0), u(1), ..., u(k), ..., u(N)}

and an output sequence

  {y(k)} = {y(0), y(1), ..., y(k), ..., y(N)}

If we assume the system is linear we can write, using standard z-transform notation:-

  Y(z) = G(z)U(z) + V(z)
[Figure: U(z) enters G(z); the disturbance V(z) is added to the output of G(z) to give Y(z)]

The disturbance v(k) is often considered as generated by filtered white noise:-

[Figure: white noise ε(z) enters the filter H(z), producing the disturbance V(z); the input U(z) enters the process G(z); the two are summed to give the output Y(z)]

giving the description:

  Y(z) = G(z)U(z) + H(z)ε(z)
Parametric Models

ARX model (autoregressive with exogenous variables)

  Y(z) = z^-n [B(z)/A(z)] U(z) + [1/A(z)] ε(z)

where

  A(z) = 1 + a1 z^-1 + ... + a_na z^-na
  B(z) = b1 z^-1 + b2 z^-2 + ... + b_nb z^-nb

[Figure: ε(z) enters H(z) = 1/A(z) to give V(z); U(z) enters G(z) = z^-n B(z)/A(z); the two are summed to give Y(z)]
giving the difference equation:

  y(k) + a1 y(k-1) + ... + a_na y(k-na)
      = b1 u(k-n-1) + b2 u(k-n-2) + ... + b_nb u(k-n-nb) + ε(k)

and z^-n represents an extra delay of n sampling instants.

Identification problem:

  determine n, na, nb                             (structure)
  estimate a1, a2, ..., a_na, b1, b2, ..., b_nb   (parameters)
ARMAX model (autoregressive moving average with exogenous variables)

  Y(z) = z^-n [B(z)/A(z)] U(z) + [C(z)/A(z)] ε(z)

where

  A(z) = 1 + a1 z^-1 + ... + a_na z^-na
  B(z) = b1 z^-1 + b2 z^-2 + ... + b_nb z^-nb
  C(z) = 1 + c1 z^-1 + ... + c_nc z^-nc

[Figure: ε(z) enters H(z) = C(z)/A(z) to give V(z); U(z) enters G(z) = z^-n B(z)/A(z); the two are summed to give Y(z)]
giving the difference equation:

  y(k) + a1 y(k-1) + ... + a_na y(k-na)
      = b1 u(k-n-1) + b2 u(k-n-2) + ... + b_nb u(k-n-nb)
        + ε(k) + c1 ε(k-1) + ... + c_nc ε(k-nc)

Identification problem:

  determine n, na, nb, nc                  (structure)
  estimate a1, a2, ..., a_na,
           b1, b2, ..., b_nb,
           c1, c2, ..., c_nc               (parameters)
General Prediction Error Approach

[Figure: the input u(t) drives both the process (output y(t)) and a predictor with adjustable parameters θ; the prediction error e(t,θ) is fed to an algorithm that minimises some function of e(t,θ) and adjusts the predictor parameters]

The predictor is based on a parametric model. The algorithm is often based on a least squares method:

  min Σ(k=0 to N) e²(k)
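The least squares step can be sketched numerically. The following Python/NumPy fragment (an illustration, not part of the original material; the second-order system and all numerical values are invented) fits ARX parameters by stacking the difference equation into a linear regression:

```python
import numpy as np

# Illustrative second-order ARX system (na = 2, nb = 2, no extra delay);
# the system and all numbers here are invented for the example:
#   y(k) + a1*y(k-1) + a2*y(k-2) = b1*u(k-1) + b2*u(k-2) + e(k)
rng = np.random.default_rng(0)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
N = 2000
u = rng.standard_normal(N)
e = 0.01 * rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] + e[k]

# Least squares: stack the regressors phi(k) = [-y(k-1), -y(k-2), u(k-1), u(k-2)]
Phi = np.column_stack([-y[1:N - 1], -y[0:N - 2], u[1:N - 1], u[0:N - 2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print(theta)  # close to [a1, a2, b1, b2] = [-1.5, 0.7, 1.0, 0.5]
```

With white equation noise, as here, the least squares ARX estimate converges to the true parameters; with coloured noise it does not, which is the bias discussed under consistency below.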
Consistency

A desirable property of an estimate is that it converges to the true parameter value as the number of observations N increases towards infinity. This property is called consistency.

Consistency is exhibited by ARMAX model identification methods but not by ARX approaches (the parameter values exhibit bias).
Example of a MATLAB Identification Toolbox Session

[Figure: input and output data of the dryer model over 0-25 s; output #1 (top, range -2 to 2) and input #1 (bottom, range -1 to 1)]
MATLAB statements (ARX: na = 2, nb = 2, n = 3):

  th = arx(z2,[2 2 3]);   % z2 contains the data
  th = sett(th,0.08);     % set the correct sampling interval
  present(th)

Results:

  Loss fcn: 0.001685    Akaike's FPE: 0.001731    Sampling interval: 0.08
  The polynomial coefficients are
  B = 0        0        0        0.0666   0.0445
  A = 1.0000  -1.2737   0.3935
[Figure: ARX simulated (solid) and measured (dashed) outputs over 64-72 s; error = 6.56]

ARX model:

  G(z) = z^-3 (0.0666 + 0.0445 z^-1) / (1 - 1.2737 z^-1 + 0.3935 z^-2)
MATLAB Demo
ADAPTIVE CONTROL

[Figure (after K J Astrom): the reference enters a regulator that drives the process; the process parameters are slowly varying while the outputs and disturbances are fast varying; a performance assessment and updating mechanism adjusts the regulator parameters]
Adaptive control is a special type of nonlinear control in which the states of the process can be separated into two categories:-

(i) slowly varying states (viewed as parameters)
(ii) fast varying states (compensated by standard feedback)

In adaptive control it is assumed that there is feedback from the system performance which adjusts the regulator parameters to compensate for the slowly varying process parameters.
Adaptive Control Problem

An adaptive controller will contain:-

• characterization of desired closed-loop performance (reference model or design specifications)
• control law with adjustable parameters
• design procedure
• parameter updating based on measurements
• implementation of the control law (discrete or continuous)
Overview of Some Adaptive Control Schemes

Gain Scheduling

[Figure: the command signal u_c enters the regulator, which produces the control signal u for the process with output y; a gain schedule maps the operating conditions to the regulator parameters]

The regulator parameters are adjusted to suit different operating conditions. Gain scheduling is an open-loop compensation.
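As an illustration of the schedule block (all numbers and the scheduling variable are invented, not from the original), a gain schedule is often just an interpolated lookup table indexed by the measured operating condition:

```python
import numpy as np

# Hypothetical gain schedule: regulator gain versus a measured operating
# condition (all numbers invented); intermediate points are interpolated.
operating_points = np.array([100.0, 200.0, 300.0, 400.0])
scheduled_gains = np.array([2.0, 1.2, 0.8, 0.6])

def scheduled_gain(condition):
    """Open-loop lookup of the regulator gain for the current condition."""
    return float(np.interp(condition, operating_points, scheduled_gains))

print(scheduled_gain(250.0))  # 1.0 (midway between 1.2 and 0.8)
```

Because the lookup is open loop, nothing checks whether the scheduled gain is actually appropriate; that is the weakness gain scheduling shares with any non-adaptive compensation.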
18
Auto-tuning
PIDcontroller Process
KT s
T si
d1 1
FHG IKJ+
_
parameters K, Ti, Td
PID controllers are traditionally tuned using simple experiments and empirical rules. Automatic methods can be applied to tune these controllers.
(i) experimental phase using test signals; then:-
(ii) use of standard rules to compute PID parameters.
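Step (ii) can be illustrated with the classical Ziegler-Nichols ultimate-sensitivity rules, which map the ultimate gain Ku and ultimate period Tu found in the experimental phase to settings for the controller K(1 + 1/(Ti s) + Td s). The rule constants are the standard published Ziegler-Nichols values; the example numbers are invented:

```python
def zn_pid(Ku, Tu):
    """Classical Ziegler-Nichols ultimate-sensitivity rules:
    map ultimate gain Ku and ultimate period Tu to (K, Ti, Td)
    for the controller K*(1 + 1/(Ti*s) + Td*s)."""
    return 0.6 * Ku, 0.5 * Tu, 0.125 * Tu

# e.g. a relay or gain-increase experiment (invented numbers) gave:
K, Ti, Td = zn_pid(Ku=4.0, Tu=2.0)
print(K, Ti, Td)  # 2.4 1.0 0.25
```

An auto-tuner automates both steps: it runs the experiment (for example with a relay in the loop), extracts Ku and Tu, and applies a rule such as this one.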
19
Model Reference Adaptive Systems MRAS
regulator process actualoutput
yuuc
modelidealoutputym
adjustmentmechanism
regulatorparameters
The parameters of the regulator are adjusted such that the error e = y - y_m becomes small. The key problem is to determine an appropriate adjustment mechanism and a suitable control law.

MIT rule adjustment mechanism:

  dθ/dt = -γ e (∂e/∂θ)

where γ determines the adaptation rate. This rule changes the parameters θ in the direction of the negative gradient of e².
Combining the MIT rule with the control law:

  u = θ1 u_c - θ2 y

and computing the sensitivity derivatives ∂e/∂θ produces the scheme:

[Figure: u_c drives the reference model (output y_m) and, via a multiplier implementing the control law, the process (output y); the error e = y - y_m is passed through a filter, multiplied by the sensitivity signals and integrated with gain -γ to update the parameters θ1 and θ2]

Note: steady state will be achieved when the input to the integrator becomes zero, that is, when y = y_m.
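As a sketch of how the MIT rule behaves, the following Python fragment (illustrative values, and simplified to a single feedforward gain θ rather than the two-parameter law above) adapts u = θ u_c for a first-order plant k/(s+1) against a reference model k0/(s+1). Here the sensitivity ∂e/∂θ is proportional to y_m, so the MIT rule reduces to dθ/dt = -γ e y_m:

```python
import math

# MIT rule reduced to a single feedforward gain (illustrative values):
#   plant:  dy/dt  = -y  + k*u,    k  = 2.0  (unknown to the controller)
#   model:  dym/dt = -ym + k0*uc,  k0 = 1.0
#   law:    u = theta*uc
#   MIT:    dtheta/dt = -gamma*e*ym, with e = y - ym
k, k0, gamma, dt = 2.0, 1.0, 0.2, 0.001
y = ym = theta = 0.0
for step in range(int(200.0 / dt)):          # 200 s of Euler integration
    uc = math.sin(0.5 * step * dt)           # persistently exciting command
    e = y - ym
    theta += dt * (-gamma * e * ym)
    y += dt * (-y + k * theta * uc)
    ym += dt * (-ym + k0 * uc)
print(theta)  # approaches k0/k = 0.5, so that y tracks ym
```

The small adaptation gain γ matters: the MIT rule carries no stability guarantee, and large γ can destabilise the loop.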
Self-Tuning Regulators (STR)

[Figure: the command u_c enters the regulator, which produces u for the process with actual output y; an estimator recursively updates the process parameters, and a design calculation converts them into the regulator parameters]
The process parameters are updated and the regulator parameters are obtained from the solution of a design problem. The adaptive regulator consists of two loops:-

(i) an inner loop consisting of the process and a linear feedback regulator
(ii) an outer loop composed of a parameter estimator (recursive) and a design calculation. (To obtain good estimates it is usually necessary to introduce perturbation signals.)

Two problems:-

(i) the underlying design problem
(ii) the real-time parameter estimation problem
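The recursive estimator in the outer loop is commonly a recursive least squares (RLS) algorithm. A generic sketch follows (the standard RLS update with a forgetting factor; the example system and all values are invented):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step (forgetting factor lam) for the
    regression y(k) = phi(k)'theta + e(k); returns updated (theta, P)."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P

# Illustrative use (invented system): estimate y(k) = 2*u(k) - u(k-1)
rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1000.0 * np.eye(2)
u_prev = 0.0
for _ in range(500):
    u = rng.standard_normal()
    phi = np.array([u, u_prev])
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
    u_prev = u
print(theta)  # close to [2, -1]
```

A forgetting factor lam < 1 discounts old data, which is what lets the estimator track the slowly varying process parameters.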
MODEL REFERENCE ADAPTIVE CONTROL

Example - SIMULINK Simulation of MRAS

[Figure: SIMULINK block diagram of the MRAS: process 0.5/(s+1), reference model 2/(s+2), filter 1/(s+2), two multipliers, adaptation gains g1 and g2, integrators producing the adjustable parameters, and scopes recording the reference error and the reference, output and command signals]
[Figure: input, reference and actual outputs of the MRAS simulation over 0-150 s]
MATLAB Demo
INTRODUCTION TO THE KALMAN FILTER

State Estimation Problem

[Figure: the system has inputs u(t) and w(t), states x(t), measurement noise v(t) and output y(t)]

  dx/dt = Ax + Bu + Gw
  y = Cx + Du + v

Vectors w(t) and v(t) are noise terms, representing unmeasured system disturbances and measurement errors respectively. They are assumed to be independent, white, Gaussian, and to have zero mean. In mathematical terms:-
  E[ v(t) w(τ)^T ] = 0             for all t and τ
  E[ w(t) w(τ)^T ] = Q δ(t-τ)      (Q assumed constant)
  E[ v(t) v(τ)^T ] = R δ(t-τ)      (R assumed constant)

where Q and R are symmetric and non-negative definite covariance matrices. (E is the expectation operator.)

Only u(t) and y(t) are accessible.

The state estimation problem is to estimate the states x(t) from a knowledge of u(t) and y(t) (and assuming we know A, B, G, C, D, Q, and R).
Construction of the Kalman-Bucy Filter

Filter equation:-

  dx̂/dt = A x̂ + B u + L(t) (y - C x̂ - D u)

[Figure: the system, with input u(t) and output y(t), feeds the filter; the filter contains a copy of the model (A, B, C, D) and an integrator, and the gain L(t) acts on the innovation y(t) - ŷ(t) to produce the state estimate x̂(t)]
The estimation problem is now to find L(t) such that the error between the real states x(t) and the estimated states x̂(t) is minimized. This can be formulated as:

  min over L(t):  E[ (x(t) - x̂(t))^T (x(t) - x̂(t)) ]

Filter equation:-

  dx̂/dt = A x̂ + B u + L(t) (y - C x̂ - D u)

L(t) is a time-dependent matrix gain.

R E Kalman
31
Duality Between the Optimum State Estimation Problem and the Optimum Regulator Problem
It can be shown that the optimum state estimation problem:min [ ( ) ( )] [ ( ) ( )]
( )Lx x x x
tE t t t tT FHG IKJ
subject to:
( )( )ˆ ˆ ˆ( )= , ( )=T T
tE E
x Ax Bu Gwy Cx Du vx Ax Bu L y Cx Du
ww Q vv R
is the dual of the optimum regulator problem:min
( )Lx GQG x u Ru
tT T TT
dt 12 0
FHG IKJzsubject to:
( )x A x C uu L x
T T
t
Thus L(t) can be obtained by solving the matrix Riccati equation:

  dS/dt = A S + S A^T - S C^T R^-1 C S + G Q G^T

  L(t) = S(t) C^T R^-1

Furthermore, for large measurement times L(t) converges to:

  lim (t → ∞) L(t) = L̄

a constant matrix gain.
Linear Quadratic Estimator Design Using MATLAB

  LQE    Linear quadratic estimator design.
         For the continuous-time system:
             dx/dt = Ax + Bu + Gw    {State equation}
             z = Cx + Du + v         {Measurements}
         with process noise and measurement noise covariances:
             E{w} = E{v} = 0,  E{ww'} = Q,  E{vv'} = R,  E{wv'} = 0
         L = LQE(A,G,C,Q,R) returns the gain matrix L such that the
         stationary Kalman filter:
             dx/dt = Ax + Bu + L(z - Cx - Du)
         produces an LQG optimal estimate of x.
Example: for the system

  dx1/dt = x2
  dx2/dt = -x1 + w(t),    E{w²(t)} = 1
  y = x1 + v(t),          E{v²(t)} = 3

the MATLAB statements:

  A=[0 1;-1 0]; G=[0;1]; C=[1 0]; Q=1; R=3;
  L=lqe(A,G,C,Q,R)

produce:

  L =
      0.5562
      0.1547
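The same gain can be cross-checked outside MATLAB. A sketch using SciPy (assuming scipy is available; its solver poses the regulator Riccati equation, so the duality above is used to state the estimator problem in transposed form):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.array([[1.0]])
R = np.array([[3.0]])

# Estimator ARE:  A S + S A' - S C' R^-1 C S + G Q G' = 0.
# solve_continuous_are poses the regulator ARE, so by duality we pass
# the transposed pair (A', C') with state weighting G Q G':
S = solve_continuous_are(A.T, C.T, G @ Q @ G.T, R)
L = S @ C.T @ np.linalg.inv(R)
print(L.round(4))  # [[0.5562], [0.1547]], matching the lqe output
```

Working the ARE by hand for this small example gives the same numbers: the (2,2) entry of S satisfies s² + 6s - 3 = 0, so s = -3 + 2√3 ≈ 0.4641 and l2 = s/3 ≈ 0.1547.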
giving the filter equations:

  dx̂1/dt = x̂2 + l1 (y - ŷ)
  dx̂2/dt = -x̂1 + l2 (y - ŷ)
  ŷ = x̂1

where l1 = 0.5562, l2 = 0.1547
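These filter equations can be exercised with a quick Euler simulation (a sketch; the step size, horizon, noise seed and band-limited white-noise approximation are arbitrary choices, not from the original):

```python
import math
import random

# Euler simulation of the system and the stationary filter above:
#   dx1/dt = x2,   dx2/dt = -x1 + w(t),   y = x1 + v(t)   (Q = 1, R = 3)
l1, l2 = 0.5562, 0.1547
dt, T = 0.001, 100.0
rng = random.Random(0)
x1, x2 = 1.0, 0.0          # true state; the filter starts from zero
x1h = x2h = 0.0
err2, n = 0.0, 0
for step in range(int(T / dt)):
    # band-limited approximations of continuous white noise of intensity Q, R
    w = rng.gauss(0.0, math.sqrt(1.0 / dt))
    v = rng.gauss(0.0, math.sqrt(3.0 / dt))
    y = x1 + v
    x1, x2 = x1 + dt * x2, x2 + dt * (-x1 + w)
    x1h, x2h = (x1h + dt * (x2h + l1 * (y - x1h)),
                x2h + dt * (-x1h + l2 * (y - x1h)))
    if step * dt > 50.0:   # accumulate error statistics after the transient
        err2 += (x1 - x1h) ** 2 + (x2 - x2h) ** 2
        n += 1
print((err2 / n) ** 0.5)   # RMS estimation error stays bounded
```

Even though this undamped plant's state variance grows with time under the process noise, the estimation error remains bounded because the filter error dynamics A - LC are stable.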
36
1s
1s
-1
+u(t) = 0
w(t) v(t)
+y(t)x1x2
SYSTEM
( )x t2 ( )x t1
1s
1s+ +
-1
_ +
l1
l2FILTER
x2 x1
y y
37
SIMULINK SIMULATION
1/sx2
+--w(t)
1/sx1
++y
v(t)
1.7
sqrt(R)
+-
y-Cx
++__
1/sx2hat
0.556
l1
-+_
0.155
l2
x1/x1hat
x2/x2hat
Mux
Mux1
Mux
Mux
PLANT
KALMAN FILTER
meas(y)
vtWS1
wtWS2
+-e1
+-e2
e1tWS3
e2tWS4
1
sqrt(Q)
1/sx1hat
[Figure: comparison of actual (solid) and estimated (dashed) state x1 over 265-280 s]
[Figure: comparison of actual (solid) and estimated (dashed) state x2 over 265-280 s]
[Figure: measurement signal y(t) over 265-280 s]

MATLAB Demo