
Modifying L1 Adaptive Control for Augmented Angle of Attack Control

Ola Härkegård∗

Saab AB, SE-581 88 Linköping, Sweden

In this paper we use L1 adaptive control to augment a baseline controller for improved angle of attack control of an unstable aircraft with multiple control surfaces. The scope is limited to considering a linearization of the aircraft dynamics in a single point of the flight envelope. Modifications are made to the design method to fit the intended application. These include control input redundancy management, tuning the state predictor in the case of an unmatched model error, using L1 adaptive control to augment a baseline controller, and discrete-time implementation of the controller. The resulting controller is evaluated through a simulation example based on the Admire model.

Nomenclature

u        Control input
v        Virtual control input
x        State vector
y        Regulated output
A, B     System matrices for linearized aircraft dynamics
E        Control effectiveness matrix
Am, b, c System matrices for adaptive control
ω, θ, σ  Model uncertainties
ω̂, θ̂, σ̂  Parameter estimates
x̂        Predicted state vector
x̃        State vector prediction error
D(s), C(s)  Adaptive controller filters
k        Adaptive controller filter gain
Γ, Γc, Γω, Γθ, Γσ  Adaptation gains
P, Q     Lyapunov equation matrices
λ        Weighting parameter
unom     Control input from baseline controller
uad      Control input from adaptive controller
α        Angle of attack, deg
q        Pitch rate, deg/s
Ts       Sampling time interval, s

I. Introduction

L1 adaptive control is a recently developed tool for control design.1,2 For a class of uncertain dynamic systems, the method can be used to design controllers with guaranteed transient performance and guaranteed time-delay and gain margins. The aim of this paper is to investigate the potential of using L1 adaptive control for augmented flight control design.

∗Control system engineer, Flight Control System, Saab Aerosystems, [email protected]


The problem that we set out to solve using L1 adaptive control is one of the core problems in modern flight control design: angle of attack control of an unstable aircraft with multiple control surfaces. We limit the scope to designing a controller for a linearized model of the aircraft dynamics representing the dynamic behaviour in a single point of the flight envelope.

This clearly is a major simplification compared to considering the full nonlinear dynamics throughout the envelope, but the problem still has relevance. If you can solve this problem, you have taken an important first step towards conquering the full nonlinear control problem. If you cannot solve this linear control problem in a satisfactory manner, you probably want to consider a different design method.

To resemble a typical flight control system upgrade situation, we assume a baseline controller to exist, to which we want to make an adaptive augmentation in order to improve control performance without modifying the baseline controller. Further, the controller should be implemented in discrete time with a sampling frequency of about 50–100 Hz.

The paper is organized as follows. In Section II, the considered flight control problem is detailed. Section III summarizes L1 adaptive control as presented in Ref. 2. In Section IV, the adaptive theory is tailored to the considered control problem, and Section V contains simulation results using the Admire3 model. Some final conclusions are drawn in Section VI.

II. Flight Control Problem

Consider the linearized aircraft dynamics

ẋ(t) = Ax(t) + Bu(t)

where x = (α  q)ᵀ is the state vector containing the angle of attack α (incremental compared to trim) and the pitch rate q, and u = (u1 . . . um)ᵀ is a vector of control surface deflections. The control objective is to make α follow the commanded angle of attack αcmd. Let

u(t) = unom(t) (1)

be the output of a (nominal) baseline controller which gives satisfactory control performance in the absence of model errors. To increase robustness towards model errors we want to augment the baseline controller with an adaptive term. The overall control input hence becomes

u(t) = unom(t) + uad(t) (2)

III. L1 Adaptive Control

The core elements of L1 adaptive control, as presented in Ref. 2, can be summarized as follows.

III.A. System Dynamics

Consider the single-input, single-output (SISO) system

ẋ(t) = Amx(t) + b(ωu(t) + θ(t)ᵀx(t) + σ(t))
y(t) = cᵀx(t) (3)

where u ∈ R is the control input, x ∈ Rⁿ is the state vector, assumed to be measurable, and y ∈ R is the regulated output. Am is a known n × n matrix and b and c are known n × 1 vectors, whereas ω, θ(t) and σ(t) are unknown model uncertainties and disturbances.

The matrix Am is assumed to be Hurwitz, which guarantees a solution to the Lyapunov equation

AmᵀP + PAm = −Q,   P = Pᵀ > 0,   Q > 0 (4)

The solution P enters the adaptive update laws below and hence becomes a design parameter.

The model uncertainties are matched with the control input, meaning that they enter the dynamics at the same place as the control input (since they are all multiplied by b). This assumption means that one can design a control law that cancels the impact of ω, θ and σ if these can be estimated perfectly.
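Returning to (4): for a chosen Q > 0 and a Hurwitz Am, the corresponding P can be computed with standard numerical tools. A minimal sketch using SciPy (our own example; the matrix values are illustrative only, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix and weight; not values from the paper.
A_m = np.array([[-1.8, 1.0],
                [-5.4, -7.9]])
Q = np.eye(2)

# Solve A_m^T P + P A_m = -Q for P.
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so we pass a = A_m^T and q = -Q.
P = solve_continuous_lyapunov(A_m.T, -Q)

# Sanity checks: small residual and P symmetric positive definite.
assert np.allclose(A_m.T @ P + P @ A_m, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)
print(P)
```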


The uncertainties ω, θ and σ are assumed to satisfy

ω ∈ Ω0 = [ωl0, ωu0],   θ(t) ∈ Θ,   |σ(t)| ≤ ∆0

where ω is a constant with known sign (ωl0 > 0 is assumed) and θ and σ can be time-varying with limited time derivatives:

||θ̇(t)|| ≤ dθ,   |σ̇(t)| ≤ dσ

In the parameter projection performed in the adaptive update laws below, the extended parameter spaces Ω = [ωl, ωu] and ∆ are used, where

∆0 < ∆,   0 < ωl < ωl0 < ωu0 < ωu

III.B. Adaptive Controller

The goal is to design a feedback control law from x to u which makes the regulated output y follow a reference signal r despite the model uncertainties ω, θ and σ. The controller consists of the following components:

State Predictor:

x̂̇(t) = Amx̂(t) + b(ω̂(t)u(t) + θ̂ᵀ(t)x(t) + σ̂(t)),   x̂(0) = x(0) (5)

The state predictor is initiated with the true system state.

Update laws:

θ̂̇(t) = Proj(−Γθ x(t) x̃ᵀ(t)Pb, θ̂(t)) (6)

σ̂̇(t) = Proj(−Γσ x̃ᵀ(t)Pb, σ̂(t)) (7)

ω̂̇(t) = Proj(−Γω x̃ᵀ(t)Pb u(t), ω̂(t)) (8)

where x̃(t) = x̂(t) − x(t), P comes from (4), Γθ = Γc In×n and Γσ = Γω = Γc > 0. Proj(·, ·) is the projection operator.4

Control Law:

u(t) = kD(p)(kg r(t) − ω̂(t)u(t) − θ̂(t)ᵀx(t) − σ̂(t)) (9)

where D(p) is the time-domain representation of the frequency-domain transfer function D(s) and

kg = −1/(cᵀAm⁻¹b)

This choice of kg gives the ideal system

ẋ(t) = Amx(t) + b kg r(t)
y(t) = cᵀx(t) (10)

static gain one from r to y. The ideal control law achieving this closed loop system is

uideal(t) = (1/ω)(kg r(t) − θᵀ(t)x(t) − σ(t)) (11)

The filter D(s) should be selected such that

C(s) = ωkD(s) / (1 + ωkD(s))

becomes strictly proper and satisfies C(0) = 1. The simplest choice is thus given by D(s) = 1/s.

Solving for u in (9) gives

u(t) = [kD(p) / (1 + ω̂(t)kD(p))] (kg r(t) − θ̂ᵀ(t)x(t) − σ̂(t))

If we let k → ∞, C(s) becomes an all-pass filter and the control law can be rewritten as

u(t) = (1/ω̂(t))(kg r(t) − θ̂ᵀ(t)x(t) − σ̂(t))

The control law (9) can thus be viewed as a low-pass filtered, estimated version of the ideal control law (11).
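Since D(s) = 1/s makes kD(p) an integrator with gain k, the control law (9) amounts to integrating the bracketed estimate error. A minimal discrete sketch of one such integration step (our own function and argument names; forward Euler is used purely for illustration):

```python
import numpy as np

def control_law_step(u, r, x, omega_hat, theta_hat, sigma_hat, k, kg, Ts):
    """One forward-Euler step of the control law (9) with D(s) = 1/s,
    i.e. u_dot = k*(kg*r - omega_hat*u - theta_hat^T x - sigma_hat).
    x and theta_hat are length-n arrays, everything else is scalar."""
    u_dot = k * (kg * r - omega_hat * u - float(theta_hat @ x) - sigma_hat)
    return u + Ts * u_dot
```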


III.C. Transient Response

With ideal parameter estimates,

ω̂ = ω,   θ̂ = θ,   σ̂ = σ (12)

the system (3) and the control law (9) form the reference system

ẋref(t) = Amxref(t) + b(ωuref(t) + θᵀ(t)xref(t) + σ(t))

uref(t) = (C(p)/ω)(kg r(t) − θᵀ(t)xref(t) − σ(t))

yref(t) = cᵀxref(t) (13)

Inserting uref gives the closed loop dynamics

ẋref(t) = Amxref(t) + b(C(s)(kg r(t) − θᵀ(t)xref(t) − σ(t)) + θᵀ(t)xref(t) + σ(t)) (14)

If C(s) has infinite bandwidth we get C(s) = 1 and

ẋref(t) = Amxref(t) + b kg r(t)

which means we recover the ideal closed loop dynamics (10), which is stable by assumption. The following lemma, based on the small-gain theorem, states the conditions for the reference system to also be stable.

Lemma 1 (Ref. 2, Lemma 6) The reference system (13) is stable if

||G(s)||L1 · L < 1 (15)

where

G(s) = (sI − Am)⁻¹ b (1 − C(s))

||G(s)||L1 = ∫₀^∞ |g(t)| dt

L = max θ(t)∈Θ  ∑ i=1..n |θi(t)|
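For a given design, condition (15) can be checked numerically. Below is a minimal sketch (our own code, with an illustrative horizon and step size) for the case D(s) = 1/s, where C(s) = ωk/(s + ωk) and hence 1 − C(s) = s/(s + ωk); the L1 norm is evaluated component-wise and the maximum over the state components is taken:

```python
import numpy as np
from scipy.linalg import expm

def l1_condition(A_m, b, omega, k, L, t_end=50.0, dt=1e-3):
    """Numerically evaluate ||G||_L1 * L for G(s) = (sI - A_m)^{-1} b (1 - C(s))
    with D(s) = 1/s, so 1 - C(s) = s/(s + a), a = omega*k.
    The result should be < 1 for condition (15) to hold."""
    n = A_m.shape[0]
    a = omega * k
    # State-space realization of G(s):
    #   z_dot = -a z + u,   w = u - a z   (realizes 1 - C(s))
    #   x_dot = A_m x + b w
    A_aug = np.block([[A_m, -a * b.reshape(n, 1)],
                      [np.zeros((1, n)), np.array([[-a]])]])
    B_aug = np.vstack([b.reshape(n, 1), [[1.0]]])
    C_aug = np.hstack([np.eye(n), np.zeros((n, 1))])

    # Impulse response g(t) = C_aug expm(A_aug t) B_aug, integrated with a
    # fixed-step trapezoidal rule.
    Phi = expm(A_aug * dt)
    state = B_aug.copy()
    norms = np.zeros(n)
    t = 0.0
    while t < t_end:
        g_now = (C_aug @ state).ravel()
        state = Phi @ state
        g_next = (C_aug @ state).ravel()
        norms += 0.5 * dt * (np.abs(g_now) + np.abs(g_next))
        t += dt
    return norms.max() * L

# Example call (illustrative numbers only):
# l1_condition(A_m, b, omega=1.0, k=10.0, L=20.0)
```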

We can now restate the main result from Ref. 2, to which the reader is referred for details, regarding the closed loop transient response.

Theorem 1 (Ref. 2, Theorem 2) The error between the true closed loop system (3), (5)–(9) and the reference system (13) is bounded by

||x − xref||L∞ ≤ γ1,   ||u − uref||L∞ ≤ γ2

where γ1 and γ2 are both inversely proportional to √Γc.

The L∞ norm of a signal x(t), t ≥ 0, is defined as

||x||L∞ = max i=1,...,n  sup t≥0  |xi(t)|

The theorem gives an upper bound on the error between the true system and the reference system for all times, not just asymptotically. The error can be made arbitrarily small by increasing the adaptation gain Γc.
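Evaluated on sampled simulation data, this norm is a one-liner; a small sketch (our own function name; x_samples is assumed to hold one row per state component):

```python
import numpy as np

def linf_norm(x_samples):
    """L_inf norm of a sampled vector signal with shape (n, N):
    the maximum over components of the supremum over the samples."""
    return np.abs(x_samples).max(axis=1).max()
```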

IV. Modifying the Theory

Let us now compare the conditions and assumptions for L1 adaptive control, as presented in Section III, with the flight control problem posed in Section II.


• The system (3) is of single-input type whereas we have multiple control surfaces in our flight control problem.

• The model uncertainties in the system (3) are assumed to be matched with the control input. On most aircraft, the control surfaces primarily produce aerodynamic moment, and therefore mainly affect the pitch rate dynamics. An error in mass, mismodelling of the lift coefficient or lack of air data information (such as dynamic pressure) however affects the angle of attack dynamics and thus produces an unmatched model error.

• The theory presented in Section III assumes no baseline controller, whereas the goal here is to develop an adaptive augmentation of an existing baseline controller.

• The adaptive controller is composed of a number of differential equations. In a digital implementation, these need to be solved in discrete time.

In the remainder of this section we treat these different issues separately. In Section V the resulting overall adaptive controller implementation is evaluated.

IV.A. Multiple Control Inputs

Consider first the case of multiple control inputs. Let the system dynamics be given by

ẋ(t) = Amx(t) + b(ωEu(t) + θᵀ(t)x(t) + σ(t)) (16)

where u ∈ Rᵐ and E is the 1 × m control efficiency matrix. As before, let ω be a scalar, i.e., assume that the control efficiency model error is the same for all control inputs.

Introduce the scalar virtual control input

v(t) = Eu(t) (17)

This gives us the SISO system

ẋ(t) = Amx(t) + b(ωv(t) + θᵀ(t)x(t) + σ(t))

for which an adaptive controller can be designed in terms of v. This virtual control command can then be realized as some solution u to the underdetermined equation (17). This is often referred to as the control allocation problem.5,6
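One common solution is the minimum-norm (pseudoinverse) allocation; a minimal sketch with illustrative numbers (the paper instead fixes a particular right inverse of E, giving an equal canard/elevon split, in Section V):

```python
import numpy as np

def allocate(E, v):
    """Minimum-norm solution u of the underdetermined equation E u = v,
    where E is 1 x m and v is a scalar virtual control command."""
    return np.linalg.pinv(E) @ np.atleast_1d(v)

# Illustrative 1 x 2 control effectiveness row, not the paper's values.
E = np.array([[5.0, -15.0]])
u = allocate(E, 10.0)
assert np.allclose(E @ u, 10.0)
```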

IV.B. Selecting the Lyapunov Equation Solution P

One of the parameters affecting the controller behaviour is the matrix P, entering the update laws (6)–(8). P is required to solve the Lyapunov equation (4) for some positive definite matrix Q. This leaves a fair amount of freedom in selecting P. In this section we show how this design freedom can be used to suppress the impact of an unmatched model error.

In the update laws (6)–(8), P (along with b) governs how the n prediction errors in x̃ are blended into a scalar prediction error x̃ᵀPb. If the discrepancy between the true dynamics and the modelled dynamics can be represented as a matched model error, as in Eq. (3), all components of the state prediction error x̃ can be regulated to zero through proper adaptation, as in Section III. In the case of an unmatched model error, this is no longer true. In this case, P can be designed to achieve a trade-off between regulation of the different components of x̃.

Since our target application is angle of attack control, we restrict our discussion to the case n = 2. We further assume, without loss of generality, that

b = ( 0  1 )ᵀ

Parameterizing P as

P = ( p11   p12 )
    ( p12   p22 )


gives

x̃ᵀPb = p12 x̃1 + p22 x̃2

This weighted prediction error, multiplied with the proper signal, is integrated to form the parameter estimates ω̂, θ̂ and σ̂ that define the control law. The relationship between p12 and p22 thus determines how the individual prediction errors are prioritized.

Which values of p12 and p22 are feasible is governed by the requirements in the Lyapunov equation (4):

P = Pᵀ > 0 (18)

AmᵀP + PAm = −Q,   Q > 0 (19)

The first requirement gives

p11 > 0 (20)

p22 > 0 (21)

p11p22 − p12² > 0 (22)

From (21) it immediately follows that the x̃2 gain must be strictly positive.

To emphasize the interpretation of p12 and p22 as weights, we introduce the parameterization

p11 = p,   p22 = λ,   p12 = 1 − λ,   0 < λ ≤ 1

where λ determines the relative weighting of the prediction errors. Given λ, Eq. (18) is satisfied if (and only if)

p > (1 − λ)²/λ = pmin(λ)

This follows from Eq. (22). Given Am and P, the solution Q found by equating Eq. (19) is valid if Q > 0. Hence, a certain relative weighting of the prediction errors, determined by λ, is feasible if one can find p > pmin(λ) such that AmᵀP + PAm = −Q < 0.

It is straightforward to implement a numerical algorithm which, given Am and λ, investigates whether it is possible to achieve Q > 0 by varying p. This algorithm can then be complemented with another algorithm to determine the smallest possible λ such that Q > 0 can be achieved. This gives the overall feasible interval

λmin(Am) ≤ λ ≤ 1
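A coarse numerical sketch of such an algorithm (our own implementation; the grid over p and the bisection over λ assume, as the interval statement above suggests, that feasibility is monotone in λ). For the Am quoted in Section V.C, this kind of search is what produces the reported λmin(Am):

```python
import numpy as np

def q_is_posdef(A_m, lam, p):
    """Check whether Q = -(A_m^T P + P A_m) > 0 for the parameterized
    P = [[p, 1-lam], [1-lam, lam]]."""
    P = np.array([[p, 1.0 - lam],
                  [1.0 - lam, lam]])
    Q = -(A_m.T @ P + P @ A_m)
    return np.all(np.linalg.eigvalsh(Q) > 0)

def lam_is_feasible(A_m, lam, deltas=np.logspace(-3, 4, 200)):
    """Coarse grid search over p > p_min(lam) = (1 - lam)^2 / lam."""
    p_min = (1.0 - lam) ** 2 / lam
    return any(q_is_posdef(A_m, lam, p_min + d) for d in deltas)

def lambda_min(A_m, tol=1e-4):
    """Bisect for the smallest feasible lambda in (0, 1]."""
    lo, hi = tol, 1.0
    if not lam_is_feasible(A_m, hi):
        raise ValueError("lambda = 1 is infeasible for this A_m")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lam_is_feasible(A_m, mid):
            hi = mid
        else:
            lo = mid
    return hi
```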

IV.C. Augmentation of a Baseline Controller

In this section we investigate how L1 adaptive control can be used to augment an existing baseline controller. Consider the SISO system

ẋ(t) = Ax(t) + b(ωu(t) + θ(t)ᵀx(t) + σ(t)) (23)

where A is a known, possibly unstable matrix. Assume that in the nominal case,

ω = 1, θ = 0, σ = 0 (24)

the baseline controller (1) gives the closed loop system

ẋ(t) = Ax(t) + bunom(t) (25)

satisfactory dynamic properties. The ideal control law which, given (23), achieves the closed loop dynamics (25) is

u(t) = (1/ω)(unom(t) − θ(t)ᵀx(t) − σ(t))

     = unom(t) + (1/ω)((1 − ω)unom(t) − θ(t)ᵀx(t) − σ(t))

where the second term is denoted uad,ideal(t).

We now wish to use the techniques from Section III to design an adaptive augmentation uad as an estimated, filtered version of uad,ideal without altering unom. The overall control input is then given by (2).


To this end, first split the baseline controller into

unom(t) = ūnom(t) − Kx(t) (26)

If unom is a static function of x, a natural choice is

K = −∂unom/∂x

which makes ūnom capture nonlinearities and reference feedforward terms. Otherwise, K can be selected to give A − bK poles that well reflect the nominal closed loop dynamics.

Inserting (2), (26) into (23) gives

ẋ(t) = Ax(t) + b(ω(ūnom(t) − Kx(t) + uad(t)) ± ūnom(t) ± Kx(t) + θ(t)ᵀx(t) + σ(t))

     = Amx(t) + būnom(t) + b(ωuad(t) + θ̄(t)ᵀx(t) + σ̄(t))

where

Am = A − bK
θ̄(t) = θ(t) + (1 − ω)Kᵀ
σ̄(t) = σ(t) + (ω − 1)ūnom(t)

Except for the term būnom this system fits the system structure from Section III. This term can be interpreted as doing the job of kg r in Section III, but without getting filtered. Incorporating this term into the state predictor (5) gives

x̂̇(t) = Amx̂(t) + būnom(t) + b(ω̂(t)uad(t) + θ̂(t)ᵀx(t) + σ̂(t)),   x̂(0) = x(0) (27)

The parameters ω, θ̄ and σ̄ can be estimated with the update laws (6)–(8) but with u replaced by uad. Bounds on θ̄ and σ̄ can, at least in principle, be computed from the bounds on ω, θ, σ and unom. In Section V we take a more pragmatic approach and set the bounds used for projection based on simulation studies.

In analogy with Eq. (9), the adaptive control law becomes

uad(t) = −kD(p)(ω̂(t)uad(t) + θ̂(t)ᵀx(t) + σ̂(t))

An appealing feature of this augmentation scheme is that for the nominal model, defined by (24), the state predictor (27) follows the nominal closed loop dynamics (25) perfectly. This gives x̃ = 0 and unchanged parameter estimates, and hence uad = 0. The adaptivity is therefore not activated in the absence of model errors.

IV.D. Discrete-time Implementation

The adaptive controller is defined by the differential equations for the state predictor (5), the update laws (6)–(8) and the control law (9). In a digital implementation, these equations need to be solved numerically in discrete time. In this section we investigate one way of doing this, based on Euler's method, and how the maximum adaptation gain for which numerical stability is retained can be computed.

IV.D.1. Fast and Slow Controller Dynamics

To achieve good control performance it is desirable to increase the adaptation gain as much as possible. A high adaptation gain gives fast dynamics of the state predictor (5) and the update laws (6)–(8). The impact of these fast dynamics on the control signal u is reduced by the low-pass filtering performed in Eq. (9). To simplify the analysis, the control law equation (9) is therefore disregarded in the discussion below. The validity of this approximation is verified through simulation in Section V.


IV.D.2. Implementation

Introduce the overall parameter estimate vector

Θ = ( θ̂ᵀ   σ̂   ω̂ )ᵀ

With suitable definitions of the functions f and g and the matrix Γ, we can rewrite (5)–(8) as

x̂̇ = f(x̂, Θ, x, u) (28)

Θ̇ = Proj(Γ g(x̂, x, u), Θ) (29)

A simple way of discretizing these equations is to use Euler's forward method. If we let the projection operator be implemented as a limited integrator, i.e., the components of Θ are saturated at their limits in each time step, we get the discrete-time update laws

x̂n+1 = x̂n + f(x̂n, Θn, x, u)Ts (30)

Θn+1 = Sat(Θn + Γ g(x̂n, x, u)Ts) (31)

where Ts is the sampling time interval.

It is well known that Euler's forward method can become numerically unstable for stiff problems, which in our case results from selecting a high adaptation gain. A simple way to improve the stability, without adding complexity to the implementation, is to first update Θ and then use the new value to update x̂, or vice versa. This gives the (mutually exclusive) update laws

x̂n+1 = x̂n + f(x̂n, Θn+1, x, u)Ts (32)

Θn+1 = Sat(Θn + Γ g(x̂n+1, x, u)Ts) (33)

This method is sometimes referred to as Euler's semi-implicit method in the literature. Combining (30)–(33) gives the following three alternatives:

1. Euler’s forward method: Eq. (30), (31). Both x and Θ are updated based on their previous values.

2. Semi-implicit Euler 1: Eq. (30), (33). First, x is updated and then Θ, based on the new value of x.

3. Semi-implicit Euler 2: Eq. (31), (32). First, Θ is updated and then x, based on the new value of Θ.
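A minimal sketch of one sample of alternative 3 (our own function and variable names; a single scalar adaptation gain and saturation-based projection are used for brevity):

```python
import numpy as np

def adaptive_step(x_hat, Theta_hat, x, u, A_m, b, P, Gamma, Theta_lim, Ts):
    """One sample of semi-implicit Euler 'alternative 3': the parameter
    estimates are updated first (Eq. (31)) and the new values are then
    used to update the state predictor (Eq. (32)).
    Theta_hat = [theta_hat (n entries), sigma_hat, omega_hat];
    Theta_lim = (lower, upper) saturation limits acting as projection."""
    n = x_hat.size
    x_tilde = x_hat - x                 # prediction error
    e = float(x_tilde @ P @ b)          # weighted scalar error x_tilde^T P b

    # Update laws (6)-(8): derivatives of theta_hat, sigma_hat, omega_hat
    g = -np.concatenate((e * x, [e], [e * u]))
    Theta_new = np.clip(Theta_hat + Ts * Gamma * g, *Theta_lim)

    theta_hat = Theta_new[:n]
    sigma_hat = Theta_new[n]
    omega_hat = Theta_new[n + 1]

    # State predictor (5) stepped with the *new* parameter estimates
    x_hat_dot = A_m @ x_hat + b * (omega_hat * u + float(theta_hat @ x) + sigma_hat)
    return x_hat + Ts * x_hat_dot, Theta_new
```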

IV.D.3. Numerical Stability Analysis

Let us now investigate the stability of the different time-discretization schemes above. The numerical stability of the different implementations can be analyzed under the assumption that the projection operator is not active. Eq. (28)–(29) then become linear in x̂ and Θ and can be written as

d/dt ( x̂ )     ( A11   A12 ) ( x̂ )
     ( Θ )  =  ( A21    0  ) ( Θ )  +  Φ(x, u)

where the combined system matrix is denoted Actl and

A11 = Am

A12 = b ( xᵀ   1   u )

A21 = ( −Γθ x bᵀP )
      ( −Γσ bᵀP   )
      ( −Γω bᵀP u )

Under the assumption in Section IV.D.1, the term Φ(x, u) does not affect stability and is therefore disregarded (set to zero). This gives the following update laws in the three cases above:


1.

( x̂n+1 )                      ( x̂n )     ( I + A11Ts   A12Ts ) ( x̂n )
( Θn+1 )  =  (I + ActlTs)  ( Θn )  =  (   A21Ts       I   ) ( Θn )

where the rightmost matrix is denoted F1.

2. The update law for x̂ is given by

( x̂n+1 )     ( I + A11Ts   A12Ts ) ( x̂n )
(  Θn  )  =  (     0         I   ) ( Θn )

where the matrix is denoted Fx. The subsequent update of Θ is given by

( x̂n+1 )     (   I      0 ) ( x̂n+1 )
( Θn+1 )  =  ( A21Ts    I ) (  Θn  )

where the matrix is denoted FΘ. Overall we get

( x̂n+1 )            ( x̂n )     ( I + A11Ts            A12Ts       ) ( x̂n )
( Θn+1 )  =  FΘFx ( Θn )  =  ( A21Ts(I + A11Ts)   I + A21A12Ts² ) ( Θn )

where the rightmost matrix is denoted F2.

3. Here, the updates are performed in reverse order compared to above. This yields

( x̂n+1 )            ( x̂n )     ( I + A11Ts + A12A21Ts²   A12Ts ) ( x̂n )
( Θn+1 )  =  FxFΘ ( Θn )  =  (        A21Ts               I   ) ( Θn )

where the rightmost matrix is denoted F3.

Let us make a few comments:

• The three update schemes differ only in terms involving Ts². As the sampling frequency is increased they all converge to the same solution, as expected.

• For two square matrices A and B it holds that the products AB and BA have the same eigenvalues. Consequently, F2 and F3 have the same eigenvalues. From a stability point of view, the two semi-implicit Euler schemes are therefore equivalent: either both are stable or both are unstable.

• The highest possible adaptation gain, given a certain sampling frequency, that can be selected without losing numerical stability is determined by how much A21 can be scaled up before some eigenvalue gets outside the unit circle. This maximum increase in adaptation gain can be easily computed numerically for each scheme.
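A numerical sketch of that computation (our own code; x and u are frozen in A12 and A21, the grid of scale factors is coarse, and the search stops at the first unstable factor):

```python
import numpy as np

def max_stable_scale(A11, A12, A21, Ts, scheme=3, scales=np.logspace(-2, 3, 500)):
    """Largest factor by which A21 (proportional to the adaptation gain) can
    be scaled while all eigenvalues of the discrete update matrix stay inside
    the unit circle. scheme selects F1 (forward Euler), F2 or F3."""
    n = A11.shape[0]
    m = A21.shape[0]
    best = 0.0
    for s in scales:
        A21s = s * A21
        if scheme == 1:
            F = np.block([[np.eye(n) + A11 * Ts, A12 * Ts],
                          [A21s * Ts, np.eye(m)]])
        elif scheme == 2:
            F = np.block([[np.eye(n) + A11 * Ts, A12 * Ts],
                          [(A21s * Ts) @ (np.eye(n) + A11 * Ts),
                           np.eye(m) + (A21s @ A12) * Ts**2]])
        else:
            F = np.block([[np.eye(n) + A11 * Ts + (A12 @ A21s) * Ts**2, A12 * Ts],
                          [A21s * Ts, np.eye(m)]])
        if np.max(np.abs(np.linalg.eigvals(F))) < 1.0:
            best = s
        else:
            break
    return best
```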

V. Simulations

In this section we combine the different modifications of L1 adaptive control from the previous section and evaluate the resulting controller using a linearization of the Admire3 model. Admire is a freely distributed Simulink model of a small fighter with a delta-canard configuration.

V.A. Aircraft Models for Design and Simulation

At Mach 0.6, 1000 m (3300 ft), the short-period dynamics model used for control design is given by ẋ(t) = Ax(t) + Bu(t) with x = (α  q)ᵀ, u = (δc  δe)ᵀ and

A = ( −1.8151    0.9605 )      B = ( −0.0224    −0.2992 )
    ( 12.9708   −1.8988 ),         (  5.0858   −14.7616 )      (34)


where δc is the symmetric canard deflection and δe is the symmetric elevon deflection (both in degrees). This model has poles at +1.7 and −5.4. The control objective is for α to follow the commanded value αcmd. Hence the regulated output is

y = cᵀx,   cᵀ = ( 1   0 )

In the model used for simulation, model errors and servo dynamics are included:

ẋ = (A .× ∆A)x + (B .× ∆B)u

u = ( Gs(s)     0    ) ucmd
    (   0     Gs(s) )

The notation .× means elementwise multiplication. The model errors are given by

∆A = ( 0.7   1 ),      ∆B = ( 0.7   0.7 )
     ( 1.3   1 )            ( 0.7   0.7 )

This corresponds to a 30% increase in mass, a 30% increase in static pitch instability and a 30% reduction of control effectiveness. This gives open loop poles at +2.5 and −5.6. Note that the first row of ∆A introduces an unmatched model error. To model the servo dynamics we use the filter

Gs(s) = 20/(s + 20)
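A quick numerical check of the stated pole locations (a sketch; only the matrices above are used):

```python
import numpy as np

# Design model (34) and the element-wise model errors above.
A = np.array([[-1.8151, 0.9605],
              [12.9708, -1.8988]])
B = np.array([[-0.0224, -0.2992],
              [5.0858, -14.7616]])
dA = np.array([[0.7, 1.0],
               [1.3, 1.0]])
dB = 0.7 * np.ones((2, 2))

# Element-wise (Hadamard) products give the simulated "true" dynamics.
A_true = A * dA
B_true = B * dB

print(np.linalg.eigvals(A))       # roughly +1.7 and -5.4 (design model)
print(np.linalg.eigvals(A_true))  # roughly +2.5 and -5.6 (perturbed model)
```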

V.B. Baseline Controller

The baseline controller is given by

unom(t) = Fr(p)αcmd(t) − Fy(p)x(t)

Fr(s) = (       0.4679       )
        ( −1.107 − 1.107/s )

Fy(s) = (       0.4227          0.1247 )
        ( −1.438 − 1.107/s    −0.3931 )

This controller was designed using a mix of linear-quadratic control design and set-point weighting. It achieves poles at −6.9 and −2.8 for the nominal model (34). The integrator pole and the corresponding zero are both at −0.9.

This controller is simulated in continuous time for implementational simplicity.

V.C. Augmented Adaptive Controller

In this section we present the elements of the adaptive controller which augments the baseline controller. The controller has been tuned for the case αcmd = 10 degrees.

Control Surface Redundancy Management: To handle the fact that we have two control surfaces for pitch control (canards and elevons), we use the method from Section IV.A and make the following approximate factorization of B:

B ≈ Bad = (   0           0      )  =  ( 0 ) ( 5.0858   −14.7616 )  =  bE
          ( 5.0858   −14.7616 )        ( 1 )

The virtual control input

v = Eu


represents the commanded pitch angular acceleration from the control surfaces. To distribute the adaptive control contribution equally between the canards and the elevons we use the solution

E⁻¹right = (  0.0504 )
           ( −0.0504 )

to compute

uad = E⁻¹right vad

Feedback Matrix K: The feedback matrix K from Section IV.C is selected using pole placement such that Am = A − BadK gets its eigenvalues at −6.9 and −2.8. This yields

K = (  0.3825    0.1253 ),      Am = ( −1.8151    0.9605 )
    ( −1.1103   −0.3638 )            ( −5.3643   −7.9058 )
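Reading the pole placement as acting in the virtual pitch-acceleration channel, i.e., on the pair (A, b) with b = (0  1)ᵀ, a minimal SciPy sketch reproduces an Am close to the one quoted above; the resulting 1 × 2 gain then corresponds to EK rather than to the 2 × 2 matrix K itself:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[-1.8151, 0.9605],
              [12.9708, -1.8988]])
b = np.array([[0.0],
              [1.0]])

# Place the poles of A - b k_v at -6.9 and -2.8; k_v acts on the virtual
# pitch acceleration v = E u.
k_v = place_poles(A, b, [-6.9, -2.8]).gain_matrix
A_m = A - b @ k_v

print(k_v)                      # approximately [18.2, 6.0], i.e. E K
print(np.linalg.eigvals(A_m))   # -6.9 and -2.8 by construction
```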

Lyapunov Equation Solution P: To select the solution P to the Lyapunov equation (4) we use the method from Section IV.B. Since α is the regulated output it is natural to maximize the α weight in the adaptive update laws. This corresponds to minimizing λ. For the matrix Am above the smallest possible λ, for which (18)–(19) have a solution, is given by

λmin(Am) = 0.1083

For this value, a solution to the Lyapunov equation is given by

P = ( 9.6293   0.8917 ),      Q = −(AmᵀP + PAm) = ( 44.5225          1.3973 · 10⁻⁵  )
    ( 0.8917   0.1083 )                           ( 1.3973 · 10⁻⁵    4.3852 · 10⁻¹² )

(at λ = λmin the matrix Q is nearly singular, as expected at the boundary of feasibility)

To achieve some robustness we select

λ = 2 · λmin(Am) = 0.2167

which gives the weighted state prediction error

x̃ᵀPb = 0.7833 α̃ + 0.2167 q̃

Adaptation Gains: The adaptation gains are selected as

Γθ = Γ/10, Γσ = Γ/10, Γω = Γ

where Γ = 10

The choice of Γ is optimized for achieving good performance for a 10 degree step in αcmd.

Filtering: For filtering of the adaptive control law we use

k = 10, D(s) = 1/s

which (for ω = 1) gives

C(s) = 10/(s + 10)

The bandwidth 10 rad/s is selected not to exceed the servo bandwidth of 20 rad/s.

Projection: Projection of parameter estimates is achieved using limited integrators in the update laws. The limits, selected rather arbitrarily, are given by

ω̂ ∈ [0.5, 1.5],     |θ̂1,2| ≤ 10,     |σ̂| ≤ 5


Initialization: The parameter estimates are initially set to

ω̂(0) = 1,     θ̂(0) = ( 0   0 )ᵀ,     σ̂(0) = 0

Discrete-time Implementation: The adaptive controller is executed at 60 Hz, i.e., Ts = 1/60 s. Discretization of the state predictor and of the parameter update laws is done using alternative 3 in Section IV.D. Hence the parameter estimates are updated first and the new values are used to update the state predictor.

In this example, the semi-implicit Euler method substantially improves the numerical stability compared to Euler's forward method. Computing the eigenvalues of the update matrices in Section IV.D.3, the selected method is numerically stable up to Γ = 36 while Euler's forward method becomes unstable for Γ = 0.7, i.e., a factor of 50 lower. These theoretically computed approximate stability limits can be verified with good precision through simulation.

Dead Zone: No dead zone has been implemented to avoid drift in the parameter estimates when the state prediction errors are small.

V.D. Simulation Results

With controller settings as above, Figure 1 shows simulation results for two consecutive steps in αcmd of 10 degrees. In subplots containing more than one signal, the first signal name corresponds to the thick, solid line, the second name to the thin, solid line and the third name to the thin, dashed line.

In the figure, αideal is the response obtained by the baseline controller in the absence of model errors. This response is not quite recovered by the adaptive controller but an overshoot of 3% is still acceptable. We can further see that during the positive steps, neither the prediction error α̂ − α nor q̂ − q goes to zero. This is due to the unmatched model error introduced.

To gain further insight into the properties of the adaptive controller, it is interesting to see how the performance is affected by changes in the design parameters. Below, we comment on the resulting behaviour when some of these parameters are changed. Plots are available upon request.

Impact of adaptive controller: Deactivating the adaptive controller and only running the baseline controller gives a significantly reduced control performance with a 50% overshoot in α. This illustrates the magnitude of the introduced model error.

Choice of bandwidth k: Halving the bandwidth of the filter C(s) by setting k = 5 gives slower adaptation as expected, resulting in a 7% overshoot in α. If we double the bandwidth by setting k = 20, which is the bandwidth of the servos, the adaptive controller starts to "resonate" with the servos with oscillatory control commands as a result.

Choice of Lyapunov Equation Solution P: Disregarding the analysis in Section IV.B, a natural approach is to set Q to the unit matrix and solve for P from Eq. (4). Selecting Q = 17.6 · I gives the weighted state prediction error

x̃ᵀPb = −0.01 α̃ + 1.01 q̃

The scaling of Q makes the weights sum up to 1. We note that the error in α has a negative weight, which one might find counterintuitive. Further, the high relative weighting of q results in good agreement between q̂ and q but less good agreement between α̂ and α, and an 11% overshoot in α compared to αcmd.

Sensitivity to Disturbances: An important aspect of any control system is sensitivity to disturbances. For L1 adaptive control this issue is of particular importance since in the update laws (6)–(8), the measured state vector is multiplied by the adaptation gain, which one wants to set as high as possible. A high adaptation gain in combination with noise-corrupted state measurements could potentially give rapid variations in the parameter estimates. One must then rely on proper filtering in the control law (9) to avoid too much noise in the control input.

Adding band-limited white noise with peak-to-peak variations of about 0.2 (deg and deg/s, respectively) to the α and q measurements gives peak-to-peak variations in the adaptive control surface commands of about 0.5 deg, while the corresponding variations in the baseline control commands are about 0.1 deg. An interesting issue, which has not been investigated, is how the filter C(s) can be better selected to suppress these disturbances.


Impact of Operating Point: An inherent, nonlinear characteristic of the adaptive update laws is that the gain of the state prediction error x̃ depends on x and u, for estimating θ and ω respectively. Hence the performance of the adaptive controller will vary with the operating point in a nonlinear fashion.

Recalling that the adaptation gain Γ has been tuned to achieve good control performance and numerical stability for the case αcmd = 10 deg, it is interesting to see how the closed loop dynamics vary with αcmd. For αcmd = 1 deg, the effective gain in the update laws decreases, resulting in degraded control performance and a 25% overshoot in α. For αcmd = 20 deg, numerical stability of the discrete-time implementation is lost, in agreement with the stability analysis results from Section IV.D.

A possible remedy for this problem, which has not been investigated, is to use different adaptation gains at different operating points. This can for example be achieved by scheduling Γ with αcmd.

VI. Conclusions

Let us draw some conclusions from this attempt to apply L1 adaptive control to augmented angle of attack control.

The control laws that define the adaptive controller are fairly simple, even though the theory behind them is mathematically advanced. The theory has also proven flexible enough to accommodate various modifications and extensions motivated by the considered flight control application.

Regarding the achieved control performance in the simulation example we note the following positive and negative properties:

+ Good reference following despite having introduced a fairly large, partially unmatched model error.

+ Few design parameters to tune for the user. The existing design parameters can be set using a mix of analysis, heuristics and simulation in a fairly straightforward manner.

− Despite the controlled system being linear, the closed loop system becomes clearly nonlinear. This makes formal analysis as well as simulation-based verification of the closed loop system properties more difficult.

− High adaptation gain, which the success of L1 adaptive control relies on, may give high sensitivity to disturbances. It may also lead to numerical problems when solving the differential equations that define the controller.

In a couple of areas, there is a definite potential for improvement of the achieved simulation results:

• The choice D(s) = 1/s leads to a first order low-pass filter C(s) with poor attenuation in the stopband. With a better choice of C(s) it should be possible to prevent high-frequency variations in the parameter estimates from affecting the control signal.

• If the suggested Euler-based discrete-time implementation becomes a bottleneck for successful application of L1 adaptive control, one can turn to other, more advanced methods, at the cost of a more complex implementation.

References

1. Cao, C. and Hovakimyan, N., "Design and Analysis of a Novel L1 Adaptive Control Architecture With Guaranteed Transient Performance," IEEE Transactions on Automatic Control, Vol. 53, No. 2, 2008, pp. 586–591.

2. Cao, C. and Hovakimyan, N., "Guaranteed Time-delay Margin and Transient Response of L1 Adaptive Control Architecture," submitted to Transactions on Automatic Control.

3. Aerodata Model in Research Environment (ADMIRE), version 4.1, Swedish Defence Research Agency (FOI), 2006, http://www.foi.se/admire.

4. Ioannou, P. and Fidan, B., Adaptive Control Tutorial, SIAM, 2006.

5. Bodson, M., "Evaluation of Optimization Methods for Control Allocation," Journal of Guidance, Control, and Dynamics, Vol. 25, No. 4, July–Aug. 2002, pp. 703–711.

6. Härkegård, O., Backstepping and Control Allocation with Applications to Flight Control, PhD thesis No. 820, Department of Electrical Engineering, Linköping University, May 2003.


[Figure 1 consists of eight subplots over 0–15 s, showing α together with αideal and αcmd, the pitch rate q, the parameter estimates ω̂, σ̂, θ̂(1) and θ̂(2), the virtual control vad, and the canard and elevon commands δc, δc,ad, δc,nom and δe, δe,ad, δe,nom.]

Figure 1. Step responses of closed loop system when baseline controller is augmented with L1 adaptive controller.


