
Optimization-Based Control Design Techniques and Tools

Pierre Apkarian¹, Dominikus Noll²

¹ONERA - The French Aerospace Lab, Toulouse, France
²Université de Toulouse, Institut de Mathématiques, Toulouse, France
[email protected], [email protected]

Abstract

Structured output feedback controller synthesis is an exciting recent concept in modern control design, which bridges the gap between theory and practice insofar as it allows, for the first time, sophisticated mathematical design paradigms like H∞- or H2-control to be applied within control architectures preferred by practitioners. The new approach to structured H∞-control, developed by the authors during the past decade, is rooted in a change of paradigm in the synthesis algorithms. Structured design is no longer based on solving algebraic Riccati equations or matrix inequalities. Instead, optimization-based design techniques are required. In this essay we indicate why structured controller synthesis is central in modern control engineering. We explain why non-smooth optimization techniques are needed to compute structured control laws, and we point to software tools which enable practitioners to use these new tools in high-technology applications.

Keywords and Phrases

Controller tuning, H∞ synthesis, multi-objective design, nonsmooth optimization, structured controllers, robust control

Introduction

In the modern high-technology field, control engineers usually face a large variety of concurrent design specifications such as noise or gain attenuation in prescribed frequency bands, damping, decoupling, constraints on settling or rise time, and much else. In addition, as plant models are generally only approximations of the true system dynamics, control laws have to be robust with respect to uncertainty in physical parameters or with regard to un-modeled high-frequency phenomena. Not surprisingly, such a plethora of constraints presents a major challenge for controller tuning, due not only to the ever growing number of such constraints, but also to their very different provenance.

The steady increase in plant complexity is compounded by the demand that regulators should be as simple as possible, easy to understand and to tune by practitioners, convenient to implement in hardware, and generally available at low cost. These practical constraints explain the limited use of Riccati- or LMI-based controllers, and they are the driving force behind the implementation of structured control architectures. On the other hand, this means that hand-tuning methods have to be replaced by rigorous algorithmic optimization tools.

1 Structured Controllers

Before addressing specific optimization techniques, we recall some basic terminology for control design problems with structured controllers. The plant model P is described as

$$
P:\quad \begin{cases} \dot x_P = A x_P + B_1 w + B_2 u \\ z = C_1 x_P + D_{11} w + D_{12} u \\ y = C_2 x_P + D_{21} w + D_{22} u \end{cases} \tag{1}
$$

where A, B1, ... are real matrices of appropriate dimensions, xP ∈ RnP is the state, u ∈ Rnu the control, y ∈ Rny the measured output, w ∈ Rnw the exogenous input, and z ∈ Rnz the regulated output. Similarly, the sought output feedback controller K is described as

$$
K:\quad \begin{cases} \dot x_K = A_K x_K + B_K y \\ u = C_K x_K + D_K y \end{cases} \tag{2}
$$

with xK ∈ RnK, and is called structured if the (real) matrices AK, BK, CK, DK depend smoothly on a design parameter x ∈ Rn, referred to as the vector of tunable parameters. Formally, we have differentiable mappings AK = AK(x), BK = BK(x), CK = CK(x), DK = DK(x),


and we abbreviate these by the notation K(x) for short to emphasize that the controller is structured with x as tunable elements. A structured controller synthesis problem is then an optimization problem of the form

$$
\begin{array}{ll}
\text{minimize} & \|T_{wz}(P,K(x))\| \\
\text{subject to} & K(x)\ \text{closed-loop stabilizing} \\
& K(x)\ \text{structured},\; x \in \mathbb{R}^n
\end{array} \tag{3}
$$

where Twz(P,K) = Fℓ(P,K) is the lower feedback connection of (1) with (2) as in Fig. 1 (left), also called the Linear Fractional Transformation [Zhou et al, 1996]. The norm ∥·∥ stands for the H∞-norm, the H2-norm, or any other system norm, while the optimization variable x ∈ Rn regroups the tunable parameters in the design.

Standard examples of structured controllers K(x) include realizable PIDs, observer-based, reduced-order, or decentralized controllers, which in state-space are expressed as:

$$
\begin{bmatrix} 0 & 0 & 1 \\ 0 & -1/\tau & -k_D/\tau \\ k_I & 1/\tau & k_P + k_D/\tau \end{bmatrix},\quad
\begin{bmatrix} A - B_2 K_c - K_f C_2 & K_f \\ -K_c & 0 \end{bmatrix},\quad
\begin{bmatrix} A_K & B_K \\ C_K & D_K \end{bmatrix},\quad
\begin{bmatrix} \mathrm{diag}(A_{K_1},\dots,A_{K_q}) & \mathrm{diag}(B_{K_1},\dots,B_{K_q}) \\ \mathrm{diag}(C_{K_1},\dots,C_{K_q}) & \mathrm{diag}(D_{K_1},\dots,D_{K_q}) \end{bmatrix}.
$$

In the case of a PID the tunable parameters are x = (τ, kP, kI, kD); for observer-based controllers x regroups the estimator and state-feedback gains (Kf, Kc); for reduced-order controllers nK < nP the tunable parameters x are the nK² + nKny + nKnu + nynu unknown entries in (AK, BK, CK, DK); and in the decentralized form x regroups the unknown entries in AK1, . . . , DKq. In contrast, full-order controllers have the maximum number N = nP² + nPny + nPnu + nynu of degrees of freedom and are referred to as unstructured or as black-box controllers.
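The tunable-block viewpoint maps directly onto software. As a minimal illustration, and assuming the MATLAB Robust Control Toolbox discussed in Section 4 below (block names and dimensions are purely illustrative, not taken from this article), the controller structures just listed could be declared as tunable blocks along the following lines:

```matlab
% Sketch: declaring structured controller blocks whose free entries play
% the role of the tunable parameter vector x (assumes the Robust Control Toolbox).

% Realizable PID with tunable (kP, kI, kD) and filter time constant Tf (the tau above)
Kpid = tunablePID('Kpid','pid');

% Reduced-order controller with nK = 2 states, ny = 1 measurement, nu = 1 control:
% nK^2 + nK*ny + nK*nu + ny*nu = 9 free entries in (AK, BK, CK, DK)
Kred = tunableSS('Kred',2,1,1);

% Decentralized controller: block-diagonal aggregation of two tunable blocks
Kdec = blkdiag(tunablePID('K1','pi'), tunableGain('K2',1,1));
```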


Figure 1: Black-box full-order controller K on the left, structured 2-DOF control architecture with K = diag(K1, K2) on the right.

More sophisticated controller structures K(x) arise from architectures like, for instance, a 2-DOF control arrangement with feedback block K2 and a set-point filter K1 as in Fig. 1 (right). Suppose K1 is the first-order filter K1(s) = a/(s+a) and K2 the PI feedback K2(s) = kP + kI/s. Then the transfer Try from r to y can be represented as the feedback connection of P and K(x,s) with

$$
P := \begin{bmatrix} A & 0 & 0 & B \\ C & 0 & 0 & D \\ 0 & I & 0 & 0 \\ -C & 0 & I & -D \end{bmatrix},
\qquad
K(x,s) := \begin{bmatrix} K_1(a,s) & 0 \\ 0 & K_2(s,k_P,k_I) \end{bmatrix},
$$

and gathering tunable elements in x = (a, kP, kI). In much the same way, arbitrary multi-loop interconnections of fixed-model elements with tunable controller blocks Ki(x) can be re-arranged as in Fig. 2, so that K(x) captures all tunable blocks in a decentralized structure general enough to cover most engineering applications.
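As a concrete illustration of this 2-DOF arrangement, the following sketch (again assuming the MATLAB Robust Control Toolbox; the plant G and all signal names are placeholders, not taken from the text) assembles the tunable closed-loop transfer Try with x = (a, kP, kI):

```matlab
% Sketch: 2-DOF structure of Fig. 1 (right) with tunable K1 = a/(s+a) and K2 = kP + kI/s.
G  = tf(1,[1 0.5 1]);                % hypothetical SISO plant (placeholder)
a  = realp('a',1);                   % tunable filter parameter a
K1 = tf(a,[1 a]);                    % set-point filter a/(s+a)
K2 = tunablePID('K2','pi');          % PI feedback kP + kI/s

G.InputName  = 'u';   G.OutputName  = 'y';
K1.InputName = 'r';   K1.OutputName = 'rf';
K2.InputName = 'e';   K2.OutputName = 'u';
Sum = sumblk('e = rf - y');

Try = connect(G,K1,K2,Sum,'r','y');  % tunable closed loop r -> y with K(x) = diag(K1,K2)
```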


Figure 2: Synthesis of K = diag(K1, . . . , KN) against multiple requirements or models P(1), . . . , P(M). Each Ki(x) can be structured.

The structure concept is equally useful to deal with the second central challenge in control design: system uncertainty. The latter may be handled with µ-synthesis techniques [Stein and Doyle, 1991] if a parametric uncertain model is available. A less ambitious but often more practical alternative consists in optimizing the structured controller K(x) against a finite set of plants P(1), . . . , P(M) representing model variations due to uncertainty, aging, sensor and actuator breakdown, and un-modeled dynamics, in tandem with the robustness and performance specifications. This is again formally covered by Fig. 2 and leads to a multi-objective constrained optimization problem of the form

$$
\begin{array}{ll}
\text{minimize} & f(x) = \displaystyle\max_{k \in \mathrm{SOFT},\, i \in I_k} \big\| T^{(k)}_{w_i z_i}(K(x)) \big\| \\[2ex]
\text{subject to} & g(x) = \displaystyle\max_{k \in \mathrm{HARD},\, j \in J_k} \big\| T^{(k)}_{w_j z_j}(K(x)) \big\| \le 1 \\[1.5ex]
& K(x)\ \text{structured and stabilizing},\; x \in \mathbb{R}^n
\end{array} \tag{4}
$$


where T(k)wizi denotes the ith closed-loop robustness or performance channel wi → zi for the k-th plant model P(k). SOFT and HARD denote index sets taken over a finite set of specifications, say in {1, . . . , M}. The rationale of (4) is to minimize the worst-case cost of the soft constraints ∥T(k)wizi∥, k ∈ SOFT, while enforcing the hard constraints ∥T(k)wjzj∥ ≤ 1, k ∈ HARD, which prevail over soft ones and are mandatory. In addition to local optimization (4), the problem can undergo a global optimization step in order to prove global stability and performance of the design, see Ravanbod et al [2017]; Apkarian et al [2015a,b].
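To make the soft/hard distinction in (4) concrete, here is a minimal sketch using the SYSTUNE routine discussed in Section 4 (Try stands for a tunable closed-loop model with an 'r' input and a 'y' output, e.g. as in the 2-DOF sketch given earlier; the requirement values are illustrative):

```matlab
% Sketch: one soft and one hard requirement in the spirit of (4), tuned with systune.
Rtrack = TuningGoal.Tracking('r','y',2);         % soft: 'y' tracks 'r' with roughly 2 s response time
Rpoles = TuningGoal.Poles(0.1,0.7,100);          % hard: min decay 0.1, min damping 0.7, max frequency 100
[Tcl,fSoft,gHard] = systune(Try,Rtrack,Rpoles);  % minimizes the worst soft level subject to hard levels <= 1
```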

2 Optimization Techniques Over the Years

During the late 1990s the necessity to develop design techniques for structured regulators K(x) was recognized [Fares et al, 2001], and the limitations of synthesis methods based on algebraic Riccati equations or linear matrix inequalities (LMIs) became evident, as these techniques cannot provide the structured controllers needed in practice. The lack of appropriate synthesis techniques for structured K(x) led to the unfortunate situation where sophisticated approaches like the H∞ paradigm, developed by academia since the 1980s, could not be brought to work for the design of those controller structures K(x) preferred by practitioners. Design engineers had to continue to rely on heuristic and ad-hoc tuning techniques, with only limited scope and reliability. As an example, post-processing to reduce a black-box controller to a practical size is prone to failure. It may at best be considered a stand-in for a rigorous design method which directly computes a reduced-order controller. Similarly, hand-tuning of the parameters x remains a puzzling task because of the loop interactions, and fails as soon as complexity increases.

In the late 1990s and early 2000s, a change of methods was observed. Structured H2- and H∞-synthesis problems (3) were addressed by bilinear matrix inequality (BMI) optimization, which used local optimization techniques based on the augmented Lagrangian method [Fares et al, 2001; Noll et al, 2004; Kocvara and Stingl, 2003; Noll, 2007], sequential semidefinite programming methods [Fares et al, 2002; Apkarian et al, 2003], and non-smooth methods for BMIs [Noll et al, 2009; Lemaréchal and Oustry, 2000]. However, these techniques were based on the bounded real lemma or similar matrix inequalities, and were therefore of limited success due to the presence of Lyapunov variables, i.e. matrix-valued unknowns, whose dimension grows quadratically in nP + nK and represents the bottleneck of that approach.

The epoch-making change came with the introduction of non-smooth optimization techniques [Noll and Apkarian, 2005; Apkarian and Noll, 2006b, 2007, 2006c] to programs (3) and (4). Today non-smooth methods have superseded matrix inequality-based techniques and may be considered the state of the art as far as realistic applications are concerned. The transition took almost a decade.

Alternative control-related local optimization techniques and heuristics include the gradient sampling technique of [Burke et al, 2005], other derivative-free optimization methods as in [Kolda et al, 2003; Apkarian and Noll, 2006a], particle swarm optimization, see [Oi et al, 2008] and references therein, and also evolutionary computation techniques [Lieslehto, 2001]. All these classes do not exploit derivative information and rely on function evaluations only. They are therefore applicable to a broad variety of problems, including those where function values arise from complex numerical simulations. The combinatorial nature of these techniques, however, limits their use to small problems with a few tens of variables. More significantly, these methods often lack a solid convergence theory. In contrast, as we have demonstrated over recent years [Apkarian and Noll, 2006b; Noll et al, 2008; Apkarian et al, 2016, 2018], specialized non-smooth techniques are highly efficient in practice, rest on a sophisticated convergence theory, are capable of solving medium-size problems in a matter of seconds, and remain operational for large problems with several hundred states.

3 Non-smooth Optimization Techniques

The benefit of the non-smooth casts (3) and (4) lies in the possibility to avoid searching for Lyapunov variables, a major advantage as their number (nP + nK)²/2 usually largely dominates n, the number of true decision parameters x. Lyapunov variables do still occur implicitly in the function evaluation procedures, but this has no harmful effect for systems with up to several hundred states.


In abstract terms, a non-smooth optimization program has the form

$$
\begin{array}{ll}
\text{minimize} & f(x) \\
\text{subject to} & g(x) \le 0,\; x \in \mathbb{R}^n
\end{array} \tag{5}
$$

where f, g : Rn → R are locally Lipschitz functions and are easily identified from the cast in (4).

In the realm of convex optimization, non-smooth programs are conveniently addressed by so-called bundle methods, introduced in the 1970s by Lemaréchal [Lemaréchal, 1975]. Bundle methods are used to solve difficult problems in integer programming or in stochastic optimization via Lagrangian relaxation. Extensions of the bundling technique to non-convex problems like (3) or (4) were first developed in [Apkarian and Noll, 2006b, 2007, 2006c; Apkarian et al, 2008; Noll et al, 2009], and in more abstract form in [Noll et al, 2008]. Recently, we also extended bundle techniques to the trust-region framework [Apkarian et al, 2016, 2018; Apkarian and Noll, 2018], which leads to the first extension of the classical trust-region method to non-differentiable optimization supported by a valid convergence theory.

Fig. 3 shows a schematic view of a non-convex bundle method, consisting of a descent-step generating inner loop (yellow block), comparable to a line search in smooth optimization, embedded into the outer loop (blue box), where serious iterates are processed, stopping criteria are applied, and the acceptance rules of traditional trust-region techniques are assured. At the core of the interaction between inner and outer loop is the management of the proximity control parameter τ, which governs the stepsize ∥x − yk∥ between the trial steps yk and the current serious iterate x. Similar to the management of a trust-region radius or of the stepsize in a line search, proximity control makes it possible to force shorter trial steps if agreement of the local model with the true objective function is poor, and allows larger steps if agreement is satisfactory.
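The τ-management just described can be summarized schematically as follows; this is an illustrative sketch of the update logic in Fig. 3 and not the authors' implementation. Here ρ denotes the agreement ratio between actual and model-predicted decrease at the trial step, and 0 < γ < Γ < 1 are fixed thresholds:

```matlab
function tau_next = update_proximity(rho, tau, gamma, Gamma)
% Schematic proximity-parameter update (illustrative sketch only).
if rho < gamma            % poor agreement: trial step rejected
    tau_next = 2*tau;     %   tighten proximity control -> shorter next trial step (stay in inner loop)
elseif rho >= Gamma       % very good agreement at the accepted (serious) step
    tau_next = tau/2;     %   relax proximity control -> allow larger steps
else                      % acceptable agreement: step accepted, tau kept unchanged
    tau_next = tau;
end
end
```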

Oracle-based bundle methods traditionally assure global convergence in the sense of subsequences under the sole hypothesis that for every trial point x the function value f(x) and one Clarke subgradient ϕ ∈ ∂f(x) are provided. In automatic control applications it is as a rule possible to provide more specific information, which may be exploited to speed up convergence, see Apkarian and Noll [2006b].

Figure 3: Flow chart of the proximity control bundle algorithm.

Computing function value and gradients of the H2-norm f(x) = ∥Twz(P,K(x))∥2 requires essentially the solution of two Lyapunov equations of size nP + nK, see [Apkarian et al, 2007]. For the H∞-norm, f(x) = ∥Twz(P,K(x))∥∞, function evaluation is based on the Hamiltonian algorithm of [Benner et al, 2012; Boyd et al, 1989]. The Hamiltonian matrix is of size nP + nK, so that function evaluations may be costly for very large plant state dimension (nP > 500), even though the number of outer-loop iterations of the bundle algorithm is not affected by a large nP and generally relates to n, the dimension of x. The additional cost for subgradient computation for large nP is relatively cheap as it relies on linear algebra [Apkarian and Noll, 2006b]. Function and subgradient evaluations for the H∞ and H2 norms are typically obtained in O((nP + nK)³) flops.
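For illustration, the two function evaluations discussed above correspond to standard norm computations on the closed-loop model; a minimal sketch with a placeholder model (assuming the norm and hinfnorm functions of the Control System and Robust Control Toolboxes):

```matlab
% Sketch: evaluating the objectives of (3) for a given closed-loop model Twz.
Twz   = rss(20,2,2);        % placeholder closed loop with nP + nK = 20 states
Twz.D = zeros(2);           % make it strictly proper so the H2 norm is finite
f2    = norm(Twz,2);        % H2 norm: essentially two Lyapunov equations of size nP + nK
finf  = hinfnorm(Twz);      % Hinf norm: Hamiltonian/level-set type algorithm
```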

4 Computational Tools

Our non-smooth optimization methods have been available to the engineering community since 2010 via the MATLAB Robust Control Toolbox [Robust Control Toolbox 4.2, 2012; Gahinet and Apkarian, 2011]. The routines HINFSTRUCT, LOOPTUNE and SYSTUNE are versatile enough to define and combine tunable blocks Ki(x), to build and aggregate multiple models and design requirements of different nature on the channels T(k)wz, and to provide suitable validation tools. Their implementation was carried out in cooperation with P. Gahinet (MathWorks). These routines further exploit the structure of problem (4) to enhance efficiency, see [Apkarian and Noll, 2007] and [Apkarian and Noll, 2006b].
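As a minimal example of the kind of workflow these routines support, the following sketch runs HINFSTRUCT on a tunable weighted sensitivity function; the plant, weight and block names are placeholders, and the call pattern assumes that hinfstruct accepts a tunable closed-loop model:

```matlab
% Sketch: structured Hinf synthesis with HINFSTRUCT (placeholder plant and weight).
G  = tf(1,[1 1 1]);                 % placeholder plant
W  = makeweight(100,1,0.5);         % placeholder sensitivity weight
C0 = tunablePID('C','pi');          % structured controller: a PI block
S0 = feedback(1,G*C0);              % tunable sensitivity function 1/(1+G*C)
[CL,gam] = hinfstruct(W*S0);        % minimize ||W*S||_inf over the PI gains
C  = getBlockValue(CL,'C');         % extract the tuned controller
```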

It should be mentioned that design problems with multiple hard constraints are inherently complex and generally NP-hard, so that exhaustive methods fail even for small to medium-size problems. The principled decision made in [Apkarian and Noll, 2006b], and reflected in the MATLAB tools, is to rely on local optimization techniques instead. This leads to weaker convergence certificates, but has the advantage of working successfully in practice. In the same vein, in (4) it is preferable to rely on a mixture of soft and hard requirements, for instance by the use of exact penalty functions [Noll and Apkarian, 2005]. Key features implemented in the mentioned MATLAB routines are discussed in [Apkarian, 2013; Gahinet and Apkarian, 2011; Apkarian and Noll, 2007].

5 Applications

Design of a feedback regulator is an interactive process, in which tools like SYSTUNE, LOOPTUNE or HINFSTRUCT support the designer in various ways. In this section we illustrate their enormous potential by showing that even infinite-dimensional systems may be successfully addressed with these tools. For a plethora of design examples for real-rational systems, including parametric and complex dynamic uncertainty, we refer to Ravanbod et al [2017]; Apkarian et al [2016, 2015a]; Apkarian and Noll [2018]; Apkarian et al [2018]. For recent real-world applications of our tools see also Falcoz et al [2015], where it is in particular explained how HINFSTRUCT helped in 2014 to save the Rosetta mission. Another important application of HINFSTRUCT is the design of the atmospheric flight pilot for the ARIANE VI launcher by the Ariane Group, see Ganet-Schoeller et al [2017].

5.1 Illustrative example

We discuss boundary control of a wave equation with anti-stable damping,

$$
\begin{aligned}
x_{tt}(\xi,t) &= x_{\xi\xi}(\xi,t), && t \ge 0,\ \xi \in [0,1] \\
x_\xi(0,t) &= -q\,x_t(0,t), && q > 0,\ q \ne 1 \\
x_\xi(1,t) &= u(t),
\end{aligned} \tag{6}
$$

where the notations xt, xξ are partial derivatives of x with respect to time and space, respectively. In (6), x(·,t), xt(·,t) is the state, the control applied at the boundary ξ = 1 is u(t), and we assume that the measured outputs are

$$
y_1(t) = x(0,t),\quad y_2(t) = x(1,t),\quad y_3(t) = x_t(1,t). \tag{7}
$$

The system has been discussed previously in Smyshlyaev and Krstic [2009]; Fridman [2014]; Bresch-Pietri and Krstic [2014], and has been proposed for the control of stick-slip vibrations in drilling devices [Saldivar et al, 2011]. Here measurements y1, y2 correspond to the angular positions of the drill string at the top and bottom level, y3 measures angular speed at the top level, while the control corresponds to a reference velocity at the top. The friction characteristics at the bottom level are characterized by the parameter q, and the control objective is to maintain a constant angular velocity at the bottom.

Similar models have been used to control pressure fields in duct combustion dynamics, see DeQueiroz and Rahn [2002]. The challenge in (6), (7) is to design implementable controllers despite the use of an infinite-dimensional system model.

The transfer function of (6) is obtained from

$$
G(\xi,s) = \frac{x(\xi,s)}{u(s)} = \frac{1}{s}\cdot\frac{(1-q)e^{s\xi}+(1+q)e^{-s\xi}}{(1-q)e^{s}-(1+q)e^{-s}},
$$

which in view of (7) leads to G(s) = [G1; G2; G3] = [G(0,s); G(1,s); sG(1,s)].

Putting G in feedback with the controller K0 = [0 0 1] leads to G̃ = G/(1+G3), where

$$
\tilde G(s) =
\begin{bmatrix}
\dfrac{1}{s(1-q)} \\[1.5ex]
\dfrac{1+Q}{2s} \\[1.5ex]
\dfrac{1}{2}
\end{bmatrix}
+
\begin{bmatrix}
-\dfrac{1-e^{-s}}{s(1-q)} \\[1.5ex]
-\dfrac{Q\,(1-e^{-2s})}{2s} \\[1.5ex]
\dfrac{Q}{2}\,e^{-2s}
\end{bmatrix}
=: \hat G(s) + \Phi(s), \tag{8}
$$

with Q := (1+q)/(1−q). Here Ĝ is real-rational and unstable, while Φ is stable but infinite-dimensional. Now we use the fact that stability of the closed loop (Ĝ+Φ, K̃) is equivalent to stability of the loop (Ĝ, feedback(K̃,Φ)) upon defining feedback(M,N) := M(I+NM)⁻¹. The loop transformation is explained in Fig. 4.


Figure 4: Stability of the closed loop (Ĝ+Φ, K̃) is equivalent to stability of the closed loop (Ĝ, feedback(K̃,Φ)). See also Moelja and Meinsma [2003].

Using (8) we construct a finite-dimensional structured controller K̂ = K̂(x) which stabilizes Ĝ. The controller K̃ stabilizing G̃ in (8) is then recovered from K̂ through the equation K̂ = feedback(K̃,Φ), which when inverted gives K̃ = feedback(K̂,−Φ). The overall controller for (6) is K∗ = K0 + K̃, and since along with K̂ only delays appear in Φ, the controller K∗ is implementable.

Construction of K̂ uses SYSTUNE with pole placement via TuningGoal.Poles, imposing that closed-loop poles have a minimum decay of 0.9, minimum damping of 0.9, and a maximum frequency of 4.0. The controller structure is chosen as static, so that x ∈ R3. A simulation with K∗ is shown in Fig. 5 (bottom), and some acceleration over the backstepping controller from Bresch-Pietri and Krstic [2014] (top) is observed.
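A sketch of this tuning step follows; the rational part Ĝ of (8) is represented by a placeholder model here, and the names as well as the feedback sign are illustrative:

```matlab
% Sketch: static structured controller for the rational part of (8), tuned with SYSTUNE.
Ghat = rss(4,3,1);                      % placeholder standing in for Ghat(s) (3 outputs, 1 input)
Khat = tunableGain('Khat',1,3);         % static controller, x in R^3
CL0  = feedback(Ghat*Khat, eye(3));     % tunable closed loop (Ghat, Khat)
Req  = TuningGoal.Poles(0.9,0.9,4.0);   % min decay 0.9, min damping 0.9, max frequency 4.0
[CL,fSoft] = systune(CL0,Req);          % tune the three static gains
Khat_tuned = getBlockValue(CL,'Khat');  % extract the tuned block
```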

Figure 5: Wave equation. Simulations for K obtained by backstepping control (top) [Bresch-Pietri and Krstic, 2014] and K∗ = K0 + K̃ obtained by optimizing feedback(Ĝ, K̂) via SYSTUNE (bottom). Both controllers are ∞-dimensional, but implementable.

5.2 Gain-scheduling control

Our last study considers the case where the parameter q ≥ 0 is uncertain or allowed to vary in time with sufficiently slow variations, as in Shamma and Athans [1990]. We assume that a nominal q0 > 0 and an uncertain interval [q̲, q̄] with q0 ∈ (q̲, q̄) and 1 ∉ [q̲, q̄] are given.

The following scheduling scenarios, all leading to implementable controllers, are possible. (a) Computing a nominal controller K̂ at q0 as before, and scheduling through Φ(q), which depends explicitly on q, so that K(1)(q) = K0 + feedback(K̂, −Φ(q)). (b) Computing K̂(q), which depends on q, and using K(2)(q) = K0 + feedback(K̂(q), −Φ(q)).

While (a) uses (3) based on Apkarian and Noll [2006b,c] and available in SYSTUNE, we show that one can also apply (3) to case (b). We use Fig. 4 to work in the finite-dimensional system (Ĝ(q), K̂(q)), where plant and controller now depend on q, which is a parameter-varying design.

For that we have to decide on a parametric form of the controller K̂(q), which we choose as

K̂(q,x) = K̂(q0) + (q−q0)K1(x) + (q−q0)²K2(x),

where we adopt the simple static form K1(x) = [x1 x2 x3], K2(x) = [x4 x5 x6], featuring a total of 6 tunable parameters. The nominal K̂(q0) is obtained via (3) as above. For q0 = 3 this leads to K̂(q0) = [−1.049 −1.049 −0.05402], computed via SYSTUNE.
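The parametric form above can be written down directly by combining an uncertain parameter with tunable gains; a sketch (the value of K̂(q0) is taken from the text, everything else is illustrative):

```matlab
% Sketch: parametric controller K(q,x) = K(q0) + (q-q0)*K1(x) + (q-q0)^2*K2(x), with q0 = 3.
q   = ureal('q',3,'Range',[2 4]);       % uncertain / scheduling parameter
Kq0 = [-1.049 -1.049 -0.05402];         % nominal controller K(q0) reported in the text
K1  = tunableGain('K1',1,3);            % [x1 x2 x3]
K2  = tunableGain('K2',1,3);            % [x4 x5 x6]
Kq  = Kq0 + (q-3)*K1 + (q-3)^2*K2;      % generalized model with uncertain and tunable blocks
```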

With the parametric form K̂(q,x) fixed, we now use again the feedback system (Ĝ(q), K̂(q)) in Fig. 4 and design a parametric robust controller using the method of Apkarian et al [2015a], which is included in the SYSTUNE package and used by default if an uncertain closed loop is entered. The tuning goals are chosen as constraints on closed-loop poles, including a minimum decay of 0.7, minimum damping of 0.9, and maximum frequency 2. The controller obtained is (with q0 = 3)

K̂(q,x∗) = K̂(q0) + (q−q0)K1(x∗) + (q−q0)²K2(x∗),

with K1 = [−0.1102, −0.1102, −0.1053], K2 = [0.03901, 0.03901, 0.02855], and we retrieve the final parameter-varying controller for G(q) as

K(2)(q) = K0 + feedback(K̂(q,x∗), −Φ(q)).

Nominal and scheduled controllers are compared in simulation in Figs. 6, 7, and 8, which indicate that K(2)(q) achieves the best performance for frozen-in-time values q ∈ [2,4]. All controllers are easily implementable, since only real-rational elements in combination with delays are used.

Figure 6: Synthesis at nominal q0 = 3. Simulations of the nominal K∗ = K0 + feedback(K̂, Φ(3)) for q = 2, 3, 4. The nominal controller is robustly stable over [q̲, q̄].

The non-smooth program (5) was solved with SYSTUNE in 30 s of CPU time on a Mac OS X with a 2.66 GHz Intel Core i7 and 8 GB of RAM. The reader is referred to the MATLAB Control Toolbox, 2018b and higher versions, for additional examples. More details on this study can be found in Apkarian and Noll [2019].

Figure 7: Method 1. K̂ obtained for nominal q = 3, but scheduled K(q) = K0 + feedback(K̂, Φ(q)). Simulations for q = 2 (top), q = 3 (middle), q = 4 (bottom).

Figure 8: Method 2. K̂(q) = K̂nom + (q−3)K1 + (q−3)²K2 and K(q) = K0 + feedback(K̂(q), Φ(q)). Simulations for q = 2, 3, 4.

Cross References

• H∞ control
• Optimization-Based Robust Control
• Robust Synthesis and Robustness Analysis Techniques and Tools

Recommended reading

Apkarian P (2013) Tuning controllers against multiple design requirements. In: American Control Conference (ACC), 2013, pp 3888–3893

Apkarian P, Noll D (2006a) Controller design via nonsmooth multi-directional search. SIAM Journal on Control and Optimization 44(6):1923–1949

Apkarian P, Noll D (2006b) Nonsmooth H∞ synthesis. IEEE Trans Aut Control 51(1):71–86

Apkarian P, Noll D (2006c) Nonsmooth optimization for multidisk H∞ synthesis. European J of Control 12(3):229–244

Apkarian P, Noll D (2007) Nonsmooth optimization for multiband frequency domain control design. Automatica 43(4):724–731

Apkarian P, Noll D (2018) Structured H∞-control of infinite dimensional systems. Int J Robust Nonlin Control 28(9):3212–3238

Apkarian P, Noll D (2019) Boundary control of partial differential equations using frequency domain optimization techniques. arXiv:1905.06786v1, pages 1–22

Apkarian P, Noll D, Thevenet JB, Tuan HD (2003) A spectral quadratic-SDP method with applications to fixed-order H2 and H∞ synthesis. European J of Control 10(6):527–538

Apkarian P, Noll D, Rondepierre A (2007) Mixed H2/H∞ control via nonsmooth optimization. In: Proc. of the 46th IEEE Conference on Decision and Control, New Orleans, LA, pp 4110–4115

Apkarian P, Noll D, Prot O (2008) A trust region spectral bundle method for nonconvex eigenvalue optimization. SIAM J on Optimization 19(1):281–306

Apkarian P, Dao MN, Noll D (2015a) Parametric robust structured control design. IEEE Transactions on Automatic Control 60(7):1857–1869

Apkarian P, Noll D, Ravanbod L (2015b) Computing the structured distance to instability. In: SIAM Conference on Control and its Applications, pp 423–430, DOI 10.1137/1.9781611974072.58

Apkarian P, Noll D, Ravanbod L (2016) Nonsmooth bundle trust-region algorithm with applications to robust stability. Set-Valued and Variational Analysis 24(1):115–148

Apkarian P, Noll D, Ravanbod L (2018) Nonsmooth optimization for robust control of infinite-dimensional systems. Set-Valued and Variational Analysis 26(2):405–429

Benner P, Sima V, Voigt M (2012) L∞-norm computation for continuous-time descriptor systems using structured matrix pencils. IEEE Trans Aut Control 57(1):233–238

Boyd S, Balakrishnan V, Kabamba P (1989) A bisection method for computing the H∞ norm of a transfer matrix and related problems. Mathematics of Control, Signals, and Systems 2(3):207–219


Bresch-Pietri D, Krstic M (2014) Output-feedback adaptive control of a wave PDE with boundary anti-damping. Automatica 50(5):1407–1415

Burke J, Lewis A, Overton M (2005) A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM Journal on Optimization 15:751–779

DeQueiroz M, Rahn CD (2002) Boundary control of vibrations and noise in distributed parameter systems: an overview. Mechanical Systems and Signal Processing 16(1):19–38

Falcoz A, Pittet C, Bennani S, Guignard A, Bayart C, Frapard B (2015) Systematic design methods of robust and structured controllers for satellites. Application to the refinement of Rosetta's orbit controller. CEAS Space Journal 7:319–334, DOI 10.1007/s12567-015-0099-8

Fares B, Apkarian P, Noll D (2001) An augmented Lagrangian method for a class of LMI-constrained problems in robust control theory. Int J Control 74(4):348–360

Fares B, Noll D, Apkarian P (2002) Robust control via sequential semidefinite programming. SIAM J on Control and Optimization 40(6):1791–1820

Fridman E (2014) Introduction to Time-Delay Systems. Systems and Control: Foundations and Applications, Birkhäuser Basel

Gahinet P, Apkarian P (2011) Structured H∞ synthesis in MATLAB. In: Proc. IFAC World Congress, Milan, Italy, pp 1435–1440

Ganet-Schoeller M, Desmariaux J, Combier C (2017) Structured control for future European launchers. AeroSpaceLab 13:2–10

Kocvara M, Stingl M (2003) A code for convex nonlinear and semidefinite programming. Optimization Methods and Software 18(3):317–333

Kolda TG, Lewis RM, Torczon V (2003) Optimization by direct search: new perspectives on some classical and modern methods. SIAM Review 45(3):385–482

Lemaréchal C (1975) An extension of Davidon's methods to nondifferentiable problems. In: Balinski ML, Wolfe P (eds) Math. Programming Study 3, Nondifferentiable Optimization, North-Holland, pp 95–109

Lemaréchal C, Oustry F (2000) Nonsmooth algorithms to solve semidefinite programs. In: El Ghaoui L, Niculescu S-I (eds) Advances in Linear Matrix Inequality Methods in Control, SIAM

Lieslehto J (2001) PID controller tuning using evolutionary programming. In: American Control Conference, vol 4, pp 2828–2833

Moelja AA, Meinsma G (2003) Parametrization of stabilizing controllers for systems with multiple i/o delays. IFAC Proceedings Volumes 36(19):351–356, 4th IFAC Workshop on Time Delay Systems (TDS 2003), Rocquencourt, France, 8-10 September 2003

Noll D (2007) Local convergence of an augmented Lagrangian method for matrix inequality constrained programming. Optimization Methods and Software 22(5):777–802

Noll D, Apkarian P (2005) Spectral bundle methods for nonconvex maximum eigenvalue functions: first-order methods. Mathematical Programming Series B 104(2):701–727

Noll D, Torki M, Apkarian P (2004) Partially augmented Lagrangian method for matrix inequality constraints. SIAM Journal on Optimization 15(1):161–184

Noll D, Prot O, Rondepierre A (2008) A proximity control algorithm to minimize nonsmooth and nonconvex functions. Pacific J of Optimization 4(3):571–604

Noll D, Prot O, Apkarian P (2009) A proximity control algorithm to minimize nonsmooth and nonconvex semi-infinite maximum eigenvalue functions. Journal of Convex Analysis 16(3 & 4):641–666

Oi A, Nakazawa C, Matsui T, Fujiwara H, Matsumoto K, Nishida H, Ando J, Kawaura M (2008) Development of PSO-based PID tuning method. In: International Conference on Control, Automation and Systems, pp 1917–1920

Ravanbod L, Noll D, Apkarian P (2017) Branch and bound algorithm with applications to robust stability. Journal of Global Optimization 67(3):553–579

Robust Control Toolbox 4.2 (2012) The MathWorks Inc., Natick, MA, USA

Saldivar M, Mondié S, Loiseau JJ, Rasvan V (2011) Stick-slip oscillations in oilwell drillings: distributed parameter and neutral type retarded model approaches

Shamma JS, Athans M (1990) Analysis of gain scheduled control for nonlinear plants. IEEE Trans Aut Control 35(8):898–907


Smyshlyaev A, Krstic M (2009) Boundary control of an anti-stable wave equation with anti-damping on the uncontrolled boundary. Systems and Control Letters 58:617–623

Stein G, Doyle J (1991) Beyond singular values and loopshapes. AIAA Journal of Guidance and Control 14:5–16

Zhou K, Doyle JC, Glover K (1996) Robust and Optimal Control. Prentice Hall

