
Mixed objective constrained stable generalised predictive control

J.A. Rossiter, B. Kouvaritakis and J.R. Gossner

Indexing terms: Predictive control, Stability

Abstract: Constrained stable generalised predictive control (CSGPC) provides a means for handling constraints within the predictive control context and has guaranteed stability properties. However, to guarantee stability, an assumption concerning the feasibility of making the output reach its set-point over a finite horizon is required. If the performance objective is changed from a two-norm of the predicted errors to an infinity-norm, then the finite-horizon feasibility assumption is not needed to guarantee stability. As might be expected, though, performance under an infinity-norm objective is often not as good. Here we propose an algorithm which overcomes these difficulties by mixing the two- and infinity-norm objectives.

1 Introduction

The basis of predictive control is to predict and minimise n_y future tracking errors with n_c future command inputs. Unfortunately, this technique can only be guaranteed to give stable results for special cases. To remedy this, several strategies which adopt the basic idea of generalised predictive control (GPC) [1] have been proposed. Constrained receding horizon predictive control (CRHPC) [2] adds further terminal constraints, as does a similar algorithm proposed in [3]. Stable generalised predictive control (SGPC) [4], on the other hand, forms a stabilising loop around the system before applying GPC to a closed-loop configuration with finite impulse responses (FIRs). By using FIRs, the minimisation of the error norm gives a monotonically decreasing cost which guarantees stability and asymptotic tracking.

When constraints are added to predictive control, one must assume the implied optimisation problem is feasible to retain the desired stability properties. This optimisation is addressed as a mixed-weights least-squares (MWLS) problem in constrained stable generalised predictive control (CSGPC) [5]. The feasibility assumption is strong because it includes 'short-term' feasibility, or feasibility over finite horizons. The fact that a system

© IEE, 1995. Paper 1961D (C8), first received 6th July 1994 and in final revised form 23rd February 1995. J.A. Rossiter is with the Department of Mathematical Sciences, Loughborough University, Loughborough, LE11 3TU, United Kingdom. B. Kouvaritakis and J.R. Gossner are with the Department of Engineering Science, Oxford University, Parks Road, Oxford, OX1 3PJ, United Kingdom.


output can eventually reach a target value without violating any constraints (long-term feasibility) does not imply it can do so in n_y steps and with only n_c command input changes (short-term feasibility). CSGPC, like other constrained predictive control algorithms, can, in fact, destabilise what was originally a feasible problem; by requiring the predicted output to reach its target within n_y steps, it may be necessary to drive the controls and/or their increments at their limits. If, however, a system is unstable and/or nonminimum-phase, these earlier control moves may require future control moves which will not be available within the existing constraints; this will lead to instability [6].

Short-term infeasibility is caused by the requirement that the output should reach its target value within n_y steps, so this requirement must be relaxed. However, the stability proof of SGPC and CSGPC depends on the property, implied by the use of FIRs, that the output settles after n_y steps. Thus an obvious strategy to follow is to: (i) retain this last property but (ii) allow the value to which the output settles to become a degree of freedom. This was implemented in [6] by augmenting the MWLS problem appropriately to include a penalty on the deviation of far-future command inputs from their target value. The modified algorithm is activated only when CSGPC becomes short-term infeasible, and guarantees recovery of short-term feasibility; this, together with the properties of CSGPC, guarantees stability and asymptotic tracking; we call this stable constrained generalised predictive control (SCGPC). Unfortunately, during the feasibility recovery stage, the modified algorithm does not place any penalty on the norm of the vector of predicted errors, and this may degrade transient performance.

Recent work by Zheng and Morari [7] and Allwright [8] minimises the infinity-norm of the predicted errors rather than the two-norm as CSGPC does. Stability is guaranteed without recourse to terminal constraints, and therefore without the need for a short-term feasibility assumption; however, the results are restricted to open-loop-stable systems. Within the context of SGPC this work can be extended to unstable systems. The performance of such systems, though, is often not as good as CSGPC, as all effort is spent minimising just the largest error, and this is often the first one. The system is therefore driven very hard and the responses can be oscillatory.

In this paper we propose a procedure for dealing with short-term infeasibility in which the objective is usually the standard CSGPC two-norm minimisation, but when short-term infeasibility is encountered, the setpoint is allowed to become a degree of freedom and the objective

IEE Proc.-Control Theory Appl., Vol. 142, No. 4, July 1995


is shifted to the minimisation of the infinity-norm of the predicted errors until short-term feasibility is regained. Thus, by mixing objectives, the superior performance of CSGPC is retained when possible, but stability and asymptotic tracking are guaranteed with only the assumption of long-term feasibility.

Section 2 reviews SGPC and CSGPC; Section 3 then introduces an infinity-norm version of CSGPC, termed l∞-CSGPC; an extension of the MWLS algorithm which performs the constrained optimisation of an infinity-norm of the predicted errors is developed in Section 4 and subsequently embedded into the CSGPC framework in Section 5 to complete the development of l∞-CSGPC; finally, an algorithm which mixes these two objectives, mixed objective CSGPC, is presented in Section 6. The application of these results is illustrated by numerical example in Section 7, which also draws some concluding remarks.

2 Brief review of SGPC and CSGPC

2.1 The SGPC strategy [4]
Let the model of the system to be controlled be given as

g(z) = z^{-1}b(z)/a(z)
a(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_n z^{-n}
b(z) = b_0 + b_1 z^{-1} + ... + b_{n-1} z^{-n+1}   (1)

where the orders of the numerator and denominator polynomials have been assumed, without loss of generality, to be the same and equal to n; more delays can be incorporated by setting b_i = 0 for i = 1, 2, ..., k. Form the stabilising feedback loop of Fig. 1.

Fig. 1  SGPC stabilising feedback loop

If the controller polynomials Y(z) and X(z) from Fig. 1 are derived by solving the Bezout identity

A(z)Y(z) + B(z)X(z) = 1
A(z) = a(z)Δ(z),  B(z) = z^{-1}b(z),  Δ(z) = 1 - z^{-1}   (2)

then it is easy to show that the system input u, output y and commanding input c are related as

y_{t+1} = b(z)c_t,  Δu_t = A(z)c_t   (3)

Simulating eqns. 3a, b forward in time gives

y→ = Γ_b c→ + y_f,  Δu→ = Γ_A c→ + Δu_f
y_f = H_b c← + m_b c_∞,  Δu_f = H_A c← + m_A c_∞   (4)

where y_f and Δu_f depend on past data and are known, and the 'under-arrows' signify

v→ = [v(t_0), v(t_0 + 1), ..., v(t_0 + n_-)]^T
v← = [v(t_0), v(t_0 - 1), ..., v(t_0 - n_+)]^T   (5)

with the parameters t_0 and n_- being t + 1 and n_y - 1 for y, t and n_c - 1 for c, and t and n_y - 1 for Δu, while t_0 and n_+ are t - 1 and n for c; n_y and n_c denote the output and reference horizons. C_p = [Γ_p, M_p] denotes an (n_y × n_y) convolution Toeplitz matrix (with Γ_p comprising the first n_c columns), and H_p denotes an (n_y × n) Hankel matrix, both formed out of the coefficients of the polynomial p(z) = p_0 + p_1 z^{-1} + ... + p_n z^{-n}, such that the ij-th element is p_{i-j} for C_p (and correspondingly for H_p). The column vector m_p is the sum of the columns of M_p, whereas c_∞ is chosen to remove steady-state offsets; assuming that r→ denotes the next n_y setpoint values and e_{n_y} the n_y-th standard basis vector, c_∞ can be chosen to be e_{n_y}^T r→/b(1) to remove the steady-state offset. Eqns. 4a, b can be used to stipulate the SGPC cost as

J = ||y→ - r→||₂² + λ||Δu→||₂² = (c→ - c₀)^T S²(c→ - c₀) + γ

c₀ = S^{-2}[Γ_b^T(r→ - y_f) - λΓ_A^T Δu_f],  S² = Γ_b^T Γ_b + λΓ_A^T Γ_A

γ = ||r→ - y_f||₂² + λ||Δu_f||₂² - ||Sc₀||₂²   (6)

In the absence of constraints, the minimum value of J is γ, and the minimiser is c₀; as only the first value is needed,

c_t = e₁^T c₀ = e₁^T S^{-2}[Γ_b^T(r→ - H_b c← - m_b c_∞) - λΓ_A^T(H_A c← + m_A c_∞)]   (7)

where e₁ denotes the first standard basis vector. This implies

c_t + e₁^T S^{-2}(Γ_b^T H_b + λΓ_A^T H_A)c← = e₁^T S^{-2}[Γ_b^T(I - m_b e_{n_y}^T/b(1)) - λΓ_A^T m_A e_{n_y}^T/b(1)]r→   (8)

Solving the above for c_t and using the backward shift operator z^{-1}, we may write the control law as

c_t = [p_r(z)/p_c(z)] r_{t+n_y},  Δu_t = A(z)c_t   (9)

where p_r(z) and p_c(z) are an anticausal prefilter and the (causal) closed-loop pole polynomial, whose vectors of coefficients are given by

p_r = e₁^T S^{-2}[Γ_b^T(I - m_b e_{n_y}^T/b(1)) - λΓ_A^T m_A e_{n_y}^T/b(1)]
p_c = [1, e₁^T S^{-2}(Γ_b^T H_b + λΓ_A^T H_A)]   (10)

This strategy mirrors that of GPC with one important difference: in GPC, Δu is related to y through the infinite impulse response of g(z), whereas here the free variable c is related to y through the FIR of eqn. 3a. On account of this, the vector of future outputs reaches the target value after n + n_c steps, making the error zero from n + n_c onwards. As a consequence, if the control is not changed, the cost at the next step, J̄_{t+1}, will be less than or equal to the optimum cost at t, J_t; optimisation at t + 1 can only reduce this value, to give J_{t+1} ≤ J_t. It is easy to show that, if n_y > n + n_c, equality can only occur if the output is already at the setpoint; hence the control law of eqn. 9 renders the cost of eqn. 6 a monotonically decreasing function of t, and stability is guaranteed.
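The unconstrained SGPC computation of eqns. 3-7 can be sketched numerically. Everything below — the plant coefficients, horizons, weighting λ, and the helper `conv_toeplitz` — is a made-up toy example for illustration, not data or code from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_toeplitz(p, n_rows, n_cols):
    # Lower-triangular convolution (Toeplitz) matrix whose (i, j) entry
    # is p[i - j] (zero for i < j), as used for Gamma_b and Gamma_A.
    col = np.zeros(n_rows)
    k = min(len(p), n_rows)
    col[:k] = p[:k]
    return toeplitz(col, np.zeros(n_cols))

# Hypothetical second-order plant (illustrative numbers only)
a = np.array([1.0, -1.4, 0.45])        # a(z) = 1 - 1.4 z^-1 + 0.45 z^-2
b = np.array([0.2, 0.1])               # b(z) = 0.2 + 0.1 z^-1
A = np.convolve(a, [1.0, -1.0])        # A(z) = a(z)(1 - z^-1), as in eqn. 2
n_y, n_c, lam = 6, 3, 0.1

Gb = conv_toeplitz(b, n_y, n_c)        # Gamma_b: maps c to predicted outputs
GA = conv_toeplitz(A, n_y, n_c)        # Gamma_A: maps c to control increments

S2 = Gb.T @ Gb + lam * GA.T @ GA       # S^2 of eqn. 6 (positive-definite here)

r = np.ones(n_y)                       # setpoint predictions r
y_f = np.zeros(n_y)                    # free responses: system at rest
du_f = np.zeros(n_y)

# Unconstrained minimiser c0 of eqn. 6; only its first entry is applied.
c0 = np.linalg.solve(S2, Gb.T @ (r - y_f) - lam * GA.T @ du_f)
c_t = c0[0]
```

In receding-horizon fashion, only `c_t` would be applied before the whole computation is repeated at the next sample.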

The control law of eqn. 9 is convenient for analysis, but does not take explicit account of past output values; for practical implementation the control law can be written in closed-loop form, using the relationship between u, y and c implied by the feedback configuration of Fig. 1 to express past values of c in terms of past values of u and y.

2.2 Introducing constraints into SGPC [5]
In most practical applications there are constraints on the values the system inputs/outputs are allowed to take. For brevity, we shall consider only input absolute and



rate constraints of the form

|u_{t+i} - u_0| ≤ U,  |Δu_{t+i}| ≤ R   (11)

and assume the constraints are time-invariant; time-varying limits can be handled in a similar fashion. Constraint eqns. 11, together with eqns. 4a, b, imply

||Mc→ - v(t)||_∞ ≤ 1   (12)

where v varies with time whether or not the constraints are time-dependent. The matrix M and the vector v are detailed in [5].

Next we define the 'feasible region' F_{n_c}[v(t)] as the subspace of all n_c-dimensional vectors c→ which satisfy condition 12. If F_{n_c}[v(t)] is empty for all n_c, then the problem is said to be infeasible; in contrast to this, the term 'short-term infeasibility' will be used when F_{n_c}[v(t)] is empty for a particular n_c.

When ||Mc→ - v(t)||_∞ cannot be made less than 1 (i.e. the problem is short-term infeasible), one may wish to choose c→ so as to minimise the worst-case constraint violation; this can be done by minimising ||Mc→ - v(t)||_∞ with Lawson's weighted least-squares algorithm. However, in the case of short-term feasibility, one would wish the strategy for choosing c→ to be dominated by the minimisation of the cost of eqn. 6. This we do with the mixed-weights least-squares (MWLS) iteration defined by the minimisation over c^{(i+1)} of

J_MWLS^{(i+1)} = ||[w^{(i)}]^{1/2} ε^{(i+1)}||₂² + ||[W^{(i)}]^{1/2} e^{(i+1)}||₂²
ε^{(i+1)} = S(c^{(i+1)} - c₀),  e^{(i+1)} = Mc^{(i+1)} - v(t)   (13)

where the weights w and W are updated as in eqn. 14.

Under the usual assumption that S is positive-definite, MWLS has the following desirable property [5]: with short-term feasibility, it can only converge to the constrained optimum, c^*; in the case of short-term infeasibility, it converges to the solution that minimises the maximum constraint violation.
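The Lawson-style reweighting that underlies MWLS can be illustrated on a generic minimax problem. The sketch below shows only the reweighting idea on an arbitrary overdetermined system — it is not the paper's exact MWLS recursion, and the matrices are random placeholders.

```python
import numpy as np

def lawson_linf(A, b, iters=300):
    # Lawson-style iteratively reweighted least squares: approximate
    # argmin_x ||Ax - b||_inf by repeatedly solving a weighted LS problem
    # and inflating the weights of the rows with the largest residuals.
    m, _ = A.shape
    w = np.full(m, 1.0 / m)              # weights start uniform, sum to one
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
        r = np.abs(A @ x - b)
        if r.sum() == 0.0:               # exact fit: nothing left to weight
            break
        w *= r                           # emphasise the largest residuals
        w /= w.sum()                     # renormalise to sum to one
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))         # placeholder overdetermined system
b = rng.standard_normal(20)
x_inf = lawson_linf(A, b)                # near-minimax solution
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
```

After convergence, the worst-case residual of `x_inf` should be no larger than that of the plain least-squares solution `x_ls`, since the weights concentrate on the largest errors.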

CSGPC, like SGPC, has an attendant stability result which can be established by proving that the relevant cost forms a monotonically decreasing function of time [5]. However, for CSGPC this requires a feasibility assumption, as stated in theorem 1.

Theorem 1 [5]: Let a linear system with transfer function g(z) = z^{-1}b(z)/a(z) be subject to input/output constraints which, at time t and for horizons n_y and n_c, are given as ||Mc→ - v(t)||_∞ ≤ 1, and let F_{n_c}[v(t)] denote the implied feasible region for c→. Then, if F_{n_c}[v(t)] is nonempty for all t, CSGPC will cause y to follow asymptotically any setpoint change.

CSGPC runs into the problem of 'short-term infeasibility' when, for a finite reference horizon n_c, ||Mc→ - v(t)||_∞ cannot be made less than one. This short-term infeasibility arises because of the implicit requirement that y should reach its target value in n_y steps and with only n_c 'free' command inputs. The other n_y - n_c command inputs are chosen to be c_i = r/b(1), where the reference signal, r(t), is assumed to be a step of size r. If c_i is allowed to vary from r/b(1) so that ||Mc→ - v(t)||_∞ can be made less than one, then the monotonicity proof for stability breaks down. If, however, the objective is shifted from minimising the two-norm of the predicted errors to


minimising the infinity-norm, then monotonicity can be retained. In Section 3 we define a CSGPC algorithm which has c_∞ as a degree of freedom and minimises the infinity-norm of the predicted errors; we call this l∞-CSGPC. Then, in Section 4, we modify MWLS to minimise an infinity-norm cost in the presence of constraints; the resulting algorithm is referred to as l∞-MWLS. In Section 5 we give the stability properties of l∞-CSGPC, and finally, in Section 6, we present the mixed objective CSGPC algorithm, which mixes the two-norm and infinity-norm objectives and thus retains the superior performance of CSGPC whenever possible and yet needs no short-term feasibility assumptions.

3 The l∞-CSGPC overall strategy

In l∞-CSGPC we make two changes to CSGPC. First, to overcome the problem of short-term infeasibility, c_∞ is allowed to vary. Then, to ensure that monotonicity of the cost is retained, the objective is changed to the minimisation of the infinity-norm of the predicted errors. To make c_∞ a degree of freedom, eqn. 4 is modified to give

y→ = Γ*_b c*→ + y_F,  Δu→ = Γ*_A c*→ + Δu_F
y_F = H_b c←,  Δu_F = H_A c←   (15)

As mentioned at the end of Section 2.1, the above can be expressed in closed-loop form involving past values of the output and control increment rather than past values of c. We also modify constraint eqn. 12, which is then expressed as

||M*c*→ - v*(t)||_∞ ≤ 1,  M* = [M  m],  v*(t) = v(t) + mc_∞   (16)

and ε, the vector of future errors to be minimised, is represented as

ε_{n_1,n_y} = N^T(r→ - y→) = Hc*→ - f
H = -N^T Γ*_b,  f = -N^T(r→ - y_F)   (17)

where N is a matrix containing the n_1-th to n_y-th standard basis vectors in R^{n_y}. The choice of n_1, the initial output horizon, determines the first predicted error to be included in the vector of errors to be minimised. Normally, n_1 can be set to one, and all future errors up to n_y, the output horizon, will be included in ε. When b(z) exhibits nonminimum-phase characteristics, though, n_1 must be chosen greater than one so that some of the initial (transient) errors are ignored; the quantitative treatment of this problem is presented in Section 5.

The following algorithm is used to implement l∞-CSGPC.

Algorithm 1: At each time instant t:
Step 1: Calculate the vector of future command inputs which minimises the infinity-norm of the predicted errors without violating the constraints, namely min_{c*} J_∞ = {||ε_{n_1,n_y}||_∞ s.t. ||M*c*→ - v*(t)||_∞ ≤ 1}.
Step 2: Calculate and implement the first control increment using the closed-loop form of eqn. 15b.
Step 3: Increment t by one and return to Step 1.

For convenience, the * superscript will be suppressed in Sections 4 and 5, as this is the form (with c_∞ a degree of freedom) assumed throughout the development of l∞-CSGPC.



4 The l∞-MWLS algorithm and its properties

The constrained minimisation of an l∞-norm for the purpose of predictive control has been implemented in the literature with linear programming (LP) theory. Here, as an alternative, and to demonstrate the versatility of MWLS, we recast MWLS to achieve an infinity-norm objective rather than its original two-norm objective. In this Section we derive the properties of l∞-MWLS.
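For comparison with the LP route mentioned above, the constrained infinity-norm minimisation can be posed as a standard epigraph LP: minimise a scalar bound t subject to |Hc - f| ≤ t and |Mc - v| ≤ 1 elementwise. The formulation below is the generic textbook one, sketched with hypothetical placeholder matrices; it is not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def linf_lp(H, f, M, v):
    # Epigraph LP for  min_c ||Hc - f||_inf  s.t.  ||Mc - v||_inf <= 1.
    # Decision vector is [c; t]; t bounds every error component.
    m_e, n = H.shape
    m_c = M.shape[0]
    A_ub = np.block([
        [H,  -np.ones((m_e, 1))],       #  Hc - t <= f
        [-H, -np.ones((m_e, 1))],       # -Hc - t <= -f
        [M,   np.zeros((m_c, 1))],      #  Mc <= v + 1
        [-M,  np.zeros((m_c, 1))],      # -Mc <= 1 - v
    ])
    b_ub = np.concatenate([f, -f, v + 1, 1 - v])
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                      # minimise t only
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]

# Tiny hypothetical instance: three errors, box constraints |c_i| <= 1
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = np.array([1.0, 2.0, 0.0])
M = np.eye(2)
v = np.zeros(2)
c_opt, t_opt = linf_lp(H, f, M, v)      # here the optimum is t = 1
```

An exact LP solve like this is the benchmark against which the iterative l∞-MWLS scheme of this Section trades accuracy for speed.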

Let the l∞-MWLS cost be given by eqn. 18.

To gain an intuitive feel for the algorithm, it is useful to sum the weight updates of eqn. 19, giving eqns. 20a, b,

where the summation of the numerator of each equation is equal to, and therefore cancels, its respective one-norm in the denominator. The partition of the cost which contains ε is associated with performance, while the partition containing e is associated with constraint violations. At each iteration, the total weight W placed on the constraint partition is normalised to one, and the total weight w placed on the performance partition is adjusted by a measure of the size of the constraint violations. When constraints are violated, ||e||_∞ is greater than one, and thus less weight is placed on performance, forcing the algorithm to emphasise the constraint violations; but when all constraints are met, the weights on performance are increased, shifting the emphasis away from constraints. Within each partition, though, the largest elements are always emphasised. The following definitions will be used in the derivation of the properties of l∞-MWLS:

e* I lime") (if it exists)

eo k - m

unconstrained I, optimum: min Il&ll,

e* constrained I, optimum: min llzl/, s.t. Jlell, < 1

(21) c

Lemma 1: The l∞-MWLS feasibility region, F_{n_c}, is always nonempty in the disturbance-free case (F_{n_c} ≠ ∅).

Proof: l∞-MWLS has c_∞ as a degree of freedom; therefore, at start-up, doing nothing is always a feasible solution, and thereafter it can be argued recursively that the control trajectory calculated at the previous time instant will remain feasible in a disturbance-free environment.

Theorem 2: l∞-MWLS cannot converge to a point outside the feasibility region (c^× ∈ F_{n_c}).

Proof: Assume that l∞-MWLS converges to c^× ∉ F_{n_c}; then |e_i| > 1 for i ∈ I ⊆ I_m = {1, 2, ..., m}, where m is the total number of constraints. Let Ī be the complement of I with respect to I_m. Then, in the limit, W_j^{(∞)} = 0 for j ∈ Ī, because the W_i^{(k)} for i ∈ I are increasing (relative to the others) with k, but, by eqn. 20b, they all sum to one. Hence, because Σ_i W_i^{(k)} = 1,

lim_{k→∞} Σ_{i∈I} W_i^{(k)} = 1   (22)

This, with eqn. 20a, shows that the sum of the performance weights will decrease with k, and in the limit Σ_i w_i^{(∞)} = 0. Therefore, the l∞-MWLS cost will become

lim_{k→∞} J_MWLS = ||[W^{(i+1)}]^{1/2} e^{(i+1)}||₂²,  e^{(i+1)} = Mc^{(i+1)} - v(t)   (23)

whose solution is feasible since, by lemma 1, F_{n_c} ≠ ∅. This contradicts the assumption that c^× ∉ F_{n_c}, and thus completes the proof.

Corollary 1: If c^0 ∈ F_{n_c}, then c^0 = c^* and l∞-MWLS can only converge to c^* (c^× = c^*).

Proof: c^0 minimises ||ε||_∞, and since c^0 ∈ F_{n_c}, the corresponding e is such that ||e||_∞ ≤ 1; therefore c^0 = c^*. Assume that l∞-MWLS converges to c^× ≠ c^*; by theorem 2, c^× ∈ F_{n_c}, and so |e_i| = 1 for i ∈ I and |e_i| < 1 for i ∈ Ī, where, as before, I is a subset of I_m and Ī is its complement. Then, if I ≠ ∅, W_j^{(∞)} = 0 for j ∈ Ī, and

lim_{k→∞} Σ_{i∈I} W_i^{(k)} = 1   (24)

Therefore, by eqn. 20a, the sum of the performance weights will be constant, Σ_i w_i^{(k+1)} = Σ_i w_i^{(k)} = γ. The individual weights will then be updated as in eqn. 25, and so, by Lawson's algorithm, c^× = c^*, which contradicts our earlier assumption that c^× ≠ c^*.

If I = ∅, then ||W^{(k)}e^{(k)}||₁ < 1, and the sum of the performance weights will increase with k. This has the effect of shifting all of the emphasis to performance, as none of the constraints is active, and hence we once again contradict our earlier assumption that c^× ≠ c^*. Note that the growth of the performance weights does not constitute a numerical problem, because the algorithm can be stopped after c^{(k)} converges and before the performance weights become too large. Alternatively, the sum of the performance and constraint weights can be normalised to one after the update of eqn. 19; then, if I = ∅, all the constraint weights would tend to zero.

Theorem 3: l∞-MWLS can only converge to c^* (c^× = c^*).

Proof: By corollary 1, this is true for c^0 ∈ F_{n_c}, so here let c^0 ∉ F_{n_c}. Assume that l∞-MWLS converges to c^× ≠ c^*. By theorem 2, c^× ∈ F_{n_c}, and so the polyhedroid defined by ||ε||_∞ = constant which passes through c^× must either intersect F_{n_c} or be tangential to it (for example, see Fig. 2). Of these, the former cannot be true, because any point inside the intersection of the polyhedroid and F_{n_c} will result in a smaller cost for l∞-MWLS. Hence, l∞-MWLS will not converge to c^× unless the polyhedroid through c^× is tangential to F_{n_c}. This can only happen at c^× = c^*. If the tangency is a point, then the solution is unique; however, if the tangency is a common edge (line, plane or hyperplane), then the solution is nonunique, because any point along the common-edge tangency will produce the same optimum cost. This is discussed further in remark 1.



Remark 1: In some cases, the solution to min_c J_∞ = {||ε||_∞ s.t. ||e||_∞ ≤ 1} is not unique. When this occurs in l∞-MWLS, there will be fewer than n_c active (nonzero) weights which are associated with linearly independent rows of H or M. This will cause the matrix [H^T w^{1/2}, M^T W^{1/2}]^T to be rank-deficient, and means that not all degrees of freedom are used to minimise the l∞-MWLS cost. This nonuniqueness can be visualised by considering the tangency mentioned in the proof of theorem 3, which, in this case, will be a common edge (for example, see Fig. 3).

Fig. 2  Contradictory solution

Fig. 3 Nonunique solution

In practice, it is sufficient to stop the algorithm prior to the rank deficiency occurring, but an alternative solution would be to allow l∞-MWLS to run only until the active set is identified and then constrain c→ to lie in it. The remaining degrees of freedom could then be used to minimise the next largest error.

Remark 2: Unlike LP, l∞-MWLS does not need a feasible starting point. Additionally, for many problems an exact solution is not necessary; in this case, l∞-MWLS can provide a close solution very quickly. A reasonable termination criterion for l∞-MWLS is whether or not the cost changes are smaller than a small threshold value, ε. By choosing ε to be relatively large, say 0.01, l∞-MWLS will terminate in just a few iterations, thus providing a very quick estimate of the optimum solution. In applications where computation time during the sampling interval is at a premium, using l∞-MWLS with a nonexact termination criterion may provide a much better option than waiting for LP to find a feasible starting point (a separate LP) and then converging to the exact solution. This is illustrated in the first example.

5 The stability of l∞-CSGPC

Once the optimum vector of future reference values, c*→, has been calculated, the procedure for computing and implementing the first optimum control increment is exactly the same as that used in CSGPC [5]. The arguments for the stability of l∞-CSGPC are also similar, as presented here.

Lemma 2: Let a linear system with transfer function g(z) = z^{-1}b(z)/a(z) be subject to input constraints which, at time t and for reference horizon n_c and output horizon n_y = n + n_c, are given as ||Mc→ - v(t)||_∞ ≤ 1. Furthermore, let the l∞-CSGPC objective be min_c J_∞ = {||ε_{n_1,n_y}||_∞ s.t. ||e||_∞ ≤ 1}; then, for 1 ≤ n_1 ≤ n_y, J_∞ is nonincreasing for all t and l∞-CSGPC is BIBO stable.

Proof: First, we note that ||ε_{n_1,n_y}||_∞ = ||ε_{n_1,∞}||_∞, because, by the property of FIRs, the output will settle after n_y steps. Now, at each new time instant, t + 1, the control trajectory implied by the previous optimum is feasible and can be used to give a cost, J̄_{t+1}, which will be less than or equal to the optimum cost J*_t at t. Then the optimum at t + 1, J*_{t+1}, will be less than or equal to J̄_{t+1}, and hence less than or equal to J*_t.

Lemma 3: For n_1 = n_y, l∞-CSGPC is stable and gives asymptotic tracking.

Proof: This is equivalent to that part of the SCGPC algo- rithm of [6] which is used when CSGPC encounters short-term infeasibility.

Lemma 4: If l∞-CSGPC is at rest and y_ss ≠ r, then there exists an n_c such that ||ε_{n_1,n_y}||_∞ can be reduced without violating the constraints.

Proof: The elements of this steady-state error vector will be ε_1 = ε_2 = ... = ε_{n_y} = ε_ss = y_ss - r ≠ 0. Furthermore, ||Mc_ss - v||_∞ must be strictly less than one, because the rates are inactive and the problem is long-term feasible. Therefore, there exists a positive number ε such that ||M(c_ss + δc) - v||_∞ ≤ 1 for all δc which satisfy ||δc||_∞ ≤ ε. Now, we need to show that there exists a δc which satisfies

||H(c_ss + δc) - f||_∞ = ||Hδc + (Hc_ss - f)||_∞ = ||Hδc + ε_ss[1 ... 1]^T||_∞ < ||Hc_ss - f||_∞ = |ε_ss|  s.t. ||δc||_∞ ≤ ε   (26)

This can easily be done by choosing n_c ≥ n such that Hδc - ε_ss[1 ... 1]^T = 0 has a solution δc_p and setting δc = -ε δc_p/||δc_p||_∞, so that

||Hδc + ε_ss[1 ... 1]^T||_∞ = (1 - ε/||δc_p||_∞)|ε_ss| < |ε_ss| and ||δc||_∞ = ε   (27)

Theorem 4: There exists an n_1 such that

min_x ||Hx - [1 ... 1]^T||_∞ < 1   (28)

Furthermore, if a system is at rest and y_ss ≠ r, then l∞-CSGPC will remain at rest at the wrong steady-state value if and only if n_1 is chosen such that condition 28 is not satisfied.

Proof: That such an n1 exists is proved by choosing n, > n, which makes condition 28 equal to zero. Now, llHx - [ 1 . . . 11 , < 1 is a normalisation of condition 26, so

if n, is chosen such that condition 28 is not satisfied, no control trajectory will exist which produces a lower cost than remaining at rest, but if condition 28 is satisfied, then such a trajectory does exist and can be made small

IEE Proc.-Control Theory Appl., Vol. 142, No. 4, July 1995


enough to keep ||M(c + δc) - v||∞ ≤ 1. The arguments for this are similar to those given in the proof of lemma 4. l∞-CSGPC will choose the optimum cost, so in the first instance y will remain at rest at the wrong steady-state value, and in the second it will move towards the set-point.

Remark 3: Note, in lemma 3 (n1 = ny), that only the last error, ε_ny, is minimised. To maintain as much of the transient behaviour in the objective as possible, however, we want n1 as small as possible. Condition 28 can be used to determine the smallest valid n1: starting with n1 = 1, condition 28 can be tested (offline, with a linear program or Lawson's algorithm) and n1 incremented until the condition is satisfied.
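As a concrete sketch of this search, the min-max problem in condition 28 can be posed as a linear program: minimise t subject to -t ≤ (Hx - 1)_i ≤ t. The code below is illustrative only: the names `min_inf_norm`, `smallest_n1` and `H_full` are ours, and the convention that increasing n1 drops the first n1 - 1 rows of the prediction matrix is an assumption about how H depends on n1, not the paper's data structure.

```python
import numpy as np
from scipy.optimize import linprog

def min_inf_norm(H, f):
    """Solve min_x ||H x - f||_inf as an LP in (x, t):
       minimise t  s.t.  -t <= (H x - f)_i <= t."""
    m, n = H.shape
    c = np.r_[np.zeros(n), 1.0]                      # objective: minimise t
    A = np.block([[H, -np.ones((m, 1))],
                  [-H, -np.ones((m, 1))]])
    b = np.r_[f, -f]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.fun

def smallest_n1(H_full, tol=1.0):
    """Hypothetical offline search for the smallest n1 satisfying
       condition 28: drop the first n1 - 1 predicted errors until the
       min-max residual falls below 1."""
    ny = H_full.shape[0]
    for n1 in range(1, ny + 1):
        H = H_full[n1 - 1:, :]
        _, cost = min_inf_norm(H, np.ones(H.shape[0]))
        if cost < tol:
            return n1, cost
    return None, None
```

The same LP building block also serves as the feasibility test of step 1 in the algorithms below.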

Remark 4: Theorem 4 states that if n1 is chosen such that condition 28 is not satisfied, then l∞-CSGPC will not track a reference; this would normally be associated with nonminimum-phase attributes. Because of the SGPC stabilising loop and by eqn. 4a, this behaviour is determined solely by the system's numerator polynomial, b(z). If b(z) is such that the initial tendency of the output is opposite to the desired direction, then the first error(s) will always be greater than one. Thus, since l∞-CSGPC minimises the maximum error, these initial errors must be excluded, or the 'optimum' cost will be that which requires the system to stay at steady state.

As stated in remark 2, the solution to min_c J = {||ε||∞ s.t. ||Mc - v(t)||∞ ≤ 1} can be nonunique; therefore, the possibility of an undamped oscillating cost exists. To account for this, theorem 5 is stated under an assumption of uniqueness; the assumption is then removed in the corollary by modifying the cost objective to emphasise later errors more than earlier ones.

Theorem 5: Let n1 be chosen to satisfy condition 28 and let the control trajectory which optimises the cost of lemma 2 be unique; then l∞-CSGPC is stable and gives asymptotic tracking.

Proof: If y_ss = r, then we are tracking the reference, so let y_ss ≠ r. Now, by lemma 2, J*_{t+i} ≤ J*_t; if J*_{t+i} has not stabilised at a constant nonzero value, then the cost is decreasing as desired, so assume it has stabilised (i.e. J*_{t+i} = J*_t for all i). Uniqueness then implies that the control and output trajectories are frozen, and we have steady state after i = nc + nb steps. But, by lemma 4, this cost can always be reduced, which contradicts the assumption of a stabilised nonzero cost. Therefore, the cost will be reduced to zero and l∞-CSGPC will asymptotically track the reference.

Corollary: Let n1 be chosen to satisfy condition 28 and let the l∞-CSGPC objective be min_c J = {||Sε_{n1, ny}||∞ s.t. ||Mc - v(t)||∞ ≤ 1}, where S_jj > S_ii for j > i and S_ij = 0 for j ≠ i. Then l∞-CSGPC is stable and gives asymptotic tracking.

Proof: If we use the same control trajectory, then at t + 1 each ε_i is multiplied by an 'earlier' (i.e. smaller) S_ii, so J_{t+1} < J*_t. This will always be true unless the biggest ε_i at t is the last one (in such a case, the biggest at the next instant will also be equal to S_{ny,ny}ε_ny), but then the extra degree of freedom which becomes available at the next time instant will be used to decrease J_{t+1}. This can always be done without causing any of the 'earlier' S_ii ε_i to become dominant, as they are all strictly less than S_{ny,ny}ε_ny. Therefore, J_t is a monotonically decreasing function of time.
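The mechanism of this proof can be illustrated numerically: reusing the same FIR error trajectory one step later pairs each surviving error with a smaller weight, so the weighted infinity-norm cost cannot increase. The geometric weighting below is an illustrative choice of S, not the paper's.

```python
import numpy as np

def weighted_inf_cost(eps, growth=1.2):
    """||S eps||_inf with an illustrative strictly increasing diagonal
       weighting S (S_jj > S_ii for j > i)."""
    S = growth ** np.arange(1, len(eps) + 1)     # diagonal of S
    return np.max(np.abs(S * eps))

def shift(eps):
    # One time step later, the same FIR trajectory: each error moves one
    # slot earlier; the settled final error is repeated.
    return np.r_[eps[1:], eps[-1]]

eps_t = np.array([0.5, 0.4, 0.3, 0.2])           # hypothetical error trajectory
J_t = weighted_inf_cost(eps_t)
J_t1 = weighted_inf_cost(shift(eps_t))
# J_t1 < J_t: each error is now scaled by an 'earlier' (smaller) weight.
```

When the largest error is the final one, the two costs tie, which is exactly the corner case the extra degree of freedom resolves in the proof.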


Remark 5: S can also be used to increase the speed of convergence by increasing the emphasis on steady-state errors; in the limit one obtains that part of the CSGPC algorithm of [6] which is used when CSGPC encounters short-term infeasibility.

l∞-CSGPC gives stability by minimising the maximum predicted error, as do the approaches of Zheng and Morari [7] and Allwright [8]; but, by first applying the SGPC stabilising loop, l∞-CSGPC can also handle open-loop unstable systems. Unfortunately, min-max controllers often give undesirable performance, because the maximum error is often the first (transient) one. This causes the system to be driven very hard and can lead to an underdamped, oscillatory response. CSGPC, on the other hand, minimises the two-norm of the predicted errors and therefore offers excellent performance, but cannot handle short-term infeasibility. Thus, in the following Section, we propose an algorithm which mixes these two objectives.

6 Mixed objective CSGPC

Mixed objective CSGPC offers a compromise between CSGPC and l∞-CSGPC. When short-term feasible, the preferred two-norm objective of CSGPC is used; when short-term infeasible, the objective is changed to the infinity-norm objective of l∞-CSGPC until short-term feasibility is regained.

Consider the following algorithm, which we shall term mixed objective CSGPC.

Algorithm 2: At each time instant t:
Step 1: Test for short-term feasibility, namely, that min_c ||Mc - v(t)||∞ ≤ 1; this can be done with either Lawson's algorithm or linear programming. If short-term feasible, proceed to step 2; otherwise, go to step 3.

Step 2: Apply CSGPC as described in Section 2.2, namely, min_c J = {||ε||₂² s.t. ||Mc - v(t)||∞ ≤ 1, c_ss = r/b(1)}; increment t by one and return to step 1.

Step 3: Apply l∞-CSGPC as described in algorithm 1, namely, min_{c*} J = {||ε_{n1, ny}||∞ s.t. ||M*c* - v*(t)||∞ ≤ 1}; increment t by one and return to step 1.
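The decision logic of algorithm 2 can be sketched as follows. This is a schematic under stated assumptions, not the paper's implementation: `H` and `f` are assumed to map the control degrees of freedom c to predicted errors (ε = Hc - f), `M` and `v` collect the absolute and rate constraints as ||Mc - v||∞ ≤ 1, both the feasibility test and the infinity-norm step are posed as LPs, and the two-norm step uses a generic constrained solver in place of the CSGPC quadratic program.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def short_term_feasible(M, v):
    """Step 1: is min_c ||M c - v||_inf <= 1?  Posed as an LP in (c, t)."""
    m, n = M.shape
    obj = np.r_[np.zeros(n), 1.0]                # minimise the bound t
    A = np.block([[M, -np.ones((m, 1))],
                  [-M, -np.ones((m, 1))]])
    b = np.r_[v, -v]
    res = linprog(obj, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.fun <= 1.0

def linf_step(H, f, M, v):
    """Step 3: min_c ||H c - f||_inf  s.t.  ||M c - v||_inf <= 1, as an LP."""
    (mH, n), mM = H.shape, M.shape[0]
    obj = np.r_[np.zeros(n), 1.0]
    A = np.block([[H, -np.ones((mH, 1))],
                  [-H, -np.ones((mH, 1))],
                  [np.c_[M, np.zeros((mM, 1))]],
                  [np.c_[-M, np.zeros((mM, 1))]]])
    b = np.r_[f, -f, 1.0 + v, 1.0 - v]
    res = linprog(obj, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n]

def mixed_objective_step(H, f, M, v):
    if short_term_feasible(M, v):
        # Step 2: the preferred two-norm objective (CSGPC).
        cons = {'type': 'ineq', 'fun': lambda c: 1.0 - np.abs(M @ c - v)}
        res = minimize(lambda c: np.sum((H @ c - f) ** 2),
                       np.zeros(H.shape[1]), constraints=[cons])
        return res.x
    # Step 3: fall back to the infinity-norm objective (l_inf-CSGPC).
    return linf_step(H, f, M, v)
```

In a receding-horizon loop, the first move of the returned trajectory would be applied and the test repeated at the next instant, as in steps 1-3.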

Theorem 6: Under the assumption that set-point changes do not cause the control problem to become infeasible, mixed objective CSGPC has guaranteed stability and will cause the output y to reach its target value asymptotically.

Proof: If the problem is short-term feasible for all t, the modified algorithm will operate as the original CSGPC algorithm of Section 2.2, and so by theorem 1 we have stability and asymptotic tracking. If, however, due to a set-point change, CSGPC is short-term infeasible, l∞-CSGPC will be applied. Now, by lemma 1, it is known that this optimisation problem will always be feasible, and furthermore, by theorem 4, the l∞-CSGPC cost is a monotonically decreasing function of time. Hence the output will be driven towards its target value, r. At some time instant before the output reaches r, the algorithm will regain short-term feasibility and revert to the original CSGPC algorithm. Namely, by the results of Section 5, we know that l∞-CSGPC will always recover short-term feasibility, and hence, by theorem 1, mixed objective CSGPC will be stable and will cause the output to track its target value asymptotically.


7 Illustrative examples

In this Section we give some numerical examples to illustrate the results of this paper. We begin with example 1, which concerns the feasibility region and convergence properties of l∞-MWLS as given in Section 4.

Example 1: Let the system with transfer function g(z) = z^-1 b(z)/a(z), such that

a(z) = 1 - 2.2z^-1 + 0.09z^-2 + 0.252z^-3
b(z) = 2 + 0.45z^-1 + z^-2    (29)

be subject to input absolute and rate constraint eqn. 11, for which u₀ = 0, U = 1 and R = 0.1. Apply l∞-CSGPC using algorithm 1. Figs. 4-7 show the feasibility region

Fig. 4 Feasibility region (t = 6)

Fig. 5 Feasibility region (t = 9)

Fig. 6 Feasibility region (t = 11)

(solid lines) and contours of equal ||ε||∞ (broken lines) at four time instants, with nc = 1 so that c* has two elements; c⁰ is indicated by 'o', and c^∞ by '*'.

Fig. 7 Feasibility region (t = 14)

In all instances it is obvious that l∞-MWLS has converged to the optimum, as in theorem 2 (c^∞ = c*). At t = 6, the progression of ε, w, e and W through 50 iterations is shown in Fig. 8, and their final values after 100 iterations are

Fig. 8 l∞-MWLS through 50 iterations (t = 6)

ε = [-0.8653 -0.7385 -0.6495 -0.6013]
w = [8.5776 0 0 0]
e = [0.6737 -1 -1 0.2132 0.1215 0.0674 -0.0326 -0.1326 -0.1113 -0.0992]
W = [0 0.6875 0.3125 0 0 0 0 0 0 0]

Note that the nonzero weights correspond to the largest ε's and to the active constraints, as stated in remark 1. In this case, one performance weight and two constraint weights are active; this is evident in Fig. 4.

At t = 9 (Fig. 5), the final errors and weights are

ε = [-0.1620 0.0312 0.1234 0.1620]
w = [98.7436 0 0 17.8883]
e = [-0.8243 -1 -0.5758 0.2234 0.0974 -0.1635 -0.2635 -0.3211 -0.2987 -0.2890]
W = [0 1 0 0 0 0 0 0 0 0]

This time, two performance weights and one constraint weight are active.

At t = 11 (Fig. 6), only the first performance weight and the first constraint weight are active, as indicated below:

ε = [0.0412 0.0269 -0.0224 -0.0398]
w = [121.31 0 0 0.0006]
e = [-1 0.9649 0.4421 -0.1203 -0.0437 -0.3631 -0.2666 -0.2224 -0.2344 -0.2388]
W = [1 0 0 0 0 0 0 0 0 0]

The corresponding rows of H and M are [1 0] and, therefore, linearly dependent; this is a case of a nonunique solution.

Finally, at t = 14 (Fig. 7), the unconstrained optimum is feasible and, as stated in corollary 1, c⁰ = c^∞ = c*. The final errors and weights are

ε = [-0.0040 0.0040 0.0040 -0.0040]
w = 1.0 × 10²⁴ × [0.6080 0.0715 1.1839 0.8822]
e = [-0.5883 -0.2036 0.2380 0.0019 -0.0204 -0.2493 -0.2696 -0.2458 -0.2457 -0.2477]
W = [1 0 0 0 0 0 0 0 0 0]

Note that no constraints are active, and all degrees of freedom are used to minimise the performance cost. The performance weights have grown large, but this did not adversely affect the algorithm and could have been averted by appropriate normalisation.

To illustrate the time savings possible with l∞-MWLS, the minimisation in step 1 of algorithm 1 was run with l∞-MWLS (using the termination criterion mentioned in remark 2, i.e. setting ε = 0.01) and with LP. The former never needed more than 11 iterations, taking a maximum of 0.11 s; LP required a maximum of 0.33 s. The use of a relatively large value for the threshold ε results in a slight degradation in performance (the MWLS cost is now about 1% larger than the LP cost), but this is a small price to pay for the threefold reduction in computation time.
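For reference, a minimal unconstrained sketch of Lawson's algorithm [9] is given below: an iteratively reweighted least-squares scheme whose weights concentrate on the largest residuals, converging to the infinity-norm minimiser. The constrained l∞-MWLS used above additionally carries constraint weights (the W's); this sketch covers only the performance part and is an illustration, not the paper's implementation.

```python
import numpy as np

def lawson(H, f, iters=100):
    """Unconstrained Lawson iteration for min_x ||H x - f||_inf:
       solve a weighted least-squares problem, then reweight each row
       in proportion to its current residual magnitude."""
    m = H.shape[0]
    w = np.full(m, 1.0 / m)                      # initial uniform weights
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        # weighted least-squares step
        x, *_ = np.linalg.lstsq(sw[:, None] * H, sw * f, rcond=None)
        r = np.abs(H @ x - f)
        w = w * r                                # Lawson's weight update
        s = w.sum()
        if s == 0:                               # exact fit reached
            break
        w /= s
    return x, np.max(np.abs(H @ x - f))
```

At convergence the nonzero weights sit on the rows attaining the maximum residual, which matches the pattern seen in the printed w and W vectors above.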

In the next example, we illustrate the instability of constrained predictive control algorithms when they encounter short-term infeasibility with an open-loop unstable system. The algorithm we use for this is CSGPC, but the property is true for all. We then show how l∞-CSGPC, using algorithm 1, and mixed objective CSGPC, using algorithm 2, maintain stability. This example also serves to illustrate the sometimes oscillatory response of l∞-CSGPC and the great improvement achieved by mixed objective CSGPC.

Example 2: Let the system of example 1 be subject to constraint eqn. 11, for which u₀ = 0, U = 0.5 and R = 0.04. The system has an unstable pole at 2.1. Setting nc = 3, the three algorithms are applied to control the system. With a step set-point change from zero to one, CSGPC immediately encounters short-term infeasibility, which causes the system to go unstable (Fig. 9). On the other hand, both l∞-CSGPC (Fig. 10) and mixed objective CSGPC (Fig. 11) maintain stability and set-point tracking. Note, though, that l∞-CSGPC gives an oscillatory response, taking 23 time steps to settle; this is because it always uses the infinity-norm objective. Mixed objective CSGPC, on the other hand, regained short-term feasibility and reverted to the two-norm objective after only four time steps; the response is, therefore, very good.

As mentioned in Section 5, infinity-norm controllers minimise the maximum predicted error, so for systems which exhibit nonminimum-phase behaviour some of the initial errors must be ignored. We are concerned with the nonminimum-phase behaviour of the closed-loop system, and, as noted in remark 4, this is determined solely by the open-loop system's numerator polynomial, b(z). The last example illustrates this point.

Fig. 9 Example 2: CSGPC response

Fig. 10 Example 2: l∞-CSGPC response

Fig. 11 Example 2: mixed objective CSGPC response

Example 3: Let the system with transfer function g(z) = z^-1 b(z)/a(z), such that

a(z) = 1 - 3.8z^-1 + 3.87z^-2 - 0.27z^-3 - 0.54z^-4
b(z) = 1 - 1.13z^-1 - 5.003z^-2 + 6.6378z^-3    (30)

293

Page 9: Mixed objective constrained stable generalised predictive control

be subject to input absolute and rate constraint eqn. 11, for which u₀ = 0, U = 0.1 and R = 0.05. This system is particularly difficult to control, as it has a near pole-zero cancellation outside the unit circle: a(z) has a root at 1.5, and b(z) has roots at 1.55 and -3.0. For nc = 3 and n1 = 1, the H-matrix is

H = [ 1      0      0      0
      0.95   1      0      0
      …      2.325  -5.375 1.95
      2.325  -3.425  -1.1  … ]

Now, for n1 ≤ 2, min_x ||Hx - [1 ... 1]^T||∞ = 1, and in fact the minimising vector is x = [0 ... 0]^T; clearly, condition 28 is violated, and l∞-CSGPC never moves from an initial rest condition. But for n1 = 3, min_x ||Hx - [1 ... 1]^T||∞ = 0.0546, and the minimising x is -[0.4717, 0.7037, 0.8124, 0.8594]^T; condition 28 is satisfied, and l∞-CSGPC tracks the reference as desired (Fig. 12). For completeness, Fig. 13 shows the response of

Fig. 12 Example 3: l∞-CSGPC response

mixed objective CSGPC; in this case, the improvement is only minor.

These examples demonstrate the effectiveness of l∞-MWLS in optimising an infinity-norm cost, and the ability of l∞-CSGPC and mixed objective CSGPC to handle open-loop unstable and nonminimum-phase systems. Mixed objective CSGPC produces superior performance by retaining the two-norm objective whenever possible, falling back to an infinity-norm objective when short-term infeasibility is encountered.

Fig. 13 Example 3: mixed objective CSGPC response

8 References

1 CLARKE, D.W., MOHTADI, C., and TUFFS, P.S.: 'Generalized predictive control, Parts 1 and 2', Automatica, 1987, 23, pp. 137-160

2 CLARKE, D.W., and SCATTOLINI, R.: 'Constrained receding horizon predictive control', IEE Proc. D, 1991, 138, (4), pp. 347-354

3 MOSCA, E., and ZHANG, J.: 'Stable redesign of predictive control', Automatica, 1992, 28, (6), pp. 1229-1233

4 KOUVARITAKIS, B., ROSSITER, J.A., and CHANG, A.O.T.: 'Stable generalized predictive control', IEE Proc. D, 1992, 139, (4), pp. 349-362

5 ROSSITER, J.A., and KOUVARITAKIS, B.: 'Constrained stable generalized predictive control', IEE Proc. D, 1993, 140, (4), pp. 243-254

6 ROSSITER, J.A., KOUVARITAKIS, B., and GOSSNER, J.R.: 'Feasibility and stability results for constrained stable generalized predictive control'. Tech. report OUEL 2006/93, Department of Engineering Science, Oxford University, 1993; presented at the 3rd IEEE Conf. on Control Applications, Glasgow, 24-26 August 1994

7 ZHENG, Z.Q., and MORARI, M.: 'Robust stability of constrained model predictive control'. Proceedings of the American Control Conference, San Francisco, CA, June 1993, pp. 379-383

8 ALLWRIGHT, J.C.: 'On min-max model-based predictive control'. Proceedings of conference on Advances in model-based predictive control, Department of Engineering Science, Oxford University, 20-21 September 1993, Vol. 1, pp. 246-255

9 LAWSON, C.L., and HANSON, R.J.: 'Solving least squares problems' (Prentice-Hall, 1974)
