Page 1: [IEEE 2012 13th International Carpathian Control Conference (ICCC) - High Tatras, Slovakia (2012.05.28-2012.05.31)] Proceedings of the 13th International Carpathian Control Conference

Optimal Process Control

Miroslav Fikar
Slovak University of Technology in Bratislava
Bratislava, Slovakia
http://kirp.chtf.stuba.sk/~fikar

Karol Kostúr
Technical University in Košice
Košice, Slovakia
e-mail: [email protected]

Abstract—The purpose of this paper is to review basic concepts of optimal process control. We define the formulation and basic elements of optimal process control, which are models of physical processes, constraints, objective function, functional, history of optimization, and practical approaches to optimal control implementation. The classification comprises basic theory, methods, and techniques. The theory will be complemented by concrete examples from practice. Finally, the structures for optimal process control (feedforward and feedback optimal control) will be given in the paper.

Index Terms—Concepts of optimal process control, optimal control of well-defined processes, dynamic optimization of processes, structures of optimal process control, applications

I. INTRODUCTION TO OPTIMIZATION PROBLEMS

Process optimization is the discipline of adjusting a process so as to optimize some specified set of parameters with or without constraints. The most common goals are cost minimization, throughput maximization, and/or efficiency. This is one of the major quantitative tools in industrial decision making.

When optimizing a process, the goal is to maximize one or more of the process specifications, while keeping all others within their constraints.

Fundamentally, there are three components or procedures that can be adjusted to affect optimal performance. They are:

• Technological equipment optimization
• Technological procedures
• Process control.

There are hundreds or even thousands of control loops in process units. Each control loop is responsible for controlling one part of the process, such as maintaining a pressure, a volume flow, a temperature, or a level.

If the control system and the required values are not properly designed, the process runs under non-optimal conditions. Therefore, the solution of optimization tasks is needed in process control.

A. Formulation of the Optimization Problem

Main elements of the optimization problem are:
• The choice of optimal criteria.
• The mathematical model.
• Constraints on state and control variables.
• The choice of a suitable method for solution of the optimization problem.

1) Optimal criterion: The optimal criterion expresses the aim of optimization. The objective function is the mathematical expression of the process operation to be optimized. The function f is called an objective function, cost function (minimization), utility function (maximization) or, in certain fields, energy function, the functional (energy functional), or performance index. The principle of uniqueness states that one and only one objective function should be maximized or minimized. In situations where two objective functions f1 and f2 are to be optimized, they may be combined into a single objective function by means of a linear combination. The composite objective function is

fc = ψ1f1 + ψ2f2 (1)

where ψ1, ψ2 are weighting constants. If f1 is to be maximized and f2 is to be minimized, then f3 may be substituted for f2 if f3 is equal to the inverse of f2, so that the maximum of f3 will occur at the same point as the minimum of f2. Then the composite objective function to be maximized is

fc = ψ1f1 + ψ3f3 (2)

Other principles for defining an objective function are the principle of accountability, the principle of profit orientation, and the principle of optimal control of equation states.
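The weighted combination of Eqs. (1)-(2) can be sketched numerically. This is a minimal illustration with invented objective functions and weights, not data from the paper: f1 is maximized, f2 is minimized, and its reciprocal f3 = 1/f2 enters the composite objective to be maximized.

```python
# Sketch of combining two objectives (Eqs. (1)-(2)).
# f1 is to be maximized, f2 to be minimized; both functions and the
# weights psi1, psi3 are illustrative values, not from the paper.

def f1(x):          # hypothetical objective to maximize
    return 10.0 * x - x**2

def f2(x):          # hypothetical objective to minimize
    return (x - 3.0) ** 2 + 1.0

psi1, psi3 = 1.0, 2.0

def f_c(x):
    # f3 = 1/f2 turns the minimization of f2 into a maximization,
    # so the composite fc = psi1*f1 + psi3*f3 is maximized as a whole
    return psi1 * f1(x) + psi3 * (1.0 / f2(x))

# crude grid search for the maximizer of the composite objective
xs = [i / 100.0 for i in range(0, 1001)]
best = max(xs, key=f_c)
print(round(best, 2))
```

The maximizer of the composite lies between the individual optima of f1 and f3, pulled by the weights.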

The profit objective function expresses the profit contribution that can be obtained from operating the process. The profit objective function is often defined as the difference between gross revenue and variable cost

f = Σ_i c_i O_i − Σ_j c_j R_j (3)

where c_i is the price of output i, O_i is the output volume, c_j is the cost of raw material j, and R_j is the raw material volume.
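Equation (3) is a plain sum of revenues minus a sum of raw-material costs; a sketch with made-up prices and volumes (none of these numbers come from the paper):

```python
# Sketch of the profit objective (Eq. (3)): gross revenue minus
# variable raw-material cost. All prices and volumes are invented.

prices  = [12.0, 8.5]           # c_i, price per unit of output i
outputs = [100.0, 240.0]        # O_i, output volumes
costs   = [3.0, 1.2, 0.8]       # c_j, cost per unit of raw material j
raws    = [150.0, 400.0, 90.0]  # R_j, raw material volumes

revenue = sum(c * o for c, o in zip(prices, outputs))
variable_cost = sum(c * r for c, r in zip(costs, raws))
profit = revenue - variable_cost
print(profit)
```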

The cost objective function (4) expresses the costs associated with operating the process. Usually only variable costs that are controllable are included in the cost objective function. Then the objective function to be minimized is given as

f = Σ_i c_i P_i (4)

where c_i is the unit cost of producing the volume P_i in the i-th process.

A dynamic optimization problem has an objective function which is an integral of the cost-rate function over time or another changing

978-1-4577-1868-7/12/$26.00 ©2012 IEEE 153


variable. It is called a functional. Very often the functional expresses the following types of dynamic optimization problems:

• the minimum time,
• the minimum cost (the maximum profit),
• the optimal continuous operation.

The aim of a dynamic optimization problem can be expressed, e.g., by a functional which is minimized

J = ∫_{t_0}^{t_f} F(x_1, x_2, ..., x_m, u_1, u_2, ..., u_n, t) dt (5)

This is called an optimal trajectory problem because the problem is to select a path or trajectory u*(t) of the control variables between two boundary conditions which will minimize J during the period t_f − t_0, where t is time.

2) Mathematical model: A mathematical model is a mathematical representation of any studied process (physical, economical, production process). Each process is a transformation of inputs to outputs. This transformation can be, for example, described by linear or nonlinear functions for static (6) and dynamic processes (7)

xj = fj(u) (6)

in continuous time form

dx_j/dt = f_j(x, u, t) (7)

including boundary conditions, or in discrete time form (8)

x_j(t_0 + Δt) = T_j(x(t_0), u(t_0), t_0) (8)

where u is the n-dimensional vector of control variables, x is the m-dimensional vector of state variables, f_j are process model functions, and T_j is a function which represents the effect of the control variables over the past interval starting at t_0. Note that it is a recurrence relationship, i.e., the same equation holds for all time intervals t_0, t_0 + Δt, t_0 + 2Δt, ...
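The recurrence character of Eq. (8) can be sketched with a scalar example. The linear map below is a hypothetical stand-in for T; the initial state and controls are invented:

```python
# Sketch of the discrete-time process model (Eq. (8)) as a recurrence:
# the same map T is applied on every interval t0, t0+dt, ...

def T(x, u):
    # x_{k+1} = 0.9*x_k + 0.5*u_k -- an invented stable first-order process
    return 0.9 * x + 0.5 * u

x = 1.0                           # x(t0)
controls = [1.0, 1.0, 0.0, 0.0]   # u at t0, t0+dt, t0+2dt, ...
trajectory = [x]
for u in controls:
    x = T(x, u)                   # same equation for every interval
    trajectory.append(x)
print(trajectory)
```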

3) Constraints on state and control variables: Constraints are always present in the optimization of real processes. In a sense, constraints on state and control variables may be thought of as extensions of the physical process model. This is because constraints define the relationships between state and control variables or the allowed region of process operation. Also, the effect of constraints is generally to degrade the quality of optimal process operation. The different types of constraints can be divided into five groups:

• Hard constraints on the limits of control variables.
• Soft constraints on the limits of control variables.
• Constraints on state and control variables expressed by functions.
• Constraints on state and control variables.
• Initial and terminal boundary values for dynamic optimization.

Hard constraints on the limits of control variables prescribe specific upper and/or lower limits for control variables, usually by inequalities

CiL ≤ ui ≤ CiU (9)

By introducing positive slack variables, the constraints may be expressed as equalities

ui − ZiL = CiL, ui + ZiU = CiU (10)

where C_iL, C_iU are the lower/upper constraint limits for the i-th control variable and Z_iL, Z_iU are slack variables.
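The slack construction of Eq. (10) can be checked in a few lines; the bounds and the control value below are illustrative numbers:

```python
# Sketch of Eq. (10): rewriting the hard bounds C_L <= u <= C_U
# as equalities by introducing nonnegative slack variables.

C_L, C_U = 2.0, 8.0
u = 5.5                 # a feasible control value (illustrative)

Z_L = u - C_L           # from u - Z_L = C_L
Z_U = C_U - u           # from u + Z_U = C_U

# both slacks are nonnegative exactly when u satisfies the bounds
assert Z_L >= 0 and Z_U >= 0
print(Z_L, Z_U)
```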

Soft constraints on the limits of control variables do not prohibit the control variables from exceeding the constraint limit. However, the quality of process operation will deteriorate rapidly if the constraint limit is exceeded to any significant extent. Soft constraints may be implemented indirectly, i.e., by modifying the objective function to reflect the penalty imposed by significant deviation of a control variable beyond the constraint limit. A possible penalty function to be added to the objective function can be given as

P = K_i (u_i / C_iU)^{M_i} (11)

where K_i is a positive constant and M_i is a positive integer. Then the modified objective function to be minimized will be F = f + P.
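The effect of the penalty term (11) on F = f + P can be sketched numerically. The cost f, the constant K, the exponent M and the limit C_U below are all invented for illustration:

```python
# Sketch of the soft-constraint penalty (Eq. (11)): P = K*(u/C_U)^M
# added to the objective, F = f + P. All numbers are illustrative.

K, M, C_U = 1.0, 10, 4.0  # a high even power M makes the penalty
                          # negligible below C_U and steep above it

def f(u):                 # hypothetical cost to minimize
    return (u - 5.0) ** 2

def F(u):
    return f(u) + K * (u / C_U) ** M

# crude grid search: the penalty pulls the minimizer of F below the
# unconstrained minimizer u = 5 of f, back toward the limit C_U = 4
us = [i / 100.0 for i in range(0, 801)]
u_star = min(us, key=F)
print(u_star)
```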

Constraints on state and control variables have the general form (12), (13)

Sj(x,u) ≥ SjL (12)

or
Sj(x,u) ≤ SjU (13)

for j = 1, ..., r, where SjU, SjL are positive constraint limits.

Again, positive slack variables Zj can be introduced and inequalities (12), (13) are transformed into

Gj(x,u, Zj) = 0 (14)

To express constraints on state and control variables as functions of control variables only, it may be possible to substitute the process model (6) into (14) to eliminate the state variables. The general form of the constraint on state and control variables expressed only as functions of control variables is then

gj(u, Zj) = 0 (15)

When constraints on state and control variables are applied and the state variables cannot be eliminated, the objective function f to be optimized may be modified by the Lagrange multiplier technique. If λ is an r-dimensional vector of Lagrange multipliers, the modified objective function subject to the constraints (14) or (15) is

F = f + Σ_{j=1}^{r} λ_j G_j(x, u) (16)

Initial and terminal boundary values for dynamic optimization for the specified state variables are given by

x(t0) = x0, x(tf ) = xf (17)

where t_0 and t_f are the initial and final times, and x_0 and x_f are constants.



4) The choice of a suitable method for the solution of optimization problems: The choice of a suitable method to solve an optimization problem is a difficult problem because it depends on multiple factors. If the optimal criterion, the mathematical model, and the constraints are known, the choice of an optimization method will be easier. Generally, optimization problems can be divided into:

• static optimization problems,
• dynamic optimization problems.

If the variables in the objective function, models, and constraints do not depend on time, then the static optimization problems can be solved by the following static optimization methods:

• classical mathematical analysis,
• methods of linear programming,
• non-linear programming.

If the objective function and constraints are linear functions, these optimization problems are suitable for solution by linear programming. If some function or constraint is not linear, these optimization problems can be solved by methods from the group of nonlinear programming.

If some variables in the objective function (functional), models, or constraints depend on time, then these optimization problems can be solved by the following dynamic optimization methods or techniques:

• Calculus of variations
• Pontryagin's principle
• Dynamic programming.

Depending on the way optimization problems are solved, the optimization methods are divided into:

• analytical methods,
• numerical or iterative methods.

Analytical methods for solving optimization tasks with constraints and additional conditions are rarely used. In these cases, numerical methods are dominantly used, supported by a large set of special programs (software).

Managers on various levels of processes are interested in correct decisions. At the least, they should have an interest in optimal decisions. Linear and nonlinear programming are used for optimal production planning, optimal transport, or optimal distribution of products from producer to consumer. Similarly, optimal source division, the search for optimal strategies, optimal control of equipment, optimal control of communication processes, etc., are often solved as dynamic optimization problems – see Fig. 1. Optimal control implementation is the application of optimization problem solutions to the control of processes as in [1].

II. HISTORY OF OPTIMIZATION

An optimization problem can be represented in the following way.
Given: a function f : A → R from some set A to the real numbers.
Sought: an element x* in A such that f(x*) ≤ f(x) for all x in A (minimization) or such that f(x*) ≥ f(x) for all x in A (maximization).

Fig. 1. Main fields of using optimization problems depending on management or process control.

Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the set of feasible solutions. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution. By convention, the standard form of an optimization problem is stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima, where a local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

||x− x∗|| ≤ δ (18)

the expression
f(x∗) ≤ f(x) (19)

holds, that is to say, in some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.

A. Conditions of optimality for an objective function without constraints

A large number of algorithms proposed for solving non-convex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and rigorously optimal solutions and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.

1) Necessary conditions for optimality: One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero. More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at



an interior optimum is called a ’first-order condition’ or a setof first-order conditions.

Theorem 1 (Fermat). Let f : (a, b) → R be a function and suppose that

x∗ ∈ (a, b) (20)

and x* is a local extreme of f(x). If f(x) is differentiable at x* then

f ′(x∗) = 0. (21)

Similarly, if the vector x ∈ R^n and f(x) is differentiable at the local extreme x*, then the gradient at the extreme is zero

fx(x∗) = 0. (22)

2) Sufficient conditions for optimality: Let F_xx be the Hessian of a function f(x) which is twice differentiable

F_xx = {∂²f(x) / ∂x_i ∂x_j} > 0 (23)

The sufficient condition for a minimum states that this matrix must be positive definite, i.e., have eigenvalues that are positive.

While the first derivative test identifies points that might be optima, this test does not distinguish a point which is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints, called the bordered Hessian, in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called "second-order conditions". If a candidate solution satisfies the first-order conditions, then satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.

Further, critical points can be classified using the definiteness of the Hessian matrix. If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
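This classification can be sketched with NumPy on a hypothetical example, f(x, y) = x² − y², whose critical point at the origin is a saddle:

```python
# Sketch of classifying a critical point by the Hessian's eigenvalues.
import numpy as np

# f(x, y) = x**2 - y**2 has a critical point at the origin (zero gradient);
# its Hessian is constant:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eig = np.linalg.eigvalsh(H)      # eigenvalues of the symmetric Hessian
if np.all(eig > 0):
    kind = "local minimum"
elif np.all(eig < 0):
    kind = "local maximum"
else:
    kind = "saddle point"
print(kind)
```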

B. Method of Lagrange multipliers

Constrained problems can often be transformed into unconstrained problems using the method of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange; 1736 – 1813) provides a strategy for finding maxima and minima of a function subject to constraints.

For instance, consider the optimization problem to maximize the objective function f(x, u) subject to the constraint

g(x, u) = c. (24)

Let us introduce a new variable λ called a Lagrange multiplier, and study the Lagrange function defined by

L(x, u, λ) = f(x, u) + λ (g(x, u)− c) , (25)

where the λ term may be either added or subtracted. If f(x, u) is a maximum for the original constrained problem, then there exists λ such that (x, u, λ) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of L are zero). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems [2], [3], [4], [5].
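A tiny worked example of these stationarity conditions, with a problem chosen for illustration (it does not appear in the paper): maximize f(x, y) = x + y on the unit circle g(x, y) = x² + y² = 1.

```python
# Sketch of the Lagrange-multiplier conditions for a small example:
# maximize f(x,y) = x + y subject to g(x,y) = x^2 + y^2 = 1.
# Stationarity of L = f + lam*(g - 1):
#   dL/dx = 1 + 2*lam*x = 0,  dL/dy = 1 + 2*lam*y = 0  ->  x = y,
# and the constraint then gives 2*x^2 = 1.
import math

x = y = math.sqrt(0.5)        # stationary point with f maximal
lam = -1.0 / (2.0 * x)        # multiplier from 1 + 2*lam*x = 0

# check: the gradient of L vanishes and the constraint holds
assert abs(1 + 2 * lam * x) < 1e-12
assert abs(1 + 2 * lam * y) < 1e-12
assert abs(x**2 + y**2 - 1) < 1e-12
print(x + y)                  # constrained maximum value
```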

C. Linear programming

Linear programming is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming arose from mathematical models developed during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. After that, many industries found its use in their daily planning.

The founders of this subject are Leonid Kantorovich, a Russian mathematician who developed linear programming problems in 1939, George Bernard Dantzig (1914 – 2005), who published the simplex method in 1947 and extended it several times later [6], [7], [8], and John von Neumann, who developed the theory of duality in the same year.

Linear programs are problems that can be expressed in canonical form:

max_x c^T x, (26a)
s.t. Ax ≤ b, (26b)
x ≥ 0. (26c)

with the corresponding symmetric dual problem

min_y b^T y, (27a)
s.t. A^T y ≥ c, (27b)
y ≥ 0. (27c)

where x represents the vector of variables (to be determined), b, c are vectors of (known) coefficients, and A is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities Ax ≤ b are the constraints which specify a convex polytope over which the objective function is to be optimized. (In this context, two vectors are comparable when every entry in one is less than or equal to the corresponding entry in the other. Otherwise, they are incomparable.) The basic procedures for computing the optimal variables x* are given by the so-called simplex algorithm or interior point methods [9], [10], [11].
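A tiny LP in the canonical form (26) can be solved with SciPy, assuming it is available; `linprog` minimizes, so the maximization is posed as minimizing −c^T x. The problem data below are invented:

```python
# Sketch of solving a small LP in canonical form (26) with SciPy.
from scipy.optimize import linprog

c = [3.0, 2.0]            # maximize 3*x1 + 2*x2 (illustrative data)
A = [[1.0, 1.0],          # x1 + x2 <= 4
     [1.0, 0.0]]          # x1      <= 2
b = [4.0, 2.0]

# linprog minimizes, so negate c to maximize c^T x
res = linprog(c=[-ci for ci in c], A_ub=A, b_ub=b,
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal x* and maximal c^T x
```

The optimum sits at the vertex x* = (2, 2) of the polytope, with objective value 10.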



TABLE I
SURVEY OF OFTEN USED NLP METHODS

One-dimensional problems
• Passive finding of the extreme: passive search for the extreme
• Direct methods for finding the extreme: Fibonacci method; golden section (golden cut); quadratic interpolation; Newton–Raphson method

Unconstrained multidimensional problems
• Passive finding of the extreme: extended passive method
• Non-derivative direct methods: probe method; Rosenbrock method; simplex method
• First-order gradient methods: simple gradient method; relaxed gradient method; PARTAN method; modified gradient methods
• Second-order gradient methods: Newton method; Fletcher–Powell method; Zoutendijk method; Goldfarb method; Broyden method

Constrained multidimensional problems
• Equality constraints: applied gradient method; applied Newton method
• Inequality constraints: method of linear approximation; gradient method for inequalities
• Penalty function: Carroll method; Rosenbrock method
• Quadratic programming

Linear programming can be applied to various fields of study. It is used most extensively in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design [12].

D. Nonlinear programming

In mathematics, nonlinear programming (NLP) is the process of solving a system of equalities and inequalities, collectively termed constraints, over a set of unknown real variables, along with an objective function to be maximized or minimized, where some of the constraints or the objective function are nonlinear [13], [14], [15]. Many iterative methods have been used in this field. They can be divided into groups and subgroups – see Tab. I [10].

E. Multi-objective optimization

Multi-objective optimization (or multi-objective programming) [12], [15], [16], also known as multi-criteria or multi-attribute optimization, aims to optimize simultaneously two or more conflicting objectives subject to certain constraints.

Multi-objective optimization problems can be found in various fields: product and process design, finance, aircraft design, the oil and gas industry, automobile design, or wherever optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Maximizing profit and minimizing the cost of a product; maximizing performance and minimizing fuel consumption of a vehicle; and minimizing weight while maximizing the strength of a particular component are examples of multi-objective optimization problems.

For nontrivial multi-objective problems, one cannot identify a single solution that simultaneously optimizes each objective. While searching for solutions, one reaches points such that, when attempting to improve an objective further, other objectives get worse as a result. A tentative solution is called non-dominated, Pareto optimal, or Pareto efficient if it cannot be eliminated from consideration by replacing it with another solution which improves an objective without worsening another one. Finding such non-dominated solutions and quantifying the trade-offs in satisfying the different objectives is the goal when setting up and solving a multi-objective optimization problem.

In mathematical terms, the multi-objective problem can be written as:

min_x [μ_1(x), μ_2(x), ..., μ_n(x)]^T (28a)
s.t. g(x) ≤ 0, (28b)
h(x) = 0, (28c)
x_l ≤ x ≤ x_u (28d)

where μ_i(x) is the i-th objective function, g and h are the inequality and equality constraints, respectively, and x is the vector of optimization or decision variables. The solution to the above problem is a set of Pareto points. Thus, instead of being a unique solution to the problem, the solution to a multi-objective problem is a possibly infinite set of Pareto points. A design point in objective space, μ, is termed Pareto optimal if there does not exist another feasible design objective vector μ* such that μ*_i ≤ μ_i for all i, and μ*_i < μ_i for at least one index i.

F. Calculus of variations

Calculus of variations is a field of mathematics that deals with extremizing functionals, as opposed to ordinary calculus which deals with functions. A functional is usually a mapping from a set of functions to the real numbers. Functionals are often formed as definite integrals involving unknown functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value – or stationary functions – those where the rate of change of the functional is precisely zero.

In the calculus of variations, the Euler–Lagrange equation, Euler's equation [17] or Lagrange's equation, is a differential equation whose solutions are the functions for which a given functional is stationary.

Euler's (1707 – 1783) contributions [17] began in 1733, and his Elementa Calculi Variationum gave the science its name.



Lagrange contributed extensively to the theory, and Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima.

1) The Euler–Lagrange equation: Consider the functional:

J[f] = ∫_{x_1}^{x_2} L(x, f, f′) dx. (29)

The function f(x) should have at least one derivative in order to satisfy the requirements for valid application of the equation; further, if the functional J[f] attains its local minimum at f*, then the optimal trajectory is given by the solution of the Euler–Lagrange equation:

∂L/∂f − (d/dx)(∂L/∂f′) = 0. (30)

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f. The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extreme. A sufficient condition can be expressed with the help of Legendre's condition [18]

L_{f′f′}(x, f, f′) ≥ 0 (31)

if f* is a trajectory that minimizes (for maximization, the opposite inequality holds) some measure of performance (functional) within prescribed constraint boundaries.
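The Euler–Lagrange result can be checked numerically for the simplest example (chosen here for illustration, not taken from the paper): for J[f] = ∫₀¹ f′(x)² dx with f(0) = 0, f(1) = 1, Eq. (30) gives f″ = 0, i.e. the straight line f(x) = x. Below, the integral is discretized and minimized directly over the interior node values, assuming SciPy is available:

```python
# Numerical sanity check of the Euler-Lagrange result for the simple
# functional J[f] = integral of f'(x)^2 with f(0)=0, f(1)=1.
import numpy as np
from scipy.optimize import minimize

n = 21
x = np.linspace(0.0, 1.0, n)

def J(interior):
    f = np.concatenate(([0.0], interior, [1.0]))  # boundary conditions
    df = np.diff(f) / np.diff(x)                  # finite-difference f'
    return np.sum(df**2) * (x[1] - x[0])          # discretized integral

res = minimize(J, x0=np.zeros(n - 2))
f_opt = np.concatenate(([0.0], res.x, [1.0]))
print(np.max(np.abs(f_opt - x)))   # deviation from the straight line
```

The minimizer of the discretized functional reproduces the straight line predicted by the Euler–Lagrange equation to within numerical tolerance.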

Lev Pontryagin, Ralph Rockafellar, and Clarke developed new mathematical tools for optimal control theory, a generalization of the calculus of variations [19].

G. Optimal control

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables.

The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's principle [20]) or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition). Some important milestones in the development of optimal control in the 20th century include the formulation of dynamic programming by Richard Bellman (1920 – 1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908 – 1988) and co-workers also in the 1950s, and the formulation of the linear quadratic regulator and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s [12].

1) Optimal control using the variational approach: Find the control vector trajectory u to minimize the performance index:

min_u J = φ(x(t_f)) + ∫_{t_0}^{t_f} L(x, u, t) dt (32a)

s.t. dx/dt = f(x, u) (32b)
x(t_0) = x_0 (32c)

where [t_0, t_f] is the time interval of interest, x is the state vector, φ is a terminal cost function, L is an intermediate cost function, and f is a vector field. Note that equations (32b) and (32c) represent the dynamics of the system and its initial state condition, respectively. If L(x, u, t) = 0, the problem is known as the Mayer problem; if φ(x(t_f)) = 0, it is known as the Lagrange problem. Note that the performance index J = J(u) is a functional, that is, a rule of correspondence that assigns a real value to each function u in a class, and it is the tool that is used in this section to derive necessary optimality conditions for the minimization of J(u).

Let us adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector function:

J = φ(x(tf)) + ∫_{t0}^{tf} [L(x, u, t) + λᵀ(f(x, u, t) − ẋ)] dt    (33)

Define the Hamiltonian function H as follows:

H(x, u, λ, t) = L(x, u, t) + λᵀ f(x, u, t)    (34)

Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of δx(t) and δx(tf) equal to zero, as follows:

dλ/dt = −∂H/∂x    (35)

λ(tf) = ∂φ/∂x |_{t=tf}    (36)

This choice of λ(t) results in the following expression for the variation of J, assuming that the initial state is fixed, so that δx(t0) = 0:

δJ = ∫_{t0}^{tf} H_u δu dt    (37)

For a minimum, it is necessary that δJ = 0. This gives the stationarity condition

∂H/∂u = 0.    (38)

Equations (32b), (35), and (38) are the first-order necessary conditions for a minimum of J. Equation (35) is known as the co-state (or adjoint) equation. Equation (36) and the initial state condition (32c) represent the boundary (or transversality) conditions. These necessary optimality conditions, which define a two-point boundary value problem, are very useful, as they allow finding analytical solutions to special types of optimal control problems and defining numerical algorithms to search for solutions in general cases. Moreover, they are useful to check the extremality of solutions found by computational methods [21].
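The resulting two-point boundary value problem can be solved numerically. A minimal sketch (a hypothetical illustration, not the paper's example) for min ∫₀¹ (x² + u²) dt with ẋ = u, x(0) = 1: the stationarity condition gives u = −λ/2, the co-state equation is λ̇ = −2x, and the known analytic value u(0) = −tanh(1) serves as a check.

```python
import numpy as np
from scipy.integrate import solve_bvp

# State-costate ODE from the necessary conditions:
#   x'   = -lam/2   (stationarity: 2u + lam = 0  =>  u = -lam/2)
#   lam' = -2x      (co-state equation: lam' = -dH/dx)
def rhs(t, y):
    x, lam = y
    return np.vstack((-lam / 2.0, -2.0 * x))

# Boundary conditions: x(0) = 1 (fixed initial state),
# lam(tf) = 0 (transversality, since phi = 0)
def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
y0 = np.zeros((2, t.size))          # crude initial guess
sol = solve_bvp(rhs, bc, t, y0)
u_opt = -sol.sol(t)[1] / 2.0        # recover the optimal control
```

The computed u_opt(0) should approximate −tanh(1) ≈ −0.762.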

158    2012 13th International Carpathian Control Conference (ICCC)

2) Linear quadratic control: A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional

min_u J = (1/2) xᵀ(tf) Sf x(tf) + (1/2) ∫_{t0}^{tf} [xᵀ(t) Q(t) x(t) + uᵀ(t) R(t) u(t)] dt    (39a)
s.t.  dx/dt = A(t) x(t) + B(t) u(t),    (39b)
      x(t0) = x0    (39c)

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR), where all of the matrices (A, B, Q, R) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken as infinity (this last assumption is what is known as infinite horizon). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional

min_u J = (1/2) ∫_{0}^{∞} [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt    (40a)
s.t.  dx/dt = A x(t) + B u(t),    (40b)
      x(t0) = x0    (40c)

In the finite-horizon case the matrices are restricted in that Q and R are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices Q and R are not only positive semi-definite and positive definite, respectively, but also constant. These additional restrictions on Q and R in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost functional is bounded, an additional restriction is imposed that the pair (A, B) is controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form) [21].

The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless, because it assumes that the operator is driving the system to the zero state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved after the zero-output problem. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

u(t) = −K(t)x(t) (41)

where K(t) is a properly dimensioned matrix given as

K(t) = R⁻¹ Bᵀ S(t)    (42)

and S(t) is the solution of the differential Riccati equation

Ṡ(t) = −S(t)A − AᵀS(t) + S(t)BR⁻¹BᵀS(t) − Q    (43)

For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

S(tf ) = Sf (44)

For the infinite horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as

SA + AᵀS − SBR⁻¹BᵀS + Q = 0    (45)

Moreover, if the pair (A, C) is observable, where CᵀC = Q, then the closed-loop system is asymptotically stable. This is an important result, as the linear quadratic regulator provides a way of stabilizing any linear system that is stabilizable. Of course, it is possible to solve the simplest cases without the Riccati equation, or by using LMIs. Since the ARE [12], [22] arises from the infinite horizon problem, the matrices A, B, Q, R are all constant. There are in general two solutions to the algebraic Riccati equation, and the positive definite (or positive semi-definite) solution is the one used to compute the feedback gain. It is noted that the LQ (LQR) problem was elegantly solved by Rudolf Kalman [23]. Besides predictive control, the LQ problem has also been used in robust control [24], [25], [26], diagnostic systems [27], [28], etc.
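As a hedged sketch of how (41)–(45) are used in practice (the double-integrator model and the weights below are illustrative assumptions, not the paper's data), the ARE can be solved directly and the gain recovered from (42):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: states (position, velocity), one input
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # positive semi-definite state weight
R = np.array([[1.0]])  # positive definite input weight

S = solve_continuous_are(A, B, Q, R)   # stabilizing solution of the ARE (45)
K = np.linalg.solve(R, B.T @ S)        # feedback gain K = R^-1 B^T S, cf. (42)

# The closed loop A - BK must be asymptotically stable
eigs = np.linalg.eigvals(A - B @ K)
```

For this particular instance the ARE has the closed-form solution K = [1, √3], and both closed-loop eigenvalues have negative real parts.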

H. Pontryagin’s principle

Pontryagin (1908 – 1988) and his co-workers worked in optimal control theory. His maximum principle is fundamental to the modern theory of optimization. He also introduced the idea of bang-bang control to describe situations where either the maximum action should be applied to a system, or none. While the calculus of variations finds the optimal control u in the interior of the constraint set, Pontryagin’s principle makes it possible to find it on the boundaries of the allowed values.

Let a dynamical system be described by the following differential equations

ẋ = f(x, u)    (46)

with initial and terminal conditions

x(t0) = x0,  x(tf) = xf    (47)

where the function f is defined for all x(t) ∈ Rⁿ and u(t) ∈ Rʳ. Let the control variable be constrained to a set

u ∈ U (48)

Then, from all possible controls u ∈ U which steer the dynamical system (46) from x0 to xf, u must be chosen such that the functional

J(u) = ∫_{t0}^{tf} f0(x(t), u(t)) dt    (49)

attains its extremum. If the system (46) is extended with a new state

dx0/dt = dJ/dt ≡ f0(x, u)    (50)


then the original system (46) has dimension n + 1 and the initial and terminal conditions are as follows

x̃0ᵀ = (0, x0ᵀ),  x̃fᵀ = (J, xfᵀ)    (51)

After defining the Hamiltonian function H and using the calculus of variations with Lagrange multipliers, the adjoint system equations take the following form

λ̇ = −∂H/∂x    (52)

ẋ = ∂H/∂λ    (53)

together with the stationarity condition

∂H/∂u = 0    (54)

For fixed λ, x, the extremum of the Hamiltonian function H depends on u ∈ U only. Denote by M(λ, x) the supremum of the Hamiltonian over the admissible controls:

M(λ, x) = sup_{u(t) ∈ U} H(λ, x, u)    (55)

It was shown by Pontryagin and co-workers that in this case, the necessary conditions (46), (52), and (53) still hold (see Section II-G1), but the stationarity condition (54) has to be replaced by (55). In other words, the optimal control must be chosen such that H attains its extremum during the period tf − t0 [29].

The principle was first known as Pontryagin’s maximum principle, and its proof is historically based on maximizing the Hamiltonian. The initial application of this principle was to the maximization of the terminal velocity of a rocket. However, as it was subsequently mostly used for the minimization of a performance index, it is also referred to as the minimum principle. Pontryagin’s book solved the problem of minimizing a performance index [30].

The principle states informally that the Hamiltonian mustbe minimized over U , the set of all permissible controls.

If u∗(t) ∈ U is the optimal control for the problem, then the principle states that

H(x∗(t), u∗(t), λ∗(t), t) ≤ H(x∗(t), u(t), λ∗(t), t)    (56)

for t ∈ [t0, tf] and ∀u ∈ U. Here, x∗(t) is the optimal state trajectory and λ∗(t) is the optimal co-state trajectory.

The result was first successfully applied to minimum-time problems where the input control is constrained, but it can also be useful in studying state-constrained problems.
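A classical illustration of the bang-bang structure (a textbook example, not taken from the paper) is minimum-time control of a double integrator ẋ = v, v̇ = u with |u| ≤ 1: Pontryagin's principle yields a control that switches sign on the curve x = −v|v|/2. A minimal simulation sketch:

```python
import numpy as np

# Minimum-time steering of x' = v, v' = u, |u| <= 1 to the origin.
def bang_bang(x, v):
    s = x + v * abs(v) / 2.0          # switching function (zero on the curve)
    if s > 0:
        return -1.0                   # full deceleration
    elif s < 0:
        return 1.0                    # full acceleration
    return -np.sign(v) if v != 0 else 0.0

dt = 1e-3
x, v = 1.0, 0.0                       # initial state
best = float("inf")
for _ in range(3000):                 # simulate up to t = 3 (t* = 2 here)
    u = bang_bang(x, v)
    x, v = x + v * dt, v + u * dt     # explicit Euler step
    best = min(best, np.hypot(x, v))  # closest approach to the origin
```

Starting from (1, 0), the control is u = −1 until the trajectory hits the switching curve at roughly (0.5, −1), then u = +1 until the origin is reached near t = 2; the discretized trajectory passes very close to the origin.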

Special conditions for the Hamiltonian can also be derived. When the final time tf is fixed and the Hamiltonian does not depend explicitly on time,

∂H/∂t ≡ 0,    (57)

then

H(x∗(t), u∗(t), λ∗(t), t) ≡ constant    (58)

and if the final time is free, then

H(x∗(t), u∗(t), λ∗(t)) ≡ 0    (59)

When satisfied along a trajectory, Pontryagin’s minimum principle is a necessary condition for an optimum. The Hamilton–Jacobi–Bellman equation provides sufficient conditions for an optimum, but this condition must be satisfied over the whole of the state space.

I. Dynamic programming

The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems) and then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are exactly the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations. This is especially useful when the number of repeated subproblems is exponentially large.

Top-down dynamic programming simply means storing the results of certain calculations which are later used again, since the completed calculation is a subproblem of a larger calculation. Bottom-up dynamic programming involves formulation of a complex calculation as a recursive series of simpler calculations. This breaks a dynamic optimization problem into simpler subproblems, as prescribed by Bellman’s Principle of Optimality (R. E. Bellman, 1920 – 1984). The Principle of Optimality: an optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision [31].
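A minimal sketch of the two styles on a toy stage-cost problem (chosen for illustration, not an example from the paper): from stage i one may move to stage i + 1 or i + 2, paying cost[i] on departure.

```python
from functools import lru_cache

cost = [1, 100, 1, 1, 100, 1, 1, 100, 1]   # toy stage costs

# Top-down: memoize, so each subproblem is solved only once
@lru_cache(maxsize=None)
def min_cost(i):
    if i >= len(cost):
        return 0
    # Principle of Optimality: the tail of an optimal policy is optimal
    return cost[i] + min(min_cost(i + 1), min_cost(i + 2))

# Bottom-up: build the same value function from the end backwards
V = [0] * (len(cost) + 2)
for i in range(len(cost) - 1, -1, -1):
    V[i] = cost[i] + min(V[i + 1], V[i + 2])
```

Both styles produce the same value function; here the optimal cost from stage 0 is 6 (the policy skips every stage with cost 100).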

The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory. Almost any problem that can be solved using optimal control theory can also be solved by analyzing the appropriate Bellman equation. However, the term “Bellman equation” usually refers to the dynamic programming equation associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation, usually called the Hamilton–Jacobi–Bellman (HJB) equation

−∂J∗/∂t = min_{u ∈ U} { (∂J∗/∂x)ᵀ f(x∗, u) + f0(x∗, u) }    (60)

with an appropriate boundary condition. The link between Pontryagin’s approach and the HJB equation is that the adjoint variables are the sensitivities of the cost function with respect to the states:

λ = ∂J∗/∂x    (61)

The term to be minimized in (60) is the Hamiltonian H. Thus, the partial differential equation (60) represents the time evolution of the adjoints:

λ̇ = d/dt (∂J∗/∂x) = ∂/∂x (∂J∗/∂t) = −∂Hmin/∂x    (62)

where Hmin is the minimum value of the Hamiltonian.
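For the discrete-time LQ problem, the Bellman recursion on a quadratic value function Vk(x) = xᵀ Sk x is exactly the backward Riccati iteration, which converges to the solution of the discrete-time ARE. A small sketch (the matrices are illustrative assumptions) comparing the iteration with a direct solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discretized double integrator (illustrative model)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Bellman / Riccati backward recursion, starting from the stage cost
S = Q.copy()
for _ in range(2000):
    G = R + B.T @ S @ B                       # input-weight plus value curvature
    S = Q + A.T @ S @ A - A.T @ S @ B @ np.linalg.solve(G, B.T @ S @ A)

S_are = solve_discrete_are(A, B, Q, R)        # direct DARE solution
```

The iterated S converges to the stabilizing DARE solution, which ties the dynamic programming recursion directly to the LQR result of the previous section.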


III. OPTIMAL CONTROL OF WELL-DEFINED PROCESSES

A. Steady state optimal control

Steady-state processes are approximations to the real world in which time is assumed not to be a parameter. Steady-state physical process models also assume that the processes are always in equilibrium. Steady-state approximations of process operations may be used for slow processes or processes that normally operate around a fixed set of conditions. A perturbation is such a change of the process state that requires a new computation of the optimal conditions. A perturbation can be continuous or discontinuous. The time period Tp between jumps of the input variables is very important. If Ttt is the transition time of the process after a jump and the following inequality holds

Ttt ≪ Tp    (63)

then it is possible to consider the process at steady state, and it is convenient to use a static model and static optimization methods for the solution of the optimal control problem.

1) Classification of optimal process control systems: By definition, the four characteristics of the objective function, physical process model, and constraints of a well-defined process are: steady-state, continuous-valued, deterministic, and well behaved. There are various structures and characteristics of optimal control systems. From the standpoint of optimal process control, we can divide the methods into analytical and numerical ones.

The main advantage of an analytical solution is that the control law for a process is expressed in explicit form. Unfortunately, an analytical solution is possible only in simple cases. Usually, the complex system of non-linear equations requires iterative numerical methods. The second reason for numerical design is the fact that the method or theory of optimal control is itself based on iterative procedures.

Another possible division of the methods is as follows:
• Control without constraints
• Control with linear constraints and a linear objective function
• Control with linear or non-linear constraints and a non-linear objective function.
In general, constraints reduce the free space for the optimum. On the contrary, in many cases a finite optimum is impossible to find without constraints, e.g., if the objective function is linear.

The optimal control approach can be divided into:
• Programmed optimal control
• Feedforward optimal control
• Feedforward optimal control with an updated model
• Feedback optimal control without a model
• Feedback optimal control with an incremental model
• Combined feedforward and feedback optimal control.
On a hierarchical basis, optimal control systems can be divided into:
• Optimal control with one level
• Optimal control with multiple levels.
Optimal control with multiple levels solves a defined optimization task on each level. Usually, the lower optimization level solves the optimal control equations for the technological equipment, and the higher level optimizes a group of equipment units, etc. [32]. Other classifications of optimal control depend on the methods and principles used. Below, selected elements will be described.

2) Optimal process control with a nonlinear objective function without constraints: The physical process model is given by

x = f(u) (64)

The objective function is

F = F (wu,wx,u,x). (65)

where u is the vector of control variables, wu, wx are desired operating points, and x is the vector of state variables.

Substituting the values of x from (64) into (65), the objective function expressed in terms of u only, in the form G, is

G = G(wu,wx,u) (66)

The optimal control equation is obtained by using the necessary condition (22), which means solving the following system of equations

∂G/∂u = 0.    (67)

Usually, iterative techniques must be used to solve the nonlinear equations (67). Otherwise, it is possible to obtain the optimal control law in the analytical form

u = Kbc (68)

where the matrix K of dimensions (n, n + m) and the vectors bᵀ = (wxᵀ, wuᵀ), c are constant. The sufficient conditions (23) must be investigated to determine whether the optimum is a maximum or a minimum.

3) Optimal process control with a nonlinear objective function with constraints: The constraint may be the physical process model, which is not in a form that can be directly substituted into the objective function. The constraint may also be an equation relating some of the control variables to the other control variables. In this case, it is convenient to use the Lagrange multiplier method [10]. The problem is to optimize the objective function

F = F (u,x) (69)

subject to the following constraints

gk(u,x) = 0 k = 1, . . . , p < n. (70)

Then the Lagrange function is

L = F + Σ_{k=1}^{p} λk gk    (71)

and the optimal control conditions are given by the following partial derivative equations

∂L/∂ui = 0,  i = 1, . . . , n    (72)

∂L/∂λk = 0,  k = 1, . . . , p.    (73)
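For a quadratic objective with one linear constraint, the stationarity conditions (72)–(73) reduce to a linear KKT system. A toy instance (chosen for illustration, not from the paper):

```python
import numpy as np

# Minimize F(u) = u1^2 + u2^2  subject to  g(u) = u1 + u2 - 1 = 0.
# Stationarity (72): 2*u1 + lam = 0,  2*u2 + lam = 0
# Constraint (73):   u1 + u2 = 1
# Together these form a linear system in (u1, u2, lam):
KKT = np.array([[2.0, 0.0, 1.0],
                [0.0, 2.0, 1.0],
                [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
u1, u2, lam = np.linalg.solve(KKT, rhs)
```

The solution is u1 = u2 = 0.5 with multiplier λ = −1, matching the symmetry of the problem.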


Feedforward optimal control requires the main elements (optimization criterion, mathematical model of the process, constraints) introduced in the introduction and is particularly suited to well-defined steady-state processes. Fig. 2 shows the feedforward optimal control approach.

Fig. 2. Feedforward optimal control.

4) Evolutionary optimization of poorly defined and well-defined processes: steady state: The evolutionary optimization approach (EVOP) is particularly applicable to steady-state optimization of poorly defined processes. EVOP uses an iterative procedure, which adjusts the control variables in successive moves to arrive at the optimum of the objective function. The objective function and process may be linear, nonlinear, or even undefined. EVOP can be used for feedback optimal control and combined feedforward–feedback optimal control (see Fig. 3). In this case, feedback measurement of the state and control variables will be used to determine the operating point. For example, if the objective function is linear, the EVOP approach will be applied to the objective function and the simulated physical process model to estimate the coefficients ci of the linear objective function and the matrix A. Then the linear programming (LP) technique will be used to solve the optimization task

A u ≤ b    (74)

to optimize

cᵀ u    (75)

In order to make the linearization assumption valid, the solution should be limited by the constraints (b) to the neighborhood of the operating point. The results of the LP solution will be applied in a feedforward approach to adjust the control variables. This process is repeated over and over again to maintain or to improve on the optimum of the objective function [1].
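The local LP step (74)–(75) can be sketched with an off-the-shelf solver; the coefficients below stand in for EVOP-estimated values and are purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 2*u1 + 3*u2, i.e. minimize c^T u with c = (-2, -3),
# subject to A u <= b; bounds keep u near the current operating point.
c = np.array([-2.0, -3.0])            # estimated objective coefficients
A_ub = np.array([[1.0, 1.0],
                 [1.0, 2.0]])
b_ub = np.array([4.0, 6.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 3), (0, 3)])
```

For these numbers the optimum is the vertex u = (2, 2), where both constraints are active; in an EVOP loop this point would become the next operating point before re-estimating the coefficients.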

5) Optimization system with a simulation model: The existence of a simulation model allows direct optimization, the so-called principle of optimization by model. The simulation model represents a complex mathematical model of a physical process. This model can be developed as static or dynamic. This system is known as optimization with a separate model. The procedure can be characterized as universal and flexible. It has all the advantages which the classic optimization techniques do not possess, because they are closed and applicable

Fig. 3. Combined feedforward and feedback approach.

in often highly idealized processes. Methods for dynamic process optimization are flexible, but their computational procedures reduce their universal usability. Therefore, the principle of optimization by model appears as a compromise. It is suitable for slow processes. Its principle [10], [33], [34], [35], [36], [37], [38], [39] is shown in Fig. 4. The optimization algorithm may be based on convenient optimization methods or heuristic methods [40]. Its aim is to control the course of the simulation in such a manner as to ensure that the optimization criterion reaches its extremum and the constraints are fulfilled during the repeated cycles.

Fig. 4. The principle of optimisation by model.

B. Dynamic optimal control

Numerical optimization methods for dynamic problems transform the original dynamic problem into a static NLP optimization formulation which can be solved via common NLP strategies.

1) Sequential approach: The sequential approach is also referred to as Control Vector Parametrization (CVP) in the literature [41], [42], [43] and can be found in a variety of chemical process applications [44], [45], [46], [47], [48]. The main idea behind it is to parametrize the continuous controls using a finite set of decision variables. Typically, a piece-wise constant approximation over equally spaced time intervals is chosen for the inputs [44], [45]. Consequently, a general NLP solver iteratively optimizes the objective function by choosing the control variables and by respecting the algebraic constraints. The sequential method is a feasible-path type method, i.e. in every iteration, the solution of the differential equations remains feasible while the performance index is optimized. This leads to a


robust solution procedure if feasible initial conditions for all variables are provided.

The gradients of the cost function and of the constraints with respect to all optimized variables are estimated in one of the following two ways: (i) by sensitivity equations of the system, which are integrated together with the process equations, or (ii) by adjoint variables that have to be integrated backwards. The sensitivity equations are found by differentiating the right-hand sides of the process differential equations with respect to the time-invariant parameters and the variables from the discretized inputs [49], [50]. The obvious advantage of the sensitivity approach is that it leads to a very efficient computation of the gradient. The gradient computed by adjoint variables is less accurate than the gradient expressed directly by the sensitivity equations, because the states are approximated during the backward integration. On the other hand, with an increasing number of discretized intervals, the advantage lies on the side of the adjoints because of their computational advantage over the sensitivity equations: (i) in the case of sensitivity equations, a large number of ODEs needs to be solved, as every discretized interval adds one differential equation to the process ODEs; (ii) in the case of adjoints, the number of integrated differential equations does not depend on the number of discretized intervals; it is simply two times the process equations plus an additional equation per optimized variable and per constraint, as reported in [50]. Several efficient optimization algorithms for sequential methods can be found in [51], [52], [49].
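A minimal CVP sketch (hypothetical dynamics and cost, forward-Euler integration instead of a full ODE solver, finite-difference gradients instead of sensitivities or adjoints): the control is parametrized as piecewise constant on N intervals, and a standard NLP solver adjusts the N parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem:  min  integral_0^1 (x^2 + u^2) dt,  dx/dt = -x + u,  x(0) = 1
N, tf, x0 = 5, 1.0, 1.0
steps = 200                          # Euler steps per control interval
dt = tf / (N * steps)

def cost(p):
    """Simulate the ODE for the parametrized control and return J(p)."""
    x, J = x0, 0.0
    for i in range(N):               # one decision variable per interval
        for _ in range(steps):
            J += (x**2 + p[i]**2) * dt
            x += (-x + p[i]) * dt    # feasible path: dynamics always satisfied
    return J

p0 = np.zeros(N)
res = minimize(cost, p0, method="L-BFGS-B")
```

Every cost evaluation re-integrates the dynamics, which is exactly the feasible-path property (and the main computational burden) of the sequential approach.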

2) Simultaneous approach: Although sequential methods guarantee an optimal solution by following a feasible path, they can, on the other hand, be prohibitively expensive, because they tend to converge slowly and require the solution of the differential equations at each iteration. In the simultaneous approach, the state profiles are approximated in addition to the control profiles; thus, the dynamic optimization problem becomes a pure NLP problem expressed by a set of algebraic equations. This NLP problem then converges to the optimum even from an infeasible starting guess. The idea of orthogonal collocation coupled with a Quasi-Newton method was used in [53] to perform simultaneous parameter estimation and integration of non-linear system dynamics. The sensitivity equations for the dependent variables with respect to the parameters along the system dynamics were also replaced by an approximated set of algebraic equations, so the optimization was performed in the subspace of the parameters. A low-order polynomial approximation was found to give good accuracy while keeping the dimension of the NLP low. This method proved to be superior in computational efficiency to the other existing parameter estimation algorithms.

A similar approach is discussed by Biegler [54], in which orthogonal collocation is applied to the system of differential equations, too. The control and state profiles are transformed into a set of algebraic equations. Then, the optimization strategy solves the transformed problem. The main improvement of Biegler’s simultaneous algorithm over the Hertzberg and Asbjornsen algorithm is a different approximation of the time-varying independent variables: Biegler approximated them by Lagrange polynomials instead of the constant independent variables used in [53].

A generalization of these two collocation methods is presented in [55]. The major difference in this approach is the application of the collocation procedure to convert the ODEs into an approximating set of algebraic equations. This method is also labeled as collocation on finite elements [56], [57]. The continuous independent variables are specified as piecewise constant functions. The algorithm can specify the number and location of the spline points. This makes the algorithm of [55] slightly more complex and dimensionally larger than Biegler’s method.

The simultaneous algorithms introduce an approximation of the dynamic system equations in order to explicitly avoid the integration process. Hence, the optimization is carried out in the full space of the approximated inputs and states. In general, the ODEs are satisfied only at the solution of the optimization problem [44], so this method is called the infeasible path approach. The approach can be found in several batch applications [58], [59], [60], [61].

3) Software tools: There are numerous software packages (commercial or free) for solving dynamic optimization problems, implemented in various programming environments. MATLAB packages such as the orthogonal collocation based Dynopt [62] or the CVP based DOTcvp [63] are among those available freely.

IV. PROCESS CONTROL EXAMPLES

A. Feedforward optimal control with an updated model for a group of technological equipment

The aim of the static optimization is to minimize the consumption of fuel in four parallel manufacturing units (tunnel furnaces), which during the period t ∈ (0, tf) must produce a material mass Q. Fig. 5 shows the functional model of n parallel manufacturing processes.

Fig. 5. Functional model of manufacturing units E1, E2, . . . , En.

The individual units are of various construction and different age. Their efficiency differs as well. Therefore, the i-th unit has, for performance xi, a specific consumption of energy yi. The dependence of the specific consumption of energy on performance is given in Tables II, III, IV, V.

The technological processes are defined by the following convex functions interpolated from the tables

yi = ai0 + ai1 xi + ai2 xi²  for i = 1, . . . , n  (n = 4).    (76)


TABLE II
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 1st TUNNEL FURNACE

x1 [t h⁻¹]   2.53   2.76   3.68    4.6     5.06  5.52  5.98  6.44  6.9
y1 [m³ t⁻¹]  139.5  133.1  117.09  116.21  107   99.4  92.9  90.9  95.7

TABLE III
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 2nd TUNNEL FURNACE

x2 [t h⁻¹]   2.53   2.76   3.76   4.73    5.24    5.69   6.15   7.52
y2 [m³ t⁻¹]  144.4  131.2  118.6  111.69  106.58  104.8  102.1  112.28

TABLE IV
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 3rd TUNNEL FURNACE

x3 [t h⁻¹]   3.6    3.97    4.04    5.12   6.19   6.43
y3 [m³ t⁻¹]  129.4  118.17  115.34  110.3  108.5  107.73

TABLE V
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 4th TUNNEL FURNACE

x4 [t h⁻¹]   4.073   5.067   5.35    6.24    6.706
y4 [m³ t⁻¹]  128.38  115.86  113.46  109.35  105.18

The optimal control with a non-linear objective function and linear and non-linear constraints is given as the optimization problem

f(x) = Σ_{i=1}^{n} (ai0 + ai1 xi + ai2 xi²)    (77)

subject to the linear and non-linear constraints

g1 ≡ Σ_{i=1}^{n} xi ti − Q = 0    (78)

g2 ≡ ti + ZiU − tf = 0    (79)

g3 ≡ ti − ZiL = 0    (80)

The problem of optimal control thus formulated is convenient to solve by the Lagrange multiplier method. The Lagrange function is

L = f(x) + Σ_{k=1}^{3} λk gk.    (81)

The optimal solution of the task (77)–(80) is then given by the following non-linear equations

∂L/∂xi = 0    (82)

∂L/∂ti = 0    (83)

∂L/∂λk = 0.    (84)

In the simplest case it is possible to get the optimal control in analytical form if the vector t = (tf, tf, tf, tf) is constant for n = 4. Then the Lagrange function (81) takes the form

L = f(x) + λ(tᵀx − Q).    (85)

The optimal solution of the optimization problem is given by the solution of the following linear equations

∂L/∂xi = 0,  ∂L/∂λ = 0    (86)

in the form B u∗ = b, where u = (x1, x2, x3, x4, λ)ᵀ is the vector of control variables, and the matrix B and the vector b = (a11, a21, a31, a41, tf) consist of constant elements. The optimal control equation is then

u∗ = B⁻¹ b.    (87)

The solution of the optimal control equation (87) makes it possible to reach the minimal consumption of energy, while Q tons of material are produced after time tf.

Based on the data from Tables II–V, the optimal control was determined from (87) as u∗ = (6.483, 5.016, 4.398, 5.995) for t = 30 days (720 h) and a desired production Q = 15763.032 tons of fired material. This way of control minimizes the consumption of fuel, given as ymin = (438 006.53, 387 452.80, 361 798.39, 470 785.25) m³ gas. A comparison with real data from the plant during the same period of one month resulted in the fuel consumption y = (438 534.60, 468 669.45, 430 316.39, 467 483.46) m³ gas. The fuel reduction is 146 960.90 m³ gas in comparison with the considered period of one month.
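The computation (77)–(87) can be approximately reproduced from the tables. The sketch below fits the quadratics (76) by least squares (rather than the paper's interpolation), so the fitted coefficients, and hence the allocation, only approximate the published u∗; the production constraint, however, is satisfied exactly.

```python
import numpy as np

# Throughput x_i [t/h] vs. specific gas consumption y_i [m^3/t], Tables II-V
x_data = [
    [2.53, 2.76, 3.68, 4.6, 5.06, 5.52, 5.98, 6.44, 6.9],
    [2.53, 2.76, 3.76, 4.73, 5.24, 5.69, 6.15, 7.52],
    [3.6, 3.97, 4.04, 5.12, 6.19, 6.43],
    [4.073, 5.067, 5.35, 6.24, 6.706],
]
y_data = [
    [139.5, 133.1, 117.09, 116.21, 107.0, 99.4, 92.9, 90.9, 95.7],
    [144.4, 131.2, 118.6, 111.69, 106.58, 104.8, 102.1, 112.28],
    [129.4, 118.17, 115.34, 110.3, 108.5, 107.73],
    [128.38, 115.86, 113.46, 109.35, 105.18],
]
# Least-squares quadratic fits y_i = a0 + a1*x + a2*x^2, cf. (76)
fits = [np.polyfit(x, y, 2) for x, y in zip(x_data, y_data)]  # [a2, a1, a0]

Q_total, t_f, n = 15763.032, 720.0, 4
# Stationarity of (85):  2*a2_i*x_i + t_f*lam = -a1_i
# Production constraint: t_f * sum(x_i) = Q
B = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
for i, (a2, a1, _) in enumerate(fits):
    B[i, i], B[i, n], b[i] = 2.0 * a2, t_f, -a1
B[n, :n], b[n] = t_f, Q_total
sol = np.linalg.solve(B, b)        # cf. u* = B^-1 b in (87)
x_opt = sol[:n]                    # optimal throughput of each furnace
```

The stationarity rows equalize the marginal specific consumption across the furnaces, which is the economic interpretation of the multiplier λ.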

The optimal process control system can be used to improve the accuracy by updating the physical process model coefficients in real time using measurements. Fig. 6 shows that the control and state variables are measured to provide data for the update of the process model. If the allowable error is exceeded, a new set of optimal control equations can be developed by using the actual values of the coefficients in the matrix A [32].

Fig. 6. Adapting feedforward optimal control by updating models.

B. Feedforward optimal process control using model optimization

The principle of static optimization by model was used to develop a control system for heating slabs in a pushing furnace [33], [35], [36], [37]. The pushing furnace belongs to the group of industrial furnaces with high energy consumption. Fig. 7 shows a lengthwise cut through the furnace. The furnace has a length of about 34 m. There are seven upper burner zones and two lower burner zones, which are controlled at the stabilization level. The furnace is fueled by a mixture of natural gas, coke gas,

164 2012 13th International Carpathian Control Conference (ICCC)

Page 13: [IEEE 2012 13th International Carpathian Control Conference (ICCC) - High Tatras, Slovakia (2012.05.28-2012.05.31)] Proceedings of the 13th International Carpathian Control Conference

and blast furnace gas. The slabs, which are steel blocks with dimensions of about 1.2 × 0.2 × 9 m, are pushed through the furnace longitudinally. The furnace is filled with 25 slabs, which form a “slab band”. Whenever a slab is pushed into the furnace, the slab band moves forward and another slab leaves the furnace at the discharging end at the required temperature of approximately 1270 ◦C.

Fig. 7. Scheme of pushing furnace.

1) Mathematical model: The development of the mathematical model was based on the zone structure. The working space of the furnace was divided into volume zones, which are closed by real surface zones (roof, wall, and slab). The main processes in a volume zone are described as follows.

Heat transfer by conduction:

cρ ∂t/∂τ = ∂(λ ∂t/∂x)/∂x    (88)

where c is the specific heat capacity [J kg⁻¹ K⁻¹], ρ is the density [kg m⁻³], x is the coordinate [m], t is the temperature [◦C], λ is the thermal conductivity [W m⁻¹ K⁻¹], and τ is the time [s].

Conduction heat transfer is solved for the roof, wall, and slab elements with the initial and boundary conditions

t(x, 0) = f(x)    (89)

−λ dt/dx = qin  for x = 0    (90)

−λ dt/dx = qout  for x = h    (91)

where qin, qout are the densities of the input and output heat flows in the element, defined by (92), (93), and h is the thickness of the element.
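A hedged sketch of an explicit finite-difference treatment of (88)–(91) for one slab element; the material properties, fluxes, and discretization below are illustrative assumptions, not the paper's furnace data (T is used for temperature to avoid clashing with time).

```python
import numpy as np

# 1-D conduction in a slab element of thickness h with flux boundaries.
c, rho, lam_c = 650.0, 7850.0, 30.0        # J/(kg K), kg/m^3, W/(m K)
h, nx = 0.2, 41                            # thickness [m], grid points
dx = h / (nx - 1)
alpha = lam_c / (c * rho)                  # thermal diffusivity
dtau = 0.4 * dx**2 / alpha                 # stable explicit time step
q_in, q_out = 50e3, 0.0                    # boundary heat fluxes [W/m^2]

T = np.full(nx, 20.0)                      # initial condition (89)
for _ in range(2000):
    Tn = T.copy()
    # interior nodes: dT/dtau = alpha * d2T/dx2, cf. (88)
    T[1:-1] = Tn[1:-1] + alpha * dtau * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2]) / dx**2
    # flux boundary conditions (90)-(91): -lam * dT/dx = q
    T[0] = T[1] + q_in * dx / lam_c
    T[-1] = T[-2] - q_out * dx / lam_c
```

With heating on one face and the other insulated, the computed profile decreases monotonically through the thickness, which is the temperature gradient in the slab that the furnace model tracks.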

Boundary conditions of the fourth type hold at the boundaries of the wall layers. The heat flows to the lining and the slabs are given by convection and radiation

Qc = α(T1 − T2)S    (92)

Qri = σ [ Σ_{j=1}^{n} TAij Tj⁴ − Σ_{j=1}^{n} TAji Ti⁴ ]    (93)

where Qc is the convective heat flow [W], α is the convective coefficient [W m⁻² K⁻¹], S is the surface [m²], T1, T2 are the temperatures of the volume and real zones, respectively [K], σ is the Stefan–Boltzmann constant (5.6703 × 10⁻⁸ W m⁻² K⁻⁴), Qr is the radiation heat flow [W], T is the temperature of the radiating surface [K], and TAij, TAji are the total heat exchange surfaces between zones i and j.

The temperature of the combustion products in each i-th volume zone was determined by solving the nonlinear balance equations

V^f_i H^f_i + V^air_i c^air_i t^air_i + Σ V^c_{i+1} c^c_{i+1} t^c_{i+1} p^in_{i+1} − Q^slab_i − Q^wall_i − Q^water_i − Σ V^c_i c^c_i t^c_i p^out_i = 0 (94)

where H^f is the calorific value of the fuel gas mixture; V^f, V^air, V^c are the volume flows of fuel, combustion air, and combustion products; c is the specific heat capacity; Q^slab, Q^wall are the total heat flows by convection and radiation for the slab and walls; Q^water is the heat flow removed by the water-cooled skids; and p^in, p^out are the relative parts of the input or output volume flow of combustion products in the volume zone.

The system of equations (88)–(94) was solved for all volume zones during every discrete simulation time period. The simulation model makes it possible to take into account different dimensional configurations of the working space of the furnace and of the slabs, including their production range. The basic input parameters, besides the construction parameters and dimensions, are the volume flows of fuel, the pushing period for slabs, the composition of the fuel mixture (natural, coke, and blast furnace gas), and the output of the furnace. The basic output parameters of the simulation are the temperature distribution in the combustion products, slabs, and lining in the cross-section and along the length of the furnace, the composition of the combustion products, the temperature gradient in slabs, and the mass of metal lost due to scaling. From the standpoint of optimal control, the vector of control variables is defined as

u = (V^f_1, V^f_2, . . . , V^f_9, Δτ, p_cokegas, p_bgas) (95)

where V^f_i for i = 1, . . . , 7 are the fuel volume flows for the upper controlled zones and for i = 8, 9 for the lower zones; p_cokegas, p_bgas are the percentage contents of coke gas and blast furnace gas; and Δτ is the pushing period of slabs.

2) Optimisation criterion: The aim is to find the optimal control variables u and the optimal trajectory tslab(x, τ) that minimize the following functional

J(u) = k1J1 + k2J2 + k3J3 (96)

where k1, k2, k3 are weighting (price) constants. The first term in (96) expresses the quality requirement on the temperature field of the pushed-out slabs through the following quadratic criterion

J1 = ∫_0^h (tr − t(x, τk))² dx (97)

where h is the thickness of the slab, τk is the time when the first input slab leaves the furnace, and tr is the desired slab temperature at the moment of exit from the furnace.

The second term stands for fuel consumption

J2 = ∫τ Σ_{i=1}^{9} ui(τ) dτ (98)


The third term represents the metal loss due to scaling

J3 = ∫τ f[t(x, τ), τ, %O2(τ)] dτ for x = h and x = 0 (99)

where %O2 denotes the concentration of oxygen in the furnace.

3) Constraints: The constraints were given in the following form:

• The produced mass of hot slabs Q = 800 kt.
• The time of production τp ≤ 7500 h.
• V^min_i ≤ V^f_i ≤ V^max_i, where V^min_i, V^max_i are constants.

4) Optimisation technique: In this case, the simulation model given by equations (88)–(94) was used. The existence of the simulation model allowed direct optimization with the model (see Fig. 4). The optimization procedures were based at first on a probe algorithm; next, an algorithm based on a gradient method with constraints was employed. The second technique proved more effective. Fig. 8 shows the course of J(u) over the optimization steps for the first technique. The values of the functionals (96)–(99) have an economic interpretation, e.g. (96) as the total cost of heating.
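A minimal sketch of a gradient method with box constraints of the kind mentioned above (projected gradient with forward-difference gradients; the step size, iteration count, and stopping rule are illustrative choices, not the authors' algorithm):

```python
import numpy as np

def projected_gradient(J, u0, lb, ub, step=0.1, iters=100, eps=1e-6):
    """Minimize J(u) subject to box constraints lb <= u <= ub by a
    projected-gradient iteration with forward-difference gradients."""
    u = np.clip(np.asarray(u0, dtype=float), lb, ub)
    for _ in range(iters):
        J0 = J(u)
        g = np.zeros_like(u)
        for k in range(u.size):              # finite-difference gradient
            d = np.zeros_like(u)
            d[k] = eps
            g[k] = (J(u + d) - J0) / eps
        u = np.clip(u - step * g, lb, ub)    # descent step + projection
    return u
```

In the furnace application, each evaluation of J(u) corresponds to one full run of the simulation model (88)–(94), which is why the cheaper probe algorithm was tried first.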

Fig. 8. Values of the optimised criterion during simulation.

5) Optimal control of the heating process in the pushing furnace: The obtained optimal control variables are:

• Volume flow of fuel [m3 s−1]:
(u1, . . . , u5) = (0.0, 0.35, 0.68, 0.63, 0.11)
(u6, . . . , u9) = (0.05, 0.12, 1.22, 1.1)
• Period of pushing [s]: u10 = 530
• Coke gas, blast furnace gas, natural gas: u11 = 60 %, u12 = 40 %, 0 %

These control variables provide the heating with the minimal value of the optimization criterion. The temperature gradient of the last slab was 13.6 °C, which represents a very good quality of slab reheating. Fig. 9 shows the optimal temperature trajectories of a slab in cross-section. If the control system consists of two levels (stabilization + optimization), the desired values for the stabilization level will be defined by the temperature trajectory for the upper surfaces of the slabs in the furnace.

C. Time optimal membrane filtration

The aim is to study the dynamic operation of a batch membrane filtration process [64]. The process model is described by a set of nonlinear ordinary differential equations that are input affine. The objective is to find a time-optimal control trajectory. We apply Pontryagin's minimum principle to formulate necessary conditions of optimality. We show that the optimality conditions

Fig. 9. Optimal trajectory for slab.

are sufficient to determine the optimal operation analytically for this simple process.

A schematic diagram of a discontinuous membrane filtration process is shown in Figure 10.

Fig. 10. Schematic representation of a generalized batch filtration process: feed tank with diluant inflow u(t), membrane module with permeate outflow q(t), and retentate recycle.

Considering a process liqueur with two solutes, the general purpose of a batch plant can be summarized as increasing the macro-solute concentration from c1,0 to c1,f and reducing the micro-solute concentration from c2,0 to c2,f. The fractionation is accomplished by performing a so-called diafiltration mode in which the micro-solute is washed out of the process liqueur by introducing fresh buffer (i.e. diluant) into the feed reservoir while simultaneously removing the macro-solute-free permeate.

1) Model: The balance of each solute can be written as

dci/dt = (ci q / V)(Ri − α), ci(0) = ci0, i = 1, 2 (100)

where V is the retentate volume at time t. The rejection coefficients are R1 = 1 for the macro-solute (it does not pass through the membrane) and R2 = 0 for the micro-solute. We assume a concentration polarization model with

q(c1) = kA ln(cw / c1) (101)


where k is the mass transfer coefficient, A is the membrane area, and cw is the macro-solute concentration at the membrane wall.

As R1 = 1, the volume balance follows from the material balance of the macro-solute

c1 V = c1,0 V0 (102)

where V0 represents the initial tank volume.

2) Minimum Time Problem: The objective of this optimisation task is to find the time-dependent function α(t) which drives the process from the initial state to a prescribed terminal state in minimum time. The mathematical formulation of this dynamic optimisation problem is as follows

J1 = min_{α(t)} tf (103a)

s.t.

ċ1 = (c1 q / V)(1 − α), c1(0) = c1,0, c1(tf) = c1,f, (103b)
ċ2 = −(c2 q / V) α, c2(0) = c2,0, c2(tf) = c2,f, (103c)
V = (c1,0 / c1) V0, (103d)
α ≥ 0. (103e)

3) Solution: The Hamiltonian of this problem is linear in the control variable α. Therefore, its minimum is attained with α on its boundaries (bang-bang control) whenever its derivative with respect to α is non-zero. If this derivative is zero, the singular case occurs, and we inspect time derivatives of the Hamiltonian and require them to be zero. In our case, these conditions yield α = 1 if c1 = cw/e.
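The singular value can be motivated by a quick heuristic check (a sketch, not the full adjoint analysis): since V = c1,0 V0 / c1 by (102), the micro-solute washout rate c2 q α / V during constant-volume operation is proportional to c1 q(c1), and with (101) this product is maximized where

```latex
\frac{\mathrm{d}}{\mathrm{d}c_1}\left[c_1\,kA\ln\frac{c_w}{c_1}\right]
  = kA\left(\ln\frac{c_w}{c_1}-1\right) = 0
  \quad\Longrightarrow\quad c_1 = \frac{c_w}{e}.
```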

Therefore, the optimal process consists of consecutive operational steps of three basic operational modes in a certain order. These operational modes can be technically characterized as the concentration mode (α = 0), the constant-volume diafiltration (CVD) mode (α = 1), and pure dilution (α = ∞). The latter case, α = ∞, corresponds to instantaneous addition of diluant.

This theoretical result was confirmed by the methods of numeric dynamic optimization mentioned in Section III-B. The optimal minimum-time operation for this type of membrane can be stated as follows:

1) The first (optional) step is either pure dilution (α = ∞) or pure ultrafiltration (α = 0) until the optimal macro-solute concentration c = cw/e is obtained.

2) The second step is CVD (α = 1), maintaining the optimal macro-solute concentration. This step finishes when either the final concentration of the micro-solute or the final ratio of macro-solute to micro-solute concentration is obtained.

3) Finally, the third (optional) step is again either pure dilution (α = ∞) or pure ultrafiltration (α = 0) until the final concentrations of both components are obtained.
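As an illustration, the concentration (α = 0) and CVD (α = 1) modes of model (100)–(102) can be simulated with a simple explicit Euler sketch; the values of kA, cw, and the initial conditions below are hypothetical, not data from the paper.

```python
import numpy as np

# Hypothetical parameters for illustration (not data from the paper):
KA, CW = 1.0, 100.0      # permeability factor k*A and wall concentration cw

def q(c1):
    """Permeate flow from the concentration polarization model (101)."""
    return KA * np.log(CW / c1)

def simulate(c10, c20, V0, alpha, t_end, dt=1e-3):
    """Explicit Euler integration of (100) with V eliminated via (102),
    for a constant diluant ratio alpha."""
    c1, c2 = c10, c20
    for _ in range(int(t_end / dt)):
        V = c10 * V0 / c1                 # retentate volume from (102)
        f = q(c1) / V
        c1 += dt * c1 * f * (1 - alpha)   # R1 = 1 (macro-solute retained)
        c2 += dt * (-c2 * f * alpha)      # R2 = 0 (micro-solute passes)
    return c1, c2
```

Concentration mode (alpha = 0) raises c1 and leaves c2 unchanged, while CVD (alpha = 1) holds c1 and washes out c2, matching steps 1 and 2 above.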

D. Two-stage batch reactor control

We consider connected two-stage batch reactors [65], [44], [45]. The first one is filled with a diluted solution of compound A with initial concentration cA(t0) up to volume V1 and some portion of catalyst. The heating coil is the control variable during the first stage. In the first reactor, the chain reaction

A →(k1) B →(k2) C (104)

takes place until an undetermined time tp. At this instant, the dynamics of the process change. The second batch reactor is filled with the products from the first reaction, and an amount S of a diluted solution of compound B with concentration c^s_B is added. Three parallel reactions take place in the reactor at isothermal conditions

B→ D, B→ E, 2B→ F. (105)

The objective is to maximize the amount of compound D, equal to V2 cD, subject to a minimal desired concentration c^w_D of compound D at the final time tf and subject to the process equations. The decision variables are the reactor temperature T in the first reaction stage, the switching time tp between the two stages, the final time tf, and the amount S. Overall, the optimization problem is given as follows:

cost function:

max_{S, tp, T[0,tf]} V2 cD(tf) (106a)

constraints — process:

d/dt (cA, cB, cC, cD, cE, cF)ᵀ = (−2 k1(T) cA², k1(T) cA² − k2(T) cB, k2(T) cB, 0, 0, 0)ᵀ, t ∈ [0, tp] (106b)

d/dt (cA, cB, cC, cD, cE, cF)ᵀ = (0, −0.02 cB − 0.05 cB − 0.00008 cB², 0, 0.02 cB, 0.05 cB, 0.00004 cB²)ᵀ, t ∈ [tp, tf] (106c)

terminal

cD(tf )− cwD ≥ 0 (106d)

with kinetic rate constants defined as

k1(T) = 0.0444 e^(−2500/T) (107a)

k2(T) = 6889 e^(−5000/T) (107b)

and mixing operations at the switching time tp are

V2 cA(tp⁺) = V1 cA(tp⁻) (108a)
V2 cB(tp⁺) = V1 cB(tp⁻) + S c^s_B (108b)
V2 cC(tp⁺) = V1 cC(tp⁻) (108c)

where V2 = V1 + S; cA(tp⁻), cB(tp⁻), cC(tp⁻) are the output concentrations of compounds A, B, and C in the first stage;


cA(tp⁺), cB(tp⁺), cC(tp⁺) are the initial concentrations of compounds A, B, and C in the second stage; and S stands for the amount of diluted compound B solution, with fixed concentration c^s_B, added at the switching time tp. The values of the process parameters are the following: V1 = 0.1 m3, cA(t0) = 2000 mol m−3, cB−F(t0) = 0 mol m−3, c^s_B = 600 mol m−3, c^w_D = 150 mol m−3. The final time is constrained by tf ≤ 180 min.

The numerical solution of the optimization problem was obtained by the orthogonal collocation method. Both control and state profiles are parametrized, and the continuous problem is thereby transformed into a finite-dimensional one, i.e. it becomes an NLP. In the first stage, 4 intervals with 10 discretization points are used for the state variables, and 3 intervals with 4, 5, and 3 collocation points, respectively, are used for the control variable. In the second stage, a single interval with 10 collocation points is used for the state variables. No control variable is present here, as the process operates at isothermal conditions.

The optimal value of the performance index is J = 25.56 mol, and the value of the addition is S = 0.0702 m3. Both constraints are satisfied and active: the final concentration cD of compound D is equal to the desired 150 mol m−3, and the final time tf coincides with the maximum of 180 min. The resulting optimal switching time is tp = 105.8 min. The nominal solutions for the control and state variables are shown in Fig. 11.
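For intuition, the two-stage dynamics (106b)–(106c) with the mixing step (108) can be integrated by explicit Euler; the constant first-stage temperature below is a hypothetical simplification (the paper optimizes a full temperature profile T[0, tf]).

```python
import numpy as np

def run_two_stage(T, tp, tf, S, V1=0.1, cA0=2000.0, csB=600.0, dt=0.01):
    """Explicit Euler sketch of (106b)-(106c) with mixing (108) at tp.
    A constant first-stage temperature T is assumed for simplicity;
    times are in minutes, concentrations in mol m^-3."""
    k1 = 0.0444 * np.exp(-2500.0 / T)          # (107a)
    k2 = 6889.0 * np.exp(-5000.0 / T)          # (107b)
    cA, cB, cC, cD, cE, cF = cA0, 0.0, 0.0, 0.0, 0.0, 0.0
    t = 0.0
    while t < tp:                               # first stage (106b)
        dA = -2.0 * k1 * cA**2
        dB = k1 * cA**2 - k2 * cB
        cA, cB, cC = cA + dt * dA, cB + dt * dB, cC + dt * k2 * cB
        t += dt
    V2 = V1 + S                                 # mixing step (108)
    cA, cB, cC = V1 * cA / V2, (V1 * cB + S * csB) / V2, V1 * cC / V2
    while t < tf:                               # second stage (106c)
        cD += dt * 0.02 * cB
        cE += dt * 0.05 * cB
        cF += dt * 0.00004 * cB**2
        cB += dt * (-0.02 * cB - 0.05 * cB - 0.00008 * cB**2)
        t += dt
    return V2 * cD, cD                          # amount and concentration of D
```

The returned amount V2 cD is the quantity maximized in (106a); a longer second stage can only increase it while cB remains positive.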

E. Alternating activated sludge process

We study a small-size single-basin wastewater treatment plant. The removal of nitrogen (N) requires two biological processes: nitrification and denitrification. The former takes place under aerobic conditions, whereas the latter requires an anoxic environment. For small-size plants, i.e. less than 20,000 p.e. (population equivalent), the two processes are very often carried out in a single basin using surface turbines. The nitrification process (respectively, the denitrification process) is realized by simply switching the turbines on (respectively, off).

The process considered consists of a unique aeration tank equipped with mechanical surface aerators (turbines) which provide oxygen and mix the incoming wastewater with biomass (Fig. 12). The settler is a cylindrical tank where the solids are either recirculated to the aeration tank or extracted from the system.

We assume daily variations of both the influent flowrate and the organic load during dry weather conditions, based on measured data from the plant.

The objective is to determine an optimal sequence of aeration/non-aeration times so that, for a typical diurnal pattern of disturbances, the effluent constraints are respected, the plant remains in a periodic steady state, and the energy consumption is minimized [66].

1) Process Model: The model we use is based on the Activated Sludge Model No. 1 (ASM1) by [67]. This is the most popular mathematical description of the biochemical processes in the reactors for nitrogen and chemical oxygen demand (COD) removal. The biodegradation model consists of

Fig. 11. Optimal control of the two-stage reactor. Top: optimal concentration profiles of compounds A–C. Middle: optimal concentration profiles of compounds D–F. Bottom: optimal control (temperature) profile.


Fig. 12. Typical small-size activated sludge treatment plant: influent, aeration tank, settler, recycled sludge, effluent, and excess sludge.

11 state variables and 20 parameters and was fully described in [66].

If we assume Nc cycles of on/off operations during one day, the 11 system differential equations can be given as

dx/dτ = u(τ) f(x, ub), 0 ≤ τ ≤ 2Nc (109)

where ub is a binary sequence switching between 1 and 0 and u(τ) is a piecewise constant sequence of switching times

u(τ) = ui = Δti, i − 1 ≤ τ < i, i = 1, 2, . . . , 2Nc (110)

2) Definition of Optimal Operation:

a) Cost Function: About 3/4 of the total cost is related to the energy consumption of the aeration turbines [68]. As these operate in on/off mode, minimizing the aeration time decreases the operating costs. Therefore, the dimensionless cost function is defined as

min_u J = (1/T) Σ_{i=1}^{Nc} u_{2i−1} (111)

where T represents the time interval of one day.

b) Constraints: According to the new European Union regulations on the effluent of wastewater treatment plants, maximum concentrations in terms of chemical oxygen demand (COD), biological oxygen demand (BOD), suspended solids (SS), and total nitrogen (TN) are to be respected. The most critical is the total nitrogen constraint, since the other constraints are usually satisfied during normal operating conditions of this plant.

TNmax ≤ 10mg/l (112)

In addition, some limitations are imposed on the aeration times to ensure the feasibility of the computed aeration profiles and to prevent damage to the turbines.

The minimum air-on and air-off times are set to 15 minutes to avoid too frequent cycling of the turbines and to ensure that the activated sludge after anoxic periods is sufficiently aerated and mixed in the aeration tank.

Maximum times of 120 minutes are also considered to prevent floc sedimentation in the aeration tank as well as anaerobic conditions, which would modify the degradation performances.
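The cost (111) and the 15/120-minute interval bounds can be sketched as follows; the list representation of the switching sequence u (alternating air-on/air-off interval lengths in hours) is an assumption for illustration, not the encoding used in [66].

```python
def aeration_cost(u, T=24.0):
    """Dimensionless cost (111): total aeration time over one day.
    u is the sequence of interval lengths in hours, alternating
    [air-on, air-off, air-on, ...]."""
    assert abs(sum(u) - T) < 1e-9, "intervals must fill exactly one day"
    return sum(u[0::2]) / T      # entries u1, u3, ... are aeration times

def feasible(u, t_min=0.25, t_max=2.0):
    """Check the 15-minute / 120-minute bounds on every interval."""
    return all(t_min <= ti <= t_max for ti in u)
```

The equality constraint that all intervals sum to T, stated later in the text, appears here as the assertion inside aeration_cost.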

In order to find a stationary regime, the initial conditions of the plant are assumed to be unknown and are subject to

Fig. 13. Optimal stationary trajectories for J = 39.51 %. Top: nitrogen constraint, bottom: aeration policy.

optimization. Then, the requirement of a stationary regime dictates that the states at the final optimization time are the same as the initial states.

The final time of optimization T has been chosen as one day, as the disturbances are periodic with this frequency. This results in an equality constraint that the sum of all aeration and non-aeration times be equal to T.

3) Simulation Results and Discussion: To solve the problem, we have applied the package DYNO [69], implemented in FORTRAN. It is based on the CVP method.

The number of cycles has been set to Nc = 29 and was not a subject of further optimization, as this would lead to mixed-integer dynamic optimization.

This optimization problem converged to the optimal aeration profile with the average aeration rate of 39.51 % shown in Fig. 13. The aeration rate is defined as the ratio of the aeration time to one cycle time, i.e. u_{2i−1}/(u_{2i−1} + u_{2i}). It can be noticed that the total nitrogen hits the maximum constraint.

The optimal trajectories of the nitrate and nitrite nitrogen concentration SNO and the dissolved oxygen concentration SO


Fig. 14. Trajectories for rule-based control. Top: nominal and perturbed nitrogen constraint, bottom: nominal aeration policy.

show only a limited sensitivity towards the disturbances (inlet flowrates and inlet concentrations). More precisely, switching of the turbines in the obtained optimal stationary regime occurs frequently, either when the concentration of SNO falls close to zero or when the concentration of SO is sufficiently high. Moreover, these two states can be measured. A simple feedback control strategy can then be proposed from their behavior:

1) Start aeration when SNO decreases sufficiently close to zero.

2) Stop aeration when SO reaches a certain value.

The application of these simple rules is shown in Fig. 14, where the total nitrogen concentration is shown for two cases. The first one, denoted as nominal, applies the rules with the values SNO(min) = 0.01 mg/l and SO(max) = 0.7 mg/l. In the perturbed case, it is supposed that the third day is rainy, with a 300 % increase of the influent flow and a 50 % decrease of the influent concentrations for the whole day.
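The two rules can be sketched as a simple hysteresis-style switching function; the threshold defaults follow the nominal case, and the function signature is illustrative, not the controller implementation from [66].

```python
def aeration_rule(aeration_on, S_NO, S_O, S_NO_min=0.01, S_O_max=0.7):
    """Rule-based switching sketch: start the turbines when nitrate/nitrite
    S_NO is nearly depleted (rule 1), stop them when dissolved oxygen S_O
    is high enough (rule 2); otherwise keep the current turbine state.
    Thresholds default to the nominal-case values (mg/l)."""
    if not aeration_on and S_NO <= S_NO_min:
        return True                 # rule 1: start aeration
    if aeration_on and S_O >= S_O_max:
        return False                # rule 2: stop aeration
    return aeration_on
```

Keeping the current state between the two thresholds is what makes the rule hysteresis-like and avoids chattering of the turbines.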

This simple feedback control strategy handles the usual daily variations of the influent flow and total nitrogen load very well and keeps the effluent total nitrogen within the limit during the first five days.

Continuation of the simulation for another 200 days (not shown here) to attain stationary operation gives an average aeration rate approximately the same as that of the optimal control (39.60 %), with the peak concentration of the total nitrogen only slightly higher (10.74 mg/l).

V. CONCLUSIONS

In this survey paper we have presented basic concepts from optimization and optimal control. These are applied to key technological units in process industries to improve their operation.

Static optimization is mainly used in process management, where the dynamical properties of process plants can be neglected. It can be thought of as a tool for managers and as an aid to making correct decisions.

Dynamic optimization, or optimal control of processes, is inherently tied to unsteady operation, transient changes, disturbance rejection, and batch process operation. Its targets are often minimum-time or minimum-energy operation.

Optimal process control does not require additional costs or special investments in technologies. Against the backdrop of growing global energy consumption, optimal process control competes with alternatives, especially investments in manufacturing equipment, whose payback periods are relatively long. This makes optimal control methods and techniques increasingly attractive, because the payback period of their implementation is short.

The paper contains selected examples chosen to illustrate theoretical properties as well as practical aspects of optimal process operation.

VI. ACKNOWLEDGMENT

This work was supported by the Slovak Research and Development Agency under contract no. APVV-0582-06 and by grants VEGA no. 1/0036/12 and 1/0095/11.

REFERENCES

[1] T. H. Lee, G. E. Adams, and W. M. Gaines, Computer Process Control: Modeling and Optimisation. John Wiley & Sons, Inc., 1968.

[2] D. P. Bertsekas, Nonlinear Programming. Cambridge, MA: Athena Scientific, 1999.

[3] I. B. Vapnyarskii, "Lagrange multipliers," in Encyclopaedia of Mathematics, M. Hazewinkel, Ed. Springer, 2001. [Online]. Available: http://www.encyclopediaofmath.org/index.php?title=L/l057190

[4] J.-B. Hiriart-Urruty and C. Lemarechal, "Advanced theory and bundle methods," in Convex Analysis and Minimization Algorithms, ser. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], no. 306, vol. 2. Berlin: Springer-Verlag, 1993, pp. 136–193.

[5] L. S. Lasdon, Optimization Theory for Large Systems, ser. Macmillan Series in Operations Research. New York: The Macmillan Company, 1970.

[6] G. B. Dantzig, "Programming of interdependent activities: II mathematical model," Econometrica, vol. 17, no. 3, pp. 200–211, 1949, doi:10.2307/1905523.

[7] ——, Linear Inequalities and Related Systems. Princeton University Press, 1956.

[8] ——, Linear Programming and Extensions. Princeton University Press and the RAND Corporation, 1963.


[9] K. Kostur, Optimal Control I (Optimalne riadenie I). Kosice: Technical University, 1984.

[10] ——, Optimization of Processes (Optimalizacia procesov). Kosice: Technical University, 1989.

[11] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

[12] Wikipedia, "Mathematical optimization — Wikipedia, the free encyclopedia." [Online]. Available: http://en.wikipedia.org/wiki/Mathematical_optimization

[13] A. Mordecai, Nonlinear Programming: Analysis and Methods. Dover Publishing, 2003.

[14] D. G. Luenberger and Y. Ye, Linear and Nonlinear Programming, 3rd ed., ser. International Series in Operations Research & Management Science. New York: Springer, 2008.

[15] R. E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application. New York: John Wiley & Sons, Inc., 1986.

[16] Y. Sawaragi, H. Nakayama, and T. Tanino, Theory of Multiobjective Optimization, vol. 176 of Mathematics in Science and Engineering. Orlando, FL: Academic Press Inc., 1985.

[17] C. Fox, An Introduction to the Calculus of Variations, ser. Dover Books on Mathematics. Dover Publications, 1987.

[18] L. Lebedev and M. Cloud, The Calculus of Variations and Functional Analysis: With Optimal Control and Applications in Mechanics, ser. Series on Stability, Vibration and Control of Systems: Series A. World Scientific, 2003.

[19] J. Ferguson, "Brief survey of the history of the calculus of variations and its applications," 2004. [Online]. Available: http://arxiv.org/abs/math/0402357v1

[20] I. M. Ross, A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers, 2009.

[21] V. M. Becerra, "Optimal control," Scholarpedia, vol. 3, no. 1, p. 5354, 2008. [Online]. Available: http://dx.doi.org/10.4249/scholarpedia.5354

[22] J. Mikles and M. Fikar, Process Modelling, Identification, and Control. Berlin: Springer, 2007.

[23] R. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, pp. 34–45, 1960.

[24] A. Filasova, "Robust control design: An optimal control approach," in Proc. of the IEEE International Conference on Intelligent Engineering Systems INES'99, Stara Lesna, Slovakia, 1999, pp. 515–518.

[25] M. Bakosova, D. Puna, P. Dostal, and J. Zavacka, "Robust stabilization of a chemical reactor," Chemical Papers, vol. 63, no. 5, pp. 527–536, 2009.

[26] M. Bakosova, D. Puna, J. Zavacka, and K. Vanekova, "Robust static output feedback control of a mixing unit," in Proceedings of the European Control Conference 2009. Budapest: EUCA, 2009, pp. 4139–4144.

[27] D. Krokavec and A. Filasova, Diagnostics of Dynamical Systems (Diagnostika dynamickych systemov). Kosice: Elfa, 2007.

[28] ——, "Fault detection based on linear quadratic control performances," in Proceedings of the 10th International Science and Technology Conference Diagnostics of Processes and Systems DPS'2011, Zamosc, Poland, 2011, pp. 52–56.

[29] K. Kostur, Optimal Control II (Optimalne riadenie II). Kosice: Technical University, 1985.

[30] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, 1962.

[31] R. E. Bellman, Dynamic Programming. Princeton, NJ: Princeton University Press, 1957.

[32] K. Kostur, "Optimal control system for a group of technological equipments (System optimalneho riadenia skupiny agregatov)," Automatizace, vol. 30, pp. 199–203, 1987.

[33] ——, "Optimization of heating slabs using a simulation model," in Transaction of Technical University of Kosice. Riecansky Science Publishing Co, 1991, vol. 1, pp. 199–202.

[34] ——, "Optimization of a tunnel furnace," in Transaction of Technical University of Kosice. Riecansky Science Publishing Co, 1994, vol. 4, pp. 189–192.

[35] ——, "Fuel and metal losses cost minimalization during slab heating in the push heating furnace," Hutnicke listy, vol. 49, pp. 189–192, 1994.

[36] ——, "The analysis of optimal production conditions in a pushing furnace," Metallurgy, vol. 36, pp. 43–47, 1996.

[37] K. Kostur and I. Pokorny, "Optimization of heating in pushing furnace," in Proceedings of International Conference on Automation in Control. Kosice: House of Technology, 1990, pp. 163–167.

[38] K. Kostur, "The optimization of burners on tunnel furnace," Metallurgy, vol. 37, pp. 209–214, 1998.

[39] ——, "Simulation model and optimisation of tunnel furnace," in First Joint Conference of International Simulation Societies Proceedings. ETH Zurich, 1994, pp. 230–233.

[40] D. Krokavec, "Convergence of action dependent dual heuristic dynamic programming algorithms in LQ control tasks," in Intelligent Technologies – Theory and Application, New Trends in Intelligent Technologies. Amsterdam: IOS Press, 2002, pp. 72–80.

[41] T. F. Edgar and D. M. Himmelblau, Optimization of Chemical Processes. McGraw-Hill, New York, 1988.

[42] C. Guntern, A. Keller, and K. Hungerbuhler, "Economic Optimization of an Industrial Semi-batch Reactor Applying Dynamic Programming," Industrial and Engineering Chemistry Research, vol. 37, no. 10, pp. 4017–4022, 1998.

[43] W. Ray, Advanced Process Control. McGraw-Hill, New York, 1981.

[44] V. S. Vassiliadis, R. W. H. Sargent, and C. C. Pantelides, "Solution of a Class of Multistage Dynamic Optimization Problems. 1. Problems without Path Constraints," Ind. Eng. Chem. Res., vol. 33, no. 9, pp. 2111–2122, 1994.

[45] ——, "Solution of a Class of Multistage Dynamic Optimization Problems. 2. Problems with Path Constraints," Ind. Eng. Chem. Res., vol. 33, no. 9, pp. 2123–2133, 1994.

[46] E. Sorensen, S. Macchietto, G. Stuart, and S. Skogestad, "Optimal Control and Online Operation of Reactive Batch Distillation," Comp. Chem. Eng., vol. 20, no. 12, pp. 1491–1498, 1996.

[47] I. M. Mujtaba and S. Macchietto, "Efficient Optimization of Batch Distillation with Chemical Reaction using Polynomial Curve Fitting Techniques," Ind. Eng. Chem. Res., vol. 36, no. 6, pp. 2287–2295, 1997.

[48] T. Ishikawa, Y. Natori, L. Liberis, and C. Pantelides, "Modeling and Optimization of Industrial Batch Process for the Production of Dioctyl Phthalate," Comp. Chem. Eng., vol. 21, pp. 1239–1244, 1997.

[49] S. Storen and T. Hertzberg, "The Sequential Quadratic Programming Algorithm for Solving Dynamic Optimization Problems – A Review," Comp. Chem. Eng., vol. 19, pp. 495–500, 1995.

[50] V. Dovi and A. Reverberi, "Optimal Solution of Processes Described by Systems of Differential Algebraic Equations," Chemical Engineering Science, vol. 48, no. 14, pp. 2609–2614, 1993.

[51] L. T. Biegler and R. Hughes, "Process Optimization: A Comparative Case Study," Comp. Chem. Eng., vol. 7, no. 5, pp. 645–661, 1983.

[52] K. Lau and D. Ulrichson, "Effects of Local Constraints on the Convergence Behavior of Sequential Modular Simulators," Comp. Chem. Eng., vol. 15, no. 9, pp. 887–892, 1992.

[53] T. Hertzberg and O. Asbjornsen, Parameter Estimation in Nonlinear Differential Equations: Computer Applications in the Analysis of Data and Plants. Science Press, Princeton, 1977.

[54] L. Biegler, "Solution of Dynamic Optimization Problems by Successive Quadratic Programming and Orthogonal Collocation," Comp. Chem. Eng., vol. 8, no. 3-4, pp. 243–248, 1984.

[55] J. Renfro, A. Morshedi, and O. Asbjornsen, "Simultaneous optimization and solution of systems described by differential algebraic equations," Comp. Chem. Eng., vol. 11, no. 5, pp. 503–517, 1987.

[56] G. Carey and B. A. Finlayson, "Orthogonal Collocation on Finite Elements," Chemical Engineering Science, vol. 30, pp. 587–596, 1975.

[57] B. Finlayson, Nonlinear Analysis in Chemical Engineering. McGraw-Hill, New York, 1980.

[58] J. E. Cuthrell and L. T. Biegler, "Simultaneous Optimization and Solution Methods for Batch Reactor Control Profiles," AIChE J., vol. 13, pp. 49–62, 1989.

[59] J. S. Logsdon and L. T. Biegler, "Accurate Solution of Differential Algebraic Optimization Problems," Ind. Eng. Chem. Res., vol. 18, no. 11, pp. 1628–1639, 1989.

[60] J. W. Eaton and J. B. Rawlings, "Feedback Control of Chemical Processes using Online Optimization Techniques," Comp. Chem. Eng., vol. 14, pp. 469–479, 1990.

[61] D. Ruppen, C. Benthack, and D. Bonvin, "Optimization of Batch Reactor Operation under Parametric Uncertainty – Computational Aspects," Journal of Process Control, vol. 5, no. 4, pp. 235–240, 1995.

[62] M. Cizniar, M. Fikar, and M. A. Latifi, "MATLAB dynamic optimisation code DYNOPT. User's guide," KIRP FCHPT STU Bratislava, Slovak Republic, Technical Report, 2005.


[63] T. Hirmajer, M. Cizniar, M. Fikar, E. Balsa-Canto, and J. R. Banga, "Brief introduction to DOTcvp – dynamic optimization toolbox," in Proceedings of the 8th International Scientific-Technical Conference Process Control, Kouty nad Desnou, Czech Republic, 2008.

[64] R. Paulen, G. Foley, M. Fikar, Z. Kovacs, and P. Czermak, "Minimizing the process time for ultrafiltration/diafiltration under gel polarization conditions," Journal of Membrane Science, vol. 380, no. 1-2, pp. 148–154, Aug. 2011.

[65] T. Hirmajer and M. Fikar, "Optimal Control of a Two-Stage Reactor System," Chemical Papers, vol. 60, no. 5, pp. 381–387, 2006.

[66] M. Fikar, B. Chachuat, and M. A. Latifi, "Optimal operation of alternating activated sludge processes," Control Engineering Practice, vol. 13, no. 7, pp. 853–861, 2005.

[67] M. Henze, C. P. L. Grady, W. Gujer, G. v. R. Marais, and T. Matsuo, "Activated Sludge Model No. 1," IAWQ, London, Tech. Rep. 1, 1987.

[68] J.-L. Vasel, "Contribution a l'etude des transferts d'oxygene en gestion des eaux" (Contribution to the study of oxygen transfer in water management), Ph.D. dissertation, Fondation Universitaire Luxembourgeoise, Arlon, 1988.

[69] M. Fikar and M. A. Latifi, "User's guide for FORTRAN dynamic optimisation code DYNO," LSGC CNRS, Nancy, France; STU Bratislava, Slovak Republic, Tech. Rep. mf0201, 2002.


