1
Outline
• About Trondheim and myself
• Control structure design (plantwide control)
• A procedure for control structure design

I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?

II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies
2
Optimal operation (economics)
• What are we going to use our degrees of freedom for?
• Define scalar cost function J(u0, x, d)
  – u0: degrees of freedom
  – d: disturbances
  – x: states (internal variables)
Typical cost function:
• Optimal operation for given d:
min_uss J(uss, x, d)
subject to:
Model equations: f(uss, x, d) = 0
Operational constraints: g(uss, x, d) ≤ 0
J = cost feed + cost energy – value products
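As a numeric illustration of this steady-state economic optimization, the following is a minimal sketch; the conversion model, prices, and bounds are toy assumptions made up for illustration, not from the lecture:

```python
# Toy sketch of min_u J(u, d) s.t. operational constraints, with
# J = cost feed + cost energy - value products.  All numbers assumed.
from scipy.optimize import minimize

p_F, p_V, p_P = 1.0, 0.5, 4.0         # assumed prices: feed, energy, product

def J(u):
    F, V = u                          # feed rate, boilup (energy use)
    x = V / (V + F)                   # crude assumed conversion model
    return p_F * F + p_V * V - p_P * x * F

res = minimize(J, x0=[2.0, 2.0], method="SLSQP",
               bounds=[(0.1, 5.0), (0.1, 12.0)])   # F <= Fmax, V <= Vmax
F_opt, V_opt = res.x
# With these numbers the feed bound F = Fmax is active while V settles at
# an interior optimum: one active constraint plus one unconstrained DOF.
```

This reproduces the typical situation discussed below: part of the degrees of freedom are used to satisfy active constraints, and the rest are unconstrained.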
3
Optimal operation distillation column
• Distillation at steady state with given p and F: N=2 DOFs, e.g. L and V
• Cost to be minimized (economics)
J = - P where P= pD D + pB B – pF F – pV V
• Constraints
Purity D: for example, xD,impurity ≤ max
Purity B: for example, xB,impurity ≤ max
Flow constraints: min ≤ D, B, L, etc. ≤ max
Column capacity (flooding): V ≤ Vmax, etc.
Pressure: 1) p given; 2) p free: pmin ≤ p ≤ pmax
Feed: 1) F given; 2) F free: F ≤ Fmax
• Optimal operation: Minimize J with respect to steady-state DOFs
4
Optimal operation
minimize J = cost feed + cost energy – value products

Two main cases (modes) depending on market conditions:

1. Given feed: The amount of products is then usually indirectly given, and J = cost energy.
   Optimal operation is then usually unconstrained: “maximize efficiency (energy)”
   Control: Operate at the optimal trade-off (not obvious how to do, and what to control)

2. Feed free: The products are usually much more valuable than the feed, and the energy costs are small.
   Optimal operation is then usually constrained: “maximize production”
   Control: Operate at the bottleneck (“obvious”)
5
Comments optimal operation
• Do not forget to include the feed rate as a degree of freedom!
  – For a paper machine it may be optimal to have max. drying and adjust the speed of the paper machine!
• Control at bottleneck – see later: “Where to set the production rate”
6
Outline
• About Trondheim and myself
• Control structure design (plantwide control)
• A procedure for control structure design

I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?

II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies
7
Step 3. What should we control (c)? (primary controlled variables y1=c)
Outline
• Implementation of optimal operation
• Self-optimizing control
• Uncertainty (d and n)
• Example: Marathon runner
• Methods for finding the “magic” self-optimizing variables:
  A. Large gain: minimum singular value rule
  B. “Brute force” loss evaluation
  C. Optimal combination of measurements
• Example: Recycle process
• Summary
8
Implementation of optimal operation
• Optimal operation for given d*:
min_u J(u, x, d)
subject to:
Model equations: f(u, x, d) = 0
Operational constraints: g(u, x, d) ≤ 0
→ uopt(d*)
Problem: Usually we cannot keep uopt constant because the disturbances d change.
How should we adjust the degrees of freedom (u)?
9
Implementation of optimal operation (cannot keep uopt constant)

”Obvious” solution: Optimizing control
Estimate d from measurements and recompute uopt(d)

Problem: Too complicated (requires a detailed model and a description of the uncertainty)
10
In practice: Hierarchical decomposition with separate layers
What should we control?
11
Self-optimizing control: When constant setpoints are OK
Constant setpoint
12
What c’s should we control?
• Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u,d) = 0
• CONTROL ACTIVE CONSTRAINTS!
  – cs = value of active constraint
  – Implementation of active constraints is usually “obvious”, but may need “back-off” (safety limit) for hard output constraints
• WHAT MORE SHOULD WE CONTROL?
  – Find “self-optimizing” variables c for the remaining unconstrained degrees of freedom u.
13
• Cost to be minimized (economics)
J = - P where P= pD D + pB B – pF F – pV V
• Constraints
  Purity D: for example, xD,impurity ≤ max
  Purity B: for example, xB,impurity ≤ max
  Flow constraints: 0 ≤ D, B, L, etc. ≤ max
  Column capacity (flooding): V ≤ Vmax, etc.
Recall: Optimal operation distillation
14
Expected active constraints distillation
• Valuable product: Purity spec. always active
– Reason: Amount of valuable product (D or B) should always be maximized
• Avoid product “give-away”
(“Sell water as methanol”)
• Also saves energy
• Control implications for valuable product: Control purity at spec.
[Flowsheet: feed = methanol + water; distillate = valuable product (methanol + max. 0.5% water); bottoms = cheap byproduct (water + max. 0.1% methanol)]
15
Expected active constraints distillation:Cheap product
• Over-fractionate cheap product? Trade-off:– Yes, increased recovery of valuable product (less loss)– No, costs energy
• Control implications for cheap product:
  1. Energy expensive: Purity spec. active → Control purity at spec.
  2. Energy “cheap”: Overpurify
(a) Unconstrained optimum given by trade-off between energy and recovery.
In this case it is likely that composition is self-optimizing variable → Possibly control purity at optimum value (overpurify)
(b) Constrained optimum given by column reaching capacity constraint→ Control active capacity constraint (e.g. V=Vmax)
– Methanol + water example: Since methanol loss anyhow is low (0.1% of water), there is not much to gain by overpurifying. Nevertheless, with energy very cheap, it is probably optimal to operate at V=Vmax.
16
Summary: Optimal operation distillation
• Cost to be minimizedJ = - P where P= pD D + pB B – pF F – pV V
• N=2 steady-state degrees of freedom• Active constraints distillation:
– Purity spec. valuable product is always active (“avoid give-away of valuable product”).
– Purity spec. “cheap” product may not be active (may want to overpurify to avoid loss of valuable product – but costs energy)
• Three cases:
  1. Nactive = 2: Two active constraints (for example, xD,impurity = max, xB,impurity = max: “TWO-POINT” COMPOSITION CONTROL)
  2. Nactive = 1: One constraint active (1 unconstrained DOF)
  3. Nactive = 0: No constraints active (2 unconstrained DOFs). Can happen if there are no purity specifications (e.g. byproducts or recycle)

WHAT SHOULD WE CONTROL (TO SATISFY UNCONSTRAINED DOFs)?
Solution: Often compositions, but not always!
17
What should we control? – Sprinter
• Optimal operation of sprinter (100 m), J = T
  – One input: ”power/speed”
  – Active constraint control: maximum speed (”no thinking required”)
18
What should we control? – Marathon
• Optimal operation of marathon runner, J = T
  – No active constraints
  – Any self-optimizing variable c (to control at constant setpoint)?
    • c1 = distance to leader of race
    • c2 = speed
    • c3 = heart rate
    • c4 = level of lactate in muscles
19
Further examples self-optimizing control
• Marathon runner
• Central bank
• Cake baking
• Business systems (KPIs)
• Investment portfolio
• Biology
• Chemical process plants: Optimal blending of gasoline
Define optimal operation (J) and look for ”magic” variable (c) which when kept constant gives acceptable loss (self-optimizing control)
20
More on further examples
• Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%)
• Cake baking: J = nice taste, u = heat input, c = temperature (200 °C)
• Business: J = profit, c = ”key performance indicator (KPI)”, e.g.
  – Response time to order
  – Energy consumption per kg or unit
  – Number of employees
  – Research spending
  Optimal values obtained by ”benchmarking”
• Investment (portfolio management): J = profit, c = fraction of investment in shares (50%)
• Biological systems:
  – ”Self-optimizing” controlled variables c have been found by natural selection
  – Need to do ”reverse engineering”:
    • Find the controlled variables used in nature
    • From this, possibly identify what overall objective J the biological system has been attempting to optimize
21
Unconstrained variables:What should we control?
• Intuition: “Dominant variables” (Shinnar)
• Is there any systematic procedure?
22
What should we control?Systematic procedure
• Systematic: Minimize cost J(u,d*) w.r.t. DOFs u.
1. Control active constraints (constant setpoint is optimal)
2. Remaining unconstrained DOFs (if any): Control “self-optimizing” variables c for which constant setpoints cs = copt(d*) give small (economic) loss
Loss = J - Jopt(d)
when disturbances d ≠ d* occur
c = ? (economics)
y2 = ? (stabilization)
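The loss definition above can be exercised on a toy scalar problem; the cost function, disturbance set, and candidate variables below are assumptions made up for illustration:

```python
# Toy scalar example: J(u, d) = (u - d)**2, so u_opt(d) = d and
# J_opt(d) = 0.  Brute-force loss for two constant-setpoint policies.
def J(u, d):
    return (u - d) ** 2

disturbances = [-1.0, 0.5, 1.0]    # assumed disturbance scenarios

# Candidate A: keep c = u constant at the nominal optimum u_opt(d* = 0) = 0
loss_A = max(J(0.0, d) for d in disturbances)       # worst-case loss = 1.0

# Candidate B (hypothetical better CV): keep c = u - 0.8*d at setpoint 0,
# so the implemented input u = 0.8*d tracks u_opt(d) = d much more closely
loss_B = max(J(0.8 * d, d) for d in disturbances)   # worst-case loss = 0.04
```

Candidate B is the “self-optimizing” choice here: its constant setpoint gives a 25-times smaller worst-case loss over the same disturbances.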
23
Unconstrained variables:
Self-optimizing control
• Self-optimizing control: Constant setpoints cs give ”near-optimal operation” (= acceptable loss L for expected disturbances d and implementation errors n)

Acceptable loss ⇒ self-optimizing control
25
The “easy” constrained variables
[Figure: cost J vs. constrained variable c, with copt = cmin and J(copt) = Jopt]
“Obvious” that we want to keep (control) c at copt
26
The “easy” constrained variables
1) If c = u = manipulated input (MV):
   • Implementation trivial: keep c = u at uopt (= umin or umax)
2) If c = y = output variable (CV):
   • Need to introduce backoff (safety margin): keep c at cs = cmin + backoff, or at cs = cmax − backoff
   a) If the constraint on c can be violated dynamically (only the average matters):
      • Backoff = steady-state measurement error for c (”bias”)
   b) If the constraint on c cannot be violated (”hard constraint”):
      • Backoff = bias + dynamic control error
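A minimal numeric sketch of these backoff rules, for a CV at an upper constraint c ≤ cmax (all numbers below are assumed for illustration):

```python
# Backoff rules for a CV constraint c <= c_max (toy numbers assumed).
c_max = 0.5        # e.g. max impurity spec
bias = 0.02        # steady-state measurement error
dyn_err = 0.08     # dynamic control error (variation around the setpoint)

# (a) constraint may be violated dynamically (only the average matters):
cs_soft = c_max - bias                  # backoff = bias only

# (b) hard constraint, must never be violated:
cs_hard = c_max - (bias + dyn_err)      # backoff = bias + dynamic error

# "Squeeze and shift": halving the dynamic error lets us shift the
# setpoint closer to c_max, reducing the loss caused by the backoff.
cs_squeezed = c_max - (bias + dyn_err / 2)
```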
27
Back-off for CV-constraints
[Figure: cost J vs. c, showing the loss and the backed-off setpoint cs = cmin + backoff]

Backoff = measurement error (bias) + dynamic control error
Error can be measured by variance

Rule: “Squeeze and shift”: reduce the variance (“squeeze”) and then reduce the backoff (“shift”)
28
[Figure: quality histograms illustrating “squeeze and shift”: distribution Q1 (sigma 1) is squeezed to Q2 (sigma 2) and shifted toward the spec limit, giving the cost reduction W2 − W1. © Richalet]
29
The difficult unconstrained variables
[Figure: cost J vs. the selected (remaining unconstrained) controlled variable c, with optimum at copt, Jopt]
30
Optimal operation
[Figure: cost J vs. controlled variable c; a disturbance d moves the optimum copt(d), giving a loss]

Two problems:
1. The optimum moves because of disturbances d: copt(d)
31
Optimal operation
[Figure: as above, with an implementation error n also shifting c away from copt, giving a loss]

Two problems:
1. The optimum moves because of disturbances d: copt(d)
2. Implementation error: c = copt + n
32
Effect of implementation error on cost (“problem 2”)
[Figure: with respect to implementation error, a flat optimum is good and a sharp optimum is bad]
33
Candidate controlled variables
• We are looking for some “magic” variables c to control... What properties do they have?
• Intuitively 1: Should have a small optimal range Δcopt
  – since we are going to keep them constant!
• Intuitively 2: Should have a small “implementation error” n
• Intuitively 3: Should be sensitive to the inputs u (the remaining unconstrained degrees of freedom), that is, the gain G0 from u to c should be large
  – G0: (unscaled) gain from u to c
  – a large gain gives a flat optimum in c
  – Charlie Moore (1980s): maximize the minimum singular value when selecting temperature locations for distillation
• Will show shortly: can combine everything into the “maximum gain rule”:
  – Maximize the scaled gain G = G0 / span(c)
34
[Block diagram: an optimizer provides the setpoint cs = copt; a feedback controller adjusts u to keep the measurement cm = c + n at cs, for a plant subject to disturbance d]

Unconstrained degrees of freedom: justification for “intuitively 2 and 3”
• Want the slope (= gain G0 from u to c) large: corresponds to a flat optimum in c
• Want small n
35
Mathematical local analysis (proof of the “maximum gain rule”)

[Figure: cost J vs. u, with optimum at uopt]
36
Minimum singular value of the scaled gain

Maximum gain rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the scaled gain, i.e. the minimum singular value σ(G) of the appropriately scaled steady-state gain matrix G from u to c.

σ(G) is called the Morari resiliency index (MRI) by Luyben.
Detailed proof: I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad,
``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
37
Maximum gain rule for scalar system
Unconstrained degrees of freedom:
Juu: Hessian for effect of u’s on cost
Problem: Juu can be difficult to obtain.
Fortunately, for a scalar system Juu does not matter.
38
Maximum gain rule in words
Select controlled variables c for which
the gain G0 (=“controllable range”) is large compared to
its span (=sum of optimal variation and control error)
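The rule in words can be sketched for a scalar system; the toy cost function, candidate variables, and error magnitudes below are assumptions for illustration only:

```python
# Scalar sketch of the maximum gain rule on the toy problem
# J(u, d) = (u - d)**2, u_opt(d) = d.  Two hypothetical candidate CVs,
# both with unscaled gain |G0| = 1 from u to c: c1 = u and c2 = u - 0.8*d.
n = 0.1                        # assumed implementation error for both CVs
d_steps = [1.0]                # typical disturbance change(s)

def span(c_opt, noise=n):
    # span(c) = sum_i |Delta c_opt(d_i)| + |n|
    return sum(abs(c_opt(d) - c_opt(0.0)) for d in d_steps) + abs(noise)

G1 = 1.0 / span(lambda d: d)            # c1_opt(d) = u_opt(d) = d
G2 = 1.0 / span(lambda d: d - 0.8 * d)  # c2_opt(d) = 0.2*d: small variation
# G2 > G1, so by the rule c2 is the better self-optimizing variable.
```

Both candidates have the same unscaled gain; it is the small optimal variation of c2 that gives it the larger scaled gain, and hence the smaller loss.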
39
B. “Brute-force” procedure for selecting (primary) controlled variables (Skogestad, 2000)
• Step 1 Determine DOFs for optimization
• Step 2 Definition of optimal operation J (cost and constraints)
• Step 3 Identification of important disturbances
• Step 4 Optimization (nominally and with disturbances)
• Step 5 Identification of candidate controlled variables (use active constraint control)
• Step 6 Evaluation of loss with constant setpoints for alternative controlled variables : Check in particular for feasibility
• Step 7 Evaluation and selection (including controllability analysis)
Case studies: Tennessee-Eastman, propane-propylene splitter, recycle process, heat-integrated distillation
40
Feasibility example: Tennessee Eastman plant

[Figure: J vs. c = purge rate; the curve bends backwards, so the nominal optimum setpoint is infeasible with disturbance 2]

Conclusion: Do not use the purge rate as a controlled variable.
41
Unconstrained degrees of freedom:
C. Optimal measurement combination (Alstad, 2002)
42
Unconstrained degrees of freedom:
C. Optimal measurement combination (Alstad, 2002)
Basis: Want the optimal value of c to be independent of disturbances ⇒ Δcopt = 0 · Δd

• Find the optimal solution as a function of d: uopt(d), yopt(d)
• Linearize this relationship: Δyopt = F Δd, where F is the sensitivity matrix
• Want: c = H y with Δcopt = H Δyopt = 0
• To achieve this for all values of d: H F = 0
• Always possible if #y ≥ #u + #d
• Optimal when we disregard the implementation error (n)
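A minimal numeric sketch of this null-space idea; the sensitivity matrix F below is a made-up 2×1 example (#u = #d = 1, #y = 2), not from the lecture:

```python
import numpy as np

# Null-space sketch: with #y = #u + #d measurements, choose H such that
# H @ F = 0, where Delta y_opt = F Delta d is the optimal sensitivity.
F = np.array([[2.0],
              [1.0]])          # assumed 2x1 sensitivity matrix

# Any row vector in the left null space of F works; take it from the SVD.
U, s, Vt = np.linalg.svd(F)
H = U[:, 1].reshape(1, -1)     # column of U orthogonal to range(F)

# c = H @ y is then (locally) insensitive to the disturbance d.
```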
43
Alstad-method continued
• To handle implementation error: Use “sensitive” measurements, with information about all independent variables (u and d)
44
Summary, unconstrained degrees of freedom: We are looking for “magic” variables to keep at constant setpoints. How can we find them systematically?

Candidates:
A. Start with the maximum gain (minimum singular value) rule.
B. Then: “brute force” evaluation of the most promising alternatives. Evaluate the loss when the candidate variables c are kept constant; in particular, there may be problems with feasibility.
C. If there are no good single candidates: consider linear combinations c = H y (matrix H).
45
Toy Example
48
EXAMPLE: Recycle plant (Luyben, Yu, etc.)
Given feedrate F0 and column pressure:
Dynamic DOFs: Nm = 5; column levels: N0y = 2; steady-state DOFs: N0 = 5 − 2 = 3
49
Recycle plant: Optimal operation
1 remaining unconstrained degree of freedom
50
Control of recycle plant:Conventional structure (“Two-point”: xD)
[Flowsheet with level controllers (LC) and composition controllers (XC) on xD and xB]
Control active constraints (Mr=max and xB=0.015) + xD
51
Luyben rule
Luyben rule (to avoid snowballing):
“Fix a stream in the recycle loop” (F or D)
52
Luyben rule: D constant
Luyben rule (to avoid snowballing):
“Fix a stream in the recycle loop” (F or D)
53
A. Maximum gain rule: Steady-state gain
Luyben rule: not promising economically
Conventional: looks good
54
How did we find the gains in the Table?
1. Find the nominal optimum.
2. Find the (unscaled) gain G0 from the input to the candidate outputs: c = G0 u.
   • In this case there is only a single unconstrained input (DOF); choose u = L.
   • Obtain the gain G0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints are constant (bottom composition fixed in this case).
3. Find the span for each candidate variable:
   • For each disturbance di, make a typical change and reoptimize to obtain the optimal ranges Δcopt(di).
   • For each candidate output, obtain (estimate) the control error (noise) n.
   • The expected variation for c is then: span(c) = Σi |Δcopt(di)| + |n|.
4. Obtain the scaled gain, G = |G0| / span(c).
5. Note: The absolute value (the vector 1-norm) is used here to “sum up” and get the overall span. Alternatively, the 2-norm could be used, which can be viewed as putting less emphasis on the worst case. As an example, assume that the only contribution to the span is the implementation/measurement error, and that the variable we are controlling (c) is the average of 5 measurements of the same y, i.e. c = Σ yi/5, and that each yi has a measurement error of 1, i.e. nyi = 1. Then with the absolute value (1-norm), the contribution to the span from the implementation (measurement) error is span = Σ |nyi|/5 = 5·1/5 = 1, whereas with the 2-norm, span = sqrt(5·(1/5)²) = 0.447. The latter is more reasonable, since we expect the overall measurement error to be reduced when taking the average of many measurements. In any case, the choice of norm is an engineering decision, so there is not really one that is “right” and one that is “wrong”. We often use the 2-norm for mathematical convenience, but there are also physical justifications (as just given!).
IMPORTANT!
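The 1-norm vs. 2-norm span calculation in the note above can be checked numerically:

```python
# The CV is the average of 5 measurements of the same y, c = sum(y_i)/5,
# each measurement with error n_yi = 1 (the example from the note above).
weights = [1 / 5] * 5
errors = [1.0] * 5

# 1-norm: the error contributions simply add up
span_1 = sum(abs(w * e) for w, e in zip(weights, errors))            # = 1.0
# 2-norm: averaging reduces the combined error
span_2 = sum((w * e) ** 2 for w, e in zip(weights, errors)) ** 0.5   # ~0.447
```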
55
B. “Brute force” loss evaluation: Disturbance in F0
Loss with nominally optimal setpoints for Mr, xB and c
Luyben rule:
Conventional
56
B. “Brute force” loss evaluation: Implementation error
Loss with nominally optimal setpoints for Mr, xB and c
Luyben rule:
57
C. Optimal measurement combination
• 1 unconstrained variable (#c = 1)
• 1 (important) disturbance: F0 (#d = 1)
• An “optimal” combination requires 2 “measurements” (#y = #u + #d = 2)
  – For example, c = h1 L + h2 F
• BUT: Not much to be gained compared to control of single variable (e.g. L/F or xD)
58
Conclusion: Control of recycle plant
Active constraint: Mr = Mrmax
Active constraint: xB = xBmin
L/F constant: Easier than “two-point” control
Assumption: Minimize energy (V)
Self-optimizing
59
Recycle systems:
Do not recommend Luyben’s rule of fixing a flow in each recycle loop
(even to avoid “snowballing”)
60
Summary: Self-optimizing Control
Self-optimizing control is when acceptable operation can be achieved using constant setpoints (cs) for the controlled variables c (without the need to re-optimize when disturbances occur).
c=cs
61
Summary: Procedure selection controlled variables
1. Define economics and operational constraints
2. Identify degrees of freedom and important disturbances
3. Optimize for various disturbances
4. Identify (and control) active constraints (off-line calculations)
   • May vary depending on the operating region. For each operating region, do step 5:
5. Identify “self-optimizing” controlled variables for the remaining degrees of freedom
   1. (A) Identify promising (single) measurements from the “maximum gain rule” (gain = minimum singular value)
      • (C) Possibly consider measurement combinations if no promising single measurements are found
   2. (B) “Brute force” evaluation of the loss for promising alternatives
      • Necessary because the “maximum gain rule” is local.
      • In particular: look out for feasibility problems.
   3. Controllability evaluation for promising alternatives
62
Summary ”self-optimizing” control
• Operation of most real system: Constant setpoint policy (c = cs)
– Central bank
– Business systems: KPI’s
– Biological systems
– Chemical processes
• Goal: Find controlled variables c such that a constant setpoint policy gives acceptable operation in spite of uncertainty ⇒ self-optimizing control
• Method A: Maximize the scaled gain σ(G)
• Method B: Evaluate loss L = J - Jopt
• Method C: Optimal linear measurement combination: c = H y where HF=0
63
Outline
• Control structure design (plantwide control)
• A procedure for control structure design

I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?
II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies