
Online Data Reconciliation with Poor Redundancy Systems

Flavio Manenti,* Maria Grazia Grottoli, and Sauro Pierucci

CMIC Department “Giulio Natta”, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy

Ind. Eng. Chem. Res. 2011, 50, 14105–14114 | dx.doi.org/10.1021/ie202259b | pubs.acs.org/IECR | Published: November 10, 2011

ABSTRACT: The paper deals with the integrated solution of different model-based optimization levels to face the problem of inferring and reconciling online plant measurements practically, under the condition of poor measure redundancy, because of a lack of instrumentation installed in the field. The novelty of the proposed computer-aided process engineering (CAPE) solution is in the simultaneous integration of different optimization levels: (i) the data reconciliation based on a detailed process simulation; (ii) the introduction and estimation of certain adaptive parameters, to match the current process conditions as well as to confer a certain generality on it; and (iii) the use of a set of efficient optimizers to improve plant operations. The online feasibility of the proposed CAPE solution is validated on a large-scale sulfur recovery unit (SRU) of an oil refinery.

1. INTRODUCTION

Process data reconciliation and its benefits have been widely studied in the literature for a long time,1–3 and, nowadays, research groups are focusing their activities on dynamic data reconciliation and on the search for robustness in determining gross errors.4–9 Nevertheless, beyond these and other open issues, it is not immediately obvious why even basic data reconciliation is not yet used everywhere in the process industry.10

It is worth making a premise to understand it. Many industrial plants and processes within oil refineries were constructed with the objective of producing the maximum amount of chemical products and commodities, without accounting for the need to reduce emissions, save energy and resources, and safeguard the environment. Certain refinery processes, above all the ones that do not increase the net present value of the plant, have been designed, engineered, and constructed so as to reduce instrumentation costs.

A typical example is the case of sulfur recovery units (SRUs), which do not directly increase the net profit margin of the oil refinery (elemental sulfur has a low market price11), but they are becoming the key point in handling pollutant emissions at the stacks, especially in view of the increasingly stringent environmental regulations of several industrialized countries. These processes are characterized by very poor instrumentation, such that, even though it seems unbelievable nowadays, even the control and field operators do not really know the effective plant operating conditions. Clearly, this situation hinders the application of any type of advanced solution to manage and optimize operations, from data reconciliation to model predictive control and real-time optimization, with all their related benefits.12–14

The present paper provides a possible computer-aided process engineering (CAPE) solution to bridge the current gap between theory and industrial practice, without the need to install any additional instrumentation. The key point is the integration of different optimization levels: to increase the level of redundancy of the overall system by means of a detailed model, to adapt the model to the current plant conditions by estimating certain selected parameters, and to optimize the plant performance based on the inferred and reconciled data (sections 2 and 3).

The proposed CAPE solution has been proven on a distributed control system (DCS) of an operating SRU to check its online feasibility and its effectiveness in promptly providing a coherent picture of the plant operating conditions, even when the few raw data available are affected by gross errors.

2. ARCHITECTURE OF THE CAPE SOLUTION

Interactive software, hardware, and information technology systems are qualitatively represented in Figure 1. Certain reconciled datasets are acquired from the historical server of the DCS and used to initialize the so-called “Error-In-Variable” method15–18 (EVM, described later), which is an optimization problem aimed at estimating the adaptive parameters and, hence, adapting the detailed process simulation to the current plant operating conditions. The adaptive parameters are used to simulate the process and reproduce a reliable and coherent picture of the operating conditions; in this case, the data used to initialize the process simulation are the fresh data coming from the field (not yet reconciled). The adapted process simulation is used as the basis to solve the optimization problem of data reconciliation. The use of commercial simulators forces us to adopt a sequential strategy to solve the data reconciliation. By sequential strategy, we mean that the process simulation is solved iteratively, at each step of the optimizer within the reconciler. Such an approach is numerically similar to the feasible path (or partially discretized) approach adopted in dynamic optimization.19,20 Since the possible gross errors that affect the dataset need to be found, certain very robust methodologies must be used to accomplish this step, with the possible drawback of heavy computational times. Once the fresh dataset has been reconciled, it is collected in the historical database and made available for the possible next executions of the EVM method and the re-estimation of the adaptive parameters. Now, with a reconciled dataset, it is possible to perform an effective plant optimization, called real-time optimization (RTO); in this case, efficient optimizers are needed, since possible gross errors have already been removed by the robust methodologies.

Received: May 5, 2011
Revised: November 1, 2011
Accepted: November 10, 2011


2.1. Detailed Process Simulation. From a practical point of view, the traditional approach is to develop a detailed simulation to properly infer all the missing measures. Since SRUs usually involve a thermal reaction furnace, together with a series of catalytic (Claus) converters, complex kinetic schemes coupled with computational fluid dynamics (CFD) studies could be adopted in their modeling. Unfortunately, a CFD approach is well-known to be computationally heavy and clearly ineffective for online applications (section 3); thus, certain specific and well-established correlations are adopted to reasonably characterize the behavior of these complex reactors (see Signor et al.11). The correlations have been fully integrated in the commercial process simulator PRO/II21 by Simsci-Esscor/Invensys; hence, PRO/II has been used to model all of the remaining unit operations and the ancillary equipment.

Nevertheless, a detailed simulation is not enough to properly reconcile raw datasets, since the current conditions of the real plant may be far from the ideal operating conditions of the process simulation, which, however detailed, usually cannot account for certain phenomena (fouling, cleanliness, efficiency, etc.). Thus, there is the need to introduce the adaptive parameters and estimate them to match the real plant conditions and follow its medium- and long-term evolution. From a more general point of view, these adaptive parameters may be used to adapt the process simulation not only to different operating conditions, but also to different plants with similar layouts.

At last, there is the need to couple the detailed and adaptive simulation with the most appealing numerical libraries, so as to have both robust and efficient optimizers available. Robust optimizers are essential to identify gross errors within the datasets coming from the field/DCS and, if possible, to correct them; efficient solvers are required to accelerate the overall procedure and ensure the online effectiveness of the plant optimization based on the reconciled and inferred measures. The complexity of the described solution is qualitatively reported in Figure 1.

Some authors22 underlined that optimization problems subject to process model constraints are better solved by means of simultaneous strategies, whereas the use of a commercial package such as PRO/II forces one to adopt sequential strategies.19,20 Nevertheless, as recently discussed by Signor et al.,11 the wide diffusion of commercial process simulators in the process industry may lead to a series of benefits:
• The mathematical model can cover each level of detail, according to the model libraries proposed by the commercial simulator. The selected degree of detail should be a good compromise between the process characterization and the computational effort.
• Some consolidated solutions, especially dictated by practical experience and already implemented in the most common commercial process simulators, can be successfully involved in the solution of data reconciliation problems.
• When using a commercial simulator that is also adopted for process design, the data reconciliation may have a feedback on both the instrumentation and the process design itself (plant debottlenecking, revamping, etc.).
• Engineering societies and production sites usually already have licenses for process simulators (no additional charges for software licenses).
• Last, but not least, the wide diffusion of commercial simulators in the process industries could confer a certain generality on the proposed approach, at least for SRUs having similar layouts (almost 80% of the SRUs operating worldwide).

2.2. Optimization Levels. Three different optimization levels of the process control hierarchy must be solved and coupled with the aforementioned detailed process simulation when we face the condition of poor instrumentation installed in the field: data reconciliation, adaptive parameter estimation, and economic plant optimization based on reliable data.

Figure 1. Hardware–software architecture of the proposed computer-aided process engineering (CAPE) solution.

2.2.1. Data Reconciliation. Data reconciliation is a useful tool to reconcile the measures so as to fulfill the material and energy balances of every process unit and plant subsection characterized by an adequate measurement redundancy. The idea of reconciling measures is quite old and can be traced back to the early 1950s. Nevertheless, the recent implementation of advanced process control23–25 and optimization techniques, as well as the advances in information technology, which is the fundamental support to enterprise resource planning and the decision-making process,26,27 are forcing the process industries to take an interest in the performance and use of robust tools for process data analysis and reconciliation.28,29

To provide a coherent picture of the plant, the objective is to minimize a function F, which is usually the weighted sum of squared residuals between measured and reconciled variable values:

$$\min_{x} \; F = \sum_i \mu_i \left( m_i - x_i \right)^2 \qquad (1)$$

subject to

$$g(x) = 0, \qquad h(x) \le 0$$

where g(x) = 0 and h(x) ≤ 0 are the equality and inequality constraints (model equations of the process); μi is the weight vector, usually the inverse of the variance or standard deviation; xi is the reconciled value; and mi is the measured value.

Linear data reconciliation is usually employed in facilities and utilities for steam generation, where the only component is water (other components are practically negligible). These processes, typical of the power field and of the facilities and utilities processes of the oil and gas field, do not require any in-line analyzer to measure molar compositions: the overall mass (and energy) balances happen to be sufficient to characterize the process. Apart from a stream compensation that aims to equalize the vapor flow rates with the liquid ones, the data reconciliation problem can be easily defined as follows:

$$\min_{w_{\mathrm{rec}}} \; F = \sum_i \mu_i \left( w_{i,\mathrm{meas}} - w_{i,\mathrm{rec}} \right)^2 \qquad (2)$$

subject to

$$g(w_{\mathrm{rec}}) = 0, \qquad h(w_{\mathrm{rec}}) \le 0$$

where wi is the mass flow rate. Depending on the redundancy, or the degree of redundancy (DOR), which is defined as

$$\mathrm{DOR} = \mathrm{equations} + \mathrm{measures} - \mathrm{reconciled} \qquad (3)$$

five different situations can (globally or locally) occur:
• The total amount of measures and equations is significantly larger than the number of process flow rates. The data reconciliation can be regularly carried out.
• The total amount of measures and equations is slightly larger than the number of process flow rates. In such a condition of reduced redundancy, the data reconciliation can be regularly carried out, even if it may be difficult to detect possible outliers.30,31
• The total amount of measures and equations is equal to the number of process flow rates. The reconciliation becomes critical, since the presence of one outlier makes the reconciliation infeasible and may lead to the so-called “masking” and “swamping” effects.7,9 However, the data reconciliation can still be carried out.
• The total amount of measures and equations is slightly smaller than the number of process flow rates (and subject to certain conditions not analyzed here). The reconciliation problem is transformed into a coaptation problem. Under the assumption of a total absence of outliers, the missing data can generally be evaluated.
• The total amount of measures and equations is significantly smaller than the number of process flow rates. No actions are currently possible in this case, except for the data reconciliation of certain specific subsections that locally agree with one of the previous points.
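To make eq 3 concrete, here is a minimal C++ sketch (a hypothetical helper, not part of the paper's software) that classifies a subsection into the five situations above; the margin separating "slightly" from "significantly" is an assumption chosen only for illustration.

```cpp
#include <iostream>
#include <string>

// Hypothetical illustration of eq 3: DOR = equations + measures - reconciled.
// "margin" (assumed here) separates "slightly" from "significantly".
std::string classifyDOR(int equations, int measures, int reconciled, int margin = 2) {
    const int dor = equations + measures - reconciled;  // eq 3
    if (dor >= margin)  return "regular data reconciliation";
    if (dor > 0)        return "reduced redundancy: feasible, but outliers hard to detect";
    if (dor == 0)       return "critical: a single outlier makes reconciliation infeasible";
    if (dor > -margin)  return "coaptation problem: missing data estimable if no outliers";
    return "no global action possible: reconcile only locally redundant subsections";
}

int main() {
    // Example subsection: 12 balance equations, 9 measures, 18 unknowns (DOR = 3).
    std::cout << classifyDOR(12, 9, 18) << std::endl;
    return 0;
}
```

As the text stresses next, such a check must in practice be repeated subsection by subsection, since a positive overall DOR does not guarantee local reconcilability.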

It is worth highlighting that a feasible reconciliation can be guaranteed by ensuring an adequate measurement distribution in the field and the development of an appropriate process control scheme. Actually, even though the overall DOR > 0, the data reconciliation could be infeasible for certain plant subsections; analogously, when DOR < 0, certain plant subsections could still be reconciled. In other words, a positive DOR is a necessary, but not sufficient, condition for data reconciliation.

Contrary to utilities and steam generation processes, with multicomponent process flow rates the data reconciliation problem is extended from the linear case to the bilinear case. Actually, it is necessary to reconcile not only the overall material flow rates, but also the component mass flow rates. This unavoidably requires certain in-line analyzers, besides the flow, pressure, and temperature measures. The data reconciliation problem assumes the following form:

$$\min_{w_{\mathrm{rec}},\, n_{\mathrm{rec}}} \; F = \sum_i \mu_i \left( n_{i,\mathrm{meas}} \, w_{i,\mathrm{meas}} - n_{i,\mathrm{rec}} \, w_{i,\mathrm{rec}} \right)^2 \qquad (4)$$

subject to

$$g(n_{\mathrm{rec}}, w_{\mathrm{rec}}) = 0, \qquad h(n_{\mathrm{rec}}, w_{\mathrm{rec}}) \le 0$$

According to several authors,1,3 it is useful to keep the problem linear, when possible, from a computational point of view. Thus, introducing the overall component flow rate (Ni = ni·wi), it is possible to write the following bilinear objective function:

$$\min_{w_{\mathrm{rec}},\, N_{\mathrm{rec}}} \; F = \sum_i \mu_i \left( w_{i,\mathrm{meas}} - w_{i,\mathrm{rec}} \right)^2 + \sum_i \nu_i \left( N_{i,\mathrm{meas}} - N_{i,\mathrm{rec}} \right)^2 \qquad (5)$$

subject to

$$g(N_{\mathrm{rec}}, w_{\mathrm{rec}}) = 0, \qquad h(N_{\mathrm{rec}}, w_{\mathrm{rec}}) \le 0$$

where νi is a weight vector. The formulation described by eq 5 allows one to overcome the initial nonlinearities of the objective function, by means of specific data pre- and post-processing. The same procedure can be followed, for example, in the simultaneous solution of mass and energy balances:

$$\min_{w_{\mathrm{rec}},\, T_{\mathrm{rec}}} \; F = \sum_i \mu_i \left( w_{i,\mathrm{meas}} \, c_p \, T_{i,\mathrm{meas}} - w_{i,\mathrm{rec}} \, c_p \, T_{i,\mathrm{rec}} \right)^2 \qquad (6)$$

subject to

$$g(T_{\mathrm{rec}}, w_{\mathrm{rec}}) = 0, \qquad h(T_{\mathrm{rec}}, w_{\mathrm{rec}}) \le 0$$

by introducing Hi = wi·cp·Ti and, therefore, H̃i = Hi/wi. The reconciliation problem becomes

$$\min_{w_{\mathrm{rec}},\, H_{\mathrm{rec}}} \; F = \sum_i \mu_i \left( w_{i,\mathrm{meas}} - w_{i,\mathrm{rec}} \right)^2 + \sum_i \nu_i \left( \tilde{H}_{i,\mathrm{meas}} - \tilde{H}_{i,\mathrm{rec}} \right)^2 \qquad (7)$$


subject to

$$g(T_{\mathrm{rec}}, w_{\mathrm{rec}}) = 0, \qquad h(T_{\mathrm{rec}}, w_{\mathrm{rec}}) \le 0$$
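As an illustration of the pre- and post-processing that keeps eqs 5 and 7 linear in each reconciled term, the following minimal C++ sketch (hypothetical names, not the paper's code) builds Ni = ni·wi before calling the reconciler and recovers ni = Ni/wi afterward.

```cpp
#include <vector>

// Hypothetical stream record for the bilinear reconciliation of eq 5.
struct Stream {
    double w;  // overall mass flow rate
    double n;  // component fraction (mass basis)
    double N;  // overall component flow, N = n * w
};

// Pre-processing: build N_i = n_i * w_i so the objective of eq 5 becomes a
// sum of squared residuals in w and N only.
void preProcess(std::vector<Stream>& s) {
    for (auto& st : s) st.N = st.n * st.w;
}

// Post-processing: once w_rec and N_rec are returned by the reconciler,
// recover the reconciled compositions as n_i = N_i / w_i.
void postProcess(std::vector<Stream>& s) {
    for (auto& st : s)
        if (st.w != 0.0) st.n = st.N / st.w;
}
```

The same device, applied to H̃i = Hi/wi, yields the mass-energy form of eq 7.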

Sometimes, certain problems require the simultaneous reconciliation of overall mass, component, and energy balances. By means of the aforementioned devices, it is possible to transform the original problem into a multilinear data reconciliation; theoretically, there are no limits to the number of linear terms.

Contrary to the aforementioned techniques, nonlinear data reconciliation is quickly gaining interest in the process industry, especially since the coupling of a reconciliation tool with an existing process simulation package may lead to several advantages. First of all, there is the possibility to base the entire reconciliation on detailed mathematical models, which go beyond the basic mass and energy balances. This may strongly increase the data reconciliation accuracy and significantly support the detection of gross errors and masked outliers. The formulation of nonlinear data reconciliation problems corresponds to the aforementioned cases, with a relevant difference in the constraints: physical-chemical properties, thermodynamic, equilibrium, and hydraulic relations, etc., all deriving from a detailed process modeling, are introduced as additional (and nonlinear) constraints.

For example, let us consider a depropanizer column with three flow measures on the feed, bottom, and distillate streams, respectively. To obtain an effective reconciliation, it is surely preferable to solve the mass reconciliation problem by implementing the entire physical-chemical model of the column rather than basing the results only on the overall mass balance. The computational effort to solve the model-based (nonlinear) problem increases, but, considering the industrial clock requirements (i.e., one data reconciliation per hour), the current computational power and existing algorithms ensure fast solutions of nonlinear reconciliation problems, making their online implementation feasible with very large CPU margins (see section 3).

2.2.2. Adaptive Parameter Estimation. The second problem to be solved is the so-called “Error-in-Variable” method (EVM), originally proposed by Biegler, to whom one could refer for more details.16,17,32 Briefly, it is formulated as follows:

$$\min_{x_i,\, \theta} \; \Phi = \sum_{i=1}^{\mathrm{SSC}} \left( m_i - x_i \right)^T Q^{-1} \left( m_i - x_i \right) \qquad (8)$$

subject to

$$f(x_i, \theta) = 0$$

where Φ is the objective function; x and m are the vectors of reconciled and measured values, respectively; Q is the positive definite diagonal matrix of weights; and f(xi, θ) = 0 represents the model constraints to which the minimization problem is subject. The key point of the EVM is its large size, with respect to the data reconciliation, since its degrees of freedom are the adaptive parameters θ and the reconciled vectors of each steady-state condition (SSC) acquired from the historical server of the DCS.
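A minimal sketch of how the objective of eq 8 could be evaluated, assuming Q diagonal as stated and treating the model constraints as a black box handled by the simulator; all names are illustrative, not the BzzMath interface.

```cpp
#include <vector>

// Hypothetical EVM objective (eq 8): sum over the steady-state conditions
// (SSC) of (m_i - x_i)^T Q^{-1} (m_i - x_i), with Q diagonal.
double evmObjective(const std::vector<std::vector<double>>& m,   // measured datasets
                    const std::vector<std::vector<double>>& x,   // reconciled datasets
                    const std::vector<double>& qDiag)            // diagonal of Q
{
    double phi = 0.0;
    for (std::size_t i = 0; i < m.size(); ++i)          // loop over SSC datasets
        for (std::size_t j = 0; j < m[i].size(); ++j)   // loop over variables
            phi += (m[i][j] - x[i][j]) * (m[i][j] - x[i][j]) / qDiag[j];
    return phi;
}
```

The degrees of freedom handed to the optimizer are both θ and every reconciled vector xi, which is precisely what makes the EVM the largest of the three optimization levels.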

2.2.3. Economic Real-Time Optimization. The third optimization problem is of an economic nature. The reliable and coherent picture of how the process is really operating can be exploited to implement certain production policies to improve, for example, the process yield, the process unit efficiency, or the energy saving. It is needless to say that every economic optimization is ineffective whenever the process data are not properly reconciled, since the effects of a small error/inconsistency in the process data are strongly amplified in the decision-making process and in the implementation of any type of economic policy. A general formulation of the economic optimization level is as follows:

$$\min_{x, y \in \mathbb{R};\; b, n \in \mathbb{N}} \; \Phi = \sum_{i=1}^{N_{\mathrm{plants}}} \cdots \sum_{j=1}^{N_{\mathrm{processes}}} \; \sum_{k=1}^{N_{\mathrm{units}}} \left[ \mathrm{REVENUES}(x, y, b, n)_{i, \ldots, j, k} - \mathrm{COSTS}(x, y, b, n)_{i, \ldots, j, k} \right] \qquad (9)$$

subject to

$$f(x, y, b, n) = 0, \qquad g(x, y, b, n) \ge 0$$

where REVENUES and COSTS are related to each process unit of each process, of each plant, of each production site. The degrees of freedom of the economic optimization are continuous variables (x, y) (e.g., the throughput), but it is also possible to introduce discrete (Boolean (b) and integer (n)) variables that involve the decision-making process (e.g., the ith process is on (bi = 1) or off (bi = 0)).

Often, the economic optimization requires very efficient optimizers to ensure its real-time application and, hence, its effectiveness. Certain moving horizon and rolling horizon methodologies have been defined and applied in the field.12,33
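The mixed continuous/discrete nature of eq 9 can be illustrated by the short sketch below (hypothetical types and data; the actual optimizers come from the BzzMath library), where an on/off Boolean per process gates its contribution to the plant-wide net value:

```cpp
#include <vector>

// Hypothetical per-process economics for eq 9: the contribution is counted
// only when the process is switched on (Boolean degree of freedom b_i).
struct ProcessEcon {
    double revenues;  // REVENUES(x, y, b, n) evaluated at the current point
    double costs;     // COSTS(x, y, b, n) evaluated at the current point
    bool   on;        // Boolean degree of freedom b_i
};

double plantNetValue(const std::vector<ProcessEcon>& procs) {
    double net = 0.0;
    for (const auto& p : procs)
        if (p.on) net += p.revenues - p.costs;
    return net;  // eq 9 optimizes this subject to f = 0 and g >= 0
}
```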

2.3. Object-Oriented Programming. The use of adaptive simulation to address the parameter estimation and the data reconciliation is described in the previous work by Signor et al.,11 which is the fundamental basis for the present research activity. The novelty of the CAPE solution proposed in this work is the full integration (see Figure 1) of all the solvers for the adaptive simulation-based optimization problems mentioned above into a single, conscious algorithm. This is possible by exploiting certain features of object-oriented programming, such as masking, encapsulation, polymorphism, and inheritance.

Before starting the discussion, it is worth emphasizing that all the constraints of the problem are managed as a black box, since we are using a commercial package to simulate the SRU. This means that, although with several important checks, we are forced to treat the optimization problems as a type of unconstrained optimization, where the constraints are separately solved (by means of the nonlinear system solver of the same commercial package). The important checks are especially related to the convergence of the procedure, where several discontinuities coming from the black box of constraints may lead to strong multimodalities.34

Nevertheless, the separation of the constraints from the convergence of the optimization problems makes feasible the combination of different optimizers into a single conscious optimizer, which is able to self-manage its convergence path and become either more robust or more efficient, according to the specific situation, also exploiting parallel computing, if available. In fact, the consciousness of the integrated optimizer allows one to automatically identify the number of available processors on shared-memory machines and send the calculations there, to either accelerate the convergence or improve the robustness accordingly. This is particularly important if we consider that the data reconciliation automatically becomes more and more robust, based on the number of gross errors detected by the numerical methods designed to identify them (section 2.4), whereas the EVM and plant optimization become more and more efficient as convergence is approached.


Nevertheless, it is important to emphasize that the parallel computing is conscious of OpenMP directives only; MPI directives are not considered. At last, it is worth noting that, in turn, all the single entities included in the integrated CAPE solution implement several numerical methods, which are automatically managed to improve the solution of the specific problem. The tests on the computational efforts described in section 3 are performed on a single core (parallel computing disabled), for the sake of clarity.

Thus, rather than using three different objects coming from three different C++ classes to solve, respectively, the data reconciliation, the parameter estimation, and the plant optimization, it is possible to exploit the features of C++ and to develop a single object managing all the optimization problems at best. From a mathematical point of view, this corresponds to merging the three aforementioned optimization issues into a single global optimization with the summation of all of the degrees of freedom involved in the original problems. This is possible especially for three reasons: (i) C++ polymorphism allows one to merge the optimizers into a single C++ class; (ii) C++ inheritance allows one to preserve their single features and make them available when the specific situation requires them; and (iii) the constraints are invoked by the global solver when needed, and the number of invocations is usually proportional to the robustness of the system. More details on the conscious approach using object-oriented programming are reported elsewhere.12
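The following compact C++ sketch illustrates, with invented class names, how polymorphism and inheritance support a single "conscious" optimizer; it is a conceptual illustration, not the BzzMath implementation.

```cpp
#include <cmath>

// Common interface (polymorphism): every optimizer exposes one step.
class Optimizer {
public:
    virtual ~Optimizer() = default;
    virtual double step(double x) = 0;  // returns the improved iterate
};

// Robust optimizer: cautious steps, tolerant of gross errors (stub logic).
class RobustOptimizer : public Optimizer {
public:
    double step(double x) override { return x - 0.1 * 2.0 * x; }
};

// Efficient optimizer: aggressive steps near convergence (stub logic).
class EfficientOptimizer : public Optimizer {
public:
    double step(double x) override { return 0.5 * x; }
};

// "Conscious" integrated optimizer: owns both behaviors (inheritance keeps
// their individual features available) and self-selects along the
// convergence path; the distance-to-convergence test is a crude stand-in.
class IntegratedOptimizer {
public:
    double step(double x) {
        Optimizer& chosen = (std::fabs(x) > 1.0)
                              ? static_cast<Optimizer&>(robust_)
                              : static_cast<Optimizer&>(efficient_);
        return chosen.step(x);
    }
private:
    RobustOptimizer robust_;
    EfficientOptimizer efficient_;
};
```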

2.4. Numerical Methods. To solve the aforementioned optimization problems, it is necessary to use an appropriate combination of tools: robust optimizers and observers, to accomplish the data reconciliation; efficient optimizers, to accomplish the large-scale problem of parameter estimation through the EVM; and efficient optimizers, to be effective in optimizing the plant performance. For the parameter estimation and the optimization of plant performance, we adopted the optimizers of the BzzMath library in their parallel computing release, already explained in detail elsewhere.9,12,34,35 Conversely, at the data reconciliation level, the gross errors and bad-quality data coming from the field must be effectively identified and, if possible, revised online. To do so, a new family of algorithms is adopted: the so-called clever mean and clever variance methods are implemented to check whether each single measured value is good or not and, hence, to have a filter on possible gross errors. The general problem broached here is the detection of gross errors in a set of n experimental points yi (i = 1, ..., n) of the population Y, with n being very large (typical of industrial cases). We adopt a novel robust criterion that has the same efficiency as mean evaluation, but also the same robustness as median-based observers.6,7 Their estimation is quite trivial; while the dataset is being read, the following quantities are calculated:

$$\mathrm{sum} = \sum_{i=1}^{n} y_i \qquad (10)$$

$$\mathrm{sq} = \sum_{i=1}^{n} y_i^2 \qquad (11)$$

and a predetermined number of maximum and minimum valuesis collected. Let

$$cm_0 = \frac{\sum_{i=1}^{n} y_i}{n} = \bar{y} \qquad (12)$$

denote the zeroth-order clever mean and

$$cv_0 = \frac{\sum_{i=1}^{n} (y_i - cm_0)^2}{n - 1} = s^2 \qquad (13)$$

the zeroth-order clever variance. Assuming y1* to be the first possible outlier, and removing that value from the mean and variance, results in

$$cm_1 = \frac{\sum_{i=1}^{n} y_i - y_1^*}{n - 1} \qquad (14)$$

$$cv_1 = \frac{\sum_{i=1}^{n} (y_i - cm_1)^2 - (y_1^* - cm_1)^2}{n - 2} = \frac{\mathrm{sq} + n \, (cm_1)^2 - 2 \, cm_1 \, \mathrm{sum} - (y_1^* - cm_1)^2}{n - 2} \qquad (15)$$

If

$$|cm_1 - y_1^*| > \delta \sqrt{cv_1} \qquad (16)$$

where δ is a threshold value (e.g., 2.5), the experimental point y1* can be considered to be an outlier, and the values of cm1 and cv1 are estimations of the first-order clever mean and clever variance, respectively.

If an outlier exists, the procedure is iterated: a new possible outlier y2* is selected, and cm2 and cv2 are both calculated by also simulating the removal of this new point y2*; if the elimination of this point satisfies the relation

$$|cm_2 - y_2^*| > \delta \sqrt{cv_2} \qquad (17)$$

it must also be considered to be an outlier. The procedure goes on until yk* satisfies the condition described by eq 17,

$$|cm_k - y_k^*| > \delta \sqrt{cv_k} \qquad (18)$$

when its removal is simulated, whereas y*k+1 does not:

$$|cm_{k+1} - y_{k+1}^*| < \delta \sqrt{cv_{k+1}} \qquad (19)$$

Please note the following:
• The selection of a possible outlier yk* is very simple indeed: it is whichever of the two observations currently representing the minimum and maximum values (after the removal of the previous outliers) minimizes cvk.
• The clever mean (cm) might maintain its value while outliers are progressively removed, for two reasons: when the number of data is particularly large, the arithmetic mean can change only slightly, even though an outlier is removed; and by removing two outliers that are symmetric with respect to the expected value, the clever mean remains unchanged. It would be an error to check for outliers only by looking at the value of the clever mean.


• The clever variance (cv), on the other hand, has a monotonically decreasing trend while outliers are gradually removed; moreover, it is practically unvaried, or even increases, when the observation removed is not a real outlier.
• If the clever variance does not increase when the observation yk* is removed, and yk* satisfies the relation described by eq 18, then yk* is an outlier.
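Putting eqs 10–19 together, the screening procedure can be sketched in C++ as follows; this is an illustration of the algorithm as described here, not the BzzMath implementation (which, for very large datasets, pre-collects a fixed number of extreme values in a single pass instead of rescanning).

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch of the clever mean / clever variance screening
// (eqs 10-19). Returns the detected outliers, most suspicious first.
std::vector<double> detectOutliers(std::vector<double> y, double delta = 2.5) {
    std::vector<double> outliers;
    while (y.size() > 2) {
        double sum = 0.0, sq = 0.0;                    // eqs 10 and 11
        for (double v : y) { sum += v; sq += v * v; }
        const double n = static_cast<double>(y.size());

        // Candidate outlier y_k*: whichever of the current extremes
        // (min or max) minimizes the clever variance cv_k (eqs 14-15).
        auto [mn, mx] = std::minmax_element(y.begin(), y.end());
        double cmK = 0.0, cvK = 1e300, cand = 0.0;
        for (double c : {*mn, *mx}) {
            const double cm = (sum - c) / (n - 1.0);                  // eq 14
            const double cv = (sq + n * cm * cm - 2.0 * cm * sum
                               - (c - cm) * (c - cm)) / (n - 2.0);    // eq 15
            if (cv < cvK) { cvK = cv; cmK = cm; cand = c; }
        }

        // Stop as soon as eq 19 holds: the candidate is not an outlier.
        if (std::fabs(cmK - cand) <= delta * std::sqrt(std::max(cvK, 0.0)))
            break;

        outliers.push_back(cand);                       // eq 18 satisfied
        y.erase(std::find(y.begin(), y.end(), cand));   // commit the removal
    }
    return outliers;
}
```

For example, with δ = 2.5 and a dataset whose extremes lie several standard deviations away from the bulk, the loop removes them one by one and stops as soon as eq 19 is satisfied.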

2.5. Hardware. The program must be completely interfaced with the DCS to get both the data from the plant historical database and the fresh raw data acquired from the field to be reconciled. This is not a problem since, nowadays, the majority of DCSs include an application server and an OPC (OLE for Process Control) interface. The former is a server dedicated to all the external tools that must continuously interact with the DCS for input and output signals; the latter is a set of directives to transfer input/output signals. Also, DCSs are managed by information technology systems able to connect all tags to each type of external package (e.g., PI by OSIsoft).

In the specific case that we are analyzing (Figure 2), the measures are sent to the junction boxes placed in the field and then to the raw data server. These data, together with certain data from the historian server, are sent to the application server, where an OPC server is installed to allow the communications to and from the client level of the DCS. The CAPE solution that solves the data reconciliation, the adaptive parameter estimation, and the real-time optimization resides at the client level and dialogues with the DCS by means of the OPC client.

Figure 2. From the field to the CAPE solution and feedback on the operations; solid lines are information flows, and dashed lines are decision/action flows.

Once the adaptive parameters θ have been evaluated by means of the EVM, initialized by the sets of steady-state conditions coming from the historian server, the adapted process simulation can be run by keeping the adaptive parameters constant and starting from the current (fresh) raw dataset coming from the raw data server. The adapted process simulation is iteratively called by the data reconciliation routine to properly detect possible gross errors affecting the fresh raw dataset. Next, the economic optimization can be performed, based on the reconciled data. Finally, the reconciled data are sent back to the DCS via OPC and are collected into the historian server. The procedure is then iterated.

Since the proposed approach allows one to remove possible gross errors and, hence, obtain a coherent picture of the plant, it becomes possible to exploit efficient algorithms to perform the real-time optimization and, hence, evaluate the optimal plant conditions according to the current specifications and plant performance. In such a case, the possible actions to optimize the operations are sent back to the field and implemented by the control system.

3. ONLINE FEASIBILITY

To verify the online effectiveness of the proposed approach, it is important to compare the optimization problems involved with the levels of the process control hierarchy26,33 (see Figure 3). The machine adopted to measure the computational effort is an Intel Core 2 Quad CPU (2.83 GHz, 3 GB of RAM, operating system MS Windows 2003, and compiler MS Visual Studio 2008). Solvers and optimizers of the BzzMath library (version 6.0)36 are used for the data reconciliation, the EVM, and the plant optimization.

The process simulation requires no more than 10 s. It is iteratively invoked by the optimizer of the data reconciliation procedure. The most expensive simulations are the initial ones, where many states change significantly when the data reconciliation receives the fresh dataset. Data reconciliation is accomplished within no more than 12 min. The plant optimization can be performed within less than 1 h, while the typical time discretization adopted in chemical plants and oil refineries is on the order of many hours or days. Finally, the EVM is the largest nonlinear, multidimensional, constrained optimization that must be solved, and it is accomplished within a few hours. It is important to remark that the adaptive parameters obtained by the EVM change slowly; thus, the EVM must be able to follow the aging process of the plant and the fouling of unit operations by gradually covering the slow process dynamics typical of scheduling and maintenance problems (with a time scale of some weeks). This slow variation allows a reasonably good initialization of the adaptive parameters using the estimations of the previous iteration and, hence, faster convergence with the prompt use of efficient algorithms.

Although the exact computational times are not given in detail, since they depend on many effects and conditions, we can conclude that the online feasibility of the proposed CAPE solution is ensured on a computer of medium performance, certainly less powerful than the dedicated machines mounted in a DCS.

4. CASE STUDY

The selected case study is an operating Claus process. The task of Claus processes is to recover the elemental sulfur from the hydrogen sulfide and, more generally, from the byproduct gases originating from physical and chemical gas and oil treatment units in refineries, natural gas processing, and amine scrubbers.

The process consists of a thermal reaction furnace, a waste heat boiler, and a series of catalytic (Claus) reactors and condensers. The overall reaction characterizing the process is

$$2\,\mathrm{H_2S} + \mathrm{O_2} \rightarrow \mathrm{S_2} + 2\,\mathrm{H_2O}$$

and behind it, certain complex kinetic mechanisms occur37,38 in both the thermal reaction furnace and the catalytic reactors. A qualitative layout of a typical Claus process with two catalytic reactors is reported in Figure 4.

In the thermal furnace, one-third of the hydrogen sulfide (H2S) is oxidized to sulfur dioxide (SO2) using air. Temperatures are usually on the order of 1100–1400 °C. The oxidizing reaction,

$$\mathrm{H_2S} + \tfrac{3}{2}\,\mathrm{O_2} \rightarrow \mathrm{SO_2} + \mathrm{H_2O}$$

is exothermic and without any thermodynamic restriction. The remaining two-thirds of the unreacted H2S react with the SO2 to produce elemental sulfur through the so-called Claus reaction,

$$2\,\mathrm{H_2S} + \mathrm{SO_2} \rightarrow \tfrac{3}{2}\,\mathrm{S_2} + 2\,\mathrm{H_2O}$$

which occurs at high temperatures in the thermal furnace with an endothermic contribution, or at low temperatures in the catalytic converters with an exothermic contribution.

The off-gas leaving the thermal furnace enters the waste heat boiler, where it is quenched to ∼300 °C to prevent recombination reactions; then, before entering the catalytic region, the first separation of liquid elemental sulfur is carried out in the first condenser. The hydrogen sulfide conversion,

$$2\,\mathrm{H_2S} + \mathrm{SO_2} \rightarrow \tfrac{3}{8}\,\mathrm{S_8} + 2\,\mathrm{H_2O}$$

proceeds in the catalytic region. A condenser is installed downstream of each catalytic reactor to condense and separate the elemental sulfur before the gas enters the next catalytic reactor, with the twofold advantage of preventing sulfur condensation in the downstream catalytic reactor and of shifting the equilibrium of the Claus reaction toward the products.

Figure 3. Computational efforts to solve optimal problems and corresponding time scales in the process control hierarchy. (NMPC = nonlinear model predictive control; RTO = real-time optimization; ERP = enterprise resource planning.)

Figure 4. Typical qualitative layout of the SRU.


In addition, the hydrolysis reactions of COS and CS2 occur in the catalytic region, according to the following reactions:

$$\mathrm{COS} + \mathrm{H_2O} \rightarrow \mathrm{CO_2} + \mathrm{H_2S}$$

and

$$\mathrm{CS_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2S}$$

The most important key performance indicator is the molar ratio H2S/SO2 exiting the waste heat boiler. It should be as close as possible to the value of 2 in order to obtain the maximum conversion to sulfur.11 This index is a controlled variable, measured by means of an online analyzer and included in a feedback loop where the inlet air feed flow rate is the manipulated variable. This measure is particularly sensitive to several process perturbations.
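As a minimal illustration of this loop, the sketch below applies a simple proportional trim to the air flow rate; the control law and gain are assumptions made for the example, not the plant's actual controller.

```cpp
// Minimal illustration of the H2S/SO2 ratio loop: proportional trim on the
// combustion air flow rate (assumed control law, not the plant's controller).
double correctedAirFlow(double airFlow, double ratioH2StoSO2,
                        double kP = 0.05, double target = 2.0) {
    // A ratio above 2 means too little H2S has been oxidized to SO2, so more
    // air is needed; a ratio below 2 means over-oxidation, so less air.
    return airFlow * (1.0 + kP * (ratioH2StoSO2 - target));
}
```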

A few adaptive parameters were selected, as explained elsewhere.11 These parameters are strictly related to the process. In this specific case, some of the most relevant parameters are as follows: the coefficient of COS conversion within the thermal reaction furnace; the CS2 conversion by hydrolysis reaction; the ΔT approach in the Claus converters, particularly important to match the catalyst deactivation; the efficiency of heat exchange in the waste heat boiler, to match its fouling factor and efficiency; the heat losses at the first Claus converter; and the heat losses at the second Claus converter. According to Figure 4, the available measures are
• the acid gas flow rate
• the acid gas temperature
• the sour water stripper flow rate
• the sour water stripper temperature
• the combustion air flow rate
• the combustion air temperature
• the furnace temperature
• the waste heat boiler temperature
• the outlet temperature of the first condenser
• the inlet temperature of the first Claus reactor
• the outlet temperature of the first Claus reactor
• the outlet temperature of the second condenser
• the inlet temperature of the second Claus reactor
• the steam generated at the waste heat boiler
In addition, it is possible to infer, with a certain reliability, the molar compositions of the inlet acid gas (H2O, CO2, H2S) and of the sour water stripper stream (H2O, H2S, NH3), the total amount of H2S entering the SRU, the content of H2S and SO2 in the tail gas, the ratio H2S/SO2, the recovered sulfur, and the sulfur conversion.

5. SELECTED SCENARIOS

The proposed solution for adaptive data reconciliation has been tested online for a period of three months on a large-scale SRU plant. The present work reports two significant scenarios, selected among many days of industrial tests, involving the inference of missing measures and the effectiveness of the online detection of gross errors.

5.1. Inference of the Inlet Feed. As mentioned above, SRUs are characterized by poor instrumentation installed in the field; thus, very often, no online analyzers are placed on the acid gas stream entering the SRU. Without this important information, it is hard to understand how the process is running. For the sake of clarity, let us suppose there is a sudden increase in the inlet molar composition of CO2 at the expense of H2S. This unavoidably leads to lower temperatures inside the thermal furnace and Claus converters, and one could erroneously suppose that either (i) the furnace is burning less H2S, because the inlet air flow rate is insufficient, or (ii) the catalyst within the Claus converters has inferior performance. Indeed, the latter possibility is excluded, since the EVM should ensure a certain reliability on the catalyst deactivation, but no one can disprove the former possibility if no measures (or appropriate redundancy to allow the data reconciliation) are placed on the air inlet stream. The problem is somewhat more complex if we consider that many alternatives may be invoked to explain this situation in a large-scale SRU, such as the one used in this study.

Thanks to the detailed adaptive process simulation on which the proposed CAPE solution is based, it is possible to infer not only what is entering the SRU, but also what is exiting the stack downstream of the process. It is worth highlighting that an overall balance starting from the inferred inlet values could be ineffective for estimating the stack emissions, because of the time delays of the process units and reactors involved. Table 1 shows a comparison between the molar compositions obtained by laboratory analysis and the corresponding inferences obtained by the CAPE solution. Laboratory analyses and inferences are in good agreement, denoting a certain reliability of the adaptive data reconciliation for the main compounds. It is worth noting that it is not possible to exploit the overall and component mass balances to calculate the feed composition, because the residence times and the holdups within the unit operations and reactors are relevant.

5.2. Gross Error Detection. Another relevant scenario is the adaptive data reconciliation in the presence of gross errors. It is well-known that a single gross error can significantly influence the data reconciliation if it is not opportunely identified. Several robust approaches have been proposed to search for gross errors.39,40 As a robust optimization procedure, we adopted the least clever sum of squares, already explained elsewhere and validated on large-scale datasets (the typical case in the process industry).6,7,9

A selection of trends for the most relevant variables is reported in Figures 5–7. These figures show a limited window of reconciled values against the measures acquired from the field for the acid gas, the sour water stripper, and the combustion air flow rates. In this case, the measures are acquired every 4 h. Note that Figures 5 and 7 show certain small gaps, because the data reconciliation moves the value away, to satisfy the mass and energy balances, without leading to significant variations.

Figure 5. Acid gas flow rate.

Figure 6. Sour water stripper flow rate.

Figure 7. Combustion air flow rate.

Table 1. Laboratory Analysis versus Inferred Measures (mole fractions, mol/mol)

inlet stream          compound   laboratory measures   inferred composition
acid gas              H2O        0.0875                0.0922
                      CO2        0.0884                0.0885
                      H2S        0.8241                0.8193
sour water stripper   H2O        0.3191                0.3007
                      H2S        0.2873                0.2891
                      NH3        0.3936                0.4102


On the other hand, the test run corresponding to the set acquired at 8:00 AM on February 13 shows a relevant deviation between the reconciled and measured values of the sour water stripper flow rate. The proposed solution is able to identify this point as a gross error and properly correct it online, preserving all the other measures as well as the inferences. The identification and correction are performed within 8 min 42 s, ensuring the online effectiveness in accordance with the sampling time adopted for the measurement acquisition.

This is of particular importance since all the simulation-based methodologies, such as the one proposed here, are particularly sensitive to bad points; actually, one bad point may cause serious convergence problems by returning wrong values to the optimizers to which the process simulation is joined.

6. CONCLUSIONS

The paper describes an integrated solution to handle the problem of inferring and reconciling online plant measurements under the condition of poor or no redundancy in the measures, due to the lack of instrumentation installed in the field.

The proposed computer-aided process engineering (CAPE) solution simultaneously involves the development of a detailed process simulation that allows one to infer the missing measures; the estimation of a set of adaptive parameters, to match the current process conditions and adapt the model to the specific situation of the plant; and the use of robust and efficient optimizers.

A commercial simulator has been fully integrated in C++, together with the solvers and the optimizers of the BzzMath library, and the resulting adaptive data reconciliation has been effectively solved by keeping the computational requirements reasonably small, to ensure the online feasibility. An industrial validation has been performed on a large-scale sulfur recovery unit (SRU) operating in a refinery.

It is worth highlighting that this specific solution is a general-purpose methodology, based on object-oriented programming, that is easy to apply to any type of process subject to poor instrumentation installed in the field and the consequent lack of data and redundancy; its greatest effectiveness is in online applications.

AUTHOR INFORMATION

Corresponding Author
*Tel.: +39 02 2399 3273. Fax: +39 02 7063 8173. E-mail: [email protected].

REFERENCES

(1) Crowe, C. M. Reconciliation of Process Flow Rates by Matrix Projection. II: Nonlinear Case. AIChE J. 1989, 32, 616–623.
(2) Crowe, C. M. Data Reconciliation – Progress and Challenges. In Proceedings of PSE94, 1994.
(3) Crowe, C. M.; Campos, Y. A. G.; Hrymak, A. Reconciliation of Process Flow Rates by Matrix Projection. I: Linear Case. AIChE J. 1983, 29, 881–888.
(4) Binder, T.; Blank, L.; Dahmen, W.; Marquardt, W. On the Regularization of Dynamic Data Reconciliation Problems. J. Process Control 2002, 12 (4), 557–567.
(5) Bagajewicz, M. J.; Jiang, Q. Gross Error Modeling and Detection in Plant Linear Dynamic Reconciliation. Comput. Chem. Eng. 1998, 22 (12), 1789–1810.
(6) Buzzi-Ferraris, G.; Manenti, F. Outlier Detection in Large Data Sets. Comput. Chem. Eng. 2010, 35, 388–390.
(7) Buzzi-Ferraris, G.; Manenti, F. Data Interpretation and Correlation. In Kirk–Othmer Encyclopedia, 7th ed.; John Wiley: New York, 2011.
(8) Hampel, F. R.; Ronchetti, E. M.; Rousseeuw, P. J.; Stahel, W. A. Robust Statistics: The Approach Based on Influence Functions, 2nd ed.; Wiley: New York, 2005.
(9) Buzzi-Ferraris, G.; Manenti, F. Interpolation and Regression Models for the Chemical Engineer: Solving Numerical Problems; Wiley–VCH: Weinheim, Germany, 2010.
(10) Bagajewicz, M. J. Data Reconciliation and Instrumentation Upgrade. Overview and Challenges. In FOCAPO 2003, 4th International Conference of Computer-Aided Process Operations, Coral Springs, FL, 2003; pp 103–116.
(11) Signor, S.; Manenti, F.; Grottoli, M. G.; Fabbri, P.; Pierucci, S. Sulfur Recovery Units: Adaptive Simulation and Model Validation on an Industrial Plant. Ind. Eng. Chem. Res. 2010, 49 (12), 5714–5724.
(12) Manenti, F. Considerations on Nonlinear Model Predictive Control Techniques. Comput. Chem. Eng. 2011, 35, 2491–2509.
(13) Manenti, F.; Rovaglio, M. Integrated Multilevel Optimization in Large-Scale Poly(ethylene terephthalate) Plants. Ind. Eng. Chem. Res. 2008, 47 (1), 92–104.
(14) Vettenranta, J.; Smeds, S.; Yli-Opas, K.; Sourander, M.; Vanhamaki, V.; Aaljoki, K.; Bergman, S.; Ojala, M. Dynamic Real Time Optimization Increases Ethylene Plant Profits. Hydrocarbon Process. 2006, 10, 59–66.
(15) Albuquerque, J. S.; Biegler, L. T. Decomposition Algorithms for Online Estimation with Nonlinear Models. Comput. Chem. Eng. 1995, 19 (10), 1031–1039.
(16) Albuquerque, J. S.; Biegler, L. T.; Kass, R. E. Inference in Dynamic Error-in-Variable-Measurement Problems. AIChE J. 1997, 43 (4), 986–996.
(17) Esposito, W. R.; Floudas, C. A. Global Optimization in Parameter Estimation of Nonlinear Algebraic Models via the Error-in-Variables Approach. Ind. Eng. Chem. Res. 1998, 37 (5), 1841–1858.
(18) Esposito, W. R.; Floudas, C. A. Parameter Estimation in Nonlinear Algebraic Models via Global Optimization. Comput. Chem. Eng. 1998, 22, S213–S220.
(19) Biegler, L. T.; Hughes, R. R. Infeasible Path Optimization with Sequential Modular Simulators. AIChE J. 1982, 28 (6), 994–1002.
(20) Biegler, L. T.; Hughes, R. R. Feasible Path Optimization with Sequential Modular Simulators. Comput. Chem. Eng. 1985, 9, 379–394.
(21) PRO/II, User Guide; Simsci-Esscor: Lake Forest, CA, 2002 (www.simsci-esscor.com).
(22) Lid, T.; Skogestad, S. Scaled Steady State Models for Effective On-line Applications. Comput. Chem. Eng. 2008, 32 (4–5), 990–999.
(23) Viganò, L.; Vallerio, M.; Manenti, F.; Lima, N. M. N.; Zuniga Linan, L.; Manenti, G. Model Predictive Control of a CVD Reactor for Production of Polysilicon Rods. Chem. Eng. Trans. 2010, 21, 523–528.
(24) Morari, M.; Lee, J. H. Model Predictive Control: Past, Present and Future. Comput. Chem. Eng. 1999, 23 (4–5), 667–682.
(25) Rawlings, J. B. Tutorial Overview of Model Predictive Control. IEEE Control Syst. Mag. 2000, 20 (3), 38–52.
(26) Varma, V. A.; Reklaitis, G. V.; Blau, G. E.; Pekny, J. F. Enterprise-Wide Modeling and Optimization: An Overview of Emerging Research Challenges and Opportunities. Comput. Chem. Eng. 2007, 31 (5–6), 692–711.
(27) Grossmann, I. E. Challenges in the New Millennium: Product Discovery and Design, Enterprise and Supply Chain Optimization, Global Life Cycle Assessment. Comput. Chem. Eng. 2004, 29 (1), 29–39.
(28) Romagnoli, J. A.; Sanchez, M. C. Data Processing and Reconciliation for Chemical Process Operations; Process Systems Engineering, Vol. 2; Academic Press: San Diego, 1999.
(29) Narasimhan, S.; Jordache, C. Data Reconciliation and Gross Error Detection: An Intelligent Use of Process Data; Gulf Publishing Co.: Houston, TX, 2000.
(30) Buzzi-Ferraris, G.; Manenti, F. Kinetic Models Analysis. Chem. Eng. Sci. 2009, 64 (5), 1061–1074.
(31) Manenti, F.; Buzzi-Ferraris, G. Criteria for Outliers Detection in Nonlinear Regression Problems. Comput.-Aided Chem. Eng. 2009, 26, 913–917.
(32) Tjoa, I. B.; Biegler, L. T. Reduced Successive Quadratic-Programming Strategy for Errors-in-Variables Estimation. Comput. Chem. Eng. 1992, 16 (6), 523–533.
(33) Busch, J.; Oldenburg, J.; Santos, M.; Cruse, A.; Marquardt, W. Dynamic Predictive Scheduling of Operational Strategies for Continuous Processes Using Mixed-Logic Dynamic Optimization. Comput. Chem. Eng. 2007, 31, 574–587.
(34) Buzzi-Ferraris, G.; Manenti, F. A Combination of Parallel Computing and Object-Oriented Programming to Improve Optimizer Robustness and Efficiency. Comput.-Aided Chem. Eng. 2010, 28, 337–342.
(35) Buzzi-Ferraris, G.; Manenti, F. Fundamentals and Linear Algebra for the Chemical Engineer: Solving Numerical Problems; Wiley–VCH: Weinheim, Germany, 2010.
(36) Buzzi-Ferraris, G. BzzMath: Numerical Library in C++; Politecnico di Milano: http://chem.polimi.it/homes/gbuzzi, 2010.
(37) Glarborg, P.; Kubel, D.; Dam-Johansen, K.; Chiang, H.-M.; Bozzelli, J. W. Impact of SO2 and NO on CO Oxidation under Post-Flame Conditions. Int. J. Chem. Kinet. 1996, 28 (10), 773–790.
(38) Alzueta, M. U.; Bilbao, R.; Glarborg, P. Inhibition and Sensitization of Fuel Oxidation by SO2. Combust. Flame 2001, 127 (4), 2234–2251.
(39) Rousseeuw, P. J. Least Median of Squares Regressions. J. Am. Stat. Assoc. 1984, 79, 871–880.
(40) Rousseeuw, P. J.; Leroy, A. M. Robust Regression and Outlier Detection; John Wiley & Sons: New York, 1987.

