Max Planck Institute Magdeburg Preprints

Yongjin Zhang, Lihong Feng, Suzhou Li and Peter Benner

Accelerating PDE constrained optimization by the reduced basis method: application to batch chromatography

MPIMD/14-09, May 26, 2014

Max-Planck-Institut für Dynamik komplexer technischer Systeme Magdeburg

Abstract

In this work, we show that the reduced basis method accelerates a PDE constrained optimization problem where a nonlinear discretized system with a large number of degrees of freedom must be repeatedly solved during optimization. Such an optimization problem arises, for example, from batch chromatography. Instead of solving the full system of equations, a reduced model with a small number of equations is derived by the reduced basis method, such that only the small reduced system is solved at each step of the optimization process. An adaptive technique for selecting the snapshots is proposed, so that the complexity and runtime for generating the reduced basis are largely reduced. An output-oriented error bound is derived in the vector space, whereby the construction of the reduced model is managed automatically. An early-stop criterion is proposed to circumvent the stagnation of the error and to make the construction of the reduced model more efficient. Numerical examples show that the adaptive technique is very efficient in reducing the offline time. The optimization based on the reduced model is successful in terms of the accuracy and the runtime for obtaining the optimal solution.

Keywords: reduced basis method, empirical interpolation, adaptive snapshot selection, optimization, batch chromatography

Imprint

Publisher: Max Planck Institute for Dynamics of Complex Technical Systems

Address: Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg

http://www.mpi-magdeburg.mpg.de/preprints

1 Introduction

In the last decade, optimization with constraints given by partial differential equations (PDE constrained optimization, for short) has emerged as a challenging research area. It has increasingly arisen in various engineering contexts, such as optimal design, control, and parameter estimation. Over the past years, besides the increasing progress of computing hardware, a large number of attempts have been devoted to the development of efficient algorithms and strategies for solving such optimization problems; see, for example, [6, 7, 8, 26] and references therein.

Model order reduction (MOR) is a powerful technique for constructing a low-cost approximation of large-scale systems resulting from the discretization of PDEs. The low-cost approximation, often called a reduced order model (ROM), should, on the one hand, have the same structure as the original large-scale system but with a much smaller number of degrees of freedom (DOFs); on the other hand, it must have acceptable accuracy for the input-output representation of the original system. Due to the small size and negligible error, the derived ROM is used as a surrogate model of the large-scale system in various disciplines, such as optimization and control, fluid dynamics, structural dynamics, circuit design, and so on. In particular, for optimization problems with nonlinear PDE constraints, proper orthogonal decomposition (POD) is often used to derive a ROM, and has been applied to accelerate optimization problems [13, 14]. However, a ROM from POD is reliable only in the neighborhood of the input parameter setting at which the ROM is constructed; there is no guarantee for the accuracy of the ROM at a different parameter setting. To circumvent this problem, a trust-region technique was suggested in [13] to manage the POD-based ROM; there, the ROM is updated according to the quality of the approximation. However, the repeated construction of the ROM reduces the significance of the reduction in computational resources obtained by MOR. In contrast, the technique of parametric model order reduction (PMOR) enables the generation of a parametric ROM with acceptable accuracy over the whole feasible parameter domain, such that a single ROM is sufficient for the optimization process. Among the various PMOR methods [1, 3, 5, 10, 15, 16], few are applicable to nonlinear problems with parameters. The reduced basis method (RBM), however, has been developed for nonlinear parametric systems [2, 11, 18, 34]. Moreover, endowed with a posteriori error estimation, the parametric ROM can be generated automatically.

The RBM has proved to be a powerful tool for the rapid and reliable evaluation of parameterized PDEs [2, 11, 18, 34]. The reduced basis (RB) used to construct the ROM is computed from snapshots (the solutions of the PDEs at certain selected samples of the parameters and/or chosen time steps) through a greedy algorithm. When applied to optimization, the original system resulting from the discretization of the PDEs is first replaced by a ROM generated by the RBM; the related quantities can then be evaluated rapidly by solving the cheap ROM rather than the original expensive system. So far, research on the application of RBMs to PDE constrained optimization is very limited. In [33], the authors mainly focused on RBMs for affinely parameterized linear problems. Shape optimization employing RBMs for viscous flow in hemodynamics was addressed in [29], where the empirical interpolation method (EIM) [2] was exploited to treat the non-affinity in the linear parameterized system. Applications to multiscale problems can be found in the recent work [32]. However, all these applications focus on finite element (FE) based RBMs for linear, time-independent PDEs.

In this paper, we consider an optimization problem with PDE constraints, where the PDEs are nonlinear, time-dependent, and have non-affine parameter dependency. Such problems arise, for example, from batch chromatography in chemical engineering. To capture the dynamics precisely, a large number of DOFs must be employed, which results in a large-scale system. Solving such a complex system repeatedly during optimization is time-consuming. Constructing a reduced model for a parameterized nonlinear, time-dependent, non-affine system poses additional challenges for all kinds of MOR methods, granting no exemption to RBMs. Furthermore, a careful choice of the discretization scheme should be made for nonlinear problems, especially for convection-dominated problems. The finite volume (FV) discretization is used to construct the full order model (FOM), by which the conservation property of the system is well preserved. The FV-based RBM was first introduced for linear evolution equations in [24], and was extended to nonlinear problems afterwards [11, 25], where the nonlinear operator resulting from the discretization is treated with empirical operator interpolation for an efficient offline-online computation of the ROM.

Without doubt, an efficient, rigorous, and sharp a posteriori error estimation is crucial for RBMs, because it enables the automatic generation of the RB and, in turn, a reliable ROM with a desired accuracy with the help of a greedy algorithm. Rapid and reliable evaluation of the input-output relationships of the associated PDEs is very important for efficiently solving the optimization problem, where an output response, rather than the field variable (the solution to the PDEs), is of interest. When a ROM is employed for such an evaluation, the error of the output of interest, rather than that of the field variable, should be estimated and used for the generation of the ROM. We propose to use the output error for the generation of the RB. There are some results on the output error bound for FE-based RBMs for elliptic or parabolic problems [35, 36]. However, there is no study so far on the output error bound for FV-based RBMs for nonlinear evolution equations. In this work, we present a residual-based error estimation for the output of the ROM derived by a FV-based RBM, so as to obtain a goal-oriented ROM.

With the help of an error estimate, the construction of the ROM can be managed automatically. In some cases, however, the error bound may not work as well as one expects. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps, although the true error is already very small. As a result, the basis extension is not stopped, because the error bound does not go below the prespecified tolerance. This means that the basis will be unnecessarily extended if there is no reasonable remedy. Certainly, simply using the true error as the indicator is not a wise choice, because it is typically time-consuming to compute the true error for all sample points in the training set. To make full use of the available error estimate, we propose an early-stop criterion for the basis extension: the true error is checked only at the parameter selected by the greedy algorithm according to the output error bound. In this way, the basis extension can be stopped in time, and the size of the resulting ROM can be kept reasonably small.

Additionally, the efficiency of the RBM is ensured by the strategy of offline-online decomposition. During the offline stage, all parameter-independent terms depending on the full dimension can be precomputed, and a parameterized reduced model is obtained a priori; during the process of optimization, a reliable output response can be obtained rapidly by the online simulation based on the ROM at the parameter determined by the optimization procedure. In this way, the ROM-based optimization can be solved more efficiently compared to the FOM-based one. Note that the offline time is usually not taken into consideration, although the offline computation is typically time-consuming, especially for time-dependent PDEs.

To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) is of interest, the solutions at the time instances of the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g., in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computation will be very expensive, because the number of snapshots is too large; if one just trivially selects part of the solutions, i.e., the solutions at some of the time instances (e.g., every two or several time steps), the final RB approximation might be of low accuracy, because important information may have been lost due to such a naive snapshot selection. We propose to select the snapshots adaptively, according to the variation of the solution in the evolution process. The idea is to make full use of the behavior of the trajectory and discard the redundant (linearly dependent) information adaptively. This enables the generation of the RB from a small number of snapshots containing only "useful" information. In addition, it is easily combined with other algorithms for the generation of the RB, e.g., the POD-Greedy algorithm [24].

This paper is organized as follows. We state the underlying PDE constrained optimization problem in detail in Section 2. Reviews of the RBM and the EIM are given in Section 3 and Section 4, respectively. The adaptive technique of snapshot selection and its implementation are addressed in detail in Section 5. Section 6 presents the RB scheme for the batch chromatographic model, including the derivation of the FOM based on the FV discretization, the generation of the ROM, and the strategy of the offline-online decomposition as well. In Section 7, an output-oriented error bound is derived in the vector space for evolution equations for the RBM based on the FV discretization. An early-stop criterion is proposed to make the construction of the ROM more efficient. Numerical examples, including optimization based on the ROM, are presented in Section 8. Conclusions are drawn in Section 9.

2 Problem statement

In this work, we consider the following PDE constrained optimization problem:

$$
\min_{\mu \in \mathcal{P}} \; \mathcal{J}(u(t,x;\mu), \mu)
\quad \text{s.t.} \quad
\Psi(u(t,x;\mu), \mu) \le 0, \quad
\Phi(u(t,x;\mu), \mu) = 0,
\tag{1}
$$

where 𝒥 is the objective function and Ψ defines the inequality constraints. The field variable u(t,x;μ) is the solution to the underlying parametrized partial differential equations Φ(u(t,x;μ), μ) = 0, μ ∈ 𝒫. Such an optimization problem arises in many applications, such as aerodynamics, fluid dynamics, and chemical processes. In practical computations, the PDEs are usually discretized, such that the optimization problem in (1) is replaced by an optimization problem in finite dimensions:

$$
\min_{\mu \in \mathcal{P}} \; J(u^{\mathcal{N}}(t;\mu), \mu)
\quad \text{s.t.} \quad
\Psi(u^{\mathcal{N}}(t;\mu), \mu) \le 0, \quad
\Phi(u^{\mathcal{N}}(t;\mu), \mu) = 0,
\tag{2}
$$

where u^𝒩 = u^𝒩(t;μ) ∈ R^𝒩 is the solution to the discretized system of equations Φ(u^𝒩(t;μ), μ) = 0, and J, Ψ, and Φ are the operators in the finite dimensional vector space corresponding to 𝒥, Ψ, and Φ, respectively. The discretized equations are often of very large scale and complex. At each iteration of the optimization process, such a large-scale, complex system of equations must be solved at least once. As a result, the whole optimization process will be time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns u^𝒩.
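To make the surrogate idea concrete, the following minimal sketch (not from the paper) shows the structure of a ROM-based optimization loop: a generic optimizer evaluates the objective by solving only the small reduced system. The names `solve_rom` and `output` are hypothetical placeholders, and the "reduced solve" is an arbitrary cheap analytic stand-in rather than the chromatography ROM.

```python
import numpy as np
from scipy.optimize import minimize

def solve_rom(mu):
    # Hypothetical reduced solve: in the paper this would integrate the
    # small ROM (9); here it is an arbitrary cheap analytic stand-in.
    return np.array([mu[0] - 1.0, mu[1] + 0.5])

def output(u):
    # Placeholder output functional y(u) acting on the reduced state.
    return float(u @ u)

def objective(mu):
    # Each optimizer iteration solves only the ROM, never the FOM.
    return output(solve_rom(mu))

res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
```

For this toy stand-in, the minimizer is near μ = (1, −0.5); in the paper, the objective, the constraints, and the reduced solve come from the batch chromatography model of the next paragraphs.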

To further motivate and illustrate our methods, we consider a particular example: the optimal operation of batch chromatography. Batch chromatography, as a crucial separation and purification tool, is widely employed in the food, fine chemical, and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period t_in, a mixture consisting of components a and b is injected at the inlet of the column, which is packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column, and thus separate from each other when exiting the column. At the column outlet, component a is collected between the cutting points t_3 and t_4, and component b is collected between t_1 and t_2. Here, the positions of t_1 and t_4 are determined by a minimum concentration threshold that the detector can resolve, and the positions of t_2 and t_3 are determined by the purity specifications (Pu_a and Pu_b) imposed on the products. After a cycle period t_cyc = t_4 − t_1, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with a limited mass-transfer rate, characterized by a linear driving force approximation. The governing equations in dimensionless form are formulated as follows:

$$
\frac{\partial c_z}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q_z}{\partial t}
= -\frac{\partial c_z}{\partial x} + \frac{1}{Pe}\,\frac{\partial^2 c_z}{\partial x^2},
\quad 0 < x < 1,
$$
$$
\frac{\partial q_z}{\partial t}
= \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\!\left(q_z^{\mathrm{Eq}} - q_z\right),
\quad 0 \le x \le 1,
\tag{3}
$$

where c_z, q_z are the concentrations of component z (z = a, b) in the liquid and solid phase, respectively, Q is the volumetric feed flow rate, A_c the cross-sectional area of the column with length L, ε the column porosity, κ_z the mass-transfer coefficient, and Pe the Peclet number. The adsorption equilibrium q_z^Eq is described by isotherm equations of bi-Langmuir type:

$$
q_z^{\mathrm{Eq}} = f_z(c_a, c_b)
= \frac{H_{z1}\, c_z}{1 + K_{a1} c_a^{\mathrm{f}} c_a + K_{b1} c_b^{\mathrm{f}} c_b}
+ \frac{H_{z2}\, c_z}{1 + K_{a2} c_a^{\mathrm{f}} c_a + K_{b2} c_b^{\mathrm{f}} c_b},
\tag{4}
$$

where c_z^f is the feed concentration of component z, and H_zj and K_zj are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:

$$
c_z(0, x) = 0, \quad q_z(0, x) = 0, \quad 0 \le x \le 1,
$$
$$
\left.\frac{\partial c_z}{\partial x}\right|_{x=0}
= Pe\left( c_z(t, 0) - \chi_{[0,\, t_{\mathrm{in}}]}(t) \right), \qquad
\left.\frac{\partial c_z}{\partial x}\right|_{x=1} = 0,
\tag{5}
$$

where t_in is the injection period and χ_[0, t_in] is the characteristic function

$$
\chi_{[0,\, t_{\mathrm{in}}]}(t) =
\begin{cases}
1 & \text{if } t \in [0, t_{\mathrm{in}}], \\
0 & \text{otherwise.}
\end{cases}
$$

More details about the mathematical modeling of batch chromatography can be found in [20].
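For illustration, the bi-Langmuir isotherm (4) can be evaluated as in the sketch below; the numerical values of the Henry constants, thermodynamic coefficients, and feed concentrations are arbitrary placeholders, not the parameters used later in the paper.

```python
# Placeholder coefficients: H[z] = (H_z1, H_z2), K[z] = (K_z1, K_z2),
# cf[z] = feed concentration c_z^f. All values are illustrative only.
H = {"a": (5.0, 1.0), "b": (4.0, 0.8)}
K = {"a": (2.0, 0.1), "b": (3.0, 0.2)}
cf = {"a": 1.0, "b": 1.0}

def q_eq(z, ca, cb):
    """Bi-Langmuir adsorption equilibrium q_z^Eq = f_z(c_a, c_b) of (4)."""
    cz = {"a": ca, "b": cb}[z]
    denom1 = 1.0 + K["a"][0] * cf["a"] * ca + K["b"][0] * cf["b"] * cb
    denom2 = 1.0 + K["a"][1] * cf["a"] * ca + K["b"][1] * cf["b"] * cb
    return H[z][0] * cz / denom1 + H[z][1] * cz / denom2
```

Note that both denominators couple the two components, which is exactly why the discretized operator in (12) below is nonlinear in the full state.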

Note that the feed flow rate Q and the injection period t_in are often considered as the operating variables, denoted as μ = (Q, t_in), which play the role of parameters in the PDEs (3)-(5). The system of PDEs is nonlinear, time-dependent, and has non-affine parameter dependency. The nonlinearity of the system is reflected by (4). To capture the system dynamics precisely, a large number of DOFs must be introduced for the discretization of the PDEs.

The optimal operation of batch chromatography is of practical importance, since it allows to exploit the full economic potential of the process and to reduce the separation cost. Many efforts have been made for the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on the finely discretized FOM. Such a model with a large number of DOFs is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be repeatedly solved in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

[Figure 1: Sketch of a batch chromatographic process for the separation of a and b. A pump delivers solvent and the feed mixture a + b through the column; the outlet concentration profiles over time define the cutting points t_1, t_2, t_3, t_4, the injection period t_in, and the cycle period t_cyc.]

In this work, the RBM is employed to generate a surrogate ROM of the parameterized PDEs. The resulting ROM is used to obtain a rapid evaluation of the output response y(u^𝒩) for the discretized system Φ(u^𝒩(t;μ), μ) = 0 in (2) during the optimization process. In the next section, we review the RBM and highlight some difficulties.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parameterized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution u(μ) to the parametrized PDEs depends smoothly on the parameter μ in the parameter domain 𝒫, such that for any parameter μ ∈ 𝒫, the corresponding solution u(μ) can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the qualification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain Ω ⊂ R^d and the parameter domain 𝒫 ⊂ R^p:

$$
\partial_t u(t,x;\mu) + \mathcal{L}[u(t,x;\mu)] = 0,
\quad t \in [0,T],\; x \in \Omega,\; \mu \in \mathcal{P},
\tag{6}
$$

where ℒ[·] is a spatial differential operator. Let 𝒲^𝒩 ⊂ L²(Ω) be an 𝒩-dimensional discrete space in which an approximate numerical solution to equation (6) is sought. Let 0 = t⁰ < t¹ < ··· < t^K = T be K + 1 time instances in the time interval [0, T]. Given μ ∈ 𝒫, with suitable initial and boundary conditions, the numerical solution u^n(μ) at time t = t^n can be obtained by using suitable numerical methods, e.g., the finite volume method. Assume that u^n(μ) ∈ 𝒲^𝒩 satisfies the following form:

$$
\mathcal{L}_I(t^n)[u^{n+1}(\mu)] = \mathcal{L}_E(t^n)[u^{n}(\mu)] + g(u^{n}(\mu); \mu),
\tag{7}
$$

where ℒ_I(t^n)[·], ℒ_E(t^n)[·] are linear implicit and explicit operators, respectively, and g(·) is a nonlinear, μ-dependent operator. These operators are obtained from the discretization of the time derivative and the spatial differential operator ℒ. For implicit schemes of FVMs, ℒ_I(t^n) can be nonlinear, see, e.g., [11], but we only consider the linear case in this paper. By convention, u^n(μ) is considered as the "true" solution, by assuming that the numerical solution is a faithful approximation of the exact (analytical) solution u(t^n, x; μ) at the time instance t^n.


The RBM aims to find a suitable low-dimensional subspace

$$
W^N = \operatorname{span}\{V_1, \ldots, V_N\} \subset \mathcal{W}^{\mathcal{N}},
$$

and to solve the resulting ROM to obtain the RB approximation û^n(μ) of the "true" solution u^n(μ). In addition, or alternatively, to the field variable itself, the approximation of outputs of interest can also be obtained inexpensively as ŷ(μ) = y(û(μ)). More precisely, given a matrix V = [V_1, …, V_N] whose columns span the reduced basis space, the Galerkin projection is employed to generate the ROM as follows:

$$
V^T \mathcal{L}_I(t^n)[V a^{n+1}(\mu)] = V^T \mathcal{L}_E(t^n)[V a^{n}(\mu)] + V^T g\!\left(V a^{n}(\mu)\right),
\tag{8}
$$

where a^n(μ) = (a_1^n(μ), …, a_N^n(μ))^T ∈ R^N is the vector of weights of the approximation û^n(μ) = V a^n(μ) = Σ_{i=1}^{N} a_i^n(μ) V_i, and it is the vector of unknowns in the ROM. Thanks to the linearity of the operators ℒ_I and ℒ_E, the ROM (8) can be rewritten as

$$
V^T \mathcal{L}_I(t^n) V\, a^{n+1}(\mu) = V^T \mathcal{L}_E(t^n) V\, a^{n}(\mu) + V^T g\!\left(V a^{n}(\mu)\right),
\tag{9}
$$

where V^T ℒ_I(t^n) V and V^T ℒ_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), V^T g(V a^n(μ)), cannot be done analogously, because of the nonlinearity of g. This will be tackled by a technique of empirical interpolation, to be addressed in the next section.
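The offline-online split around (9) can be sketched as follows; this is an illustrative numpy fragment with random stand-ins for the operators, not the chromatography model. The projected linear operator is computed once offline, while the nonlinear term still requires lifting to the full dimension, which is precisely what motivates the empirical interpolation of the next section.

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_rb = 200, 5

# Stand-ins: a linear explicit operator and a nonlinear map g.
L_E = np.eye(n_full) + 0.01 * rng.standard_normal((n_full, n_full))
g = lambda u: u / (1.0 + np.abs(u))          # placeholder nonlinearity

# Reduced basis V with orthonormal columns (e.g. from POD-Greedy).
V, _ = np.linalg.qr(rng.standard_normal((n_full, n_rb)))

L_E_r = V.T @ L_E @ V                         # offline: computed once

a = rng.standard_normal(n_rb)                 # reduced unknowns a^n(mu)
lin = L_E_r @ a                               # online: O(n_rb^2) cost
nonlin = V.T @ g(V @ a)                       # online: still O(n_full)
```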

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: P_train, μ_0, tol_RB (< 1)
Output: RB V = [V_1, …, V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ_0, η_N(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^{K}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū = [ū^0, …, ū^K], with ū^n = u^n(μ_max) − Π_{W^N}[u^n(μ_max)], n = 0, …, K; here Π_{W^N}[u] is the projection of u onto the current space W^N = span{V_1, …, V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ ∈ P_train} η_N(μ)
7: end while
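A compact Python sketch of Algorithm 1 on a toy parametrized trajectory is given below. Since the error bound of Section 7 is not available here, the true projection error of the trajectory is used as the indicator η_N, which Remark 3.1 permits in principle; the toy model and all names are illustrative, not the chromatography FOM.

```python
import numpy as np

def trajectory(mu, K=50, n=100):
    # Toy parametrized trajectory; columns are the time steps u^0..u^K.
    x = np.linspace(0.0, 1.0, n)[:, None]
    t = np.linspace(0.0, 1.0, K + 1)[None, :]
    return np.sin(np.pi * mu * (x + t)) + x * t

def proj(V, U):
    # Orthogonal projection of the columns of U onto span(V).
    return V @ (V.T @ U)

P_train = np.linspace(0.5, 2.0, 8)
V = np.zeros((100, 0))

def eta(mu):
    # Indicator: here, the true projection error of the whole trajectory.
    U = trajectory(mu)
    return np.linalg.norm(U - proj(V, U))

mu_max, tol_rb = P_train[0], 1e-6
while eta(mu_max) > tol_rb:
    E = trajectory(mu_max) - proj(V, trajectory(mu_max))
    pod1 = np.linalg.svd(E)[0][:, :1]             # first POD mode
    V = np.linalg.qr(np.hstack([V, pod1]))[0]     # enrich, re-orthonormalize
    mu_max = max(P_train, key=eta)                # greedy selection
```

On this smooth toy manifold the loop terminates with a basis far smaller than the 100-dimensional "full" space, which is the whole point of the greedy construction.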

Remark 3.1. In Algorithm 1, η_N(μ_max) is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution u^n(μ), obtained by solving the full large system, an error bound is usually used instead. This is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute than the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: the true error is checked at the parameter determined by the greedy algorithm, which yields an early stop for the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 will be time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or non-affine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or non-affine part, e.g., V^T g(V a^n(μ)) in (8), requires computations in the original full space. In such a case, the EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM, and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable as well. The idea of the EIM, introduced in [2], is briefly presented as follows.

Given a non-affine μ-dependent function g(x;μ) with sufficient regularity, (x, μ) ∈ Ω × 𝒫 ⊂ R^d × R^p, the idea of the EIM is to approximate g(x;μ) by a linear combination of a precomputed μ-independent basis W = [W_1, …, W_M], termed the collateral reduced basis (CRB), with corresponding μ-dependent coefficients σ(μ) = [σ_1(μ), …, σ_M(μ)]^T, i.e.,

$$
\hat{g}(x;\mu) = \sum_{i=1}^{M} W_i(x)\, \sigma_i(\mu).
$$

Here, the coefficients σ_i are parameter-dependent and determined by solving the linear system

$$
g(x_j;\mu) = \sum_{i=1}^{M} W_i(x_j)\, \sigma_i(\mu), \quad j = 1, \ldots, M,
\tag{10}
$$

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ĝ(x;μ) interpolates the exact value g(x;μ) at the EI points T_M = {x_1, …, x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Remark 4.1. Algorithm 2 is used for a fast evaluation of a non-affine function of the coordinate x and the parameter μ by means of interpolation. In [11, 25], the idea


Algorithm 2 Generation of the CRB and the EI points
Input: L_train^crb = {g(x;μ) | μ ∈ P_train^crb}, tol_CRB (< 1)
Output: CRB W = [W_1, …, W_M] and EI points T_M = {x_1, …, x_M}
1: Initialization: m = 1, W_EI^0 = [ ], ξ^0 = 1
2: while ξ^{m−1} > tol_CRB do
3:   For each g ∈ L_train^crb, compute the "best" approximation ĝ = Σ_{i=1}^{m−1} σ_i W_i in the current space W_EI^{m−1} = span{W_1, …, W_{m−1}}, where the σ_i are obtained by solving the linear system (10)
4:   Define g_m = arg max_{g ∈ L_train^crb} ‖g − ĝ‖, the error function ξ_m = g_m − ĝ_m, and the error ξ^m = ‖ξ_m‖
5:   if ξ^m ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis vector:

$$
x_m = \arg\sup_{x \in \Omega} |\xi_m(x)|, \qquad W_m = \frac{\xi_m}{\xi_m(x_m)}
\tag{11}
$$

9:   end if
10:  m = m + 1
11: end while

was extended to the more general case of empirical operator interpolation, which is applicable to an operator that also depends on the field variable u(t,x;μ), e.g., g(u(t,x;μ), x; μ). The evaluation of g(x_j;μ) in (10) is then replaced by g(u(t,x_j;μ), x_j; μ). In this paper, we use empirical operator interpolation, where the non-affine operator appears as g(u(t,x;μ); μ). The details are addressed in Section 6.2.
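The construction in Algorithm 2, specialized to a scalar toy function family (an illustrative stand-in, not the chromatography operator), can be sketched in a few lines: the greedy loop picks the worst-approximated snapshot, places the next interpolation point at the sup of its error as in (11), and the coefficients σ(μ) come from the small M×M system (10).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 5.0, 30)
# Toy non-affine snapshot family g(x; mu); columns correspond to mu.
G = np.array([1.0 / (1.0 + mu * x) for mu in mus]).T

W = np.zeros((x.size, 0))   # collateral reduced basis (CRB)
pts = []                    # indices of the EI points within x
tol_crb = 1e-8
while True:
    if pts:
        # Coefficients from the interpolation system (10), then residuals.
        sigma = np.linalg.solve(W[pts, :], G[pts, :])
        R = G - W @ sigma
    else:
        R = G.copy()
    errs = np.max(np.abs(R), axis=0)
    m = int(np.argmax(errs))             # worst-approximated snapshot
    if errs[m] <= tol_crb:
        break
    xi = R[:, m]                         # error function xi_m
    j = int(np.argmax(np.abs(xi)))       # next EI point, cf. (11)
    W = np.hstack([W, (xi / xi[j])[:, None]])
    pts.append(j)
```

By construction, each new basis vector vanishes at all previously chosen points, so the interpolation matrix W[pts, :] stays triangular with unit diagonal and the small system remains solvable.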

5 Adaptive snapshot selection

In this section, we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper, we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or the CRB, a training set P_train or P_train^crb of parameters must be determined. On the one hand, the size of the training set should be as large as possible, so that as much information as possible about the parametric system can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of the generation of the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in the parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by using model-constrained optimization [9]. In these papers, the authors intend to choose the sample points adaptively and to obtain an "optimal" training set. An "optimal" training set means that the original manifold ℳ = {u(μ) | μ ∈ 𝒫} can be well represented by the submanifold ℳ̂ = {u(μ) | μ ∈ P_train} induced by the sample set P_train, with its size as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems may arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū, due to the large size of the matrix Ū. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because some important information may have been lost during such a trivial snapshot selection.

For an "optimal" or otherwise selected training set, we propose to select the snapshots adaptively, according to the variation of the trajectory of the solution {u^n(μ)}_{n=0}^{K}. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 can be measured by the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity

$$
1 - \frac{|\langle v_1, v_2 \rangle|}{\|v_1\|\,\|v_2\|}
\qquad \left( \cos(\theta) = \frac{\langle v_1, v_2 \rangle}{\|v_1\|\,\|v_2\|} \right)
$$

is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter μ and the initial vector u⁰(μ), the numerical solutions u^n(μ) (n = 1, …, K) can be obtained, e.g., by using the evolution scheme (7). Define the indicator

$$
\mathrm{Ind}\!\left(u^n(\mu), u^m(\mu)\right)
= 1 - \frac{|\langle u^n(\mu), u^m(\mu) \rangle|}{\|u^n(\mu)\|\,\|u^m(\mu)\|},
$$

which is used to measure the linear dependency of the two vectors. When Ind(u^n(μ), u^m(μ)) is large, the correlation between u^n(μ) and u^m(μ) is weak. Algorithm 3 shows the realization of the ASS: u^n(μ) is taken as a new snapshot only when u^n(μ) and u^{n_j}(μ) are "sufficiently" linearly independent, which is verified by checking whether Ind(u^n(μ), u^{n_j}(μ)) is large enough. Here, u^{n_j}(μ) is the last selected snapshot.

Remark 5.1. The inner product ⟨·,·⟩: 𝒲^𝒩 × 𝒲^𝒩 → R used above is properly defined according to the solution space 𝒲^𝒩, and the norm ‖·‖ is induced by the inner product correspondingly. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(μ) and the subspace spanned by the already selected snapshots S_A. More redundant information can be discarded in this way, but at a higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; a value of O(10⁻⁴) gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector u⁰(μ), tol_ASS
Output: Selected snapshot matrix S_A = [u^{n_1}(μ), u^{n_2}(μ), …, u^{n_ℓ}(μ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(μ)]
2: for n = 1, …, K do
3:   Compute the vector u^n(μ)
4:   if Ind(u^n(μ), u^{n_j}(μ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(μ)]
8:   end if
9: end for

examples studied in Section 8, based on our observations.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and the CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1). There is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: $P_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{\mathrm{RB}}\ (< 1)$
Output: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu^{\max} = \mu^0$, $\eta(\mu^{\max}) = 1$
2: while $\eta_N(\mu^{\max}) > \mathrm{tol}_{\mathrm{RB}}$ do
3:   Compute the trajectory $S^{\max} = \{u^n(\mu^{\max})\}_{n=0}^{K}$, adaptively selecting snapshots using Algorithm 3, and get $S_A^{\max} = \{u^{n_1}(\mu^{\max}), \ldots, u^{n_\ell}(\mu^{\max})\}$
4:   Enrich the RB, e.g., $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar U_A = [\bar u^{n_1}, \ldots, \bar u^{n_\ell}]$, with $\bar u^{n_s} = u^{n_s}(\mu^{\max}) - \Pi_{W_N}[u^{n_s}(\mu^{\max})]$, $s = 1, \ldots, \ell$, $\ell \ll K$; here $\Pi_{W_N}[u]$ is the projection of $u$ onto the current space $W_N = \mathrm{span}\{V_1, \ldots, V_N\}$
5:   $N = N + 1$
6:   Find $\mu^{\max} = \arg\max_{\mu \in P_{\mathrm{train}}} \eta_N(\mu)$
7: end while
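A minimal sketch of the loop of Algorithm 4, under simplifying assumptions: `solve_traj_ass(mu)` is a hypothetical function returning the adaptively selected snapshot matrix of Algorithm 3, and `error_est(V, mu)` is a hypothetical stand-in for the error indicator $\eta_N(\mu)$.

```python
import numpy as np

def pod_greedy_ass(params, solve_traj_ass, error_est, tol_rb, max_n=100):
    """Sketch of the ASS-POD-Greedy basis generation (Algorithm 4)."""
    V = np.zeros((0, 0))                 # empty basis
    mu_max = params[0]                   # initial parameter mu^0
    while True:
        S = solve_traj_ass(mu_max)       # trajectory + adaptive selection
        if V.size:                       # subtract the projection onto W_N
            S = S - V @ (V.T @ S)
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        v_new = U[:, :1]                 # first POD mode of the residuals
        V = v_new if not V.size else np.hstack([V, v_new])
        V, _ = np.linalg.qr(V)           # re-orthonormalize for stability
        etas = [error_est(V, mu) for mu in params]
        i = int(np.argmax(etas))         # worst-approximated parameter
        mu_max = params[i]
        if etas[i] <= tol_rb or V.shape[1] >= max_n:
            return V
```

In the test below the indicator is taken as the true projection error, which is affordable for a toy problem; in practice the cheap error bound of Section 7 plays this role.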

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)–(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)–(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)–(4) can be written as follows,

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n, \qquad q_z^{n+1} = q_z^n + \Delta t\, h_z^n, \qquad (12)$$

where $c_z^n = c_z^n(\mu) = (c_{z,1}^n, \ldots, c_{z,\mathcal{N}}^n)^T$ and $q_z^n = q_z^n(\mu) = (q_{z,1}^n, \ldots, q_{z,\mathcal{N}}^n)^T \in \mathbb{R}^{\mathcal{N}}$, $z = a, b$, indicate the solutions of the field variables $c_z$ and $q_z$ at the time instance $t = t^n$ ($n = 0, \ldots, K$). $A$ and $B$ are tridiagonal constant matrices; $d_z^n$ and $h_z^n$ are parameter- and time-dependent,

$$d_z^n = d_0^n\, e_1, \qquad h_z^n = (h_{z,1}^n, \ldots, h_{z,\mathcal{N}}^n)^T,$$

with $d_0^n = \frac{\Delta x}{\mathrm{Pe}}\left(\frac{\lambda}{2} + \nu\right)\chi_{[0,t_{\mathrm{in}}]}(t^n)$, $\lambda = \frac{\Delta t}{\Delta x}$, $\nu = \frac{\Delta t}{\mathrm{Pe}\,\Delta x^2}$, $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{\mathcal{N}}$, and

$$h_{z,j}^n = h_z(c_{a,j}^n, c_{b,j}^n, q_{z,j}^n) = \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\left(f_z(c_{a,j}^n, c_{b,j}^n) - q_{z,j}^n\right), \quad j = 1, \ldots, \mathcal{N}.$$
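For illustration only, the following sketch assembles tridiagonal Crank-Nicolson matrices of the type appearing in (12) for a 1D convection-diffusion operator, with $\lambda = \Delta t/\Delta x$ and $\nu = \Delta t/(\mathrm{Pe}\,\Delta x^2)$ as above; the stencils and the boundary closure here are a simplified assumption, not the exact treatment used for (12).

```python
import numpy as np

def cn_matrices(N, dt, dx, Pe):
    """Crank-Nicolson matrices A, B for u_t + u_x = (1/Pe) u_xx with
    central convection/diffusion differences on the interior rows."""
    lam = dt / dx                 # lambda = dt / dx
    nu = dt / (Pe * dx * dx)      # nu = dt / (Pe dx^2)
    D = (-2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))                    # u_xx stencil
    C = (np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / 2.0              # u_x stencil
    L = -lam * C + nu * D         # spatial operator scaled by dt
    A = np.eye(N) - 0.5 * L       # implicit part (left-hand side)
    B = np.eye(N) + 0.5 * L       # explicit part (right-hand side)
    return A, B
```

Both matrices are tridiagonal and constant in time, exactly the structure exploited by the error bounds of Section 7.2.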

6.2 Reduced-order model

Let $N \in \mathbb{N}^+$ be the number of the RB vectors for $c_z$ and $q_z$, and $M \in \mathbb{N}^+$ the number of the CRB vectors for the operators $h_a$ and $h_b$. Here, for simplicity of the analysis, we use the same dimension $N$ of the RB for $c_a$, $c_b$, $q_a$ and $q_b$, but one can certainly take different dimensions for the RB; this also applies to $h_a$ and $h_b$. Assume that $W_z \in \mathbb{R}^{\mathcal{N} \times M}$ is the CRB for the nonlinear operator $h_z$, and $V_{c_z}, V_{q_z} \in \mathbb{R}^{\mathcal{N} \times N}$ ($V_{c_z}^T V_{c_z} = I$, $V_{q_z}^T V_{q_z} = I$) are the RB for the field variables $c_z$ and $q_z$, respectively, i.e.

$$h_z^n \approx W_z \beta_z^n, \qquad c_z^n \approx \hat{c}_z^n = V_{c_z} a_{c_z}^n, \qquad q_z^n \approx \hat{q}_z^n = V_{q_z} a_{q_z}^n, \qquad n = 0, \ldots, K. \qquad (13)$$

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows,

$$A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n\, d_{c_z} - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, H_{c_z} \beta_z^n, \qquad a_{q_z}^{n+1} = a_{q_z}^n + \Delta t\, H_{q_z} \beta_z^n, \qquad (14)$$

where $a_{c_z}^n = a_{c_z}^n(\mu) = (a_{c_z,1}^n, \ldots, a_{c_z,N}^n)^T$ and $a_{q_z}^n = a_{q_z}^n(\mu) = (a_{q_z,1}^n, \ldots, a_{q_z,N}^n)^T \in \mathbb{R}^N$ are the reduced state vectors of the ROM, and $A_{c_z} = V_{c_z}^T A V_{c_z}$, $B_{c_z} = V_{c_z}^T B V_{c_z}$, $d_{c_z} = V_{c_z}^T e_1$, $H_{c_z} = V_{c_z}^T W_z$, $H_{q_z} = V_{q_z}^T W_z$ are the reduced matrices.


Note that $\beta_z^n = \beta_z^n(\mu) = (\beta_{z,1}^n, \ldots, \beta_{z,M}^n)^T \in \mathbb{R}^M$ is the vector of coefficients for the empirical interpolation of the nonlinear operator $h_z^n$; it is parameter- and time-dependent. The evaluation of $\beta_z^n$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations,

$$\sum_{i=1}^{M} \beta_{z,i}^n\, W_{z_i}(x_j) = \hat h_{z,j}^n, \quad j = 1, \ldots, M.$$

Here the evaluation of $\hat h_{z,j}^n$ only needs the $j$-th entries ($\hat c_{a,j}^n$, $\hat c_{b,j}^n$ and $\hat q_{z,j}^n$) of the approximate solution vectors ($\hat c_a^n$, $\hat c_b^n$ and $\hat q_z^n$), i.e. $\hat h_{z,j}^n = h_z(\hat c_{a,j}^n, \hat c_{b,j}^n, \hat q_{z,j}^n)$. For a general operator, the value of the empirical interpolation at an interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors); for more details, refer to [11, 25].
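As a sketch, the small $M \times M$ interpolation system can be solved as follows; by the empirical interpolation construction the matrix of basis values at the interpolation points is lower triangular with nonzero diagonal, so forward substitution would suffice, and the cost is independent of $\mathcal{N}$.

```python
import numpy as np

def eim_coefficients(W_at_pts, h_at_pts):
    """Solve sum_i beta_i * W_i(x_j) = h_j for j = 1..M.
    W_at_pts[j, i] = W_i(x_j); a dense solve is used for simplicity."""
    return np.linalg.solve(W_at_pts, h_at_pts)
```

The right-hand side `h_at_pts` holds the nonlinearity evaluated only at the $M$ interpolation points, which is the source of the online speed-up.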

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation of the reduced bases from their use. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all $\mathcal{N}$-dependent terms are computed and stored; in the online phase, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sample sets $P^{\mathrm{crb}}_{\mathrm{train}}$ and $P_{\mathrm{train}}$ (they can be chosen differently), Algorithm 2 is implemented to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal{N}$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $A_{c_z}$, $B_{c_z}$, $d_{c_z}$, $H_{c_z}$ and $H_{q_z}$), and the $\mathcal{N}$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in P$, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered by (13).
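The online phase then consists of small dense operations only. A minimal sketch of one online time step of the form (14), for one component, with a hypothetical dictionary `red` holding the precomputed reduced matrices:

```python
import numpy as np

def rom_step(a_c, a_q, beta, red, d0n, dt, eps):
    """One ROM step (14): red['A'], red['B'], red['d'], red['H_c'],
    red['H_q'] are the precomputed N x N (resp. N x M) reduced matrices."""
    rhs = (red['B'] @ a_c + d0n * red['d']
           - (1.0 - eps) / eps * dt * (red['H_c'] @ beta))
    a_c_new = np.linalg.solve(red['A'], rhs)   # N x N solve, N small
    a_q_new = a_q + dt * (red['H_q'] @ beta)
    return a_c_new, a_q_new
```

No quantity of dimension $\mathcal{N}$ appears in this step, which is what makes the online phase cheap.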

7 Output-oriented error estimation

It is crucial to obtain a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than that in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response $y(u^N)$ is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimation $\eta_N(\mu^{\max})$ should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2\rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$. The induced norm $\|\cdot\|$ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined with the mass matrix of the solution space, and the norm is the correspondingly induced norm.
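The two cases just described can be sketched together; the mass matrix `M` is only needed in the FE setting.

```python
import numpy as np

def inner(u, v, M=None):
    """Euclidean inner product (FV case), or mass-matrix weighted
    product <u, v>_M = u^T M v for FE coefficient vectors."""
    return float(u @ v) if M is None else float(u @ (M @ v))

def norm(u, M=None):
    # the induced norm
    return inner(u, u, M) ** 0.5
```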

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling that in Section 3 $\mathcal{L}^I(t^n)$ and $\mathcal{L}^E(t^n)$ are linear, the evolution scheme (7) can be rewritten as follows in the vector space,

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\left(u^n(\mu); \mu\right), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ are constant matrices, and $g(u^n(\mu); \mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0, \ldots, K-1$.

Given a parameter $\mu \in P$, let $\hat u^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and $\hat g^n(\mu) = I_M[g(\hat u^n(\mu))] = W \beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N} \times N}$ and $W \in \mathbb{R}^{\mathcal{N} \times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\hat u^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$, and use $u^n$, $\hat u^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual,

$$r^{n+1}(\mu) = B^{(n)} \hat u^n + I_M[g(\hat u^n)] - A^{(n)} \hat u^{n+1}. \qquad (16)$$

With a simple computation, we get the norm of the residual,

$$\begin{aligned} \left\|r^{n+1}(\mu)\right\|^2 &= \left\langle r^{n+1}(\mu),\, r^{n+1}(\mu)\right\rangle\\ &= (a^n)^T\, \underline{V^T (B^{(n)})^T B^{(n)} V}\, a^n + (\beta^n)^T\, \underline{W^T W}\, \beta^n\\ &\quad + (a^{n+1})^T\, \underline{V^T (A^{(n)})^T A^{(n)} V}\, a^{n+1} + 2\, (\beta^n)^T\, \underline{W^T B^{(n)} V}\, a^n\\ &\quad - 2\, (a^n)^T\, \underline{V^T (B^{(n)})^T A^{(n)} V}\, a^{n+1} - 2\, (\beta^n)^T\, \underline{W^T A^{(n)} V}\, a^{n+1}. \end{aligned} \qquad (17)$$
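All the underlined quantities in (17) are small ($N$- or $M$-dimensional) matrices that can be precomputed offline, so the online evaluation of the residual norm is independent of $\mathcal{N}$. A sketch, where the dictionary keys are our own naming:

```python
import numpy as np

def residual_norm_sq(a_n, a_np1, beta_n, G):
    """||r^{n+1}||^2 by (17) with precomputed Gram matrices:
    G['BB'] = V^T B^T B V, G['WW'] = W^T W, G['AA'] = V^T A^T A V,
    G['WB'] = W^T B V,     G['BA'] = V^T B^T A V, G['WA'] = W^T A V."""
    return float(a_n @ G['BB'] @ a_n + beta_n @ G['WW'] @ beta_n
                 + a_np1 @ G['AA'] @ a_np1 + 2.0 * (beta_n @ G['WB'] @ a_n)
                 - 2.0 * (a_n @ G['BA'] @ a_np1)
                 - 2.0 * (beta_n @ G['WA'] @ a_np1))
```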

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \rightarrow \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e. there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \quad x, y \in \mathcal{W}^{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" with a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.

$$I_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m\, \beta_m^n = g(\hat u^n).$$

Assume further that for all $\mu \in P$ the initial projection error vanishes, $\|e^0(\mu)\| = 0$. Then the approximation error $e^n(\mu) = u^n - \hat u^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left\|(A^{(k)})^{-1}\right\| \left(\prod_{j=k+1}^{n-1} G^{(j)}\right) \left(\varepsilon_{EI}^k(\mu) + \left\|r^{k+1}(\mu)\right\|\right), \qquad (18)$$

where $G^{(j)} = \|(A^{(j)})^{-1}\| \left(\|B^{(j)}\| + L_g\right)$ and $\varepsilon_{EI}^n(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta_{N,M}^n(\mu) := \sum_{k=0}^{n-1} \left(\prod_{j=k+1}^{n-1} G_F^{(j)}\right) \left(\left\|(A^{(k)})^{-1}\right\| \varepsilon_{EI}^k(\mu) + \left\|(A^{(k)})^{-1} r^{k+1}(\mu)\right\|\right), \qquad (19)$$

where $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\|$, $n = 0, \ldots, K-1$.

Proof. By forming the difference of (15) and (16), we obtain the error equation

$$\begin{aligned} A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat u^n)] + r^{n+1}(\mu)\\ &= B^{(n)} e^n(\mu) + \left(g(u^n) - g(\hat u^n)\right) + \left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + r^{n+1}(\mu). \end{aligned} \qquad (20)$$

Multiplying by $(A^{(n)})^{-1}$ on both sides of (20), we obtain

$$\begin{aligned} e^{n+1}(\mu) = (A^{(n)})^{-1} B^{(n)} e^n(\mu) &+ (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right)\\ &+ (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu). \end{aligned} \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat u^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

$$\left\|e^{n+1}(\mu)\right\| \le \left\|(A^{(n)})^{-1}\right\| \left(\left(\left\|B^{(n)}\right\| + L_g\right) \left\|e^n(\mu)\right\| + \varepsilon_{EI}^n(\mu) + \left\|r^{n+1}(\mu)\right\|\right), \qquad (22)$$

where $\varepsilon_{EI}^n(\mu) = \left\|g(\hat u^n) - I_M[g(\hat u^n)]\right\| = \sum_{m=M+1}^{M+M'} \|W_m\| \cdot |\beta_m^n|$. Resolving the recursion (22) with the initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To obtain the error bound in (19), we re-examine the equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form,

$$\left\|e^{n+1}(\mu)\right\| \le \left(\left\|(A^{(n)})^{-1} B^{(n)}\right\| + L_g \left\|(A^{(n)})^{-1}\right\|\right) \left\|e^n(\mu)\right\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (23)$$

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\|\, \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\|\, \|r^{n+1}\|$. Resolving the recursion (23) with the initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19). □
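Numerically, the bound (19) is conveniently accumulated step by step via the recursion (23), rather than by evaluating the nested sum and product directly; a sketch, with all per-step quantities passed in as sequences:

```python
def error_bound_19(GF, Ainv_norm, eps_EI, Ainv_res_norm):
    """Accumulate (19) by the recursion (23), starting from ||e^0|| = 0:
    bound_{n+1} = GF[n]*bound_n + Ainv_norm[n]*eps_EI[n] + Ainv_res_norm[n],
    with GF[n] = ||(A^(n))^{-1} B^(n)|| + L_g ||(A^(n))^{-1}||."""
    bound, history = 0.0, []
    for n in range(len(GF)):
        bound = GF[n] * bound + Ainv_norm[n] * eps_EI[n] + Ainv_res_norm[n]
        history.append(bound)          # bound for ||e^{n+1}||
    return history
```

The norms of the inverses can be obtained without forming them, since $\|(A^{(n)})^{-1}\|_2 = 1/\sigma_{\min}(A^{(n)})$.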

Remark 7.2. In many cases, the operators $\mathcal{L}^I(t^n)$ and $\mathcal{L}^E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices, see, e.g., (12). In such a case, the error bound becomes much simpler, see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in functional space; the error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\|\left(\|B^{(j)}\| + L_g\right) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$. In addition, as shown in [11], a small $M'$ gives good results in practice; we use $M' = 1$ in the simulations below.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\left\|H^{-1}\right\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)},$$

and, as a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself, but some output. In such cases, it is desirable to estimate the output error, in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the following form,

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix. Then the output error $e_O^n(\mu) = P u^n - P \hat u^n$ satisfies

$$\left\|e_O^{n+1}(\mu)\right\| \le \bar\eta_{N,M}^{n+1} := G_O^{(n)}\, \eta_{N,M}^n + \left\|P (A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (25)$$

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying by $P$ from the left on both sides of the error equation (21), we get

$$\begin{aligned} P e^{n+1}(\mu) = P\Big((A^{(n)})^{-1} B^{(n)} e^n(\mu) &+ (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right)\\ &+ (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu)\Big). \end{aligned}$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the property of the matrix norm, we have

$$\left\|e_O^{n+1}(\mu)\right\| = \left\|P e^{n+1}(\mu)\right\| \le G_O^{(n)} \left\|e^n(\mu)\right\| + \left\|P (A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

$$\begin{aligned} \left\|e_O^{n+1}(\mu)\right\| = \left\|P e^{n+1}(\mu)\right\| &\le \|P\| \left\|e^{n+1}(\mu)\right\|\\ &\le \|P\| \left(G_F^{(n)} \left\|e^n(\mu)\right\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|\right). \end{aligned} \qquad (27)$$

The last inequality holds due to the inequality (23). It is obvious that the bound for $\|e_O^{n+1}(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution of each equation might be quite different; therefore, it is desirable to generate a separate reduced basis for each field variable, rather than using a unified basis for all field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the discrete evolution for $c_z$ (see (12)),

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n. \qquad (28)$$

The residual caused by the approximate solution $\hat c_z^n$ in (13) is

$$r_{c_z}^{n+1}(\mu) = B \hat c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat c_z^n)] - A \hat c_z^{n+1}. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This means that the following error bounds (31)–(33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition, and does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume there exists a positive constant $L_h$ such that

$$\left\|h_z(c_a^n, c_b^n, q_z^n) - h_z(\hat c_a^n, \hat c_b^n, \hat q_z^n)\right\| \le L_h \left\|c_z^n - \hat c_z^n\right\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming that the initial projection error vanishes, $\|e_{c_z}^0(\mu)\| = 0$, we have a similar estimation for the approximation error $e_{c_z}^n(\mu) = c_z^n - \hat c_z^n$ ($n = 1, \ldots, K$) as follows,

$$\left\|e_{c_z}^n(\mu)\right\| \le \sum_{k=0}^{n-1} \left\|A^{-1}\right\|^{n-k} C^{n-1-k} \left(\tau\, \varepsilon_{EI}^k(\mu) + \left\|r_{c_z}^{k+1}(\mu)\right\|\right), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\left\|e_{c_z}^n(\mu)\right\| \le \eta_{N,M,c_z}^n(\mu) := \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k} \left(\tau \left\|A^{-1}\right\| \varepsilon_{EI}^k(\mu) + \left\|A^{-1} r_{c_z}^{k+1}(\mu)\right\|\right), \qquad (32)$$

where $G_{F_c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest, $e_{c_z,O}^n(\mu) = P c_z^n - P \hat c_z^n$, can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\left\|e_{c_z,O}^{n+1}(\mu)\right\| \le \bar\eta_{N,M,c_z}^{n+1}(\mu) := G_{O_c}\, \eta_{N,M,c_z}^n(\mu) + \tau \left\|P A^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|A^{-1} r_{c_z}^{n+1}(\mu)\right\|, \qquad (33)$$

where $G_{O_c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{\mathcal{N}}$ in this model, which means that the norm of the output error $e_{c_z,O}^{n+1}(\mu)$ is the absolute value of the last entry of the error vector $e_{c_z}^{n+1}(\mu)$.

(micro)Remark 78 The error estimate for qa and qb in (12) can also be obtained similarlyby following the derivation in Section 71 As the output of interest for the system in(12) only depends on ca and cb the error estimates for qa and qb are not needed forthe output error bound and therefore will not be presented hereRemark 79 As is mentioned above it is possible to derive an error estimation forthe field variables U = (ca cb qa qb)T by considering hz(ca cb qz) as a function of thevector U However for the output error bound in (33) the error bound ηnNMcz

(micro) forthe field variable cz is involved so is the desired error bound (denoted as ηnNMU(micro))for the vector U if the output error estimation is derived by considering all the field

18

variables together Obviously the error bound ηnNMU(micro) is much rougher than thebound ηnNMcz

(micro)The assumption (30) is easily fulfilled in practice In fact the constant Lh can be

conservatively chosen large and the weight τLh is still small because the time step ∆tis typically very small

7.3 An early-stop criterion for the greedy algorithm

From the expressions of the error estimators above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates over time. Since $\eta_{N,M}^n(\mu)$ (or $\eta_{N,M,c_z}^n(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., in [30], where it is pointed out that an error estimate such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to obtain an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, we further check the true output error at the parameter $\mu^{\max}$ determined by the greedy algorithm. When the true output error at $\mu^{\max}$ is smaller than $\mathrm{tol}_{\mathrm{RB}}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: $P_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{\mathrm{RB}}\ (< 1)$, $\mathrm{tol}_{\mathrm{decay}}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4
2: while the error $\eta_N(\mu^{\max}) > \mathrm{tol}_{\mathrm{RB}}$ do
3:   Implement Steps 3–6 in Algorithm 4
4:   Compute the decay rate of the error bound, $d_\eta = \frac{\eta_{N-1}(\mu^{\mathrm{old}}_{\max}) - \eta_N(\mu^{\max})}{\eta_{N-1}(\mu^{\mathrm{old}}_{\max})}$
5:   if $d_\eta < \mathrm{tol}_{\mathrm{decay}}$ then
6:     Compute the true output error at the selected parameter $\mu^{\max}$: $e_N(\mu^{\max})$
7:     if $e_N(\mu^{\max}) < \mathrm{tol}_{\mathrm{RB}}$ then
8:       Stop
9:     end if
10:   end if
11: end while
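The validation step of Algorithm 5 amounts to a small check per greedy iteration; a sketch:

```python
def early_stop(eta_prev, eta_curr, true_output_err, tol_rb, tol_decay):
    """Early-stop test (Algorithm 5, Steps 4-9): if the relative decay of
    the error bound stagnates, decide on the (expensive) true output
    error at mu_max instead of the bound."""
    d_eta = (eta_prev - eta_curr) / eta_prev   # decay rate of the bound
    if d_eta < tol_decay:                      # bound stagnates
        return true_output_err < tol_rb        # stop only if truly accurate
    return False                               # keep extending the RB
```

The true output error is evaluated only at the single parameter $\mu^{\max}$, so the extra cost per iteration is one detailed and one reduced simulation at most.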


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to account for such a case, the tolerance $\mathrm{tol}_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 takes effect.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variables $\mu = (Q, t_{\mathrm{in}})$ are chosen optimally in a reasonable parameter domain to maximize the production rate $\mathrm{Pr}(\mu) = \frac{s(\mu)\, Q}{t_{\mathrm{cyc}}}$, while respecting the requirement on the recovery yield $\mathrm{Rec}(\mu) = \frac{s(\mu)}{t_{\mathrm{in}}(c_a^f + c_b^f)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\, dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\, dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows,

$$\begin{aligned} &\min_{\mu \in P}\ -\mathrm{Pr}(\mu)\\ &\ \text{s.t.}\ \ \mathrm{Rec}^{\min} - \mathrm{Rec}(\mu) \le 0, \quad \mu \in P,\\ &\qquad c_z(\mu),\ q_z(\mu)\ \text{are the solutions of the system (3)--(5)},\ z = a, b. \end{aligned} \qquad (34)$$

Notice that when solving the system (3)–(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in P$, which causes considerable difficulties in the error estimation and in the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients of the isotherm equation (4) are given in Table 2. The parameter domain for the operating variables $\mu$ is $P = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}^{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$ and $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization, $\mathcal{N}$, in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

  Column dimensions [cm]                            2.6 × 10.5
  Column porosity ε [-]                             0.4
  Peclet number Pe [-]                              2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]    0.1
  Feed concentrations c_z^f, z = a, b [g/l]         2.9


Table 2: Coefficients of the adsorption isotherm equation.

  H_{a1} [-]    2.69      H_{b1} [-]    3.73
  H_{a2} [-]    0.1       H_{b2} [-]    0.3
  K_{a1} [l/g]  0.0336    K_{b1} [l/g]  0.0446
  K_{a2} [l/g]  1.0       K_{b2} [l/g]  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the performance of the output error estimation for the generation of the RB, and finally present the results of the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtime for the generation of the RB and CRB with different threshold values $\mathrm{tol}_{\mathrm{ASS}}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu^{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{\mathrm{in}})$, uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes of the CRB generation with different choices of $\mathrm{tol}_{\mathrm{ASS}}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved; this means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of the CRBs ($W_a$, $W_b$) at the same error tolerance ($\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$) with different thresholds. $M' = 1$ is the number of basis vectors used for the error estimation.

           tol_ASS       Res(ξ^a_{M+M'}, ξ^b_{M+M'})    M (W_a, W_b)   Runtime [h]
  no ASS   --            9.2 × 10^-8, 8.5 × 10^-8       146, 152       62.5 (--)
  ASS      1.0 × 10^-4   9.6 × 10^-8, 8.1 × 10^-8       147, 152       6.05 (-90.3%)
  ASS      1.0 × 10^-3   8.7 × 10^-8, 9.9 × 10^-8       147, 152       3.62 (-94.2%)
  ASS      1.0 × 10^-2   9.4 × 10^-8, 6.2 × 10^-8       144, 150       2.70 (-95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$ for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{\mathrm{RB}} = 1.0 \times 10^{-6}$, $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS, at the same tolerance $\mathrm{tol}_{\mathrm{RB}}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithms         Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (-51.2%)

  ¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound

As aforementioned, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c_{z,O}^n(\mu) = P c_z^n(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g., Algorithm 4, 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\bar\eta_{N,M,c_z}^{n+1}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$, for a given parameter $\mu \in P$. We use the following error bound in Algorithm 4: $\eta_N(\mu^{\max}) = \max_{\mu \in P_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \bar\eta_{N,M,c_z}^n(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e_N^{\max} = \max_{\mu \in P_{\mathrm{train}}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \|c_{z,O}^n(\mu) - \hat c_{z,O}^n(\mu)\|$, and $\hat c_{z,O}^n(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent the problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where $\mathrm{tol}_{\mathrm{decay}} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm; the size of a circle indicates how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that for the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $P_{\mathrm{val}}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\mathrm{Max\ error} = \max_{\mu \in P_{\mathrm{val}}} e_N(\mu)$. It is seen that the average runtime of a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $\mathrm{tol}_{\mathrm{RB}}$. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter $\mu = (Q, t_{\mathrm{in}}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu^{\max})$ and the maximal true output error $e_N^{\max}$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in P_{\mathrm{train}}} \max_{z \in \{a,b\}} \eta_{N,M,c_z}(\mu)$, where $\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta_{N,M,c_z}^n(\mu)$. (The figure plots the maximal error over $P_{\mathrm{train}}$, on a logarithmic scale, versus the size of the RB, $N$, for the field variable error bound, the output error bound, and the true output error.)


Figure 3: Error bound decay during the RB extension using the early-stop technique, Algorithm 5, and the corresponding maximal true output error. (The figure plots the maximal error over $P_{\mathrm{train}}$, on a logarithmic scale, versus the size of the RB, $N$, for the output error bound and the true output error.)

Figure 4: Parameter selection in the generation of the RB, in the plane of the feed flow rate $Q$ and the injection period $t_{\mathrm{in}}$. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $P_{\mathrm{val}}$ with 600 random sample points. Tolerances for the generation of the ROM: $\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$, $\mathrm{tol}_{\mathrm{RB}} = 1.0 \times 10^{-6}$.

  Simulations              Max error      Average runtime [s]   SpF
  FOM (N = 1500)           --             312.13                (--)
  ROM (POD-Greedy)         3.79 × 10^-7   6.3                   50
  ROM (ASS-POD-Greedy)     4.58 × 10^-7   6.3                   50

Figure 5: Concentrations at the outlet of the column, using the FOM ($\mathcal{N} = 1500$) and the ROM ($N = 47$), at the parameter $\mu = (Q, t_{\mathrm{in}}) = (0.1018, 1.3487)$. (The figure plots the dimensionless concentration versus the dimensionless time for $c_a$-FOM, $c_b$-FOM, $c_a$-ROM and $c_b$-ROM.)


8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned} &\min_{\mu \in P}\ -\hat{\mathrm{Pr}}(\mu)\\ &\ \text{s.t.}\ \ \mathrm{Rec}^{\min} - \hat{\mathrm{Rec}}(\mu) \le 0,\\ &\qquad \hat c_z^n(\mu),\ \hat q_z^n(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b. \end{aligned}$$

Here $\hat{\mathrm{Pr}}(\mu)$ and $\hat{\mathrm{Rec}}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and the recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < \mathrm{tol}_{\mathrm{opt}}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $\mathrm{tol}_{\mathrm{opt}} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6: the optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the optimization problem is significantly reduced; the speed-up factor (SpF) is 54.
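The paper does not detail how the yield constraint is passed to the DIRECT-L search; one simple option, shown here purely as an assumption, is a penalty formulation around the ROM evaluation. Here `solve_rom` is a hypothetical function returning $(\hat{\mathrm{Pr}}(\mu), \hat{\mathrm{Rec}}(\mu))$ from the ROM (14).

```python
import numpy as np

def rom_objective(mu, solve_rom, rec_min=0.8, penalty=1.0e3):
    """Penalized objective for a gradient-free search: every evaluation
    uses the cheap ROM surrogate instead of the FOM."""
    Pr, Rec = solve_rom(mu)
    return -Pr + penalty * max(0.0, rec_min - Rec)

def converged(mu_new, mu_old, tol_opt=1.0e-4):
    # stopping test ||mu^{k+1} - mu^k|| < tol_opt
    return bool(np.linalg.norm(np.asarray(mu_new) - np.asarray(mu_old)) < tol_opt)
```

Any gradient-free global optimizer can minimize `rom_objective` over the box $P$; the constraint is satisfied at the optimum whenever the penalty weight is large enough.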

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization; the latter is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is the cost of constructing and using a surrogate ROM, the other is that of directly using the original FOM.
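This balance can be made concrete by a break-even query count: the smallest number k of parameter queries at which t_offline + k·t_online drops below k·t_FOM. The per-simulation timings below are those reported in Table 5; the 5-hour offline time is a purely hypothetical figure for illustration:

```python
import math

def break_even_queries(t_offline, t_online, t_full):
    """Smallest integer k with t_offline + k * t_online < k * t_full,
    i.e. the number of queries after which the surrogate ROM pays off."""
    if t_online >= t_full:
        return None  # the ROM never pays off
    return math.floor(t_offline / (t_full - t_online)) + 1

# FOM ~312.13 s and ROM ~6.3 s per simulation (Table 5); offline time hypothetical.
k = break_even_queries(5 * 3600.0, 6.3, 312.13)
```

With these illustrative numbers the ROM already pays off after a few dozen queries, far fewer than the 202 iterations of the optimization run in Table 6.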

Table 6: Comparison of the optimization based on the ROM and the FOM.

Simulations       Objective (Pr)   Opt. solution (μ*)    N_it¹   Runtime [h]   SpF
FOM-based Opt.    0.020264         (0.07964, 1.05445)    202     33.88         –
ROM-based Opt.    0.020266         (0.07964, 1.05445)    202     0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative, due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.


[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.


[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.


[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.



Abstract

In this work, we show that the reduced basis method accelerates a PDE-constrained optimization problem, where a nonlinear discretized system with a large number of degrees of freedom must be repeatedly solved during optimization. Such an optimization problem arises, for example, from batch chromatography. Instead of solving the full system of equations, a reduced model with a small number of equations is derived by the reduced basis method, such that only the small reduced system is solved at each step of the optimization process. An adaptive technique for selecting the snapshots is proposed, so that the complexity and runtime for generating the reduced basis are largely reduced. An output-oriented error bound is derived in the vector space, whereby the construction of the reduced model is managed automatically. An early-stop criterion is proposed to circumvent the stagnation of the error and to make the construction of the reduced model more efficient. Numerical examples show that the adaptive technique is very efficient in reducing the offline time. The optimization based on the reduced model is successful in terms of the accuracy and the runtime for getting the optimal solution.

Keywords: reduced basis method, empirical interpolation, adaptive snapshot selection, optimization, batch chromatography

Imprint:

Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg

Publisher:
Max Planck Institute for
Dynamics of Complex Technical Systems

Address:
Max Planck Institute for
Dynamics of Complex Technical Systems
Sandtorstr. 1
39106 Magdeburg

http://www.mpi-magdeburg.mpg.de/preprints

1 Introduction

In the last decade, optimization with constraints given by partial differential equations (PDE-constrained optimization, for short) has emerged as a challenging research area. It has increasingly arisen in various engineering contexts, such as optimal design, control, and parameter estimation. Over the past years, besides the increasing progress of the computing hardware, a large number of attempts have been devoted to the development of efficient algorithms and strategies for solving such optimization problems; see, for example, [6, 7, 8, 26] and references therein.

Model order reduction (MOR) is a powerful technique for constructing a low-cost approximation of large-scale systems resulting from the discretization of PDEs. The low-cost approximation, often called reduced-order model (ROM), on the one hand should have the same structure as the original large-scale system, but with a much smaller number of degrees of freedom (DOFs); on the other hand, it must have acceptable accuracy for the input-output representation of the original system. Due to the small size and negligible error, the derived ROM is used as a surrogate model of the large-scale system in various disciplines, such as optimization and control, fluid dynamics, structural dynamics, circuit design, and so on. In particular, for optimization problems with nonlinear PDE constraints, proper orthogonal decomposition (POD) is often used to derive a ROM, which has been applied to accelerate optimization problems [13, 14]. However, a ROM from POD is reliable only in the neighborhood of the input parameter setting at which the ROM is constructed. There is no guarantee for the accuracy of the ROM at a different parameter setting. To circumvent the problem, a trust-region technique was suggested to manage the POD-based ROM in [13]. Here, the ROM is updated according to the quality of the approximation. However, the repeated construction of the ROM reduces the significance of the reduction in computational resources obtained by MOR. In contrast, the technique of parametric model order reduction (PMOR) enables the generation of a parametric ROM with acceptable accuracy over the feasible parameter domain, such that a single ROM is sufficient for the optimization process. Among the various PMOR methods [1, 3, 5, 10, 15, 16], few are applicable to nonlinear problems with parameters. The reduced basis method (RBM), however, has been developed for nonlinear parametric systems [2, 11, 18, 34]. Moreover, endowed with a posteriori error estimation, the parametric ROM can be generated automatically.

The RBM has proved to be a powerful tool for rapid and reliable evaluation of parameterized PDEs [2, 11, 18, 34]. The reduced basis (RB) used to construct the ROM is computed from snapshots (the solutions of the PDEs at certain selected samples of the parameters and/or chosen time steps) through a greedy algorithm. When applied to optimization, the original system resulting from the discretization of the PDEs is first replaced by a ROM generated by the RBM; then the related quantities can be evaluated rapidly by solving the cheap ROM rather than the original expensive one. So far, research on the application of RBMs to PDE-constrained optimization is very limited. In [33], the authors mainly focused on RBMs for affinely parameterized linear problems. Shape optimization employing RBMs for viscous flow in hemodynamics was addressed in [29], where the empirical interpolation method (EIM) [2] was exploited


to treat the nonaffinity in the linear parameterized system. Applications to multiscale problems can be found in the recent work [32]. However, all these applications focus on finite element (FE) based RBMs for linear, time-independent PDEs.

In this paper, we consider an optimization problem with PDE constraints, where the PDEs are nonlinear, time-dependent, and have non-affine parameter dependency. Such problems arise, for example, from batch chromatography in chemical engineering. To capture the dynamics precisely, a large number of DOFs must be employed, which results in a large-scale system. Solving such a complex system during optimization is time-consuming. Constructing a reduced model for a parameterized nonlinear, time-dependent, non-affine system poses additional challenges for all kinds of MOR methods, granting no exemption to RBMs. Furthermore, a careful choice of the discretization scheme should be made for nonlinear problems, especially for convection-dominated problems. The finite volume (FV) discretization is used to construct the full-order model (FOM), by which the conservation property of the system is well preserved. The FV-based RBM was first introduced for linear evolution equations in [24] and was extended to nonlinear problems afterwards [11, 25], where the nonlinear operator resulting from the discretization is treated with empirical operator interpolation for an efficient offline-online computation of the ROM.

Without doubt, an efficient, rigorous, and sharp a posteriori error estimation is crucial for RBMs, because it enables automatic generation of the RB and, in turn, a reliable ROM with a desired accuracy with the help of a greedy algorithm. Rapid and reliable evaluation of the input-output relationships for the associated PDEs is very important for efficiently solving the optimization problem, where an output response, rather than the field variable (the solution to the PDEs), is of interest. When a ROM is employed for such an evaluation, the error of the output of interest, rather than that of the field variable, should be estimated and used for the generation of the ROM. We propose to use the output error for the generation of the RB. There are some results on the output error bound for FE-based RBMs for elliptic or parabolic problems [35, 36]. However, there is no study on the output error bound for FV-based RBMs for nonlinear evolution equations so far. In this work, we present a residual-based error estimation for the output of the ROM derived by a FV-based RBM to obtain a goal-oriented ROM.

With the help of an error estimate, the construction of the ROM can be managed automatically. In some cases, however, the error bound may not work as well as one expects. For example, while the basis is being extended, the error bound decreases slowly or even stagnates after some steps, although the true error is already very small. As a result, the basis extension is not stopped, because the error bound does not go below the prespecified tolerance. This means that the basis will be unnecessarily extended if there is no reasonable remedy. Certainly, simply using the true error as the indicator is not a wise choice, because it is typically time-consuming to compute the true error for all sample points in the training set. To make full use of the available error estimate, we propose an early-stop criterion for the basis extension by checking the true error at the parameter selected by the greedy algorithm according to the output error bound. In this way, the basis extension can be stopped in time, and the size of the resulting ROM can be kept reasonably small.

Additionally, the efficiency of the RBM is ensured by the strategy of offline-online decomposition. During the offline stage, all full-dimension-dependent and parameter-independent terms can be precomputed, and a parameterized reduced model is obtained a priori; during the process of optimization, a reliable output response can be obtained rapidly by the online simulation based on the ROM at the parameter determined by the optimization procedure. In this way, the ROM-based optimization can be solved more efficiently compared to the FOM-based one. Note that the offline time is usually not taken into consideration, although the offline computation is typically time-consuming, especially for time-dependent PDEs.

To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) is of interest, the solutions at the time instances in the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g., in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computation will be very expensive, because the number of snapshots is too large; if one just trivially selects part of the solutions, i.e., solutions at parts of the time instances (e.g., every two or several time steps), the final RB approximation might be of low accuracy, because important information may have been lost due to such a naive snapshot selection. We propose to select the snapshots adaptively, according to the variation of the solution in the evolution process. The idea is to make full use of the behavior of the trajectory and discard the redundant (linearly dependent) information adaptively. It enables the generation of the RB with a small number of snapshots including only "useful" information. In addition, it is easily combined with other algorithms for the generation of the RB, e.g., the POD-Greedy algorithm [24].
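The concrete selection criterion belongs to Section 5 of the paper and is not reproduced in this excerpt; a plausible minimal sketch of the idea — keep a solution only when it has moved sufficiently far, relatively, from the last kept snapshot — could look as follows:

```python
import math

def adaptive_snapshot_selection(trajectory, tol_ass=0.1):
    """Keep u^n only if its relative Euclidean distance to the last kept
    snapshot exceeds tol_ass; nearly redundant steps are discarded.
    A sketch of the ASS idea, not the paper's exact criterion."""
    kept = [trajectory[0]]
    for u in trajectory[1:]:
        ref = kept[-1]
        scale = max(math.sqrt(sum(x * x for x in ref)), 1.0e-14)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, ref)))
        if d / scale > tol_ass:
            kept.append(u)
    return kept

# A slowly rotating state: consecutive steps differ by only ~0.01 in norm.
traj = [[math.sin(0.01 * n), math.cos(0.01 * n)] for n in range(1000)]
snapshots = adaptive_snapshot_selection(traj, tol_ass=0.1)
```

On this smooth trajectory only every 11th step survives (91 snapshots out of 1000), while a trajectory with sharp transients would automatically be sampled more densely there.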

This paper is organized as follows. We state the underlying PDE-constrained optimization problem in detail in Section 2. Reviews of the RBM and the EIM are given in Section 3 and Section 4, respectively. The adaptive technique of snapshot selection and its implementation are addressed in detail in Section 5. Section 6 presents the RB scheme for the batch chromatographic model, including the derivation of the FOM based on the FV discretization, the generation of the ROM, and the strategy of offline-online decomposition as well. In Section 7, an output-oriented error bound is derived in the vector space for evolution equations for the RBM based on FV discretization. An early-stop criterion is proposed to make the construction of the ROM more efficient. Numerical examples, including optimization based on the ROM, are presented in Section 8. Conclusions are drawn in Section 9.

2 Problem statement

In this work, we consider the following PDE-constrained optimization problem:

min_{μ ∈ P}  J(u(t, x; μ), μ)

s.t.  Ψ(u(t, x; μ), μ) ≤ 0,
      Φ(u(t, x; μ), μ) = 0,     (1)


where J is the objective function and Ψ defines the inequality constraints. The field variable u(t, x; μ) is the solution to the underlying parametrized partial differential equations Φ(u(t, x; μ)) = 0, μ ∈ P. Such an optimization problem arises in many applications, such as aerodynamics, fluid dynamics, and chemical processes. In practical computation, the PDEs are usually discretized, such that the optimization problem in (1) is replaced by an optimization problem in finite dimensions:

min_{μ ∈ P}  J(u^N(t; μ), μ)

s.t.  Ψ(u^N(t; μ), μ) ≤ 0,
      Φ(u^N(t; μ), μ) = 0,     (2)

where u^N = u^N(t; μ) ∈ R^N is the solution to the discretized system of equations Φ(u^N(t; μ), μ) = 0, and J, Ψ, and Φ in (2) are the operators in the finite-dimensional vector space corresponding to their continuous counterparts in (1), respectively. The discretized equations are often of very large scale and complex. At each iteration of the optimization process, such a large-scale complex system of equations must be solved at least once. As a result, the whole optimization process will be time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns u^N.

To further motivate and illustrate our methods, we consider a particular example: optimal operation of batch chromatography. Batch chromatography, as a crucial separation and purification tool, is widely employed in the food, fine chemical, and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period t_in, a mixture consisting of a and b is injected at the inlet of the column, which is packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column, and thus separate from each other when exiting the column. At the column outlet, component a is collected between the cutting points t3 and t4, and component b is collected between t1 and t2. Here, the positions of t1 and t4 are determined by a minimum concentration threshold that the detector can resolve, and the positions of t2 and t3 are determined by the purity specifications (Pu_a and Pu_b) imposed on the products. After a cycle period t_cyc = t4 − t1, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with a limited mass-transfer rate, characterized by a linear driving force approximation. The governing equations in dimensionless form are formulated as follows:

∂c_z/∂t + ((1 − ε)/ε) ∂q_z/∂t = −∂c_z/∂x + (1/Pe) ∂²c_z/∂x²,   0 < x < 1,

∂q_z/∂t = (L/(Q/(ε A_c))) κ_z (q_z^Eq − q_z),   0 ≤ x ≤ 1,     (3)

where c_z, q_z are the concentrations of component z (z = a, b) in the liquid and solid phase, respectively; Q the volumetric feed flow rate; A_c the cross-sectional area of the column with length L; ε the column porosity; κ_z the mass-transfer coefficient; and Pe the Peclet number. The adsorption equilibrium q_z^Eq is described by the isotherm equations of bi-Langmuir type:

q_z^Eq = f_z(c_a, c_b) = H_{z1} c_z / (1 + K_{a1} c_a^f c_a + K_{b1} c_b^f c_b) + H_{z2} c_z / (1 + K_{a2} c_a^f c_a + K_{b2} c_b^f c_b),     (4)

where c_z^f is the feed concentration of component z, and H_{zj} and K_{zj} are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:
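For illustration, Eq. (4) can be evaluated directly. All numerical values below (Henry constants, thermodynamic coefficients, feed concentrations) are hypothetical placeholders, not the parameters used in the paper:

```python
def q_eq(c_a, c_b, H, K, c_f):
    """Bi-Langmuir adsorption equilibrium, Eq. (4): for each component z,
    q_z^Eq is a sum over two sites j of
    H[z][j] * c_z / (1 + K['a'][j]*c_f['a']*c_a + K['b'][j]*c_f['b']*c_b)."""
    def denom(j):
        return 1.0 + K['a'][j] * c_f['a'] * c_a + K['b'][j] * c_f['b'] * c_b
    return {z: sum(H[z][j] * c_z / denom(j) for j in (0, 1))
            for z, c_z in (('a', c_a), ('b', c_b))}

# Hypothetical coefficients, for illustration only.
H = {'a': (2.69, 0.10), 'b': (3.73, 0.30)}
K = {'a': (0.0336, 1.0), 'b': (0.0466, 3.0)}
c_f = {'a': 2.9, 'b': 2.9}
q = q_eq(0.5, 0.5, H, K, c_f)
```

The coupling through the shared denominators is what makes the PDE system nonlinear: the equilibrium loading of a drops when b is present, and vice versa.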

c_z(0, x) = 0,  q_z(0, x) = 0,   0 ≤ x ≤ 1,
∂c_z/∂x |_{x=0} = Pe (c_z(t, 0) − χ_[0,t_in](t)),   ∂c_z/∂x |_{x=1} = 0,     (5)

where t_in is the injection period and χ_[0,t_in] is the characteristic function

χ_[0,t_in](t) = 1 if t ∈ [0, t_in], and 0 otherwise.

More details about the mathematical modeling for batch chromatography can be found in [20].

Note that the feed flow rate Q and the injection period t_in are often considered as the operating variables, denoted as μ = (Q, t_in), which play the role of parameters in the PDEs (3)–(5). The system of PDEs is nonlinear, time-dependent, and has non-affine parameter dependency. The nonlinearity of the system is reflected by (4). To capture the system dynamics precisely, a large number of DOFs must be introduced for the discretization of the PDEs.

The optimal operation of batch chromatography is of practical importance, since it allows one to exploit the full economic potential of the process and to reduce the separation cost. Many efforts have been made for the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on the finely discretized FOM. Such a model with a large number of DOFs is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be repeatedly solved in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

[Figure 1 appears here: schematic of the batch chromatographic process, showing the feed (a + b), pump, solvent, column, and the product collection windows defined by t1, ..., t4, t_in, and t_cyc.]

Figure 1: Sketch of a batch chromatographic process for the separation of a and b.

In this work, the RBM is employed to generate a surrogate ROM of the parameterized PDEs. The resulting ROM is used to get a rapid evaluation of the output response y(u^N) for the discretized system Φ(u^N(t; μ), μ) = 0 in (2) during the optimization process. In the next section, we review the RBM and highlight some difficulties there.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parameterized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution to the parametrized PDEs, u(μ), depends smoothly on the parameter μ in the parameter domain P, such that for any parameter μ ∈ P, the corresponding solution u(μ) can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the qualification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain Ω ⊂ R^d and the parameter domain P ⊂ R^p:

∂_t u(t, x; μ) + L[u(t, x; μ)] = 0,   t ∈ [0, T], x ∈ Ω, μ ∈ P,     (6)

where L[·] is a spatial differential operator. Let W^N ⊂ L²(Ω) be an N-dimensional discrete space in which an approximate numerical solution to equation (6) is sought. Let 0 = t⁰ < t¹ < ··· < t^K = T be K + 1 time instants in the time interval [0, T]. Given μ ∈ P, with suitable initial and boundary conditions, the numerical solution at time t = t^n, u^n(μ), can be obtained by using suitable numerical methods, e.g., the finite volume method. Assume that u^n(μ) ∈ W^N satisfies the following form:

L_I(t^n)[u^{n+1}(μ)] = L_E(t^n)[u^n(μ)] + g(u^n(μ); μ),     (7)

where L_I(t^n)[·], L_E(t^n)[·] are linear implicit and explicit operators, respectively, and g(·) is a nonlinear μ-dependent operator. These operators are obtained from the discretization of the time derivative and the spatial differential operator L. For implicit FV schemes, L_I(t^n) can be nonlinear, see, e.g., [11], but we only consider the linear case in this paper. By convention, u^n(μ) is considered as the "true" solution, by assuming that the numerical solution is a faithful approximation of the exact (analytical) solution u(t^n, x; μ) at the time instance t^n.
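A minimal sketch of evolution in the form of Eq. (7), taking L_I = Id (a fully explicit scheme) with toy stand-ins for L_E and g — here a first-order upwind transport step with a mild nonlinear sink, not the chromatography operators:

```python
def evolve(u0, apply_LE, apply_g, K):
    """Explicit time stepping u^{n+1} = L_E[u^n] + g(u^n), i.e. Eq. (7)
    with L_I = Id; returns the whole trajectory [u^0, ..., u^K]."""
    traj = [list(u0)]
    u = list(u0)
    for _ in range(K):
        u = [le + gn for le, gn in zip(apply_LE(u), apply_g(u))]
        traj.append(list(u))
    return traj

# Toy operators on 4 cells, CFL number 0.5, inflow concentration 1.0.
lam = 0.5
def apply_LE(u):             # first-order upwind step
    up = [1.0] + u[:-1]
    return [ui - lam * (ui - upi) for ui, upi in zip(u, up)]
def apply_g(u):              # mild nonlinear decay term
    return [-0.01 * ui * ui for ui in u]

traj = evolve([0.0] * 4, apply_LE, apply_g, 50)
```

After 50 steps the profile is essentially at steady state, monotonically decreasing from the inlet because of the nonlinear sink.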


The RBM aims to find a suitable low-dimensional subspace

W_N = span{V_1, ..., V_N} ⊂ W^N

and solve the resulting ROM to get the RB approximation û^n(μ) to the "true" solution u^n(μ). In addition to, or alternatively to, the field variable itself, the approximation of outputs of interest can also be obtained inexpensively by ŷ(μ) = y(û(μ)). More precisely, given a matrix V = [V_1, ..., V_N] whose columns span the reduced basis, the Galerkin projection is employed to generate the ROM as follows:

V^T L_I(t^n)[V a^{n+1}(μ)] = V^T L_E(t^n)[V a^n(μ)] + V^T g(V a^n(μ)),     (8)

where a^n(μ) = (a_1^n(μ), ..., a_N^n(μ))^T ∈ R^N is the vector of weights in the approximation û^n(μ) = V a^n(μ) = Σ_{i=1}^N a_i^n(μ) V_i, and it is the vector of unknowns in the ROM. Thanks to the linearity of the operators L_I and L_E, the ROM (8) can be rewritten as

V^T L_I(t^n) V [a^{n+1}(μ)] = V^T L_E(t^n) V [a^n(μ)] + V^T g(V a^n(μ)),     (9)

where V^T L_I(t^n) V and V^T L_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), V^T g(V a^n(μ)), cannot be done analogously, because of the nonlinearity of g. This will be tackled by using a technique of empirical interpolation, to be addressed in the next section.
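The offline-online split behind (9) can be sketched in a few lines: the reduced operator V^T L_E(t^n) V is assembled once offline, after which one online step on the linear part costs only O(N²) operations. The nonlinear term V^T g(V a^n(μ)) is deliberately omitted here, since removing its full-dimension cost is precisely the job of the empirical interpolation discussed in the next section. Pure-Python helpers, for illustration only:

```python
def transpose(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def offline(V, LE_full):
    """Project the full explicit operator once: A_r = V^T L_E V (N_rb x N_rb)."""
    return matmul(matmul(transpose(V), LE_full), V)

def online_step(A_r, a):
    """One reduced explicit step on the linear part: a^{n+1} = A_r a^n."""
    return [sum(Aij * aj for Aij, aj in zip(row, a)) for row in A_r]

# Tiny illustration: a 3-dim "full" operator projected onto a 2-dim basis.
V = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
A_r = offline(V, [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]])
```

Once A_r is stored, the online stage never touches the full dimension for the linear part, which is what makes the ROM cheap to evaluate repeatedly.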

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], as shown in Algorithm 1.

Algorithm 1: RB generation using POD-Greedy
Input: P_train, μ0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]

1: Initialization: N = 0, V = [ ], μ_max = μ0, η_N(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^K
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ũ = [ũ^0, ..., ũ^K], with ũ^n = u^n(μ_max) − Π_{W_N}[u^n(μ_max)], n = 0, ..., K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ ∈ P_train} η_N(μ)
7: end while
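A simplified sketch of the loop above: the true projection error of the trajectory is used as the indicator η_N(μ) (Algorithm 1 normally uses a cheaper error bound, cf. Remark 3.1), and the first POD mode of Step 4 is replaced by Gram-Schmidt orthogonalization of the worst-approximated snapshot — a simplification, not the paper's exact enrichment:

```python
import math

def norm(u):
    return math.sqrt(sum(x * x for x in u))

def project(u, basis):
    """Orthogonal projection of u onto span(basis); basis is orthonormal."""
    p = [0.0] * len(u)
    for v in basis:
        c = sum(a * b for a, b in zip(u, v))
        p = [pi + c * vi for pi, vi in zip(p, v)]
    return p

def residual(u, basis):
    return [a - b for a, b in zip(u, project(u, basis))]

def pod_greedy(solve, P_train, tol_rb, max_basis=20):
    """solve(mu) -> trajectory [u^0, ..., u^K].  Greedily enrich the basis
    with the worst-approximated snapshot of the worst parameter."""
    basis = []
    eta = lambda mu: max(norm(residual(u, basis)) for u in solve(mu))
    mu_max = P_train[0]
    while len(basis) < max_basis:
        worst = max(solve(mu_max), key=lambda u: norm(residual(u, basis)))
        r = residual(worst, basis)
        if norm(r) < 1e-14:
            break
        basis.append([x / norm(r) for x in r])
        mu_max = max(P_train, key=eta)
        if eta(mu_max) <= tol_rb:
            break
    return basis

# Toy "solver": all trajectories live in a fixed 2-dim subspace of R^3.
basis = pod_greedy(lambda mu: [[mu, 0.0, 0.0], [0.0, mu, 0.0]], [1.0, 2.0], 1e-8)
```

On this toy problem the greedy loop stops after exactly two enrichments, since two orthonormal vectors already reproduce every trajectory to machine precision.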

Remark 3.1. In Algorithm 1, the error η_N(μ_max) is an indicator for the error of the ROM. It can be the true error or an error estimation. Since the true error requires the "true" solution u^n(μ) obtained by solving the full large system, an error bound is usually used instead. This is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy by checking the true error at the parameter determined by the greedy algorithm, and get an early stop for the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps $K$ needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or nonaffine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or nonaffine part, e.g., $V^T g(V a^n(\mu))$ in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable. The idea of EIM introduced in [2] is briefly presented as follows.

Given a nonaffine $\mu$-dependent function $g(x;\mu)$ with sufficient regularity, $(x,\mu) \in \Omega \times \mathcal{P} \subset \mathbb{R}^d \times \mathbb{R}^p$, the idea of EIM is to approximate $g(x;\mu)$ by a linear combination of a precomputed $\mu$-independent basis $W = [W_1, \ldots, W_M]$, termed the collateral reduced basis (CRB), with corresponding $\mu$-dependent coefficients $\sigma(\mu) = [\sigma_1(\mu), \ldots, \sigma_M(\mu)]^T$, i.e.,

$$\hat g(x;\mu) = \sum_{i=1}^{M} W_i(x)\, \sigma_i(\mu).$$

Here the coefficients $\sigma_i$ are parameter-dependent and determined by solving the linear system

$$g(x_j;\mu) = \sum_{i=1}^{M} W_i(x_j)\, \sigma_i(\mu), \quad j = 1, \ldots, M, \qquad (10)$$

where $W_i(x_j)$ refers to the $j$-th entry of the vector $W_i$; the analogous notation is also used for $\xi_m(x_m)$ in (11) in Algorithm 2. Note that the approximation $\hat g(x;\mu)$ interpolates the exact value $g(x;\mu)$ at the EI points $T_M = \{x_1, \ldots, x_M\}$. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Remark 4.1. Algorithm 2 is used for a fast evaluation of a nonaffine function of the coordinate $x$ and the parameter $\mu$ by using interpolation. In [11, 25] the idea


Algorithm 2 Generation of the CRB and EI points
Input: $L^{\mathrm{crb}}_{\mathrm{train}} = \{g(x;\mu) \mid \mu \in \mathcal{P}^{\mathrm{crb}}_{\mathrm{train}}\}$, $\mathrm{tol}_{\mathrm{CRB}}(< 1)$
Output: CRB $W = [W_1, \ldots, W_M]$ and EI points $T_M = \{x_1, \ldots, x_M\}$
1: Initialization: $m = 1$, $W^0_{EI} = [\,]$, $\|\xi^0\| = 1$
2: while $\|\xi^{m-1}\| > \mathrm{tol}_{\mathrm{CRB}}$ do
3:   For each $g \in L^{\mathrm{crb}}_{\mathrm{train}}$, compute the "best" approximation $\hat g = \sum_{i=1}^{m-1} \sigma_i W_i$ in the current space $W^{m-1}_{EI} = \mathrm{span}\{W_1, \ldots, W_{m-1}\}$, where the $\sigma_i$ can be obtained by solving the linear system (10)
4:   Define $g_m = \arg\max_{g\in L^{\mathrm{crb}}_{\mathrm{train}}} \|g - \hat g\|$ and the error $\xi_m = g_m - \hat g_m$
5:   if $\|\xi_m\| \le \mathrm{tol}_{\mathrm{CRB}}$ then
6:     Stop and set $M = m - 1$
7:   else
8:     Determine the next EI point and basis vector:
       $$x_m = \arg\sup_{x\in\Omega} |\xi_m(x)|, \qquad W_m = \frac{\xi_m}{\xi_m(x_m)}. \qquad (11)$$
9:   end if
10:  $m = m + 1$
11: end while

was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable $u(t,x;\mu)$, e.g., $g(u(t,x;\mu), x; \mu)$. The evaluation of $g(x_j;\mu)$ in (10) is then replaced by $g(u(t,x_j;\mu), x_j; \mu)$. In this paper we use empirical operator interpolation, where the nonaffine operator appears as $g(u(t,x;\mu); \mu)$. The details are addressed in Section 6.2.
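The greedy construction of Algorithm 2 can be sketched as follows. The snapshot matrix `G` (one column per training parameter), the function `eim`, and the use of plain Euclidean norms are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def eim(G, tol):
    """Greedy EIM: returns the CRB (columns) and the EI point indices."""
    W, pts = [], []
    while True:
        if pts:
            Wm = np.column_stack(W)
            # interpolation coefficients from the magic-point rows, cf. (10)
            sigma = np.linalg.solve(Wm[pts, :], G[pts, :])
            R = G - Wm @ sigma            # residuals of all training snapshots
        else:
            R = G.copy()
        errs = np.linalg.norm(R, axis=0)
        m = int(np.argmax(errs))          # worst-approximated snapshot
        if errs[m] <= tol:
            break
        xi = R[:, m]
        j = int(np.argmax(np.abs(xi)))    # next interpolation point
        pts.append(j)
        W.append(xi / xi[j])              # normalized basis vector, cf. (11)
    return np.column_stack(W), pts

x = np.linspace(0.0, 1.0, 100)
G = np.column_stack([np.exp(-mu * x) for mu in np.linspace(1.0, 3.0, 10)])
W_eim, pts = eim(G, 1e-8)
```

By construction, the interpolation matrix `W_eim[pts, :]` is lower triangular with unit diagonal, so solving for the coefficients is cheap.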

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, called ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set $\mathcal{P}_{\mathrm{train}}$ or $\mathcal{P}^{\mathrm{crb}}_{\mathrm{train}}$ of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible is collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade, including, for example, the hp certified RB method [12], adaptive grid partition in the parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR using model-constrained optimization [9]. In these papers the authors choose the sample points adaptively to obtain an "optimal" training set. An "optimal" training set means that the original manifold $\mathcal{M} = \{u(\mu) \mid \mu \in \mathcal{P}\}$ can be well represented by the submanifold $\hat{\mathcal{M}} = \{u(\mu) \mid \mu \in \mathcal{P}_{\mathrm{train}}\}$ induced by the sample set $\mathcal{P}_{\mathrm{train}}$, with its size as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that generating the reduced basis is time-consuming, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of $\bar U$ due to the large size of the matrix $\bar U$. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a given training set, we propose to select the snapshots adaptively, according to the variation of the trajectory of the solution $\{u^n(\mu)\}_{n=0}^{K}$. The idea is to discard redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors $v_1$ and $v_2$ is reflected by the angle $\theta$ between them. More precisely, they are linearly dependent if and only if $|\cos(\theta)| = 1$ ($\theta = 0$ or $\pi$). In other words, the value $1 - |\cos(\theta)|$ is large if the correlation between the two vectors is weak. This implies that the quantity $1 - \frac{|\langle v_1, v_2\rangle|}{\|v_1\|\,\|v_2\|}$ (note $\cos(\theta) = \frac{\langle v_1, v_2\rangle}{\|v_1\|\,\|v_2\|}$) is a good indicator for the linear dependency of $v_1$ and $v_2$.

Given a parameter $\mu$ and the initial vector $u^0(\mu)$, the numerical solution $u^n(\mu)$ ($n = 1, \ldots, K$) can be obtained, e.g., by the evolution scheme (7). Define the indicator

$$\mathrm{Ind}\left(u^n(\mu), u^m(\mu)\right) = 1 - \frac{|\langle u^n(\mu), u^m(\mu)\rangle|}{\|u^n(\mu)\|\,\|u^m(\mu)\|},$$

which measures the linear dependency of the two vectors: when $\mathrm{Ind}(u^n(\mu), u^m(\mu))$ is large, the correlation between $u^n(\mu)$ and $u^m(\mu)$ is weak. Algorithm 3 shows the realization of the ASS: $u^n(\mu)$ is taken as a new snapshot only when $u^n(\mu)$ and $u^{n_j}(\mu)$ are "sufficiently" linearly independent, i.e., when $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu))$ is large enough. Here $u^{n_j}(\mu)$ is the last selected snapshot.

Remark 5.1. The inner product $\langle \cdot, \cdot\rangle: \mathcal{W}^{\mathcal{N}} \times \mathcal{W}^{\mathcal{N}} \to \mathbb{R}$ used above is properly defined according to the solution space $\mathcal{W}^{\mathcal{N}}$, and the norm $\|\cdot\|$ is the correspondingly induced norm. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector $u^n(\mu)$ and the subspace spanned by the already selected snapshots $S_A$. More redundant information can be discarded this way, but at higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance $\mathrm{tol}_{\mathrm{ASS}}$ is prespecified and problem-dependent; based on our observations, a value of $O(10^{-4})$ gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector $u^0(\mu)$, $\mathrm{tol}_{\mathrm{ASS}}$
Output: Selected snapshot matrix $S_A = [u^{n_1}(\mu), u^{n_2}(\mu), \ldots, u^{n_\ell}(\mu)]$
1: Initialization: $j = 1$, $n_j = 0$, $S_A = [u^{n_j}(\mu)]$
2: for $n = 1, \ldots, K$ do
3:   Compute the vector $u^n(\mu)$
4:   if $\mathrm{Ind}\left(u^n(\mu), u^{n_j}(\mu)\right) > \mathrm{tol}_{\mathrm{ASS}}$ then
5:     $j = j + 1$
6:     $n_j = n$
7:     $S_A = [S_A, u^{n_j}(\mu)]$
8:   end if
9: end for

examples studied in Section 8.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step compared with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{\mathrm{RB}}(< 1)$
Output: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{\max} = \mu^0$, $\eta_N(\mu_{\max}) = 1$
2: while $\eta_N(\mu_{\max}) > \mathrm{tol}_{\mathrm{RB}}$ do
3:   Compute the trajectory $S_{\max} = \{u^n(\mu_{\max})\}_{n=0}^{K}$, adaptively selecting snapshots using Algorithm 3, and get $S_A^{\max} = \{u^{n_1}(\mu_{\max}), \ldots, u^{n_\ell}(\mu_{\max})\}$
4:   Enrich the RB, e.g., $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar U_A = [\bar u^{n_1}, \ldots, \bar u^{n_\ell}]$ with $\bar u^{n_s} = u^{n_s}(\mu_{\max}) - \Pi_{\mathcal{W}_N}[u^{n_s}(\mu_{\max})]$, $s = 1, \ldots, \ell$, $\ell \ll K$; $\Pi_{\mathcal{W}_N}[u]$ is the projection of $u$ onto the current space $\mathcal{W}_N = \mathrm{span}\{V_1, \ldots, V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{\max} = \arg\max_{\mu\in\mathcal{P}_{\mathrm{train}}} \eta_N(\mu)$
7: end while
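The selection loop of Algorithm 3 can be sketched as follows, assuming the time stepper is available as a callable `step` (an illustrative interface, not the paper's code):

```python
import numpy as np

def ind(u, v):
    # 1 - |<u, v>| / (||u|| ||v||): small iff u and v are nearly linearly dependent
    return 1.0 - abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def adaptive_snapshots(u0, step, K, tol_ass):
    """Algorithm 3: keep u^n only if it is weakly correlated with the last keeper."""
    selected = [u0]
    u = u0
    for n in range(1, K + 1):
        u = step(u)
        if ind(u, selected[-1]) > tol_ass:   # weak correlation: keep the snapshot
            selected.append(u)
    return np.column_stack(selected)
```

A trajectory that merely rescales itself (pure decay) yields only the initial snapshot, while a trajectory that keeps changing direction is sampled densely.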

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)–(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use an FV discretization for the batch chromatographic model (3)–(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)–(4) can be written as follows:

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n,$$
$$q_z^{n+1} = q_z^n + \Delta t\, h_z^n, \qquad (12)$$

where $c_z^n = c_z^n(\mu) = (c_z^{n,1}, \ldots, c_z^{n,\mathcal{N}})^T$ and $q_z^n = q_z^n(\mu) = (q_z^{n,1}, \ldots, q_z^{n,\mathcal{N}})^T \in \mathbb{R}^{\mathcal{N}}$, $z = a, b$, indicate the solutions of the field variables $c_z$ and $q_z$ at the time instance $t = t^n$ ($n = 0, \ldots, K$). $A$ and $B$ are tridiagonal constant matrices; $d_z^n$ and $h_z^n$ are parameter- and time-dependent:

$$d_z^n = d_0^n e_1, \qquad h_z^n = (h_z^{n,1}, \ldots, h_z^{n,\mathcal{N}})^T,$$

with $d_0^n = \Delta x\, \mathrm{Pe}\left(\frac{\lambda}{2} + \nu\right)\chi_{[0,t_{\mathrm{in}}]}(t^n)$, $\lambda = \frac{\Delta t}{\Delta x}$, $\nu = \frac{\Delta t}{\mathrm{Pe}\,\Delta x^2}$, $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{\mathcal{N}}$, and

$$h_z^{n,j} = h_z\left(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}\right) = \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\left(f_z\left(c_a^{n,j}, c_b^{n,j}\right) - q_z^{n,j}\right), \quad j = 1, \ldots, \mathcal{N}.$$
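One time step of a scheme with the structure of (12) can be sketched as follows. The tridiagonal matrices, the inlet datum, and the mass-transfer term `h` are stand-ins (the actual $f_z$ is the bi-Langmuir isotherm of (4)), so this is a structural illustration only:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

N, dt, eps = 200, 1e-3, 0.4
tau = (1.0 - eps) / eps * dt
# stand-in tridiagonal constant matrices A and B of (12)
A = diags([-0.1, 1.0, -0.1], [-1, 0, 1], shape=(N, N)).tocsc()
B = diags([0.05, 0.9, 0.05], [-1, 0, 1], shape=(N, N)).tocsc()
lu = splu(A)                          # factor once, reuse at every time step

def h(c, q, kappa=0.1):
    # placeholder mass-transfer term kappa * (f(c) - q)
    return kappa * (c / (1.0 + c) - q)

def fv_step(c, q, d):
    c_new = lu.solve(B @ c + d - tau * h(c, q))
    q_new = q + dt * h(c, q)
    return c_new, q_new

c, q = np.zeros(N), np.zeros(N)
d = np.zeros(N); d[0] = 1e-2          # inlet contribution d^n = d_0^n e_1
c, q = fv_step(c, q, d)
```

Because $A$ is constant, a single sparse factorization serves all $K$ time steps, which matters when $K = O(10^4)$.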

6.2 Reduced-order model

Let $N \in \mathbb{N}^+$ be the number of RB vectors for $c_z$ and $q_z$, and $M \in \mathbb{N}^+$ the number of CRB vectors for the operators $h_a$ and $h_b$. Here, for simplicity of the analysis, we use the same dimension $N$ of the RB for $c_a$, $c_b$, $q_a$ and $q_b$, but one can certainly take different dimensions; this also applies to $h_a$ and $h_b$. Assume that $W_z \in \mathbb{R}^{\mathcal{N}\times M}$ is the CRB for the nonlinear operator $h_z$, and $V_{c_z}, V_{q_z} \in \mathbb{R}^{\mathcal{N}\times N}$ ($V_{c_z}^T V_{c_z} = I$, $V_{q_z}^T V_{q_z} = I$) are the RB for the field variables $c_z$ and $q_z$, respectively, i.e.,

$$h_z^n \approx W_z \beta_z^n, \quad c_z^n \approx \hat c_z^n = V_{c_z} a_{c_z}^n, \quad q_z^n \approx \hat q_z^n = V_{q_z} a_{q_z}^n, \quad n = 0, \ldots, K. \qquad (13)$$

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

$$\hat A_{c_z} a_{c_z}^{n+1} = \hat B_{c_z} a_{c_z}^n + d_0^n \hat d_{c_z} - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, \hat H_{c_z} \beta_z^n,$$
$$a_{q_z}^{n+1} = a_{q_z}^n + \Delta t\, \hat H_{q_z} \beta_z^n, \qquad (14)$$

where $a_{c_z}^n = a_{c_z}^n(\mu) = (a_{c_z}^{n,1}, \ldots, a_{c_z}^{n,N})^T$ and $a_{q_z}^n = a_{q_z}^n(\mu) = (a_{q_z}^{n,1}, \ldots, a_{q_z}^{n,N})^T \in \mathbb{R}^N$ are the reduced state vectors of the ROM, and $\hat A_{c_z} = V_{c_z}^T A V_{c_z}$, $\hat B_{c_z} = V_{c_z}^T B V_{c_z}$, $\hat d_{c_z} = V_{c_z}^T e_1$, $\hat H_{c_z} = V_{c_z}^T W_z$, $\hat H_{q_z} = V_{q_z}^T W_z$ are the reduced matrices.

Note that $\beta_z^n = \beta_z^n(\mu) = (\beta_z^{n,1}, \ldots, \beta_z^{n,M})^T \in \mathbb{R}^M$ are the vectors of coefficients for the empirical interpolation of the nonlinear operator $h_z^n$, and are parameter- and time-dependent. The evaluation of $\beta_z^n$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

$$\sum_{i=1}^{M} \beta_z^{n,i}\, W_z^i(x_j) = \hat h_z^{n,j}, \quad j = 1, \ldots, M.$$

Here the evaluation of $\hat h_z^{n,j}$ only needs the $j$-th entries ($\hat c_a^{n,j}$, $\hat c_b^{n,j}$ and $\hat q_z^{n,j}$) of the approximate solution vectors ($\hat c_a^n$, $\hat c_b^n$ and $\hat q_z^n$), i.e., $\hat h_z^{n,j} = h_z(\hat c_a^{n,j}, \hat c_b^{n,j}, \hat q_z^{n,j})$. For general operator empirical interpolation, the value of the operator at an interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors). For more details, refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all $\mathcal{N}$-dependent terms are computed and stored; in the online stage, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline stage, given training sets $\mathcal{P}^{\mathrm{crb}}_{\mathrm{train}}$ and $\mathcal{P}_{\mathrm{train}}$ (they can be chosen differently), Algorithm 2 is run to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal{N}$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $\hat A_{c_z}$, $\hat B_{c_z}$, $\hat d_{c_z}$, $\hat H_{c_z}$ and $\hat H_{q_z}$), and the $\mathcal{N}$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in \mathcal{P}$, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered by (13).

7 Output-oriented error estimation

It is crucial to have a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in a finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response $y(u_N)$ is of interest; hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error indicator $\eta_N(\mu_{\max})$ should estimate the error of the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2\rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$, and the induced norm $\|\cdot\|$ is the standard 2-norm in Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector with respect to the basis of the solution space. In such a case, the inner product should be defined properly using the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling from Section 3 that $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ are linear, the evolution scheme (7) can be rewritten as follows in the vector space:

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\left(u^n(\mu); \mu\right), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N}\times\mathcal{N}}$ are constant matrices and $g(u^n(\mu);\mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0, \ldots, K-1$.

Given a parameter $\mu \in \mathcal{P}$, let $\hat u^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and $\hat g^n(\mu) = I_M[g(\hat u^n(\mu))] = W\beta^n(\mu)$ the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N}\times N}$ and $W \in \mathbb{R}^{\mathcal{N}\times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\hat u^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$, and write $u^n$, $\hat u^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)} \hat u^n + I_M[g(\hat u^n)] - A^{(n)} \hat u^{n+1}. \qquad (16)$$

With a simple computation we get the norm of the residual:

$$\begin{aligned}
\|r^{n+1}(\mu)\|^2 = \langle r^{n+1}(\mu), r^{n+1}(\mu)\rangle
&= (a^n)^T\, \underline{V^T (B^{(n)})^T B^{(n)} V}\, a^n + (\beta^n)^T\, \underline{W^T W}\, \beta^n \\
&\quad + (a^{n+1})^T\, \underline{V^T (A^{(n)})^T A^{(n)} V}\, a^{n+1} + 2\,(\beta^n)^T\, \underline{W^T B^{(n)} V}\, a^n \\
&\quad - 2\,(a^n)^T\, \underline{V^T (B^{(n)})^T A^{(n)} V}\, a^{n+1} - 2\,(\beta^n)^T\, \underline{W^T A^{(n)} V}\, a^{n+1}.
\end{aligned} \qquad (17)$$

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \quad x, y \in \mathcal{W}^{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" for a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

$$I_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m\, \beta_m^n = g(\hat u^n).$$

Assume further that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \hat u^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left( \|(A^{(k)})^{-1}\| \prod_{j=k+1}^{n-1} G^{(j)} \right) \left( \varepsilon_{EI}^k(\mu) + \|r^{k+1}(\mu)\| \right), \qquad (18)$$

where $G^{(j)} = \|(A^{(j)})^{-1}\| \left( \|B^{(j)}\| + L_g \right)$ and $\varepsilon_{EI}^n(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta_{N,M}^n(\mu) := \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} G_F^{(j)} \right) \left( \|(A^{(k)})^{-1}\|\, \varepsilon_{EI}^k(\mu) + \|(A^{(k)})^{-1} r^{k+1}(\mu)\| \right), \qquad (19)$$

where $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\|$, $n = 0, \ldots, K-1$.

Proof. Taking the difference of (15) and (16), we have the error equation

$$\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat u^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \left(g(u^n) - g(\hat u^n)\right) + \left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + r^{n+1}(\mu).
\end{aligned} \qquad (20)$$

Multiplying both sides of (20) by $(A^{(n)})^{-1}$, we obtain

$$\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right) \\
&\quad + (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat u^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}(\mu)\| \le \|(A^{(n)})^{-1}\| \left( \left(\|B^{(n)}\| + L_g\right) \|e^n(\mu)\| + \varepsilon_{EI}^n(\mu) + \|r^{n+1}(\mu)\| \right), \qquad (22)$$

where $\varepsilon_{EI}^n(\mu) = \|g(\hat u^n) - I_M[g(\hat u^n)]\| = \sum_{m=M+1}^{M+M'} \|W_m\|\cdot|\beta_m^n|$. Resolving the recursion (22) with the initial error $\|e^0(\mu)\| = 0$ yields the error bound (18).

To get the error bound (19), we reconsider equation (21) and see that the error bound (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\|e^{n+1}(\mu)\| \le \left( \|(A^{(n)})^{-1} B^{(n)}\| + L_g \|(A^{(n)})^{-1}\| \right) \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (23)$$

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\|\,\|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\|\,\|r^{n+1}\|$. Resolving the recursion (23) with the initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound (19).
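In practice, the bound (19) is conveniently accumulated as the equivalent recursion $\eta^{n+1} = G_F^{(n)} \eta^n + s^n$, where $s^n$ collects the EI and residual terms of step $n$. A sketch with synthetic per-step data (illustrative values, not from the paper):

```python
import numpy as np

def error_bound(GF, invA_eps, invA_res):
    """Accumulate (19): GF[j] = G_F^(j); invA_eps[k] = ||A_k^-1|| eps_EI^k;
    invA_res[k] = ||A_k^-1 r^{k+1}||. Returns eta^0, ..., eta^K."""
    eta = [0.0]                       # e^0(mu) = 0 by assumption
    for n in range(len(GF)):
        eta.append(GF[n] * eta[-1] + invA_eps[n] + invA_res[n])
    return np.array(eta)

eta = error_bound(GF=np.full(50, 0.98),
                  invA_eps=np.full(50, 1e-8),
                  invA_res=np.full(50, 1e-6))
```

With $G_F^{(j)} < 1$ the bound saturates instead of growing exponentially, which is exactly the advantage over (18) noted in Remark 7.3.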

Remark 7.2. In many cases the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case, the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in functional space. The error bound (18) corresponds to the operator form (55) in [11]. However, that error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\|\left(\|B^{(j)}\| + L_g\right) > 1$ in (18). In the vector space, this problem can be easily avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound (19) is obtained.

Remark 7.4. For the computation of the error bound (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all underlined terms in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|(A^{(n)})^{-1} r^n(\mu)\|$ for the error bound (19). Consequently, the evaluation of the error bound is cheap, due to its independence of $\mathcal{N}$. In addition, as shown in [11], a small $M'$ gives good results in practice; we use $M' = 1$ in the simulations below.

Remark 7.5. The 2-norm is used in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm; therefore

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some output. In such cases, it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the form

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O\times \mathcal{N}}$ is a constant matrix. Then the output error $e_O^n(\mu) = P u^n - P\hat u^n$ satisfies

$$\|e_O^{n+1}(\mu)\| \le \bar\eta_{N,M}^{n+1} := G_O^{(n)}\, \eta_{N,M}^n + \|P (A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (25)$$

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying both sides of the error equation (21) from the left by $P$, we get

$$\begin{aligned}
P e^{n+1}(\mu) &= P\Big( (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right) \\
&\quad + (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu)\Big).
\end{aligned}$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O^{(n)} \|e^n(\mu)\| + \|P (A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound (25).

Remark 7.7. Once an error estimation for the field variable is available, e.g., (19), a trivial error bound for the output (24) can be given as

$$\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| \le \|P\|\, \|e^{n+1}(\mu)\| \le \|P\| \left( G_F^{(n)} \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\| \right). \qquad (27)$$

The last inequality holds because of (23). It is obvious that the bound for $\|e_O^{n+1}(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution of each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable, rather than a unified basis for all of them.

Here we derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the discrete scheme for $c_z$ (see (12)):

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n. \qquad (28)$$

The residual caused by the approximate solution $\hat c_z^n$ in (13) is

$$r_{c_z}^{n+1}(\mu) = B \hat c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat c_z^n)] - A \hat c_z^{n+1}. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This means that the following error bounds (31) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition and does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume that there exists a positive constant $L_h$ such that

$$\left\|h_z(\hat c_a^n, \hat c_b^n, \hat q_z^n) - h_z(c_a^n, c_b^n, q_z^n)\right\| \le L_h \left\|c_z^n - \hat c_z^n\right\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming that the initial projection error vanishes, $e_{c_z}^0(\mu) = 0$, we have a similar estimation for the approximation error $e_{c_z}^n(\mu) = c_z^n - \hat c_z^n$ ($n = 1, \ldots, K$):

$$\|e_{c_z}^n(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \left( \tau\, \varepsilon_{EI}^k(\mu) + \|r_{c_z}^{k+1}(\mu)\| \right), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e_{c_z}^n(\mu)\| \le \eta_{N,M,c_z}^n(\mu) := \sum_{k=0}^{n-1} (G_{F,c})^{n-1-k} \left( \tau \|A^{-1}\|\, \varepsilon_{EI}^k(\mu) + \|A^{-1} r_{c_z}^{k+1}(\mu)\| \right), \qquad (32)$$

where $G_{F,c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest, $e_{c_z,O}^n(\mu) = P c_z^n - P\hat c_z^n$, can be obtained based on the error bound for the field variable. Similar to (25), we have

$$\|e_{c_z,O}^{n+1}(\mu)\| \le \bar\eta_{N,M,c_z}^{n+1}(\mu) := G_{O,c}\, \eta_{N,M,c_z}^n(\mu) + \tau \|P A^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|A^{-1} r_{c_z}^{n+1}(\mu)\|, \qquad (33)$$

where $G_{O,c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1\times\mathcal{N}}$ in this model, which means the norm of the output error $e_{c_z,O}^{n+1}(\mu)$ is the absolute value of the last entry of the error vector $e_{c_z}^{n+1}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can be obtained similarly, by following the derivation in Section 7.1. As the output of interest for the system (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, just as the output error bound (33) involves the error bound $\eta_{N,M,c_z}^n(\mu)$ for the field variable $c_z$, an output error estimation derived for all field variables together would involve a corresponding error bound (denoted $\eta_{N,M,U}^n(\mu)$) for the vector $U$. Obviously, the error bound $\eta_{N,M,U}^n(\mu)$ is much rougher than the bound $\eta_{N,M,c_z}^n(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large, and the weight $\tau L_h$ is still small, because the time step $\Delta t$ is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates over time. Since $\eta_{N,M}^n(\mu)$ (respectively $\eta_{N,M,c_z}^n(\mu)$) enters the output error bound (25) (respectively (33)), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that error estimates such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that allows an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. If the true output error at $\mu_{\max}$ is smaller than $\mathrm{tol}_{\mathrm{RB}}$, we assume that there is no need to include a new basis vector, and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{\mathrm{RB}}(< 1)$, $\mathrm{tol}_{\mathrm{decay}}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4
2: while the error $\eta_N(\mu_{\max}) > \mathrm{tol}_{\mathrm{RB}}$ do
3:   Implement Steps 3−6 in Algorithm 4
4:   Compute the decay rate of the error bound, $d_\eta = \frac{\eta_{N-1}(\mu^{\mathrm{old}}_{\max}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu^{\mathrm{old}}_{\max})}$
5:   if $d_\eta < \mathrm{tol}_{\mathrm{decay}}$ then
6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$
7:     if $e_N(\mu_{\max}) < \mathrm{tol}_{\mathrm{RB}}$ then
8:       Stop
9:     end if
10:  end if
11: end while
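The early-stop test of Algorithm 5 amounts to a few scalar comparisons; a sketch with stand-in quantities (the greedy loop itself is elided):

```python
def early_stop(eta_old, eta_new, true_err, tol_rb, tol_decay):
    """Stop when the bound stagnates AND the true output error is small."""
    d_eta = (eta_old - eta_new) / eta_old      # decay rate of the error bound
    return d_eta < tol_decay and true_err < tol_rb

# bound stagnates (only 1% decay) but the true output error is already small
stop = early_stop(eta_old=1e-3, eta_new=0.99e-3, true_err=1e-8,
                  tol_rb=1e-6, tol_decay=0.05)
```

The expensive true-error check is only triggered in the stagnation branch, so the full model is solved at most at a handful of parameters.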


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to accommodate such a case, the tolerance $\mathrm{tol}_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 takes effect.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{\mathrm{in}})$ is chosen in a reasonable parameter domain so as to maximize the production rate $\mathrm{Pr}(\mu) = \frac{s(\mu)\, Q}{t_{\mathrm{cyc}}}$ while respecting a requirement on the recovery yield $\mathrm{Rec}(\mu) = \frac{s(\mu)}{t_{\mathrm{in}}\left(c_a^f + c_b^f\right)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\,dt$, and $c_{z,O}(t;\mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the following optimization problem for batch chromatography:

$$\begin{aligned}
\min_{\mu\in\mathcal{P}}\ & -\mathrm{Pr}(\mu) \\
\text{s.t.}\ & \mathrm{Rec}^{\min} - \mathrm{Rec}(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& c_z(\mu),\ q_z(\mu)\ \text{are the solutions of the system (3)--(5)},\ z = a, b.
\end{aligned} \qquad (34)$$

Notice that when solving the system (3)–(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large total number of time steps (up to $O(10^4)$) for every parameter $\mu \in \mathcal{P}$, which causes considerable difficulties in the error estimation and in the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}^{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$, $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $\mathcal{N}$ in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

Column dimensions [cm]:                          2.6 × 10.5
Column porosity ε [-]:                           0.4
Peclet number Pe [-]:                            2000
Mass-transfer coefficients κ_z, z = a, b [1/s]:  0.1
Feed concentrations c_z^f, z = a, b [g/l]:       2.9


Table 2: Coefficients of the adsorption isotherm equation.

H_{a,1} [-]:    2.69      H_{b,1} [-]:    3.73
H_{a,2} [-]:    0.1       H_{b,2} [-]:    0.3
K_{a,1} [l/g]:  0.0336    K_{b,1} [l/g]:  0.0446
K_{a,2} [l/g]:  1.0       K_{b,2} [l/g]:  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection, we compare the runtimes for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate the CRB efficiently, ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in), uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved; this means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10⁻⁴, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (Wa, Wb) at the same error tolerance (tolCRB = 1.0 × 10−7) with different thresholds. M′ = 1 is the number of basis vectors used for error estimation.

         tolASS        Res(ξa_{M+M′})   Res(ξb_{M+M′})   M (Wa, Wb)   Runtime [h]
no ASS   –             9.2 × 10−8       8.5 × 10−8       146, 152     62.5 (–)
ASS      1.0 × 10−4    9.6 × 10−8       8.1 × 10−8       147, 152     6.05 (−90.3%)
ASS      1.0 × 10−3    8.7 × 10−8       9.9 × 10−8       147, 152     3.62 (−94.2%)
ASS      1.0 × 10−2    9.4 × 10−8       6.2 × 10−8       144, 150     2.70 (−95.7%)
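The selection principle behind the ASS can be sketched as follows. This is a minimal illustration in the spirit of Algorithm 3, not a transcription of it; the function name, variable names, and the orthogonalization details are ours. A snapshot is kept only if the relative error of its best approximation in the span of the snapshots already kept exceeds the threshold:

```python
import numpy as np

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Keep only those snapshots of a trajectory that carry new information.

    A snapshot u^n is appended only if the relative error of its best
    approximation in the span of the snapshots kept so far exceeds tol_ass.
    """
    kept = []   # selected snapshots
    Q = None    # orthonormal basis of span(kept)
    for u in trajectory:
        u = np.asarray(u, dtype=float)
        nu = np.linalg.norm(u)
        if nu == 0.0:
            continue  # a zero state adds no information
        if Q is None:
            kept.append(u)
            Q = (u / nu)[:, None]
            continue
        r = u - Q @ (Q.T @ u)  # error of the projection onto span(kept)
        if np.linalg.norm(r) > tol_ass * nu:
            kept.append(u)
            Q = np.hstack([Q, (r / np.linalg.norm(r))[:, None]])
    return kept
```

A larger `tol_ass` discards more near-linearly-dependent states, which is exactly the trade-off between runtime and retained information reported in Table 3.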

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed


with tolASS = 1.0 × 10−4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tolCRB = 1.0 × 10−7, tolRB = 1.0 × 10−6, tolASS = 1.0 × 10−4. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tolRB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithms        Runtime [h]
POD-Greedy        16.22 ¹
ASS-POD-Greedy    7.92 (−51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is advisable to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator ηN(μ) in the greedy algorithm (e.g., Algorithm 4, 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error bound in Algorithm 4:
\[
\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu),
\]
where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as
\[
e^{\max}_N = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} e_N(\mu), \qquad e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu), \qquad e_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \left\| c^n_{z,O}(\mu) - \bar c^n_{z,O}(\mu) \right\|,
\]
where $\bar c^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to realize an early stop.
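The max-mean structure of the error indicator η_N(μ_max) used in the greedy algorithm can be sketched as follows. The names are ours, and `bounds[mu][z]` is assumed to hold the per-time-step bounds η^n_{N,M,c_z}(μ):

```python
import numpy as np

def averaged_bound(eta_n):
    """Time average (1/K) * sum_{n=1}^K eta^n of per-time-step bounds."""
    return float(np.mean(eta_n))

def greedy_indicator(bounds):
    """Maximum over parameters mu and components z of the averaged bound.

    bounds : dict mapping mu -> {"a": array of K bounds, "b": ...}
    """
    return max(averaged_bound(eta_n)
               for per_mu in bounds.values()
               for eta_n in per_mu.values())
```

The same max-mean structure gives the reference true output error when the per-time-step bounds are replaced by the per-time-step output errors.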

Figure 3 shows the results for Algorithm 5, where tolASS = 0.03. Using the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.
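The early-stop logic can be paraphrased as follows. This is a schematic, not the paper's Algorithm 5 verbatim: the stagnation factor and all names are ours, and the expensive true error is evaluated only at the single greedy-selected parameter:

```python
def greedy_with_early_stop(extend_basis, error_bound, true_error,
                           train_set, tol_rb, max_iter=100, stag_factor=0.9):
    """Greedy RB extension with an early stop on bound stagnation.

    extend_basis(mu) : enlarges the RB with the trajectory at mu
    error_bound(mu)  : cheap a posteriori output error bound
    true_error(mu)   : expensive true output error, evaluated sparingly
    """
    prev_bound = float("inf")
    for it in range(max_iter):
        mu_max = max(train_set, key=error_bound)
        bound = error_bound(mu_max)
        if bound <= tol_rb:
            return "converged", it
        # the bound stagnates: check the true error at mu_max only
        if bound > stag_factor * prev_bound and true_error(mu_max) <= tol_rb:
            return "early-stop", it
        prev_bound = bound
        extend_basis(mu_max)
    return "max-iter", max_iter
```

The single true-error evaluation per stagnating step keeps the extra cost negligible compared to checking the true error over the whole training set.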

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations on a validation set Pval with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max. error = max_{μ∈Pval} e_N(μ). It is seen that the average runtime of a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tolRB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter μ = (Q, tin) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 about here: semilogarithmic plot of the maximal error over Ptrain versus the size N of the RB, comparing the field variable error bound, the output error bound, and the true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} \max_{z \in \{a,b\}} \eta_{N,M,c_z}(\mu)$, where $\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.


[Figure 3 about here: semilogarithmic plot of the maximal error over Ptrain versus the size N of the RB, comparing the output error bound and the true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 about here: scatter plot of the selected parameters in the parameter domain, feed flow rate Q ∈ [0.0667, 0.1667], injection period tin ∈ [0.5, 2].]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process; the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set Pval with 600 random sample points. Tolerances for the generation of the ROM: tolCRB = 1.0 × 10−7, tolASS = 1.0 × 10−4, tolRB = 1.0 × 10−6.

Simulations             Max. error    Average runtime [s]   SpF
FOM (N = 1500)          –             312.13                (–)
ROM (POD-Greedy)        3.79 × 10−7   6.3                   50
ROM (ASS-POD-Greedy)    4.58 × 10−7   6.3                   50

[Figure 5 about here: dimensionless concentration versus dimensionless time at the column outlet; curves ca-FOM, cb-FOM, ca-ROM, cb-ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter μ = (Q, tin) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:
\[
\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; -\bar{\mathrm{Pr}}(\mu) \\
& \;\text{s.t.} \quad \mathrm{Rec}_{\min} - \bar{\mathrm{Rec}}(\mu) \le 0, \\
& \qquad\; \bar c^n_z(\mu),\ \bar q^n_z(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b.
\end{aligned}
\]
Here $\bar{\mathrm{Pr}}(\mu)$ and $\bar{\mathrm{Rec}}(\mu)$ are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let μ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖μ_{k+1} − μ_k‖ < tolopt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tolopt = 1.0 × 10−4. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
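The stopping test ‖μ_{k+1} − μ_k‖ < tolopt can be wrapped around any derivative-free search. The sketch below is our own minimal compass search with a slot for a penalty-style ROM objective, not NLopt's DIRECT-L implementation; it only illustrates the criterion:

```python
import numpy as np

def optimize_rom(objective, mu0, step, tol_opt=1e-4, max_iter=1000):
    """Derivative-free compass search with the stopping test
    ||mu_{k+1} - mu_k|| < tol_opt.

    objective(mu) evaluates the ROM-based cost, e.g. -Pr(mu) plus a
    penalty term if the recovery constraint Rec(mu) >= Rec_min is violated.
    """
    mu = np.asarray(mu0, dtype=float)
    f = objective(mu)
    for _ in range(max_iter):
        mu_old = mu.copy()
        # try +/- step along each coordinate, keep any improvement
        for i in range(mu.size):
            for s in (step, -step):
                trial = mu.copy()
                trial[i] += s
                f_trial = objective(trial)
                if f_trial < f:
                    mu, f = trial, f_trial
        if np.linalg.norm(mu - mu_old) < tol_opt:
            if step < tol_opt:
                break          # converged on the finest step size
            step *= 0.5        # refine the pattern and continue
    return mu, f
```

Each `objective` call stands for one reduced simulation, which is why replacing the FOM by the ROM inside this loop pays off so strongly in Table 6.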

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is used repeatedly in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the runtime for constructing and using a surrogate ROM, and the runtime for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

Simulations      Objective (Pr)   Opt. solution (μ)    N_it ¹   Runtime [h]   SpF
FOM-based Opt.   0.020264         (0.07964, 1.05445)   202      33.88         –
ROM-based Opt.   0.020266         (0.07964, 1.05445)   202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique was proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived, based on the residual in the vector space. However, the output error bound is conservative, due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Contents

1 Introduction
2 Problem statement
3 Reduced basis methods
4 Empirical interpolation
5 Adaptive snapshot selection
6 RB scheme for batch chromatography
  6.1 Full-order model based on FV discretization
  6.2 Reduced-order model
  6.3 Offline-online decomposition
7 Output-oriented error estimation
  7.1 Output error estimation for the reduced order model
  7.2 Output error estimation for the batch chromatographic model
  7.3 An early-stop criterion for the Greedy algorithm
8 Numerical experiments
  8.1 Performance of the adaptive snapshot selection
  8.2 Performance of the output error bound
  8.3 ROM-based Optimization
9 Conclusions

1 Introduction

In the last decade, optimization with constraints given by partial differential equations (PDE constrained optimization, for short) has emerged as a challenging research area. It has increasingly arisen in various engineering contexts, such as optimal design, control, and parameter estimation. Over the past years, besides the increasing progress of computing hardware, a large number of attempts have been devoted to the development of efficient algorithms and strategies for solving such optimization problems; see, for example, [6, 7, 8, 26] and references therein.

Model order reduction (MOR) is a powerful technique for constructing a low-cost approximation of large-scale systems resulting from the discretization of PDEs. The low-cost approximation, often called reduced order model (ROM), on the one hand should have the same structure as the original large-scale system but with a much smaller number of degrees of freedom (DOFs); on the other hand, it must have acceptable accuracy for the input-output representation of the original system. Due to the small size and negligible error, the derived ROM is used as a surrogate model of the large-scale system in various disciplines, such as optimization and control, fluid dynamics, structural dynamics, and circuit design. In particular, for optimization problems with nonlinear PDE constraints, proper orthogonal decomposition (POD) is often used to derive a ROM, and this has been applied to accelerate optimization problems [13, 14]. However, a ROM obtained from POD is reliable only in the neighborhood of the input parameter setting at which the ROM is constructed. There is no guarantee of the accuracy of the ROM at a different parameter setting. To circumvent this problem, a trust-region technique was suggested to manage the POD-based ROM in [13], where the ROM is updated according to the quality of the approximation. However, the repeated construction of the ROM reduces the significance of the savings in computational resources obtained by MOR. In contrast, the technique of parametric model order reduction (PMOR) enables the generation of a parametric ROM with acceptable accuracy over the feasible parameter domain, such that a single ROM is sufficient for the optimization process. Among the various PMOR methods [1, 3, 5, 10, 15, 16], few are applicable to nonlinear problems with parameters. The reduced basis method (RBM), however, has been developed for nonlinear parametric systems [2, 11, 18, 34]. Moreover, endowed with a posteriori error estimation, the parametric ROM can be generated automatically.

The RBM has proved to be a powerful tool for the rapid and reliable evaluation of parameterized PDEs [2, 11, 18, 34]. The reduced basis (RB) used to construct the ROM is computed from snapshots (the solutions of the PDEs at certain selected samples of the parameters and/or chosen time steps) through a greedy algorithm. When applied to optimization, the original system resulting from the discretization of the PDEs is first replaced by a ROM generated by the RBM; then the related quantities can be evaluated rapidly by solving the cheap ROM rather than the original expensive one. So far, research on the application of RBMs to PDE constrained optimization is very limited. In [33], the authors mainly focused on RBMs for affinely parameterized linear problems. Shape optimization employing RBMs for viscous flow in hemodynamics was addressed in [29], where the empirical interpolation method (EIM) [2] was exploited


to treat the non-affinity in the linear parameterized system. Applications to multiscale problems can be found in the recent work [32]. However, all these applications focus on finite element (FE) based RBMs for linear, time-independent PDEs.

In this paper, we consider an optimization problem with PDE constraints, where the PDEs are nonlinear, time-dependent, and have non-affine parameter dependency. Such problems arise, for example, from batch chromatography in chemical engineering. To capture the dynamics precisely, a large number of DOFs must be employed, which results in a large-scale system. Solving such a complex system during optimization is time-consuming. Constructing a reduced model for a parameterized nonlinear, time-dependent, non-affine system poses additional challenges for all kinds of MOR methods, and RBMs are no exception. Furthermore, the discretization scheme must be chosen carefully for nonlinear problems, especially for convection-dominated problems. The finite volume (FV) discretization is used to construct the full order model (FOM), by which the conservation property of the system is well preserved. The FV-based RBM was first introduced for linear evolution equations in [24] and was later extended to nonlinear problems [11, 25], where the nonlinear operator resulting from the discretization is treated with empirical operator interpolation for an efficient offline-online computation of the ROM.

Without doubt, an efficient, rigorous, and sharp a posteriori error estimation is crucial for RBMs, because it enables the automatic generation of the RB and, in turn, of a reliable ROM with a desired accuracy with the help of a greedy algorithm. Rapid and reliable evaluation of the input-output relationships of the associated PDEs is very important for efficiently solving the optimization problem, where an output response, rather than the field variable (the solution to the PDEs), is of interest. When a ROM is employed for such an evaluation, the error of the output of interest, rather than that of the field variable, should be estimated and used for the generation of the ROM. We propose to use the output error for the generation of the RB. There are some results on output error bounds for FE-based RBMs for elliptic or parabolic problems [35, 36]. However, there is no study so far on output error bounds for FV-based RBMs for nonlinear evolution equations. In this work, we present a residual-based error estimation for the output of the ROM derived by a FV-based RBM, to obtain a goal-oriented ROM.

With the help of an error estimate, the construction of the ROM can be managed automatically. In some cases, however, the error bound may not work as well as one expects. For example, in the process of extending the basis, the error bound may decrease slowly or even stagnate after some steps, although the true error is already very small. As a result, the basis extension is not stopped, because the error bound does not go below the prespecified tolerance. This means that the basis will be unnecessarily extended if there is no reasonable remedy. Certainly, simply using the true error as the indicator is not a wise choice, because it is typically time-consuming to compute the true error for all sample points in the training set. To make full use of the available error estimate, we propose an early-stop criterion for the basis extension by checking the true error at the parameter selected by the greedy algorithm according to the output error bound. In this way, the basis extension can be stopped in time, and the size of the resulting ROM can be kept reasonably small.

Additionally, the efficiency of the RBM is ensured by the strategy of offline-online decomposition. During the offline stage, all quantities that depend on the full dimension but not on the parameter are precomputed, and a parameterized reduced model is obtained a priori; during the optimization process, a reliable output response can be obtained rapidly by an online simulation based on the ROM at the parameter determined by the optimization procedure. In this way, the ROM-based optimization can be solved more efficiently than the FOM-based one. Note that the offline time is usually not taken into consideration, although the offline computation is typically time-consuming, especially for time-dependent PDEs.

To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) is of interest, the solutions at the time instances of the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g., in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computation will be very expensive, because the number of snapshots is too large; if one just trivially selects part of the solutions, i.e., the solutions at some of the time instances (e.g., every second or every few time steps), the final RB approximation might be of low accuracy, because important information may be lost by such a naive snapshot selection. We propose to select the snapshots adaptively, according to the variation of the solution in the evolution process. The idea is to make full use of the behavior of the trajectory and to discard the redundant (linearly dependent) information adaptively. This enables the generation of the RB from a small number of snapshots containing only the "useful" information. In addition, it is easily combined with other algorithms for the generation of the RB, e.g., the POD-Greedy algorithm [24].

This paper is organized as follows. We state the underlying PDE constrained optimization problem in detail in Section 2. Reviews of the RBM and the EIM are given in Section 3 and Section 4, respectively. The adaptive technique of snapshot selection and its implementation are addressed in detail in Section 5. Section 6 presents the RB scheme for the batch chromatographic model, including the derivation of the FOM based on the FV discretization, the generation of the ROM, and the strategy of the offline-online decomposition. In Section 7, an output-oriented error bound is derived in the vector space for evolution equations for the RBM based on FV discretization, and an early-stop criterion is proposed to make the construction of the ROM more efficient. Numerical examples, including optimization based on the ROM, are presented in Section 8. Conclusions are drawn in Section 9.

2 Problem statement

In this work we consider the following PDE constrained optimization problem:
\[
\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; \mathcal{J}(u(t, x; \mu), \mu) \\
& \;\text{s.t.} \quad \Psi(u(t, x; \mu), \mu) \le 0, \\
& \qquad\; \Phi(u(t, x; \mu), \mu) = 0,
\end{aligned} \tag{1}
\]


where $\mathcal{J}$ is the objective function and $\Psi$ defines the inequality constraints. The field variable $u(t, x; \mu)$ is the solution to the underlying parametrized partial differential equations $\Phi(u(t, x; \mu), \mu) = 0$, $\mu \in \mathcal{P}$. Such an optimization problem arises in many applications, such as aerodynamics, fluid dynamics, and chemical processes. In practical computations, the PDEs are usually discretized, such that the optimization problem in (1) is replaced by an optimization problem in finite dimensions:
\[
\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; J(u^{\mathcal{N}}(t; \mu), \mu) \\
& \;\text{s.t.} \quad \Psi(u^{\mathcal{N}}(t; \mu), \mu) \le 0, \\
& \qquad\; \Phi(u^{\mathcal{N}}(t; \mu), \mu) = 0,
\end{aligned} \tag{2}
\]

where $u^{\mathcal{N}} = u^{\mathcal{N}}(t; \mu) \in \mathbb{R}^{\mathcal{N}}$ is the solution to the discretized system of equations $\Phi(u^{\mathcal{N}}(t; \mu), \mu) = 0$, and $J$, $\Psi$, and $\Phi$ are the operators in the finite dimensional vector space corresponding to $\mathcal{J}$, $\Psi$, and $\Phi$, respectively. The discretized equations are often of very large scale and complex. At each iteration of the optimization process, such a large-scale complex system of equations must be solved at least once. As a result, the whole optimization process will be time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns $u^{\mathcal{N}}$.

To further motivate and illustrate our methods, we consider a particular example: the optimal operation of batch chromatography. Batch chromatography, as a crucial separation and purification tool, is widely employed in the food, fine chemical, and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period tin, a mixture consisting of a and b is injected at the inlet of the column, which is packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column and thus separate from each other when exiting the column. At the column outlet, component a is collected between the cutting points t3 and t4, and component b is collected between t1 and t2. Here, the positions of t1 and t4 are determined by a minimum concentration threshold that the detector can resolve, and the positions of t2 and t3 are determined by the purity specifications (Pua and Pub) imposed on the products. After a cycle period tcyc = t4 − t1, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with limited mass-transfer rate, characterized by a linear driving force approximation. The governing equations in dimensionless form are formulated as follows:
\[
\begin{aligned}
\frac{\partial c_z}{\partial t} + \frac{1-\varepsilon}{\varepsilon} \frac{\partial q_z}{\partial t} &= -\frac{\partial c_z}{\partial x} + \frac{1}{Pe} \frac{\partial^2 c_z}{\partial x^2}, & 0 < x < 1, \\
\frac{\partial q_z}{\partial t} &= \frac{L}{Q/(\varepsilon A_c)} \, \kappa_z \left( q^{Eq}_z - q_z \right), & 0 \le x \le 1,
\end{aligned} \tag{3}
\]

where $c_z$, $q_z$ are the concentrations of component $z$ ($z = a, b$) in the liquid and solid phase, respectively, $Q$ is the volumetric feed flow rate, $A_c$ the cross-sectional area of the column with length $L$, $\varepsilon$ the column porosity, $\kappa_z$ the mass-transfer coefficient, and $Pe$ the Peclet number. The adsorption equilibrium $q^{Eq}_z$ is described by isotherm equations of bi-Langmuir type:
\[
q^{Eq}_z = f_z(c_a, c_b) = \frac{H_{z1} c_z}{1 + K_{a1} c^f_a c_a + K_{b1} c^f_b c_b} + \frac{H_{z2} c_z}{1 + K_{a2} c^f_a c_a + K_{b2} c^f_b c_b}, \tag{4}
\]

where $c^f_z$ is the feed concentration of component $z$, and $H_{zj}$ and $K_{zj}$ are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:
\[
\begin{aligned}
& c_z(0, x) = 0, \quad q_z(0, x) = 0, \quad 0 \le x \le 1, \\
& \left. \frac{\partial c_z}{\partial x} \right|_{x=0} = Pe \left( c_z(t, 0) - \chi_{[0, t_{in}]}(t) \right), \qquad \left. \frac{\partial c_z}{\partial x} \right|_{x=1} = 0,
\end{aligned} \tag{5}
\]
where $t_{in}$ is the injection period and $\chi_{[0, t_{in}]}$ is the characteristic function
\[
\chi_{[0, t_{in}]}(t) =
\begin{cases}
1 & \text{if } t \in [0, t_{in}], \\
0 & \text{otherwise.}
\end{cases}
\]

More details about the mathematical modeling for batch chromatography can be foundin [20]
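As an illustration, the isotherm (4) with the coefficients of Table 2 and the feed concentrations of Table 1 can be evaluated as follows (the dictionary layout and the function name are ours):

```python
# Coefficients of the bi-Langmuir isotherm (Table 2) and feed
# concentrations (Table 1)
H1 = {"a": 2.69, "b": 3.73}      # Hz1 [-]
H2 = {"a": 0.1, "b": 0.3}        # Hz2 [-]
K1 = {"a": 0.0336, "b": 0.0446}  # Kz1 [l/g]
K2 = {"a": 1.0, "b": 3.0}        # Kz2 [l/g]
CF = {"a": 2.9, "b": 2.9}        # feed concentrations cfz [g/l]

def q_eq(z, ca, cb):
    """Adsorption equilibrium q_z^Eq = f_z(c_a, c_b) of component z."""
    cz = ca if z == "a" else cb
    d1 = 1.0 + K1["a"] * CF["a"] * ca + K1["b"] * CF["b"] * cb
    d2 = 1.0 + K2["a"] * CF["a"] * ca + K2["b"] * CF["b"] * cb
    return H1[z] * cz / d1 + H2[z] * cz / d2
```

In the dilute limit, the slope f_z(c, 0)/c approaches the total Henry coefficient H_z1 + H_z2, while the shared denominators model the competitive adsorption of the two components at finite concentrations; this coupling is the source of the nonlinearity of the system.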

Note that the feed flow rate $Q$ and the injection period $t_{in}$ are often considered as the operating variables, denoted as $\mu = (Q, t_{in})$, which play the role of parameters in the PDEs (3)-(5). The system of PDEs is nonlinear, time-dependent, and has non-affine parameter dependency. The nonlinearity of the system is reflected by (4). To capture the system dynamics precisely, a large number of DOFs must be introduced for the discretization of the PDEs.

[Figure 1: Sketch of a batch chromatographic process for the separation of a and b.]

The optimal operation of batch chromatography is of practical importance, since it allows one to exploit the full economic potential of the process and to reduce the separation cost. Many efforts have been made towards the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and the references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on the finely discretized FOM. Such a model with a large number of DOFs is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be repeatedly solved in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

In this work, the RBM is employed to generate a surrogate ROM of the parameterized PDEs. The resulting ROM is used to obtain a rapid evaluation of the output response y(u_N) for the discretized system Φ(u_N(t, µ); µ) = 0 in (2) during the optimization process. In the next section, we review the RBM and highlight some difficulties therein.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parametrized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution u(µ) to the parametrized PDEs depends smoothly on the parameter µ in the parameter domain P, such that for any parameter µ ∈ P the corresponding solution u(µ) can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the qualification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain Ω ⊂ R^d and the parameter domain P ⊂ R^p:

  ∂_t u(t, x; µ) + L[u(t, x; µ)] = 0,  t ∈ [0, T],  x ∈ Ω,  µ ∈ P,     (6)

where L[·] is a spatial differential operator. Let W^𝒩 ⊂ L²(Ω) be an 𝒩-dimensional discrete space in which an approximate numerical solution to equation (6) is sought. Let 0 = t^0 < t^1 < ··· < t^K = T be K + 1 time instants in the time interval [0, T]. Given µ ∈ P, with suitable initial and boundary conditions, the numerical solution u^n(µ) at time t = t^n can be obtained by using suitable numerical methods, e.g., the finite volume method. Assume that u^n(µ) ∈ W^𝒩 satisfies the evolution scheme

  L_I(t^n)[u^{n+1}(µ)] = L_E(t^n)[u^n(µ)] + g(u^n(µ); µ),     (7)

where L_I(t^n)[·] and L_E(t^n)[·] are linear implicit and explicit operators, respectively, and g(·) is a nonlinear µ-dependent operator. These operators are obtained from the discretization of the time derivative and of the spatial differential operator L. For implicit schemes of FVMs, L_I(t^n) can be nonlinear, see, e.g., [11], but we only consider the linear case in this paper. By convention, u^n(µ) is considered as the "true" solution, by assuming that the numerical solution is a faithful approximation of the exact (analytical) solution u(t^n, x; µ) at the time instance t^n.


The RBM aims to find a suitable low-dimensional subspace

  W_N = span{V_1, ..., V_N} ⊂ W^𝒩,

and to solve the resulting ROM to get the RB approximation û^n(µ) to the "true" solution u^n(µ). In addition, or alternatively, to the field variable itself, the approximation of outputs of interest can also be obtained inexpensively by ŷ(µ) = y(û(µ)). More precisely, given a matrix V = [V_1, ..., V_N] whose columns span the reduced basis, Galerkin projection is employed to generate the ROM as follows:

  V^T L_I(t^n)[V a^{n+1}(µ)] = V^T L_E(t^n)[V a^n(µ)] + V^T g(V a^n(µ)),     (8)

where a^n(µ) = (a_1^n(µ), ..., a_N^n(µ))^T ∈ R^N is the vector of weights in the approximation û^n(µ) = V a^n(µ) = Σ_{i=1}^N a_i^n(µ) V_i, and it is the vector of unknowns in

the ROM. Thanks to the linearity of the operators L_I and L_E, the ROM (8) can be rewritten as

  (V^T L_I(t^n) V) a^{n+1}(µ) = (V^T L_E(t^n) V) a^n(µ) + V^T g(V a^n(µ)),     (9)

where V^T L_I(t^n) V and V^T L_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), V^T g(V a^n(µ)), cannot be done analogously, because of the nonlinearity of g. This will be tackled by using a technique of empirical interpolation, to be addressed in the next section.

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: P_train, µ^0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ^0, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^K
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū = [ū^0, ..., ū^K], with ū^n = u^n(µ_max) − Π_{W_N}[u^n(µ_max)], n = 0, ..., K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while

Remark 3.1. In Algorithm 1, the error η_N(µ_max) is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution u^n(µ), obtained by solving the full, large system, an error bound is usually used instead. This is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.
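In a matrix setting, the first POD mode of Remark 3.1 is simply the leading left singular vector; a minimal NumPy sketch (the rank-one example data is made up for illustration):

```python
import numpy as np

def first_pod_mode(U_bar):
    """First POD mode of the snapshot matrix U_bar: the left singular
    vector belonging to the largest singular value (cf. Remark 3.1)."""
    Phi, _, _ = np.linalg.svd(U_bar, full_matrices=False)
    return Phi[:, 0]   # singular values are returned in descending order

# Rank-one data: the first mode must span the single data direction.
u = np.array([1.0, 2.0, 2.0])
U_bar = np.outer(u, [1.0, -0.5, 3.0])   # 3 spatial DOFs, 3 snapshots
V1 = first_pod_mode(U_bar)
```

In Algorithm 1 this is applied to the projection-error snapshots ū^n, so each greedy iteration adds the dominant direction not yet captured by the current basis.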


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: by checking the true error at the parameter determined by the greedy algorithm, we get an early stop of the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or non-affine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or non-affine part, e.g., V^T g(V a^n(µ)) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM, and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a non-affine µ-dependent function g(x; µ) with sufficient regularity, (x, µ) ∈ Ω × P ⊂ R^d × R^p, the idea of EIM is to approximate g(x; µ) by a linear combination of a precomputed µ-independent basis W = [W_1, ..., W_M], termed the collateral reduced basis (CRB), with corresponding µ-dependent coefficients σ(µ) = [σ_1(µ), ..., σ_M(µ)]^T, i.e.,

  g̃(x; µ) = Σ_{i=1}^M W_i(x) σ_i(µ).

Here the coefficients σ_i are parameter-dependent and determined by solving the linear system

  g(x_j; µ) = Σ_{i=1}^M W_i(x_j) σ_i(µ),  j = 1, ..., M,     (10)

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation g̃(x; µ) interpolates the exact value g(x; µ) at the EI points T_M = {x_1, ..., x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Algorithm 2 Generation of CRB and EI points
Input: L^crb_train = {g(x; µ) | µ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, ..., W_M] and EI points T_M = {x_1, ..., x_M}
1: Initialization: m = 1, W^0_EI = [ ], ||ξ_0|| = 1
2: while ||ξ_{m−1}|| > tol_CRB do
3:   For each g ∈ L^crb_train, compute the "best" approximation g̃ = Σ_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, ..., W_{m−1}}, where the σ_i can be obtained by solving the linear system (10)
4:   Define g_m = arg max_{g ∈ L^crb_train} ||g − g̃|| and the error ξ_m = g_m − g̃_m
5:   if ||ξ_m|| ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis:
         x_m = arg sup_{x ∈ Ω} |ξ_m(x)|,  W_m = ξ_m / ξ_m(x_m)     (11)
9:   end if
10:  m = m + 1
11: end while

Remark 4.1. Algorithm 2 is used for a fast evaluation of a non-affine function of the coordinate x and the parameter µ by using interpolation. In [11, 25], the idea was extended to the more general case of empirical operator interpolation, which is more applicable for an operator that depends on the field variable u(t, x; µ), e.g., g(u(t, x; µ), x; µ). The evaluation of g(x_j; µ) in (10) is thus replaced by g(u(t, x_j; µ), x_j; µ). In this paper, we use empirical operator interpolation, where the non-affine operator appears as g(u(t, x; µ); µ). The details are addressed in Section 6.2.
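Once the CRB and the EI points are fixed, the online step of EIM reduces to the small M × M solve of (10); a NumPy sketch with synthetic matrices (not data from the paper):

```python
import numpy as np

def eim_coefficients(W, ei_idx, g_at_pts):
    """Solve the M x M interpolation system (10):
    sum_i W_i(x_j) sigma_i = g(x_j; mu), j = 1..M."""
    # W: (n, M) matrix whose columns are the CRB vectors W_1..W_M;
    # ei_idx: row indices of the EI points x_1..x_M.
    return np.linalg.solve(W[ei_idx, :], g_at_pts)

def eim_interpolant(W, ei_idx, g_at_pts):
    """g_tilde = sum_i sigma_i(mu) W_i; interpolates g at the EI points."""
    return W @ eim_coefficients(W, ei_idx, g_at_pts)

# Synthetic example: 5 grid points, M = 2 CRB vectors, EI points at rows 0 and 3.
W = np.array([[1.0, 0.0], [0.4, 0.2], [1.0, 1.0], [0.0, 1.0], [0.5, 0.3]])
g_tilde = eim_interpolant(W, [0, 3], np.array([3.0, 4.0]))
```

The full nonlinear term thus only needs to be evaluated at the M EI points, which is what makes the online cost independent of the full dimension.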

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper, we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in the parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by using model-constrained optimization [9]. In these papers, the authors intend to choose the sample points adaptively and to get an "optimal" training set. An "optimal" training set means that the original manifold M = {u(µ) | µ ∈ P} can be well represented by the submanifold M̃ = {u(µ) | µ ∈ P_train} induced by the sample set P_train, with the size of P_train as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems may arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū, due to the large size of the matrix Ū. This is also true for the generation of the CRB, if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps) as snapshots. However, the results might be of low accuracy, because some important information may be lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively, according to the variation of the trajectory of the solution {u^n(µ)}_{n=0}^K. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 can be reflected by the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity 1 − |⟨v_1, v_2⟩| / (||v_1|| ||v_2||), where cos(θ) = ⟨v_1, v_2⟩ / (||v_1|| ||v_2||), is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter µ and the initial vector u^0(µ), the numerical solutions u^n(µ) (n = 1, ..., K) can be obtained, e.g., by using the evolution scheme (7). Define an indicator

  Ind(u^n(µ), u^m(µ)) = 1 − |⟨u^n(µ), u^m(µ)⟩| / (||u^n(µ)|| ||u^m(µ)||),

which is used to measure the linear dependency of the two vectors. When Ind(u^n(µ), u^m(µ)) is large, the correlation between u^n(µ) and u^m(µ) is weak. Algorithm 3 shows the realization of the ASS: u^n(µ) is taken as a new snapshot only when u^n(µ) and u^{n_j}(µ) are "sufficiently" linearly independent, i.e., when Ind(u^n(µ), u^{n_j}(µ)) is large enough. Here u^{n_j}(µ) is the last selected snapshot.

Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector u^0(µ), tol_ASS
Output: Selected snapshot matrix S_A = [u^{n_1}(µ), u^{n_2}(µ), ..., u^{n_ℓ}(µ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(µ)]
2: for n = 1, ..., K do
3:   Compute the vector u^n(µ)
4:   if Ind(u^n(µ), u^{n_j}(µ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(µ)]
8:   end if
9: end for

Remark 5.1. The inner product ⟨·,·⟩: W^𝒩 × W^𝒩 → R used above is properly defined according to the solution space W^𝒩, and the norm ||·|| is the correspondingly induced norm. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(µ) and the subspace spanned by the selected snapshots S_A. More redundant information can be discarded in this way, but at a higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; based on our observations, a value of O(10^−4) gives good results for the numerical examples studied in Section 8.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1). There is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, µ^0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ^0, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^K and adaptively select snapshots using Algorithm 3, to get S^A_max = {u^{n_1}(µ_max), ..., u^{n_ℓ}(µ_max)}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n_1}, ..., ū^{n_ℓ}], with ū^{n_s} = u^{n_s}(µ_max) − Π_{W_N}[u^{n_s}(µ_max)], s = 1, ..., ℓ, ℓ ≪ K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while
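The selection step shared by Algorithms 3 and 4 can be sketched as follows; this is a minimal version assuming non-zero snapshots and the Euclidean inner product of Remark 5.1:

```python
import numpy as np

def ind(u, v):
    """Angle-based indicator Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||)."""
    return 1.0 - abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

def ass_select(snapshots, tol_ass=1e-4):
    """Algorithm 3: keep u^n only if it is 'sufficiently' linearly
    independent of the last kept snapshot u^{n_j}."""
    kept = [0]                                  # n_1 = 0
    for n in range(1, len(snapshots)):
        if ind(snapshots[n], snapshots[kept[-1]]) > tol_ass:
            kept.append(n)
    return kept

# Nearly parallel snapshots are discarded; a new direction is kept.
S = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
```

Only the indices of the kept snapshots are stored, so the ℓ ≪ K columns entering the POD step are a small subset of the full trajectory.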

6 RB scheme for batch chromatography

Reduced basis methods are used to perform a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)−(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use an FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)−(4) can be written as follows:

  A c_z^{n+1} = B c_z^n + d_z^n − ((1 − ε)/ε) Δt h_z^n,
  q_z^{n+1} = q_z^n + Δt h_z^n,     (12)

where c_z^n = c_z^n(µ) = (c_z^{n,1}, ..., c_z^{n,𝒩})^T and q_z^n = q_z^n(µ) = (q_z^{n,1}, ..., q_z^{n,𝒩})^T ∈ R^𝒩, z = a, b, denote the solutions of the field variables c_z and q_z at the time instance t = t^n (n = 0, ..., K), A and B are tridiagonal constant matrices, and d_z^n and h_z^n are parameter- and time-dependent:

  d_z^n = d_0^n e_1,  h_z^n = (h_z^{n,1}, ..., h_z^{n,𝒩})^T,

with d_0^n = Δx Pe (λ/2 + ν) χ_[0,t_in](t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, ..., 0)^T ∈ R^𝒩, and

  h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}) = (L/(Q/(ε A_c))) κ_z (f_z(c_a^{n,j}, c_b^{n,j}) − q_z^{n,j}),  j = 1, ..., 𝒩.
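A single time step of (12) then amounts to one linear solve with the constant matrix A; a schematic sketch in which A, B, d_n, and the nonlinear transfer term h_n stand in for the assembled FV operators:

```python
import numpy as np

def step_fom(A, B, c_n, q_n, d_n, h_n, eps, dt):
    """Advance (12) one step for a single component z:
    A c^{n+1} = B c^n + d^n - ((1-eps)/eps) dt h^n,  q^{n+1} = q^n + dt h^n."""
    tau = (1.0 - eps) / eps * dt
    c_next = np.linalg.solve(A, B @ c_n + d_n - tau * h_n)
    q_next = q_n + dt * h_n
    return c_next, q_next

# Trivial check with A = 2I, B = I and no source/transfer terms.
c1, q1 = step_fom(2.0 * np.eye(3), np.eye(3),
                  np.full(3, 2.0), np.zeros(3),
                  np.zeros(3), np.zeros(3), eps=0.5, dt=1e-3)
```

In an actual implementation A would be factorized once (it is constant and tridiagonal), so each of the many time steps costs only a banded triangular solve.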

6.2 Reduced-order model

Let N ∈ N₊ be the number of RB vectors for c_z and q_z, and M ∈ N₊ the number of CRB vectors for the operators h_a and h_b. Here, for simplicity of the analysis, we use the same RB dimension N for c_a, c_b, q_a, and q_b, but one can certainly take different dimensions for the RB; this also applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M} is the CRB for the nonlinear operator h_z, and that V_{c_z}, V_{q_z} ∈ R^{𝒩×N} (V_{c_z}^T V_{c_z} = I, V_{q_z}^T V_{q_z} = I) are the RB for the field variables c_z and q_z, respectively, i.e.,

  h_z^n ≈ W_z β_z^n,  c_z^n ≈ ĉ_z^n = V_{c_z} a_{c_z}^n,  q_z^n ≈ q̂_z^n = V_{q_z} a_{q_z}^n,  n = 0, ..., K.     (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

  A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n d_{c_z} − ((1 − ε)/ε) Δt H_{c_z} β_z^n,
  a_{q_z}^{n+1} = a_{q_z}^n + Δt H_{q_z} β_z^n,     (14)

where a_{c_z}^n = a_{c_z}^n(µ) = (a_{c_z}^{n,1}, ..., a_{c_z}^{n,N})^T and a_{q_z}^n = a_{q_z}^n(µ) = (a_{q_z}^{n,1}, ..., a_{q_z}^{n,N})^T ∈ R^N are the reduced state vectors of the ROM, and A_{c_z} = V_{c_z}^T A V_{c_z}, B_{c_z} = V_{c_z}^T B V_{c_z}, d_{c_z} = V_{c_z}^T e_1, H_{c_z} = V_{c_z}^T W_z, H_{q_z} = V_{q_z}^T W_z are the reduced matrices.

Note that β_z^n = β_z^n(µ) = (β_z^{n,1}, ..., β_z^{n,M})^T ∈ R^M is the vector of coefficients for the empirical interpolation of the nonlinear operator h_z^n, and is parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(µ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

  Σ_{i=1}^M β_z^{n,i} W_z^i(x_j) = ĥ_z^{n,j},  j = 1, ..., M.

Here the evaluation of ĥ_z^{n,j} only needs the j-th entries (ĉ_a^{n,j}, ĉ_b^{n,j}, and q̂_z^{n,j}) of the approximate solution vectors (ĉ_a^n, ĉ_b^n, and q̂_z^n), i.e., ĥ_z^{n,j} = h_z(ĉ_a^{n,j}, ĉ_b^{n,j}, q̂_z^{n,j}). For general operator empirical interpolation, the value of the operator at an interpolation point (e.g., x_j) may depend on more entries of the solution vectors (e.g., the j-th entries and their neighbors). For more details, refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the evaluation of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all 𝒩-dependent terms are computed and stored; in the online process, for any given parameter µ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline process, given training sample sets P^crb_train and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Consequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., A_{c_z}, B_{c_z}, d_{c_z}, H_{c_z}, and H_{q_z}), and the 𝒩-independent ROM can be formulated as in (14). For a newly given parameter µ ∈ P, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered by (13).
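The offline assembly of the reduced matrices in (14) is a handful of dense products; a sketch with random placeholder data (toy sizes, not the dimensions used in the paper):

```python
import numpy as np

def assemble_rom(A, B, e1, Vc, Vq, Wz):
    """Offline stage: project all full-order (N-dependent) operators once."""
    return {"Acz": Vc.T @ A @ Vc, "Bcz": Vc.T @ B @ Vc,
            "dcz": Vc.T @ e1, "Hcz": Vc.T @ Wz, "Hqz": Vq.T @ Wz}

n, N, M = 8, 3, 2                                   # full dim, RB dim, CRB dim
rng = np.random.default_rng(0)
Vc = np.linalg.qr(rng.standard_normal((n, N)))[0]   # orthonormal RB (placeholder)
Vq = np.linalg.qr(rng.standard_normal((n, N)))[0]
ops = assemble_rom(np.eye(n), rng.standard_normal((n, n)),
                   np.eye(n)[:, 0], Vc, Vq, rng.standard_normal((n, M)))
```

All stored objects are of size N or M, so the online stage never touches an object of the full dimension.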

7 Output-oriented error estimation

It is crucial to get a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since, in practice, all the simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the estimation in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response y(u_N) is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error indicator η_N(µ_max) should be an estimate of the error in the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2 for all z_1, z_2 ∈ R^𝒩. The induced norm ||·|| is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by using the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly, with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For the parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall from Section 3 that L_I(t^n) and L_E(t^n) are linear; the evolution scheme (7) can thus be rewritten in the vector space as

  A^(n) u^{n+1}(µ) = B^(n) u^n(µ) + g(u^n(µ); µ),     (15)

where A^(n), B^(n) ∈ R^{𝒩×𝒩} are constant matrices and g(u^n(µ); µ) ∈ R^𝒩 corresponds to the nonlinear term. Note that A^(n) and B^(n) are nonsingular for a stable scheme in practice, n = 0, ..., K − 1.

Given a parameter µ ∈ P, let û^n(µ) = V a^n(µ) be the RB approximation of u^n(µ), and let ĝ^n(µ) = I_M[g(û^n(µ))] = W β^n(µ) be the interpolant of the nonlinear term, where V ∈ R^{𝒩×N} and W ∈ R^{𝒩×M} are the precomputed parameter-independent bases, and a^n(µ) ∈ R^N, β^n(µ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on µ in u^n(µ), û^n(µ), a^n(µ), and β^n(µ), and write u^n, û^n, a^n, and β^n instead. The following a posteriori error estimation is based on the residual

  r^{n+1}(µ) = B^(n) û^n + I_M[g(û^n)] − A^(n) û^{n+1}.     (16)

A direct computation gives the norm of the residual:

  ||r^{n+1}(µ)||² = ⟨r^{n+1}(µ), r^{n+1}(µ)⟩
    = (a^n)^T V^T (B^(n))^T B^(n) V a^n + (β^n)^T W^T W β^n + (a^{n+1})^T V^T (A^(n))^T A^(n) V a^{n+1}
    + 2 (β^n)^T W^T B^(n) V a^n − 2 (a^n)^T V^T (B^(n))^T A^(n) V a^{n+1} − 2 (β^n)^T W^T A^(n) V a^{n+1}.     (17)

Proposition 7.1. Assume that the operator g: R^𝒩 → R^𝒩 is Lipschitz continuous, i.e., there exists a positive constant L_g such that

  ||g(x) − g(y)|| ≤ L_g ||x − y||,  x, y ∈ R^𝒩,

and that the interpolation of g is "exact" for a certain dimension of W = [W_1, ..., W_{M+M′}], i.e.,

  I_{M+M′}[g(û^n)] = Σ_{m=1}^{M+M′} W_m β_m^n = g(û^n).

Assume, moreover, that for all µ ∈ P the initial projection error vanishes, e^0(µ) = 0. Then the approximation error e^n(µ) = u^n − û^n satisfies

  ||e^n(µ)|| ≤ Σ_{k=0}^{n−1} ||(A^(k))^{−1}|| (Π_{j=k+1}^{n−1} G^(j)) (ε_EI^k(µ) + ||r^{k+1}(µ)||),     (18)

where G^(j) = ||(A^(j))^{−1}|| (||B^(j)|| + L_g), and ε_EI^n(µ) is the error due to the interpolation. A sharper error bound can be given as

  ||e^n(µ)|| ≤ η_{N,M}^n(µ) := Σ_{k=0}^{n−1} (Π_{j=k+1}^{n−1} G_F^(j)) (||(A^(k))^{−1}|| ε_EI^k(µ) + ||(A^(k))^{−1} r^{k+1}(µ)||),     (19)

where G_F^(j) = ||(A^(j))^{−1} B^(j)|| + L_g ||(A^(j))^{−1}||, n = 0, ..., K − 1.

Proof. Forming the difference of (15) and (16), we obtain the error equation

  A^(n) e^{n+1}(µ) = B^(n) e^n(µ) + g(u^n) − I_M[g(û^n)] + r^{n+1}(µ)
    = B^(n) e^n(µ) + (g(u^n) − g(û^n)) + (g(û^n) − I_M[g(û^n)]) + r^{n+1}(µ).     (20)

Multiplying both sides of (20) by (A^(n))^{−1}, we get

  e^{n+1}(µ) = (A^(n))^{−1} B^(n) e^n(µ) + (A^(n))^{−1} (g(u^n) − g(û^n)) + (A^(n))^{−1} (g(û^n) − I_M[g(û^n)]) + (A^(n))^{−1} r^{n+1}(µ).     (21)

Applying the Lipschitz condition of g, we have ||g(u^n) − g(û^n)|| ≤ L_g ||e^n(µ)||. Then, by the triangle inequality and the property of the matrix norm, we have

  ||e^{n+1}(µ)|| ≤ ||(A^(n))^{−1}|| ((||B^(n)|| + L_g) ||e^n(µ)|| + ε_EI^n(µ) + ||r^{n+1}(µ)||),     (22)

where ε_EI^n(µ) = ||g(û^n) − I_M[g(û^n)]|| = ||Σ_{m=M+1}^{M+M′} W_m β_m^n||. Resolving the recursion (22) with the initial error ||e^0(µ)|| = 0 yields the error bound in (18).

To get the error bound in (19), we observe from equation (21) that the error bound in (22) is unnecessarily enlarged. A sharper bound for ||e^{n+1}(µ)|| is of the following form:

  ||e^{n+1}(µ)|| ≤ (||(A^(n))^{−1} B^(n)|| + L_g ||(A^(n))^{−1}||) ||e^n(µ)|| + ||(A^(n))^{−1}|| ε_EI^n(µ) + ||(A^(n))^{−1} r^{n+1}(µ)||,     (23)

since the following two inequalities hold: ||(A^(n))^{−1} B^(n)|| ≤ ||(A^(n))^{−1}|| ||B^(n)|| and ||(A^(n))^{−1} r^{n+1}|| ≤ ||(A^(n))^{−1}|| ||r^{n+1}||. Resolving the recursion (23) with the initial error ||e^0(µ)|| = 0 yields the proposed error bound in (19).
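All ingredients of the bounds (18)-(19) are plain vector and matrix norms. In particular, in the 2-norm, ||A^{-1}|| can be evaluated without inverting A, since ||A^{-1}||_2 = 1/σ_min(A) (cf. Remark 7.5 below). A small sketch with a diagonal test matrix:

```python
import numpy as np

def inv_norm2(A):
    """||A^{-1}||_2 = 1 / sigma_min(A), computed without forming A^{-1}."""
    return 1.0 / np.linalg.svd(A, compute_uv=False).min()

A = np.diag([4.0, 2.0, 0.5])   # sigma_min = 0.5, so ||A^{-1}||_2 = 2
```

For the constant-matrix case of Remark 7.2 this is a one-off offline computation.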

Remark 7.2. In many cases, the operators L_I(t^n) and L_E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^(n), B^(n) in (15) are constant matrices; see, e.g., (12). In such a case, the error bounds become much simpler, see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when G^(j) = ||(A^(j))^{−1}|| (||B^(j)|| + L_g) > 1 in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if G_F^(j) = ||(A^(j))^{−1} B^(j)|| + L_g ||(A^(j))^{−1}|| ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(µ) by using (17). Note that all the parameter-independent terms in (17), i.e., the matrix products such as V^T (B^(n))^T B^(n) V and W^T W, can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. The same holds for the computation of ||A^{−1} r^n(µ)|| for the error bound in (19). Consequently, the evaluation of the error bound is cheap, due to its independence of 𝒩. In addition, as shown in [11], a small M′ already gives good results; in practice, we use M′ = 1 in the later simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix H is its spectral norm. Therefore,

  ||H^{−1}|| = σ_max(H^{−1}) = 1/σ_min(H).

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the output error estimate below.

some outputs In such cases it is desired to estimate the output error to construct aROM in a goal-oriented fashion Based on the error estimation for the field variableabove we have the output error estimate below

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(µ)) can be expressed in the form

  y(u^n(µ)) = P u^n,     (24)

where P ∈ R^{N_O×𝒩} is a constant matrix. Then the output error e_O^n(µ) = P u^n − P û^n satisfies

  ||e_O^{n+1}(µ)|| ≤ η̄_{N,M}^{n+1} := G_O^(n) η_{N,M}^n + ||P (A^(n))^{−1}|| ε_EI^n(µ) + ||P|| ||(A^(n))^{−1} r^{n+1}(µ)||,     (25)

where G_O^(n) = ||P (A^(n))^{−1} B^(n)|| + L_g ||P (A^(n))^{−1}||, n = 0, ..., K − 1.

Proof. Multiplying both sides of the error equation (21) from the left by P, we get

  P e^{n+1}(µ) = P ((A^(n))^{−1} B^(n) e^n(µ) + (A^(n))^{−1} (g(u^n) − g(û^n)) + (A^(n))^{−1} (g(û^n) − I_M[g(û^n)]) + (A^(n))^{−1} r^{n+1}(µ)).

Applying the Lipschitz condition of g and using the triangle inequality, as well as the property of the matrix norm, we have

  ||e_O^{n+1}(µ)|| = ||P e^{n+1}(µ)|| ≤ G_O^(n) ||e^n(µ)|| + ||P (A^(n))^{−1}|| ε_EI^n(µ) + ||P|| ||(A^(n))^{−1} r^{n+1}(µ)||.     (26)

Replacing ||e^n(µ)|| in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

  ||e_O^{n+1}(µ)|| = ||P e^{n+1}(µ)|| ≤ ||P|| ||e^{n+1}(µ)||
    ≤ ||P|| (G_F^(n) ||e^n(µ)|| + ||(A^(n))^{−1}|| ε_EI^n(µ) + ||(A^(n))^{−1} r^{n+1}(µ)||).     (27)

The last inequality holds due to the inequality (23). It is obvious that the bound for ||e_O^{n+1}(µ)|| in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

∥∥ in (26) is sharper than that in (27) As a result the final output errorbound in (25) is sharper than the trivial output error bound derived in (27)

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution of each equation might be quite different; therefore, it is desirable to generate different reduced bases for each field variable, rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12), by following the error estimation in Section 7.1. Take the error bound for the field variable c_z as an example. Recalling the discrete scheme for c_z (see (12)),

  A c_z^{n+1} = B c_z^n + d_z^n − ((1 − ε)/ε) Δt h_z^n,     (28)

the residual caused by the approximate solution ĉ_z^n in (13) is

  r_{c_z}^{n+1}(µ) = B ĉ_z^n + d_z^n − ((1 − ε)/ε) Δt I_M[h_z(ĉ_z^n)] − A ĉ_z^{n+1}.     (29)

Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d_z^n in (28) comes from the Neumann boundary condition, and it does not depend on the solution c_z^n. Instead of requiring a Lipschitz continuity condition for h_z as a function of c_a^n, c_b^n, and q_z^n, we assume that there exists a positive constant L_h such that

  ||h_z(c_a^n, c_b^n, q_z^n) − h_z(ĉ_a^n, ĉ_b^n, q̂_z^n)|| ≤ L_h ||c_z^n − ĉ_z^n||,  n = 0, ..., K.     (30)

Assuming that the initial projection error vanishes, e_{c_z}^0(µ) = 0, we have a similar estimation for the approximation error e_{c_z}^n(µ) = c_z^n − ĉ_z^n (n = 1, ..., K):

  ||e_{c_z}^n(µ)|| ≤ Σ_{k=0}^{n−1} ||A^{−1}||^{n−k} C^{n−1−k} (τ ε_EI^k(µ) + ||r_{c_z}^{k+1}(µ)||),     (31)

where C = ||B|| + τ L_h and τ = ((1 − ε)/ε) Δt. More tightly,

  ||e_{c_z}^n(µ)|| ≤ η_{N,M,c_z}^n(µ) := Σ_{k=0}^{n−1} (G_{F,c})^{n−1−k} (τ ||A^{−1}|| ε_EI^k(µ) + ||A^{−1} r_{c_z}^{k+1}(µ)||),     (32)

where G_{F,c} = ||A^{−1} B|| + τ L_h ||A^{−1}||.

Analogously, the error bound for the output of interest, e_{c_z,O}^n(µ) = P c_z^n − P ĉ_z^n, can be obtained based on the error bound for the field variable. Similar to (25), we have

  ||e_{c_z,O}^{n+1}(µ)|| ≤ η̄_{N,M,c_z}^{n+1}(µ) := G_{O,c} η_{N,M,c_z}^n(µ) + τ ||P A^{−1}|| ε_EI^n(µ) + ||P|| ||A^{−1} r_{c_z}^{n+1}(µ)||,     (33)

where G_{O,c} = ||P A^{−1} B|| + τ L_h ||P A^{−1}||. Note that P = (0, ..., 0, 1) ∈ R^{1×𝒩} in this model, which means the norm of the output error e_{c_z,O}^{n+1}(µ) is the absolute value of the last entry of the error vector e_{c_z}^{n+1}(µ).

Remark 7.8. The error estimates for q_a and q_b in (12) can also be obtained similarly, by following the derivation in Section 7.1. As the output of interest for the system (12) only depends on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the vector of field variables U = (c_a, c_b, q_a, q_b)^T, by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the output error bound in (33) involves the error bound η_{N,M,c_z}^n(µ) for the field variable c_z; the same would hold for the corresponding bound (denoted by η_{N,M,U}^n(µ)) for the vector U, if the output error estimation were derived by considering all the field variables together. Obviously, the error bound η_{N,M,U}^n(µ) is much rougher than the bound η_{N,M,c_z}^n(µ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be chosen conservatively large, and the weight τ L_h is still small, because the time step Δt is typically very small.

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates with time. Since $\eta_{N,M}^n(\mu)$ (or $\eta_{N,M,c_z}^n(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that the error estimate, e.g. in (18), may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process should continue.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: P_train, µ^0, tol_RB (< 1), tol_decay
Output: RB V = [V_1, ..., V_N]
1: Implement Step 1 in Algorithm 4.
2: while the error η_N(µ_max) > tol_RB do
3:   Implement Steps 3−6 in Algorithm 4.
4:   Compute the decay rate of the error bound, d_η = (η_{N−1}(µ_max^old) − η_N(µ_max)) / η_{N−1}(µ_max^old).
5:   if d_η < tol_decay then
6:     Compute the true output error at the selected parameter µ_max: e_N(µ_max).
7:     if e_N(µ_max) < tol_RB then
8:       Stop.
9:     end if
10:  end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
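A minimal sketch of the early-stop logic of Algorithm 5 is given below. The callbacks `error_bound`, `true_error`, and `extend_basis` are hypothetical stand-ins for the actual POD-Greedy machinery, and the toy run at the end uses synthetic numbers.

```python
# Sketch of Algorithm 5: stop the greedy RB extension early when the
# error bound stagnates but the true output error is already small.
def greedy_with_early_stop(error_bound, true_error, extend_basis,
                           tol_RB=1e-6, tol_decay=1e-3, max_N=100):
    eta_old = None
    N = 0
    while N < max_N:
        eta, mu_max = error_bound(N)        # max bound over training set
        if eta <= tol_RB:
            break                           # bound certifies the accuracy
        if eta_old is not None:
            d_eta = (eta_old - eta) / eta_old   # decay rate (Step 4)
            if d_eta < tol_decay and true_error(mu_max, N) < tol_RB:
                break                       # bound stagnates, true error OK
        eta_old = eta
        extend_basis(mu_max)
        N += 1
    return N

# toy run: the bound is stuck at 1e-2 while the true error is tiny,
# so the extension stops after a single enrichment step
bounds = [1e-2, 1e-2, 1e-2]
N_stop = greedy_with_early_stop(lambda n: (bounds[n], 0.5),
                                lambda mu, n: 1e-8,
                                lambda mu: None)
```

Without the stagnation check, the loop above would keep enriching the basis until `max_N`, which is exactly the behavior the early stop avoids.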

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $Pr(\mu) = s(\mu)Q / t_{cyc}$ while respecting the requirement on the recovery yield $Rec(\mu) = s(\mu) / \big(t_{in}(c_a^f + c_b^f)\big)$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\,dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\begin{aligned} &\min_{\mu \in \mathcal P} \; -Pr(\mu) \\ &\text{s.t.} \quad Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal P, \\ &\qquad c_z(\mu),\, q_z(\mu) \text{ are the solutions to the system } (3)-(5),\ z = a, b. \end{aligned} \qquad (34)$$

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integral in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal P$, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal P = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $\mathcal N$ in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model

Column dimensions [cm]                              2.6 × 10.5
Column porosity ε [-]                               0.4
Peclet number Pe [-]                                2000
Mass-transfer coefficients κ_z, z = a, b [1/s]      0.1
Feed concentrations c_z^f, z = a, b [g/l]           2.9


Table 2: Coefficients of the adsorption isotherm equation

H_{a1} [-]     2.69       H_{b1} [-]     3.73
H_{a2} [-]     0.1        H_{b2} [-]     0.3
K_{a1} [l/g]   0.0336     K_{b1} [l/g]   0.0446
K_{a2} [l/g]   1.0        K_{b2} [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, and then show the output error estimation for the generation of the RB. Finally, we present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz, RAM 4.00 GB, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(µ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of µ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded due to the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^−4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^−7) with different thresholds. M′ = 1 is the number of the basis for error estimation.

         tol_ASS        Res(ξ^a_{M+M′}), Res(ξ^b_{M+M′})    M (W_a, W_b)   Runtime [h]
no ASS   –              9.2 × 10^−8, 8.5 × 10^−8            146, 152       6.25 (–)
ASS      1.0 × 10^−4    9.6 × 10^−8, 8.1 × 10^−8            147, 152       0.605 (−90.3%)
ASS      1.0 × 10^−3    8.7 × 10^−8, 9.9 × 10^−8            147, 152       0.362 (−94.2%)
ASS      1.0 × 10^−2    9.4 × 10^−8, 6.2 × 10^−8            144, 150       0.270 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10^−4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^−7, tol_RB = 1.0 × 10^−6, tol_ASS = 1.0 × 10^−4. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS

Algorithms         Runtime [h]
POD-Greedy         16.22 ¹
ASS-POD-Greedy     7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter µ, the values of Pr(µ) and Rec(µ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(µ) = P c^n_z(µ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator η_N(µ) in the greedy algorithm (e.g. Algorithms 4, 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta_{N,M,c_z}^{n+1}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal P$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal P_{train}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^K \eta_{N,M,c_z}^n(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e_N^{\max} = \max_{\mu \in \mathcal P_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^K \|c_{z,O}^n(\mu) - \hat c_{z,O}^n(\mu)\|$, and $\hat c_{z,O}^n(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to get an early stop.

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circles shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $\mathcal P_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = $\max_{\mu \in \mathcal P_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter µ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (maximal error over $\mathcal P_{train}$, log scale, versus the size $N$ of the RB; curves: field variable error bound, output error bound, true output error). The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e_N^{\max}$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal P_{train}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^K \eta_{N,M,c_z}^n(\mu)$.

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error (maximal error over $\mathcal P_{train}$, log scale, versus the size $N$ of the RB; curves: output error bound, true output error).

Figure 4: Parameter selection in the generation of the RB (feed flow rate Q versus injection period t_in). The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^−7, tol_ASS = 1.0 × 10^−4, tol_RB = 1.0 × 10^−6.

Simulations              Max error       Average runtime [s]   SpF
FOM (N = 1500)           –               312.13                (–)
ROM (POD-Greedy)         3.79 × 10^−7    6.3                   50
ROM (ASS-POD-Greedy)     4.58 × 10^−7    6.3                   50

Figure 5: Concentrations at the outlet of the column (dimensionless concentration versus dimensionless time) computed using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, 1.3487); curves: c_a and c_b for both the FOM and the ROM.

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned} &\min_{\mu \in \mathcal P} \; -\hat{Pr}(\mu) \\ &\text{s.t.} \quad Rec_{\min} - \hat{Rec}(\mu) \le 0, \\ &\qquad \hat c_z^n(\mu),\, \hat q_z^n(\mu) \text{ are the RB approximations from the ROM } (14),\ z = a, b. \end{aligned}$$

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, compared to the FOM-based optimization, the runtime is significantly reduced: the speed-up factor (SpF) is 54.
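The stopping rule $\|\mu^{k+1} - \mu^k\| < tol_{opt}$ can be illustrated independently of the particular optimizer. Below, a deliberately simple compass search on a toy quadratic objective stands in for NLopt's GN_DIRECT_L (the objective and step-size schedule are not from the chromatography model); the loop terminates once successive iterates move less than `tol_opt` at the finest step size.

```python
# Sketch: stop an iterative optimizer when ||x_{k+1} - x_k|| < tol_opt.
# compass_search is a toy stand-in for a real gradient-free optimizer.
def compass_search(f, x0, step=0.5, tol_opt=1e-4, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                if f(trial) < f(x):   # accept any improving coordinate step
                    x = trial
        moved = sum((a - b) ** 2 for a, b in zip(x, x_old)) ** 0.5
        if moved < tol_opt:
            if step < tol_opt:        # converged at the finest resolution
                break
            step *= 0.5               # refine the step size and continue
    return x

# toy objective with minimizer (1, -2)
x_opt = compass_search(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                       [0.0, 0.0])
```

In the ROM-based setting, each objective evaluation `f(trial)` would trigger one reduced simulation instead of a full-order solve.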

Note that if the offline cost, i.e. the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is that of constructing and using a surrogate ROM; the other is that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM

Simulations      Objective (Pr)   Opt. solution (µ)     N_it ¹   Runtime [h]   SpF
FOM-based Opt.   0.020264         (0.07964, 1.05445)    202      33.88         –
ROM-based Opt.   0.020266         (0.07964, 1.05445)    202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique has been proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] ——, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455–462.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.




to treat the nonaffinity in the linear parameterized system. Applications to multiscale problems can be found in the recent work [32]. However, all these applications focus on finite element (FE) based RBMs for linear time-independent PDEs.

In this paper we consider an optimization problem with PDE constraints, where the PDEs are nonlinear, time-dependent, and have non-affine parameter dependency. Such problems arise, for example, from batch chromatography in chemical engineering. To capture the dynamics precisely, a large number of DOFs must be employed, which results in a large-scale system. Solving such a complex system during optimization is time-consuming. Constructing a reduced model for a parameterized nonlinear, time-dependent, nonaffine system poses additional challenges for all kinds of MOR methods, granting no exemption to RBMs. Furthermore, a careful choice of the discretization scheme should be made for nonlinear problems, especially for convection-dominated problems. The finite volume (FV) discretization is used to construct the full order model (FOM), by which the conservation property of the system is well preserved. The FV-based RBM was first introduced for linear evolution equations in [24] and was extended to nonlinear problems afterwards [11, 25], where the nonlinear operator resulting from the discretization is treated with empirical operator interpolation for an efficient offline-online computation of the ROM.

Without doubt, an efficient, rigorous and sharp a posteriori error estimation is crucial for RBMs, because it enables the automatic generation of the RB and, in turn, of a reliable ROM with a desired accuracy with the help of a greedy algorithm. Rapid and reliable evaluation of the input-output relationships for the associated PDEs is very important for efficiently solving the optimization problem, where an output response rather than the field variable (the solution to the PDEs) is of interest. When a ROM is employed for such an evaluation, the error of the output of interest, rather than that of the field variable, should be estimated and used for the generation of the ROM. We propose to use the output error for the generation of the RB. There are some results on the output error bound for FE-based RBMs for elliptic or parabolic problems [35, 36]. However, there is no study so far on the output error bound for FV-based RBMs for nonlinear evolution equations. In this work, we present a residual-based error estimation for the output of the ROM derived by a FV-based RBM, to obtain a goal-oriented ROM.

With the help of an error estimate, the construction of the ROM can be managed automatically. In some cases, however, the error bound may not work as well as one expects. For example, as the basis is being extended, the error bound may decrease slowly or even stagnate after some steps, while the true error is already very small. As a result, the basis extension is not stopped, because the error bound does not go below the prespecified tolerance. This means that the basis will be unnecessarily extended if there is no reasonable remedy. Certainly, simply using the true error as the indicator is not a wise choice, because it is typically time-consuming to compute the true error for all sample points in the training set. To make full use of the available error estimate, we propose an early-stop criterion for the basis extension by checking the true error at the parameter selected by the greedy algorithm according to the output error bound. In this way, the basis extension can be stopped in time and the size of the resulting ROM can be kept reasonably small.

Additionally, the efficiency of the RBM is ensured by the strategy of offline-online decomposition. During the offline stage, all full-dimension-dependent and parameter-independent terms can be precomputed, and a parameterized reduced model is obtained a priori; during the process of optimization, a reliable output response can be obtained rapidly by the online simulation based on the ROM at the parameter determined by the optimization procedure. In this way, the ROM-based optimization can be solved more efficiently compared to the FOM-based one. Note that the offline time is usually not taken into consideration, although the offline computation is typically time-consuming, especially for time-dependent PDEs.

To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) is of interest, the solutions at the time instances in the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g. in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computation will be very expensive because the number of snapshots is too large; if one just trivially selects part of the solutions, i.e. solutions at parts of the time instances (e.g. every two or several time steps), the final RB approximation might be of low accuracy, because important information may have been lost due to such a naive snapshot selection. We propose to select the snapshots adaptively according to the variation of the solution in the evolution process. The idea is to make full use of the behavior of the trajectory and discard the redundant (linearly dependent) information adaptively. It enables the generation of the RB with a small number of snapshots including only "useful" information. In addition, it is easily combined with other algorithms for the generation of the RB, e.g. the POD-Greedy algorithm [24].

This paper is organized as follows. We state the underlying PDE-constrained optimization problem in detail in Section 2. Reviews of the RBM and EIM are given in Section 3 and Section 4, respectively. The adaptive technique of snapshot selection and its implementation are addressed in detail in Section 5. Section 6 shows the RB scheme for the batch chromatographic model, including the derivation of the FOM based on the FV discretization, the generation of the ROM, and the strategy of the offline-online decomposition as well. In Section 7, an output-oriented error bound is derived in the vector space for evolution equations for the RBM based on FV discretization. An early-stop criterion is proposed to make the construction of the ROM more efficient. Numerical examples, including optimization based on the ROM, are carried out in Section 8. Conclusions are drawn in Section 9.

2 Problem statement

In this work we consider the following PDE constrained optimization problem:

$$\begin{aligned} &\min_{\mu \in \mathcal P} \mathcal J(u(t, x; \mu); \mu) \\ &\text{s.t.} \quad \mathcal \Psi(u(t, x; \mu); \mu) \le 0, \\ &\qquad \mathcal \Phi(u(t, x; \mu); \mu) = 0, \end{aligned} \qquad (1)$$

where $\mathcal J$ is the objective function and $\mathcal \Psi$ defines the inequality constraints. The field variable $u(t, x; \mu)$ is the solution to the underlying parametrized partial differential equations $\mathcal \Phi(u(t, x; \mu); \mu) = 0$, $\mu \in \mathcal P$. Such an optimization problem arises in many applications, such as aerodynamics, fluid dynamics, and chemical processes. In practical computation, the PDEs are usually discretized, such that the optimization problem in (1) is replaced by an optimization problem in finite dimensions:

$$\begin{aligned} &\min_{\mu \in \mathcal P} J(u^{\mathcal N}(t; \mu); \mu) \\ &\text{s.t.} \quad \Psi(u^{\mathcal N}(t; \mu); \mu) \le 0, \\ &\qquad \Phi(u^{\mathcal N}(t; \mu); \mu) = 0, \end{aligned} \qquad (2)$$

where $u^{\mathcal N} = u^{\mathcal N}(t; \mu) \in \mathbb{R}^{\mathcal N}$ is the solution to the discretized system of equations $\Phi(u^{\mathcal N}(t; \mu); \mu) = 0$, and $J$, $\Psi$ and $\Phi$ are the operators in the finite dimensional vector space corresponding to $\mathcal J$, $\mathcal \Psi$ and $\mathcal \Phi$, respectively. The discretized equations are often of very large scale and complex. At each iteration of the optimization process, such a large-scale complex system of equations must be solved at least once. As a result, the whole optimization process will be time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns $u^{\mathcal N}$.

To further motivate and illustrate our methods, we consider a particular example: optimal operation of batch chromatography. Batch chromatography, as a crucial separation and purification tool, is widely employed in the food, fine chemical, and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period $t_{in}$, a mixture consisting of $a$ and $b$ is injected at the inlet of the column, which is packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column, and thus separate from each other when exiting the column. At the column outlet, component $a$ is collected between the cutting points $t_3$ and $t_4$, and component $b$ is collected between $t_1$ and $t_2$. Here, the positions of $t_1$ and $t_4$ are determined by a minimum concentration threshold that the detector can resolve, and the positions of $t_2$ and $t_3$ are determined by the purity specifications ($Pu_a$ and $Pu_b$) imposed on the products. After a cycle period $t_{cyc} = t_4 - t_1$, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with a limited mass-transfer rate, characterized by a linear driving force approximation. The governing equations in dimensionless form are as follows:

$$\frac{\partial c_z}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\frac{\partial q_z}{\partial t} = -\frac{\partial c_z}{\partial x} + \frac{1}{Pe}\frac{\partial^2 c_z}{\partial x^2}, \quad 0 < x < 1,$$
$$\frac{\partial q_z}{\partial t} = \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\left(q_z^{Eq} - q_z\right), \quad 0 \le x \le 1, \qquad (3)$$

where $c_z$, $q_z$ are the concentrations of component $z$ ($z = a, b$) in the liquid and solid phase, respectively, $Q$ is the volumetric feed flow rate, $A_c$ the cross-sectional area of the column with length $L$, $\varepsilon$ the column porosity, $\kappa_z$ the mass-transfer coefficient, and $Pe$ the Peclet number. The adsorption equilibrium $q_z^{Eq}$ is described by isotherm equations of bi-Langmuir type:

$$q_z^{Eq} = f_z(c_a, c_b) = \frac{H_{z1}\, c_z}{1 + K_{a1}\, c_{f,a}\, c_a + K_{b1}\, c_{f,b}\, c_b} + \frac{H_{z2}\, c_z}{1 + K_{a2}\, c_{f,a}\, c_a + K_{b2}\, c_{f,b}\, c_b}, \qquad (4)$$

where $c_{f,z}$ is the feed concentration of component $z$, and $H_{zj}$ and $K_{zj}$ are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:

$$c_z(0,x) = 0, \quad q_z(0,x) = 0, \quad 0 \le x \le 1,$$
$$\frac{\partial c_z}{\partial x}\Big|_{x=0} = Pe\left(c_z(t,0) - \chi_{[0,t_{in}]}(t)\right), \qquad \frac{\partial c_z}{\partial x}\Big|_{x=1} = 0, \qquad (5)$$

where $t_{in}$ is the injection period and $\chi_{[0,t_{in}]}$ is the characteristic function

$$\chi_{[0,t_{in}]}(t) = \begin{cases} 1 & \text{if } t \in [0, t_{in}], \\ 0 & \text{otherwise.} \end{cases}$$

More details about the mathematical modeling of batch chromatography can be found in [20].

Note that the feed flow rate $Q$ and the injection period $t_{in}$ are often considered as the operating variables, denoted by $\mu = (Q, t_{in})$, which play the role of parameters in the PDEs (3)-(5). The system of PDEs is nonlinear, time-dependent, and has a non-affine parameter dependency; the nonlinearity of the system is reflected by (4). To capture the system dynamics precisely, a large number of DOFs must be introduced for the discretization of the PDEs.

The optimal operation of batch chromatography is of practical importance, since it allows to exploit the full economic potential of the process and to reduce the separation cost.

[Figure 1: Sketch of a batch chromatographic process for the separation of $a$ and $b$.]

Many efforts have been made for the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on the finely discretized FOM. Such a model with a large number of DOFs is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be repeatedly solved in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

In this work, the RBM is employed to generate a surrogate ROM of the parametrized PDEs. The resulting ROM is used for a rapid evaluation of the output response $y(u^{\mathcal N})$ of the discretized system $\Phi(u^{\mathcal N}(t;\mu),\mu) = 0$ in (2) during the optimization process. In the next section, we review the RBM and highlight some of its difficulties.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parametrized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution $u(\mu)$ to the parametrized PDEs depends smoothly on the parameter $\mu$ in the parameter domain $P$, such that for any parameter $\mu \in P$ the corresponding solution $u(\mu)$ can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the certification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain $\Omega \subset \mathbb{R}^d$ and the parameter domain $P \subset \mathbb{R}^p$:

$$\partial_t u(t,x;\mu) + L[u(t,x;\mu)] = 0, \quad t \in [0,T], \; x \in \Omega, \; \mu \in P, \qquad (6)$$

where $L[\cdot]$ is a spatial differential operator. Let $W^{\mathcal N} \subset L^2(\Omega)$ be an $\mathcal N$-dimensional discrete space in which an approximate numerical solution to equation (6) is sought, and let $0 = t^0 < t^1 < \cdots < t^K = T$ be $K+1$ time instants in the time interval $[0,T]$. Given $\mu \in P$ with suitable initial and boundary conditions, the numerical solution at time $t = t^n$, $u^n(\mu)$, can be obtained by using suitable numerical methods, e.g., the finite volume method. Assume that $u^n(\mu) \in W^{\mathcal N}$ satisfies the following form:

$$L_I(t^n)[u^{n+1}(\mu)] = L_E(t^n)[u^n(\mu)] + g(u^n(\mu);\mu), \qquad (7)$$

where $L_I(t^n)[\cdot]$, $L_E(t^n)[\cdot]$ are linear implicit and explicit operators, respectively, and $g(\cdot)$ is a nonlinear $\mu$-dependent operator. These operators are obtained from the discretization of the time derivative and the spatial differential operator $L$. For implicit FV schemes, $L_I(t^n)$ can be nonlinear, see, e.g., [11], but we only consider the linear case in this paper. By convention, $u^n(\mu)$ is considered as the "true" solution, assuming that the numerical solution is a faithful approximation of the exact (analytical) solution $u(t^n,x;\mu)$ at the time instance $t^n$.


The RBM aims to find a suitable low-dimensional subspace

$$W_N = \mathrm{span}\{V_1,\ldots,V_N\} \subset W^{\mathcal N}$$

and to solve the resulting ROM to get the RB approximation $\hat u^n(\mu)$ to the "true" solution $u^n(\mu)$. In addition to, or instead of, the field variable itself, approximations of outputs of interest can also be obtained inexpensively via $\hat y(\mu) = y(\hat u(\mu))$. More precisely, given a matrix $V = [V_1,\ldots,V_N]$ whose columns span the reduced basis, Galerkin projection is employed to generate the ROM as follows:

$$V^T L_I(t^n)[V a^{n+1}(\mu)] = V^T L_E(t^n)[V a^n(\mu)] + V^T g(V a^n(\mu)), \qquad (8)$$

where $a^n(\mu) = (a_1^n(\mu),\ldots,a_N^n(\mu))^T \in \mathbb{R}^N$ is the vector of weights in the approximation $\hat u^n(\mu) = V a^n(\mu) = \sum_{i=1}^N a_i^n(\mu) V_i$, and is the vector of unknowns of the ROM. Thanks to the linearity of the operators $L_I$ and $L_E$, the ROM (8) can be rewritten as

$$V^T L_I(t^n) V\, a^{n+1}(\mu) = V^T L_E(t^n) V\, a^n(\mu) + V^T g(V a^n(\mu)), \qquad (9)$$

where $V^T L_I(t^n) V$ and $V^T L_E(t^n) V$ can be precomputed and stored for the construction of the ROM. However, the last term of (9), $V^T g(V a^n(\mu))$, cannot be treated analogously, because of the nonlinearity of $g$. This is tackled by the technique of empirical interpolation, to be addressed in the next section.

How to generate the RB $V$ is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: $P_{train}$, $\mu^0$, $tol_{RB}\,(< 1)$
Output: RB $V = [V_1,\ldots,V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{max} = \mu^0$, $\eta_N(\mu_{max}) = 1$
2: while $\eta_N(\mu_{max}) > tol_{RB}$ do
3:   Compute the trajectory $S_{max} = \{u^n(\mu_{max})\}_{n=0}^K$
4:   Enrich the RB: $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar U = [\bar u^0,\ldots,\bar u^K]$ with $\bar u^n = u^n(\mu_{max}) - \Pi_{W_N}[u^n(\mu_{max})]$, $n = 0,\ldots,K$; $\Pi_{W_N}[u]$ is the projection of $u$ onto the current space $W_N = \mathrm{span}\{V_1,\ldots,V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{max} = \arg\max_{\mu\in P_{train}} \eta_N(\mu)$
7: end while

Remark 3.1 In Algorithm 1, $\eta_N(\mu_{max})$ is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution $u^n(\mu)$, obtained by solving the full, large system, an error bound is usually used instead; this is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.
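The POD step of the greedy loop reduces, in practice, to a thin SVD of the matrix of projected snapshot residuals. A minimal NumPy sketch of Steps 3-4 of Algorithm 1 (the random matrix `S` is only an illustrative stand-in for a trajectory $\{u^n(\mu_{max})\}$):

```python
import numpy as np

def first_pod_mode(U_bar):
    # First left singular vector = dominant POD mode of U_bar.
    Phi, _, _ = np.linalg.svd(U_bar, full_matrices=False)
    return Phi[:, 0]

def pod_greedy_enrich(V, S):
    # Step 4: project the trajectory onto the orthogonal complement of
    # span(V), then append the first POD mode of the residual matrix.
    U_bar = S - V @ (V.T @ S) if V.shape[1] > 0 else S
    V_new = np.hstack([V, first_pod_mode(U_bar)[:, None]])
    Q, _ = np.linalg.qr(V_new)     # keep the basis orthonormal
    return Q

rng = np.random.default_rng(0)
S = rng.standard_normal((50, 8))   # stand-in trajectory with 8 snapshots
V = np.zeros((50, 0))
for _ in range(3):                 # three greedy enrichment steps
    V = pod_greedy_enrich(V, S)
```

In the full algorithm, the trajectory would of course be recomputed for the new $\mu_{max}$ at each pass; the sketch keeps it fixed only to stay short.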


Remark 3.2 As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: the true error is checked at the parameter determined by the greedy algorithm, which allows an early stop of the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

The theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps $K$ needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or non-affine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or non-affine part, e.g., $V^T g(V a^n(\mu))$ in (8), requires computations in the original full space. In such a case, the EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable as well. The idea of the EIM, introduced in [2], is briefly presented as follows.

Given a non-affine $\mu$-dependent function $g(x;\mu)$ with sufficient regularity, $(x,\mu) \in \Omega\times P \subset \mathbb{R}^d\times\mathbb{R}^p$, the idea of the EIM is to approximate $g(x;\mu)$ by a linear combination of a precomputed $\mu$-independent basis $W = [W_1,\ldots,W_M]$, termed the collateral reduced basis (CRB), with corresponding $\mu$-dependent coefficients $\sigma(\mu) = [\sigma_1(\mu),\ldots,\sigma_M(\mu)]^T$, i.e.,

$$\tilde g(x;\mu) = \sum_{i=1}^M W_i(x)\,\sigma_i(\mu).$$

Here the coefficients $\sigma_i$ are parameter-dependent and determined by solving the linear system

$$g(x_j;\mu) = \sum_{i=1}^M W_i(x_j)\,\sigma_i(\mu), \qquad j = 1,\ldots,M, \qquad (10)$$

where $W_i(x_j)$ refers to the $j$-th entry of the vector $W_i$; the analogous notation is also used for $\xi_m(x_m)$ in (11) in Algorithm 2. Note that the approximation $\tilde g(x;\mu)$ interpolates the exact value $g(x;\mu)$ at the EI points $T_M = \{x_1,\ldots,x_M\}$. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Algorithm 2 Generation of the CRB and EI points
Input: $L^{crb}_{train} = \{g(x;\mu) \mid \mu \in P^{crb}_{train}\}$, $tol_{CRB}\,(< 1)$
Output: CRB $W = [W_1,\ldots,W_M]$ and EI points $T_M = \{x_1,\ldots,x_M\}$
1: Initialization: $m = 1$, $W^0_{EI} = [\,]$, $\|\xi_0\| = 1$
2: while $\|\xi_{m-1}\| > tol_{CRB}$ do
3:   For each $g \in L^{crb}_{train}$, compute the "best" approximation $\bar g = \sum_{i=1}^{m-1} \sigma_i W_i$ in the current space $W^{m-1}_{EI} = \mathrm{span}\{W_1,\ldots,W_{m-1}\}$, where the $\sigma_i$ can be obtained by solving the linear system (10)
4:   Define $g_m = \arg\max_{g\in L^{crb}_{train}} \|g - \bar g\|$ and the error $\xi_m = g_m - \bar g_m$
5:   if $\|\xi_m\| \le tol_{CRB}$ then
6:     Stop and set $M = m - 1$
7:   else
8:     Determine the next EI point and basis vector:
       $$x_m = \arg\sup_{x\in\Omega} |\xi_m(x)|, \qquad W_m = \frac{\xi_m}{\xi_m(x_m)}. \qquad (11)$$
9:   end if
10:  $m = m + 1$
11: end while

Remark 4.1 Algorithm 2 is used for a fast evaluation of a non-affine function of the coordinate $x$ and the parameter $\mu$ by using interpolation. In [11, 25], the idea was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable $u(t,x;\mu)$, e.g., $g(u(t,x;\mu),x;\mu)$. The evaluation of $g(x_j;\mu)$ in (10) is then replaced by $g(u(t,x_j;\mu),x_j;\mu)$. In this paper we use empirical operator interpolation, where the non-affine operator appears as $g(u(t,x;\mu);\mu)$. The details are addressed in Section 6.2.
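The online part of (empirical operator) interpolation is just the $M\times M$ solve in (10). A minimal NumPy sketch, with a toy CRB and assumed EI point indices (in Algorithm 2 both would come from the greedy loop):

```python
import numpy as np

def eim_coefficients(W, pts, g_at_pts):
    """Solve the M x M system (10): sum_i sigma_i * W_i(x_j) = g(x_j; mu),
    where pts holds the indices of the EI points x_1, ..., x_M."""
    return np.linalg.solve(W[pts, :], g_at_pts)

def eim_interpolant(W, pts, g):
    # Interpolate the full vector g from its values at the EI points only.
    sigma = eim_coefficients(W, pts, g[pts])
    return W @ sigma

# Toy setup: a CRB spanning two "modes" on a grid of 100 points.
x = np.linspace(0.0, 1.0, 100)
W = np.column_stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
pts = np.array([25, 75])                  # assumed EI point indices

g = 0.3 * np.sin(np.pi * x) - 1.2 * np.sin(2 * np.pi * x)
g_tilde = eim_interpolant(W, pts, g)      # exact here, since g lies in span(W)
```

Because the toy function lies exactly in the span of the CRB, the interpolant reproduces it everywhere; for a general $g$ the match is guaranteed only at the EI points.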

5 Adaptive snapshot selection

In this section, we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper, we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or the CRB, a training set $P_{train}$ or $P^{crb}_{train}$ of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of the generation of the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by using model-constrained optimization [9]. In these papers, the authors choose the sample points adaptively in order to obtain an "optimal" training set. An "optimal" training set means that the original manifold $M = \{u(\mu) \mid \mu\in P\}$ can be well represented by the submanifold $\tilde M = \{u(\mu) \mid \mu\in P_{train}\}$ induced by the sample set $P_{train}$, with the size of $P_{train}$ as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems may arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of $\bar U$, due to the large size of the matrix $\bar U$. This is also true for the generation of the CRB, if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps) as snapshots. However, the results might be of low accuracy, because important information may have been lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution $\{u^n(\mu)\}_{n=0}^K$. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors $v_1$ and $v_2$ can be reflected by the angle $\theta$ between them. More precisely, they are linearly dependent if and only if $|\cos(\theta)| = 1$ ($\theta = 0$ or $\pi$). In other words, the value $1 - |\cos(\theta)|$ is large if the correlation between the two vectors is weak. This implies that the quantity $1 - \frac{|\langle v_1, v_2\rangle|}{\|v_1\|\,\|v_2\|}$ (note $\cos(\theta) = \frac{\langle v_1, v_2\rangle}{\|v_1\|\,\|v_2\|}$) is a good indicator for the linear dependency of $v_1$ and $v_2$.

1 K) can be obtained eg by using the evolution scheme (7) Define an indicatorInd (un(micro) um(micro)) = 1 minus |〈u

n(micro) um(micro)〉|un(micro)um(micro) which is used to measure the linear depen-

dency of the two vectors When Ind (un(micro) um(micro)) is large the correlation betweenun(micro) and um(micro) is weak Algorithm 3 shows the realization of the ASS un(micro) is takenas a new snapshot only when un(micro) and unj (micro) are ldquosufficientlyrdquo linearly independentby checking whether Ind (un(micro) unj (micro)) is large enough or not Here unj (micro) is thelast selected snapshotRemark 51 The inner product 〈middot middot〉 WN timesWN rarr R used above is properly definedaccording to the solution spaceWN and the norm middot is induced by the inner productcorrespondingly Therefore the ASS technique is applicable to any snapshot basedMOR method for time-dependent problems and it is independent of the discretizationmethodRemark 52 For the linear dependency it is also possible to check the angle betweenthe tested vector un(micro) and the subspace spanned by the selected snapshots SA Moreredundant information can be discarded but at higher cost However the data will becompressed further eg by using the POD-Greedy algorithm we simply choose theeconomical case shown in Algorithm 3 Note that the tolerance tolASS is prespecifiedand problem-dependent and the value at O(10minus4) gives good results for the numerical

10

Algorithm 3 Adaptive snapshot selection (ASS)Input Initial vector u0(micro) tolASSOutput Selected snapshot matrix SA = [un1(micro) un2(micro) un`(micro)]

1 Initialization j = 1 nj = 0 SA = [unj (micro)]2 for n = 1 K do3 Compute the vector un(micro)4 if Ind (un(micro) unj (micro)) gt tolASS then5 j = j + 16 nj = n7 SA = [SA unj (micro)]8 end if9 end for

examples studied in Section 8 based on our observationThe ASS technique can be easily combined with the aforementioned algorithms for

the generation of the RB and CRB For example Algorithm 4 shows the combinationwith the POD-Greedy algorithm (Algorithm 1) There is only one additional step incomparison with the original Algorithm 1
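Algorithm 3 can be sketched in a few lines. The slowly rotating unit vector below is an illustrative stand-in for a trajectory $\{u^n(\mu)\}$ in which consecutive states are nearly parallel, so most of them are redundant:

```python
import numpy as np

def ass_select(trajectory, tol_ass=1e-4):
    """Adaptive snapshot selection (Algorithm 3): keep u^n only if it is
    'sufficiently' linearly independent of the last kept snapshot."""
    def ind(u, v):
        # Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||); large => weak correlation.
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        if nu == 0.0 or nv == 0.0:
            return 1.0             # treat a zero vector as uncorrelated
        return 1.0 - abs(u @ v) / (nu * nv)

    kept = [0]                     # n_1 = 0: always keep the initial vector
    for n in range(1, len(trajectory)):
        if ind(trajectory[n], trajectory[kept[-1]]) > tol_ass:
            kept.append(n)
    return kept

# Stand-in trajectory: a unit vector rotating through 90 degrees in K steps.
K = 200
angles = np.linspace(0.0, np.pi / 2, K + 1)
traj = [np.array([np.cos(a), np.sin(a)]) for a in angles]
selected = ass_select(traj, tol_ass=1e-3)  # keeps only a fraction of K+1 steps
```

With the tolerance above, the selection keeps roughly one state per few degrees of rotation and discards the nearly parallel intermediate ones, which is exactly the compression effect exploited before the POD step.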

Algorithm 4 RB generation using ASS-POD-Greedy
Input: $P_{train}$, $\mu^0$, $tol_{RB}\,(< 1)$
Output: RB $V = [V_1,\ldots,V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{max} = \mu^0$, $\eta_N(\mu_{max}) = 1$
2: while $\eta_N(\mu_{max}) > tol_{RB}$ do
3:   Compute the trajectory $S_{max} = \{u^n(\mu_{max})\}_{n=0}^K$ and adaptively select snapshots using Algorithm 3, yielding $S_A^{max} = \{u^{n_1}(\mu_{max}),\ldots,u^{n_\ell}(\mu_{max})\}$
4:   Enrich the RB: $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar U_A = [\bar u^{n_1},\ldots,\bar u^{n_\ell}]$ with $\bar u^{n_s} = u^{n_s}(\mu_{max}) - \Pi_{W_N}[u^{n_s}(\mu_{max})]$, $s = 1,\ldots,\ell$, $\ell \ll K$; $\Pi_{W_N}[u]$ is the projection of $u$ onto the current space $W_N = \mathrm{span}\{V_1,\ldots,V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{max} = \arg\max_{\mu\in P_{train}} \eta_N(\mu)$
7: end while

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)-(5), and the efficient generation of the ROM.

61 Full-order model based on FV discretizationAs is mentioned in Section 1 we use the FV discretization for the batch chromato-graphic model (3)minus(5) More specifically we use the Lax-Friedrichs flux [28] for theconvection contribution central difference approximation for the diffusion terms andthe Crank-Nicolson scheme for the time discretization The full discrete FV formula-tion for the system (3)minus(4) can be written as followsAcn+1

z = Bcnz + dnz minus1minus εε

∆thnz

qn+1z = qnz + ∆thnz

(12)

where cnz = cnz (micro) = (czn1 cznN )T qnz = qnz (micro) = (qzn1 qznN )T isin RN z = a bindicates the solutions of the field variables cz and qz at time instance t = tn (n =0 K) A and B are tridiagonal constant matrices dnz and hnz are parameter- andtime-dependent

dnz = dn0 e1 hnz = (hzn1 hznN )T

with dn0 = ∆xPe(λ2 + ν

)χ[0tin](tn) λ = ∆t

∆x ν = ∆tPe∆x2 e1 = (1 0 0)T isin RN

and hznj = hz(ca

nj cb

nj qz

nj ) = L

Q(εAc)κz(fz(ca

nj cb

nj )minus qznj

) j = 1 N
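One time step of (12) is a single linear solve with the constant matrix $A$ per component. The sketch below uses stand-in Crank-Nicolson-type tridiagonal matrices and a placeholder mass-transfer term `h`; the true entries come from the Lax-Friedrichs/central-difference assembly and the bi-Langmuir isotherm (4), which are omitted here:

```python
import numpy as np

def tridiag(lo, di, up, n):
    # Dense tridiagonal matrix with constant bands (sufficient for a sketch).
    return (np.diag(np.full(n - 1, lo), -1)
            + np.diag(np.full(n, di))
            + np.diag(np.full(n - 1, up), 1))

Nx, dt, eps = 200, 1e-3, 0.4
# Stand-in Crank-Nicolson pair A = I + (dt/2) L, B = I - (dt/2) L for a
# diffusion-like operator L; the actual FV matrices of (12) differ in detail.
L = tridiag(-1.0, 2.0, -1.0, Nx)
A = np.eye(Nx) + 0.125 * L
B = np.eye(Nx) - 0.125 * L

def h(c, q):
    # Placeholder mass-transfer term kappa*(f(c) - q); the bi-Langmuir
    # isotherm f of (4) is replaced by a simple saturating function.
    return 0.5 * (c / (1.0 + c) - q)

def step(c, q, d):
    # One step of (12): implicit solve for c, explicit update for q.
    hn = h(c, q)
    c_new = np.linalg.solve(A, B @ c + d - (1.0 - eps) / eps * dt * hn)
    q_new = q + dt * hn
    return c_new, q_new

c, q = np.zeros(Nx), np.zeros(Nx)
d = np.zeros(Nx); d[0] = 1e-2      # inlet term d^n = d_0^n * e_1
for _ in range(50):
    c, q = step(c, q, d)
```

In an efficient implementation, $A$ would be stored as a sparse matrix and factorized once, since it is constant in time.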

6.2 Reduced-order model

Let $N \in \mathbb{N}^+$ be the number of RB vectors for $c_z$ and $q_z$, and $M \in \mathbb{N}^+$ the number of CRB vectors for the operators $h_a$ and $h_b$. Here, for simplicity of the analysis, we use the same RB dimension $N$ for $c_a$, $c_b$, $q_a$ and $q_b$, but one can certainly take different dimensions; the same applies to $h_a$ and $h_b$. Assume that $W_z \in \mathbb{R}^{\mathcal N\times M}$ is the CRB for the nonlinear operator $h_z$, and that $V_{c_z}, V_{q_z} \in \mathbb{R}^{\mathcal N\times N}$ ($V_{c_z}^T V_{c_z} = I$, $V_{q_z}^T V_{q_z} = I$) are the RB matrices for the field variables $c_z$ and $q_z$, respectively, i.e.,

$$h_z^n \approx W_z\beta_z^n, \qquad c_z^n \approx \hat c_z^n = V_{c_z} a_{c_z}^n, \qquad q_z^n \approx \hat q_z^n = V_{q_z} a_{q_z}^n, \qquad n = 0,\ldots,K. \qquad (13)$$

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

$$A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n\, d_{c_z} - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, H_{c_z}\beta_z^n, \qquad a_{q_z}^{n+1} = a_{q_z}^n + \Delta t\, H_{q_z}\beta_z^n, \qquad (14)$$

where $a_{c_z}^n = a_{c_z}^n(\mu) = (a_{c_z,1}^n,\ldots,a_{c_z,N}^n)^T$, $a_{q_z}^n = a_{q_z}^n(\mu) = (a_{q_z,1}^n,\ldots,a_{q_z,N}^n)^T \in \mathbb{R}^N$ are the reduced state vectors of the ROM, and $A_{c_z} = V_{c_z}^T A V_{c_z}$, $B_{c_z} = V_{c_z}^T B V_{c_z}$, $d_{c_z} = V_{c_z}^T e_1$, $H_{c_z} = V_{c_z}^T W_z$, $H_{q_z} = V_{q_z}^T W_z$ are the reduced matrices.

Note that $\beta_z^n = \beta_z^n(\mu) = (\beta_{z,1}^n,\ldots,\beta_{z,M}^n)^T \in \mathbb{R}^M$ is the vector of coefficients for the empirical interpolation of the nonlinear operator $h_z^n$; it is parameter- and time-dependent. The evaluation of $\beta_z^n$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

$$\sum_{i=1}^M \beta_{z,i}^n\, W_{z,i}(x_j) = \hat h_{z,j}^n, \qquad j = 1,\ldots,M.$$

Here the evaluation of $\hat h_{z,j}^n$ only needs the $j$-th entries ($\hat c_{a,j}^n$, $\hat c_{b,j}^n$ and $\hat q_{z,j}^n$) of the approximate solution vectors ($\hat c_a^n$, $\hat c_b^n$ and $\hat q_z^n$), i.e., $\hat h_{z,j}^n = h_z(\hat c_{a,j}^n, \hat c_{b,j}^n, \hat q_{z,j}^n)$. For general empirical operator interpolation, the value of the operator at an interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors); for more details, refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all $\mathcal N$-dependent terms are computed and stored; in the online stage, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sets $P^{crb}_{train}$ and $P_{train}$ (they can be chosen differently), Algorithm 2 is implemented to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal N$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $A_{c_z}$, $B_{c_z}$, $d_{c_z}$, $H_{c_z}$ and $H_{q_z}$), and the $\mathcal N$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in P$, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered via (13).
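The decomposition can be sketched as follows: all products involving the full dimension are formed once offline, and each online step touches only $N$- and $M$-dimensional quantities. The matrices and bases below are random stand-ins for the outputs of Algorithms 2 and 4:

```python
import numpy as np

rng = np.random.default_rng(1)
Nfull, N, M = 500, 10, 6           # full dimension, RB dimension, CRB dimension

# Illustrative stand-ins for the FOM matrices and the (orthonormal) bases.
A = np.eye(Nfull) + 0.01 * rng.standard_normal((Nfull, Nfull))
B = np.eye(Nfull) - 0.01 * rng.standard_normal((Nfull, Nfull))
V, _ = np.linalg.qr(rng.standard_normal((Nfull, N)))   # RB, e.g. V_{c_z}
W = rng.standard_normal((Nfull, M))                    # CRB, e.g. W_z
e1 = np.zeros(Nfull); e1[0] = 1.0

# ---- offline: all full-dimension-dependent products, computed once ----
A_r = V.T @ A @ V                  # N x N
B_r = V.T @ B @ V                  # N x N
d_r = V.T @ e1                     # N
H_r = V.T @ W                      # N x M

# ---- online: a time step of (14) costs only reduced-dimension operations ----
def online_step(a, beta, d0, dt=1e-3, eps=0.4):
    rhs = B_r @ a + d0 * d_r - (1.0 - eps) / eps * dt * (H_r @ beta)
    return np.linalg.solve(A_r, rhs)

a = online_step(np.zeros(N), np.ones(M), d0=1.0)
```

The online cost is independent of the full dimension: only $N\times N$ and $N\times M$ objects are touched per step.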

7 Output-oriented error estimation

It is crucial to have a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all the simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response $y(u^{\mathcal N})$ is the quantity of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the indicator $\eta_N(\mu_{max})$ should be an error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2\rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal N}$, and the induced norm $\|\cdot\|$ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis functions of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For the parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall from Section 3 that $L_I(t^n)$ and $L_E(t^n)$ are linear; the evolution scheme (7) can then be rewritten in the vector space as

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g(u^n(\mu);\mu), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal N\times\mathcal N}$ are constant matrices and $g(u^n(\mu);\mu) \in \mathbb{R}^{\mathcal N}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0,\ldots,K-1$.

Given a parameter $\mu \in P$, let $\hat u^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and let $\hat g^n(\mu) = I_M[g(\hat u^n(\mu))] = W\beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal N\times N}$, $W \in \mathbb{R}^{\mathcal N\times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\hat u^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$, and write $u^n$, $\hat u^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)}\hat u^n + I_M[g(\hat u^n)] - A^{(n)}\hat u^{n+1}. \qquad (16)$$

A simple computation gives the norm of the residual:

$$\|r^{n+1}(\mu)\|^2 = \langle r^{n+1}(\mu), r^{n+1}(\mu)\rangle = (a^n)^T\,\underline{V^T (B^{(n)})^T B^{(n)} V}\,a^n + (\beta^n)^T\,\underline{W^T W}\,\beta^n + (a^{n+1})^T\,\underline{V^T (A^{(n)})^T A^{(n)} V}\,a^{n+1} + 2\,(\beta^n)^T\,\underline{W^T B^{(n)} V}\,a^n - 2\,(a^n)^T\,\underline{V^T (B^{(n)})^T A^{(n)} V}\,a^{n+1} - 2\,(\beta^n)^T\,\underline{W^T A^{(n)} V}\,a^{n+1}. \qquad (17)$$
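When $A$ and $B$ are constant in time, as in (12), the six underlined Gram-type terms in (17) can be computed once offline, and the online evaluation of $\|r^{n+1}\|^2$ costs only reduced-dimension operations. A NumPy sketch cross-checking the cheap evaluation against the direct full-order residual (16), with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
Nfull, N, M = 400, 8, 5
A = np.eye(Nfull) + 0.05 * rng.standard_normal((Nfull, Nfull))
B = np.eye(Nfull) + 0.05 * rng.standard_normal((Nfull, Nfull))
V, _ = np.linalg.qr(rng.standard_normal((Nfull, N)))
W = rng.standard_normal((Nfull, M))

# ---- offline: the six underlined Gram-type terms of (17) ----
G_BB = V.T @ B.T @ B @ V
G_WW = W.T @ W
G_AA = V.T @ A.T @ A @ V
G_WB = W.T @ B @ V
G_BA = V.T @ B.T @ A @ V
G_WA = W.T @ A @ V

def residual_norm_sq(a_n, a_np1, beta_n):
    # Evaluate ||r^{n+1}||^2 via (17) using only reduced-dimension data.
    return (a_n @ G_BB @ a_n + beta_n @ G_WW @ beta_n
            + a_np1 @ G_AA @ a_np1 + 2.0 * beta_n @ (G_WB @ a_n)
            - 2.0 * a_n @ (G_BA @ a_np1) - 2.0 * beta_n @ (G_WA @ a_np1))

# Cross-check against the direct full-order residual (16).
a_n, a_np1 = rng.standard_normal(N), rng.standard_normal(N)
beta_n = rng.standard_normal(M)
r = B @ (V @ a_n) + W @ beta_n - A @ (V @ a_np1)
val_cheap = residual_norm_sq(a_n, a_np1, beta_n)
val_full = float(r @ r)
```

The two values agree up to round-off, while the cheap evaluation never forms a full-order vector.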

Proposition 7.1 Assume that the operator $g: \mathbb{R}^{\mathcal N} \to \mathbb{R}^{\mathcal N}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g\,\|x - y\|, \qquad x, y \in W^{\mathcal N},$$

and that the interpolation of $g$ is "exact" with a certain dimension of $W = [W_1,\ldots,W_{M+M'}]$, i.e.,

$$I_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m\,\beta_m^n = g(\hat u^n).$$

Assume further that for all $\mu \in P$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \hat u^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left\|(A^{(k)})^{-1}\right\| \left(\prod_{j=k+1}^{n-1} G^{(j)}\right) \left(\varepsilon_{EI}^k(\mu) + \|r^{k+1}(\mu)\|\right), \qquad (18)$$

where $G^{(j)} = \|(A^{(j)})^{-1}\|\left(\|B^{(j)}\| + L_g\right)$ and $\varepsilon_{EI}^n(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta_{N,M}^n(\mu) := \sum_{k=0}^{n-1} \left(\prod_{j=k+1}^{n-1} G_F^{(j)}\right) \left(\left\|(A^{(k)})^{-1}\right\|\varepsilon_{EI}^k(\mu) + \left\|(A^{(k)})^{-1} r^{k+1}(\mu)\right\|\right), \qquad (19)$$

where $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g\,\|(A^{(j)})^{-1}\|$, $n = 0,\ldots,K-1$.

Proof. Taking the difference of (15) and (16), we obtain the error equation

$$A^{(n)} e^{n+1}(\mu) = B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat u^n)] + r^{n+1}(\mu) = B^{(n)} e^n(\mu) + \left(g(u^n) - g(\hat u^n)\right) + \left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + r^{n+1}(\mu). \qquad (20)$$

Multiplying both sides of (20) by $(A^{(n)})^{-1}$, we obtain

$$e^{n+1}(\mu) = (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right) + (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu). \qquad (21)$$

Applying the Lipschitz continuity of $g$, we have $\|g(u^n) - g(\hat u^n)\| \le L_g\,\|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm,

$$\|e^{n+1}(\mu)\| \le \left\|(A^{(n)})^{-1}\right\|\left(\left(\|B^{(n)}\| + L_g\right)\|e^n(\mu)\| + \varepsilon_{EI}^n(\mu) + \|r^{n+1}(\mu)\|\right), \qquad (22)$$

where $\varepsilon_{EI}^n(\mu) = \|g(\hat u^n) - I_M[g(\hat u^n)]\| = \left\|\sum_{m=M+1}^{M+M'} W_m\,\beta_m^n\right\|$. Resolving the recursion (22) with the initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we reconsider the equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\|e^{n+1}(\mu)\| \le \left(\left\|(A^{(n)})^{-1} B^{(n)}\right\| + L_g\left\|(A^{(n)})^{-1}\right\|\right)\|e^n(\mu)\| + \left\|(A^{(n)})^{-1}\right\|\varepsilon_{EI}^n(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (23)$$

since the two inequalities $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\|\,\|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\|\,\|r^{n+1}\|$ hold. Resolving the recursion (23) with the initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19).

Remark 7.2 In many cases, the operators $L_I(t^n)$ and $L_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case, the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3 In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\|\left(\|B^{(j)}\| + L_g\right) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g\,\|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4 For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|(A^{(n)})^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal N$. In addition, as is shown in [11], a small $M'$ gives good results in practice; we use $M' = 1$ in the later simulations.

Remark 7.5 The 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself, but some outputs. In such cases, it is desirable to estimate the output error in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6 Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the form

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O\times\mathcal N}$ is a constant matrix. Then the output error $e_O^n(\mu) = P u^n - P\hat u^n$ satisfies

$$\|e_O^{n+1}(\mu)\| \le \bar\eta_{N,M}^{n+1} := G_O^{(n)}\,\eta_{N,M}^n + \left\|P (A^{(n)})^{-1}\right\|\varepsilon_{EI}^n(\mu) + \|P\|\left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (25)$$

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g\,\|P (A^{(n)})^{-1}\|$, $n = 0,\ldots,K-1$.

Proof. Multiplying both sides of the error equation (21) by $P$ from the left, we get

$$P e^{n+1}(\mu) = P\left((A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\left(g(u^n) - g(\hat u^n)\right) + (A^{(n)})^{-1}\left(g(\hat u^n) - I_M[g(\hat u^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu)\right).$$

Applying the Lipschitz continuity of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O^{(n)}\,\|e^n(\mu)\| + \left\|P (A^{(n)})^{-1}\right\|\varepsilon_{EI}^n(\mu) + \|P\|\left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 77 Once the error estimation for the field variable is obtained eg (19) atrivial error bound for the output (24) can be given as∥∥en+1

O (micro)∥∥ =

∥∥Pen+1(micro)∥∥

le P∥∥en+1(micro)

∥∥le P

(GF

(n) en(micro)+∥∥∥(A(n))minus1

∥∥∥ εnEI(micro) +∥∥∥(A(n))minus1rn+1(micro)

∥∥∥) (27)

The last inequality is true due to the inequality (23) It is obvious that the boundfor∥∥en+1O (micro)

∥∥ in (26) is sharper than that in (27) As a result the final output errorbound in (25) is sharper than the trivial output error bound derived in (27)
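All of these bounds rely on spectral norms of the form $\|H^{-1}\|$, which, by Remark 7.5, can be evaluated from the smallest singular value of $H$ without forming the inverse. A minimal NumPy sketch (the test matrix is hypothetical; in the present context $H$ stands for $A$ or $A^{(n)}$):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical well-conditioned test matrix standing in for A
H = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)

# spectral norm of the inverse: ||H^{-1}||_2 = sigma_max(H^{-1}) = 1/sigma_min(H)
sigma = np.linalg.svd(H, compute_uv=False)   # singular values, descending order
bound_factor = 1.0 / sigma[-1]

# sanity check against the explicit inverse (avoided in practice)
explicit = np.linalg.norm(np.linalg.inv(H), 2)
assert np.isclose(bound_factor, explicit)
```

Since the matrices entering the bounds are parameter-independent in the batch chromatographic model below, such factors need to be computed only once, in the offline stage.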

7.2 Output error estimation for the batch chromatographic model

The error estimates above are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimate for the underlying system by collecting all the field variables in one vector. However, the behavior of the solutions of the individual equations might differ considerably; it is therefore preferable to generate a separate reduced basis for each field variable rather than a unified basis for all of them.

Here we derive an error estimate for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example and recall the detailed simulation for $c_z$ (see (12)),

$$A c^{n+1}_z = B c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h^n_z. \qquad (28)$$

The residual caused by the approximate solution $\hat{c}^n_z$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \hat{c}^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat{c}^n_z)] - A \hat{c}^{n+1}_z. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This makes the error bounds (32) and (33) below relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) stems from the Neumann boundary condition and does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume that there exists a positive constant $L_h$ such that

$$\left\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\hat{c}^n_a, \hat{c}^n_b, \hat{q}^n_z)\right\| \le L_h \left\|c^n_z - \hat{c}^n_z\right\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming that the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimate for the approximation error $e^n_{c_z}(\mu) = c^n_z - \hat{c}^n_z$ $(n = 1, \ldots, K)$ as follows:

$$\left\|e^n_{c_z}(\mu)\right\| \le \sum_{k=0}^{n-1} \left\|A^{-1}\right\|^{n-k} C^{\,n-1-k} \left(\tau\, \varepsilon^k_{EI}(\mu) + \left\|r^{k+1}_{c_z}(\mu)\right\|\right), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\,\Delta t$. More tightly,

$$\left\|e^n_{c_z}(\mu)\right\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F,c})^{n-1-k} \left(\tau \left\|A^{-1}\right\| \varepsilon^k_{EI}(\mu) + \left\|A^{-1} r^{k+1}_{c_z}(\mu)\right\|\right), \qquad (32)$$

where $G_{F,c} = \left\|A^{-1}B\right\| + \tau L_h \left\|A^{-1}\right\|$.

Analogously, the error bound for the output of interest $e^n_{c_z,O}(\mu) = P c^n_z - P \hat{c}^n_z$ can be obtained from the error bound for the field variable. Similarly to (25), we have

$$\left\|e^{n+1}_{c_z,O}(\mu)\right\| \le \eta^{n+1}_{N,M,c_z}(\mu) := G_{O,c}\,\eta^n_{N,M,c_z}(\mu) + \tau \left\|P A^{-1}\right\| \varepsilon^n_{EI}(\mu) + \|P\| \left\|A^{-1} r^{n+1}_{c_z}(\mu)\right\|, \qquad (33)$$

where $G_{O,c} = \left\|P A^{-1} B\right\| + \tau L_h \left\|P A^{-1}\right\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1 \times N}$ in this model, which means that the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. Error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. Since the output of interest for the system (12) depends only on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimate for the vector of field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; if the output error estimate were instead derived by considering all the field variables together, it would involve the corresponding bound for the vector $U$ (denoted by $\eta^n_{N,M,U}(\mu)$). Obviously, the bound $\eta^n_{N,M,U}(\mu)$ is much coarser than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can conservatively be chosen large, and the weight $\tau L_h$ remains small because the time step $\Delta t$ is typically very small.
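For implementation purposes, note that the sum in (32) satisfies the recursion $\eta^{n+1}_{N,M,c_z} = G_{F,c}\,\eta^n_{N,M,c_z} + \tau \|A^{-1}\| \varepsilon^n_{EI} + \|A^{-1} r^{n+1}_{c_z}\|$, so the bound can be updated at negligible cost alongside the time stepping. A sketch in Python with hypothetical values for the interpolation-error and residual sequences:

```python
def field_error_bounds(G_Fc, tau, norm_Ainv, eps_EI, res_norms):
    """Update the bound (32) recursively over the time steps.

    eps_EI[n]    : EI error estimate eps^n_EI(mu)             (hypothetical data)
    res_norms[n] : residual norm ||A^{-1} r^{n+1}_{cz}(mu)||  (hypothetical data)
    Returns the list [eta^1, ..., eta^K].
    """
    eta, bounds = 0.0, []
    for eps_n, r_n in zip(eps_EI, res_norms):
        eta = G_Fc * eta + tau * norm_Ainv * eps_n + r_n
        bounds.append(eta)
    return bounds

# toy data: with G_Fc >= 1 the bound accumulates monotonically in time,
# which is exactly the stagnation effect discussed in Section 7.3
etas = field_error_bounds(G_Fc=1.01, tau=1e-3, norm_Ainv=1.0,
                          eps_EI=[1e-8] * 200, res_norms=[1e-7] * 200)
assert all(b > a for a, b in zip(etas, etas[1:]))
```

The same recursion applies verbatim to the output bound (33), with $G_{F,c}$ replaced by $G_{O,c}$ and the norms weighted by $P$.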

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates over time. Since $\eta^n_{N,M}(\mu)$ (respectively $\eta^n_{N,M,c_z}(\mu)$) enters the output error bound (25) (respectively (33)), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. Similar phenomena are reported in the literature, e.g. in [30], where it is pointed out that error estimates such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. Nevertheless, the output error bound is cheap to compute, and it can still guide the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that allows an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4 we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, we additionally check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. If the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

To circumvent the problem above we add a validation step to get an early-stop ofthe extension process as is shown in Algorithm 5 More precisely after Step 6 inAlgorithm 4 we compute the decay rate dη of the error bound If dη is smaller than apredefined tolerance indicating the error bound stagnates then we further check thetrue output error at the parameter micromax determined by the greedy algorithm Whenthe true output error at micromax is smaller than tolRB we assume that there is no needto include a new basis vector and the RB extension can be stopped otherwise theprocess should continue

Algorithm 5 RB generation using ASS-POD-Greedy with early-stopInput Ptrain micro0 tolRB(lt 1) toldecayOutput RB V = [V1 VN ]

1 Implement Step 1 in Algorithm 42 while the error ηN (micromax) gt tolRB do3 Implement Steps 3minus6 in Algorithm 44 Compute the decay rate of the error bound dη = ηNminus1(microold

max)minusηN (micromax)ηNminus1(microold

max) 5 if dη lt toldecay then6 Compute the true output error at the selected parameter micromax eN (micromax)7 if eN (micromax) lt tolRB then8 Stop9 end if

10 end if11 end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. To monitor such cases, the tolerance $tol_{decay}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 is triggered.
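The decay-rate test in Steps 4-10 of Algorithm 5 can be sketched as follows; the true-error callback stands in for the (more expensive) detailed simulation at $\mu_{\max}$ in Step 6:

```python
def early_stop(eta_old, eta_new, true_output_error_at_mu_max, tol_RB, tol_decay):
    """Early-stop test of Algorithm 5 (sketch).

    eta_old, eta_new : error bound before/after the latest basis extension
    true_output_error_at_mu_max : callable computing e_N(mu_max) on demand
    """
    if eta_new <= tol_RB:                      # regular greedy termination
        return True
    d_eta = (eta_old - eta_new) / eta_old      # decay rate of the bound
    if d_eta < tol_decay:                      # bound stagnates: validate
        return true_output_error_at_mu_max() < tol_RB
    return False

# stagnating bound (relative decay 1e-5) but already small true error -> stop
assert early_stop(1.0e-3, 0.99999e-3, lambda: 5e-7,
                  tol_RB=1e-6, tol_decay=1e-4)
# bound still decaying normally -> continue extending the basis
assert not early_stop(1.0e-3, 5.0e-4, lambda: 5e-7,
                      tol_RB=1e-6, tol_decay=1e-4)
```

Because the validation step solves the full-order model only when the bound stagnates, its cost is incurred at most a few times during the whole greedy loop.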

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study we investigate the optimal operation of batch chromatography. More specifically, the operating variables $\mu = (Q, t_{in})$ are chosen optimally in a reasonable parameter domain so as to maximize the production rate $Pr(\mu) = \frac{s(\mu)\,Q}{t_{cyc}}$ while respecting a requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\,dt$, and $c_{z,O}(t;\mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ $(z = a, b)$ at the outlet of the column. We consider the following optimization problem for batch chromatography:

$$\begin{aligned}
\min_{\mu \in \mathcal{P}}\; & -Pr(\mu) \\
\text{s.t.}\; & Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& c_z(\mu),\, q_z(\mu) \text{ are the solutions to the system } (3)-(5),\; z = a, b.
\end{aligned} \qquad (34)$$

Notice that when solving the system (3)-(5), the time step size has to be taken relatively small so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes considerable difficulties in the error estimation and in the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients of the isotherm equation (4) are given in Table 2. The parameter domain for the operating variables $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $N$ in the FOM (12) is taken as 1500.

The model parameters and operating conditions are presented in Table 1 The Henryconstants and thermodynamic coefficients in the isotherm equation (4) are given inTable 2 The parameter domain for the operating variable micro is P = [00667 01667]times[05 20] The minimum recovery yield Recmin is taken as 800 and the purityrequirements are specified as Pua = 950 Pub = 950 which determine the cuttingpoints t2 and t3 in s(micro) To capture the dynamics precisely the dimension of spatialdiscretization N in the FOM (12) is taken as 1500

Table 1 Model parameters and operating conditions for the chromatographic model

Column dimensions [cm] 26 times 105Column porosity ε [-] 04Peclet number Pe [-] 2000Mass-transfer coefficients κz z = a b [1s] 01Feed concentrations cfz z = a b [gl] 29


Table 2: Coefficients of the adsorption isotherm equation

  H_a1 [-]     2.69      H_b1 [-]     3.73
  H_a2 [-]     0.1       H_b2 [-]     0.3
  K_a1 [l/g]   0.0336    K_b1 [l/g]   0.0446
  K_a2 [l/g]   1.0       K_b2 [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB. Finally, we present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 (2.83 GHz) and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtimes for the generation of the RB and CRB with different threshold values $tol_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{in})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs $(W_a, W_b)$ at the same error tolerance ($tol_{CRB} = 1.0 \times 10^{-7}$) with different thresholds. $M' = 1$ is the number of the basis for error estimation.

           tol_ASS        Res(xi^a_{M+M'}), Res(xi^b_{M+M'})   M (W_a, W_b)   Runtime [h]
  no ASS   -              9.2e-8, 8.5e-8                       (146, 152)     62.5 (-)
  ASS      1.0e-4         9.6e-8, 8.1e-8                       (147, 152)     6.05 (-90.3%)
  ASS      1.0e-3         8.7e-8, 9.9e-8                       (147, 152)     3.62 (-94.2%)
  ASS      1.0e-2         9.4e-8, 6.2e-8                       (144, 150)     2.70 (-95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that for the ASS variant the CRB is precomputed with $tol_{ASS} = 1.0 \times 10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{RB} = 1.0 \times 10^{-6}$, $tol_{ASS} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS

  Algorithms        Runtime [h]
  POD-Greedy        16.22 (1)
  ASS-POD-Greedy    7.92 (-51.2%)

  (1) Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient output error estimate for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g. Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) bounds the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. In Algorithm 4 we use the error indicator $\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal{P}_{train}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the output error bound for $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu \in \mathcal{P}_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \hat{c}^n_{z,O}(\mu)\|$, and $\hat{c}^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error bound decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.
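In code, the greedy indicator of this section reduces to a time average of the per-step bounds followed by a maximization over the training set and the two components. A sketch with hypothetical bound values:

```python
import numpy as np

def select_mu_max(bounds):
    """Greedy indicator of Section 8.2 (sketch).

    bounds[mu][z] is the sequence (eta^n_{N,M,cz}(mu))_{n=1..K} of per-time-step
    output error bounds; the numbers used below are hypothetical.
    """
    def indicator(mu):
        # time-average each component's bound, then take the worse component
        return max(float(np.mean(etas)) for etas in bounds[mu].values())
    mu_max = max(bounds, key=indicator)
    return mu_max, indicator(mu_max)

bounds = {
    (0.10, 1.0): {"a": [1e-5, 3e-5], "b": [2e-5, 2e-5]},
    (0.15, 1.5): {"a": [4e-5, 6e-5], "b": [1e-5, 1e-5]},
}
mu_max, eta = select_mu_max(bounds)
assert mu_max == (0.15, 1.5) and abs(eta - 5e-5) < 1e-12
```

Averaging over time, rather than taking the bound at the final time step, mitigates the influence of the accumulation effect described in Section 7.3 on the parameter selection.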

Figure 3 shows the results for Algorithm 5, where the decay tolerance is $tol_{decay} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.

We point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates; this will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max error} = \max_{\mu \in \mathcal{P}_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure: semilog plot of the maximal error over P_train (ranging from 10^-9 to 10^3) versus the size of the RB, N (6 to 66); curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal{P}_{train}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.


[Figure: semilog plot of the maximal error over P_train (ranging from 10^-7 to 10^3) versus the size of the RB, N (6 to 56); curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure: scatter plot of the selected parameters over the parameter domain, feed flow rate Q (0.0667 to 0.1667) against injection period t_in (0.5 to 2.0).]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{ASS} = 1.0 \times 10^{-4}$, $tol_{RB} = 1.0 \times 10^{-6}$.

  Simulations             Max error    Average runtime [s]   SpF
  FOM (N = 1500)          -            312.13                -
  ROM (POD-Greedy)        3.79e-7      6.3                   50
  ROM (ASS-POD-Greedy)    4.58e-7      6.3                   50

[Figure: dimensionless concentration (0 to 0.9) at the column outlet versus dimensionless time (0 to 11); curves: c_a FOM, c_b FOM, c_a ROM, c_b ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM ($N = 1500$) and the ROM ($N = 47$) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.


8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned}
\min_{\mu \in \mathcal{P}}\; & -\hat{Pr}(\mu) \\
\text{s.t.}\; & Rec_{\min} - \hat{Rec}(\mu) \le 0, \\
& \hat{c}^n_z(\mu),\, \hat{q}^n_z(\mu) \text{ are the RB approximations from the ROM (14)},\; z = a, b.
\end{aligned}$$

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximate production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$ the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let $\mu_k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu_{k+1} - \mu_k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
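The coupling between the ROM and the derivative-free optimizer requires only two callbacks. The sketch below uses a toy stand-in for the ROM solve and a brute-force grid search in place of NLOPT_GN_DIRECT_L; the function names and the toy model are hypothetical:

```python
import itertools

def make_callbacks(solve_rom, rec_min=0.8):
    """Wrap a ROM evaluation mu -> (Pr, Rec) as the objective/constraint
    callbacks expected by derivative-free optimizers such as DIRECT-L."""
    def objective(mu):
        pr, _ = solve_rom(mu)
        return -pr                        # maximize Pr by minimizing -Pr
    def constraint(mu):
        _, rec = solve_rom(mu)
        return rec_min - rec              # feasible iff <= 0, as in (34)
    return objective, constraint

def toy_rom(mu):                          # hypothetical stand-in for the ROM (14)
    Q, t_in = mu
    return Q * t_in, 1.0 - 0.05 * t_in    # fake production rate and recovery

objective, constraint = make_callbacks(toy_rom)
# coarse grid over P = [0.0667, 0.1667] x [0.5, 2.0] as an optimizer stand-in
grid = itertools.product(
    [0.0667 + 0.01 * i for i in range(11)],
    [0.5 + 0.15 * j for j in range(11)],
)
mu_opt = min((mu for mu in grid if constraint(mu) <= 0.0), key=objective)
assert abs(mu_opt[0] - 0.1667) < 1e-9 and abs(mu_opt[1] - 2.0) < 1e-9
```

Because each callback invocation triggers only a reduced simulation, the per-iteration cost of the optimizer is governed by the small RB dimension rather than by $N$.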

Note that if the offline cost, i.e. the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is even larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many-query simulations with varying parameters, two runtimes should be well balanced: the runtime for constructing and using a surrogate ROM, and the runtime for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM

  Simulations      Objective (Pr)   Opt. solution (mu)    N_it (1)   Runtime [h]   SpF
  FOM-based Opt    0.020264         (0.07964, 1.05445)    202        33.88         -
  ROM-based Opt    0.020266         (0.07964, 1.05445)    202        0.63          54

  (1) N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimate is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over time. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to stop the RB extension in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9-44.

[2] M. Barrault, Y. Maday, N. C. Nguyen and A. T. Patera, An "empirical interpolation" method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academie des Sciences, Paris, Series I, 339 (2004), pp. 667-672.

[3] U. Baur, C. Beattie, P. Benner and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489-2518.

[4] P. Benner, L. Feng, S. Li and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270-3288.

[11] M. Drohmann, B. Haasdonk and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937-969.

[12] J. L. Eftang, D. J. Knezevic and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395-422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268-280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501-1021502.

[16] L. Feng, P. Benner and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84-110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401-1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339-344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859-873.

[22] B. Haasdonk, M. Dihlmann and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423-442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471-478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277-302.

[25] B. Haasdonk, M. Ohlberger and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145-161.

[26] K.-H. Hoffmann, G. Leugering and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, (2010).

[28] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646-670.

[30] N.-C. Nguyen, G. Rozza and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157-185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145-161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1-10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43-65.

[34] A. T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70-80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


decomposition. During the offline stage, all terms that depend on the full dimension but are independent of the parameter can be precomputed, so that a parametrized reduced model is obtained a priori; during the optimization process, a reliable output response can then be obtained rapidly by an online simulation based on the ROM at the parameter determined by the optimization procedure. In this way the ROM-based optimization can be solved much more efficiently than the FOM-based one. Note that the offline time is usually not taken into consideration, although the offline computation is typically time-consuming, especially for time-dependent PDEs.

To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) are of interest, the solutions at the time instances of the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g. in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computations become very expensive because the number of snapshots is too large; if one just trivially selects a part of the solutions, i.e. the solutions at a subset of the time instances (e.g. every second or every few time steps), the final RB approximation might be of low accuracy, because important information may be lost by such a naive snapshot selection. We propose to select the snapshots adaptively, according to the variation of the solution during the evolution process. The idea is to make full use of the behavior of the trajectory and to discard redundant (nearly linearly dependent) information adaptively. This enables the generation of the RB from a small number of snapshots containing only "useful" information. In addition, the technique is easily combined with other algorithms for the generation of the RB, e.g. the POD-Greedy algorithm [24].
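One plausible realization of this idea (the error indicator actually used in Algorithm 3 may differ) keeps a time-step solution only when it has changed sufficiently relative to the last stored snapshot:

```python
import numpy as np

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Sketch of adaptive snapshot selection: walk through the trajectory and
    store a solution only if it differs from the last stored snapshot by more
    than tol_ass in a relative sense; near-duplicates are discarded."""
    snapshots = [trajectory[0]]
    for u in trajectory[1:]:
        last = snapshots[-1]
        if np.linalg.norm(u - last) > tol_ass * max(np.linalg.norm(last), 1e-14):
            snapshots.append(u)
    return snapshots

# toy trajectory that varies slowly in time: almost all of the 1000
# time-step solutions are redundant and only a handful are kept
traj = [np.full(8, 1.0 + 1e-5 * k) for k in range(1000)]
kept = adaptive_snapshot_selection(traj, tol_ass=1e-3)
assert 2 <= len(kept) <= 12
```

The threshold plays the role of $tol_{ASS}$ in Section 8.1: a larger value discards more near-duplicate states, at the price of a possible loss of accuracy in the resulting basis.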

This paper is organized as follows. We state the underlying PDE-constrained optimization problem in detail in Section 2. Reviews of the RBM and the EIM are given in Sections 3 and 4, respectively. The adaptive technique of snapshot selection and its implementation are addressed in detail in Section 5. Section 6 presents the RB scheme for the batch chromatographic model, including the derivation of the FOM based on the FV discretization, the generation of the ROM, and the strategy of the offline-online decomposition. In Section 7, an output-oriented error bound is derived in the vector space for evolution equations for the RBM based on FV discretization, and an early-stop criterion is proposed to make the construction of the ROM more efficient. Numerical examples, including optimization based on the ROM, are presented in Section 8. Conclusions are drawn in Section 9.

2 Problem statement

In this work we consider the following PDE-constrained optimization problem:

$$\begin{aligned}
\min_{\mu \in \mathcal{P}}\; & \mathcal{J}(u(t, x; \mu); \mu) \\
\text{s.t.}\; & \Psi(u(t, x; \mu); \mu) \le 0, \\
& \Phi(u(t, x; \mu); \mu) = 0,
\end{aligned} \qquad (1)$$

where $\mathcal{J}$ is the objective function and $\Psi$ defines the inequality constraints. The field variable $u(t, x; \mu)$ is the solution to the underlying parametrized partial differential equations $\Phi(u(t, x; \mu)) = 0$, $\mu \in \mathcal{P}$. Such optimization problems arise in many applications, e.g. in aerodynamics, fluid dynamics and chemical processes. In practical computations the PDEs are usually discretized, so that the optimization problem (1) is replaced by an optimization problem in finite dimensions:

\[
\min_{\mu\in\mathcal{P}} \; J\big(u^{\mathcal N}(t;\mu),\mu\big)
\quad \text{s.t.} \quad \Psi\big(u^{\mathcal N}(t;\mu),\mu\big) \le 0, \quad \Phi\big(u^{\mathcal N}(t;\mu),\mu\big) = 0,
\tag{2}
\]

where u^𝒩 = u^𝒩(t;µ) ∈ ℝ^𝒩 is the solution of the discretized system of equations Φ(u^𝒩(t;µ),µ) = 0, and J, Ψ and Φ are the operators in the finite dimensional vector space corresponding to 𝒥, Ψ and Φ, respectively. The discretized equations are often of very large scale and complex. At each iteration of the optimization process, such a large-scale complex system of equations must be solved at least once. As a result, the whole optimization process is time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns u^𝒩.

To further motivate and illustrate our methods, we consider a particular example: the optimal operation of batch chromatography. Batch chromatography, a crucial separation and purification tool, is widely employed in the food, fine chemical and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period t_in, a mixture consisting of components a and b is injected at the inlet of the column, which is packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column and thus separate from each other when exiting the column. At the column outlet, component a is collected between the cutting points t_3 and t_4, and component b is collected between t_1 and t_2. Here the positions of t_1 and t_4 are determined by a minimum concentration threshold that the detector can resolve, and the positions of t_2 and t_3 are determined by the purity specifications (Pu_a and Pu_b) imposed on the products. After a cycle period t_cyc = t_4 − t_1, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with a limited mass-transfer rate characterized by a linear driving force approximation. The governing equations in dimensionless form are formulated as follows:

\[
\begin{aligned}
\frac{\partial c_z}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q_z}{\partial t} &= -\frac{\partial c_z}{\partial x} + \frac{1}{\mathrm{Pe}}\,\frac{\partial^2 c_z}{\partial x^2}, && 0 < x < 1, \\
\frac{\partial q_z}{\partial t} &= \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\big(q_z^{\mathrm{Eq}} - q_z\big), && 0 \le x \le 1,
\end{aligned}
\tag{3}
\]

where c_z, q_z are the concentrations of component z (z = a, b) in the liquid and solid phase, respectively, Q is the volumetric feed flow rate, A_c the cross-sectional area of the column with length L, ε the column porosity, κ_z the mass-transfer coefficient, and Pe the Peclet number. The adsorption equilibrium q_z^Eq is described by isotherm equations of bi-Langmuir type:

\[
q_z^{\mathrm{Eq}} = f_z(c_a, c_b)
= \frac{H_{z1}\, c_z}{1 + K_{a1} c_{f,a} c_a + K_{b1} c_{f,b} c_b}
+ \frac{H_{z2}\, c_z}{1 + K_{a2} c_{f,a} c_a + K_{b2} c_{f,b} c_b},
\tag{4}
\]

where c_{f,z} is the feed concentration of component z, and H_{zj} and K_{zj} (j = 1, 2) are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:

\[
\begin{aligned}
& c_z(0,x) = 0, \qquad q_z(0,x) = 0, \qquad 0 \le x \le 1, \\
& \left.\frac{\partial c_z}{\partial x}\right|_{x=0} = \mathrm{Pe}\,\big(c_z(t,0) - \chi_{[0,t_{\mathrm{in}}]}(t)\big), \qquad \left.\frac{\partial c_z}{\partial x}\right|_{x=1} = 0,
\end{aligned}
\tag{5}
\]

where t_in is the injection period and χ_{[0,t_in]} is the characteristic function

\[
\chi_{[0,t_{\mathrm{in}}]}(t) =
\begin{cases}
1 & \text{if } t \in [0, t_{\mathrm{in}}], \\
0 & \text{otherwise.}
\end{cases}
\]

More details about the mathematical modeling of batch chromatography can be found in [20].
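Since the isotherm (4) is a purely algebraic map, it can be evaluated pointwise. The following sketch shows such an evaluation; all numerical parameter values below are hypothetical placeholders, not the ones used in the paper:

```python
import numpy as np


def q_eq(c_a, c_b, par, z):
    """Bi-Langmuir adsorption equilibrium q_z^Eq = f_z(c_a, c_b), cf. eq. (4).

    `par` holds the Henry constants H_{z1}, H_{z2}, the thermodynamic
    coefficients K_{z1}, K_{z2}, and the feed concentrations c_{f,a}, c_{f,b}.
    """
    c_z = c_a if z == "a" else c_b
    den1 = 1.0 + par["Ka1"] * par["cfa"] * c_a + par["Kb1"] * par["cfb"] * c_b
    den2 = 1.0 + par["Ka2"] * par["cfa"] * c_a + par["Kb2"] * par["cfb"] * c_b
    return par["H" + z + "1"] * c_z / den1 + par["H" + z + "2"] * c_z / den2


# purely illustrative parameter set (hypothetical values)
par = {"Ha1": 2.0, "Ha2": 0.5, "Hb1": 3.0, "Hb2": 0.8,
       "Ka1": 0.1, "Ka2": 0.05, "Kb1": 0.2, "Kb2": 0.08,
       "cfa": 1.0, "cfb": 1.0}
q_a = q_eq(0.5, 0.3, par, "a")
```

Note that the denominators couple the two components, which is exactly the source of the nonlinearity of the system mentioned below.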

Note that the feed flow rate Q and the injection period t_in are often considered as the operating variables, denoted as µ = (Q, t_in), which play the role of parameters in the PDEs (3)–(5). The system of PDEs is nonlinear, time-dependent, and has a non-affine parameter dependency. The nonlinearity of the system is reflected by (4). To capture the system dynamics precisely, a large number of DOFs must be introduced in the discretization of the PDEs.

The optimal operation of batch chromatography is of practical importance, since it makes it possible to exploit the full economic potential of the process and to reduce the separation cost.

Figure 1: Sketch of a batch chromatographic process for the separation of a and b.

Many efforts have been made towards the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and the references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on the finely discretized FOM. Such a model with a large number of DOFs is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be repeatedly solved in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

In this work, the RBM is employed to generate a surrogate ROM of the parametrized PDEs. The resulting ROM is used for a rapid evaluation of the output response y(u^𝒩) of the discretized system Φ(u^𝒩(t;µ),µ) = 0 in (2) during the optimization process. In the next section we review the RBM and highlight some of its difficulties.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parametrized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution u(µ) of the parametrized PDEs depends smoothly on the parameter µ in the parameter domain 𝒫, so that for any parameter µ ∈ 𝒫 the corresponding solution u(µ) can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the qualification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain Ω ⊂ ℝ^d and the parameter domain 𝒫 ⊂ ℝ^p:

\[
\partial_t u(t,x;\mu) + \mathcal{L}[u(t,x;\mu)] = 0, \qquad t \in [0,T], \; x \in \Omega, \; \mu \in \mathcal{P},
\tag{6}
\]

where ℒ[·] is a spatial differential operator. Let W^𝒩 ⊂ L²(Ω) be an 𝒩-dimensional discrete space in which an approximate numerical solution of equation (6) is sought. Let 0 = t^0 < t^1 < ··· < t^K = T be K + 1 time instances in the time interval [0, T]. Given µ ∈ 𝒫 and suitable initial and boundary conditions, the numerical solution u^n(µ) at time t = t^n can be obtained by suitable numerical methods, e.g. the finite volume method. Assume that u^n(µ) ∈ W^𝒩 satisfies the following evolution scheme:

\[
\mathcal{L}_I(t^n)[u^{n+1}(\mu)] = \mathcal{L}_E(t^n)[u^n(\mu)] + g\big(u^n(\mu);\mu\big),
\tag{7}
\]

where ℒ_I(t^n)[·], ℒ_E(t^n)[·] are linear implicit and explicit operators, respectively, and g(·) is a nonlinear µ-dependent operator. These operators are obtained from the discretization of the time derivative and the spatial differential operator ℒ. For implicit FV schemes, ℒ_I(t^n) can be nonlinear, see e.g. [11], but we only consider the linear case in this paper. By convention, u^n(µ) is considered as the "true" solution, by assuming that the numerical solution is a faithful approximation of the exact (analytical) solution u(t^n, x; µ) at the time instance t^n.


The RBM aims to find a suitable low dimensional subspace

W_N = span{V_1, …, V_N} ⊂ W^𝒩,

and to solve the resulting ROM to get the RB approximation û^n(µ) of the "true" solution u^n(µ). In addition to, or instead of, the field variable itself, an approximation of outputs of interest can also be obtained inexpensively as ŷ(µ) = y(û(µ)). More precisely, given a matrix V = [V_1, …, V_N] whose columns span the reduced basis space, Galerkin projection is employed to generate the ROM as follows:

\[
V^T \mathcal{L}_I(t^n)[V a^{n+1}(\mu)] = V^T \mathcal{L}_E(t^n)[V a^n(\mu)] + V^T g\big(V a^n(\mu)\big),
\tag{8}
\]

where a^n(µ) = (a_1^n(µ), …, a_N^n(µ))^T ∈ ℝ^N is the vector of weights in the approximation û^n(µ) = V a^n(µ) = Σ_{i=1}^N a_i^n(µ) V_i, and it is the vector of unknowns of the ROM. Thanks to the linearity of the operators ℒ_I and ℒ_E, the ROM (8) can be rewritten as

\[
V^T \mathcal{L}_I(t^n) V\, a^{n+1}(\mu) = V^T \mathcal{L}_E(t^n) V\, a^n(\mu) + V^T g\big(V a^n(\mu)\big),
\tag{9}
\]

where V^T ℒ_I(t^n) V and V^T ℒ_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), V^T g(V a^n(µ)), cannot be done analogously because of the nonlinearity of g. This is tackled by a technique of empirical interpolation, to be addressed in the next section.

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: P_train, µ^0, tol_RB (< 1)
Output: RB V = [V_1, …, V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ^0, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^{K}
4:   Enrich the RB, e.g. V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ũ = [ũ^0, …, ũ^K], with ũ^n = u^n(µ_max) − Π_{W_N}[u^n(µ_max)], n = 0, …, K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, …, V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while
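The enrichment in Step 4 only requires the dominant left singular vector of the projection-error matrix. A minimal numpy sketch of this step (with a random matrix standing in for the snapshot trajectory) could look as follows:

```python
import numpy as np


def enrich_rb(V, U):
    """One POD-Greedy enrichment step (cf. Step 4 of Algorithm 1).

    V: current orthonormal RB matrix (N x n); U: snapshot matrix (N x K+1).
    Returns V extended by the first POD mode of the projected-out snapshots.
    """
    if V.shape[1] > 0:
        U = U - V @ (V.T @ U)                 # remove the part already in span(V)
    # first left singular vector = first POD mode
    U_left, _, _ = np.linalg.svd(U, full_matrices=False)
    return np.column_stack([V, U_left[:, 0]])


rng = np.random.default_rng(0)
U = rng.standard_normal((100, 30))            # stand-in for a trajectory
V = np.empty((100, 0))
V = enrich_rb(V, U)                           # two greedy enrichment steps
V = enrich_rb(V, U)
```

Because each new mode is computed from the snapshots after projecting out the current basis, the columns of V stay orthonormal by construction.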

Remark 3.1. In Algorithm 1, the error η_N(µ_max) is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution u^n(µ), obtained by solving the full large system, an error bound is usually used instead. This is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error; however, it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: the true error is checked at the parameter determined by the greedy algorithm, enabling an early stop of the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or non-affine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or non-affine part, e.g. V^T g(V a^n(µ)) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a non-affine µ-dependent function g(x; µ) with sufficient regularity, (x, µ) ∈ Ω × 𝒫 ⊂ ℝ^d × ℝ^p, the idea of EIM is to approximate g(x; µ) by a linear combination of a precomputed µ-independent basis W = [W_1, …, W_M], termed the collateral reduced basis (CRB), with corresponding µ-dependent coefficients σ(µ) = [σ_1(µ), …, σ_M(µ)]^T, i.e.

\[
\hat g(x;\mu) = \sum_{i=1}^{M} W_i(x)\,\sigma_i(\mu).
\]

Here the coefficients σ_i are parameter-dependent and are determined by solving the linear system

\[
g(x_j;\mu) = \sum_{i=1}^{M} W_i(x_j)\,\sigma_i(\mu), \qquad j = 1, \ldots, M,
\tag{10}
\]

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ĝ(x; µ) interpolates the exact value g(x; µ) at the EI points T_M = {x_1, …, x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Remark 4.1. Algorithm 2 is used for a fast evaluation of a non-affine function of the coordinate x and the parameter µ by interpolation. In [11, 25] the idea


Algorithm 2 Generation of the CRB and EI points
Input: L^crb_train = {g(x; µ) | µ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, …, W_M] and EI points T_M = {x_1, …, x_M}
1: Initialization: m = 1, W^0_EI = [ ], ‖ξ_0‖ = 1
2: while ‖ξ_{m−1}‖ > tol_CRB do
3:   For each g ∈ L^crb_train, compute the "best" approximation ĝ = Σ_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, …, W_{m−1}}, where the σ_i can be obtained by solving the linear system (10)
4:   Define g_m = arg max_{g ∈ L^crb_train} ‖g − ĝ‖ and the error ξ_m = g_m − ĝ_m
5:   if ‖ξ_m‖ ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis vector:

\[
x_m = \arg\sup_{x\in\Omega} |\xi_m(x)|, \qquad W_m = \frac{\xi_m}{\xi_m(x_m)}.
\tag{11}
\]

9:   end if
10:  m = m + 1
11: end while

was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable u(t, x; µ), e.g. g(u(t, x; µ), x; µ). The evaluation of g(x_j; µ) in (10) is then replaced by g(u(t, x_j; µ), x_j; µ). In this paper we use empirical operator interpolation, where the non-affine operator appears as g(u(t, x; µ); µ). The details are addressed in Section 6.2.
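For functions sampled on a grid, Algorithm 2 reduces to simple array operations. The following is a sketch of such a discrete EIM; the Gaussian training samples and the tolerance are illustrative assumptions:

```python
import numpy as np


def eim(G, tol):
    """Discrete EIM in the spirit of Algorithm 2.

    G: N x n_train matrix whose columns are training samples g(.; mu).
    Returns the CRB W (N x M) and the list of EI (interpolation) indices.
    """
    W, idx = [], []
    # with an empty basis the best approximation is 0, so the first error
    # function is the largest training sample itself
    xi = G[:, np.argmax(np.abs(G).max(axis=0))]
    while np.max(np.abs(xi)) > tol:
        j = int(np.argmax(np.abs(xi)))      # next EI point, cf. (11)
        idx.append(j)
        W.append(xi / xi[j])                # next basis vector, cf. (11)
        Wm = np.array(W).T
        # interpolate all samples at the current EI points, cf. (10)
        sigma = np.linalg.solve(Wm[idx, :], G[idx, :])
        R = G - Wm @ sigma                  # interpolation errors
        xi = R[:, np.argmax(np.abs(R).max(axis=0))]
    return np.array(W).T, idx


# illustrative training set: Gaussians g(x; mu) = exp(-(x - mu)^2 / 0.01)
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.2, 0.8, 25)
G = np.exp(-((x[:, None] - mus[None, :]) ** 2) / 0.01)
W, idx = eim(G, tol=1e-3)
```

On exit, the interpolant built from W and the EI points reproduces every training sample up to the tolerance in the sup-norm.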

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g. Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the size of the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR using model-constrained optimization [9]. In these papers, the authors choose the sample points adaptively in order to get an "optimal" training set. Here an "optimal" training set means that the original manifold ℳ = {u(µ) | µ ∈ 𝒫} can be well represented by the submanifold ℳ̃ = {u(µ) | µ ∈ P_train} induced by the sample set P_train, with the size of P_train as small as possible.

As mentioned before, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, aerodynamics, etc. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ũ, due to the large size of the matrix Ũ. This is also true for the generation of the CRB, if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g. every two or several time steps) as snapshots. However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a simply chosen training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution {u^n(µ)}_{n=0}^K. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 can be read off from the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity 1 − |⟨v_1, v_2⟩|/(‖v_1‖‖v_2‖), with cos(θ) = ⟨v_1, v_2⟩/(‖v_1‖‖v_2‖), is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter µ and the initial vector u^0(µ), the numerical solution u^n(µ) (n = 1, …, K) can be obtained, e.g., by using the evolution scheme (7). Define the indicator

\[
\mathrm{Ind}\big(u^n(\mu), u^m(\mu)\big) = 1 - \frac{|\langle u^n(\mu), u^m(\mu)\rangle|}{\|u^n(\mu)\|\,\|u^m(\mu)\|},
\]

which measures the linear dependency of the two vectors: when Ind(u^n(µ), u^m(µ)) is large, the correlation between u^n(µ) and u^m(µ) is weak. Algorithm 3 shows the realization of the ASS: u^n(µ) is taken as a new snapshot only when u^n(µ) and u^{n_j}(µ) are "sufficiently" linearly independent, which is checked by testing whether Ind(u^n(µ), u^{n_j}(µ)) is large enough. Here u^{n_j}(µ) is the last selected snapshot.

Remark 5.1. The inner product ⟨·,·⟩ : W^𝒩 × W^𝒩 → ℝ used above is defined according to the solution space W^𝒩, and the norm ‖·‖ is the correspondingly induced norm. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. Regarding the linear dependency, it is also possible to check the angle between the tested vector u^n(µ) and the subspace spanned by the already selected snapshots S_A. More redundant information can then be discarded, but at higher cost. Since the data will be compressed further anyway, e.g. by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; based on our observations, a value of O(10^{-4}) gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: initial vector u^0(µ), tol_ASS
Output: selected snapshot matrix S_A = [u^{n_1}(µ), u^{n_2}(µ), …, u^{n_ℓ}(µ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(µ)]
2: for n = 1, …, K do
3:   Compute the vector u^n(µ)
4:   if Ind(u^n(µ), u^{n_j}(µ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(µ)]
8:   end if
9: end for

examples studied in Section 8.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.
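In code, the indicator Ind and the selection loop of Algorithm 3 amount to only a few lines. In the following sketch the time stepper is a stand-in for the FV evolution scheme (7); a slow planar rotation is used only so that consecutive states are nearly parallel:

```python
import numpy as np


def ind(u, v):
    """Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||); small iff u, v nearly parallel."""
    return 1.0 - abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))


def adaptive_snapshot_selection(u0, step, K, tol_ass=1e-4):
    """Algorithm 3: keep u^n only if it is sufficiently linearly independent
    of the last selected snapshot; `step` advances one time step."""
    snapshots = [u0]
    u_last, u = u0, u0
    for n in range(1, K + 1):
        u = step(u)
        if ind(u, u_last) > tol_ass:
            snapshots.append(u)
            u_last = u
    return np.column_stack(snapshots)


# stand-in dynamics: slow rotation, so most states are discarded as redundant
theta = 0.001
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S_A = adaptive_snapshot_selection(np.array([1.0, 0.0]), lambda u: R @ u, K=1000)
```

With tol_ASS = 10⁻⁴, only a small fraction of the 1001 states survives the selection, which is exactly the intended data reduction before the POD-Greedy compression.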

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, µ^0, tol_RB (< 1)
Output: RB V = [V_1, …, V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ^0, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^{K} and adaptively select snapshots using Algorithm 3 to get S^A_max = {u^{n_1}(µ_max), …, u^{n_ℓ}(µ_max)}
4:   Enrich the RB, e.g. V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix U_A = [ũ^{n_1}, …, ũ^{n_ℓ}], with ũ^{n_s} = u^{n_s}(µ_max) − Π_{W_N}[u^{n_s}(µ_max)], s = 1, …, ℓ, ℓ ≪ K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, …, V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while

6 RB scheme for batch chromatography

Reduced basis methods are used to perform a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)–(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use an FV discretization for the batch chromatographic model (3)–(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation of the system (3)–(4) can be written as follows:

\[
\begin{aligned}
A c_z^{n+1} &= B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n, \\
q_z^{n+1} &= q_z^n + \Delta t\, h_z^n,
\end{aligned}
\tag{12}
\]

where c_z^n = c_z^n(µ) = (c_z^{n,1}, …, c_z^{n,𝒩})^T and q_z^n = q_z^n(µ) = (q_z^{n,1}, …, q_z^{n,𝒩})^T ∈ ℝ^𝒩, z = a, b, denote the solutions for the field variables c_z and q_z at the time instance t = t^n (n = 0, …, K), A and B are tridiagonal constant matrices, and d_z^n and h_z^n are parameter- and time-dependent:

\[
d_z^n = d_0^n\, e_1, \qquad h_z^n = \big(h_z^{n,1}, \ldots, h_z^{n,\mathcal N}\big)^T,
\]

with d_0^n = Δx Pe (λ/2 + ν) χ_{[0,t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, …, 0)^T ∈ ℝ^𝒩, and

\[
h_z^{n,j} = h_z\big(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}\big) = \frac{L}{Q/(\varepsilon A_c)}\,\kappa_z\big(f_z(c_a^{n,j}, c_b^{n,j}) - q_z^{n,j}\big), \qquad j = 1, \ldots, \mathcal N.
\]

6.2 Reduced-order model

Let N ∈ ℕ+ be the number of RB vectors for c_z and q_z, and let M ∈ ℕ+ be the number of CRB vectors for the operators h_a and h_b. Here, for simplicity of the analysis, we use the same RB dimension N for c_a, c_b, q_a and q_b, but one can certainly take different dimensions for the RB; the same applies to h_a and h_b. Assume that W_z ∈ ℝ^{𝒩×M} is the CRB for the nonlinear operator h_z, and that V_{c_z}, V_{q_z} ∈ ℝ^{𝒩×N} (V_{c_z}^T V_{c_z} = I, V_{q_z}^T V_{q_z} = I) are the RB matrices for the field variables c_z and q_z, respectively, i.e.

\[
h_z^n \approx W_z \beta_z^n, \qquad c_z^n \approx \hat c_z^n = V_{c_z} a_{c_z}^n, \qquad q_z^n \approx \hat q_z^n = V_{q_z} a_{q_z}^n, \qquad n = 0, \ldots, K.
\tag{13}
\]

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

\[
\begin{aligned}
A_{c_z} a_{c_z}^{n+1} &= B_{c_z} a_{c_z}^n + d_0^n\, d_{c_z} - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, H_{c_z} \beta_z^n, \\
a_{q_z}^{n+1} &= a_{q_z}^n + \Delta t\, H_{q_z} \beta_z^n,
\end{aligned}
\tag{14}
\]

where a_{c_z}^n = a_{c_z}^n(µ) = (a_{c_z}^{n,1}, …, a_{c_z}^{n,N})^T and a_{q_z}^n = a_{q_z}^n(µ) = (a_{q_z}^{n,1}, …, a_{q_z}^{n,N})^T ∈ ℝ^N are the reduced state vectors of the ROM, and A_{c_z} = V_{c_z}^T A V_{c_z}, B_{c_z} = V_{c_z}^T B V_{c_z}, d_{c_z} = V_{c_z}^T e_1, H_{c_z} = V_{c_z}^T W_z, H_{q_z} = V_{q_z}^T W_z are the reduced matrices.


Note that β_z^n = β_z^n(µ) = (β_z^{n,1}, …, β_z^{n,M})^T ∈ ℝ^M is the vector of coefficients for the empirical interpolation of the nonlinear operator h_z^n, and it is parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(µ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

\[
\sum_{i=1}^{M} \beta_z^{n,i}\, W_z^i(x_j) = h_z^{n,j}, \qquad j = 1, \ldots, M.
\]

Here the evaluation of h_z^{n,j} only needs the j-th entries (ĉ_a^{n,j}, ĉ_b^{n,j} and q̂_z^{n,j}) of the approximate solution vectors (ĉ_a^n, ĉ_b^n and q̂_z^n), i.e. h_z^{n,j} = h_z(ĉ_a^{n,j}, ĉ_b^{n,j}, q̂_z^{n,j}). For general empirical operator interpolation, the value of the operator at an interpolation point (e.g. x_j) may depend on more entries of the solution vectors (e.g. the j-th entries and their neighbors). For more details we refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the expensive generation of the reduced model from its cheap evaluation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all 𝒩-dependent terms are computed and stored; in the online stage, for any given parameter µ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sets P^crb_train and P_train (which can be chosen differently), Algorithm 2 is applied to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Consequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g. A_{c_z}, B_{c_z}, d_{c_z}, H_{c_z} and H_{q_z}), and the 𝒩-independent ROM can be formulated as in (14). For any newly given parameter µ ∈ 𝒫, the low dimensional model (14) is solved online, and the approximate solution of the FOM (12) can be recovered by (13).
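The online stage then works exclusively with N- and M-dimensional quantities. The following sketch of one online step of (14) for a single component uses random stand-ins for the reduced matrices that would be assembled offline:

```python
import numpy as np


def rom_step(Ac_r, Bc_r, dc_r, Hc_r, Hq_r, a_c, a_q, beta, d0, dt, eps):
    """One online step of the ROM (14) for one component z.

    All inputs are reduced (N- or M-dimensional); no full-order object appears.
    """
    rhs = Bc_r @ a_c + d0 * dc_r - (1.0 - eps) / eps * dt * (Hc_r @ beta)
    a_c_new = np.linalg.solve(Ac_r, rhs)
    a_q_new = a_q + dt * (Hq_r @ beta)
    return a_c_new, a_q_new


rng = np.random.default_rng(1)
N_rb, M = 8, 5
Ac_r = np.eye(N_rb) + 0.01 * rng.standard_normal((N_rb, N_rb))  # offline: V^T A V
Bc_r = 0.99 * np.eye(N_rb)                                      # offline: V^T B V
dc_r = rng.standard_normal(N_rb)                                # offline: V^T e_1
Hc_r = rng.standard_normal((N_rb, M))                           # offline: V^T W_z
Hq_r = rng.standard_normal((N_rb, M))                           # offline: Vq^T W_z
a_c = np.zeros(N_rb)
a_q = np.zeros(N_rb)
beta = np.zeros(M)                                              # EIM coefficients
a_c, a_q = rom_step(Ac_r, Bc_r, dc_r, Hc_r, Hq_r,
                    a_c, a_q, beta, d0=1.0, dt=1e-3, eps=0.6)
```

The per-step online cost is an N×N solve plus a few small matrix-vector products, independent of the full dimension 𝒩.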

7 Output-oriented error estimation

It is crucial to have a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g. [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in a finite dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response y(u^𝒩) is of interest. Hence, during the greedy algorithm, e.g. Algorithm 1 or Algorithm 4, the error indicator η_N(µ_max) should be an error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2, ∀z_1, z_2 ∈ ℝ^𝒩, and the induced norm ‖·‖ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis functions of the solution space. In such a case, the inner product should be defined properly using the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For the parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling from Section 3 that ℒ_I(t^n) and ℒ_E(t^n) are linear, the evolution scheme (7) can be rewritten in the vector space as

\[
A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\big(u^n(\mu);\mu\big),
\tag{15}
\]

where A^{(n)}, B^{(n)} ∈ ℝ^{𝒩×𝒩} are constant matrices and g(u^n(µ); µ) ∈ ℝ^𝒩 corresponds to the nonlinear term. Note that A^{(n)} and B^{(n)} are nonsingular for a stable scheme in practice, n = 0, …, K−1.

Given a parameter µ ∈ 𝒫, let û^n(µ) = V a^n(µ) be the RB approximation of u^n(µ), and let ĝ^n(µ) = I_M[g(û^n(µ))] = W β^n(µ) be the interpolant of the nonlinear term, where V ∈ ℝ^{𝒩×N}, W ∈ ℝ^{𝒩×M} are the precomputed parameter-independent bases, and a^n(µ) ∈ ℝ^N, β^n(µ) ∈ ℝ^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on µ in u^n(µ), û^n(µ), a^n(µ) and β^n(µ), and write u^n, û^n, a^n and β^n instead. The following a posteriori error estimation is based on the residual

\[
r^{n+1}(\mu) = B^{(n)} \hat u^n + I_M[g(\hat u^n)] - A^{(n)} \hat u^{n+1}.
\tag{16}
\]

A simple computation gives the norm of the residual:

\[
\begin{aligned}
\|r^{n+1}(\mu)\|^2 = {} & \big\langle r^{n+1}(\mu),\, r^{n+1}(\mu)\big\rangle \\
= {} & (a^n)^T\, \underline{V^T (B^{(n)})^T B^{(n)} V}\, a^n
+ (\beta^n)^T\, \underline{W^T W}\, \beta^n \\
& + (a^{n+1})^T\, \underline{V^T (A^{(n)})^T A^{(n)} V}\, a^{n+1}
+ 2\,(\beta^n)^T\, \underline{W^T B^{(n)} V}\, a^n \\
& - 2\,(a^n)^T\, \underline{V^T (B^{(n)})^T A^{(n)} V}\, a^{n+1}
- 2\,(\beta^n)^T\, \underline{W^T A^{(n)} V}\, a^{n+1}.
\end{aligned}
\tag{17}
\]

Proposition 7.1. Assume that the operator g : ℝ^𝒩 → ℝ^𝒩 is Lipschitz continuous, i.e. there exists a positive constant L_g such that

\[
\|g(x) - g(y)\| \le L_g\, \|x - y\|, \qquad x, y \in W^{\mathcal N},
\]

and that the interpolation of g is "exact" for a certain dimension of W = [W_1, …, W_{M+M'}], i.e.

\[
I_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m\, \beta^n_m = g(\hat u^n).
\]

Assume further that for all µ ∈ 𝒫 the initial projection error vanishes, e^0(µ) = 0. Then the approximation error e^n(µ) = u^n − û^n satisfies

\[
\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left( \big\|(A^{(k)})^{-1}\big\| \prod_{j=k+1}^{n-1} G^{(j)} \right) \Big( \varepsilon^k_{EI}(\mu) + \|r^{k+1}(\mu)\| \Big),
\tag{18}
\]

where G^{(j)} = ‖(A^{(j)})^{-1}‖(‖B^{(j)}‖ + L_g) and ε^n_EI(µ) is the error due to the interpolation. A sharper error bound can be given as

\[
\|e^n(\mu)\| \le \eta^n_{N,M}(\mu)
:= \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} G_F^{(j)} \right) \Big( \big\|(A^{(k)})^{-1}\big\|\, \varepsilon^k_{EI}(\mu) + \big\|(A^{(k)})^{-1} r^{k+1}(\mu)\big\| \Big),
\tag{19}
\]

where G_F^{(j)} = ‖(A^{(j)})^{-1}B^{(j)}‖ + L_g‖(A^{(j)})^{-1}‖, n = 0, …, K−1.

Proof. Forming the difference of (15) and (16), we obtain the error equation

\[
\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat u^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \big(g(u^n) - g(\hat u^n)\big) + \big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + r^{n+1}(\mu).
\end{aligned}
\tag{20}
\]

Multiplying both sides of (20) by (A^{(n)})^{-1}, we obtain

\[
\begin{aligned}
e^{n+1}(\mu) = {} & (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\hat u^n)\big) \\
& + (A^{(n)})^{-1}\big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned}
\tag{21}
\]

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(û^n)‖ ≤ L_g‖e^n(µ)‖. Then, by the triangle inequality and the submultiplicativity of the matrix norm,

\[
\|e^{n+1}(\mu)\| \le \big\|(A^{(n)})^{-1}\big\| \Big( \big(\|B^{(n)}\| + L_g\big) \|e^n(\mu)\| + \varepsilon^n_{EI}(\mu) + \|r^{n+1}(\mu)\| \Big),
\tag{22}
\]

where ε^n_EI(µ) = ‖g(û^n) − I_M[g(û^n)]‖ = ‖Σ_{m=M+1}^{M+M'} W_m β^n_m‖. Resolving the recursion (22) with initial error ‖e^0(µ)‖ = 0 yields the error bound in (18).

To get the error bound in (19), we reconsider equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(µ)‖ is of the following form:

\[
\|e^{n+1}(\mu)\| \le \Big( \big\|(A^{(n)})^{-1} B^{(n)}\big\| + L_g \big\|(A^{(n)})^{-1}\big\| \Big) \|e^n(\mu)\|
+ \big\|(A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|,
\tag{23}
\]

since the following two inequalities hold: ‖(A^{(n)})^{-1}B^{(n)}‖ ≤ ‖(A^{(n)})^{-1}‖‖B^{(n)}‖ and ‖(A^{(n)})^{-1}r^{n+1}‖ ≤ ‖(A^{(n)})^{-1}‖‖r^{n+1}‖. Resolving the recursion (23) with initial error ‖e^0(µ)‖ = 0 yields the proposed error bound in (19). ∎

Remark 7.2. In many cases, the operators ℒ_I(t^n) and ℒ_E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^{(n)}, B^{(n)} in (15) are constant matrices, see e.g. (12). In such a case, the error bound becomes much simpler, see e.g. (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when G^{(j)} = ‖(A^{(j)})^{-1}‖(‖B^{(j)}‖ + L_g) > 1 in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if G_F^{(j)} = ‖(A^{(j)})^{-1}B^{(j)}‖ + L_g‖(A^{(j)})^{-1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(µ) using (17). Note that all terms underlined in (17) can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. The same holds for the computation of ‖(A^{(n)})^{-1} r^n(µ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of 𝒩. In addition, as shown in [11], a small M' gives good results; in practice we use M' = 1 in the later simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix H is its spectral norm; therefore

\[
\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.
\]

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we obtain the output error estimate below.
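The precomputation described in Remark 7.4 is easy to verify numerically: with the µ-independent (underlined) matrices of (17) assembled once, the residual norm is evaluated from the reduced coefficients alone, at a cost independent of the full dimension. A small sketch with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(2)
N_full, N_rb, M = 200, 6, 4
A = np.eye(N_full) + 0.01 * rng.standard_normal((N_full, N_full))
B = 0.95 * np.eye(N_full)
V, _ = np.linalg.qr(rng.standard_normal((N_full, N_rb)))   # stand-in RB
W = rng.standard_normal((N_full, M))                       # stand-in CRB

# offline: the mu-independent (underlined) terms of (17), computed once
BtB = V.T @ B.T @ B @ V
AtA = V.T @ A.T @ A @ V
WtW = W.T @ W
WtBV = W.T @ B @ V
VtBtAV = V.T @ B.T @ A @ V
WtAV = W.T @ A @ V


def res_norm_sq(a_n, a_np1, beta):
    """||r^{n+1}||^2 via the precomputed small matrices, cf. (17);
    evaluation cost is independent of N_full."""
    return (a_n @ BtB @ a_n + beta @ WtW @ beta + a_np1 @ AtA @ a_np1
            + 2 * (beta @ WtBV @ a_n) - 2 * (a_n @ VtBtAV @ a_np1)
            - 2 * (beta @ WtAV @ a_np1))


a_n = rng.standard_normal(N_rb)
a_np1 = rng.standard_normal(N_rb)
beta = rng.standard_normal(M)
# direct full-order residual (16), for comparison only
r = B @ (V @ a_n) + W @ beta - A @ (V @ a_np1)
```

The reduced evaluation and the direct full-order computation agree up to round-off, confirming the offline-online split of the residual norm.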

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(µ)) can be expressed in the form

\[
y(u^n(\mu)) = P u^n,
\tag{24}
\]

where P ∈ ℝ^{N_O×𝒩} is a constant matrix. Then the output error e^n_O(µ) = P u^n − P û^n satisfies

\[
\|e^{n+1}_O(\mu)\| \le \bar\eta^{\,n+1}_{N,M}
:= G_O^{(n)}\, \eta^n_{N,M} + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|,
\tag{25}
\]

where G_O^{(n)} = ‖P(A^{(n)})^{-1}B^{(n)}‖ + L_g‖P(A^{(n)})^{-1}‖, n = 0, …, K−1.

Proof. Multiplying both sides of the error equation (21) by P from the left, we get

\[
\begin{aligned}
P e^{n+1}(\mu) = P \Big( & (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\hat u^n)\big) \\
& + (A^{(n)})^{-1}\big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu) \Big).
\end{aligned}
\]

Applying the Lipschitz condition of g and using the triangle inequality as well as the properties of the matrix norm, we have

\[
\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\|
\le G_O^{(n)}\, \|e^n(\mu)\| + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|.
\tag{26}
\]

Replacing ‖e^n(µ)‖ in (26) by its bound in (19), we get the proposed output error bound in (25). ∎

Remark 7.7. Once the error estimation for the field variable is obtained, e.g. (19), a trivial error bound for the output (24) can be given as

$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le \|P\|\, \|e^{n+1}(\mu)\| \le \|P\| \Big( G_F(n)\, \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\| \Big).$    (27)

The last inequality holds due to the inequality (23). Obviously, the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27); as a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).
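The sharpness claim can be checked numerically: by submultiplicativity of the spectral norm, every coefficient in (26) is bounded by $\|P\|$ times its counterpart in (27), e.g. $\|P (A^{(n)})^{-1} B^{(n)}\| \le \|P\|\, \|(A^{(n)})^{-1} B^{(n)}\|$. A minimal sketch with random data follows (all matrices and constants are illustrative, not taken from the chromatographic model); it also verifies the identity $\|H^{-1}\| = 1/\sigma_{\min}(H)$ of Remark 7.5:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30                                        # toy state dimension
A = np.diag(1.0 + rng.random(N)) + 0.05 * rng.standard_normal((N, N))
B = 0.5 * rng.standard_normal((N, N))
P = np.zeros((1, N)); P[0, -1] = 1.0          # output: pick the last state entry
L_g = 0.2                                     # assumed Lipschitz constant of g

Ainv = np.linalg.inv(A)
nrm = lambda M: np.linalg.norm(M, 2)          # spectral norm (Remark 7.5)

G_O = nrm(P @ Ainv @ B) + L_g * nrm(P @ Ainv)    # growth factor of the bound (26)
G_F = nrm(Ainv @ B) + L_g * nrm(Ainv)            # growth factor of the bound (27)

# (26) is sharper than (27): each coefficient is smaller
assert G_O <= nrm(P) * G_F + 1e-12
# Remark 7.5: ||A^{-1}|| equals 1/sigma_min(A)
assert abs(nrm(Ainv) - 1.0 / np.linalg.svd(A, compute_uv=False).min()) < 1e-8
```

Since only norms of small precomputable matrices enter, such checks (and the bound evaluation itself) are cheap.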

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore preferable to generate a separate reduced basis for each field variable rather than using a unified basis for all of them.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example. Recalling the full-order scheme for $c_z$ (see (12)),

$A c^{n+1}_z = B c^n_z + d^n_z - \dfrac{1-\varepsilon}{\varepsilon}\, \Delta t\, h^n_z,$    (28)

the residual caused by the approximate solution $\hat{c}^n_z$ in (13) is

$r^{n+1}_{c_z}(\mu) = B \hat{c}^n_z + d^n_z - \dfrac{1-\varepsilon}{\varepsilon}\, \Delta t\, I_M[h_z(\hat{c}^n_z)] - A \hat{c}^{n+1}_z.$    (29)
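For intuition, one step of the scheme (28) and the residual (29) can be sketched as follows. The function names are placeholders, and `h_interp_n` stands in for the interpolated nonlinearity $I_M[h_z(\hat{c}^n_z)]$ (here simply passed in as a vector):

```python
import numpy as np

def advance(A, B, c_n, d_n, h_n, dt, eps):
    """One step of scheme (28): solve A c^{n+1} = B c^n + d^n - tau h^n."""
    tau = (1.0 - eps) / eps * dt
    rhs = B @ c_n + d_n - tau * h_n
    return np.linalg.solve(A, rhs)

def residual(A, B, c_hat_n, c_hat_np1, d_n, h_interp_n, dt, eps):
    """Residual (29) of an approximate trajectory."""
    tau = (1.0 - eps) / eps * dt
    return B @ c_hat_n + d_n - tau * h_interp_n - A @ c_hat_np1

# toy check: the exact trajectory has zero residual when h_interp_n == h_n
N = 20
rng = np.random.default_rng(1)
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))
B = 0.9 * np.eye(N)
c0, d0, h0 = rng.random(N), rng.random(N), rng.random(N)
c1 = advance(A, B, c0, d0, h0, dt=1e-3, eps=0.4)
r = residual(A, B, c0, c1, d0, h0, dt=1e-3, eps=0.4)
assert np.linalg.norm(r) < 1e-10
```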

Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This makes the following error bounds in (32) and (33) relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition and does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume that there exists a positive constant $L_h$ such that

$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\hat{c}^n_a, \hat{c}^n_b, \hat{q}^n_z)\| \le L_h \|c^n_z - \hat{c}^n_z\|, \quad n = 0, \ldots, K.$    (30)

Assuming that the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \hat{c}^n_z$ ($n = 1, \ldots, K$) as follows:

$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \left( \tau\, \varepsilon^k_{EI}(\mu) + \|r^{k+1}_{c_z}(\mu)\| \right),$    (31)

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon} \Delta t$. More tightly,

$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) = \sum_{k=0}^{n-1} (G_{F,c})^{n-1-k} \left( \tau\, \|A^{-1}\|\, \varepsilon^k_{EI}(\mu) + \|A^{-1} r^{k+1}_{c_z}(\mu)\| \right),$    (32)

where $G_{F,c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest $e^n_{c_z,O}(\mu) = P c^n_z - P \hat{c}^n_z$ can be obtained based on the error bound of the field variable. Similar to (25), we have

$\|e^{n+1}_{c_z,O}(\mu)\| \le \eta^{n+1}_{N,M,c_z}(\mu) = G_{O,c}\, \eta^n_{N,M,c_z}(\mu) + \tau\, \|P A^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \|A^{-1} r^{n+1}_{c_z}(\mu)\|,$    (33)

where $G_{O,c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1 \times N}$ in this model, which means that the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) depends only on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; likewise, the corresponding bound (denoted by $\eta^n_{N,M,U}(\mu)$) for the vector $U$ would be involved if the output error estimation were derived by considering all the field variables together. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can conservatively be chosen large, and the weight $\tau L_h$ remains small because the time step $\Delta t$ is typically very small.
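The bounds (32) and (33) can be accumulated on the fly while the reduced model is simulated, using the recursive form $\eta^{n+1} = G\,\eta^{n} + (\text{new terms})$ of the sums. A minimal sketch with synthetic residual and interpolation-error norms (all sizes and values below are placeholders, not data from the chromatographic model):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 40, 50                                   # toy spatial dofs and time steps
A = np.diag(1.0 + rng.random(N)) + 0.01 * rng.standard_normal((N, N))
B = 0.8 * np.eye(N)
P = np.zeros((1, N)); P[0, -1] = 1.0            # output: concentration at the outlet
tau, L_h = 1e-3, 5.0                            # tau = (1-eps)/eps * dt; Lipschitz const.

Ainv = np.linalg.inv(A)
nrm = lambda M: np.linalg.norm(M, 2)
G_Fc = nrm(Ainv @ B) + tau * L_h * nrm(Ainv)          # growth factor of (32)
G_Oc = nrm(P @ Ainv @ B) + tau * L_h * nrm(P @ Ainv)  # growth factor of (33)

eps_EI = 1e-8 * rng.random(K)                   # placeholder EI errors eps_EI^n
res = 1e-7 * rng.random((K, N))                 # placeholder residuals r^{n+1}_{c_z}

eta_field, eta_out = 0.0, 0.0
for n in range(K):
    Ar = np.linalg.norm(Ainv @ res[n])          # ||A^{-1} r^{n+1}_{c_z}||
    eta_out = G_Oc * eta_field + tau * nrm(P @ Ainv) * eps_EI[n] + nrm(P) * Ar  # (33)
    eta_field = G_Fc * eta_field + tau * nrm(Ainv) * eps_EI[n] + Ar             # (32)

assert np.isfinite(eta_field) and eta_out > 0.0
```

When $G_{F,c} > 1$ the recursion makes the accumulation of the bound over many time steps directly visible, which is the effect discussed in Section 7.3.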

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates with time. Since $\eta^n_{N,M}(\mu)$ ($\eta^n_{N,M,c_z}(\mu)$, respectively) enters the output error bound in (25) ((33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that an error estimate such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that enables an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4 we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. If the true output error at $\mu_{\max}$ is smaller than $\mathrm{tol}_{RB}$, we assume that there is no need to include a new basis vector, and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early stop
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{RB} (< 1)$, $\mathrm{tol}_{\mathrm{decay}}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4.
2: while the error $\eta_N(\mu_{\max}) > \mathrm{tol}_{RB}$ do
3:   Implement Steps 3−6 in Algorithm 4.
4:   Compute the decay rate of the error bound, $d_\eta = \big( \eta_{N-1}(\mu^{\mathrm{old}}_{\max}) - \eta_N(\mu_{\max}) \big) / \eta_{N-1}(\mu^{\mathrm{old}}_{\max})$.
5:   if $d_\eta < \mathrm{tol}_{\mathrm{decay}}$ then
6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
7:     if $e_N(\mu_{\max}) < \mathrm{tol}_{RB}$ then
8:       Stop.
9:     end if
10:   end if
11: end while
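The control flow of Algorithm 5 can be sketched as follows; `extend_basis` and `true_output_error` are hypothetical callbacks standing in for Steps 3–6 of Algorithm 4 and for a detailed simulation at $\mu_{\max}$, respectively:

```python
def greedy_with_early_stop(extend_basis, true_output_error,
                           tol_rb, tol_decay, max_iter=100):
    """Sketch of Algorithm 5: stop when the bound is small enough, or when it
    stagnates (decay rate below tol_decay) while the true error is already small."""
    eta_old = None
    for _ in range(max_iter):
        mu_max, eta = extend_basis()           # Steps 3-6 of Algorithm 4
        if eta <= tol_rb:
            return "converged"
        if eta_old is not None:
            d_eta = (eta_old - eta) / eta_old  # decay rate of the error bound
            if d_eta < tol_decay and true_output_error(mu_max) < tol_rb:
                return "early-stop"            # bound stagnates, true error is fine
        eta_old = eta
    return "max-iter"

# toy run: the bound stagnates at 1e-3 while the true error is already tiny
bounds = iter([1.0, 1e-1, 1e-2, 1e-3, 1e-3, 1e-3, 1e-3])
status = greedy_with_early_stop(
    extend_basis=lambda: ((0.1, 1.0), next(bounds)),
    true_output_error=lambda mu: 1e-8,
    tol_rb=1e-6, tol_decay=0.05)
assert status == "early-stop"
```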


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. To account for such a case, the tolerance $\mathrm{tol}_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 takes effect.

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is chosen optimally in a reasonable parameter domain to maximize the production rate $\mathrm{Pr}(\mu) = \frac{s(\mu)\, Q}{t_{cyc}}$ while respecting a requirement on the recovery yield $\mathrm{Rec}(\mu) = \frac{s(\mu)}{t_{in} (c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\, dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\, dt$, where $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the following optimization problem for batch chromatography:

$\min_{\mu \in \mathcal{P}} -\mathrm{Pr}(\mu)$
s.t. $\mathrm{Rec}_{\min} - \mathrm{Rec}(\mu) \le 0, \quad \mu \in \mathcal{P},$
     $c_z(\mu), q_z(\mu)$ are the solutions to the system (3)−(5), $z = a, b.$    (34)

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes considerable difficulties in the error estimation and in the generation of the reduced basis.
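Once the cutting points are available, $s(\mu)$, $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ reduce to simple quadratures of the outlet trajectories. A minimal sketch with the trapezoidal rule follows; the determination of the cutting points from the detection threshold and the purity specifications is omitted, and the toy trajectory is purely illustrative:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for samples y on the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def production_and_recovery(t, ca_out, cb_out, cuts, Q, t_in, cf_a, cf_b):
    """Evaluate Pr(mu) = s*Q/t_cyc and Rec(mu) = s/(t_in*(cf_a+cf_b));
    cuts = (t1, t2, t3, t4) are assumed to be determined beforehand."""
    t1, t2, t3, t4 = cuts
    wa = (t >= t3) & (t <= t4)                 # collection window of component a
    wb = (t >= t1) & (t <= t2)                 # collection window of component b
    s = trap(ca_out[wa], t[wa]) + trap(cb_out[wb], t[wb])
    t_cyc = t4 - t1
    return s * Q / t_cyc, s / (t_in * (cf_a + cf_b))

# toy outlet trajectory: two separated rectangular peaks
t = np.linspace(0.0, 10.0, 2001)
ca = np.where((t > 4.0) & (t < 6.0), 1.0, 0.0)
cb = np.where((t > 1.0) & (t < 3.0), 1.0, 0.0)
Pr, Rec = production_and_recovery(t, ca, cb, cuts=(1.0, 3.0, 4.0, 6.0),
                                  Q=0.1, t_in=1.0, cf_a=2.9, cf_b=2.9)
assert Pr > 0.0 and 0.0 < Rec < 1.0
```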

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}_{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$, $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $N$ of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

Column dimensions [cm]                                2.6 × 10.5
Column porosity ε [−]                                 0.4
Peclet number Pe [−]                                  2000
Mass-transfer coefficients κ_z, z = a, b [1/s]        0.1
Feed concentrations c^f_z, z = a, b [g/l]             2.9

Table 2: Coefficients of the adsorption isotherm equation.

H_{a1} [−]     2.69       H_{b1} [−]     3.73
H_{a2} [−]     0.1        H_{b2} [−]     0.3
K_{a1} [l/g]   0.0336     K_{b1} [l/g]   0.0446
K_{a2} [l/g]   1.0        K_{b2} [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selection (ASS) for the generation of the RB and the CRB, then examine the output error estimation for the generation of the RB, and finally present the results of the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtimes for the generation of the RB and the CRB with different threshold values $\mathrm{tol}_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{in})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $\mathrm{tol}_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved, which means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $\mathrm{tol}_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. How to choose an optimal threshold, however, is empirical and problem-dependent.

Table 3: Generation of the CRBs ($W_a$, $W_b$) at the same error tolerance ($\mathrm{tol}_{CRB} = 1.0 \times 10^{-7}$) with different thresholds. $M' = 1$ is the number of basis vectors used for the error estimation.

           tol_ASS       Res(ξ^a_{M+M'}, ξ^b_{M+M'})    M (W_a, W_b)    Runtime [h]
no ASS     –             9.2×10⁻⁸, 8.5×10⁻⁸             146, 152        62.5 (–)
ASS        1.0×10⁻⁴      9.6×10⁻⁸, 8.1×10⁻⁸             147, 152        6.05 (−90.3%)
ASS        1.0×10⁻³      8.7×10⁻⁸, 9.9×10⁻⁸             147, 152        3.62 (−94.2%)
ASS        1.0×10⁻²      9.4×10⁻⁸, 6.2×10⁻⁸             144, 150        2.70 (−95.7%)
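The idea behind the ASS can be sketched as follows: during a trajectory simulation, a snapshot is accepted only if it is not yet well represented by the span of the snapshots selected so far. This is a simplified illustration of the selection principle, not the actual Algorithm 3; the projection-based acceptance test and all names are assumptions:

```python
import numpy as np

def adaptive_snapshot_selection(snapshots, tol_ass):
    """Keep a snapshot only if its relative projection error onto the span of
    the already selected snapshots exceeds tol_ass (illustrative criterion)."""
    selected = []
    Q = None                                  # orthonormal basis of the selected span
    for s in snapshots:
        if Q is None:
            err = 1.0
        else:
            err = np.linalg.norm(s - Q @ (Q.T @ s)) / max(np.linalg.norm(s), 1e-14)
        if err > tol_ass:
            selected.append(s)
            Q, _ = np.linalg.qr(np.column_stack(selected))
    return selected

# a slowly varying trajectory yields far fewer selected snapshots
t = np.linspace(0, 1, 200)
traj = [np.array([np.sin(ti), np.cos(ti), ti]) for ti in t]
kept = adaptive_snapshot_selection(traj, tol_ass=1e-2)
assert 1 <= len(kept) < len(traj)
```

A larger `tol_ass` discards more snapshots, which mirrors the runtime behavior reported in Table 3.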

Table 4 compares the runtimes for the generation of the RB by the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with $\mathrm{tol}_{ASS} = 1.0 \times 10^{-4}$ for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $\mathrm{tol}_{CRB} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{RB} = 1.0 \times 10^{-6}$, $\mathrm{tol}_{ASS} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $\mathrm{tol}_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtimes for the RB generation using the POD-Greedy algorithm with early stop (Algorithm 5), with and without ASS.

Algorithm            Runtime [h]
POD-Greedy           16.22 ¹
ASS-POD-Greedy       7.92 (−51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound

As mentioned before, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g. Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} \bar{e}_{N,c_z}(\mu)$, $\bar{e}_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \hat{c}^n_{z,O}(\mu)\|$, and $\hat{c}^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where $\mathrm{tol}_{\mathrm{decay}} = 0.03$. With the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm; the size of a circle shows how frequently the same parameter is selected.

We point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations over a validation set $\mathcal{P}_{\mathrm{val}}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\mathrm{Max\ error} = \max_{\mu \in \mathcal{P}_{\mathrm{val}}} e_N(\mu)$. It is seen that the average runtime of a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $\mathrm{tol}_{RB}$. In addition, the concentrations at the outlet of the column computed with the FOM and with the ROM at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2: plot omitted. Horizontal axis: size of the RB, $N$ (6 to 66); vertical axis: maximal error over $\mu \in \mathcal{P}_{\mathrm{train}}$, logarithmic scale from $10^{-9}$ to $10^{3}$. Curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal{P}_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.

[Figure 3: plot omitted. Horizontal axis: size of the RB, $N$ (6 to 56); vertical axis: maximal error over $\mu \in \mathcal{P}_{\mathrm{train}}$, logarithmic scale from $10^{-7}$ to $10^{3}$. Curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4: plot omitted. Horizontal axis: feed flow rate $Q$ (0.0667 to 0.1667); vertical axis: injection period $t_{in}$ (0.5 to 2).]

Figure 4: Parameter selection during the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal{P}_{\mathrm{val}}$ with 600 random sample points. Tolerances for the generation of the ROM: $\mathrm{tol}_{CRB} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{ASS} = 1.0 \times 10^{-4}$, $\mathrm{tol}_{RB} = 1.0 \times 10^{-6}$.

Simulations               Max error       Average runtime [s]    SpF
FOM (N = 1500)            –               312.13                 (–)
ROM, POD-Greedy           3.79 × 10⁻⁷     6.3                    50
ROM, ASS-POD-Greedy       4.58 × 10⁻⁷     6.3                    50

[Figure 5: plot omitted. Horizontal axis: dimensionless time (0 to 11); vertical axis: dimensionless concentration (0 to 0.9). Curves: $c_a$ (FOM), $c_b$ (FOM), $c_a$ (ROM), $c_b$ (ROM).]

Figure 5: Concentrations at the outlet of the column computed with the FOM (N = 1500) and the ROM (N = 47) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.

8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$\min_{\mu \in \mathcal{P}} -\widehat{\mathrm{Pr}}(\mu)$
s.t. $\mathrm{Rec}_{\min} - \widehat{\mathrm{Rec}}(\mu) \le 0,$
     $\hat{c}^n_z(\mu), \hat{q}^n_z(\mu)$ are the RB approximations from the ROM (14), $z = a, b.$

Here $\widehat{\mathrm{Pr}}(\mu)$ and $\widehat{\mathrm{Rec}}(\mu)$ are the approximate production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and the recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < \mathrm{tol}_{\mathrm{opt}}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $\mathrm{tol}_{\mathrm{opt}} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the optimization problem is significantly reduced compared with the FOM-based optimization: the speed-up factor (SpF) is 54.

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if no ASS is used for the generation of the CRB and the RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many-query simulations with varying parameters, the following two runtimes should be well balanced: the runtime for constructing and using a surrogate ROM, and the runtime for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and on the FOM.

Simulations        Objective (Pr)    Opt. solution (µ)      N_it ¹    Runtime [h]    SpF
FOM-based Opt.     0.020264          (0.07964, 1.05445)     202       33.88          –
ROM-based Opt.     0.020266          (0.07964, 1.05445)     202       0.63           54

¹ N_it denotes the number of iterations required to converge.

9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. Empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative, due to the accumulation of the error over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to stop the RB extension in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is well suited for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.
[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academie des Sciences, Paris, Series I, 339 (2004), pp. 667–672.
[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.
[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.
[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).
[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).
[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.
[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.
[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.
[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.
[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.
[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.
[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.
[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268–280.
[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.
[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.
[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.
[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.
[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.
[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.
[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.
[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.
[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.
[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.
[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.
[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal Control of Partial Differential Equations: International Conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.
[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).
[28] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, vol. 31, Cambridge University Press, 2002.
[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.
[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.
[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455–462.
[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.
[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.
[34] A. T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, 2006.
[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.
[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


where $\mathcal{J}$ is the objective function and $\Psi$ defines the inequality constraints. The field variable $u(t, x; \mu)$ is the solution to the underlying parametrized partial differential equations, $\Phi(u(t, x; \mu)) = 0$, $\mu \in \mathcal{P}$. Such optimization problems arise in many applications, e.g. aerodynamics, fluid dynamics and chemical processes. In practical computations, the PDEs are usually discretized, such that the optimization problem in (1) is replaced by an optimization problem in finite dimensions:

$\min_{\mu \in \mathcal{P}} J(u_N(t, \mu), \mu)$
s.t. $\Psi(u_N(t, \mu), \mu) \le 0,$
     $\Phi(u_N(t, \mu), \mu) = 0,$    (2)

where $u_N = u_N(t, \mu) \in \mathbb{R}^{N}$ is the solution to the discretized system of equations $\Phi(u_N(t, \mu), \mu) = 0$, and $J$, $\Psi$ and $\Phi$ are the operators in the finite dimensional vector space corresponding to their continuous counterparts in (1), respectively. The discretized equations are often of very large scale and complex, and at each iteration of the optimization process such a large-scale complex system of equations must be solved at least once. As a result, the whole optimization process is time-consuming. To accelerate the underlying optimization, a surrogate ROM can be employed to replace the original large-scale discretized system for a rapid evaluation of the vector of unknowns $u_N$.

To further motivate and illustrate our methods, we consider a particular example: the optimal operation of batch chromatography. Batch chromatography, a crucial separation and purification tool, is widely employed in the food, fine chemical and pharmaceutical industries. The principle of batch elution chromatography for binary separation is shown schematically in Figure 1. During the injection period $t_{in}$, a mixture consisting of components a and b is injected at the inlet of a column packed with a suitable stationary phase. With the help of the mobile phase, the feed mixture flows through the column. Since the solutes to be separated exhibit different adsorption affinities to the stationary phase, they move at different velocities in the column and thus separate from each other when exiting the column. At the column outlet, component a is collected between the cutting points $t_3$ and $t_4$, and component b is collected between $t_1$ and $t_2$. Here the positions of $t_1$ and $t_4$ are determined by a minimum concentration threshold that the detector can resolve, and the positions of $t_2$ and $t_3$ are determined by the purity specifications ($\mathrm{Pu}_a$ and $\mathrm{Pu}_b$) imposed on the products. After a cycle period $t_{cyc} = t_4 - t_1$, the injection is repeated.

The dynamic behavior of the chromatographic process is described by an axially dispersed plug-flow model with a limited mass-transfer rate characterized by a linear driving force approximation. The governing equations in dimensionless form are formulated as follows:

$\dfrac{\partial c_z}{\partial t} + \dfrac{1-\varepsilon}{\varepsilon} \dfrac{\partial q_z}{\partial t} = -\dfrac{\partial c_z}{\partial x} + \dfrac{1}{\mathrm{Pe}} \dfrac{\partial^2 c_z}{\partial x^2}, \quad 0 < x < 1,$

$\dfrac{\partial q_z}{\partial t} = \dfrac{L}{Q/(\varepsilon A_c)}\, \kappa_z \left( q^{Eq}_z - q_z \right), \quad 0 \le x \le 1,$    (3)

where $c_z$, $q_z$ are the concentrations of component $z$ ($z = a, b$) in the liquid and solid phase, respectively, $Q$ the volumetric feed flow rate, $A_c$ the cross-sectional area of the column with length $L$, $\varepsilon$ the column porosity, $\kappa_z$ the mass-transfer coefficient, and $\mathrm{Pe}$ the Peclet number. The adsorption equilibrium $q^{Eq}_z$ is described by the isotherm equations of bi-Langmuir type:

$q^{Eq}_z = f_z(c_a, c_b) = \dfrac{H_{z1} c_z}{1 + K_{a1} c^f_a c_a + K_{b1} c^f_b c_b} + \dfrac{H_{z2} c_z}{1 + K_{a2} c^f_a c_a + K_{b2} c^f_b c_b},$    (4)

where $c_{f_z}$ is the feed concentration of component $z$, and $H_{zj}$ and $K_{zj}$ are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as follows:

$$c_z(0, x) = 0, \quad q_z(0, x) = 0, \quad 0 \le x \le 1,$$
$$\left.\frac{\partial c_z}{\partial x}\right|_{x=0} = Pe\left(c_z(t, 0) - \chi_{[0, t_{in}]}(t)\right), \quad \left.\frac{\partial c_z}{\partial x}\right|_{x=1} = 0, \qquad (5)$$

where $t_{in}$ is the injection period and $\chi_{[0, t_{in}]}$ is the characteristic function

$$\chi_{[0, t_{in}]}(t) = \begin{cases} 1, & \text{if } t \in [0, t_{in}], \\ 0, & \text{otherwise.} \end{cases}$$

More details about the mathematical modeling of batch chromatography can be found in [20].
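The bi-Langmuir isotherm (4) is cheap to evaluate pointwise. A minimal sketch in Python, with purely illustrative parameter values (the Henry constants, thermodynamic coefficients and feed concentrations below are hypothetical placeholders, not values from this paper):

```python
# Hypothetical isotherm data for illustration only; real values are estimated
# experimentally for a given column and solute pair.
H = {"a": (2.0, 1.0), "b": (3.0, 1.5)}   # Henry constants H_{z,1}, H_{z,2}
K1 = {"a": 0.05, "b": 0.10}              # thermodynamic coefficients K_{z,1}
K2 = {"a": 0.02, "b": 0.04}              # thermodynamic coefficients K_{z,2}
cf = {"a": 1.0, "b": 1.0}                # feed concentrations c_{f,z}

def q_eq(z, ca, cb):
    """Bi-Langmuir equilibrium loading q_z^Eq = f_z(c_a, c_b), cf. (4)."""
    cz = ca if z == "a" else cb
    den1 = 1.0 + K1["a"] * cf["a"] * ca + K1["b"] * cf["b"] * cb
    den2 = 1.0 + K2["a"] * cf["a"] * ca + K2["b"] * cf["b"] * cb
    return H[z][0] * cz / den1 + H[z][1] * cz / den2
```

Note that $f_z(0, 0) = 0$, consistent with the zero initial conditions in (5).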

Note that the feed flow rate $Q$ and the injection period $t_{in}$ are often taken as the operating variables, denoted as $\mu = (Q, t_{in})$, which play the role of parameters in the PDEs (3)-(5). The system of PDEs is nonlinear, time-dependent, and has non-affine parameter dependency; the nonlinearity of the system enters through (4). To capture the system dynamics precisely, a large number of DOFs must be introduced for the discretization of the PDEs.

The optimal operation of batch chromatography is of practical importance, since it allows one to exploit the full economic potential of the process and to reduce the separation


Figure 1: Sketch of a batch chromatographic process for the separation of $a$ and $b$.


cost. Many efforts have been made toward the optimization of batch chromatography over the past several decades. An extensive review of the early work can be found in [20] and the references therein. An iterative optimization approach for batch chromatography was addressed in [17]. A hierarchical approach to optimal control for a hybrid batch chromatographic process was developed in [19]. Notably, all these studies are based on a finely discretized FOM. Such a model, with a large number of DOFs, is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be solved repeatedly in the optimization process, which makes the runtime for obtaining the optimal solution rather long.

In this work, the RBM is employed to generate a surrogate ROM of the parameterized PDEs. The resulting ROM is used for a rapid evaluation of the output response $y(u_{\mathcal{N}})$ of the discretized system $\Phi(u_{\mathcal{N}}(t, \mu); \mu) = 0$ in (2) during the optimization process. In the next section, we review the RBM and highlight some difficulties arising there.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parameterized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution $u(\mu)$ to the parametrized PDEs depends smoothly on the parameter $\mu$ in the parameter domain $\mathcal{P}$, such that for any parameter $\mu \in \mathcal{P}$ the corresponding solution $u(\mu)$ can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used to certify the quality of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain $\Omega \subset \mathbb{R}^d$ and the parameter domain $\mathcal{P} \subset \mathbb{R}^p$:

$$\partial_t u(t, x; \mu) + \mathcal{L}[u(t, x; \mu)] = 0, \quad t \in [0, T], \ x \in \Omega, \ \mu \in \mathcal{P}, \qquad (6)$$

where $\mathcal{L}[\cdot]$ is a spatial differential operator. Let $\mathcal{W}^{\mathcal{N}} \subset L^2(\Omega)$ be an $\mathcal{N}$-dimensional discrete space in which an approximate numerical solution to equation (6) is sought. Let $0 = t^0 < t^1 < \cdots < t^K = T$ be $K+1$ time instants in the time interval $[0, T]$. Given $\mu \in \mathcal{P}$ with suitable initial and boundary conditions, the numerical solution $u^n(\mu)$ at time $t = t^n$ can be obtained by suitable numerical methods, e.g., the finite volume method. Assume that $u^n(\mu) \in \mathcal{W}^{\mathcal{N}}$ satisfies the following evolution scheme:

$$L_I(t^n)[u^{n+1}(\mu)] = L_E(t^n)[u^n(\mu)] + g(u^n(\mu); \mu), \qquad (7)$$

where $L_I(t^n)[\cdot]$, $L_E(t^n)[\cdot]$ are linear implicit and explicit operators, respectively, and $g(\cdot)$ is a nonlinear $\mu$-dependent operator. These operators are obtained from the discretization of the time derivative and of the spatial differential operator $\mathcal{L}$. For implicit schemes of FVMs, $L_I(t^n)$ can be nonlinear, see, e.g., [11], but we only consider the linear case in this paper. By convention, $u^n(\mu)$ is considered as the "true" solution, by assuming that the numerical solution is a faithful approximation of the exact (analytical) solution $u(t^n, x; \mu)$ at the time instance $t^n$.


The RBM aims to find a suitable low-dimensional subspace

$$W_N = \mathrm{span}\{V_1, \ldots, V_N\} \subset \mathcal{W}^{\mathcal{N}},$$

and to solve the resulting ROM to obtain the RB approximation $\hat{u}^n(\mu)$ to the "true" solution $u^n(\mu)$. In addition or alternatively to the field variable itself, the approximation of outputs of interest can also be obtained inexpensively by $\hat{y}(\mu) = y(\hat{u}(\mu))$. More precisely, given a matrix $V = [V_1, \ldots, V_N]$ whose columns span the reduced basis, Galerkin projection is employed to generate the ROM as follows:

$$V^T L_I(t^n)[V a^{n+1}(\mu)] = V^T L_E(t^n)[V a^n(\mu)] + V^T g(V a^n(\mu)), \qquad (8)$$

where $a^n(\mu) = (a_1^n(\mu), \ldots, a_N^n(\mu))^T \in \mathbb{R}^N$ is the vector of weights in the approximation $\hat{u}^n(\mu) = V a^n(\mu) = \sum_{i=1}^N a_i^n(\mu) V_i$, and it is the vector of unknowns in the ROM. Thanks to the linearity of the operators $L_I$ and $L_E$, the ROM (8) can be rewritten as

$$V^T L_I(t^n) V\, a^{n+1}(\mu) = V^T L_E(t^n) V\, a^n(\mu) + V^T g(V a^n(\mu)), \qquad (9)$$

where $V^T L_I(t^n) V$ and $V^T L_E(t^n) V$ can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), $V^T g(V a^n(\mu))$, cannot be done analogously, because of the nonlinearity of $g$. This will be tackled by a technique of empirical interpolation, to be addressed in the next section.

How to generate the RB $V$ is crucial and is still an active field of study. A popular algorithm for generating the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: $P_{train}$, $\mu^0$, $tol_{RB} (< 1)$
Output: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{max} = \mu^0$, $\eta_N(\mu_{max}) = 1$
2: while $\eta_N(\mu_{max}) > tol_{RB}$ do
3:   Compute the trajectory $S_{max} = \{u^n(\mu_{max})\}_{n=0}^K$
4:   Enrich the RB, e.g., $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar{U} = [\bar{u}^0, \ldots, \bar{u}^K]$ with $\bar{u}^n = u^n(\mu_{max}) - \Pi_{W_N}[u^n(\mu_{max})]$, $n = 0, \ldots, K$; $\Pi_{W_N}[u]$ is the projection of $u$ onto the current space $W_N = \mathrm{span}\{V_1, \ldots, V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{max} = \arg\max_{\mu \in P_{train}} \eta_N(\mu)$
7: end while

Remark 3.1. In Algorithm 1, the error $\eta_N(\mu_{max})$ is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution $u^n(\mu)$, obtained by solving the full large system, an error bound is usually used instead; this is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.
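As a small illustration of this remark, the first POD mode can be obtained from a standard SVD; the rank-one snapshot matrix below is a toy example, not data from the paper:

```python
import numpy as np

def first_pod_mode(U):
    """First POD mode: the left singular vector of U belonging to the largest
    singular value (numpy returns singular values in descending order)."""
    left, _, _ = np.linalg.svd(U, full_matrices=False)
    return left[:, 0]

# Toy rank-one snapshot matrix: every column is a multiple of one direction,
# so the first POD mode must span exactly that direction (up to sign).
direction = np.array([3.0, 4.0]) / 5.0
U = np.outer(direction, [1.0, 2.0, 3.0])
mode = first_pod_mode(U)
```

Up to sign, `mode` coincides with `direction` in this rank-one case.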


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute than the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: checking the true error at the parameter determined by the greedy algorithm, thereby enabling an early stop of the extension of the RB. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps $K$ needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or non-affine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or non-affine part, e.g., $V^T g(V a^n(\mu))$ in (8), requires computations in the original full space. In such a case, the EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can be used to treat an operator that depends on the parameter, the field variable, and the spatial variable as well. The idea of the EIM, introduced in [2], is briefly presented as follows.

Given a non-affine $\mu$-dependent function $g(x; \mu)$ with sufficient regularity, $(x, \mu) \in \Omega \times \mathcal{P} \subset \mathbb{R}^d \times \mathbb{R}^p$, the idea of the EIM is to approximate $g(x; \mu)$ by a linear combination of a precomputed $\mu$-independent basis $W = [W_1, \ldots, W_M]$, termed the collateral reduced basis (CRB), with corresponding $\mu$-dependent coefficients $\sigma(\mu) = [\sigma_1(\mu), \ldots, \sigma_M(\mu)]^T$, i.e.,

$$\hat{g}(x; \mu) = \sum_{i=1}^M W_i(x)\, \sigma_i(\mu).$$

Here the coefficients $\sigma_i$ are parameter-dependent and determined by solving the linear system

$$g(x_j; \mu) = \sum_{i=1}^M W_i(x_j)\, \sigma_i(\mu), \quad j = 1, \ldots, M, \qquad (10)$$

where $W_i(x_j)$ refers to the $j$-th entry of the vector $W_i$; the analogous notation is also used for $\xi_m(x_m)$ in (11) in Algorithm 2. Note that the approximation $\hat{g}(x; \mu)$ interpolates the exact value $g(x; \mu)$ at the EI points $T_M = \{x_1, \ldots, x_M\}$. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Remark 4.1. Algorithm 2 is used for a fast evaluation of a non-affine function of the coordinate $x$ and the parameter $\mu$ by interpolation. In [11, 25], the idea


Algorithm 2 Generation of CRB and EI points
Input: $L^{crb}_{train} = \{g(x; \mu) \mid \mu \in P^{crb}_{train}\}$, $tol_{CRB} (< 1)$
Output: CRB $W = [W_1, \ldots, W_M]$ and EI points $T_M = \{x_1, \ldots, x_M\}$
1: Initialization: $m = 1$, $W^0_{EI} = [\,]$, $\xi_0 = 1$
2: while $\|\xi_{m-1}\| > tol_{CRB}$ do
3:   For each $g \in L^{crb}_{train}$, compute the "best" approximation $\hat{g} = \sum_{i=1}^{m-1} \sigma_i W_i$ in the current space $W^{m-1}_{EI} = \mathrm{span}\{W_1, \ldots, W_{m-1}\}$, where the $\sigma_i$ can be obtained by solving the linear system (10)
4:   Define $g_m = \arg\max_{g \in L^{crb}_{train}} \|g - \hat{g}\|$ and the error $\xi_m = g_m - \hat{g}_m$
5:   if $\|\xi_m\| \le tol_{CRB}$ then
6:     Stop and set $M = m - 1$
7:   else
8:     Determine the next EI point and basis vector:
       $$x_m = \arg\sup_{x \in \Omega} |\xi_m(x)|, \quad W_m = \frac{\xi_m}{\xi_m(x_m)}, \qquad (11)$$
9:   end if
10:  $m = m + 1$
11: end while

was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable $u(t, x; \mu)$, e.g., $g(u(t, x; \mu), x; \mu)$. The evaluation of $g(x_j; \mu)$ in (10) is then replaced by $g(u(t, x_j; \mu), x_j; \mu)$. In this paper we use empirical operator interpolation, where the non-affine operator appears as $g(u(t, x; \mu); \mu)$. The details are addressed in Section 6.2.
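The online stage of the interpolation, i.e., solving (10) for the coefficients, is a small $M \times M$ linear solve at the EI points. A minimal sketch (the CRB matrix, EI indices and grid function below are toy placeholders, not quantities from the paper):

```python
import numpy as np

def eim_coefficients(W, ei_idx, g_at_points):
    """Solve (10): sum_i sigma_i * W_i(x_j) = g(x_j; mu) for j = 1..M.
    W           -- CRB matrix, columns W_1..W_M on the whole grid
    ei_idx      -- row indices of the EI points x_1..x_M
    g_at_points -- values g(x_j; mu) at the EI points"""
    return np.linalg.solve(W[ei_idx, :], g_at_points)

# Toy setup: a 2-column CRB on a 5-point grid with EI points at rows 0 and 1.
W = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.4],
              [0.1, 0.3],
              [0.0, 0.1]])
ei_idx = [0, 1]
g = np.array([2.0, 0.7, 0.9, 0.8, 0.3])      # some function on the grid
sigma = eim_coefficients(W, ei_idx, g[ei_idx])
g_hat = W @ sigma                             # interpolant on the full grid
```

By construction, `g_hat` reproduces `g` exactly at the EI points; away from them it is only an approximation.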

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set $P_{train}$ or $P^{crb}_{train}$ of parameters must be determined. On the one hand, the size of the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by model-constrained optimization [9]. In these papers, the authors aim to choose the sample points adaptively and obtain an "optimal" training


set. An "optimal" training set means that the original manifold $\mathcal{M} = \{u(\mu) \mid \mu \in \mathcal{P}\}$ can be well represented by the submanifold $\hat{\mathcal{M}} = \{u(\mu) \mid \mu \in P_{train}\}$ induced by the sample set $P_{train}$, with the size of $P_{train}$ as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that generating the reduced basis is time-consuming, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of $\bar{U}$, due to the large size of the matrix $\bar{U}$. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick the solutions at certain time instances (e.g., every two or several time steps) as snapshots. However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively, according to the variation of the trajectory of the solution $\{u^n(\mu)\}_{n=0}^K$. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors $v_1$ and $v_2$ can be read off from the angle $\theta$ between them. More precisely, they are linearly dependent if and only if $|\cos(\theta)| = 1$ ($\theta = 0$ or $\pi$). In other words, the value $1 - |\cos(\theta)|$ is large if the correlation between the two vectors is weak. This implies that the quantity $1 - \frac{|\langle v_1, v_2 \rangle|}{\|v_1\| \|v_2\|}$ (since $\cos(\theta) = \frac{\langle v_1, v_2 \rangle}{\|v_1\| \|v_2\|}$) is a good indicator for the linear dependency of $v_1$ and $v_2$.

Given a parameter $\mu$ and the initial vector $u^0(\mu)$, the numerical solutions $u^n(\mu)$ ($n = 1, \ldots, K$) can be obtained, e.g., by the evolution scheme (7). Define an indicator

$$\mathrm{Ind}(u^n(\mu), u^m(\mu)) = 1 - \frac{|\langle u^n(\mu), u^m(\mu) \rangle|}{\|u^n(\mu)\| \|u^m(\mu)\|},$$

which is used to measure the linear dependency of the two vectors: when $\mathrm{Ind}(u^n(\mu), u^m(\mu))$ is large, the correlation between $u^n(\mu)$ and $u^m(\mu)$ is weak. Algorithm 3 shows the realization of the ASS: $u^n(\mu)$ is taken as a new snapshot only when $u^n(\mu)$ and $u^{n_j}(\mu)$ are "sufficiently" linearly independent, checked by testing whether $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu))$ is large enough or not. Here $u^{n_j}(\mu)$ is the last selected snapshot.

Remark 5.1. The inner product $\langle \cdot, \cdot \rangle : \mathcal{W}^{\mathcal{N}} \times \mathcal{W}^{\mathcal{N}} \to \mathbb{R}$ used above is properly defined according to the solution space $\mathcal{W}^{\mathcal{N}}$, and the norm $\|\cdot\|$ is induced by the inner product correspondingly. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector $u^n(\mu)$ and the subspace spanned by the selected snapshots $S_A$. More redundant information can then be discarded, but at a higher cost. However, since the data will be compressed further, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance $tol_{ASS}$ is prespecified and problem-dependent; a value of $O(10^{-4})$ gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: initial vector $u^0(\mu)$, $tol_{ASS}$
Output: selected snapshot matrix $S_A = [u^{n_1}(\mu), u^{n_2}(\mu), \ldots, u^{n_\ell}(\mu)]$
1: Initialization: $j = 1$, $n_j = 0$, $S_A = [u^{n_j}(\mu)]$
2: for $n = 1, \ldots, K$ do
3:   Compute the vector $u^n(\mu)$
4:   if $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu)) > tol_{ASS}$ then
5:     $j = j + 1$
6:     $n_j = n$
7:     $S_A = [S_A, u^{n_j}(\mu)]$
8:   end if
9: end for

examples studied in Section 8, based on our observations.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: $P_{train}$, $\mu^0$, $tol_{RB} (< 1)$
Output: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{max} = \mu^0$, $\eta_N(\mu_{max}) = 1$
2: while $\eta_N(\mu_{max}) > tol_{RB}$ do
3:   Compute the trajectory $S_{max} = \{u^n(\mu_{max})\}_{n=0}^K$, adaptively selecting snapshots using Algorithm 3, and get
     $$S^A_{max} = \{u^{n_1}(\mu_{max}), \ldots, u^{n_\ell}(\mu_{max})\}$$
4:   Enrich the RB, e.g., $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar{U}^A = [\bar{u}^{n_1}, \ldots, \bar{u}^{n_\ell}]$ with $\bar{u}^{n_s} = u^{n_s}(\mu_{max}) - \Pi_{W_N}[u^{n_s}(\mu_{max})]$, $s = 1, \ldots, \ell$, $\ell \ll K$; $\Pi_{W_N}[u]$ is the projection of $u$ onto the current space $W_N = \mathrm{span}\{V_1, \ldots, V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{max} = \arg\max_{\mu \in P_{train}} \eta_N(\mu)$
7: end while
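The selection loop of Algorithm 3, with the angle-based indicator defined above, can be sketched as follows (the toy trajectory in the last line is illustrative only):

```python
import numpy as np

def ind(u, v):
    """Indicator Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||); zero iff u, v are parallel."""
    return 1.0 - abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def ass_select(trajectory, tol_ass=1e-4):
    """Adaptive snapshot selection (Algorithm 3): keep u^n only if it is
    sufficiently linearly independent of the last selected snapshot."""
    selected = [np.asarray(trajectory[0], dtype=float)]
    for u in trajectory[1:]:
        u = np.asarray(u, dtype=float)
        if ind(u, selected[-1]) > tol_ass:
            selected.append(u)
    return np.column_stack(selected)

# Toy trajectory: the second vector is parallel to the first and is discarded.
S_A = ass_select([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
```

Here `S_A` keeps two columns: the initial snapshot and the third vector, which is orthogonal to it.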

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. An RBM based on FV discretization for evolution equations was proposed in [24]. In this section,


we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)-(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use an FV discretization for the batch chromatographic model (3)-(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation of the system (3)-(4) can be written as follows:

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon} \Delta t\, h_z^n,$$
$$q_z^{n+1} = q_z^n + \Delta t\, h_z^n, \qquad (12)$$

where $c_z^n = c_z^n(\mu) = (c_{z,1}^n, \ldots, c_{z,\mathcal{N}}^n)^T$, $q_z^n = q_z^n(\mu) = (q_{z,1}^n, \ldots, q_{z,\mathcal{N}}^n)^T \in \mathbb{R}^{\mathcal{N}}$, $z = a, b$, indicate the solutions for the field variables $c_z$ and $q_z$ at time instance $t = t^n$ ($n = 0, \ldots, K$). $A$ and $B$ are tridiagonal constant matrices; $d_z^n$ and $h_z^n$ are parameter- and time-dependent:

$$d_z^n = d_0^n e_1, \quad h_z^n = (h_{z,1}^n, \ldots, h_{z,\mathcal{N}}^n)^T,$$

with $d_0^n = \Delta x\, Pe \left(\frac{\lambda}{2} + \nu\right) \chi_{[0, t_{in}]}(t^n)$, $\lambda = \frac{\Delta t}{\Delta x}$, $\nu = \frac{\Delta t}{Pe\, \Delta x^2}$, $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{\mathcal{N}}$, and

$$h_{z,j}^n = h_z(c_{a,j}^n, c_{b,j}^n, q_{z,j}^n) = \frac{L}{Q/(\varepsilon A_c)}\, \kappa_z \left(f_z(c_{a,j}^n, c_{b,j}^n) - q_{z,j}^n\right), \quad j = 1, \ldots, \mathcal{N}.$$

6.2 Reduced-order model

Let $N \in \mathbb{N}^+$ be the number of RB vectors for $c_z$ and $q_z$, and let $M \in \mathbb{N}^+$ be the number of CRB vectors for the operators $h_a$ and $h_b$. Here, for simplicity of analysis, we use the same RB dimension $N$ for $c_a$, $c_b$, $q_a$ and $q_b$, but one can certainly take different dimensions for the RB; this also applies to $h_a$ and $h_b$. Assume that $W_z \in \mathbb{R}^{\mathcal{N} \times M}$ is the CRB for the nonlinear operator $h_z$, and $V_{c_z}, V_{q_z} \in \mathbb{R}^{\mathcal{N} \times N}$ ($V_{c_z}^T V_{c_z} = I$, $V_{q_z}^T V_{q_z} = I$) are the RB for the field variables $c_z$ and $q_z$, respectively, i.e.,

$$h_z^n \approx W_z \beta_z^n, \quad c_z^n \approx \hat{c}_z^n = V_{c_z} a_{c_z}^n, \quad q_z^n \approx \hat{q}_z^n = V_{q_z} a_{q_z}^n, \quad n = 0, \ldots, K. \qquad (13)$$

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

$$A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n d_{c_z} - \frac{1-\varepsilon}{\varepsilon} \Delta t\, H_{c_z} \beta_z^n,$$
$$a_{q_z}^{n+1} = a_{q_z}^n + \Delta t\, H_{q_z} \beta_z^n, \qquad (14)$$

where $a_{c_z}^n = a_{c_z}^n(\mu) = (a_{c_z,1}^n, \ldots, a_{c_z,N}^n)^T$ and $a_{q_z}^n = a_{q_z}^n(\mu) = (a_{q_z,1}^n, \ldots, a_{q_z,N}^n)^T \in \mathbb{R}^N$ are the reduced state vectors of the ROM, and $A_{c_z} = V_{c_z}^T A V_{c_z}$, $B_{c_z} = V_{c_z}^T B V_{c_z}$, $d_{c_z} = V_{c_z}^T e_1$, $H_{c_z} = V_{c_z}^T W_z$, $H_{q_z} = V_{q_z}^T W_z$ are the reduced matrices.


Note that $\beta_z^n = \beta_z^n(\mu) = (\beta_{z,1}^n, \ldots, \beta_{z,M}^n)^T \in \mathbb{R}^M$ are the vectors of coefficients for the empirical interpolation of the nonlinear operator $h_z^n$, and they are parameter- and time-dependent. The evaluation of $\beta_z^n$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

$$\sum_{i=1}^M \beta_{z,i}^n\, W_{z,i}(x_j) = \hat{h}_{z,j}^n, \quad j = 1, \ldots, M.$$

Here the evaluation of $\hat{h}_{z,j}^n$ only needs the $j$-th entries ($\hat{c}_{a,j}^n$, $\hat{c}_{b,j}^n$ and $\hat{q}_{z,j}^n$) of the solution vectors ($\hat{c}_a^n$, $\hat{c}_b^n$ and $\hat{q}_z^n$), i.e., $\hat{h}_{z,j}^n = h_z(\hat{c}_{a,j}^n, \hat{c}_{b,j}^n, \hat{q}_{z,j}^n)$. For general empirical operator interpolation, the value of the operator at an interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors); for more details, refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all $\mathcal{N}$-dependent terms are computed and stored; in the online process, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline process, given the training sets $P^{crb}_{train}$ and $P_{train}$ (they can be chosen differently), Algorithm 2 is implemented to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal{N}$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $A_{c_z}$, $B_{c_z}$, $d_{c_z}$, $H_{c_z}$ and $H_{q_z}$), and the $\mathcal{N}$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in \mathcal{P}$, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered via (13).
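The split between offline and online cost can be sketched for a generic linear evolution as follows; the random matrices merely stand in for the FV operators, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_red = 200, 5

# Offline (once): orthonormal basis and reduced matrices; cost scales with n_full.
V, _ = np.linalg.qr(rng.standard_normal((n_full, n_red)))
A = np.eye(n_full) + 0.01 * rng.standard_normal((n_full, n_full))
B = np.eye(n_full)
A_r = V.T @ A @ V          # n_red x n_red, stored once
B_r = V.T @ B @ V

# Online (per parameter): march only the small system; cost independent of n_full.
a = V.T @ rng.standard_normal(n_full)     # reduced initial coefficients
for _ in range(10):
    a = np.linalg.solve(A_r, B_r @ a)
u_hat = V @ a                              # lift back to the full space
```

Each online step involves only $N \times N$ quantities ($N = 5$ here), which is the source of the speedup.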

7 Output-oriented error estimation

It is crucial to have a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since, in practice, all the simulations are done in a finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than that in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field


variable. For many applications, the output response $y(u_{\mathcal{N}})$ is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimation $\eta_N(\mu_{max})$ should be an error estimation for the output response, which is expected to be more accurate and more reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2 \rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$. The induced norm $\|\cdot\|$ is the standard 2-norm in Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling from Section 3 that $L_I(t^n)$ and $L_E(t^n)$ are linear, the evolution scheme (7) can be rewritten in the vector space as

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g(u^n(\mu); \mu), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ are constant matrices and $g(u^n(\mu); \mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0, \ldots, K-1$.

Given a parameter $\mu \in \mathcal{P}$, let $\hat{u}^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and let $\hat{g}^n(\mu) = I_M[g(\hat{u}^n(\mu))] = W \beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N} \times N}$, $W \in \mathbb{R}^{\mathcal{N} \times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\hat{u}^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$, and write $u^n$, $\hat{u}^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)} \hat{u}^n + I_M[g(\hat{u}^n)] - A^{(n)} \hat{u}^{n+1}. \qquad (16)$$

With a simple computation, we get the norm of the residual:

$$\begin{aligned}
\|r^{n+1}(\mu)\|^2 = \langle r^{n+1}(\mu), r^{n+1}(\mu) \rangle
&= (a^n)^T\, \underline{V^T (B^{(n)})^T B^{(n)} V}\, a^n + (\beta^n)^T\, \underline{W^T W}\, \beta^n \\
&\quad + (a^{n+1})^T\, \underline{V^T (A^{(n)})^T A^{(n)} V}\, a^{n+1} + 2 (\beta^n)^T\, \underline{W^T B^{(n)} V}\, a^n \\
&\quad - 2 (a^n)^T\, \underline{V^T (B^{(n)})^T A^{(n)} V}\, a^{n+1} - 2 (\beta^n)^T\, \underline{W^T A^{(n)} V}\, a^{n+1}.
\end{aligned} \qquad (17)$$

Proposition 7.1. Assume that the operator $g : \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \quad x, y \in \mathcal{W}^{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" for a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

$$I_{M+M'}[g(\hat{u}^n)] = \sum_{m=1}^{M+M'} W_m\, \beta_m^n = g(\hat{u}^n).$$

Assume further that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $\|e^0(\mu)\| = 0$. Then the approximation error $e^n(\mu) = u^n - \hat{u}^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \|(A^{(k)})^{-1}\| \left( \prod_{j=k+1}^{n-1} G^{(j)} \right) \left( \varepsilon_{EI}^k(\mu) + \|r^{k+1}(\mu)\| \right), \qquad (18)$$

where $G^{(j)} = \|(A^{(j)})^{-1}\| \left( \|B^{(j)}\| + L_g \right)$ and $\varepsilon_{EI}^n(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta_{N,M}^n(\mu) := \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} G_F^{(j)} \right) \left( \|(A^{(k)})^{-1}\|\, \varepsilon_{EI}^k(\mu) + \|(A^{(k)})^{-1} r^{k+1}(\mu)\| \right), \qquad (19)$$

where $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\|$, $n = 0, \ldots, K-1$.

Proof. Forming the difference of (15) and (16), we have the error equation

$$\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat{u}^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \left( g(u^n) - g(\hat{u}^n) \right) + \left( g(\hat{u}^n) - I_M[g(\hat{u}^n)] \right) + r^{n+1}(\mu).
\end{aligned} \qquad (20)$$

Multiplying both sides of (20) by $(A^{(n)})^{-1}$, we obtain

$$\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left( g(u^n) - g(\hat{u}^n) \right) \\
&\quad + (A^{(n)})^{-1} \left( g(\hat{u}^n) - I_M[g(\hat{u}^n)] \right) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat{u}^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}(\mu)\| \le \|(A^{(n)})^{-1}\| \left( \left( \|B^{(n)}\| + L_g \right) \|e^n(\mu)\| + \varepsilon_{EI}^n(\mu) + \|r^{n+1}(\mu)\| \right), \qquad (22)$$

where $\varepsilon_{EI}^n(\mu) = \|g(\hat{u}^n) - I_M[g(\hat{u}^n)]\| = \big\| \sum_{m=M+1}^{M+M'} W_m\, \beta_m^n \big\|$. Resolving the recursion (22) with the initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we revisit equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\begin{aligned}
\|e^{n+1}(\mu)\| &\le \left( \|(A^{(n)})^{-1} B^{(n)}\| + L_g \|(A^{(n)})^{-1}\| \right) \|e^n(\mu)\| \\
&\quad + \|(A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (23)
\end{aligned}$$

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\| \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\| \|r^{n+1}\|$. Resolving the recursion (23) with the initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19).
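Numerically, the bound (19) is accumulated step by step via the recursion (23) with vanishing initial error; a minimal sketch with constant matrices and made-up residual and interpolation contributions (all inputs are illustrative placeholders):

```python
def error_bound_recursion(GF, invA_norm, eps_EI, resid_terms):
    """Accumulate the recursion (23) with e^0 = 0:
    eta_{n+1} = GF * eta_n + invA_norm * eps_EI[n] + resid_terms[n],
    where resid_terms[n] plays the role of ||(A^(n))^{-1} r^{n+1}||."""
    eta = 0.0
    history = []
    for eps, res in zip(eps_EI, resid_terms):
        eta = GF * eta + invA_norm * eps + res
        history.append(eta)
    return history

# Two steps with GF = 0.5, no interpolation error, unit residual contributions:
etas = error_bound_recursion(0.5, 1.0, [0.0, 0.0], [1.0, 1.0])
```

Here `etas` is `[1.0, 1.5]`; when the growth factor does not exceed one, early contributions are damped rather than amplified, which is the point of the sharper bound (19).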

Remark 7.2. In many cases, the operators $L_I(t^n)$ and $L_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices, see, e.g., (12). In such a case, the error bound becomes much simpler, see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\| \left( \|B^{(j)}\| + L_g \right) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap, due to its independence of $\mathcal{N}$. In addition, as shown in [11], a small $M'$ gives good results; in practice, we use $M' = 1$ in the later simulations.

Remark 7.5. The 2-norm is applied in the error bounds above, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\|H^{-1}\| = \sigma_{max}(H^{-1}) = \frac{1}{\sigma_{min}(H)}.$$
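This identity allows the norms of the inverses appearing in the bounds to be evaluated without forming any inverse explicitly; a minimal sketch (the test matrix is an arbitrary illustration):

```python
import numpy as np

def inv_norm_2(H):
    """||H^{-1}||_2 = 1 / sigma_min(H), via the smallest singular value of H."""
    return 1.0 / np.linalg.svd(H, compute_uv=False)[-1]

H = np.diag([4.0, 2.0, 0.5])   # sigma_min(H) = 0.5, so ||H^{-1}||_2 = 2
```

The result agrees with the 2-norm of the explicitly formed inverse, at a fraction of the cost for large matrices.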

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but

some outputs. In such cases, it is desirable to estimate the output error, in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume the output of interest $y(u^n(\mu))$ can be expressed in the form

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix. Then the output error $e_O^n(\mu) = P u^n - P \hat{u}^n$ satisfies

$$\|e_O^{n+1}(\mu)\| \le \bar{\eta}_{N,M}^{n+1} := G_O^{(n)} \eta_{N,M}^n + \|P (A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (25)$$

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying both sides of the error equation (21) by $P$ from the left, we get

$$P e^{n+1}(\mu) = P \Big( (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left( g(u^n) - g(\hat{u}^n) \right) + (A^{(n)})^{-1} \left( g(\hat{u}^n) - I_M[g(\hat{u}^n)] \right) + (A^{(n)})^{-1} r^{n+1}(\mu) \Big).$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O^{(n)} \|e^n(\mu)\| + \|P (A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once an error estimation for the field variable is available, e.g., (19), a trivial error bound for the output (24) can be given as

$$\begin{aligned}
\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| &\le \|P\|\, \|e^{n+1}(\mu)\| \\
&\le \|P\| \left( G_F^{(n)} \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\| \right). \qquad (27)
\end{aligned}$$

The last inequality holds due to inequality (23). It is obvious that the bound for $\|e_O^{n+1}(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates were derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by collecting all the field variables into one vector. However, the behavior of the solution to each equation might be quite different; therefore, it is desirable to generate separate reduced bases for each field variable, rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Taking the error bound for the field variable $c_z$ as an example and recalling its discrete evolution (see (12)),

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon} \Delta t\, h_z^n, \qquad (28)$$

the residual caused by the approximate solution $\hat{c}_z^n$ in (13) is

$$r_{c_z}^{n+1}(\mu) = B \hat{c}_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon} \Delta t\, I_M[h_z(\hat{c}_z^n)] - A \hat{c}_z^{n+1}. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This means the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition, and it does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(c_a^n, c_b^n, q_z^n) - h_z(\hat{c}_a^n, \hat{c}_b^n, \hat{q}_z^n)\| \le L_h \|c_z^n - \hat{c}_z^n\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming the initial projection error is vanishing, $e_{c_z}^0(\mu) = 0$, we have a similar estimation for the approximation error $e_{c_z}^n(\mu) = c_z^n - \hat c_z^n$ ($n = 1, \dots, K$) as follows:

$$\|e_{c_z}^n(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \left( \tau\, \epsilon_k^{EI}(\mu) + \|r_{c_z}^{k+1}(\mu)\| \right), \qquad (31)$$

where $C = \|B\| + \tau L_h$, $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e_{c_z}^n(\mu)\| \le \eta_{N,M,c_z}^n(\mu) := \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k} \left( \tau \|A^{-1}\|\, \epsilon_k^{EI}(\mu) + \|A^{-1} r_{c_z}^{k+1}(\mu)\| \right), \qquad (32)$$

where $G_{F_c} = \|A^{-1}B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest, $e_{c_z,O}^n(\mu) = P c_z^n - P \hat c_z^n$, can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\|e_{c_z,O}^{n+1}(\mu)\| \le \bar\eta_{N,M,c_z}^{\,n+1}(\mu) := G_{O_c}\, \eta_{N,M,c_z}^n(\mu) + \tau \|PA^{-1}\|\, \epsilon_n^{EI}(\mu) + \|PA^{-1} r_{c_z}^{n+1}(\mu)\|, \qquad (33)$$

where $G_{O_c} = \|PA^{-1}B\| + \tau L_h \|PA^{-1}\|$. Note that $P = (0, \dots, 0, 1) \in \mathbb{R}^{\mathcal N}$ in this model, which means the norm of the output error $e_{c_z,O}^{n+1}(\mu)$ is the absolute value of the last entry of the error vector $e_{c_z}^{n+1}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and therefore will not be presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta_{N,M,c_z}^n(\mu)$ for the field variable $c_z$; if the output error estimation were instead derived by considering all the field variables together, it would involve the corresponding error bound for the vector $U$ (denoted as $\eta_{N,M,U}^n(\mu)$). Obviously, the error bound $\eta_{N,M,U}^n(\mu)$ is much rougher than the bound $\eta_{N,M,c_z}^n(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be conservatively chosen large, and the weight $\tau L_h$ is still small because the time step $\Delta t$ is typically very small.
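Since (31) and (32) involve only norms of precomputed quantities, the bound can be accumulated alongside the reduced simulation at negligible cost. The sketch below (all constants and error sequences are invented placeholders, not values from the paper) checks that the summed form of (32) agrees with its one-step recursion $\eta^{n+1} = G_{F_c}\eta^n + \tau\|A^{-1}\|\epsilon_n^{EI} + \|A^{-1}r^{n+1}\|$:

```python
import numpy as np

# All constants and error sequences below are invented placeholders; in
# practice they come from the reduced simulation and the EI error bounds.
rng = np.random.default_rng(0)
K = 50                               # number of time steps
tau = (1 - 0.4) / 0.4 * 1e-3         # tau = (1-eps)/eps * dt, e.g. eps = 0.4, dt = 1e-3
G_Fc = 0.98                          # ||A^-1 B|| + tau * L_h * ||A^-1||
normAinv = 1.2                       # ||A^-1||
eps_ei = rng.uniform(0.0, 1e-8, K)   # EI error bounds eps_k^EI(mu), k = 0..K-1
res = rng.uniform(0.0, 1e-7, K)      # ||A^-1 r^{k+1}(mu)||,        k = 0..K-1

def eta_sum(n):
    # closed form (32): sum_{k=0}^{n-1} G_Fc^(n-1-k) * (tau*||A^-1||*eps_k + ||A^-1 r^{k+1}||)
    return sum(G_Fc ** (n - 1 - k) * (tau * normAinv * eps_ei[k] + res[k])
               for k in range(n))

# the same bound accumulated one step at a time:
# eta^{n+1} = G_Fc * eta^n + tau*||A^-1||*eps_n^EI + ||A^-1 r^{n+1}||
eta = 0.0
for n in range(K):
    eta = G_Fc * eta + tau * normAinv * eps_ei[n] + res[n]

assert np.isclose(eta, eta_sum(K))   # both evaluations agree
```

The recursive form is what one would use in an implementation, since it needs only the current residual and EI error bound at each time step.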

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates with time. Since $\eta_{N,M}^n(\mu)$ (or $\eta_{N,M,c_z}^n(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that the error estimate, e.g. in (18), may lose sharpness when many time instances $t^n$, $n = 0, 1, \dots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process should continue.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: P_train, μ⁰, tol_RB (< 1), tol_decay
Output: RB V = [V_1, …, V_N]
 1: Implement Step 1 in Algorithm 4
 2: while the error η_N(μ_max) > tol_RB do
 3:   Implement Steps 3−6 in Algorithm 4
 4:   Compute the decay rate of the error bound d_η = (η_{N−1}(μ_max^old) − η_N(μ_max)) / η_{N−1}(μ_max^old)
 5:   if d_η < tol_decay then
 6:     Compute the true output error at the selected parameter μ_max: e_N(μ_max)
 7:     if e_N(μ_max) < tol_RB then
 8:       Stop
 9:     end if
10:   end if
11: end while
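The control flow of Algorithm 5 can be sketched as follows. The basis extension itself is mocked away: the error-bound and true-error sequences are invented so that the bound stagnates while the true error keeps decaying, which is exactly the situation the early stop is designed for.

```python
# Mocked sequences indexed by the RB size N; all numbers are illustrative.
def greedy_with_early_stop(error_bounds, true_errors, tol_rb, tol_decay):
    """Early-stop loop of Algorithm 5; Steps 3-6 of Algorithm 4 are mocked."""
    N = 0
    eta_old = error_bounds[0]
    while error_bounds[N] > tol_rb:
        N += 1                                   # extend the RB (mocked)
        eta = error_bounds[N]
        d_eta = (eta_old - eta) / eta_old        # decay rate of the error bound
        if d_eta < tol_decay:                    # bound stagnates ...
            if true_errors[N] < tol_rb:          # ... so consult the true output error
                break                            # early stop: ROM is accurate enough
        eta_old = eta
    return N

bounds = [10.0 ** (-n) for n in range(4)] + [1e-4] * 6   # stagnates at 1e-4
truth = [10.0 ** (-n) for n in range(10)]                # keeps decaying
# without the early stop this loop would never terminate (bounds never < tol_rb)
N_stop = greedy_with_early_stop(bounds, truth, tol_rb=1e-6, tol_decay=0.05)
```

In this mocked setting the loop stops as soon as the bound has stagnated and the true output error has dropped below tol_RB, instead of extending the basis indefinitely.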


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If, after some steps of stagnation, the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.

8 Numerical experiments

In this work the RB methodology is employed to accelerate the optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $Pr(\mu) = \frac{s(\mu)\, Q}{t_{cyc}}$ while respecting the requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c_a^f + c_b^f)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\,dt$, and $c_{z,O}(t;\mu) = c_z(t,1;\mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\begin{aligned} \min_{\mu \in \mathcal P}\; & -Pr(\mu)\\ \text{s.t.}\;\; & Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal P,\\ & c_z(\mu),\, q_z(\mu) \text{ are the solutions to the system } (3)-(5),\ z = a, b. \end{aligned} \qquad (34)$$

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small so that the cutting points $t_i$, $i = 1, \dots, 4$, can be determined properly and the integral in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal P$, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.
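The evaluation of $Pr(\mu)$ and $Rec(\mu)$ from an outlet chromatogram amounts to quadrature between the cutting points. A minimal sketch with invented Gaussian outlet profiles and invented cut points (the real profiles come from solving (12) or (14), and all numbers here are illustrative):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Invented Gaussian outlet profiles standing in for the chromatogram;
# cut points t1..t4 and all numbers are purely illustrative.
t = np.linspace(0.0, 10.0, 2001)                 # fine time grid (small dt)
ca_out = np.exp(-0.5 * ((t - 6.0) / 0.8) ** 2)   # later-eluting component a
cb_out = np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2)   # earlier-eluting component b
t1, t2, t3, t4 = 2.5, 4.8, 5.2, 8.0              # cutting points (purity-driven)

mask_a = (t >= t3) & (t <= t4)                   # a is collected on [t3, t4]
mask_b = (t >= t1) & (t <= t2)                   # b is collected on [t1, t2]
s = trapz(ca_out[mask_a], t[mask_a]) + trapz(cb_out[mask_b], t[mask_b])

Q, t_in, t_cyc = 0.1, 1.3, 10.0                  # operating point and cycle time
cfa = cfb = 2.9                                  # feed concentrations [g/l]
Pr = s * Q / t_cyc                               # production rate  Pr(mu)
Rec = s / (t_in * (cfa + cfb))                   # recovery yield  Rec(mu)
```

This also makes the remark above concrete: the accuracy of both quadratures and of the detected cut points hinges on the fine time grid, which is why the small time step (and hence the large number of time steps) cannot be avoided.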

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal P = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $\mathcal N$ of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model

Column dimensions [cm]                         2.6 × 10.5
Column porosity ε [-]                          0.4
Peclet number Pe [-]                           2000
Mass-transfer coefficients κ_z, z = a, b [1/s] 0.1
Feed concentrations c_z^f, z = a, b [g/l]      2.9

Table 2: Coefficients of the adsorption isotherm equation

H_a1 [-]    2.69      H_b1 [-]    3.73
H_a2 [-]    0.1       H_b2 [-]    0.3
K_a1 [l/g]  0.0336    K_b1 [l/g]  0.0446
K_a2 [l/g]  1.0       K_b2 [l/g]  3.0

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, and then show the output error estimation for the generation of the RB. Finally, we present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10⁻⁴, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.
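The idea behind the adaptive snapshot selection can be illustrated as follows. This is a simplified stand-in, not the paper's Algorithm 3: here a snapshot is kept only when its relative projection error onto the span of the snapshots kept so far exceeds a threshold playing the role of tol_ASS, and the trajectory is synthetic.

```python
import numpy as np

def select_snapshots(traj, tol_ass):
    """Keep a snapshot only if its relative projection error onto the span
    of the snapshots kept so far exceeds tol_ass (illustrative criterion)."""
    kept, Q = [], None                   # Q: orthonormal basis of kept snapshots
    for j in range(traj.shape[1]):
        u = traj[:, j]
        resid = u if Q is None else u - Q @ (Q.T @ u)
        if np.linalg.norm(resid) > tol_ass * np.linalg.norm(u):
            kept.append(j)
            q = resid / np.linalg.norm(resid)
            Q = q[:, None] if Q is None else np.column_stack([Q, q])
    return kept

# synthetic trajectory: 100 time steps that (up to tiny noise) stay in a
# 3-dimensional subspace, i.e. most snapshots are redundant
rng = np.random.default_rng(1)
base = rng.standard_normal((200, 3))
x = np.linspace(0.0, 1.0, 100)
coeffs = np.vstack([x ** p for p in range(3)])
traj = base @ coeffs + 1e-12 * rng.standard_normal((200, 100))

kept = select_snapshots(traj, tol_ass=1e-4)      # only a handful survive
```

On such a slowly varying trajectory only a few snapshots carry new information, which is why discarding the redundant ones shrinks the offline cost so sharply while barely affecting the resulting basis.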

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10⁻⁷) with different thresholds. M′ = 1 is the number of the basis for error estimation.

         tol_ASS       Res(ξ^a_{M+M′}, ξ^b_{M+M′})      M (W_a, W_b)   Runtime [h]
no ASS   –             (9.2 × 10⁻⁸, 8.5 × 10⁻⁸)         (146, 152)     6.25 (-)
ASS      1.0 × 10⁻⁴    (9.6 × 10⁻⁸, 8.1 × 10⁻⁸)         (147, 152)     0.605 (−90.3%)
ASS      1.0 × 10⁻³    (8.7 × 10⁻⁸, 9.9 × 10⁻⁸)         (147, 152)     0.362 (−94.2%)
ASS      1.0 × 10⁻²    (9.4 × 10⁻⁸, 6.2 × 10⁻⁸)         (144, 150)     0.270 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10⁻⁴ for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10⁻⁷, tol_RB = 1.0 × 10⁻⁶, tol_ASS = 1.0 × 10⁻⁴. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithm         Runtime [h]
POD-Greedy        16.22 ¹
ASS-POD-Greedy    7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c_{z,O}^n(\mu) = P c_z^n(\mu)$, $n = 0, \dots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g. Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\bar\eta_{N,M,c_z}^{\,n+1}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal P$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal P_{train}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \bar\eta_{N,M,c_z}^{\,n}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e_N^{\max} = \max_{\mu \in \mathcal P_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \| c_{z,O}^n(\mu) - \hat c_{z,O}^n(\mu) \|$, and $\hat c_{z,O}^n(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is already very small. To circumvent the problem, Algorithm 5 is implemented to get an early stop.

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.
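The selection criterion $\eta_N(\mu_{\max})$ defined at the beginning of this subsection, i.e. the maximum over the training set of the time-averaged output error bounds, reduces to a few array reductions. A minimal sketch with hypothetical bound/error arrays (shapes and numbers invented):

```python
import numpy as np

# Hypothetical arrays standing in for the quantities defined above:
# eta[z, mu, n]: output error bound for component z, parameter mu, time step n.
rng = np.random.default_rng(2)
K, n_train = 40, 8
eta = rng.uniform(0.0, 1e-5, size=(2, n_train, K))   # invented bounds
err = 0.1 * eta                                      # invented true errors

eta_bar = eta.mean(axis=2)             # time average, per component and parameter
eta_N = eta_bar.max()                  # max over z in {a, b} and mu in P_train
mu_max = eta_bar.max(axis=0).argmax()  # parameter the greedy loop would pick next

e_bar = err.mean(axis=2)
e_max = e_bar.max()                    # reference true output error e_N^max
```

Averaging over all K time steps, rather than taking the bound at the final time only, damps the accumulation effect visible in Figure 2 when ranking the training parameters.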

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $\mathcal P_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = $\max_{\mu \in \mathcal P_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 is a semilogarithmic plot of the maximal error over $\mathcal P_{train}$ versus the size $N$ of the RB, showing the field variable error bound, the output error bound, and the true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e_N^{\max}$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal P_{train}} \max_{z \in \{a,b\}} \eta_{N,M,c_z}(\mu)$, where $\eta_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta_{N,M,c_z}^{n}(\mu)$.


[Figure 3 is a semilogarithmic plot of the maximal error over $\mathcal P_{train}$ versus the size $N$ of the RB, showing the output error bound and the true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 is a scatter plot of the selected parameters over the domain, feed flow rate $Q \in [0.0667, 0.1667]$ against injection period $t_{in} \in [0.5, 2.0]$.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process; the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal P_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10⁻⁷, tol_ASS = 1.0 × 10⁻⁴, tol_RB = 1.0 × 10⁻⁶.

Simulation               Max error      Average runtime [s]   SpF
FOM (N = 1500)           –              312.13                (-)
ROM, POD-Greedy          3.79 × 10⁻⁷    6.3                   50
ROM, ASS-POD-Greedy      4.58 × 10⁻⁷    6.3                   50

[Figure 5 plots the dimensionless outlet concentrations of components a and b over dimensionless time, computed with the FOM and with the ROM ($c_a$-FOM, $c_b$-FOM, $c_a$-ROM, $c_b$-ROM).]

Figure 5: Concentrations at the outlet of the column using the FOM ($\mathcal N = 1500$) and the ROM ($N = 47$) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned} \min_{\mu \in \mathcal P}\; & -\hat{Pr}(\mu)\\ \text{s.t.}\;\; & Rec_{\min} - \hat{Rec}(\mu) \le 0,\\ & \hat c_z^n(\mu),\, \hat q_z^n(\mu) \text{ are the RB approximations from the ROM } (14),\ z = a, b. \end{aligned}$$

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \dots$. When $\|\mu^{k+1} - \mu^k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
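The structure of the ROM-based optimization loop can be sketched as below. The reduced outputs are replaced by an invented analytic surrogate, and a plain grid search stands in for NLOPT_GN_DIRECT_L so that the sketch has no dependency on the NLopt bindings; the recovery constraint is enforced by rejecting infeasible points.

```python
import numpy as np

def reduced_outputs(mu):
    """Stand-in for solving the ROM (14) and post-processing (Pr, Rec);
    the analytic formulas below are invented for illustration only."""
    Q, t_in = mu
    pr = Q * t_in                  # hypothetical production rate
    rec = 1.25 / (1.0 + t_in)      # hypothetical recovery yield
    return pr, rec

def objective(mu, rec_min=0.8):
    pr, rec = reduced_outputs(mu)
    if rec_min - rec > 0.0:        # recovery constraint of (34) violated
        return np.inf              # reject infeasible points
    return -pr                     # maximize Pr <=> minimize -Pr

# plain grid search over P = [0.0667, 0.1667] x [0.5, 2.0] as a stand-in
# for the DIRECT-L global search used in the paper
Qs = np.linspace(0.0667, 0.1667, 41)
ts = np.linspace(0.5, 2.0, 61)
mu_opt = min(((q, t) for q in Qs for t in ts), key=objective)
```

Whatever the search strategy, each candidate $\mu$ triggers only one cheap ROM solve instead of a FOM solve, which is where the speed-up of the ROM-based optimization comes from.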

Note that if the offline cost, i.e. the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the runtime of constructing and using a surrogate ROM, and that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM.

Simulation        Objective (Pr)   Opt. solution (μ)     N_it ¹   Runtime [h]   SpF
FOM-based Opt.    0.020264         (0.07964, 1.05445)    202      33.88         -
ROM-based Opt.    0.020266         (0.07964, 1.05445)    202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.

30

31

Max Planck Institute Magdeburg Preprints

  • Introduction
  • Problem statement
  • Reduced basis methods
  • Empirical interpolation
  • Adaptive snapshot selection
  • RB scheme for batch chromatography
    • Full-order model based on FV discretization
    • Reduced-order model
    • Offline-online decomposition
      • Output-oriented error estimation
        • Output error estimation for the reduced order model
        • Output error estimation for the batch chromatographic model
        • An early-stop criterion for the Greedy algorithm
          • Numerical experiments
            • Performance of the adaptive snapshot selection
            • Performance of the output error bound
            • ROM-based Optimization
              • Conclusions
Page 7: Accelerating PDE constrained optimization by the reduced ...€¦ · tions (PDE constrained optimization, for short), has emerged as a challenging research area. It has increasingly

column with the length L ε the column porosity κz the mass-transfer coefficient andPe the Peclet number The adsorption equilibrium qEq

z is described by the isothermequations of bi-Langmuir type

qEqz = fz(ca cb) = Hz1cz

1 +Ka1cfaca +Kb1cfbcb+ Hz2cz

1 +Ka2cfaca +Kb2cfbcb (4)

where cfz is the feed concentration of component z Hzj and Kzj are the Henry con-stants and thermodynamic coefficients respectively The initial and boundary condi-tions are given as follows

cz(0 x) = 0 qz(0 x) = 0 0 le x le 1partczpartx|x=0 = Pe

(cz(t 0)minus χ[0tin](t)

)

partczpartx|x=1 = 0

(5)

where tin is the injection period and χ[0tin] is the characteristic function

χ[0tin](t) =

1 if t isin [0 tin]0 otherwise

More details about the mathematical modeling for batch chromatography can be foundin [20]

Note that the feed flow rate Q and the injection period tin are often considered as theoperating variables denoted as micro = (Q tin) which play the role of parameters in thePDEs (3)minus(5) The system of PDEs is nonlinear time-dependent and has non-affineparameter dependency The nonlinearity of the system is reflected by (4) To capturethe system dynamics precisely a large number of DOFs must be introduced for thediscretization of the PDEs

The optimal operation of batch chromatography is of practical importance since itallows to exploit the full economic potential of the process and to reduce the separation

Column

ba

Solvent

Pump

a + b

b

a

b a

t

int

cyct

Q 1t 2t 3t 4t

Feed

t

Products

concentration

concentration

Figure 1 Sketch of a batch chromatographic process for the separation of a and b

5

cost Many efforts have been made for the optimization of batch chromatography overthe past several decades An extensive review of the early work can be found in [20]and references therein An iterative optimization approach for batch chromatographywas addressed in [17] A hierarchical approach to optimal control for a hybrid batchchromatographic process was developed in [19] Notably all these studies are basedon the finely discretized FOM Such a model with a large number of DOFs is able tocapture the dynamics of the process and the accuracy of the optimal solution obtainedfrom that can be guaranteed However the expensive FOM must be repeatedly solvedin the optimization process which makes the runtime for obtaining the optimal solutionrather too long

In this work the RBM is employed to generate a surrogate ROM of the param-eterized PDEs The resulting ROM is used to get a rapid evaluation of the outputresponse y(uN ) for the discretized system Φ(uN (t micro)micro) = 0 in (2) in the optimizationprocess In the next section we review the RBM and highlight some difficulties there

3 Reduced basis methodsReduced basis methods first introduced in the late of 1970s for nonlinear structuralanalysis [31] have gained increasing popularity for parameterized PDEs in the lastdecade [18 24 34] The basic assumption of RBMs is that the solution to parametrizedPDEs u(micro) depends smoothly on the parameter micro in the parameter domain P suchthat for any parameter micro isin P the corresponding solution u(micro) can be well approxi-mated by a properly precomputed basis called reduced basis In addition the RBMis often endowed with a posteriori error estimation which is used for the qualificationof the resulting ROM

Consider a parametrized evolution problem defined over the spatial domain Ω sub Rdand the parameter domain P sub Rp

parttu(t xmicro) + L[u(t xmicro)] = 0 t isin [0 T ] x isin Ω micro isin P (6)

where L[middot] is a spatial differential operator Let WN sub L2(Ω) be an N -dimensionaldiscrete space in which an approximate numerical solution to equation (6) is soughtLet 0 = t0 lt t1 lt middot middot middot lt tK = T be K + 1 time instants in the time interval [0 T ]Given micro isin P with suitable initial and boundary conditions the numerical solution atthe time t = tn un(micro) can be obtained by using suitable numerical methods eg thefinite volume method Assume that un(micro) isin WN satisfies the following form

LI(tn)[un+1(micro)] = LE(tn)[un(micro)] + g(un(micro) micro) (7)

where LI(tn)[middot] LE(tn)[middot] are linear implicit and explicit operators respectively and g(middot)is a nonlinear micro-dependent operator These operators are obtained from the discretiza-tion of the time derivative and spatial differential operator L For implicit scheme ofFVMs LI(tn) can be nonlinear see eg [11] but we only consider the linear casein this paper By convention un(micro) is considered as the ldquotruerdquo solution by assum-ing that the numerical solution is a faithful approximation of the exact (analytical)solution u(tn xmicro) at the time instance tn

6

The RBM aims to find a suitable low dimensional subspace

WN = spanV1 VN sub WN

and solve the resulting ROM to get the RB approximation un(micro) to the ldquotruerdquo solutionun(micro) In addition or alternatively to the field variable itself the approximation ofoutputs of interest can also be obtained inexpensively by y(micro) = y (u(micro)) Moreprecisely given a matrix V = [V1 VN ] whose columns span the reduced basis theGalerkin projection is employed to generate the ROM as follows

V TLI(tn)[V an+1(micro)] = V TLE(tn)[V an(micro)] + V T g (V an(micro)) (8)

where an(micro) = (an1 (micro) anN (micro))T isin RN is the vector of the weights in the ap-proximation un(micro) = V an(micro) =

sumNi=1 a

ni (micro)Vi and it is the vector of unknowns in

the ROM Thanks to the linearity of the operators LI and LE the ROM (8) can berewritten as

V TLI(tn)V [an+1(micro)] = V TLE(tn)V [an(micro)] + V T g (V an(micro)) (9)

where V TLI(tn)V and V TLE(tn)V can be precomputed and stored for the construc-tion of the ROM However the computation of the last term of (9) V T g (V an(micro))cannot be done analogously because of the nonlinearity of g This will be tackled byusing a technique of empirical interpolation to be addressed in the next section

How to generate the RB V is crucial and still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1: RB generation using POD-Greedy
Input: P_train, µ⁰, tol_RB (< 1)
Output: RB V = [V₁, ..., V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ⁰, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^{K}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū = [ū⁰, ..., ū^K], with ū^n = u^n(µ_max) − Π_{W_N}[u^n(µ_max)], n = 0, ..., K; here Π_{W_N}[u] is the projection of u onto the current space W_N = span{V₁, ..., V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while

Remark 3.1. In Algorithm 1, the error η_N(µ_max) is an indicator for the error of the ROM. It can be the true error or an error estimation. Since the true error requires the "true" solution u^n(µ), obtained by solving the full, large system, an error bound is usually used instead; this is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.
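For illustration only, the POD-Greedy loop can be sketched in a few lines of NumPy. The names `solve_fom` (returning a trajectory as an 𝒩×(K+1) array) and the error indicator `eta` are placeholders for the problem-specific pieces, not part of the method described above; the basis is re-orthonormalized after each enrichment for numerical stability.

```python
import numpy as np

def first_pod_mode(U):
    """First left singular vector (largest singular value) of U."""
    u, _, _ = np.linalg.svd(U, full_matrices=False)
    return u[:, 0]

def pod_greedy(solve_fom, eta, P_train, mu0, tol_rb, max_basis=50):
    """Sketch of the POD-Greedy basis generation (Algorithm 1).

    solve_fom(mu) -> array of shape (N_full, K+1): trajectory at mu.
    eta(V, mu)    -> error indicator of the ROM spanned by V at mu.
    """
    V = np.zeros((0, 0))          # empty basis to start
    mu_max = mu0
    for _ in range(max_basis):
        S = solve_fom(mu_max)                      # trajectory at worst parameter
        if V.size:                                 # project out current basis
            S = S - V @ (V.T @ S)
        v_new = first_pod_mode(S)                  # dominant POD mode of residual
        V = v_new[:, None] if not V.size else np.column_stack([V, v_new])
        V, _ = np.linalg.qr(V)                     # keep columns orthonormal
        errs = [eta(V, mu) for mu in P_train]      # greedy search over training set
        mu_max = P_train[int(np.argmax(errs))]
        if max(errs) <= tol_rb:
            break
    return V
```

On a toy problem whose solution manifold is two-dimensional, the loop terminates after two enrichment steps, which mirrors the behavior one expects from the algorithm.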


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error, but it may not always work well. For example, while the basis is being extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy: check the true error at the parameter determined by the greedy algorithm and stop the extension of the RB early. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or nonaffine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or nonaffine part, e.g., V^T g(V a^n(µ); µ) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can be used to treat an operator that depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a nonaffine µ-dependent function g(x; µ) with sufficient regularity, (x, µ) ∈ Ω × P ⊂ R^d × R^p, the idea of EIM is to approximate g(x; µ) by a linear combination of a precomputed µ-independent basis W = [W₁, ..., W_M], termed the collateral reduced basis (CRB), with corresponding µ-dependent coefficients σ(µ) = [σ₁(µ), ..., σ_M(µ)]^T, i.e.,

  ĝ(x; µ) = Σ_{i=1}^{M} W_i(x) σ_i(µ).

Here the coefficients σ_i are parameter-dependent and determined by solving the linear system

  g(x_j; µ) = Σ_{i=1}^{M} W_i(x_j) σ_i(µ), j = 1, ..., M,   (10)

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ĝ(x; µ) interpolates the exact value g(x; µ) at the EI points T_M = {x₁, ..., x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Algorithm 2: Generation of the CRB and EI points
Input: L_train^crb = {g(x; µ) | µ ∈ P_train^crb}, tol_CRB (< 1)
Output: CRB W = [W₁, ..., W_M] and EI points T_M = {x₁, ..., x_M}
1: Initialization: m = 1, W_EI⁰ = [ ], ‖ξ₀‖ = 1
2: while ‖ξ_{m−1}‖ > tol_CRB do
3:   For each g ∈ L_train^crb, compute the "best" approximation ĝ = Σ_{i=1}^{m−1} σ_i W_i in the current space W_EI^{m−1} = span{W₁, ..., W_{m−1}}, where the σ_i are obtained by solving the linear system (10)
4:   Define g_m = arg max_{g ∈ L_train^crb} ‖g − ĝ‖ and the error ξ_m = g_m − ĝ_m
5:   if ‖ξ_m‖ ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis vector:
         x_m = arg sup_{x ∈ Ω} |ξ_m(x)|,  W_m = ξ_m / ξ_m(x_m)   (11)
9:   end if
10:  m = m + 1
11: end while

Remark 4.1. Algorithm 2 is used for the fast evaluation of a nonaffine function of the coordinate x and the parameter µ by interpolation. In [11, 25] the idea was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable u(t, x; µ), e.g., g(u(t, x; µ), x; µ). The evaluation of g(x_j; µ) in (10) is then replaced by g(u(t, x_j; µ), x_j; µ). In this paper we use empirical operator interpolation, where the nonaffine operator appears as g(u(t, x; µ); µ). The details are addressed in Section 6.2.
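As a concrete illustration of the point selection (11) and the coefficient system (10), the greedy EIM construction can be sketched as follows for functions sampled on a grid. The names `eim` and `eim_coeffs` are illustrative, and the training functions are supplied as columns of a matrix; this is a sketch, not the paper's implementation.

```python
import numpy as np

def eim(G, tol):
    """Greedy EIM on a matrix G whose columns are training functions
    g(., mu) sampled on a grid.  Returns the collateral basis W
    (columns) and the interpolation-point indices."""
    n_x, _ = G.shape
    W, pts = np.zeros((n_x, 0)), []
    while True:
        if W.shape[1] == 0:
            R = G.copy()
        else:
            # interpolation coefficients from the square system (10)
            sigma = np.linalg.solve(W[pts, :], G[pts, :])
            R = G - W @ sigma                      # interpolation residuals
        j = int(np.argmax(np.max(np.abs(R), axis=0)))  # worst training function
        xi = R[:, j]
        if np.max(np.abs(xi)) <= tol:
            break
        i = int(np.argmax(np.abs(xi)))             # next EI point, cf. (11)
        pts.append(i)
        W = np.column_stack([W, xi / xi[i]])       # next basis vector, cf. (11)
    return W, pts

def eim_coeffs(W, pts, g):
    """Coefficients sigma(mu) interpolating g at the EI points, cf. (10)."""
    return np.linalg.solve(W[pts, :], g[pts])
```

By construction, the matrix W[pts, :] is unit lower triangular under the selection ordering, so the square system in `eim_coeffs` is always solvable.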

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P_train^crb of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in the parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR using model-constrained optimization [9]. In these papers the authors choose the sample points adaptively to get an "optimal" training set, meaning that the original manifold ℳ = {u(µ) | µ ∈ P} can be well represented by the submanifold ℳ̂ = {u(µ) | µ ∈ P_train} induced by a sample set P_train of the smallest possible size.

As aforementioned, for time-dependent problems whose dynamics are of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots makes it time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū due to the large size of that matrix. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward remedy, one could simply pick the solutions at certain time instances (e.g., every two or several time steps) as snapshots; however, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution {u^n(µ)}_{n=0}^{K}. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v₁ and v₂ can be reflected by the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity 1 − |⟨v₁, v₂⟩| / (‖v₁‖ ‖v₂‖), with cos(θ) = ⟨v₁, v₂⟩ / (‖v₁‖ ‖v₂‖), is a good indicator for the linear dependency of v₁ and v₂.

Given a parameter µ and the initial vector u⁰(µ), the numerical solutions u^n(µ) (n = 1, ..., K) can be obtained, e.g., by using the evolution scheme (7). Define the indicator Ind(u^n(µ), u^m(µ)) = 1 − |⟨u^n(µ), u^m(µ)⟩| / (‖u^n(µ)‖ ‖u^m(µ)‖), which is used to measure the linear dependency of the two vectors: when Ind(u^n(µ), u^m(µ)) is large, the correlation between u^n(µ) and u^m(µ) is weak. Algorithm 3 shows the realization of the ASS: u^n(µ) is taken as a new snapshot only when u^n(µ) and u^{n_j}(µ) are "sufficiently" linearly independent, which is checked by testing whether Ind(u^n(µ), u^{n_j}(µ)) is large enough. Here u^{n_j}(µ) is the last selected snapshot.

Algorithm 3: Adaptive snapshot selection (ASS)
Input: Initial vector u⁰(µ), tol_ASS
Output: Selected snapshot matrix S_A = [u^{n₁}(µ), u^{n₂}(µ), ..., u^{n_ℓ}(µ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(µ)]
2: for n = 1, ..., K do
3:   Compute the vector u^n(µ)
4:   if Ind(u^n(µ), u^{n_j}(µ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(µ)]
8:   end if
9: end for

Remark 5.1. The inner product ⟨·,·⟩: 𝒲_𝒩 × 𝒲_𝒩 → R used above is properly defined according to the solution space 𝒲_𝒩, and the norm ‖·‖ is induced by the inner product correspondingly. Therefore the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(µ) and the subspace spanned by the already selected snapshots S_A. More redundant information can be discarded in this way, but at higher cost. Since the data will be compressed further, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; based on our observation, a value of O(10⁻⁴) gives good results for the numerical examples studied in Section 8.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.

Algorithm 4: RB generation using ASS-POD-Greedy
Input: P_train, µ⁰, tol_RB (< 1)
Output: RB V = [V₁, ..., V_N]
1: Initialization: N = 0, V = [ ], µ_max = µ⁰, η_N(µ_max) = 1
2: while η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(µ_max)}_{n=0}^{K}, adaptively selecting snapshots using Algorithm 3 to get S_A^max = {u^{n₁}(µ_max), ..., u^{n_ℓ}(µ_max)}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n₁}, ..., ū^{n_ℓ}], with ū^{n_s} = u^{n_s}(µ_max) − Π_{W_N}[u^{n_s}(µ_max)], s = 1, ..., ℓ, ℓ ≪ K; here Π_{W_N}[u] is the projection of u onto the current space W_N = span{V₁, ..., V_N}
5:   N = N + 1
6:   Find µ_max = arg max_{µ ∈ P_train} η_N(µ)
7: end while

6 RB scheme for batch chromatography

Reduced basis methods are employed to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)−(5) and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)−(4) can be written as follows:

  A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,
  q_z^{n+1} = q_z^n + Δt h_z^n,   (12)

where c_z^n = c_z^n(µ) = (c_{z,1}^n, ..., c_{z,𝒩}^n)^T and q_z^n = q_z^n(µ) = (q_{z,1}^n, ..., q_{z,𝒩}^n)^T ∈ R^𝒩, z = a, b, indicate the solutions for the field variables c_z and q_z at the time instance t = t^n (n = 0, ..., K). A and B are tridiagonal constant matrices; d_z^n and h_z^n are parameter- and time-dependent:

  d_z^n = d₀^n e₁, h_z^n = (h_{z,1}^n, ..., h_{z,𝒩}^n)^T,

with d₀^n = Δx Pe (λ/2 + ν) χ_{[0,t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e₁ = (1, 0, ..., 0)^T ∈ R^𝒩, and

  h_{z,j}^n = h_z(c_{a,j}^n, c_{b,j}^n, q_{z,j}^n) = (L/(Q/(ε A_c))) κ_z (f_z(c_{a,j}^n, c_{b,j}^n) − q_{z,j}^n), j = 1, ..., 𝒩.
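For illustration, one time step of the scheme (12) can be sketched as below. The assembled matrices A and B (or a prefactorized solver for A) and the evaluated mass-transfer term h_z^n are supplied from outside; the assembly itself, and the isotherm function f_z, are omitted, so this is a sketch under those assumptions rather than the paper's implementation.

```python
import numpy as np

def fom_step(A_solve, B, c, q, h, dt, eps, d=None):
    """One time step of the full-order model (12):
        A c^{n+1} = B c^n + d^n - ((1-eps)/eps) * dt * h^n,
        q^{n+1}   = q^n + dt * h^n.
    A_solve is a (pre-factorized) solver, callable rhs -> solution of A x = rhs;
    h = h_z(c_a^n, c_b^n, q_z^n) is the evaluated mass-transfer term."""
    tau = (1.0 - eps) / eps * dt
    rhs = B @ c - tau * h
    if d is not None:
        rhs = rhs + d              # inlet boundary contribution d^n
    return A_solve(rhs), q + dt * h
```

Prefactorizing A once (it is constant in time) and reusing the factorization over all K steps is the natural way to use this routine.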

6.2 Reduced-order model

Let N ∈ N₊ be the number of RB vectors for c_z and q_z, and M ∈ N₊ the number of CRB vectors for the operators h_a and h_b. Here, for simplicity of the analysis, we use the same dimension N of the RB for c_a, c_b, q_a and q_b, but one can certainly take different dimensions for the RB; the same applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M} is the CRB for the nonlinear operator h_z, and that V_{c_z}, V_{q_z} ∈ R^{𝒩×N} (V_{c_z}^T V_{c_z} = I, V_{q_z}^T V_{q_z} = I) are the RB for the field variables c_z and q_z, respectively, i.e.,

  h_z^n ≈ W_z β_z^n, c_z^n ≈ ĉ_z^n = V_{c_z} a_{c_z}^n, q_z^n ≈ q̂_z^n = V_{q_z} a_{q_z}^n, n = 0, ..., K.   (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

  A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d₀^n d_{c_z} − ((1−ε)/ε) Δt H_{c_z} β_z^n,
  a_{q_z}^{n+1} = a_{q_z}^n + Δt H_{q_z} β_z^n,   (14)

where a_{c_z}^n = a_{c_z}^n(µ) = (a_{c_z,1}^n, ..., a_{c_z,N}^n)^T and a_{q_z}^n = a_{q_z}^n(µ) = (a_{q_z,1}^n, ..., a_{q_z,N}^n)^T ∈ R^N are the reduced state vectors of the ROM, and A_{c_z} = V_{c_z}^T A V_{c_z}, B_{c_z} = V_{c_z}^T B V_{c_z}, d_{c_z} = V_{c_z}^T e₁, H_{c_z} = V_{c_z}^T W_z, H_{q_z} = V_{q_z}^T W_z are the reduced matrices.

Note that β_z^n = β_z^n(µ) = (β_{z,1}^n, ..., β_{z,M}^n)^T ∈ R^M are the vectors of coefficients for the empirical interpolation of the nonlinear operator h_z^n; they are parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(µ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

  Σ_{i=1}^{M} β_{z,i}^n W_{z,i}(x_j) = ĥ_{z,j}^n, j = 1, ..., M.

Here the evaluation of ĥ_{z,j}^n only needs the j-th entries (ĉ_{a,j}^n, ĉ_{b,j}^n and q̂_{z,j}^n) of the solution vectors (ĉ_a^n, ĉ_b^n and q̂_z^n), i.e., ĥ_{z,j}^n = h_z(ĉ_{a,j}^n, ĉ_{b,j}^n, q̂_{z,j}^n). For general empirical operator interpolation, the value of the operator at an interpolation point (e.g., x_j) may depend on more entries of the solution vectors (e.g., the j-th entries and their neighbors); for more details refer to [11, 25].
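A single reduced time step of (14), including the interpolation-coefficient solve for β_z^n at the EI points, might look as follows. All names are illustrative, and the reduced matrices are assumed to be precomputed offline; note that the nonlinear term is evaluated only at the M EI points, which is what makes the step independent of the full dimension.

```python
import numpy as np

def rom_step(Ar_inv, Br, dr, Hc, Hq, W_pts, h_at_pts, ac, aq, d0, dt, eps):
    """One reduced time step of (14).  Ar_inv = Ac^{-1}, Br, dr, Hc, Hq are
    precomputed reduced operators; W_pts = W_z[EI points, :] and h_at_pts
    holds h_z evaluated only at the M EI points."""
    beta = np.linalg.solve(W_pts, h_at_pts)      # EI coefficients beta_z^n
    tau = (1.0 - eps) / eps * dt
    ac_next = Ar_inv @ (Br @ ac + d0 * dr - tau * (Hc @ beta))
    aq_next = aq + dt * (Hq @ beta)
    return ac_next, aq_next
```

All matrix operations above act on N- and M-dimensional objects only, so the online cost per step does not grow with the spatial discretization.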

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all 𝒩-dependent terms are computed and stored; in the online process, for any given parameter µ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline process, given training sample sets P_train^crb and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Consequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., A_{c_z}, B_{c_z}, d_{c_z}, H_{c_z} and H_{q_z}), and the 𝒩-independent ROM can be formulated as in (14). For a newly given parameter µ ∈ P, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered by (13).

7 Output-oriented error estimation

It is crucial to get a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years many efforts have been made for different problems, see e.g. [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25] the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all the simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than that in the operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications the output response y(u_𝒩) is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error indicator η_N(µ_max) should be an error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z₁, z₂⟩ = z₁^T z₂, ∀z₁, z₂ ∈ R^𝒩, and the induced norm ‖·‖ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm is then the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall from Section 3 that L_I(t^n) and L_E(t^n) are linear; the evolution scheme (7) can thus be rewritten in the vector space as

  A^{(n)} u^{n+1}(µ) = B^{(n)} u^n(µ) + g(u^n(µ); µ),   (15)

where A^{(n)}, B^{(n)} ∈ R^{𝒩×𝒩} are constant matrices and g(u^n(µ); µ) ∈ R^𝒩 corresponds to the nonlinear term. Note that A^{(n)} and B^{(n)} are nonsingular for a stable scheme in practice, n = 0, ..., K−1.

Given a parameter µ ∈ P, let û^n(µ) = V a^n(µ) be the RB approximation of u^n(µ), and ĝ^n(µ) = I_M[g(û^n(µ))] = W β^n(µ) the interpolant of the nonlinear term, where V ∈ R^{𝒩×N}, W ∈ R^{𝒩×M} are the precomputed parameter-independent bases and a^n(µ) ∈ R^N, β^n(µ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on µ in u^n(µ), û^n(µ), a^n(µ) and β^n(µ), and write u^n, û^n, a^n and β^n instead. The following a posteriori error estimation is based on the residual

  r^{n+1}(µ) = B^{(n)} û^n + I_M[g(û^n)] − A^{(n)} û^{n+1}.   (16)

A direct computation gives the norm of the residual:

  ‖r^{n+1}(µ)‖² = ⟨r^{n+1}(µ), r^{n+1}(µ)⟩
    = (a^n)^T V^T (B^{(n)})^T B^{(n)} V a^n + (β^n)^T W^T W β^n
      + (a^{n+1})^T V^T (A^{(n)})^T A^{(n)} V a^{n+1} + 2 (β^n)^T W^T B^{(n)} V a^n
      − 2 (a^n)^T V^T (B^{(n)})^T A^{(n)} V a^{n+1} − 2 (β^n)^T W^T A^{(n)} V a^{n+1}.   (17)

Proposition 7.1. Assume that the operator g: R^𝒩 → R^𝒩 is Lipschitz continuous, i.e., there exists a positive constant L_g such that

  ‖g(x) − g(y)‖ ≤ L_g ‖x − y‖, x, y ∈ 𝒲_𝒩,

and that the interpolation of g is "exact" for a certain dimension of W = [W₁, ..., W_{M+M′}], i.e.,

  I_{M+M′}[g(û^n)] = Σ_{m=1}^{M+M′} W_m β_m^n = g(û^n).

Assume further that for all µ ∈ P the initial projection error vanishes, e⁰(µ) = 0. Then the approximation error e^n(µ) = u^n − û^n satisfies

  ‖e^n(µ)‖ ≤ Σ_{k=0}^{n−1} ‖(A^{(k)})^{−1}‖ (Π_{j=k+1}^{n−1} G^{(j)}) (ε_EI^k(µ) + ‖r^{k+1}(µ)‖),   (18)

where G^{(j)} = ‖(A^{(j)})^{−1}‖ (‖B^{(j)}‖ + L_g) and ε_EI^n(µ) is the error due to the interpolation. A sharper error bound can be given as

  ‖e^n(µ)‖ ≤ η_{N,M}^n(µ)
    := Σ_{k=0}^{n−1} (Π_{j=k+1}^{n−1} G_F^{(j)}) (‖(A^{(k)})^{−1}‖ ε_EI^k(µ) + ‖(A^{(k)})^{−1} r^{k+1}(µ)‖),   (19)

where G_F^{(j)} = ‖(A^{(j)})^{−1} B^{(j)}‖ + L_g ‖(A^{(j)})^{−1}‖, n = 0, ..., K−1.

Proof. Forming the difference of (15) and (16), we have the error equation

  A^{(n)} e^{n+1}(µ) = B^{(n)} e^n(µ) + g(u^n) − I_M[g(û^n)] + r^{n+1}(µ)
    = B^{(n)} e^n(µ) + (g(u^n) − g(û^n)) + (g(û^n) − I_M[g(û^n)]) + r^{n+1}(µ).   (20)

Multiplying both sides of (20) by (A^{(n)})^{−1}, we obtain

  e^{n+1}(µ) = (A^{(n)})^{−1} B^{(n)} e^n(µ) + (A^{(n)})^{−1} (g(u^n) − g(û^n))
    + (A^{(n)})^{−1} (g(û^n) − I_M[g(û^n)]) + (A^{(n)})^{−1} r^{n+1}(µ).   (21)

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(û^n)‖ ≤ L_g ‖e^n(µ)‖. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

  ‖e^{n+1}(µ)‖ ≤ ‖(A^{(n)})^{−1}‖ ((‖B^{(n)}‖ + L_g) ‖e^n(µ)‖ + ε_EI^n(µ) + ‖r^{n+1}(µ)‖),   (22)

where ε_EI^n(µ) = ‖g(û^n) − I_M[g(û^n)]‖ = ‖Σ_{m=M+1}^{M+M′} W_m β_m^n‖. Resolving the recursion (22) with initial error ‖e⁰(µ)‖ = 0 yields the error bound in (18).

To get the error bound in (19), we revisit equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(µ)‖ is of the following form:

  ‖e^{n+1}(µ)‖ ≤ (‖(A^{(n)})^{−1} B^{(n)}‖ + L_g ‖(A^{(n)})^{−1}‖) ‖e^n(µ)‖
    + ‖(A^{(n)})^{−1}‖ ε_EI^n(µ) + ‖(A^{(n)})^{−1} r^{n+1}(µ)‖,   (23)

since the following two inequalities hold: ‖(A^{(n)})^{−1} B^{(n)}‖ ≤ ‖(A^{(n)})^{−1}‖ ‖B^{(n)}‖ and ‖(A^{(n)})^{−1} r^{n+1}‖ ≤ ‖(A^{(n)})^{−1}‖ ‖r^{n+1}‖. Resolving the recursion (23) with initial error ‖e⁰(µ)‖ = 0 yields the proposed error bound in (19). □

Remark 7.2. In many cases the operators L_I(t^n) and L_E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^{(n)}, B^{(n)} in (15) are constant matrices, see e.g. (12). In such a case the error bound becomes much simpler, see e.g. (31) and (33) in the next subsection.

Remark 7.3. In [11] the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (5.5) in [11]. However, the error bound may grow exponentially when G^{(j)} = ‖(A^{(j)})^{−1}‖ (‖B^{(j)}‖ + L_g) > 1 in (18). In the vector space this problem can be easily avoided by using (23) instead of (22) if G_F^{(j)} = ‖(A^{(j)})^{−1} B^{(j)}‖ + L_g ‖(A^{(j)})^{−1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(µ) by using (17). Note that all the parameter-independent matrix products in (17) (e.g., V^T (B^{(n)})^T B^{(n)} V, W^T W, W^T B^{(n)} V) can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of ‖A^{−1} r^n(µ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of 𝒩. In addition, as shown in [11], a small M′ gives good results in practice; we use M′ = 1 in the later simulations.

Remark 7.5. The 2-norm is applied in the error bounds above, and the 2-norm of a matrix H is its spectral norm. Therefore ‖H^{−1}‖ = σ_max(H^{−1}) = 1/σ_min(H), and as a result the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some output of it. In such cases it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(µ)) can be expressed in the following form:

  y(u^n(µ)) = P u^n,   (24)

where P ∈ R^{N_O×𝒩} is a constant matrix. Then the output error e_O^n(µ) = P u^n − P û^n satisfies

  ‖e_O^{n+1}(µ)‖ ≤ η̄_{N,M}^{n+1}
    := G_O^{(n)} η_{N,M}^n + ‖P (A^{(n)})^{−1}‖ ε_EI^n(µ) + ‖P‖ ‖(A^{(n)})^{−1} r^{n+1}(µ)‖,   (25)

where G_O^{(n)} = ‖P (A^{(n)})^{−1} B^{(n)}‖ + L_g ‖P (A^{(n)})^{−1}‖, n = 0, ..., K−1.


Proof. Multiplying both sides of the error equation (21) from the left by P, we get

  P e^{n+1}(µ) = P ((A^{(n)})^{−1} B^{(n)} e^n(µ) + (A^{(n)})^{−1} (g(u^n) − g(û^n))
    + (A^{(n)})^{−1} (g(û^n) − I_M[g(û^n)]) + (A^{(n)})^{−1} r^{n+1}(µ)).

Applying the Lipschitz condition of g and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

  ‖e_O^{n+1}(µ)‖ = ‖P e^{n+1}(µ)‖
    ≤ G_O^{(n)} ‖e^n(µ)‖ + ‖P (A^{(n)})^{−1}‖ ε_EI^n(µ) + ‖P‖ ‖(A^{(n)})^{−1} r^{n+1}(µ)‖.   (26)

Replacing ‖e^n(µ)‖ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once an error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

  ‖e_O^{n+1}(µ)‖ = ‖P e^{n+1}(µ)‖ ≤ ‖P‖ ‖e^{n+1}(µ)‖
    ≤ ‖P‖ (G_F^{(n)} ‖e^n(µ)‖ + ‖(A^{(n)})^{−1}‖ ε_EI^n(µ) + ‖(A^{(n)})^{−1} r^{n+1}(µ)‖).   (27)

The last inequality holds due to the inequality (23). It is obvious that the bound for ‖e_O^{n+1}(µ)‖ in (26) is sharper than that in (27); as a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than a unified basis for all of them.

Here we propose to derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Taking the error bound for the field variable c_z as an example and recalling its evolution (see (12)),

  A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,   (28)

the residual caused by the approximate solution ĉ_z^n in (13) is

  r_{c_z}^{n+1}(µ) = B ĉ_z^n + d_z^n − ((1−ε)/ε) Δt I_M[h_z(ĉ_z^n)] − A ĉ_z^{n+1}.   (29)


Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d_z^n in (28) comes from the Neumann boundary condition, which does not depend on the solution c_z^n. Instead of requiring a Lipschitz continuity condition for h_z as a function of c_a^n, c_b^n and q_z^n, we assume that there exists a positive constant L_h such that

  ‖h_z(ĉ_a^n, ĉ_b^n, q̂_z^n) − h_z(c_a^n, c_b^n, q_z^n)‖ ≤ L_h ‖c_z^n − ĉ_z^n‖, n = 0, ..., K.   (30)

Assuming the initial projection error vanishes, e_{c_z}⁰(µ) = 0, we have a similar estimation for the approximation error e_{c_z}^n(µ) = c_z^n − ĉ_z^n (n = 1, ..., K), as follows:

  ‖e_{c_z}^n(µ)‖ ≤ Σ_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} (τ ε_EI^k(µ) + ‖r_{c_z}^{k+1}(µ)‖),   (31)

where C = ‖B‖ + τ L_h and τ = ((1−ε)/ε) Δt. More tightly,

  ‖e_{c_z}^n(µ)‖ ≤ η_{N,M,c_z}^n(µ)
    := Σ_{k=0}^{n−1} (G_{F_c})^{n−1−k} (τ ‖A^{−1}‖ ε_EI^k(µ) + ‖A^{−1} r_{c_z}^{k+1}(µ)‖),   (32)

where G_{F_c} = ‖A^{−1} B‖ + τ L_h ‖A^{−1}‖.

Analogously, the error bound for the output of interest, e_{c_z,O}^n(µ) = P c_z^n − P ĉ_z^n, can be obtained based on the error bound of the field variable. Similar to (25), we have

  ‖e_{c_z,O}^{n+1}(µ)‖ ≤ η̄_{N,M,c_z}^{n+1}(µ)
    := G_{O_c} η_{N,M,c_z}^n(µ) + τ ‖P A^{−1}‖ ε_EI^n(µ) + ‖P‖ ‖A^{−1} r_{c_z}^{n+1}(µ)‖,   (33)

where G_{O_c} = ‖P A^{−1} B‖ + τ L_h ‖P A^{−1}‖. Note that P = (0, ..., 0, 1) ∈ R^{1×𝒩} in this model, which means that the norm of the output error e_{c_z,O}^{n+1}(µ) is the absolute value of the last entry of the error vector e_{c_z}^{n+1}(µ).

Remark 7.8. The error estimates for q_a and q_b in (12) can also be obtained similarly, following the derivation in Section 7.1. As the output of interest for the system (12) depends only on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the combined field vector U = (c_a, c_b, q_a, q_b)^T by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the output error bound in (33) involves the error bound η_{N,M,c_z}^n(µ) for the field variable c_z, and so would the corresponding bound (denoted η_{N,M,U}^n(µ)) for the vector U if the output error estimation were derived by considering all the field variables together. Obviously, the error bound η_{N,M,U}^n(µ) is much rougher than the bound η_{N,M,c_z}^n(µ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be chosen conservatively large while the weight τ L_h remains small, because the time step Δt is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables (η_{N,M}^n(µ) or η_{N,M,c_z}^n(µ)) accumulates over time. Since η_{N,M}^n(µ) (respectively η_{N,M,c_z}^n(µ)) enters the output error bound in (25) (respectively (33)), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that error estimates like (18) may lose sharpness when many time instances t^n, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that allows an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate d_η of the error bound. If d_η is smaller than a predefined tolerance, indicating that the error bound stagnates, we further check the true output error at the parameter µ_max determined by the greedy algorithm. When the true output error at µ_max is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: P_train, µ⁰, tol_RB (< 1), tol_decay
Output: RB V = [V₁, ..., V_N]
1: Implement Step 1 in Algorithm 4
2: while the error η_N(µ_max) > tol_RB do
3:   Implement Steps 3−6 in Algorithm 4
4:   Compute the decay rate of the error bound, d_η = (η_{N−1}(µ_max^old) − η_N(µ_max)) / η_{N−1}(µ_max^old)
5:   if d_η < tol_decay then
6:     Compute the true output error e_N(µ_max) at the selected parameter µ_max
7:     if e_N(µ_max) < tol_RB then
8:       Stop
9:     end if
10:  end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
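The stagnation test of Algorithm 5 is a one-line check. A sketch with illustrative names, combining Steps 4-7 into a single predicate:

```python
def early_stop(eta_old, eta_new, true_err, tol_rb, tol_decay):
    """Early-stop test of Algorithm 5: stop extending the RB if the error
    bound stagnates (relative decay below tol_decay) while the true output
    error at the selected parameter is already below tol_rb."""
    decay = (eta_old - eta_new) / eta_old
    return decay < tol_decay and true_err < tol_rb
```

Note that the (expensive) true output error only needs to be computed when the decay rate has already fallen below tol_decay, so the extra cost is incurred rarely.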

8 Numerical experiments

In this work the RB methodology is employed to accelerate the optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable µ = (Q, t_in) is optimally chosen in a reasonable parameter domain to maximize the production rate Pr(µ) = s(µ) Q / t_cyc while respecting the requirement on the recovery yield Rec(µ) = s(µ) / (t_in (c_a^f + c_b^f)). Here s(µ) = ∫_{t₃}^{t₄} c_{a,O}(t; µ) dt + ∫_{t₁}^{t₂} c_{b,O}(t; µ) dt, and c_{z,O}(t; µ) = c_z(t, 1; µ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the following optimization problem of batch chromatography:

  min_{µ ∈ P} −Pr(µ)
  s.t. Rec_min − Rec(µ) ≤ 0, µ ∈ P,
       c_z(µ), q_z(µ) are the solutions to the system (3)−(5), z = a, b.   (34)

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integrals in s(µ) can be evaluated accurately. The small step size results in a very large number (up to O(10⁴)) of total time steps for every parameter µ ∈ P, which causes difficulties in the error estimation and the generation of the reduced basis.
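To make the role of the ROM in (34) concrete, the following sketch replaces the full simulations by a black-box `rom_outputs(mu)` returning (Pr, Rec), and searches the box P on a grid. All names here are illustrative assumptions; in practice a proper NLP solver would take the place of the grid search, with each objective and constraint evaluation driven by the reduced model.

```python
import numpy as np

def optimize_on_rom(rom_outputs, Q_range, tin_range, rec_min, n_grid=20):
    """Grid-search sketch of the ROM-based optimization (34):
    maximize Pr(mu) over mu = (Q, t_in) subject to Rec(mu) >= rec_min.
    rom_outputs(mu) -> (Pr, Rec), evaluated with the reduced model."""
    best, best_mu = -np.inf, None
    for Q in np.linspace(*Q_range, n_grid):
        for tin in np.linspace(*tin_range, n_grid):
            pr, rec = rom_outputs((Q, tin))
            if rec >= rec_min and pr > best:   # keep best feasible point
                best, best_mu = pr, (Q, tin)
    return best_mu, best
```

Since every evaluation only solves the small system (14), even a dense sampling of P remains cheap compared to a single full-order optimization step.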

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable µ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t₂ and t₃ in s(µ). To capture the dynamics precisely, the dimension 𝒩 of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

  Column dimensions [cm]                       | 2.6 × 10.5
  Column porosity ε [-]                        | 0.4
  Peclet number Pe [-]                         | 2000
  Mass-transfer coefficients κ_z, z = a, b [1/s] | 0.1
  Feed concentrations c^f_z, z = a, b [g/l]    | 2.9

Table 2: Coefficients of the adsorption isotherm equation.

  H_a1 [-]   | 2.69    H_b1 [-]   | 3.73
  H_a2 [-]   | 0.1     H_b2 [-]   | 0.3
  K_a1 [l/g] | 0.0336  K_b1 [l/g] | 0.0446
  K_a2 [l/g] | 1.0     K_b2 [l/g] | 3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, and then show the output error estimation used for the generation of the RB. Finally, we present the results of the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtimes for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved; in other words, a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0×10^-4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0×10^-7) with different thresholds. M′ = 1 is the number of basis vectors used for error estimation.

          tol_ASS    | Res(ξ^a_{M+M′}), Res(ξ^b_{M+M′}) | M (W_a, W_b) | Runtime [h]
  no ASS  –          | 9.2×10^-8, 8.5×10^-8             | (146, 152)   | 62.5 (–)
  ASS     1.0×10^-4  | 9.6×10^-8, 8.1×10^-8             | (147, 152)   | 6.05 (−90.3%)
  ASS     1.0×10^-3  | 8.7×10^-8, 9.9×10^-8             | (147, 152)   | 3.62 (−94.2%)
  ASS     1.0×10^-2  | 9.4×10^-8, 6.2×10^-8             | (144, 150)   | 2.70 (−95.7%)

Table 4 shows a comparison of the runtimes for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0×10^-4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0×10^-7, tol_RB = 1.0×10^-6, tol_ASS = 1.0×10^-4. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithms      | Runtime [h]
  POD-Greedy      | 16.22^1
  ASS-POD-Greedy  | 7.92 (−51.2%)

  ^1 Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient error estimation for the output when generating the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(μ) = P c^n_z(μ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator η_N(μ) in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η^{n+1}_{N,M,c_z}(μ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1}, for a given parameter μ ∈ P. We use the following error bound in Algorithm 4: η_N(μ_max) = max_{μ∈P_train} max_{z∈{a,b}} η̄_{N,M,c_z}(μ), where η̄_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^K η^n_{N,M,c_z}(μ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as e^max_N = max_{μ∈P_train} e_N(μ), where e_N(μ) = max_{z∈{a,b}} e_{N,c_z}(μ), e_{N,c_z}(μ) = (1/K) Σ_{n=1}^K ‖c^n_{z,O}(μ) − ĉ^n_{z,O}(μ)‖, and ĉ^n_{z,O}(μ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using

Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. With the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make


the true output error very small, and the output error bound begins to stagnate, so that the early stop provides a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.

Here we point out that the difference between the error bound for the field variable and the output error bound is not very large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates; this will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed detailed and reduced simulations over a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{μ∈P_val} e_N(μ). It is seen that the average runtime of a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column, computed using the FOM and the ROM at a given parameter μ = (Q, t_in) = (0.1018, 1.3487), are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2: semilogarithmic plot — x-axis: size of RB N (6 to 66); y-axis: max error over P_train (10^-9 to 10^3); curves: field variable error bound, output error bound, true output error]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound η_N(μ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{μ∈P_train} max_{z∈{a,b}} η_{N,M,c_z}(μ), where η_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^K η^n_{N,M,c_z}(μ).

[Figure 3: semilogarithmic plot — x-axis: size of RB N (6 to 56); y-axis: max error over P_train (10^-7 to 10^3); curves: output error bound, true output error]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4: circles in the parameter plane — axes: feed flow rate Q ∈ [0.0667, 0.1667] and injection period t_in ∈ [0.5, 2.0]]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0×10^-7, tol_ASS = 1.0×10^-4, tol_RB = 1.0×10^-6.

  Simulations           | Max error   | Average runtime [s] | SpF
  FOM (N = 1500)        | –           | 312.13              | (–)
  ROM (POD-Greedy)      | 3.79×10^-7  | 6.3                 | 50
  ROM (ASS-POD-Greedy)  | 4.58×10^-7  | 6.3                 | 50

[Figure 5: dimensionless concentration (0 to 0.9) versus dimensionless time (0 to 11); curves: c_a FOM, c_b FOM, c_a ROM, c_b ROM]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter μ = (Q, t_in) = (0.1018, 1.3487).

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

    min_{μ∈P}  −P̃r(μ)
    s.t.  Rec_min − R̃ec(μ) ≤ 0,
          ĉ^n_z(μ), q̂^n_z(μ) are the RB approximations from the ROM (14),  z = a, b.

Here P̃r(μ) and R̃ec(μ) are the approximate production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production rate and the recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let μ^k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖μ^{k+1} − μ^k‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0×10^-4. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to solving the FOM-based optimization is significantly reduced: the speed-up factor (SpF) is 54.
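The actual optimizer is NLopt's DIRECT-L, which the sketch below does not reproduce. Instead, it illustrates on a toy quadratic stand-in for the ROM-based Pr and Rec (all function names, constants, and the naive grid-refinement search are hypothetical) how the yield constraint can be handled by a penalty in a gradient-free box search, and how the stopping test ‖μ^{k+1} − μ^k‖ < tol_opt terminates the iteration.

```python
import itertools, math

# toy stand-ins for the ROM-based evaluations (illustrative only)
def Pr_rom(Q, t_in):  return -(Q - 0.08) ** 2 - (t_in - 1.05) ** 2 + 0.02
def Rec_rom(Q, t_in): return 0.85 - 2.0 * abs(Q - 0.08)

REC_MIN, PENALTY = 0.8, 1e3

def objective(Q, t_in):
    # minimize -Pr, penalizing violations of Rec_min - Rec <= 0
    viol = max(0.0, REC_MIN - Rec_rom(Q, t_in))
    return -Pr_rom(Q, t_in) + PENALTY * viol ** 2

def refine_search(bounds, tol_opt=1e-4, n=9):
    """Crude stand-in for a DIRECT-type search: evaluate a local grid,
    move to the best point, and shrink the box (clamped to the parameter
    domain) until the iterate moves less than tol_opt."""
    (qlo, qhi), (tlo, thi) = bounds
    mu = (0.5 * (qlo + qhi), 0.5 * (tlo + thi))
    while True:
        qs = [qlo + i * (qhi - qlo) / (n - 1) for i in range(n)]
        ts = [tlo + i * (thi - tlo) / (n - 1) for i in range(n)]
        best = min(itertools.product(qs, ts), key=lambda m: objective(*m))
        step = math.hypot(best[0] - mu[0], best[1] - mu[1])
        mu = best
        if step < tol_opt:           # stopping test ||mu^{k+1} - mu^k|| < tol
            return mu
        dq, dt = 0.25 * (qhi - qlo), 0.25 * (thi - tlo)
        qlo, qhi = max(0.0667, mu[0] - dq), min(0.1667, mu[0] + dq)
        tlo, thi = max(0.5, mu[1] - dt), min(2.0, mu[1] + dt)

mu_opt = refine_search(((0.0667, 0.1667), (0.5, 2.0)))
```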

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is used repeatedly in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many-query simulations with varying parameters, the following two runtimes should be well balanced: the cost of constructing and using a surrogate ROM, and the cost of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

  Simulations     | Objective (Pr) | Opt. solution (μ)  | N_it^1 | Runtime [h] | SpF
  FOM-based Opt.  | 0.020264       | (0.07964, 1.05445) | 202    | 33.88       | –
  ROM-based Opt.  | 0.020266       | (0.07964, 1.05445) | 202    | 0.63        | 54

  ^1 N_it denotes the number of iterations required to converge.

9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed, which stops the RB extension in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).

[6] P. Benner, E. Sachs and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera and G. Turinici, Reliable real-time solution of parametrized partial differential equations: reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.



cost. Many efforts have been made toward the optimization of batch chromatography over the past several decades; an extensive review of the early work can be found in [20] and the references therein. An iterative optimization approach for batch chromatography was addressed in [17], and a hierarchical approach to optimal control of a hybrid batch chromatographic process was developed in [19]. Notably, all of these studies are based on a finely discretized FOM. Such a model, with a large number of DOFs, is able to capture the dynamics of the process, and the accuracy of the optimal solution obtained from it can be guaranteed. However, the expensive FOM must be solved repeatedly during the optimization, which makes the runtime for obtaining the optimal solution rather long.

In this work, the RBM is employed to generate a surrogate ROM of the parameterized PDEs. The resulting ROM is used for a rapid evaluation of the output response y(u_N) of the discretized system Φ(u_N(t; μ); μ) = 0 in (2) during the optimization process. In the next section we review the RBM and highlight some of the difficulties involved.

3 Reduced basis methods

Reduced basis methods, first introduced in the late 1970s for nonlinear structural analysis [31], have gained increasing popularity for parametrized PDEs in the last decade [18, 24, 34]. The basic assumption of RBMs is that the solution u(μ) of the parametrized PDEs depends smoothly on the parameter μ in the parameter domain P, such that for any parameter μ ∈ P the corresponding solution u(μ) can be well approximated by a properly precomputed basis, called the reduced basis. In addition, the RBM is often endowed with a posteriori error estimation, which is used for the qualification of the resulting ROM.

Consider a parametrized evolution problem defined over the spatial domain Ω ⊂ R^d and the parameter domain P ⊂ R^p:

    ∂_t u(t, x; μ) + L[u(t, x; μ)] = 0,   t ∈ [0, T], x ∈ Ω, μ ∈ P,   (6)

where L[·] is a spatial differential operator. Let W^N ⊂ L²(Ω) be an N-dimensional discrete space in which an approximate numerical solution to equation (6) is sought, and let 0 = t^0 < t^1 < ··· < t^K = T be K + 1 time instants in the time interval [0, T]. Given μ ∈ P, with suitable initial and boundary conditions, the numerical solution u^n(μ) at time t = t^n can be obtained using suitable numerical methods, e.g., the finite volume method. Assume that u^n(μ) ∈ W^N satisfies the evolution scheme

    L_I(t^n)[u^{n+1}(μ)] = L_E(t^n)[u^n(μ)] + g(u^n(μ); μ),   (7)

where L_I(t^n)[·] and L_E(t^n)[·] are linear implicit and explicit operators, respectively, and g(·) is a nonlinear μ-dependent operator. These operators result from the discretization of the time derivative and the spatial differential operator L. For implicit FV schemes, L_I(t^n) can be nonlinear, see e.g. [11], but we only consider the linear case in this paper. By convention, u^n(μ) is regarded as the "true" solution, under the assumption that the numerical solution is a faithful approximation of the exact (analytical) solution u(t^n, x; μ) at the time instance t^n.


The RBM aims to find a suitable low-dimensional subspace

    W_N = span{V_1, ..., V_N} ⊂ W^N,

and to solve the resulting ROM to obtain the RB approximation û^n(μ) of the "true" solution u^n(μ). In addition to, or instead of, the field variable itself, approximations of outputs of interest can be obtained inexpensively as ŷ(μ) = y(û(μ)). More precisely, given a matrix V = [V_1, ..., V_N] whose columns span the reduced basis space, Galerkin projection is employed to generate the ROM as follows:

    V^T L_I(t^n)[V a^{n+1}(μ)] = V^T L_E(t^n)[V a^n(μ)] + V^T g(V a^n(μ)),   (8)

where a^n(μ) = (a^n_1(μ), ..., a^n_N(μ))^T ∈ R^N is the vector of weights in the approximation û^n(μ) = V a^n(μ) = Σ_{i=1}^N a^n_i(μ) V_i; it is the vector of unknowns of the ROM. Thanks to the linearity of the operators L_I and L_E, the ROM (8) can be rewritten as

    V^T L_I(t^n) V a^{n+1}(μ) = V^T L_E(t^n) V a^n(μ) + V^T g(V a^n(μ)),   (9)

where V^T L_I(t^n) V and V^T L_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the last term of (9), V^T g(V a^n(μ)), cannot be treated analogously because of the nonlinearity of g. This is tackled by a technique of empirical interpolation, to be addressed in the next section.
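The offline/online split behind (9) can be sketched as follows. The random dense matrices are illustrative stand-ins for the (large, typically sparse) discretized operators, and all names are hypothetical; the point is that the projected linear operators are formed once, while the nonlinear term still requires lifting to the full dimension at every time step — precisely the bottleneck that empirical interpolation removes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_rb, K = 200, 8, 50

# toy full-order operators (stand-ins for L_I(t^n) and L_E(t^n))
L_I = np.eye(n_full) + 0.001 * rng.standard_normal((n_full, n_full))
L_E = 0.95 * np.eye(n_full) + 0.001 * rng.standard_normal((n_full, n_full))
g = lambda u: 0.01 * np.tanh(u)        # stands in for the nonlinear term

V, _ = np.linalg.qr(rng.standard_normal((n_full, n_rb)))   # orthonormal RB

# offline: project the linear operators once (small N x N matrices)
LI_r = V.T @ L_I @ V
LE_r = V.T @ L_E @ V

# online: only the nonlinear term still touches the full dimension n_full
a = V.T @ rng.standard_normal(n_full)  # reduced weights a^0
for n in range(K):
    a = np.linalg.solve(LI_r, LE_r @ a + V.T @ g(V @ a))
```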

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], shown in Algorithm 1.

Algorithm 1: RB generation using POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ^0, η_N(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^K
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū = [ū^0, ..., ū^K], with ū^n = u^n(μ_max) − Π_{W_N}[u^n(μ_max)], n = 0, ..., K; here Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ∈P_train} η_N(μ)
7: end while
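The enrichment step (Step 4) can be sketched as follows; `extend_rb` and the toy random trajectory are hypothetical, assuming the Euclidean inner product.

```python
import numpy as np

def extend_rb(V, trajectory):
    """One enrichment step of Algorithm 1 (Step 4): project the trajectory
    of the currently worst parameter onto span(V), and append the first
    POD mode (first left singular vector) of the projection error."""
    U = np.column_stack(trajectory)            # snapshots u^0, ..., u^K
    if V.shape[1] > 0:
        U = U - V @ (V.T @ U)                  # projection errors \bar{u}^n
    w, _, _ = np.linalg.svd(U, full_matrices=False)
    return np.column_stack([V, w[:, 0]])       # enriched basis [V, V_{N+1}]

# toy usage: a 100-dimensional trajectory, starting from an empty basis
rng = np.random.default_rng(1)
traj = [rng.standard_normal(100) for _ in range(20)]
V = np.empty((100, 0))
V = extend_rb(V, traj)
V = extend_rb(V, traj)   # columns stay orthonormal by construction
```

Because the second mode is computed from the projection error, the new column is automatically orthogonal to the existing basis.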

Remark 3.1. In Algorithm 1, η_N(μ_max) is an indicator for the error of the ROM. It can be the true error or an error estimate. Since the true error requires the "true" solution u^n(μ), obtained by solving the full, large system, an error bound is usually used instead; this is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute than the true error, but it may not always work well. For example, as the basis is extended, the error bound may decrease only slowly, or even stagnate after some steps. To circumvent this problem, we propose a remedy: check the true error at the parameter determined by the greedy algorithm, and stop the extension of the RB early. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work, we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or nonaffine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or nonaffine part, e.g., V^T g(V a^n(μ)) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can treat an operator that depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a nonaffine μ-dependent function g(x; μ) with sufficient regularity, (x, μ) ∈ Ω × P ⊂ R^d × R^p, the idea of EIM is to approximate g(x; μ) by a linear combination of a precomputed μ-independent basis W = [W_1, ..., W_M], termed the collateral reduced basis (CRB), with corresponding μ-dependent coefficients σ(μ) = [σ_1(μ), ..., σ_M(μ)]^T, i.e.,

    ĝ(x; μ) = Σ_{i=1}^M W_i(x) σ_i(μ).

Here the coefficients σ_i are parameter-dependent and determined by solving the linear system

    g(x_j; μ) = Σ_{i=1}^M W_i(x_j) σ_i(μ),   j = 1, ..., M,   (10)

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ĝ(x; μ) interpolates the exact value g(x; μ) at the EI points T_M = {x_1, ..., x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Remark 4.1. Algorithm 2 is used for the fast evaluation of a nonaffine function of the coordinate x and the parameter μ by interpolation. In [11, 25] the idea


Algorithm 2: Generation of the CRB and EI points
Input: L^crb_train = {g(·; μ) | μ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, ..., W_M] and EI points T_M = {x_1, ..., x_M}
1: Initialization: m = 1, W^0_EI = [ ], ξ^0 = 1
2: while ‖ξ_{m−1}‖ > tol_CRB do
3:   For each g ∈ L^crb_train, compute the "best" approximation ĝ = Σ_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, ..., W_{m−1}}, where the σ_i are obtained by solving the linear system (10)
4:   Define g_m = arg max_{g∈L^crb_train} ‖g − ĝ‖ and the residual ξ_m = g_m − ĝ_m
5:   if ‖ξ_m‖ ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis vector:
         x_m = arg sup_{x∈Ω} |ξ_m(x)|,   W_m = ξ_m / ξ_m(x_m).   (11)
9:   end if
10:  m = m + 1
11: end while

was extended to the more general case of empirical operator interpolation, which applies to an operator that also depends on the field variable u(t, x; μ), e.g., g(u(t, x; μ), x; μ). The evaluation of g(x_j; μ) in (10) is then replaced by g(u(t, x_j; μ), x_j; μ). In this paper we use empirical operator interpolation, where the nonaffine operator appears as g(u(t, x; μ); μ); the details are addressed in Section 6.2.
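The greedy loop of Algorithm 2 can be sketched as follows, here for plain μ-dependent functions sampled on a spatial grid. The toy exponential snapshot family and all names are illustrative; the training-set maximization in Step 4 is done in the sup-norm on the grid.

```python
import numpy as np

def eim(snapshots, tol):
    """Greedy construction of the CRB and EI points in the spirit of
    Algorithm 2. snapshots is an (n_x, n_mu) array whose columns are
    evaluations g(x; mu) of the nonaffine function on a spatial grid."""
    W, pts = [], []                      # CRB vectors and EI point indices
    while True:
        if W:
            B = np.column_stack(W)
            # interpolatory coefficients: match each snapshot at the EI points
            coef = np.linalg.solve(B[pts, :], snapshots[pts, :])
            resid = snapshots - B @ coef
        else:
            resid = snapshots.copy()
        errs = np.max(np.abs(resid), axis=0)    # sup-norm error per snapshot
        j = int(np.argmax(errs))
        if errs[j] <= tol:                      # all snapshots well interpolated
            basis = np.column_stack(W) if W else np.empty((snapshots.shape[0], 0))
            return basis, pts
        xi = resid[:, j]                        # worst residual function
        p = int(np.argmax(np.abs(xi)))          # next EI ("magic") point, cf. (11)
        W.append(xi / xi[p])                    # normalized basis vector, cf. (11)
        pts.append(p)

# toy snapshot family g(x; mu) = exp(-mu * x) on [0, 1] (illustrative only)
x = np.linspace(0.0, 1.0, 101)
snaps = np.column_stack([np.exp(-m * x) for m in np.linspace(0.5, 5.0, 25)])
W, pts = eim(snaps, tol=1e-6)
```

The interpolation matrix B[pts, :] is lower triangular with unit diagonal (each basis vector vanishes at the previously chosen points and equals one at its own point), so the system in (10) is uniquely solvable.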

5 Adaptive snapshot selection

In this section, we propose an adaptive snapshot selection technique, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper, we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in the parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR using model-constrained optimization [9]. In these papers, the authors choose the sample points adaptively in order to obtain an "optimal" training


set. An "optimal" training set means that the original manifold M = {u(μ) | μ ∈ P} can be well represented by the submanifold M̂ = {u(μ) | μ ∈ P_train} induced by the sample set P_train, with the size of P_train as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that generating the reduced basis is time-consuming, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū, due to the large size of the matrix Ū. This is also true for the generation of the CRB, if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one could simply pick the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a given training set, we propose to select the snapshots adaptively, according to the variation of the solution trajectory {u^n(μ)}_{n=0}^K. The idea is to discard redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 is reflected by the angle θ between them: they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. Since cos(θ) = ⟨v_1, v_2⟩ / (‖v_1‖ ‖v_2‖), the quantity 1 − |⟨v_1, v_2⟩| / (‖v_1‖ ‖v_2‖) is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter μ and the initial vector u^0(μ), the numerical solutions u^n(μ), n = 1, ..., K, can be obtained, e.g., by the evolution scheme (7). Define the indicator

    Ind(u^n(μ), u^m(μ)) = 1 − |⟨u^n(μ), u^m(μ)⟩| / (‖u^n(μ)‖ ‖u^m(μ)‖),

which measures the linear dependency of the two vectors: when Ind(u^n(μ), u^m(μ)) is large, the correlation between u^n(μ) and u^m(μ) is weak. Algorithm 3 shows the realization of the ASS: u^n(μ) is taken as a new snapshot only when u^n(μ) and u^{n_j}(μ) are "sufficiently" linearly independent, i.e., when Ind(u^n(μ), u^{n_j}(μ)) is large enough; here u^{n_j}(μ) is the last selected snapshot.

Remark 5.1. The inner product ⟨·, ·⟩ : W^N × W^N → R used above is properly defined according to the solution space W^N, and the norm ‖·‖ is the one induced by this inner product. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(μ) and the subspace spanned by the already selected snapshots S_A. More redundant information can be discarded in this way, but at higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; a value of O(10^-4) gives good results for the numerical examples studied in Section 8, based on our observations.

10

Algorithm 3 Adaptive snapshot selection (ASS)Input Initial vector u0(micro) tolASSOutput Selected snapshot matrix SA = [un1(micro) un2(micro) un`(micro)]

1 Initialization j = 1 nj = 0 SA = [unj (micro)]2 for n = 1 K do3 Compute the vector un(micro)4 if Ind (un(micro) unj (micro)) gt tolASS then5 j = j + 16 nj = n7 SA = [SA unj (micro)]8 end if9 end for

examples studied in Section 8 based on our observationThe ASS technique can be easily combined with the aforementioned algorithms for

the generation of the RB and CRB For example Algorithm 4 shows the combinationwith the POD-Greedy algorithm (Algorithm 1) There is only one additional step incomparison with the original Algorithm 1

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ^0, η(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^K and adaptively select snapshots using Algorithm 3, yielding S_A^max = {u^{n_1}(μ_max), ..., u^{n_ℓ}(μ_max)}
4:   Enrich the RB: V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n_1}, ..., ū^{n_ℓ}], with ū^{n_s} = u^{n_s}(μ_max) − Π_{W_N}[u^{n_s}(μ_max)], s = 1, ..., ℓ, ℓ ≤ K; here Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ∈P_train} η_N(μ)
7: end while
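The enrichment in Step 4 of Algorithm 4 can be sketched as follows (illustrative code: the first POD mode is the dominant left singular vector of the projection-error matrix):

```python
import numpy as np

def enrich_basis(V, snapshots):
    """One POD-Greedy enrichment step (Step 4 of Algorithm 4): append the
    first POD mode of U_bar = [u^{n_1} - Pi_{W_N} u^{n_1}, ...] to V."""
    U = np.column_stack(snapshots).astype(float)
    if V.shape[1] > 0:
        U = U - V @ (V.T @ U)  # subtract the projection onto W_N = span(V)
    # first POD mode = dominant left singular vector of the error matrix
    mode = np.linalg.svd(U, full_matrices=False)[0][:, 0]
    return np.column_stack([V, mode])
```

Because the projection error is orthogonal to span(V), the appended mode keeps V orthonormal.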

6 RB scheme for batch chromatography
Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)−(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization
As mentioned in Section 1, we use an FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation of the system (3)−(4) can be written as

  A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,
  q_z^{n+1} = q_z^n + Δt h_z^n,   (12)

where c_z^n = c_z^n(μ) = (c_z^{n,1}, ..., c_z^{n,𝒩})^T and q_z^n = q_z^n(μ) = (q_z^{n,1}, ..., q_z^{n,𝒩})^T ∈ R^𝒩, with z = a, b, denote the solutions for the field variables c_z and q_z at the time instance t = t^n (n = 0, ..., K). A and B are tridiagonal constant matrices; d_z^n and h_z^n are parameter- and time-dependent,

  d_z^n = d_0^n e_1,  h_z^n = (h_z^{n,1}, ..., h_z^{n,𝒩})^T,

with d_0^n = (Δx/Pe)(λ/2 + ν) χ_{[0,t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, ..., 0)^T ∈ R^𝒩, and

  h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}) = (L ε A_c / Q) κ_z ( f_z(c_a^{n,j}, c_b^{n,j}) − q_z^{n,j} ),  j = 1, ..., 𝒩.

6.2 Reduced-order model
Let N ∈ N_+ be the number of RB vectors for c_z and q_z, and M ∈ N_+ the number of CRB vectors for the operators h_a and h_b. Here, for simplicity of the analysis, we use the same RB dimension N for c_a, c_b, q_a and q_b, but one can certainly take different dimensions; the same applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M} is the CRB for the nonlinear operator h_z, and that V_cz, V_qz ∈ R^{𝒩×N} (V_cz^T V_cz = I, V_qz^T V_qz = I) are the RB for the field variables c_z and q_z, respectively, i.e.

  h_z^n ≈ W_z β_z^n,  c_z^n ≈ c̃_z^n = V_cz a_cz^n,  q_z^n ≈ q̃_z^n = V_qz a_qz^n,  n = 0, ..., K.   (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

  Â_cz a_cz^{n+1} = B̂_cz a_cz^n + d_0^n d_cz − ((1−ε)/ε) Δt H_cz β_z^n,
  a_qz^{n+1} = a_qz^n + Δt H_qz β_z^n,   (14)

where a_cz^n = a_cz^n(μ) = (a_cz^{n,1}, ..., a_cz^{n,N})^T and a_qz^n = a_qz^n(μ) = (a_qz^{n,1}, ..., a_qz^{n,N})^T ∈ R^N are the reduced state vectors of the ROM, and Â_cz = V_cz^T A V_cz, B̂_cz = V_cz^T B V_cz, d_cz = V_cz^T e_1, H_cz = V_cz^T W_z, H_qz = V_qz^T W_z are the reduced matrices.

Note that β_z^n = β_z^n(μ) = (β_z^{n,1}, ..., β_z^{n,M})^T ∈ R^M are the coefficient vectors for the empirical interpolation of the nonlinear operator h_z^n; they are parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(μ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

  Σ_{i=1}^M β_z^{n,i} W_z^i(x_j) = h_z^{n,j},  j = 1, ..., M.

Here the evaluation of h_z^{n,j} only needs the j-th entries (c_a^{n,j}, c_b^{n,j} and q_z^{n,j}) of the solution vectors (c_a^n, c_b^n and q_z^n), i.e. h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}). For general operator empirical interpolation, the value of the operator at an interpolation point (e.g. x_j) may depend on more entries of the solution vectors (e.g. the j-th entries and their neighbors). For more details, refer to [11, 25].
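The online evaluation of β_z^n thus amounts to a small M × M solve using only the entries of h_z at the interpolation points; a sketch (the index set `pts` of interpolation points and the evaluated values are illustrative):

```python
import numpy as np

def ei_coefficients(W, pts, h_at_pts):
    """Solve sum_{i=1}^M beta_i W_i(x_j) = h^{n,j} for the M interpolation
    points x_j; W is the (full-dim x M) CRB, pts the row indices of the
    interpolation points.  Returns beta and the interpolant W @ beta."""
    beta = np.linalg.solve(W[pts, :], h_at_pts)  # small M x M system
    return beta, W @ beta
```

The cost of this step is independent of the full dimension, which is what makes the online phase cheap.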

6.3 Offline-online decomposition
The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all 𝒩-dependent terms are computed and stored; in the online stage, for any given parameter μ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline stage, given training sets P_train^crb and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_cz and V_qz. Consequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g. Â_cz, B̂_cz, d_cz, H_cz and H_qz), and the 𝒩-independent ROM can be formulated as in (14). For a newly given parameter μ ∈ P, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered via (13).

7 Output-oriented error estimation
It is crucial to have a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g. [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provide an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are carried out in a finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than its counterpart in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response y(u^𝒩) is of interest. Hence, during the greedy algorithm, e.g. Algorithm 1 or Algorithm 4, the error estimator η_N(μ_max) should be an error estimator for the output response, which is expected to be more accurate and meaningful.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2 for all z_1, z_2 ∈ R^𝒩, and the induced norm ‖·‖ is the standard 2-norm in Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector with respect to the basis of the solution space. In such a case, the inner product should be defined with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model
For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall from Section 3 that L^I(t^n) and L^E(t^n) are linear; the evolution scheme (7) can therefore be rewritten in the vector space as

  A^(n) u^{n+1}(μ) = B^(n) u^n(μ) + g(u^n(μ); μ),   (15)

where A^(n), B^(n) ∈ R^{𝒩×𝒩} are constant matrices and g(u^n(μ); μ) ∈ R^𝒩 corresponds to the nonlinear term. Note that A^(n) and B^(n) are nonsingular for a stable scheme in practice, n = 0, ..., K−1.

Given a parameter μ ∈ P, let ũ^n(μ) = V a^n(μ) be the RB approximation of u^n(μ), and let g̃^n(μ) = I_M[g(ũ^n(μ))] = W β^n(μ) be the interpolant of the nonlinear term, where V ∈ R^{𝒩×N} and W ∈ R^{𝒩×M} are the precomputed parameter-independent bases, and a^n(μ) ∈ R^N, β^n(μ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on μ in u^n(μ), ũ^n(μ), a^n(μ) and β^n(μ), and write u^n, ũ^n, a^n and β^n instead. The following a posteriori error estimation is based on the residual

  r^{n+1}(μ) = B^(n) ũ^n + I_M[g(ũ^n)] − A^(n) ũ^{n+1}.   (16)

A simple computation gives the norm of the residual:

  ‖r^{n+1}(μ)‖² = ⟨r^{n+1}(μ), r^{n+1}(μ)⟩
    = (a^n)^T V^T (B^(n))^T B^(n) V a^n + (β^n)^T W^T W β^n
      + (a^{n+1})^T V^T (A^(n))^T A^(n) V a^{n+1} + 2 (β^n)^T W^T B^(n) V a^n
      − 2 (a^n)^T V^T (B^(n))^T A^(n) V a^{n+1} − 2 (β^n)^T W^T A^(n) V a^{n+1}.   (17)

Proposition 7.1 Assume that the operator g : R^𝒩 → R^𝒩 is Lipschitz continuous, i.e. there exists a positive constant L_g such that

  ‖g(x) − g(y)‖ ≤ L_g ‖x − y‖,  x, y ∈ W^𝒩,

and that the interpolation of g is "exact" for a certain dimension of W = [W_1, ..., W_{M+M′}], i.e.

  I_{M+M′}[g(ũ^n)] = Σ_{m=1}^{M+M′} W_m β_m^n = g(ũ^n).

Assume further that for all μ ∈ P the initial projection error vanishes, e^0(μ) = 0. Then the approximation error e^n(μ) = u^n − ũ^n satisfies

  ‖e^n(μ)‖ ≤ Σ_{k=0}^{n−1} ‖(A^(k))^{−1}‖ ( Π_{j=k+1}^{n−1} G(j) ) ( ε_EI^k(μ) + ‖r^{k+1}(μ)‖ ),   (18)

where G(j) = ‖(A^(j))^{−1}‖ (‖B^(j)‖ + L_g), and ε_EI^n(μ) is the error due to the interpolation. A sharper error bound is given by

  ‖e^n(μ)‖ ≤ η_{N,M}^n(μ) := Σ_{k=0}^{n−1} ( Π_{j=k+1}^{n−1} G_F(j) ) ( ‖(A^(k))^{−1}‖ ε_EI^k(μ) + ‖(A^(k))^{−1} r^{k+1}(μ)‖ ),   (19)

where G_F(j) = ‖(A^(j))^{−1} B^(j)‖ + L_g ‖(A^(j))^{−1}‖, n = 0, ..., K−1.

Proof. Taking the difference of (15) and (16), we obtain the error equation

  A^(n) e^{n+1}(μ) = B^(n) e^n(μ) + g(u^n) − I_M[g(ũ^n)] + r^{n+1}(μ)
    = B^(n) e^n(μ) + (g(u^n) − g(ũ^n)) + (g(ũ^n) − I_M[g(ũ^n)]) + r^{n+1}(μ).   (20)

Multiplying both sides of (20) by (A^(n))^{−1}, we obtain

  e^{n+1}(μ) = (A^(n))^{−1} B^(n) e^n(μ) + (A^(n))^{−1} (g(u^n) − g(ũ^n))
    + (A^(n))^{−1} (g(ũ^n) − I_M[g(ũ^n)]) + (A^(n))^{−1} r^{n+1}(μ).   (21)

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(ũ^n)‖ ≤ L_g ‖e^n(μ)‖. Then, by the triangle inequality and the submultiplicativity of the matrix norm,

  ‖e^{n+1}(μ)‖ ≤ ‖(A^(n))^{−1}‖ ( (‖B^(n)‖ + L_g) ‖e^n(μ)‖ + ε_EI^n(μ) + ‖r^{n+1}(μ)‖ ),   (22)

where ε_EI^n(μ) = ‖g(ũ^n) − I_M[g(ũ^n)]‖ = ‖Σ_{m=M+1}^{M+M′} W_m β_m^n‖. Resolving the recursion (22) with initial error ‖e^0(μ)‖ = 0 yields the error bound (18).

To get the error bound in (19), we revisit equation (21) and observe that the bound (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(μ)‖ is of the form

  ‖e^{n+1}(μ)‖ ≤ ( ‖(A^(n))^{−1} B^(n)‖ + L_g ‖(A^(n))^{−1}‖ ) ‖e^n(μ)‖
    + ‖(A^(n))^{−1}‖ ε_EI^n(μ) + ‖(A^(n))^{−1} r^{n+1}(μ)‖,   (23)

since ‖(A^(n))^{−1} B^(n)‖ ≤ ‖(A^(n))^{−1}‖ ‖B^(n)‖ and ‖(A^(n))^{−1} r^{n+1}‖ ≤ ‖(A^(n))^{−1}‖ ‖r^{n+1}‖. Resolving the recursion (23) with initial error ‖e^0(μ)‖ = 0 yields the proposed error bound (19). □

Remark 7.2 In many cases, the operators L^I(t^n) and L^E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^(n), B^(n) in (15) are constant matrices; see e.g. (12). In such a case, the error bound becomes much simpler; see e.g. (31) and (33) in the next subsection.

Remark 7.3 In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, that error bound may grow exponentially when G(j) = ‖(A^(j))^{−1}‖ (‖B^(j)‖ + L_g) > 1 in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) whenever G_F(j) = ‖(A^(j))^{−1} B^(j)‖ + L_g ‖(A^(j))^{−1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4 For the computation of the error bound in (18), we need the norm of the residual r^n(μ), obtained via (17). Note that all parameter-independent matrix products in (17) (such as V^T (B^(n))^T B^(n) V, W^T W, or W^T B^(n) V) can be precomputed once V and W are available, and they are computed only once for all parameters in the training set. The same holds for the computation of ‖(A^(n))^{−1} r^n(μ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of 𝒩. In addition, as shown in [11], a small M′ gives good results; in practice we use M′ = 1 in the later simulations.

Remark 7.5 The 2-norm is applied in the above error bounds, and the 2-norm of a matrix H is its spectral norm. Therefore, ‖H^{−1}‖ = σ_max(H^{−1}) = 1/σ_min(H). As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6 Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(μ)) can be expressed in the form

  y(u^n(μ)) = P u^n,   (24)

where P ∈ R^{N_O×𝒩} is a constant matrix. Then the output error e_O^n(μ) = P u^n − P ũ^n satisfies

  ‖e_O^{n+1}(μ)‖ ≤ η̄_{N,M}^{n+1} := G_O(n) η_{N,M}^n + ‖P (A^(n))^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖(A^(n))^{−1} r^{n+1}(μ)‖,   (25)

where G_O(n) = ‖P (A^(n))^{−1} B^(n)‖ + L_g ‖P (A^(n))^{−1}‖, n = 0, ..., K−1.

Proof. Multiplying both sides of the error equation (21) by P from the left, we get

  P e^{n+1}(μ) = P ( (A^(n))^{−1} B^(n) e^n(μ) + (A^(n))^{−1} (g(u^n) − g(ũ^n))
    + (A^(n))^{−1} (g(ũ^n) − I_M[g(ũ^n)]) + (A^(n))^{−1} r^{n+1}(μ) ).

Applying the Lipschitz condition of g and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

  ‖e_O^{n+1}(μ)‖ = ‖P e^{n+1}(μ)‖
    ≤ G_O(n) ‖e^n(μ)‖ + ‖P (A^(n))^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖(A^(n))^{−1} r^{n+1}(μ)‖.   (26)

Replacing ‖e^n(μ)‖ in (26) with its bound in (19), we get the proposed output error bound (25). □

Remark 7.7 Once an error estimation for the field variable is available, e.g. (19), a trivial error bound for the output (24) can be given as

  ‖e_O^{n+1}(μ)‖ = ‖P e^{n+1}(μ)‖ ≤ ‖P‖ ‖e^{n+1}(μ)‖
    ≤ ‖P‖ ( G_F(n) ‖e^n(μ)‖ + ‖(A^(n))^{−1}‖ ε_EI^n(μ) + ‖(A^(n))^{−1} r^{n+1}(μ)‖ ).   (27)

The last inequality holds due to (23). It is obvious that the bound for ‖e_O^{n+1}(μ)‖ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model
The above error estimates are derived for a scalar evolution equation, i.e. a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by stacking all the field variables into one vector. However, the behavior of the solution to each equation might be quite different; therefore it is desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all of them.

Here we derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Taking the error bound for the field variable c_z as an example and recalling its detailed evolution (see (12)),

  A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,   (28)

the residual caused by the approximate solution c̃_z^n in (13) is

  r_cz^{n+1}(μ) = B c̃_z^n + d_z^n − ((1−ε)/ε) Δt I_M[h_z(c̃_z^n)] − A c̃_z^{n+1}.   (29)

Notice that the coefficient matrices A and B are independent of time, i.e. they are constant matrices. This makes the following error bounds in (32) and (33) relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d_z^n in (28) comes from the Neumann boundary condition and does not depend on the solution c_z^n. Instead of requiring a Lipschitz continuity condition for h_z as a function of c_a^n, c_b^n and q_z^n, we assume there exists a positive constant L_h such that

  ‖h_z(c_a^n, c_b^n, q_z^n) − h_z(c̃_a^n, c̃_b^n, q̃_z^n)‖ ≤ L_h ‖c_z^n − c̃_z^n‖,  n = 0, ..., K.   (30)

Assuming the initial projection error vanishes, e_cz^0(μ) = 0, we have a similar estimation for the approximation error e_cz^n(μ) = c_z^n − c̃_z^n (n = 1, ..., K):

  ‖e_cz^n(μ)‖ ≤ Σ_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} ( τ ε_EI^k(μ) + ‖r_cz^{k+1}(μ)‖ ),   (31)

where C = ‖B‖ + τ L_h and τ = ((1−ε)/ε) Δt. More tightly,

  ‖e_cz^n(μ)‖ ≤ η_{N,M,cz}^n(μ) := Σ_{k=0}^{n−1} (G_Fc)^{n−1−k} ( τ ‖A^{−1}‖ ε_EI^k(μ) + ‖A^{−1} r_cz^{k+1}(μ)‖ ),   (32)

where G_Fc = ‖A^{−1} B‖ + τ L_h ‖A^{−1}‖.

Analogously, the error bound for the output of interest e_{cz,O}^n(μ) = P c_z^n − P c̃_z^n can be obtained based on the error bound for the field variable. Similar to (25), we have

  ‖e_{cz,O}^{n+1}(μ)‖ ≤ η̄_{N,M,cz}^{n+1}(μ) := G_Oc η_{N,M,cz}^n(μ) + τ ‖P A^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖A^{−1} r_cz^{n+1}(μ)‖,   (33)

where G_Oc = ‖P A^{−1} B‖ + τ L_h ‖P A^{−1}‖. Note that P = (0, ..., 0, 1) ∈ R^𝒩 in this model, which means the norm of the output error e_{cz,O}^{n+1}(μ) is the absolute value of the last entry of the error vector e_cz^{n+1}(μ).

Remark 7.8 The error estimates for q_a and q_b in (12) can also be obtained similarly, following the derivation in Section 7.1. As the output of interest for the system (12) only depends on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and are therefore not presented here.

Remark 7.9 As mentioned above, it is possible to derive an error estimation for the combined field variables U = (c_a, c_b, q_a, q_b)^T by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the output error bound in (33) involves the error bound η_{N,M,cz}^n(μ) for the field variable c_z; the analogous bound (denoted η_{N,M,U}^n(μ)) for the vector U would be involved if the output error estimation were derived by considering all the field variables together. Obviously, the error bound η_{N,M,U}^n(μ) is much rougher than the bound η_{N,M,cz}^n(μ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be conservatively chosen large and the weight τ L_h remains small, because the time step Δt is typically very small.

7.3 An early-stop criterion for the greedy algorithm
From the expression of the error estimator above, it is seen that the error bound for the field variables (η_{N,M}^n(μ) or η_{N,M,cz}^n(μ)) accumulates over time. Since this bound enters the output error bound in (25) (or (33), respectively), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that an error estimate like (18) may lose sharpness when many time instances t^n, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that enables an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate d_η of the error bound. If d_η is smaller than a predefined tolerance, indicating that the error bound stagnates, we further check the true output error at the parameter μ_max determined by the greedy algorithm. When the true output error at μ_max is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: P_train, μ^0, tol_RB (< 1), tol_decay
Output: RB V = [V_1, ..., V_N]
1: Implement Step 1 in Algorithm 4
2: while the error η_N(μ_max) > tol_RB do
3:   Implement Steps 3−6 in Algorithm 4
4:   Compute the decay rate of the error bound d_η = (η_{N−1}(μ_max^old) − η_N(μ_max)) / η_{N−1}(μ_max^old)
5:   if d_η < tol_decay then
6:     Compute the true output error e_N(μ_max) at the selected parameter μ_max
7:     if e_N(μ_max) < tol_RB then
8:       Stop
9:     end if
10:  end if
11: end while

Remark 7.10 It can happen that the error bound stagnates for a while but then decays again. In order to accommodate such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
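The early-stop test in Steps 4-8 of Algorithm 5 amounts to the following check (a sketch; eta_old and eta_new are the output error bounds of two consecutive greedy iterations):

```python
def early_stop(eta_old, eta_new, true_output_error, tol_rb, tol_decay):
    """Early-stop test of Algorithm 5: stop enriching when the error bound
    stagnates (relative decay below tol_decay) while the true output error
    at mu_max is already below tol_RB."""
    decay = (eta_old - eta_new) / eta_old
    return decay < tol_decay and true_output_error < tol_rb
```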

8 Numerical experiments
In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable μ = (Q, t_in) is chosen optimally in a reasonable parameter domain to maximize the production rate Pr(μ) = s(μ) Q / t_cyc while respecting the requirement on the recovery yield Rec(μ) = s(μ) / ( t_in (c_a^f + c_b^f) ). Here

  s(μ) = ∫_{t_3}^{t_4} c_{a,O}(t; μ) dt + ∫_{t_1}^{t_2} c_{b,O}(t; μ) dt,

and c_{z,O}(t; μ) = c_z(t, 1; μ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the following optimization problem of batch chromatography:

  min_{μ∈P} −Pr(μ)
  s.t. Rec_min − Rec(μ) ≤ 0, μ ∈ P,
       c_z(μ), q_z(μ) are the solutions to the system (3)−(5), z = a, b.   (34)

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integral in s(μ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter μ ∈ P, which causes difficulties in the error estimation and in the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable μ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(μ). To capture the dynamics precisely, the dimension 𝒩 of the spatial discretization in the FOM (12) is taken as 1500.

Table 1 Model parameters and operating conditions for the chromatographic model

  Column dimensions [cm]                            2.6 × 10.5
  Column porosity ε [-]                             0.4
  Peclet number Pe [-]                              2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]    0.1
  Feed concentrations c_z^f, z = a, b [g/l]         2.9

Table 2 Coefficients of the adsorption isotherm equation

  H_a1 [-]    2.69      H_b1 [-]    3.73
  H_a2 [-]    0.1       H_b2 [-]    0.3
  K_a1 [l/g]  0.0336    K_b1 [l/g]  0.0446
  K_a2 [l/g]  1.0       K_b2 [l/g]  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 (2.83 GHz, 4.00 GB RAM), unless stated otherwise.

8.1 Performance of the adaptive snapshot selection
To investigate the performance of the adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate the CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in), uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved; that is, a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^−4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3 Generation of the CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^−7) with different thresholds; M′ = 1 is the number of basis vectors used for the error estimation.

          tol_ASS        Res(ξ_a^{M+M′}, ξ_b^{M+M′})      M (W_a, W_b)   Runtime [h]
  no ASS  -              (9.2 × 10^−8, 8.5 × 10^−8)       (146, 152)     6.25 (-)
  ASS     1.0 × 10^−4    (9.6 × 10^−8, 8.1 × 10^−8)       (147, 152)     0.605 (−90.3%)
  ASS     1.0 × 10^−3    (8.7 × 10^−8, 9.9 × 10^−8)       (147, 152)     0.362 (−94.2%)
  ASS     1.0 × 10^−2    (9.4 × 10^−8, 6.2 × 10^−8)       (144, 150)     0.270 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by the POD-Greedy algorithm with and without ASS. Note that for the ASS case the CRB is precomputed with tol_ASS = 1.0 × 10^−4, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^−7, tol_RB = 1.0 × 10^−6, tol_ASS = 1.0 × 10^−4. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS, at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4 Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithm          Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (−51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound
As mentioned above, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, c_{z,O}^n(μ) = P c_z^n(μ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator η_N(μ) in the greedy algorithm (e.g. Algorithm 4 or 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η̄_{N,M,cz}^{n+1}(μ) in (33) bounds the output error of the component c_z at the time instance t^{n+1} for a given parameter μ ∈ P. We use the following error bound in Algorithm 4: η_N(μ_max) = max_{μ∈P_train} max_{z∈{a,b}} η̄_{N,M,cz}(μ), where η̄_{N,M,cz}(μ) = (1/K) Σ_{n=1}^K η̄_{N,M,cz}^n(μ) is the average of the output error bound for c_z over the whole evolution process. Accordingly, we define the reference true output error as e_N^max = max_{μ∈P_train} e_N(μ), where e_N(μ) = max_{z∈{a,b}} e_{N,cz}(μ), e_{N,cz}(μ) = (1/K) Σ_{n=1}^K ‖c_{z,O}^n(μ) − c̃_{z,O}^n(μ)‖, and c̃_{z,O}^n(μ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm; the size of each circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates; this will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations on a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{μ∈P_val} e_N(μ). It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed with the FOM and the ROM at a given parameter μ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2 Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (maximal error over P_train, log scale, versus the size N of the RB; curves: field variable error bound, output error bound, true output error). The output error bound η_N(μ_max) and the maximal true output error e_N^max are defined in Section 8.2; the field variable error bound is defined as η_N = max_{μ∈P_train} max_{z∈{a,b}} η̂_{N,M,cz}(μ), where η̂_{N,M,cz}(μ) = (1/K) Σ_{n=1}^K η_{N,M,cz}^n(μ).


Figure 3 Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error (maximal error over P_train, log scale, versus the size N of the RB; curves: output error bound, true output error).

[Plot: selected parameters in the parameter plane, injection period t_in versus feed flow rate Q.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^-7, tol_ASS = 1.0 × 10^-4, tol_RB = 1.0 × 10^-6.

Simulations              Max error       Average runtime [s]   SpF
FOM (𝒩 = 1500)           –               312.13                (–)
ROM (POD-Greedy)         3.79 × 10^-7    6.3                   50
ROM (ASS-POD-Greedy)     4.58 × 10^-7    6.3                   50

[Plot: dimensionless concentration versus dimensionless time; curves c_a-FOM, c_b-FOM, c_a-ROM, c_b-ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (𝒩 = 1500) and the ROM (N = 47) at the parameter μ = (Q, t_in) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

min_{μ∈P} −P̄r(μ)
s.t. Rec_min − R̄ec(μ) ≤ 0,
c̄_z^n(μ), q̄_z^n(μ) are the RB approximations from the ROM (14), z = a, b.

Here, P̄r(μ) and R̄ec(μ) are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let μ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, …. When ‖μ_{k+1} − μ_k‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^-4. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
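The structure of this optimization loop can be sketched in a few lines. The sketch below is illustrative only: a simple compass search stands in for NLopt's GN_DIRECT_L, the recovery constraint is handled by a quadratic penalty (a common substitute, not necessarily how the solver treats it), and `neg_pr` and `rec` are hypothetical stand-ins for the cheap ROM-based evaluations of −Pr(μ) and Rec(μ).

```python
import numpy as np

def rom_based_opt(neg_pr, rec, rec_min, mu0, bounds, tol_opt=1e-4):
    """Sketch of the ROM-based optimization loop: minimize -Pr(mu) subject
    to Rec(mu) >= Rec_min via a penalty; the step refinement mirrors the
    stopping criterion ||mu_{k+1} - mu_k|| < tol_opt used in the paper."""
    def penalized(mu):
        # hypothetical ROM-based objective with constraint penalty
        return neg_pr(mu) + 1.0e3 * max(0.0, rec_min - rec(mu)) ** 2

    mu = np.asarray(mu0, dtype=float)
    lo, hi = (np.asarray(b, dtype=float) for b in zip(*bounds))
    step = 0.1 * np.min(hi - lo)
    while step >= tol_opt:
        improved = False
        for d in range(mu.size):          # try a move along each coordinate
            for s in (step, -step):
                cand = mu.copy()
                cand[d] = np.clip(cand[d] + s, lo[d], hi[d])
                if penalized(cand) < penalized(mu):
                    mu, improved = cand, True
        if not improved:
            step *= 0.5                   # refine until the move is below tol_opt
    return mu
```

Each objective evaluation here costs only one (small) ROM solve, which is what makes global, sampling-heavy optimizers affordable in this setting.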

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the runtime for constructing and using a surrogate ROM, and that for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM

Simulations       Objective (Pr)   Opt. solution (μ*)    N_it^1   Runtime [h]   SpF
FOM-based Opt.    0.020264         (0.07964, 1.05445)    202      33.88         –
ROM-based Opt.    0.020266         (0.07964, 1.05445)    202      0.63          54

^1 N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative, due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain, and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate, and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, 2013.

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), 2014.

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, 2010.

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


The RBM aims to find a suitable low-dimensional subspace

W_N = span{V_1, …, V_N} ⊂ W^𝒩,

and solve the resulting ROM to get the RB approximation ū^n(μ) to the "true" solution u^n(μ). In addition to, or as an alternative to, the field variable itself, the approximation of outputs of interest can also be obtained inexpensively by ȳ(μ) = y(ū(μ)). More precisely, given a matrix V = [V_1, …, V_N] whose columns span the reduced basis, Galerkin projection is employed to generate the ROM as follows:

V^T L_I(t^n)[V a^{n+1}(μ)] = V^T L_E(t^n)[V a^n(μ)] + V^T g(V a^n(μ)), (8)

where a^n(μ) = (a_1^n(μ), …, a_N^n(μ))^T ∈ R^N is the vector of weights in the approximation ū^n(μ) = V a^n(μ) = Σ_{i=1}^N a_i^n(μ) V_i, and it is the vector of unknowns in the ROM. Thanks to the linearity of the operators L_I and L_E, the ROM (8) can be rewritten as

V^T L_I(t^n) V [a^{n+1}(μ)] = V^T L_E(t^n) V [a^n(μ)] + V^T g(V a^n(μ)), (9)

where V^T L_I(t^n) V and V^T L_E(t^n) V can be precomputed and stored for the construction of the ROM. However, the computation of the last term of (9), V^T g(V a^n(μ)), cannot be done analogously, because of the nonlinearity of g. This will be tackled by using a technique of empirical interpolation, to be addressed in the next section.

How to generate the RB V is crucial and is still an active field of study. A popular algorithm for the generation of the RB for time-dependent problems is the POD-Greedy algorithm [24], as shown in Algorithm 1.

Algorithm 1 RB generation using POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, …, V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ^0, η_N(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^K
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū = [ū^0, …, ū^K] with ū^n = u^n(μ_max) − Π_{W_N}[u^n(μ_max)], n = 0, …, K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, …, V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ∈P_train} η_N(μ)
7: end while

Remark 3.1. In Algorithm 1, the error η_N(μ_max) is an indicator for the error of the ROM. It can be the true error or an error estimation. Since the true error requires the "true" solution u^n(μ), obtained by solving the full large system, an error bound is usually used instead; this is explored in Section 7. The first POD mode refers to the first left singular vector, which corresponds to the largest singular value of the matrix under consideration.
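The enrichment step (Step 4) can be sketched in a few lines of linear algebra. The sketch below is ours, not from the paper's code: it assumes the current basis V is orthonormal (so the projection is V V^T u) and extracts the first POD mode via the SVD, as described in Remark 3.1.

```python
import numpy as np

def pod_greedy_enrich(V, snapshots):
    """One enrichment step (Step 4 of Algorithm 1): remove from each snapshot
    its projection onto the current RB space W_N, then append the first POD
    mode (dominant left singular vector) of the projection errors to V.
    V: (Nfull, n) with orthonormal columns (n may be 0); snapshots: (Nfull, K+1)."""
    if V.shape[1] > 0:
        residuals = snapshots - V @ (V.T @ snapshots)   # u^n - Pi_{W_N}[u^n]
    else:
        residuals = snapshots
    left, _, _ = np.linalg.svd(residuals, full_matrices=False)
    return np.hstack([V, left[:, :1]])                  # V = [V, V_{N+1}]
```

Because each new mode is computed from projection errors, the resulting basis stays orthonormal by construction.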


Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute in comparison with the true error, but it may not always work well. For example, as the basis is extended, the error bound may decrease slowly, or even stagnate after some steps. To circumvent this problem, we propose a remedy: we check the true error at the parameter determined by the greedy algorithm and stop the extension of the RB early. Details are given in Algorithm 5 in Section 7.3.

The theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 can be time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or nonaffine operators in the full model, the computational complexity cannot be reduced by using projection, because the nonlinear and/or nonaffine part, e.g., V^T g(V a^n(μ)) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM, and can be used to treat an operator which depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a nonaffine μ-dependent function g(x; μ) with sufficient regularity, (x, μ) ∈ Ω × P ⊂ R^d × R^p, the idea of EIM is to approximate g(x; μ) by a linear combination of a precomputed μ-independent basis W = [W_1, …, W_M], termed the collateral reduced basis (CRB), with corresponding μ-dependent coefficients σ(μ) = [σ_1(μ), …, σ_M(μ)]^T, i.e.,

ḡ(x; μ) = Σ_{i=1}^M W_i(x) σ_i(μ).

Here the coefficients σ_i are parameter-dependent and are determined by solving the linear system

g(x_j; μ) = Σ_{i=1}^M W_i(x_j) σ_i(μ), j = 1, …, M, (10)

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ḡ(x; μ) interpolates the exact value g(x; μ) at the EI points T_M = {x_1, …, x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Algorithm 2 Generation of CRB and EI points
Input: L^crb_train = {g(x; μ) | μ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, …, W_M] and EI points T_M = {x_1, …, x_M}
1: Initialization: m = 1, W^0_EI = [ ], ξ^0 = 1
2: while ‖ξ^{m−1}‖ > tol_CRB do
3:   For each g ∈ L^crb_train, compute the "best" approximation ḡ = Σ_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, …, W_{m−1}}, where σ_i can be obtained by solving the linear system (10)
4:   Define g_m = arg max_{g∈L^crb_train} ‖g − ḡ‖ and the error ξ_m = g_m − ḡ_m
5:   if ‖ξ_m‖ ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine the next EI point and basis:
         x_m = arg sup_{x∈Ω} |ξ_m(x)|,   W_m = ξ_m / ξ_m(x_m)   (11)
9:   end if
10:  m = m + 1
11: end while

Remark 4.1. Algorithm 2 is used for a fast evaluation of a nonaffine function of the coordinate x and the parameter μ by using interpolation. In [11, 25] the idea was extended to the more general case of empirical operator interpolation, which is more applicable for an operator that depends on the field variable u(t, x; μ), e.g., g(u(t, x; μ), x; μ). The evaluation of g(x_j; μ) in (10) is thus replaced by g(u(t, x_j; μ), x_j; μ). In this paper we use empirical operator interpolation, where the nonaffine operator appears as g(u(t, x; μ); μ). The details are addressed in Section 6.2.
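A minimal sketch of the greedy construction in Algorithm 2, for the plain EIM setting where the training data are function evaluations collected column-wise in a matrix G (one column per training parameter). The function and variable names are ours, not from a library.

```python
import numpy as np

def eim(G, tol):
    """Greedy EIM in the spirit of Algorithm 2. G: (n_grid, n_train) matrix
    whose columns are g(x_grid; mu) over a training set. Returns the CRB W
    (one column per basis) and the EI point indices."""
    W, pts = np.empty((G.shape[0], 0)), []
    while True:
        if W.shape[1] == 0:
            approx = np.zeros_like(G)
        else:
            # interpolation coefficients from the small system (10)
            sigma = np.linalg.solve(W[pts, :], G[pts, :])
            approx = W @ sigma
        errs = np.linalg.norm(G - approx, axis=0)
        m = int(np.argmax(errs))               # worst-approximated snapshot
        xi = G[:, m] - approx[:, m]            # its residual
        if np.linalg.norm(xi) <= tol:
            break
        j = int(np.argmax(np.abs(xi)))         # next EI point, cf. (11)
        W = np.hstack([W, (xi / xi[j])[:, None]])
        pts.append(j)
    return W, pts
```

The interpolation matrix W[pts, :] is unit lower triangular under this pivoting, so the small system (10) stays well conditioned and cheap to solve online.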

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible is collected. On the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by using model-constrained optimization [9]. In these papers, the authors choose the sample points adaptively to obtain an "optimal" training set. "Optimal" means that the original manifold M = {u(μ) | μ ∈ P} can be well represented by the submanifold M̄ = {u(μ) | μ ∈ P_train} induced by the sample set P_train, with the size of P_train as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū, due to the large size of the matrix Ū. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution {u^n(μ)}_{n=0}^K. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 can be reflected by the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity 1 − |⟨v_1, v_2⟩| / (‖v_1‖‖v_2‖) (note cos(θ) = ⟨v_1, v_2⟩ / (‖v_1‖‖v_2‖)) is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter μ and the initial vector u^0(μ), the numerical solution u^n(μ) (n = 1, …, K) can be obtained, e.g., by using the evolution scheme (7). Define an indicator Ind(u^n(μ), u^m(μ)) = 1 − |⟨u^n(μ), u^m(μ)⟩| / (‖u^n(μ)‖‖u^m(μ)‖), which is used to measure the linear dependency of the two vectors. When Ind(u^n(μ), u^m(μ)) is large, the correlation between u^n(μ) and u^m(μ) is weak. Algorithm 3 shows the realization of the ASS: u^n(μ) is taken as a new snapshot only when u^n(μ) and u^{n_j}(μ) are "sufficiently" linearly independent, by checking whether Ind(u^n(μ), u^{n_j}(μ)) is large enough or not. Here u^{n_j}(μ) is the last selected snapshot.

Remark 5.1. The inner product ⟨·,·⟩: W^𝒩 × W^𝒩 → R used above is properly defined according to the solution space W^𝒩, and the norm ‖·‖ is induced by the inner product correspondingly. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(μ) and the subspace spanned by the selected snapshots S_A. More redundant information can be discarded, but at higher cost. However, since the data will be compressed further, e.g., by using the POD-Greedy algorithm, we simply choose the economical case shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; a value of O(10^-4) gives good results for the numerical examples studied in Section 8, based on our observation.

Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector u^0(μ), tol_ASS
Output: Selected snapshot matrix S_A = [u^{n_1}(μ), u^{n_2}(μ), …, u^{n_ℓ}(μ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(μ)]
2: for n = 1, …, K do
3:   Compute the vector u^n(μ)
4:   if Ind(u^n(μ), u^{n_j}(μ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(μ)]
8:   end if
9: end for

The ASS technique can be easily combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1). There is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, …, V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ^0, η_N(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^K and adaptively select snapshots using Algorithm 3, obtaining S^A_max = {u^{n_1}(μ_max), …, u^{n_ℓ}(μ_max)}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n_1}, …, ū^{n_ℓ}] with ū^{n_s} = u^{n_s}(μ_max) − Π_{W_N}[u^{n_s}(μ_max)], s = 1, …, ℓ, ℓ ≪ K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, …, V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ∈P_train} η_N(μ)
7: end while

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)–(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)–(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)–(4) can be written as follows:

A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,
q_z^{n+1} = q_z^n + Δt h_z^n, (12)

where c_z^n = c_z^n(μ) = (c_z^{n,1}, …, c_z^{n,𝒩})^T and q_z^n = q_z^n(μ) = (q_z^{n,1}, …, q_z^{n,𝒩})^T ∈ R^𝒩, z = a, b, indicate the solutions of the field variables c_z and q_z at the time instance t = t^n (n = 0, …, K). A and B are tridiagonal constant matrices; d_z^n and h_z^n are parameter- and time-dependent:

d_z^n = d_0^n e_1, h_z^n = (h_z^{n,1}, …, h_z^{n,𝒩})^T,

with d_0^n = Δx Pe (λ/2 + ν) χ_{[0,t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, …, 0)^T ∈ R^𝒩, and h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}) = (L/(Q/(εA_c))) κ_z (f_z(c_a^{n,j}, c_b^{n,j}) − q_z^{n,j}), j = 1, …, 𝒩.

6.2 Reduced-order model

Let N ∈ N_+ be the number of the RB vectors for c_z and q_z, and M ∈ N_+ be the number of the CRB vectors for the operators h_a and h_b. Here, for simplicity of analysis, we use the same dimension N of the RB for c_a, c_b, q_a and q_b, but one can certainly take different dimensions for the RB; this also applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M} is the CRB for the nonlinear operator h_z, and V_{c_z}, V_{q_z} ∈ R^{𝒩×N} (V_{c_z}^T V_{c_z} = I, V_{q_z}^T V_{q_z} = I) are the RB for the field variables c_z and q_z, respectively, i.e.,

h_z^n ≈ W_z β_z^n,  c_z^n ≈ c̄_z^n = V_{c_z} a_{c_z}^n,  q_z^n ≈ q̄_z^n = V_{q_z} a_{q_z}^n,  n = 0, …, K. (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n d_{c_z} − ((1−ε)/ε) Δt H_{c_z} β_z^n,
a_{q_z}^{n+1} = a_{q_z}^n + Δt H_{q_z} β_z^n, (14)

where a_{c_z}^n = a_{c_z}^n(μ) = (a_{c_z}^{n,1}, …, a_{c_z}^{n,N})^T and a_{q_z}^n = a_{q_z}^n(μ) = (a_{q_z}^{n,1}, …, a_{q_z}^{n,N})^T ∈ R^N are the reduced state vectors of the ROM, and A_{c_z} = V_{c_z}^T A V_{c_z}, B_{c_z} = V_{c_z}^T B V_{c_z}, d_{c_z} = V_{c_z}^T e_1, H_{c_z} = V_{c_z}^T W_z, H_{q_z} = V_{q_z}^T W_z are the reduced matrices.

Note that β_z^n = β_z^n(μ) = (β_z^{n,1}, …, β_z^{n,M})^T ∈ R^M are the vectors of coefficients for the empirical interpolation of the nonlinear operator h_z^n; they are parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(μ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

Σ_{i=1}^M β_z^{n,i} W_z^i(x_j) = h̄_z^{n,j}, j = 1, …, M.

Here the evaluation of h̄_z^{n,j} only needs the j-th entries (c̄_a^{n,j}, c̄_b^{n,j} and q̄_z^{n,j}) of the solution vectors (c̄_a^n, c̄_b^n and q̄_z^n), i.e., h̄_z^{n,j} = h_z(c̄_a^{n,j}, c̄_b^{n,j}, q̄_z^{n,j}). For the general operator empirical interpolation, the value of the operator at an interpolation point (e.g., x_j) may depend on more entries of the solution vectors (e.g., the j-th entries and their neighbors). For more details, refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all 𝒩-dependent terms are computed and stored; in the online process, for any given parameter μ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline process, given training sets P^crb_train and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Consequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., A_{c_z}, B_{c_z}, d_{c_z}, H_{c_z} and H_{q_z}), and the 𝒩-independent ROM can be formulated as in (14). For a newly given parameter μ ∈ P, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered by (13).
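The decomposition can be sketched as two functions: an offline projection that forms the small matrices of (14) once, and an online step in which only N×N and N×M quantities appear. The sketch treats one component z and uses illustrative names.

```python
import numpy as np

def offline_project(A, B, Vc, Vq, Wz):
    """Offline (done once): project the full matrices onto the RB/CRB to get
    the small reduced matrices of the ROM (14)."""
    e1 = np.zeros(A.shape[0]); e1[0] = 1.0
    return (Vc.T @ A @ Vc,      # A_cz
            Vc.T @ B @ Vc,      # B_cz
            Vc.T @ e1,          # d_cz = Vc^T e_1
            Vc.T @ Wz,          # H_cz
            Vq.T @ Wz)          # H_qz

def online_step(mats, a_c, a_q, beta, d0_n, dt, eps):
    """Online: one time step of the ROM (14) for one component z; the cost
    is independent of the full dimension."""
    Ac, Bc, dc, Hc, Hq = mats
    rhs = Bc @ a_c + d0_n * dc - (1.0 - eps) / eps * dt * (Hc @ beta)
    return np.linalg.solve(Ac, rhs), a_q + dt * (Hq @ beta)
```

Everything touching the full dimension happens in `offline_project`; the online loop only solves an N×N system per step.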

7 Output-oriented error estimation

It is crucial to get a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since, in practice, all simulations are done in the finite dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than that in the operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response y(u^𝒩) is of interest. Hence, during the process of the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimation η_N(μ_max) should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2, ∀ z_1, z_2 ∈ R^𝒩. The induced norm ‖·‖ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by using the finite element method, the solution to the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm will be the induced norm correspondingly.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling from Section 3 that L_I(t^n) and L_E(t^n) are linear, the evolution scheme (7) can be rewritten as follows in the vector space:

A^(n) u^{n+1}(μ) = B^(n) u^n(μ) + g(u^n(μ); μ), (15)

where A^(n), B^(n) ∈ R^{𝒩×𝒩} are constant matrices, and g(u^n(μ); μ) ∈ R^𝒩 corresponds to the nonlinear term. Note that A^(n) and B^(n) are nonsingular for a stable scheme in practice, n = 0, …, K−1.

Given a parameter micro isin P let un(micro) = V an(micro) be the RB approximation of un(micro)and gn(micro) = IM [g(un(micro)] = Wβn(micro) be the interpolant of the nonlinear termwhere V isin RNtimesN W isin RNtimesM are the precomputed parameter-independent basesan(micro) isin RN βn(micro) isin RM are parameter-dependent coefficients In the followingfor the sake of simplicity we omit the explicit expression of the dependence on micro inun(micro) un(micro) an(micro) and βn(micro) and use un un an and βn instead The following aposteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)} \hat u^n + \mathcal{I}_M[g(\hat u^n)] - A^{(n)} \hat u^{n+1}. \qquad (16)$$

A direct computation gives the norm of the residual:

$$\begin{aligned}
\|r^{n+1}(\mu)\|^2 = \langle r^{n+1}(\mu),\, r^{n+1}(\mu)\rangle
=\;& (a^n)^T\,\underline{V^T (B^{(n)})^T B^{(n)} V}\,a^n + (\beta^n)^T\,\underline{W^T W}\,\beta^n \\
&+ (a^{n+1})^T\,\underline{V^T (A^{(n)})^T A^{(n)} V}\,a^{n+1} + 2\,(\beta^n)^T\,\underline{W^T B^{(n)} V}\,a^n \\
&- 2\,(a^n)^T\,\underline{V^T (B^{(n)})^T A^{(n)} V}\,a^{n+1} - 2\,(\beta^n)^T\,\underline{W^T A^{(n)} V}\,a^{n+1}.
\end{aligned} \qquad (17)$$
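The underlined terms in (17) are exactly what gets precomputed in the offline phase. A minimal sketch of this offline-online split, with toy dimensions and random stand-in matrices (not the chromatographic operators):

```python
import numpy as np

rng = np.random.default_rng(0)
N_full, N_rb, M = 50, 4, 3        # toy sizes: full dim, RB dim, EI dim
A = np.eye(N_full) + 0.01 * rng.standard_normal((N_full, N_full))  # stand-in for A^(n)
B = np.eye(N_full) - 0.01 * rng.standard_normal((N_full, N_full))  # stand-in for B^(n)
V = np.linalg.qr(rng.standard_normal((N_full, N_rb)))[0]   # RB basis (orthonormal)
W = np.linalg.qr(rng.standard_normal((N_full, M)))[0]      # collateral (EI) basis

# offline: the underlined parameter-independent terms of (17)
G_BB = V.T @ B.T @ B @ V
G_WW = W.T @ W
G_AA = V.T @ A.T @ A @ V
G_WB = W.T @ B @ V
G_BA = V.T @ B.T @ A @ V
G_WA = W.T @ A @ V

def res_norm_sq(a_n, a_np1, beta_n):
    """Online evaluation of ||r^{n+1}(mu)||^2 via (17); cost independent of N_full."""
    return (a_n @ G_BB @ a_n + beta_n @ G_WW @ beta_n
            + a_np1 @ G_AA @ a_np1 + 2.0 * beta_n @ G_WB @ a_n
            - 2.0 * a_n @ G_BA @ a_np1 - 2.0 * beta_n @ G_WA @ a_np1)

# sanity check against the direct full-dimensional residual (16)
a_n, a_np1 = rng.standard_normal(N_rb), rng.standard_normal(N_rb)
beta_n = rng.standard_normal(M)
r = B @ (V @ a_n) + W @ beta_n - A @ (V @ a_np1)
assert np.isclose(res_norm_sq(a_n, a_np1, beta_n), r @ r)
```

Once the six small Gram matrices are stored, each online evaluation costs only $O(N^2 + M^2)$ operations.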

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \qquad x, y \in \mathcal{W}^{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" for a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

$$\mathcal{I}_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m \cdot \beta^n_m = g(\hat u^n).$$

Assume further that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \hat u^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left\|(A^{(k)})^{-1}\right\| \left( \prod_{j=k+1}^{n-1} G^{(j)} \right) \left( \varepsilon^k_{EI}(\mu) + \|r^{k+1}(\mu)\| \right), \qquad (18)$$

where $G^{(j)} = \left\|(A^{(j)})^{-1}\right\| \left( \|B^{(j)}\| + L_g \right)$ and $\varepsilon^n_{EI}(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta^n_{N,M}(\mu) := \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} G_F^{(j)} \right) \left( \left\|(A^{(k)})^{-1}\right\| \varepsilon^k_{EI}(\mu) + \left\|(A^{(k)})^{-1} r^{k+1}(\mu)\right\| \right), \qquad (19)$$

where $G_F^{(j)} = \left\|(A^{(j)})^{-1} B^{(j)}\right\| + L_g \left\|(A^{(j)})^{-1}\right\|$, $n = 0, \ldots, K-1$.

Proof. Subtracting (16) from (15) yields the error equation

$$\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - \mathcal{I}_M[g(\hat u^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \left( g(u^n) - g(\hat u^n) \right) + \left( g(\hat u^n) - \mathcal{I}_M[g(\hat u^n)] \right) + r^{n+1}(\mu).
\end{aligned} \qquad (20)$$

Multiplying both sides of (20) by $(A^{(n)})^{-1}$, we obtain

$$\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left( g(u^n) - g(\hat u^n) \right) \\
&\quad + (A^{(n)})^{-1} \left( g(\hat u^n) - \mathcal{I}_M[g(\hat u^n)] \right) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat u^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}(\mu)\| \le \left\|(A^{(n)})^{-1}\right\| \left( \left( \|B^{(n)}\| + L_g \right) \|e^n(\mu)\| + \varepsilon^n_{EI}(\mu) + \|r^{n+1}(\mu)\| \right), \qquad (22)$$

where $\varepsilon^n_{EI}(\mu) = \|g(\hat u^n) - \mathcal{I}_M[g(\hat u^n)]\| = \sum_{m=M+1}^{M+M'} \|W_m\| \cdot |\beta^n_m|$. Resolving the recursion (22) with initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To obtain the error bound in (19), we revisit equation (21) and observe that the bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\|e^{n+1}(\mu)\| \le \left( \left\|(A^{(n)})^{-1} B^{(n)}\right\| + L_g \left\|(A^{(n)})^{-1}\right\| \right) \|e^n(\mu)\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon^n_{EI}(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (23)$$


since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\| \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\| \|r^{n+1}\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19).

Remark 7.2. In many cases the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (5.5) in [11]. However, this error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\| \left( \|B^{(j)}\| + L_g \right) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ using (17). Note that all the underlined terms in (17) can be precomputed once $V$ and $W$ are available, and they are computed only once for all parameters in the training set. The same holds for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$. In addition, as is shown in [11], a small $M'$ gives good results; in practice we use $M' = 1$ in the simulations below.

Remark 7.5. The 2-norm is applied in the error bounds above, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some derived outputs. In such cases, it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we obtain the output error estimate below.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the form

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix. Then the output error $e^n_O(\mu) = P u^n - P \hat u^n$ satisfies

$$\|e^{n+1}_O(\mu)\| \le \bar\eta^{n+1}_{N,M} := G_O^{(n)} \eta^n_{N,M} + \left\|P (A^{(n)})^{-1}\right\| \varepsilon^n_{EI}(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \qquad (25)$$

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying both sides of the error equation (21) by $P$ from the left, we get

$$\begin{aligned}
P e^{n+1}(\mu) = P \Big( & (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left( g(u^n) - g(\hat u^n) \right) \\
& + (A^{(n)})^{-1} \left( g(\hat u^n) - \mathcal{I}_M[g(\hat u^n)] \right) + (A^{(n)})^{-1} r^{n+1}(\mu) \Big).
\end{aligned}$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O^{(n)} \|e^n(\mu)\| + \left\|P (A^{(n)})^{-1}\right\| \varepsilon^n_{EI}(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once an error estimate for the field variable is available, e.g., (19), a trivial error bound for the output (24) can be given as

$$\begin{aligned}
\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| &\le \|P\| \|e^{n+1}(\mu)\| \\
&\le \|P\| \left( G_F^{(n)} \|e^n(\mu)\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon^n_{EI}(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\| \right).
\end{aligned} \qquad (27)$$

The last inequality holds due to (23). It is obvious that the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27); as a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The error estimates above are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimate by gathering all the field variables into one vector. However, the behavior of the solution to each equation may differ considerably; it is therefore preferable to generate a separate reduced basis for each field variable rather than a unified basis for all of them.

Here we propose to derive an error estimate for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example and recall the detailed scheme for $c_z$ (see (12)),

$$A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon} \Delta t\, h_z^n. \qquad (28)$$

The residual caused by the approximate solution $\hat c_z^n$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \hat c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon} \Delta t\, \mathcal{I}_M[h_z(\hat c_z^n)] - A \hat c_z^{n+1}. \qquad (29)$$

Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This makes the following error bounds in (31)-(33) relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition and does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(\hat c_a^n, \hat c_b^n, \hat q_z^n) - h_z(c_a^n, c_b^n, q_z^n)\| \le L_h \|\hat c_z^n - c_z^n\|, \qquad n = 0, \ldots, K. \qquad (30)$$

Assuming the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimate for the approximation error $e^n_{c_z}(\mu) = c_z^n - \hat c_z^n$ ($n = 1, \ldots, K$):

$$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \left( \tau \varepsilon^k_{EI}(\mu) + \|r^{k+1}_{c_z}(\mu)\| \right), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k} \left( \tau \|A^{-1}\| \varepsilon^k_{EI}(\mu) + \|A^{-1} r^{k+1}_{c_z}(\mu)\| \right), \qquad (32)$$

where $G_{F_c} = \|A^{-1}B\| + \tau L_h \|A^{-1}\|$.

(micro) = Pcnz minus P cnz canbe obtained based on the error bound of the field variable Similar to (25) we have∥∥∥en+1

czO(micro)∥∥∥ le ηn+1

NMcz(micro)

= GOcηnNMcz

(micro) + τ∥∥PAminus1∥∥ εnEI(micro) + P

∥∥Aminus1rn+1cz

(micro)∥∥ (33)

where GOc =∥∥PAminus1B

∥∥ + τLh∥∥PAminus1

∥∥ Note that P = (0 0 1) isin RN in thismodel which means the norm of the output en+1

czO(micro) is the absolute value of the last

entry of the error vector en+1cz

(micro)Remark 78 The error estimate for qa and qb in (12) can also be obtained similarlyby following the derivation in Section 71 As the output of interest for the system in(12) only depends on ca and cb the error estimates for qa and qb are not needed forthe output error bound and therefore will not be presented hereRemark 79 As is mentioned above it is possible to derive an error estimation forthe field variables U = (ca cb qa qb)T by considering hz(ca cb qz) as a function of thevector U However for the output error bound in (33) the error bound ηnNMcz

(micro) forthe field variable cz is involved so is the desired error bound (denoted as ηnNMU(micro))for the vector U if the output error estimation is derived by considering all the field

18

variables together Obviously the error bound ηnNMU(micro) is much rougher than thebound ηnNMcz

(micro)The assumption (30) is easily fulfilled in practice In fact the constant Lh can be

conservatively chosen large and the weight τLh is still small because the time step ∆tis typically very small
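The constant $L_h$ in (30) is not computed explicitly in this work; for illustration only, it can be estimated by sampling. The sketch below assumes a bi-Langmuir form for $h_z$ (the exact functional form is an assumption of this sketch, not taken from the text) with coefficient values from Tables 1-2:

```python
import numpy as np

rng = np.random.default_rng(3)
kappa = 0.1                                   # mass-transfer coefficient (Table 1)
H1, H2, K1, K2 = 2.69, 0.1, 0.0336, 1.0      # component-a coefficients (Table 2)

def h_a(ca, cb, q):
    """Stand-in for h_a: kappa*(q_eq - q) with an assumed bi-Langmuir isotherm."""
    qeq = H1 * ca / (1.0 + K1 * (ca + cb)) + H2 * ca / (1.0 + K2 * (ca + cb))
    return kappa * (qeq - q)

# crude sampling estimate of L_h in (30); only the z-component (here c_a) is
# varied, so this gives a lower estimate of the true Lipschitz constant
L_h = 0.0
for _ in range(2000):
    ca1, cb, q = rng.uniform(0.0, 2.9, 3)     # concentrations up to the feed level
    ca2 = rng.uniform(0.0, 2.9)
    if abs(ca1 - ca2) > 1e-12:
        L_h = max(L_h, abs(h_a(ca1, cb, q) - h_a(ca2, cb, q)) / abs(ca1 - ca2))
assert 0.0 < L_h < 1.0
```

Even a conservatively large $L_h$ of this magnitude keeps the weight $\tau L_h$ tiny, since $\tau$ is proportional to the (very small) time step $\Delta t$.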

7.3 An early-stop criterion for the greedy algorithm

From the expressions of the error estimators above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates over time. Since $\eta^n_{N,M}(\mu)$ (respectively $\eta^n_{N,M,c_z}(\mu)$) enters the output error bound in (25) (respectively (33)), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. Similar phenomena are reported in the literature, e.g., [30], where it is pointed out that error estimates such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. Nevertheless, the output error bound is cheap to compute, and it can still guide the parameter selection in the greedy algorithm.

To circumvent this problem, we add a validation step that allows an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, we additionally check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume that there is no need to include a new basis vector, and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: $\mathcal{P}_{train}$, $\mu^0$, $tol_{RB}\,(<1)$, $tol_{decay}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4.
2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
3:   Implement Steps 3-6 in Algorithm 4.
4:   Compute the decay rate of the error bound, $d_\eta = \left( \eta_{N-1}(\mu^{old}_{\max}) - \eta_N(\mu_{\max}) \right) / \eta_{N-1}(\mu^{old}_{\max})$.
5:   if $d_\eta < tol_{decay}$ then
6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
8:       Stop.
9:     end if
10:  end if
11: end while
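The control flow of Algorithm 5 can be sketched as follows; the error-bound and true-error evaluations are stubs standing in for the actual RB machinery:

```python
def greedy_with_early_stop(eta_of, true_error_of, tol_rb=1e-6, tol_decay=1e-3):
    """eta_of(N) -> (eta_N, mu_max); true_error_of(mu) -> e_N(mu)."""
    N = 1
    eta_old, _ = eta_of(N)                     # first bound evaluation
    while eta_old > tol_rb:                    # Step 2
        N += 1
        eta_new, mu_max = eta_of(N)            # Steps 3-6 of Algorithm 4
        decay = (eta_old - eta_new) / eta_old  # Step 4: decay rate d_eta
        if decay < tol_decay and true_error_of(mu_max) < tol_rb:
            return N, "early stop"             # Steps 5-9: bound stagnates, true error small
        eta_old = eta_new
    return N, "bound below tolerance"

# stub data: the bound decays, then stagnates at 1e-3 while the true error is 1e-7
etas = [1e-1, 1e-2, 1e-3, 1e-3, 1e-3, 1e-3]
N_stop, reason = greedy_with_early_stop(
    eta_of=lambda N: (etas[N - 1], (0.1, 1.3)),   # (bound, selected mu) stub
    true_error_of=lambda mu: 1e-7)
assert (N_stop, reason) == (4, "early stop")
```

With the stub data, the plain stopping test $\eta_N \le tol_{RB}$ would never be met, while the early-stop check terminates the extension as soon as the bound stagnates.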


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. To accommodate such a case, the tolerance $tol_{decay}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be triggered.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variables $\mu = (Q, t_{in})$ are chosen optimally in a reasonable parameter domain to maximize the production rate $Pr(\mu) = \frac{s(\mu)\, Q}{t_{cyc}}$ while respecting a requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c_a^f + c_b^f)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\,dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the following optimization problem for batch chromatography:

$$\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; -Pr(\mu) \\
& \text{s.t.} \quad Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& \phantom{\text{s.t.} \quad} c_z(\mu),\, q_z(\mu) \ \text{are the solutions to the system (3)-(5)}, \quad z = a, b.
\end{aligned} \qquad (34)$$

Notice that when solving the system (3)-(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integral in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes considerable difficulties in the error estimation and the generation of the reduced basis.
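For illustration, the evaluation of $Pr(\mu)$ and $Rec(\mu)$ from discrete outlet trajectories can be sketched as below; the trajectories and the cutting points are made up here, whereas in the real model $t_2$ and $t_3$ are fixed by the purity requirements:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def production_recovery(t, ca_out, cb_out, cuts, Q, t_in, t_cyc, cf=2.9):
    """Pr = s*Q/t_cyc and Rec = s/(t_in*(cf_a + cf_b)) from outlet trajectories,
    with s = int_{t3}^{t4} c_aO dt + int_{t1}^{t2} c_bO dt."""
    t1, t2, t3, t4 = cuts
    mb = (t >= t1) & (t <= t2)
    ma = (t >= t3) & (t <= t4)
    s = trap(ca_out[ma], t[ma]) + trap(cb_out[mb], t[mb])
    return s * Q / t_cyc, s / (t_in * 2.0 * cf)

# made-up outlet trajectories: two separated pulses standing in for c_{a,O}, c_{b,O}
t = np.linspace(0.0, 10.0, 2001)
cb_out = np.exp(-0.5 * ((t - 3.0) / 0.5) ** 2)
ca_out = np.exp(-0.5 * ((t - 6.0) / 0.5) ** 2)
Pr, Rec = production_recovery(t, ca_out, cb_out, (2, 4, 5, 7),
                              Q=0.1, t_in=1.3, t_cyc=10.0)
assert Pr > 0.0 and 0.0 < Rec < 1.0
```

The fine time grid used here reflects the point made above: the cutting points and the integral in $s(\mu)$ can only be resolved accurately with a small step size.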

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients of the isotherm equation (4) are given in Table 2. The parameter domain for the operating variables $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $\mathcal{N}$ of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

Column dimensions [cm]                              2.6 x 10.5
Column porosity $\varepsilon$ [-]                   0.4
Peclet number $Pe$ [-]                              2000
Mass-transfer coefficients $\kappa_z$, $z = a, b$ [1/s]   0.1
Feed concentrations $c_z^f$, $z = a, b$ [g/l]       2.9


Table 2: Coefficients of the adsorption isotherm equation.

$H_{a1}$ [-]   2.69      $H_{b1}$ [-]   3.73
$H_{a2}$ [-]   0.1       $H_{b2}$ [-]   0.3
$K_{a1}$ [l/g] 0.0336    $K_{b1}$ [l/g] 0.0446
$K_{a2}$ [l/g] 1.0       $K_{b2}$ [l/g] 3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection (ASS) technique, we compare the runtimes for the generation of the RB and CRB with different threshold values $tol_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is employed as well. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{in})$ uniformly distributed over the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved; this means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of the CRBs ($W_a$, $W_b$) at the same error tolerance ($tol_{CRB} = 1.0 \times 10^{-7}$) with different thresholds; $M' = 1$ is the number of basis vectors used for the error estimation.

          $tol_{ASS}$            Res($\xi^a_{M+M'}$, $\xi^b_{M+M'}$)              $M$ ($W_a$, $W_b$)   Runtime [h]
no ASS    --                     ($9.2\times10^{-8}$, $8.5\times10^{-8}$)         (146, 152)           62.5 (--)
ASS       $1.0\times10^{-4}$     ($9.6\times10^{-8}$, $8.1\times10^{-8}$)         (147, 152)           6.05 (-90.3%)
ASS       $1.0\times10^{-3}$     ($8.7\times10^{-8}$, $9.9\times10^{-8}$)         (147, 152)           3.62 (-94.2%)
ASS       $1.0\times10^{-2}$     ($9.4\times10^{-8}$, $6.2\times10^{-8}$)         (144, 150)           2.70 (-95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that for the ASS variant the CRB is precomputed with $tol_{ASS} = 1.0\times10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed over the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0\times10^{-7}$, $tol_{RB} = 1.0\times10^{-6}$, $tol_{ASS} = 1.0\times10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithm           Runtime [h]
POD-Greedy          16.22 ^1
ASS-POD-Greedy      7.92 (-51.2%)

^1 Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c_z^n(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\bar\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) bounds the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. In Algorithm 4 we use the following error indicator: $\eta_N(\mu_{\max}) = \max_{\mu \in \mathcal{P}_{train}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \bar\eta^n_{N,M,c_z}(\mu)$ is the average of the output error bound for $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu \in \mathcal{P}_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \hat c^n_{z,O}(\mu)\|$, and $\hat c^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where $tol_{decay} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm; the size of each circle indicates how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max error} = \max_{\mu \in \mathcal{P}_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (log-scale maximal error over $\mathcal{P}_{train}$ versus size of RB $N$; curves: field variable error bound, output error bound, true output error). The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in \mathcal{P}_{train}} \max_{z \in \{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.]

[Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error (log-scale maximal error over $\mathcal{P}_{train}$ versus size of RB $N$).]

[Figure 4: Parameter selection in the generation of the RB (feed flow rate $Q$ versus injection period $t_{in}$). The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.]

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0\times10^{-7}$, $tol_{ASS} = 1.0\times10^{-4}$, $tol_{RB} = 1.0\times10^{-6}$.

Simulation                      Max error               Average runtime [s]   SpF
FOM ($\mathcal{N} = 1500$)      --                      312.13                (--)
ROM, POD-Greedy                 $3.79\times10^{-7}$     6.3                   50
ROM, ASS-POD-Greedy             $4.58\times10^{-7}$     6.3                   50

[Figure 5: Concentrations at the outlet of the column (dimensionless concentration versus dimensionless time) computed using the FOM ($\mathcal{N} = 1500$) and the ROM ($N = 47$) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$; curves: $c_a$-FOM, $c_b$-FOM, $c_a$-ROM, $c_b$-ROM.]

8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; -\hat{Pr}(\mu) \\
& \text{s.t.} \quad Rec_{\min} - \hat{Rec}(\mu) \le 0, \\
& \phantom{\text{s.t.} \quad} \hat c_z^n(\mu),\, \hat q_z^n(\mu) \ \text{are the RB approximations from the ROM (14)}, \quad z = a, b.
\end{aligned}$$

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0\times10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
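A runnable sketch of the ROM-based optimization loop: since GN_DIRECT_L in NLopt natively handles only bound constraints, the yield constraint is folded into a penalty here, a plain grid search stands in for the DIRECT-L solver so the sketch runs without NLopt, and the surrogate formulas for $\hat{Pr}$ and $\hat{Rec}$ are invented placeholders for the actual ROM evaluation:

```python
import itertools

def rom_objective(mu):
    """Penalized objective -Pr_hat(mu); Pr_hat and Rec_hat below are invented smooth
    placeholders for the ROM evaluation, not the chromatographic model."""
    Q, t_in = mu
    Pr_hat = Q * t_in / (1.0 + 5.0 * Q * t_in)     # hypothetical production surrogate
    Rec_hat = 1.0 - 0.3 * t_in                     # hypothetical recovery surrogate
    penalty = 1e3 * max(0.0, 0.8 - Rec_hat) ** 2   # Rec_min = 80 %
    return -Pr_hat + penalty

# grid search over P = [0.0667, 0.1667] x [0.5, 2.0] as a stand-in for DIRECT-L
Q_grid = [0.0667 + i * (0.1667 - 0.0667) / 20 for i in range(21)]
t_grid = [0.5 + j * (2.0 - 0.5) / 20 for j in range(21)]
mu_opt = min(itertools.product(Q_grid, t_grid), key=rom_objective)

# the surrogate rewards large Q*t_in, capped by the yield constraint at t_in <= 0.667
assert abs(mu_opt[0] - 0.1667) < 1e-9 and abs(mu_opt[1] - 0.65) < 1e-9
```

In the actual study, each `rom_objective` call would solve the ROM (14), which is what makes the repeated evaluations of the global optimizer affordable.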

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For simulations with many varying parameters, the following two runtimes should be well balanced: the cost of constructing and using a surrogate ROM, and the cost of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

Simulation        Objective ($Pr$)   Opt. solution ($\mu$)   $N_{it}$ ^1   Runtime [h]   SpF
FOM-based Opt.    0.020264           (0.07964, 1.05445)      202           33.88         --
ROM-based Opt.    0.020266           (0.07964, 1.05445)      202           0.63          54

^1 $N_{it}$ denotes the number of iterations required to converge.

9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. Empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, whereby the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed, so that the RB extension stops in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is thus qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9-44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667-672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489-2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270-3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937-969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395-422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universitaet Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268-280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501-1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84-110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401-1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339-344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859-873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423-442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471-478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277-302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145-161.

[26] K.-H. Hoffmann, G. Leugering, and F. Troeltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646-670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157-185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455-462.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1-10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43-65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70-80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.

Remark 3.2. As mentioned, an error bound is usually used as the indicator in Algorithm 1, since it is much cheaper to compute than the true error, but it may not always work well. For example, as the basis is extended, the error bound may decrease slowly or even stagnate after some steps. To circumvent this problem, we propose a remedy that checks the true error at the parameter determined by the greedy algorithm and obtains an early stop of the RB extension. Details are given in Algorithm 5 in Section 7.3.

A theoretical analysis of the convergence of the POD-Greedy algorithm is given in the recent work [21]. However, for some problems, such as batch chromatography, the implementation of Step 4 in Algorithm 1 is time-consuming, because the number of time steps K needs to be very large due to the nature of the problem (integration until a certain steady state is reached). In this work we propose an adaptive technique to reduce the cost of Step 4, which is discussed in Section 5.

4 Empirical interpolation

As mentioned above, if there are nonlinear and/or nonaffine operators in the full model, the computational complexity cannot be reduced by projection alone, because the nonlinear and/or nonaffine part, e.g., V^T g(V a^n(μ)) in (8), requires computations in the original full space. In such a case, EIM [2] or empirical operator interpolation [11] can be exploited to generate an efficient ROM. The empirical operator interpolation method is an extension of the EIM and can treat an operator that depends on the parameter, the field variable, and the spatial variable as well. The idea of EIM, introduced in [2], is briefly presented as follows.

Given a nonaffine μ-dependent function g(x; μ) with sufficient regularity, (x, μ) ∈ Ω × P ⊂ R^d × R^p, the idea of EIM is to approximate g(x; μ) by a linear combination of a precomputed μ-independent basis W = [W_1, ..., W_M], termed the collateral reduced basis (CRB), with corresponding μ-dependent coefficients σ(μ) = [σ_1(μ), ..., σ_M(μ)]^T, i.e.,

    g(x; μ) ≈ ĝ(x; μ) = Σ_{i=1}^{M} W_i(x) σ_i(μ).

Here the coefficients σ_i are parameter-dependent and determined by solving the linear system

    g(x_j; μ) = Σ_{i=1}^{M} W_i(x_j) σ_i(μ),  j = 1, ..., M,    (10)

where W_i(x_j) refers to the j-th entry of the vector W_i; the analogous notation is also used for ξ_m(x_m) in (11) in Algorithm 2. Note that the approximation ĝ(x; μ) interpolates the exact value g(x; μ) at the EI points T_M = {x_1, ..., x_M}. The generation of the CRB and the EI points is illustrated in Algorithm 2.

Algorithm 2 Generation of CRB and EI points
Input: L^crb_train = {g(x; μ) | μ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, ..., W_M] and EI points T_M = {x_1, ..., x_M}
 1: Initialization: m = 1, W^0_EI = [ ], ξ^0 = 1
 2: while ‖ξ^{m−1}‖ > tol_CRB do
 3:   For each g ∈ L^crb_train, compute the "best" approximation ḡ = Σ_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, ..., W_{m−1}}, where the σ_i can be obtained by solving the linear system (10)
 4:   Define g_m = arg max_{g ∈ L^crb_train} ‖g − ḡ‖ and the error ξ_m = g_m − ḡ_m
 5:   if ‖ξ_m‖ ≤ tol_CRB then
 6:     Stop and set M = m − 1
 7:   else
 8:     Determine the next EI point and basis vector:

           x_m = arg sup_{x ∈ Ω} |ξ_m(x)|,  W_m = ξ_m / ξ_m(x_m)    (11)

 9:   end if
10:   m = m + 1
11: end while

Remark 4.1. Algorithm 2 is used for a fast evaluation of a nonaffine function of the coordinate x and the parameter μ by interpolation. In [11, 25] the idea was extended to the more general case of empirical operator interpolation, which is applicable to an operator that depends on the field variable u(t, x; μ), e.g., g(u(t, x; μ), x; μ). The evaluation of g(x_j; μ) in (10) is then replaced by g(u(t, x_j; μ), x_j; μ). In this paper we use empirical operator interpolation, where the nonaffine operator appears as g(u(t, x; μ); μ). The details are addressed in Section 6.2.
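To make the interpolation step concrete, the following sketch (NumPy; all data are hypothetical, not taken from the chromatographic model) solves the small M × M system (10) for the coefficients σ(μ) and evaluates the interpolant; when g happens to lie in the span of the CRB, the interpolation is exact:

```python
import numpy as np

def eim_coefficients(W, ei_points, g_at_ei):
    """Solve sum_i W_i(x_j) * sigma_i = g(x_j; mu), j = 1..M, for sigma."""
    # Rows of W restricted to the EI points form the M x M system matrix.
    B = W[ei_points, :]              # B[j, i] = W_i(x_j)
    return np.linalg.solve(B, g_at_ei)

def eim_interpolant(W, sigma):
    """Evaluate g_hat(x; mu) = sum_i W_i(x) * sigma_i on the full grid."""
    return W @ sigma

# Toy example: exact reproduction when g lies in span(W).
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 3))      # hypothetical CRB, M = 3, 50 grid points
ei_points = np.array([4, 17, 33])     # hypothetical EI points
g = W @ np.array([1.0, -2.0, 0.5])    # a function in span(W)
sigma = eim_coefficients(W, ei_points, g[ei_points])
assert np.allclose(eim_interpolant(W, sigma), g)
```

Only the M entries of g at the EI points are needed online, which is what makes the evaluation independent of the full dimension.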

5 Adaptive snapshot selection

In this section we propose a technique of adaptive snapshot selection, ASS for short, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the training set should be as large as possible, so that as much information about the parametric system as possible can be collected; on the other hand, the RB or CRB should be generated efficiently.

To reduce the cost of generating the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR using model-constrained optimization [9]. In these papers the authors choose the sample points adaptively to obtain an "optimal" training set, i.e., a sample set P_train of as small a size as possible whose induced submanifold M̃ = {u(μ) | μ ∈ P_train} still represents the original manifold M = {u(μ) | μ ∈ P} well.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise, e.g., in chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots makes the generation of the reduced basis time-consuming, because the POD mode in Step 4 of Algorithm 1 is expensive to compute from the singular value decomposition of Ū due to the large size of the matrix Ū. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one could simply pick the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because important information may be lost by such a trivial snapshot selection.

For an "optimal" or otherwise given training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution {u^n(μ)}_{n=0}^{K}. The idea is to discard redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 is reflected by the angle θ between them: they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. Since cos(θ) = ⟨v_1, v_2⟩ / (‖v_1‖ ‖v_2‖), the quantity 1 − |⟨v_1, v_2⟩| / (‖v_1‖ ‖v_2‖) is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter μ and the initial vector u^0(μ), the numerical solutions u^n(μ) (n = 1, ..., K) can be obtained, e.g., by the evolution scheme (7). Define the indicator

    Ind(u^n(μ), u^m(μ)) = 1 − |⟨u^n(μ), u^m(μ)⟩| / (‖u^n(μ)‖ ‖u^m(μ)‖),

which measures the linear dependency of the two vectors: when Ind(u^n(μ), u^m(μ)) is large, the correlation between u^n(μ) and u^m(μ) is weak. Algorithm 3 shows the realization of the ASS: u^n(μ) is taken as a new snapshot only when u^n(μ) and u^{n_j}(μ) are "sufficiently" linearly independent, i.e., when Ind(u^n(μ), u^{n_j}(μ)) is large enough. Here u^{n_j}(μ) denotes the last selected snapshot.

Remark 5.1. The inner product ⟨·, ·⟩ : W^𝒩 × W^𝒩 → R used above is defined according to the solution space W^𝒩, and the norm ‖·‖ is the correspondingly induced norm. Therefore the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency it is also possible to check the angle between the tested vector u^n(μ) and the subspace spanned by the already selected snapshots S_A. More redundant information can be discarded this way, but at a higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; based on our observations, a value of O(10^{-4}) gives good results for the numerical examples studied in Section 8.

Algorithm 3 Adaptive snapshot selection (ASS)
Input: initial vector u^0(μ), tol_ASS
Output: selected snapshot matrix S_A = [u^{n_1}(μ), u^{n_2}(μ), ..., u^{n_ℓ}(μ)]
 1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(μ)]
 2: for n = 1, ..., K do
 3:   Compute the vector u^n(μ)
 4:   if Ind(u^n(μ), u^{n_j}(μ)) > tol_ASS then
 5:     j = j + 1
 6:     n_j = n
 7:     S_A = [S_A, u^{n_j}(μ)]
 8:   end if
 9: end for

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.
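A minimal sketch of the selection loop of Algorithm 3 (NumPy; the slowly rotating toy trajectory is hypothetical and stands in for the FV solution trajectory):

```python
import numpy as np

def ind(u, v):
    """Linear-dependency indicator: 1 - |<u, v>| / (||u|| ||v||)."""
    return 1.0 - abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def adaptive_snapshot_selection(trajectory, tol_ass=1e-4):
    """Keep u^n only if it is 'sufficiently' independent of the last
    selected snapshot; returns the indices n_1, ..., n_l."""
    selected = [0]                       # always keep the initial vector
    for n in range(1, len(trajectory)):
        if ind(trajectory[n], trajectory[selected[-1]]) > tol_ass:
            selected.append(n)
    return selected

# Toy trajectory: unit vectors rotating over a quarter circle; nearly
# parallel consecutive states are discarded as redundant.
t = np.linspace(0.0, np.pi / 2, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
kept = adaptive_snapshot_selection(traj, tol_ass=1e-3)
```

Only a fraction of the 200 states survives, which is exactly the compression effect exploited before the POD step.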

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
 1: Initialization: N = 0, V = [ ], μ_max = μ^0, η(μ_max) = 1
 2: while η_N(μ_max) > tol_RB do
 3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^{K} and adaptively select snapshots using Algorithm 3, yielding S_A^max = {u^{n_1}(μ_max), ..., u^{n_ℓ}(μ_max)}
 4:   Enrich the RB: V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n_1}, ..., ū^{n_ℓ}] with ū^{n_s} = u^{n_s}(μ_max) − Π_{W_N}[u^{n_s}(μ_max)], s = 1, ..., ℓ, ℓ ≤ K; here Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
 5:   N = N + 1
 6:   Find μ_max = arg max_{μ ∈ P_train} η_N(μ)
 7: end while
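The enrichment in Step 4 of Algorithm 4 can be sketched as follows (NumPy; the orthonormal basis V and the snapshot matrix are hypothetical placeholders):

```python
import numpy as np

def first_pod_mode(V, snapshots):
    """First POD mode of the snapshots after removing their projection
    onto the current RB space spanned by the columns of V."""
    U_bar = snapshots - V @ (V.T @ snapshots)   # works for V with 0 columns too
    # The dominant left singular vector is the first POD mode.
    left, _, _ = np.linalg.svd(U_bar, full_matrices=False)
    return left[:, 0]

rng = np.random.default_rng(1)
V = np.linalg.qr(rng.standard_normal((100, 4)))[0]   # current RB, N = 4
S = rng.standard_normal((100, 30))                   # hypothetical selected snapshots
v_new = first_pod_mode(V, S)
# v_new is normalized and (numerically) orthogonal to the existing basis,
# so [V, v_new] remains orthonormal.
```

Because the ASS keeps only ℓ ≪ K columns, the SVD above is far cheaper than an SVD of the full trajectory.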

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section we show the derivation of the FOM based on the FV discretization of the batch chromatographic model (3)−(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, a central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation of the system (3)−(4) can be written as follows:

    A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n,
    q_z^{n+1} = q_z^n + Δt h_z^n,    (12)

where c_z^n = c_z^n(μ) = (c_z^{n,1}, ..., c_z^{n,𝒩})^T and q_z^n = q_z^n(μ) = (q_z^{n,1}, ..., q_z^{n,𝒩})^T ∈ R^𝒩, z = a, b, denote the solutions for the field variables c_z and q_z at the time instance t = t^n (n = 0, ..., K), A and B are tridiagonal constant matrices, and d_z^n and h_z^n are parameter- and time-dependent:

    d_z^n = d_0^n e_1,  h_z^n = (h_z^{n,1}, ..., h_z^{n,𝒩})^T,

with d_0^n = Δx Pe (λ/2 + ν) χ_{[0, t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, ..., 0)^T ∈ R^𝒩, and

    h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}) = (L ε A_c / Q) κ_z ( f_z(c_a^{n,j}, c_b^{n,j}) − q_z^{n,j} ),  j = 1, ..., 𝒩.
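The time-stepping structure of (12), an implicit solve for c_z and an explicit update for q_z, can be sketched as follows (NumPy; the toy matrices and mass-transfer term below are hypothetical stand-ins, not the actual FV operators):

```python
import numpy as np

def step(A, B, c, q, h, d, dt, eps):
    """One step of scheme (12): solve A c_{n+1} = B c_n + d - tau * h,
    then update q explicitly; tau = (1 - eps)/eps * dt."""
    tau = (1.0 - eps) / eps * dt
    c_new = np.linalg.solve(A, B @ c + d - tau * h)
    q_new = q + dt * h
    return c_new, q_new

# Toy data: identity operators isolate the mass-transfer contribution.
n = 6
A, B = np.eye(n), np.eye(n)
c, q = np.ones(n), np.zeros(n)
h = 0.2 * (1.0 - q)                   # hypothetical mass-transfer term
c1, q1 = step(A, B, c, q, h, d=np.zeros(n), dt=0.01, eps=0.4)
```

In the real FOM, A and B are tridiagonal, so the linear solve costs O(𝒩) per step; still, the large number of time steps K makes the FOM expensive.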

6.2 Reduced-order model

Let N ∈ N_+ be the number of RB vectors for c_z and q_z, and let M ∈ N_+ be the number of CRB vectors for the operators h_a and h_b. Here, for simplicity of analysis, we use the same RB dimension N for c_a, c_b, q_a and q_b, but one can certainly take different dimensions for the RB; the same applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M} is the CRB for the nonlinear operator h_z, and that V_{c_z}, V_{q_z} ∈ R^{𝒩×N} (V_{c_z}^T V_{c_z} = I, V_{q_z}^T V_{q_z} = I) are the RB for the field variables c_z and q_z, respectively, i.e.,

    h_z^n ≈ W_z β_z^n,  c_z^n ≈ ĉ_z^n = V_{c_z} a_{c_z}^n,  q_z^n ≈ q̂_z^n = V_{q_z} a_{q_z}^n,  n = 0, ..., K.    (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM corresponding to the FOM (12) as follows:

    A_{c_z} a_{c_z}^{n+1} = B_{c_z} a_{c_z}^n + d_0^n d_{c_z} − ((1−ε)/ε) Δt H_{c_z} β_z^n,
    a_{q_z}^{n+1} = a_{q_z}^n + Δt H_{q_z} β_z^n,    (14)

where a_{c_z}^n = a_{c_z}^n(μ) = (a_{c_z}^{n,1}, ..., a_{c_z}^{n,N})^T and a_{q_z}^n = a_{q_z}^n(μ) = (a_{q_z}^{n,1}, ..., a_{q_z}^{n,N})^T ∈ R^N are the reduced state vectors of the ROM, and A_{c_z} = V_{c_z}^T A V_{c_z}, B_{c_z} = V_{c_z}^T B V_{c_z}, d_{c_z} = V_{c_z}^T e_1, H_{c_z} = V_{c_z}^T W_z, H_{q_z} = V_{q_z}^T W_z are the reduced matrices.

Note that β_z^n = β_z^n(μ) = (β_z^{n,1}, ..., β_z^{n,M})^T ∈ R^M is the coefficient vector of the empirical interpolation of the nonlinear operator h_z^n; it is parameter- and time-dependent. The evaluation of β_z^n is essentially the same as the computation of the coefficients σ_i(μ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

    Σ_{i=1}^{M} β_z^{n,i} W_{z,i}(x_j) = ĥ_z^{n,j},  j = 1, ..., M.

Here the evaluation of ĥ_z^{n,j} only needs the j-th entries (ĉ_a^{n,j}, ĉ_b^{n,j} and q̂_z^{n,j}) of the approximate solution vectors (ĉ_a^n, ĉ_b^n and q̂_z^n), i.e., ĥ_z^{n,j} = h_z(ĉ_a^{n,j}, ĉ_b^{n,j}, q̂_z^{n,j}). For general operator empirical interpolation, the value of the operator at an interpolation point (e.g., x_j) may depend on more entries of the solution vectors (e.g., the j-th entries and their neighbors); for more details we refer to [11, 25].

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation consists of a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all 𝒩-dependent terms are computed and stored; in the online stage, for any given parameter μ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sets P^crb_train and P_train (which can be chosen differently), Algorithm 2 is run to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Subsequently, all 𝒩-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., A_{c_z}, B_{c_z}, d_{c_z}, H_{c_z} and H_{q_z}), and the 𝒩-independent ROM can be formulated as in (14). For a newly given parameter μ ∈ P, the low-dimensional model (14) is solved online, and the solution of the FOM (12) is recovered by (13).
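The offline projection step can be sketched as follows (NumPy; all matrices are hypothetical placeholders for the FV operators and bases):

```python
import numpy as np

def assemble_reduced_matrices(A, B, V, W):
    """Offline stage: project the full operators once. The results are
    small (N x N and N x M) and independent of the full dimension."""
    return V.T @ A @ V, V.T @ B @ V, V.T @ W

rng = np.random.default_rng(2)
n_full, n_rb, m_crb = 400, 8, 5
A = np.eye(n_full) + 0.01 * rng.standard_normal((n_full, n_full))
B = np.eye(n_full)
V = np.linalg.qr(rng.standard_normal((n_full, n_rb)))[0]   # orthonormal RB
W = rng.standard_normal((n_full, m_crb))                   # hypothetical CRB
A_r, B_r, H_r = assemble_reduced_matrices(A, B, V, W)
# Online: only n_rb-dimensional systems are solved, e.g. np.linalg.solve(A_r, rhs).
```

Note that with B = I and orthonormal V, the projected B_r is again the identity, illustrating that the projection preserves the operator structure at reduced size.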

7 Output-oriented error estimation

It is crucial to obtain a sharp, rigorous, and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years many efforts have been made for different problems, see, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25] the authors provide an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in a finite dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. In many applications the output response y(u_𝒩) is of interest; hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimator η_N(μ_max) should estimate the error of the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2, ∀ z_1, z_2 ∈ R^𝒩, and the induced norm ‖·‖ is the standard 2-norm in Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector with respect to the basis of the solution space. In such a case the inner product should be defined with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recalling that in Section 3 the operators L_I(t^n) and L_E(t^n) are linear, the evolution scheme (7) can be rewritten in the vector space as

    A^{(n)} u^{n+1}(μ) = B^{(n)} u^n(μ) + g(u^n(μ); μ),    (15)

where A^{(n)}, B^{(n)} ∈ R^{𝒩×𝒩} are constant matrices and g(u^n(μ); μ) ∈ R^𝒩 corresponds to the nonlinear term, n = 0, ..., K − 1. Note that A^{(n)} and B^{(n)} are nonsingular for a stable scheme in practice.

Given a parameter μ ∈ P, let û^n(μ) = V a^n(μ) be the RB approximation of u^n(μ), and let ĝ^n(μ) = I_M[g(û^n(μ))] = W β^n(μ) be the interpolant of the nonlinear term, where V ∈ R^{𝒩×N} and W ∈ R^{𝒩×M} are the precomputed parameter-independent bases and a^n(μ) ∈ R^N, β^n(μ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on μ in u^n(μ), û^n(μ), a^n(μ) and β^n(μ), and write u^n, û^n, a^n and β^n instead. The following a posteriori error estimation is based on the residual

    r^{n+1}(μ) = B^{(n)} û^n + I_M[g(û^n)] − A^{(n)} û^{n+1}.    (16)

A simple computation gives the norm of the residual:

    ‖r^{n+1}(μ)‖² = ⟨r^{n+1}(μ), r^{n+1}(μ)⟩
      = (a^n)^T V^T (B^{(n)})^T B^{(n)} V a^n + (β^n)^T W^T W β^n
        + (a^{n+1})^T V^T (A^{(n)})^T A^{(n)} V a^{n+1} + 2 (β^n)^T W^T B^{(n)} V a^n
        − 2 (a^n)^T V^T (B^{(n)})^T A^{(n)} V a^{n+1} − 2 (β^n)^T W^T A^{(n)} V a^{n+1}.    (17)
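The parameter-independent matrix products in (17) can be assembled once offline, after which the residual norm is evaluated at reduced cost online. A minimal sketch (NumPy; operators and bases are hypothetical):

```python
import numpy as np

def residual_norm_offline(A, B, V, W):
    """Precompute the parameter-independent products appearing in (17)."""
    BV, AV = B @ V, A @ V
    return {"BB": BV.T @ BV, "WW": W.T @ W, "AA": AV.T @ AV,
            "WB": W.T @ BV, "BA": BV.T @ AV, "WA": W.T @ AV}

def residual_norm_online(m, a_n, a_np1, beta):
    """Evaluate ||r^{n+1}|| via (17) using only reduced-size products."""
    val = (a_n @ m["BB"] @ a_n + beta @ m["WW"] @ beta
           + a_np1 @ m["AA"] @ a_np1 + 2.0 * beta @ m["WB"] @ a_n
           - 2.0 * a_n @ m["BA"] @ a_np1 - 2.0 * beta @ m["WA"] @ a_np1)
    return np.sqrt(max(val, 0.0))   # guard against tiny negative round-off

rng = np.random.default_rng(3)
A = np.eye(200) + 0.05 * rng.standard_normal((200, 200))
B = rng.standard_normal((200, 200))
V = np.linalg.qr(rng.standard_normal((200, 6)))[0]
W = rng.standard_normal((200, 4))
m = residual_norm_offline(A, B, V, W)
a_n, a_np1 = rng.standard_normal(6), rng.standard_normal(6)
beta = rng.standard_normal(4)
direct = np.linalg.norm(B @ V @ a_n + W @ beta - A @ V @ a_np1)
fast = residual_norm_online(m, a_n, a_np1, beta)
```

Both evaluations agree; only `fast` avoids any full-dimension operation online.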

Proposition 7.1. Assume that the operator g : R^𝒩 → R^𝒩 is Lipschitz continuous, i.e., there exists a positive constant L_g such that

    ‖g(x) − g(y)‖ ≤ L_g ‖x − y‖,  x, y ∈ W^𝒩,

and that the interpolation of g is "exact" for a certain dimension of W = [W_1, ..., W_{M+M′}], i.e.,

    I_{M+M′}[g(û^n)] = Σ_{m=1}^{M+M′} W_m β_m^n = g(û^n).

Assume further that for all μ ∈ P the initial projection error vanishes, e^0(μ) = 0. Then the approximation error e^n(μ) = u^n − û^n satisfies

    ‖e^n(μ)‖ ≤ Σ_{k=0}^{n−1} ( Π_{j=k+1}^{n−1} G(j) ) ‖(A^{(k)})^{−1}‖ ( ε_EI^k(μ) + ‖r^{k+1}(μ)‖ ),    (18)

where G(j) = ‖(A^{(j)})^{−1}‖ (‖B^{(j)}‖ + L_g) and ε_EI^n(μ) is the error due to the interpolation. A sharper error bound can be given as

    ‖e^n(μ)‖ ≤ η_{N,M}^n(μ) := Σ_{k=0}^{n−1} ( Π_{j=k+1}^{n−1} G_F(j) ) ( ‖(A^{(k)})^{−1}‖ ε_EI^k(μ) + ‖(A^{(k)})^{−1} r^{k+1}(μ)‖ ),    (19)

where G_F(j) = ‖(A^{(j)})^{−1} B^{(j)}‖ + L_g ‖(A^{(j)})^{−1}‖, n = 0, ..., K − 1.

Proof. By forming the difference of (15) and (16), we obtain the error equation

    A^{(n)} e^{n+1}(μ) = B^{(n)} e^n(μ) + g(u^n) − I_M[g(û^n)] + r^{n+1}(μ)
                       = B^{(n)} e^n(μ) + (g(u^n) − g(û^n)) + (g(û^n) − I_M[g(û^n)]) + r^{n+1}(μ).    (20)

Multiplying both sides of (20) by (A^{(n)})^{−1}, we obtain

    e^{n+1}(μ) = (A^{(n)})^{−1} B^{(n)} e^n(μ) + (A^{(n)})^{−1} (g(u^n) − g(û^n))
                 + (A^{(n)})^{−1} (g(û^n) − I_M[g(û^n)]) + (A^{(n)})^{−1} r^{n+1}(μ).    (21)

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(û^n)‖ ≤ L_g ‖e^n(μ)‖. Then, by the triangle inequality and the submultiplicativity of the matrix norm,

    ‖e^{n+1}(μ)‖ ≤ ‖(A^{(n)})^{−1}‖ ( (‖B^{(n)}‖ + L_g) ‖e^n(μ)‖ + ε_EI^n(μ) + ‖r^{n+1}(μ)‖ ),    (22)

where ε_EI^n(μ) = ‖g(û^n) − I_M[g(û^n)]‖ = ‖Σ_{m=M+1}^{M+M′} β_m^n W_m‖. Resolving the recursion (22) with initial error ‖e^0(μ)‖ = 0 yields the error bound in (18).

To get the error bound in (19), we revisit (21) and see that the bound in (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(μ)‖ is of the form

    ‖e^{n+1}(μ)‖ ≤ ( ‖(A^{(n)})^{−1} B^{(n)}‖ + L_g ‖(A^{(n)})^{−1}‖ ) ‖e^n(μ)‖
                   + ‖(A^{(n)})^{−1}‖ ε_EI^n(μ) + ‖(A^{(n)})^{−1} r^{n+1}(μ)‖,    (23)

since the two inequalities ‖(A^{(n)})^{−1} B^{(n)}‖ ≤ ‖(A^{(n)})^{−1}‖ ‖B^{(n)}‖ and ‖(A^{(n)})^{−1} r^{n+1}‖ ≤ ‖(A^{(n)})^{−1}‖ ‖r^{n+1}‖ hold. Resolving the recursion (23) with initial error ‖e^0(μ)‖ = 0 yields the proposed error bound in (19).
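For constant-in-time matrices, the recursion (23) can be accumulated step by step instead of via the explicit sum. A minimal sketch (NumPy; scalar toy data are hypothetical, and ‖A^{−1}‖₂ = 1/σ_min(A) is used for the inverse norm):

```python
import numpy as np

def error_bound(A, B, L_g, eps_ei, res_norms):
    """Accumulate e_{n+1} <= G_F * e_n + ||A^{-1}|| * eps_EI^n + ||A^{-1} r^{n+1}||,
    cf. (23)/(19); res_norms holds the values ||A^{-1} r^{n+1}||."""
    inv_norm = 1.0 / np.linalg.svd(A, compute_uv=False)[-1]   # ||A^{-1}||_2
    G_F = np.linalg.norm(np.linalg.solve(A, B), 2) + L_g * inv_norm
    e, bounds = 0.0, []                   # vanishing initial projection error
    for eps_n, r_n in zip(eps_ei, res_norms):
        e = G_F * e + inv_norm * eps_n + r_n
        bounds.append(e)
    return bounds

A = np.diag([2.0, 4.0])                   # toy constant matrices
B = np.diag([1.0, 1.0])
b = error_bound(A, B, L_g=0.0, eps_ei=[0.0, 0.0], res_norms=[0.1, 0.1])
```

Here G_F = 0.5, so the two accumulated bounds are 0.1 and 0.5 · 0.1 + 0.1 = 0.15; when G_F ≤ 1 the bound cannot blow up exponentially.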

Remark 7.2. In many cases the operators L_I(t^n) and L_E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^{(n)}, B^{(n)} in (15) are constant matrices, see, e.g., (12). In such a case the error bound becomes much simpler, see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11] the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, this error bound may grow exponentially when G(j) = ‖(A^{(j)})^{−1}‖ (‖B^{(j)}‖ + L_g) > 1 in (18). In the vector space this problem can easily be avoided by using (23) instead of (22) if G_F(j) = ‖(A^{(j)})^{−1} B^{(j)}‖ + L_g ‖(A^{(j)})^{−1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(μ) by using (17). Note that all parameter-independent matrix products in (17) (e.g., V^T (B^{(n)})^T B^{(n)} V and W^T W) can be precomputed once V and W are available, and they are computed only once for all parameters in the training set. The same holds for the computation of ‖A^{−1} r^n(μ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of 𝒩. In addition, as shown in [11], a small M′ gives good results in practice; we use M′ = 1 in the later simulations.

Remark 7.5. The 2-norm is used in the above error bounds, and the 2-norm of a matrix H is its spectral norm; therefore,

    ‖H^{−1}‖ = σ_max(H^{−1}) = 1 / σ_min(H).

As a result, the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some output. In such cases, it is desirable to estimate the output error in order to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume additionally that the output of interest y(u^n(μ)) can be expressed in the form

    y(u^n(μ)) = P u^n,    (24)

where P ∈ R^{N_O×𝒩} is a constant matrix. Then the output error e_O^n(μ) = P u^n − P û^n satisfies

    ‖e_O^{n+1}(μ)‖ ≤ η̄_{N,M}^{n+1} := G_O(n) η_{N,M}^n + ‖P (A^{(n)})^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖(A^{(n)})^{−1} r^{n+1}(μ)‖,    (25)

where G_O(n) = ‖P (A^{(n)})^{−1} B^{(n)}‖ + L_g ‖P (A^{(n)})^{−1}‖, n = 0, ..., K − 1.

Proof. Multiplying both sides of the error equation (21) by P from the left, we get

    P e^{n+1}(μ) = P ( (A^{(n)})^{−1} B^{(n)} e^n(μ) + (A^{(n)})^{−1} (g(u^n) − g(û^n))
                       + (A^{(n)})^{−1} (g(û^n) − I_M[g(û^n)]) + (A^{(n)})^{−1} r^{n+1}(μ) ).

Applying the Lipschitz condition of g and using the triangle inequality as well as the property of the matrix norm, we have

    ‖e_O^{n+1}(μ)‖ = ‖P e^{n+1}(μ)‖
      ≤ G_O(n) ‖e^n(μ)‖ + ‖P (A^{(n)})^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖(A^{(n)})^{−1} r^{n+1}(μ)‖.    (26)

Replacing ‖e^n(μ)‖ in (26) by its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once an error estimate for the field variable is available, e.g., (19), a trivial error bound for the output (24) is given by

    ‖e_O^{n+1}(μ)‖ = ‖P e^{n+1}(μ)‖ ≤ ‖P‖ ‖e^{n+1}(μ)‖
      ≤ ‖P‖ ( G_F(n) ‖e^n(μ)‖ + ‖(A^{(n)})^{−1}‖ ε_EI^n(μ) + ‖(A^{(n)})^{−1} r^{n+1}(μ)‖ ).    (27)

The last inequality holds because of (23). Obviously, the bound for ‖e_O^{n+1}(μ)‖ in (26) is sharper than the one in (27); as a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation by collecting all field variables in one vector. However, the behavior of the solution of each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than a unified basis for all of them.

Here we derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Take the error bound for the field variable c_z as an example and recall its discrete evolution (see (12)),

    A c_z^{n+1} = B c_z^n + d_z^n − ((1−ε)/ε) Δt h_z^n.    (28)

The residual caused by the approximate solution ĉ_z^n in (13) is

    r_{c_z}^{n+1}(μ) = B ĉ_z^n + d_z^n − ((1−ε)/ε) Δt I_M[h_z(ĉ_z^n)] − A ĉ_z^{n+1}.    (29)

Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. As a consequence, the error bounds in (32) and (33) below are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d_z^n in (28) comes from the Neumann boundary condition and does not depend on the solution c_z^n. Instead of requiring a Lipschitz continuity condition for h_z as a function of c_a^n, c_b^n and q_z^n, we assume that there exists a positive constant L_h such that

    ‖h_z(c_a^n, c_b^n, q_z^n) − h_z(ĉ_a^n, ĉ_b^n, q̂_z^n)‖ ≤ L_h ‖c_z^n − ĉ_z^n‖,  n = 0, ..., K.    (30)

Assuming a vanishing initial projection error, e_{c_z}^0(μ) = 0, we obtain, similarly to before, for the approximation error e_{c_z}^n(μ) = c_z^n − ĉ_z^n (n = 1, ..., K),

    ‖e_{c_z}^n(μ)‖ ≤ Σ_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} ( τ ε_EI^k(μ) + ‖r_{c_z}^{k+1}(μ)‖ ),    (31)

where C = ‖B‖ + τ L_h and τ = ((1−ε)/ε) Δt. More tightly,

    ‖e_{c_z}^n(μ)‖ ≤ η_{N,M,c_z}^n(μ) := Σ_{k=0}^{n−1} (G_{F,c})^{n−1−k} ( τ ‖A^{−1}‖ ε_EI^k(μ) + ‖A^{−1} r_{c_z}^{k+1}(μ)‖ ),    (32)

where G_{F,c} = ‖A^{−1} B‖ + τ L_h ‖A^{−1}‖.

Analogously, the error bound for the output of interest, e_{c_z,O}^n(μ) = P c_z^n − P ĉ_z^n, can be obtained from the error bound for the field variable. Similarly to (25), we have

    ‖e_{c_z,O}^{n+1}(μ)‖ ≤ η̄_{N,M,c_z}^{n+1}(μ) := G_{O,c} η_{N,M,c_z}^n(μ) + τ ‖P A^{−1}‖ ε_EI^n(μ) + ‖P‖ ‖A^{−1} r_{c_z}^{n+1}(μ)‖,    (33)

where G_{O,c} = ‖P A^{−1} B‖ + τ L_h ‖P A^{−1}‖. Note that P = (0, ..., 0, 1) ∈ R^{1×𝒩} in this model, which means that the norm of the output error e_{c_z,O}^{n+1}(μ) is the absolute value of the last entry of the error vector e_{c_z}^{n+1}(μ).

Remark 7.8. Error estimates for q_a and q_b in (12) can be obtained similarly, following the derivation in Section 7.1. Since the output of interest of the system (12) only depends on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the combined field variable U = (c_a, c_b, q_a, q_b)^T by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the output error bound in (33) involves the error bound η_{N,M,c_z}^n(μ) for the field variable c_z; if the output error estimation were derived by considering all field variables together, it would instead involve the corresponding bound for the vector U, denoted η_{N,M,U}^n(μ). Obviously, the error bound η_{N,M,U}^n(μ) is much coarser than the bound η_{N,M,c_z}^n(μ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be chosen conservatively large, and the weight τ L_h remains small, because the time step Δt is typically very small.

conservatively chosen large and the weight τLh is still small because the time step ∆tis typically very small

73 An early-stop criterion for the Greedy algorithmFrom the expression of the error estimator above it is seen that the error bound forthe field variables (ηnNM (micro) or ηnNMcz

(micro) ) is accumulated with time Since ηnNM (micro)(or ηnNMcz

(micro) respectively) is involved in the output error bound in (25) (or (33)respectively) the output error bound is also accumulated with time As a resultthe output error bound at the final time steps may not reflect the true error aftera long evolution process Figure 2 in Section 82 illustrates this behavior In factsimilar phenomena are reported in the literature eg [30] where it is pointed outthat the error estimate eg in (18) may loose sharpness when many time instancestn n = 0 1 K are needed for a given time interval [0 T ] which is typical forconvection-diffusion equations with small diffusion terms However the output errorbound is cheap to compute and it may still provide a guidance for the parameterselection in the greedy algorithm

To circumvent the problem above we add a validation step to get an early-stop ofthe extension process as is shown in Algorithm 5 More precisely after Step 6 inAlgorithm 4 we compute the decay rate dη of the error bound If dη is smaller than apredefined tolerance indicating the error bound stagnates then we further check thetrue output error at the parameter micromax determined by the greedy algorithm Whenthe true output error at micromax is smaller than tolRB we assume that there is no needto include a new basis vector and the RB extension can be stopped otherwise theprocess should continue

Algorithm 5 RB generation using ASS-POD-Greedy with early-stopInput Ptrain micro0 tolRB(lt 1) toldecayOutput RB V = [V1 VN ]

1 Implement Step 1 in Algorithm 42 while the error ηN (micromax) gt tolRB do3 Implement Steps 3minus6 in Algorithm 44 Compute the decay rate of the error bound dη = ηNminus1(microold

max)minusηN (micromax)ηNminus1(microold

max) 5 if dη lt toldecay then6 Compute the true output error at the selected parameter micromax eN (micromax)7 if eN (micromax) lt tolRB then8 Stop9 end if

10 end if11 end while

19

Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to tolerate such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 takes effect.
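The stopping test of Algorithm 5 can be sketched as a small decision function (Python; tolerance values are hypothetical):

```python
def should_stop(eta_prev, eta_new, true_output_error,
                tol_rb=1e-4, tol_decay=1e-3):
    """Early-stop test of Algorithm 5: if the error bound stagnates
    (small decay rate), fall back to the true output error at the
    greedy-selected parameter and stop only if it is already small."""
    decay_rate = (eta_prev - eta_new) / eta_prev
    if decay_rate < tol_decay:            # bound stagnates (or even grows)
        return true_output_error < tol_rb
    return False                          # bound still decays: keep extending

# A stagnating bound with an already accurate output stops the extension;
# a clearly decaying bound never triggers the (expensive) true-error check.
stop = should_stop(1.0e-2, 0.9995e-2, true_output_error=5.0e-5)
```

The expensive true-error computation is thus performed only in the rare stagnation case, keeping the average cost of the greedy loop low.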

8 Numerical experiments

In this work the RB methodology is employed to accelerate an optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variables μ = (Q, t_in) are chosen optimally in a reasonable parameter domain so as to maximize the production rate Pr(μ) = s(μ) Q / t_cyc while respecting a requirement on the recovery yield Rec(μ) = s(μ) / ( t_in (c_a^f + c_b^f) ). Here s(μ) = ∫_{t_3}^{t_4} c_a^O(t; μ) dt + ∫_{t_1}^{t_2} c_b^O(t; μ) dt, and c_z^O(t; μ) = c_z(t, 1; μ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the following optimization problem for batch chromatography:

    min_{μ ∈ P} −Pr(μ)
    s.t.  Rec_min − Rec(μ) ≤ 0,  μ ∈ P,
          c_z(μ), q_z(μ) are the solutions of the system (3)−(5), z = a, b.    (34)

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integrals in s(μ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter μ ∈ P, which causes difficulties in the error estimation and in the generation of the reduced basis.
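The paper solves (34) with an external optimizer (NLopt [27]). Purely to illustrate the structure of the constrained problem, the sketch below replaces the chromatographic model by toy surrogate functions (hypothetical, not the real Pr and Rec) and performs a coarse feasible-grid search over the parameter domain P; with a ROM, each objective evaluation is cheap enough that such dense sampling becomes affordable:

```python
import numpy as np

# Hypothetical stand-ins for the ROM-based objective and constraint.
def production_rate(mu):
    Q, t_in = mu
    return Q * t_in * np.exp(-t_in)          # toy: grows with Q, peaks in t_in

def recovery_yield(mu):
    Q, t_in = mu
    return 1.0 - 0.3 * Q - 0.1 * t_in        # toy: degrades with Q and t_in

def solve_34(rec_min=0.8, n_grid=60):
    """Maximize Pr(mu) subject to Rec(mu) >= rec_min over the domain P."""
    Qs = np.linspace(0.0667, 0.1667, n_grid)
    ts = np.linspace(0.5, 2.0, n_grid)
    best, best_mu = -np.inf, None
    for Q in Qs:
        for t_in in ts:
            mu = (Q, t_in)
            if recovery_yield(mu) >= rec_min and production_rate(mu) > best:
                best, best_mu = production_rate(mu), mu
    return best_mu, best

mu_opt, pr_opt = solve_34()
```

In the actual study, each candidate μ requires a full transient simulation, which is precisely where replacing the FOM by the ROM pays off.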

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable μ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(μ). To capture the dynamics precisely, the dimension of the spatial discretization N in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

Column dimensions [cm]                    | 2.6 × 10.5
Column porosity ε [-]                     | 0.4
Peclet number Pe [-]                      | 2000
Mass-transfer coefficients κ_z, z = a, b [1/s] | 0.1
Feed concentrations c_f^z, z = a, b [g/l] | 2.9


Table 2: Coefficients of the adsorption isotherm equation.

H_a1 [-]   | 2.69   | H_b1 [-]   | 3.73
H_a2 [-]   | 0.1    | H_b2 [-]   | 0.3
K_a1 [l/g] | 0.0336 | K_b1 [l/g] | 0.0446
K_a2 [l/g] | 1.0    | K_b2 [l/g] | 3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB. Finally, we present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz, RAM 4.00 GB, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold used, the more runtime is saved. This means that a lot of redundant information is discarded during the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^−4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^−7) with different thresholds. M′ = 1 is the number of the basis for error estimation.

       | tol_ASS     | Res(ξ^a_{M+M′}, ξ^b_{M+M′})  | M (W_a, W_b) | Runtime [h]
no ASS | –           | (9.2 × 10^−8, 8.5 × 10^−8)   | (146, 152)   | 62.5 (−)
ASS    | 1.0 × 10^−4 | (9.6 × 10^−8, 8.1 × 10^−8)   | (147, 152)   | 6.05 (−90.3%)
ASS    | 1.0 × 10^−3 | (8.7 × 10^−8, 9.9 × 10^−8)   | (147, 152)   | 3.62 (−94.2%)
ASS    | 1.0 × 10^−2 | (9.4 × 10^−8, 6.2 × 10^−8)   | (144, 150)   | 2.70 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10^−4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^−7, tol_RB = 1.0 × 10^−6, tol_ASS = 1.0 × 10^−4. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as is shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithms      | Runtime [h]
POD-Greedy      | 16.22¹
ASS-POD-Greedy  | 7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, c^n_zO(μ) = P c^n_z(μ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator η_N(μ) in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η^{n+1}_{N,M,c_z}(μ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1} for a given parameter μ ∈ P. We use the following error bound in Algorithm 4: η_N(μ_max) = max_{μ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(μ), where η̄_{N,M,c_z}(μ) = (1/K) ∑_{n=1}^{K} η^n_{N,M,c_z}(μ) is the average of the error bound for the output of c_z over the whole evolution process. In accordance, we define the reference true output error as e^max_N = max_{μ ∈ P_train} e_N(μ), where e_N(μ) = max_{z ∈ {a,b}} e_{N,c_z}(μ), e_{N,c_z}(μ) = (1/K) ∑_{n=1}^{K} ‖c^n_zO(μ) − ĉ^n_zO(μ)‖, and ĉ^n_zO(μ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is very small already. To circumvent the problem, Algorithm 5 is implemented to get an early-stop.
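These definitions amount to a time-average per parameter and component, followed by a worst-case reduction over the training set. A minimal sketch with synthetic bound values (the dictionary layout is an assumption for illustration):

```python
def average_over_time(bounds_per_step):
    """Time-average (1/K) * sum_{n=1}^{K} of per-step error bounds."""
    return sum(bounds_per_step) / len(bounds_per_step)

def worst_case(bounds):
    """eta_N(mu_max): maximum over training parameters and components z in {a, b}
    of the time-averaged bound; `bounds` maps mu -> {z: [per-step bounds]}."""
    return max(average_over_time(per_step)
               for comps in bounds.values()
               for per_step in comps.values())

# synthetic bound values for two training parameters
bounds = {
    (0.10, 1.0): {"a": [1e-3, 2e-3], "b": [4e-3, 2e-3]},
    (0.12, 1.5): {"a": [1e-4, 1e-4], "b": [5e-3, 1e-3]},
}
eta_worst = worst_case(bounds)   # about 3e-3
```

The same reduction with the true errors e_{N,c_z}(μ) in place of the bounds yields e^max_N.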

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early-stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so that the early-stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not so big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{μ ∈ P_val} e_N(μ). It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter μ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure: semi-log plot of the maximal error over P_train (10^−9 to 10^3) versus the size of the RB, N (6 to 66); curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4 and the corresponding true output error. The output error bound η_N(μ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{μ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(μ), where η̄_{N,M,c_z}(μ) = (1/K) ∑_{n=1}^{K} η^n_{N,M,c_z}(μ).


[Figure: semi-log plot of the maximal error over P_train (10^−7 to 10^3) versus the size of the RB, N (6 to 56); curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5) and the corresponding maximal true output error.

[Figure: scatter plot of the selected parameters in the domain P, feed flow rate Q ∈ [0.0667, 0.1667] versus injection period t_in ∈ [0.5, 2.0].]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process; the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^−7, tol_ASS = 1.0 × 10^−4, tol_RB = 1.0 × 10^−6.

Simulations          | Max error    | Average runtime [s] | SpF
FOM (N = 1500)       | –            | 312.13              | (-)
ROM, POD-Greedy      | 3.79 × 10^−7 | 6.3                 | 50
ROM, ASS-POD-Greedy  | 4.58 × 10^−7 | 6.3                 | 50

[Figure: dimensionless concentration (0 to 0.9) at the column outlet versus dimensionless time (0 to 11); curves: c_a-FOM, c_b-FOM, c_a-ROM, c_b-ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter μ = (Q, t_in) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

    min_{μ ∈ P}  −P̂r(μ)
    s.t.  Rec_min − R̂ec(μ) ≤ 0,
          ĉ^n_z(μ), q̂^n_z(μ) are the RB approximations from the ROM (14), z = a, b.

Here P̂r(μ) and R̂ec(μ) are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let μ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖μ_{k+1} − μ_k‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^−4. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the optimization problem is significantly reduced; the speed-up factor (SpF) is 54.
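The paper's optimizer is NLOPT_GN_DIRECT_L from NLopt; as a self-contained illustration of the same loop structure (a derivative-free search with a step/iterate-based stopping rule, and the inequality constraint handled by a penalty), the sketch below uses a simple pattern search. The objective and constraint functions here are synthetic stand-ins, not the chromatography ROM:

```python
def penalized(mu, pr, rec, rec_min, weight=1e3):
    # minimize -Pr(mu), with a quadratic penalty when Rec_min - Rec(mu) > 0
    return -pr(mu) + weight * max(0.0, rec_min - rec(mu)) ** 2

def pattern_search(f, mu0, lo, hi, tol_opt=1e-4, step=0.1):
    """Derivative-free box-constrained minimization: shrink the stencil until
    the iterate moves by less than (a fraction of) tol_opt."""
    mu = list(mu0)
    while step > tol_opt / 4:
        best, moved = f(mu), False
        for i in range(len(mu)):
            for s in (+step, -step):
                cand = list(mu)
                cand[i] = min(hi[i], max(lo[i], cand[i] + s))  # stay in P
                if f(cand) < best:
                    best, mu, moved = f(cand), cand, True
        if not moved:
            step /= 2.0      # no improving move at this stencil size
    return mu

# mock surrogate responses (NOT the chromatography ROM)
pr = lambda mu: -(mu[0] - 0.08) ** 2 - (mu[1] - 1.05) ** 2
rec = lambda mu: 1.0 - 0.1 * mu[1]
mu_opt = pattern_search(lambda m: penalized(m, pr, rec, rec_min=0.8),
                        mu0=[0.1, 1.2], lo=[0.0667, 0.5], hi=[0.1667, 2.0])
```

The penalty formulation is one common way to use DIRECT-type global searches, which natively handle only bound constraints, with a nonlinear inequality constraint such as Rec_min − Rec(μ) ≤ 0.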

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization; the latter is just 33.88 hours. However, when the technique of ASS is implemented for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is that of constructing and using a surrogate ROM; the other is that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM.

Simulations    | Objective (Pr) | Opt. solution (μ)   | N_it¹ | Runtime [h] | SpF
FOM-based Opt  | 0.020264       | (0.07964, 1.05445)  | 202   | 33.88       | -
ROM-based Opt  | 0.020266       | (0.07964, 1.05445)  | 202   | 0.63        | 54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.
[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An "empirical interpolation" method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.
[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.
[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.
[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, 2013.
[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), 2014.
[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.
[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.
[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.
[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.
[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.
[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.
[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.
[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.
[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.
[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.
[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.
[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.
[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.
[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.
[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.
[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.
[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.
[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.
[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.
[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.
[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, 2010.
[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.
[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.
[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.
[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.
[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.
[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.
[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.
[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.
[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Algorithm 2 Generation of CRB and EI points
Input: L^crb_train = {g(x; μ) | μ ∈ P^crb_train}, tol_CRB (< 1)
Output: CRB W = [W_1, ..., W_M] and EI points T_M = {x_1, ..., x_M}
1 Initialization: m = 1, W^0_EI = [ ], ξ_0 = 1
2 while ‖ξ_{m−1}‖ > tol_CRB do
3   For each g ∈ L^crb_train, compute the "best" approximation ĝ = ∑_{i=1}^{m−1} σ_i W_i in the current space W^{m−1}_EI = span{W_1, ..., W_{m−1}}, where the σ_i can be obtained by solving the linear system (10)
4   Define g_m = arg max_{g ∈ L^crb_train} ‖g − ĝ‖ and the error ξ_m = g_m − ĝ_m
5   if ‖ξ_m‖ ≤ tol_CRB then
6     Stop and set M = m − 1
7   else
8     Determine the next EI point and basis:
        x_m = arg sup_{x ∈ Ω} |ξ_m(x)|,  W_m = ξ_m / ξ_m(x_m)      (11)
9   end if
10  m = m + 1
11 end while

was extended to the more general case of empirical operator interpolation, which is more applicable for an operator that depends on the field variable u(t, x; μ), e.g., g(u(t, x; μ), x; μ). The evaluation of g(x_j; μ) in (10) is thus replaced by g(u(t, x_j; μ), x_j; μ). In this paper, we use empirical operator interpolation, where the nonaffine operator appears as g(u(t, x; μ); μ). The details are addressed in Section 6.2.

5 Adaptive snapshot selection

In this section, we propose a technique of adaptive snapshot selection, which we call ASS, to reduce the offline cost. The basic idea of ASS was first presented at the ENUMATH 2013 conference, and the following algorithms, e.g., Algorithm 3 and Algorithm 4, can also be found in [4]. In this paper, we address the ASS technique in more detail, and enhanced numerical results are given in Section 8.

For the generation of the RB or CRB, a training set P_train or P^crb_train of parameters must be determined. On the one hand, the size of the training set is desired to be as large as possible, so that as much information about the parametric system as possible can be collected. On the other hand, the RB or CRB should be efficiently generated.

To reduce the cost of the generation of the RB, many efforts have been made in the last decade. These include, for example, the hp certified RB method [12], adaptive grid partition in parameter space [22, 23], and the greedy-based adaptive sampling approach for MOR by using model-constrained optimization [9]. In these papers, the authors intend to choose the sample points adaptively and get an "optimal" training set. An "optimal" training set means that the original manifold M = {u(μ) | μ ∈ P} can be well represented by the submanifold M̂ = {u(μ) | μ ∈ P_train} induced by the sample set P_train, with its size as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solutions at the time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems may arise from, e.g., chemical engineering, fluid dynamics, aerodynamics, etc. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 is hard to compute from the singular value decomposition of Ū due to the large size of the matrix Ū. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all the time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps) as snapshots. However, the results might be of low accuracy, because some important information may be lost during such a trivial snapshot selection.

For an "optimal" or a selected training set, we propose to select the snapshots adaptively according to the variation of the trajectory of the solution {u^n(μ)}_{n=0}^{K}. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors v_1 and v_2 can be reflected by the angle θ between them. More precisely, they are linearly dependent if and only if |cos(θ)| = 1 (θ = 0 or π). In other words, the value 1 − |cos(θ)| is large if the correlation between the two vectors is weak. This implies that the quantity 1 − |⟨v_1, v_2⟩| / (‖v_1‖ ‖v_2‖), with cos(θ) = ⟨v_1, v_2⟩ / (‖v_1‖ ‖v_2‖), is a good indicator for the linear dependency of v_1 and v_2.

Given a parameter μ and the initial vector u^0(μ), the numerical solution u^n(μ) (n = 1, ..., K) can be obtained, e.g., by using the evolution scheme (7). Define an indicator Ind(u^n(μ), u^m(μ)) = 1 − |⟨u^n(μ), u^m(μ)⟩| / (‖u^n(μ)‖ ‖u^m(μ)‖), which is used to measure the linear dependency of the two vectors. When Ind(u^n(μ), u^m(μ)) is large, the correlation between u^n(μ) and u^m(μ) is weak. Algorithm 3 shows the realization of the ASS: u^n(μ) is taken as a new snapshot only when u^n(μ) and u^{n_j}(μ) are "sufficiently" linearly independent, which is checked by whether Ind(u^n(μ), u^{n_j}(μ)) is large enough. Here u^{n_j}(μ) is the last selected snapshot.

Remark 5.1. The inner product ⟨·, ·⟩ : W^N × W^N → R used above is properly defined according to the solution space W^N, and the norm ‖·‖ is induced by the inner product correspondingly. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector u^n(μ) and the subspace spanned by the selected snapshots S^A. More redundant information can be discarded this way, but at higher cost. Since the data will be compressed further anyway, e.g., by using the POD-Greedy algorithm, we simply choose the economical case shown in Algorithm 3. Note that the tolerance tol_ASS is prespecified and problem-dependent; by our observation, a value at O(10^−4) gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector u^0(μ), tol_ASS
Output: Selected snapshot matrix S^A = [u^{n_1}(μ), u^{n_2}(μ), ..., u^{n_ℓ}(μ)]
1 Initialization: j = 1, n_j = 0, S^A = [u^{n_j}(μ)]
2 for n = 1, ..., K do
3   Compute the vector u^n(μ)
4   if Ind(u^n(μ), u^{n_j}(μ)) > tol_ASS then
5     j = j + 1
6     n_j = n
7     S^A = [S^A, u^{n_j}(μ)]
8   end if
9 end for
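A direct pure-Python transcription of Algorithm 3 with the Euclidean inner product; the trajectory below is a synthetic mock in which consecutive states barely change, so almost all snapshots are discarded:

```python
def ind(u, v):
    """Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||): small iff u, v nearly collinear."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return 1.0 - abs(dot) / (nu * nv)

def adaptive_snapshot_selection(trajectory, tol_ass=1e-4):
    """Keep u^n only if it is 'sufficiently' independent of the last kept snapshot."""
    selected = [trajectory[0]]            # n_1 = 0: always keep the initial vector
    for u in trajectory[1:]:
        if ind(u, selected[-1]) > tol_ass:
            selected.append(u)
    return selected

# 100 nearly identical states followed by one genuinely new direction
traj = [[1.0, t * 1e-6] for t in range(100)] + [[0.0, 1.0]]
kept = adaptive_snapshot_selection(traj)  # only 2 of 101 snapshots survive
```

Note that each candidate is compared only against the last kept snapshot, which is exactly the "economical" variant discussed in Remark 5.2.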

examples studied in Section 8.

The ASS technique can easily be incorporated into the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, μ_0, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1 Initialization: N = 0, V = [ ], μ_max = μ_0, η(μ_max) = 1
2 while η_N(μ_max) > tol_RB do
3   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^{K}, adaptively select snapshots using Algorithm 3, and get S^A_max = {u^{n_1}(μ_max), ..., u^{n_ℓ}(μ_max)}
4   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix Ū^A = [ū^{n_1}, ..., ū^{n_ℓ}] with ū^{n_s} = u^{n_s}(μ_max) − Π_{W_N}[u^{n_s}(μ_max)], s = 1, ..., ℓ, ℓ ≪ K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, ..., V_N}
5   N = N + 1
6   Find μ_max = arg max_{μ ∈ P_train} η_N(μ)
7 end while
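In Step 4, the first POD mode is the dominant left singular vector of the (projected) snapshot matrix Ū^A. A dependency-free sketch using power iteration on the small ℓ × ℓ Gram matrix in place of a library SVD (the snapshot data is synthetic):

```python
def first_pod_mode(U, iters=200):
    """Dominant left singular vector of U (columns = projected snapshots).

    Power iteration on the small Gram matrix G = U^T U gives the dominant
    right singular vector w; then V1 = U w / ||U w||."""
    n, l = len(U), len(U[0])
    G = [[sum(U[k][i] * U[k][j] for k in range(n)) for j in range(l)]
         for i in range(l)]
    w = [1.0] * l
    for _ in range(iters):
        w = [sum(G[i][j] * w[j] for j in range(l)) for i in range(l)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    v = [sum(U[k][j] * w[j] for j in range(l)) for k in range(n)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

# two snapshots dominated by the direction (1, 0, 0); 3 x 2, columns = snapshots
U = [[3.0, 2.9], [0.0, 0.1], [0.0, 0.0]]
V1 = first_pod_mode(U)   # close to the unit vector (1, 0, 0)
```

Working with the ℓ × ℓ Gram matrix instead of the full snapshot matrix is exactly why keeping ℓ ≪ K via the ASS pays off in the offline phase.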

6 RB scheme for batch chromatography

Reduced basis methods are used to perform a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)−(5) and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, the central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)−(4) can be written as follows:

    A c^{n+1}_z = B c^n_z + d^n_z − ((1 − ε)/ε) Δt h^n_z,
    q^{n+1}_z = q^n_z + Δt h^n_z,      (12)

where c^n_z = c^n_z(μ) = (c_z^{n,1}, ..., c_z^{n,N})^T, q^n_z = q^n_z(μ) = (q_z^{n,1}, ..., q_z^{n,N})^T ∈ R^N, z = a, b, indicate the solutions of the field variables c_z and q_z at the time instance t = t^n (n = 0, ..., K). A and B are tridiagonal constant matrices; d^n_z and h^n_z are parameter- and time-dependent:

    d^n_z = d^n_0 e_1,  h^n_z = (h_z^{n,1}, ..., h_z^{n,N})^T,

with d^n_0 = Δx Pe (λ/2 + ν) χ_{[0,t_in]}(t^n), λ = Δt/Δx, ν = Δt/(Pe Δx²), e_1 = (1, 0, ..., 0)^T ∈ R^N, and h_z^{n,j} = h_z(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}) = (L ε A_c / Q) κ_z (f_z(c_a^{n,j}, c_b^{n,j}) − q_z^{n,j}), j = 1, ..., N.

6.2 Reduced-order model

Let N ∈ N⁺ be the number of the RB vectors for c_z and q_z, and M ∈ N⁺ be the number of the CRB vectors for the operators h_a and h_b. Here, for simplicity of analysis, we use the same dimension N of the RB for c_a, c_b, q_a and q_b, but one can certainly take different dimensions for the RB; this also applies to h_a and h_b. Assume that W_z ∈ R^{𝒩×M}, with 𝒩 the dimension of the FOM (12), is the CRB for the nonlinear operator h_z, and V_{cz}, V_{qz} ∈ R^{𝒩×N} (V^T_{cz} V_{cz} = I, V^T_{qz} V_{qz} = I) are the RB for the field variables c_z and q_z, respectively, i.e.,

    h^n_z ≈ W_z β^n_z,  c^n_z ≈ ĉ^n_z = V_{cz} a^n_{cz},  q^n_z ≈ q̂^n_z = V_{qz} a^n_{qz},  n = 0, ..., K.      (13)

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

    A_{cz} a^{n+1}_{cz} = B_{cz} a^n_{cz} + d^n_0 d_{cz} − ((1 − ε)/ε) Δt H_{cz} β^n_z,
    a^{n+1}_{qz} = a^n_{qz} + Δt H_{qz} β^n_z,      (14)

where a^n_{cz} = a^n_{cz}(μ) = (a_{cz}^{n,1}, ..., a_{cz}^{n,N})^T, a^n_{qz} = a^n_{qz}(μ) = (a_{qz}^{n,1}, ..., a_{qz}^{n,N})^T ∈ R^N are the reduced state vectors of the ROM, and A_{cz} = V^T_{cz} A V_{cz}, B_{cz} = V^T_{cz} B V_{cz}, d_{cz} = V^T_{cz} e_1, H_{cz} = V^T_{cz} W_z, H_{qz} = V^T_{qz} W_z are the reduced matrices.


Note that β^n_z = β^n_z(μ) = (β_z^{n,1}, ..., β_z^{n,M})^T ∈ R^M are the vectors of coefficients for the empirical interpolation of the nonlinear operator h^n_z, and they are parameter- and time-dependent. The evaluation of β^n_z is essentially the same as the computation of the coefficients σ_i(μ) in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

    ∑_{i=1}^{M} β_z^{n,i} W_{z,i}(x_j) = h_z^{n,j},  j = 1, ..., M.

Here the evaluation of h_z^{n,j} only needs the j-th entries (ĉ_a^{n,j}, ĉ_b^{n,j} and q̂_z^{n,j}) of the solution vectors (ĉ^n_a, ĉ^n_b and q̂^n_z), i.e., h_z^{n,j} = h_z(ĉ_a^{n,j}, ĉ_b^{n,j}, q̂_z^{n,j}). For the general operator empirical interpolation, the value of the operator at an interpolation point (e.g., x_j) may depend on more entries of the solution vectors (e.g., the j-th entries and their neighbors). For more details, refer to [11, 25].
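Because the EI points and basis functions produced by Algorithm 2 are nested, the interpolation matrix with entries W_{z,i}(x_j) is lower triangular with unit diagonal, so the coefficient system above is solved by a single forward substitution in O(M²). A sketch with a synthetic 3 × 3 system:

```python
def ei_coefficients(W_at_points, h_at_points):
    """Solve sum_i beta_i W_i(x_j) = h(x_j), j = 1..M, by forward substitution.

    W_at_points[j][i] = W_i(x_j) is assumed unit lower triangular, as holds
    for a nested EI basis (W_i vanishes at the earlier points x_1..x_{i-1})."""
    M = len(h_at_points)
    beta = [0.0] * M
    for j in range(M):
        beta[j] = h_at_points[j] - sum(W_at_points[j][i] * beta[i] for i in range(j))
    return beta

# synthetic unit lower-triangular interpolation matrix and operator values
W = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.2, 0.3, 1.0]]
h = [2.0, 2.0, 1.0]
beta = ei_coefficients(W, h)   # approximately [2.0, 1.0, 0.3]
```

Since M is small and independent of 𝒩, this solve is cheap enough to repeat at every time step of the ROM (14).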

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a strategy of suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all N-dependent terms are computed and stored; in the online process, for any given parameter μ, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline process, given training sample sets P^crb_train and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{cz} and V_{qz}. Consequently, all N-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., A_{cz}, B_{cz}, d_{cz}, H_{cz} and H_{qz}), and the N-independent ROM can be formulated as in (14). For a newly given parameter μ ∈ P, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered by (13).
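The split can be made concrete with a few lines of pure-Python linear algebra; the matrices below are small stand-ins for the FOM operator A and an orthonormal RB V, not the chromatography matrices:

```python
def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul(X, Y):
    # naive dense product, sufficient for the small sizes of this sketch
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# --- offline phase (done once): project the large operator onto the RB ---
Nfull, Nrb = 6, 2                                   # stand-in FOM and RB dimensions
A = [[2.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0)
      for j in range(Nfull)] for i in range(Nfull)]  # stand-in FOM matrix
V = [[3 ** -0.5 if j == i % Nrb else 0.0
      for j in range(Nrb)] for i in range(Nfull)]    # orthonormal stand-in RB
Ar = matmul(matmul(transpose(V), A), V)              # Nrb x Nrb, parameter-independent
# --- online phase (per parameter): only Nrb-sized quantities are touched ---
```

The point of the decomposition is that Ar is assembled once offline; every online query then works only with Nrb × Nrb objects, independently of Nfull.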

7 Output-oriented error estimation

It is crucial to obtain a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g. [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response $y(u_{\mathcal{N}})$ is of interest. Hence, during the greedy algorithm, e.g. Algorithm 1 or Algorithm 4, the error estimation $\eta_N(\mu_{\max})$ should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2\rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$. The induced norm $\|\cdot\|$ is the standard 2-norm of the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For the parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall from Section 3 that $\mathcal{L}^I(t^n)$ and $\mathcal{L}^E(t^n)$ are linear, so the evolution scheme (7) can be rewritten in the vector space as

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\big(u^n(\mu);\mu\big), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N}\times\mathcal{N}}$ are constant matrices and $g(u^n(\mu);\mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$, $n = 0,\ldots,K-1$, are nonsingular for a stable scheme in practice.
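As an illustration, marching a scheme of the form (15) with constant toy matrices and a generic nonlinearity $g$ (stand-ins, not the paper's operators) looks as follows:

```python
import numpy as np

def evolve(A, B, g, u0, K):
    """March the scheme A u^{n+1} = B u^n + g(u^n) for K steps, cf. (15)."""
    u = u0.copy()
    trajectory = [u.copy()]
    for _ in range(K):
        u = np.linalg.solve(A, B @ u + g(u))
        trajectory.append(u.copy())
    return trajectory

# Toy usage: 2x2 system with a mild quadratic nonlinearity
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 0.1], [0.0, 1.0]])
traj = evolve(A, B, lambda u: 0.1 * u**2, np.array([1.0, 1.0]), K=3)
```

In practice one would factorize `A` once and reuse the factorization over all time steps instead of calling a dense solve repeatedly.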

Given a parameter $\mu \in \mathcal{P}$, let $\tilde{u}^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and let $\hat{g}^n(\mu) = I_M[g(\tilde{u}^n(\mu))] = W \beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N}\times N}$ and $W \in \mathbb{R}^{\mathcal{N}\times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ and write $u^n$, $\tilde{u}^n$, $a^n$ and $\beta^n$ instead of $u^n(\mu)$, $\tilde{u}^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$. The following a posteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)} \tilde{u}^n + I_M[g(\tilde{u}^n)] - A^{(n)} \tilde{u}^{n+1}. \qquad (16)$$

A simple computation gives the squared norm of the residual,

$$\begin{aligned}
\|r^{n+1}(\mu)\|^2 = \langle r^{n+1}(\mu),\, r^{n+1}(\mu)\rangle
&= (a^n)^T V^T (B^{(n)})^T B^{(n)} V a^n + (\beta^n)^T W^T W \beta^n \\
&\quad + (a^{n+1})^T V^T (A^{(n)})^T A^{(n)} V a^{n+1} + 2\, (\beta^n)^T W^T B^{(n)} V a^n \\
&\quad - 2\, (a^n)^T V^T (B^{(n)})^T A^{(n)} V a^{n+1} - 2\, (\beta^n)^T W^T A^{(n)} V a^{n+1}. \qquad (17)
\end{aligned}$$

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e. there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \qquad x, y \in \mathbb{R}^{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" for a certain dimension of $W = [W_1,\ldots,W_{M+M'}]$, i.e.

$$I_{M+M'}[g(\tilde{u}^n)] = \sum_{m=1}^{M+M'} W_m\, \beta^n_m = g(\tilde{u}^n).$$

Assume further that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \tilde{u}^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \|(A^{(k)})^{-1}\| \Bigg(\prod_{j=k+1}^{n-1} G(j)\Bigg) \Big(\varepsilon^k_{EI}(\mu) + \|r^{k+1}(\mu)\|\Big), \qquad (18)$$

where $G(j) = \|(A^{(j)})^{-1}\| \big(\|B^{(j)}\| + L_g\big)$ and $\varepsilon^n_{EI}(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta^n_{N,M}(\mu) := \sum_{k=0}^{n-1} \Bigg(\prod_{j=k+1}^{n-1} G_F(j)\Bigg) \Big(\|(A^{(k)})^{-1}\|\, \varepsilon^k_{EI}(\mu) + \|(A^{(k)})^{-1} r^{k+1}(\mu)\|\Big), \qquad (19)$$

where $G_F(j) = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\|$, $n = 0,\ldots,K-1$.

Proof. Taking the difference of (15) and (16), we obtain the error equation

$$\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\tilde{u}^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \big(g(u^n) - g(\tilde{u}^n)\big) + \big(g(\tilde{u}^n) - I_M[g(\tilde{u}^n)]\big) + r^{n+1}(\mu). \qquad (20)
\end{aligned}$$

Multiplying both sides of (20) by $(A^{(n)})^{-1}$, we obtain

$$\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\tilde{u}^n)\big) \\
&\quad + (A^{(n)})^{-1}\big(g(\tilde{u}^n) - I_M[g(\tilde{u}^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu). \qquad (21)
\end{aligned}$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\tilde{u}^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm,

$$\|e^{n+1}(\mu)\| \le \|(A^{(n)})^{-1}\| \Big(\big(\|B^{(n)}\| + L_g\big)\|e^n(\mu)\| + \varepsilon^n_{EI}(\mu) + \|r^{n+1}(\mu)\|\Big), \qquad (22)$$

where $\varepsilon^n_{EI}(\mu) = \|g(\tilde{u}^n) - I_M[g(\tilde{u}^n)]\| = \big\|\sum_{m=M+1}^{M+M'} W_m\, \beta^n_m\big\|$. Resolving the recursion (22) with initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we revisit equation (21) and observe that the bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\|e^{n+1}(\mu)\| \le \Big(\|(A^{(n)})^{-1} B^{(n)}\| + L_g \|(A^{(n)})^{-1}\|\Big) \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (23)$$

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\| \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\| \|r^{n+1}\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19). $\square$
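A minimal sketch of how the bound (19) is accumulated in practice via the one-step recursion (23), with placeholder matrices, residuals and interpolation errors:

```python
import numpy as np

def error_bound(Ainv_seq, B_seq, res_seq, eps_seq, Lg):
    """Accumulate eta^{n+1} = G_F(n) eta^n + ||A^{-1}|| eps_EI^n
    + ||A^{-1} r^{n+1}||, starting from eta^0 = 0 (cf. (23) and (19)).
    All norms are spectral 2-norms."""
    eta = 0.0
    for Ainv, Bn, r, eps in zip(Ainv_seq, B_seq, res_seq, eps_seq):
        GF = np.linalg.norm(Ainv @ Bn, 2) + Lg * np.linalg.norm(Ainv, 2)
        eta = GF * eta + np.linalg.norm(Ainv, 2) * eps + np.linalg.norm(Ainv @ r)
    return eta

# Toy check: A = 2I, B = I, no EI error, unit residuals, Lg = 0
Ainv = 0.5 * np.eye(2)
eta = error_bound([Ainv, Ainv], [np.eye(2)] * 2,
                  [np.array([1.0, 0.0])] * 2, [0.0, 0.0], Lg=0.0)
```

For time-independent $A$ and $B$ (as in (12)), the norms entering the recursion are computed once and reused at every time step.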

Remark 7.2. In many cases, the operators $\mathcal{L}^I(t^n)$ and $\mathcal{L}^E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see e.g. (12). In such a case, the error bound becomes much simpler; see e.g. (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in functional space. The error bound in (18) corresponds to the operator form (5.5) in [11]. However, the error bound may grow exponentially when $G(j) = \|(A^{(j)})^{-1}\|\big(\|B^{(j)}\| + L_g\big) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F(j) = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all parameter-independent matrix products in (17) (e.g. $V^T (B^{(n)})^T B^{(n)} V$, $W^T W$) can be precomputed once $V$ and $W$ are obtained; they are computed only once for all parameters in the training set. This is also true for the computation of $\|(A^{(n)})^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$. In addition, as is shown in [11], a small $M'$ gives good results; in practice we use $M' = 1$ in the simulations below.

Remark 7.5. The 2-norm is applied in the error bounds above, and the 2-norm of a matrix $H$ is its spectral norm; therefore

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error so as to construct a ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the following form:

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O\times\mathcal{N}}$ is a constant matrix. Then the output error $e^n_O(\mu) = P u^n - P \tilde{u}^n$ satisfies

$$\|e^{n+1}_O(\mu)\| \le \bar{\eta}^{n+1}_{N,M} := G_O(n)\, \eta^n_{N,M} + \|P (A^{(n)})^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (25)$$

where $G_O(n) = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0,\ldots,K-1$.

Proof. Multiplying both sides of the error equation (21) by $P$ from the left, we get

$$P e^{n+1}(\mu) = P\Big((A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\tilde{u}^n)\big) + (A^{(n)})^{-1}\big(g(\tilde{u}^n) - I_M[g(\tilde{u}^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu)\Big).$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O(n)\, \|e^n(\mu)\| + \|P (A^{(n)})^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \|(A^{(n)})^{-1} r^{n+1}(\mu)\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25). $\square$

Remark 7.7. Once the error estimation for the field variable is obtained, e.g. (19), a trivial error bound for the output (24) can be given as

$$\begin{aligned}
\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| &\le \|P\|\, \|e^{n+1}(\mu)\| \\
&\le \|P\| \Big(G_F(n)\, \|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|\Big). \qquad (27)
\end{aligned}$$

The last inequality holds due to inequality (23). It is obvious that the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e. a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example and recall the detailed simulation of $c_z$ (see (12)),

$$A c^{n+1}_z = B c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h^n_z. \qquad (28)$$

The residual caused by the approximate solution $\tilde{c}^n_z$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \tilde{c}^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\tilde{c}^n_z)] - A \tilde{c}^{n+1}_z. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e. they are constant matrices. This makes the following error bounds in (32) and (33) relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition and does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume that there exists a positive constant $L_h$ such that

$$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\tilde{c}^n_a, \tilde{c}^n_b, \tilde{q}^n_z)\| \le L_h \|c^n_z - \tilde{c}^n_z\|, \qquad n = 0,\ldots,K. \qquad (30)$$

Assuming that the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \tilde{c}^n_z$ ($n = 1,\ldots,K$):

$$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{\,n-1-k} \Big(\tau\, \varepsilon^k_{EI}(\mu) + \|r^{k+1}_{c_z}(\mu)\|\Big), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k} \Big(\tau\, \|A^{-1}\|\, \varepsilon^k_{EI}(\mu) + \|A^{-1} r^{k+1}_{c_z}(\mu)\|\Big), \qquad (32)$$

where $G_{F_c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest $e^n_{c_z,O}(\mu) = P c^n_z - P \tilde{c}^n_z$ can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\|e^{n+1}_{c_z,O}(\mu)\| \le \bar{\eta}^{n+1}_{N,M,c_z}(\mu) := G_{O_c}\, \eta^n_{N,M,c_z}(\mu) + \tau\, \|P A^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \|A^{-1} r^{n+1}_{c_z}(\mu)\|, \qquad (33)$$

where $G_{O_c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0,\ldots,0,1) \in \mathbb{R}^{1\times\mathcal{N}}$ in this model, which means the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can be obtained similarly, following the derivation in Section 7.1. As the output of interest for the system (12) depends only on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; likewise, a bound $\eta^n_{N,M,U}(\mu)$ for the vector $U$ would be involved if the output error estimation were derived by considering all the field variables together. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large, and the weight $\tau L_h$ is still small because the time step $\Delta t$ is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates over time. Since $\eta^n_{N,M}(\mu)$ (respectively $\eta^n_{N,M,c_z}(\mu)$) enters the output error bound in (25) (respectively (33)), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that error estimates like (18) may lose sharpness when many time instances $t^n$, $n = 0, 1,\ldots,K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step that allows an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $\mathrm{tol}_{\mathrm{RB}}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop.
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $\mathrm{tol}_{\mathrm{RB}}$ ($< 1$), $\mathrm{tol}_{\mathrm{decay}}$.
Output: RB $V = [V_1,\ldots,V_N]$.
 1: Implement Step 1 in Algorithm 4.
 2: while the error $\eta_N(\mu_{\max}) > \mathrm{tol}_{\mathrm{RB}}$ do
 3:   Implement Steps 3-6 in Algorithm 4.
 4:   Compute the decay rate of the error bound, $d_\eta = \big(\eta_{N-1}(\mu^{\mathrm{old}}_{\max}) - \eta_N(\mu_{\max})\big)\,/\,\eta_{N-1}(\mu^{\mathrm{old}}_{\max})$.
 5:   if $d_\eta < \mathrm{tol}_{\mathrm{decay}}$ then
 6:     Compute the true output error $e_N(\mu_{\max})$ at the selected parameter $\mu_{\max}$.
 7:     if $e_N(\mu_{\max}) < \mathrm{tol}_{\mathrm{RB}}$ then
 8:       Stop.
 9:     end if
10:   end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance $\mathrm{tol}_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, Step 5 in Algorithm 5 will be implemented.
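The control flow of Algorithm 5 can be sketched as follows; `extend_basis` (one greedy extension step returning the selected parameter and the current error bound) and `true_output_error` are hypothetical stand-ins for the problem-specific routines.

```python
def assemble_rb(extend_basis, true_output_error,
                tol_rb=1e-6, tol_decay=1e-3, max_n=100):
    """Greedy RB extension with the early-stop check of Algorithm 5."""
    eta_old = None
    n = 0
    for n in range(1, max_n + 1):
        mu_max, eta = extend_basis()            # Steps 3-6 of Algorithm 4
        if eta <= tol_rb:                       # usual convergence criterion
            break
        if eta_old is not None:
            decay = (eta_old - eta) / eta_old   # Step 4: relative decay rate
            if decay < tol_decay and true_output_error(mu_max) < tol_rb:
                break                           # Steps 5-8: early stop
        eta_old = eta
    return n                                    # number of extension steps (toy)

# Toy run: the bound stagnates after two extensions, the true error is tiny
etas = iter([1.0, 0.5, 0.499])
n_final = assemble_rb(lambda: ("mu*", next(etas)),
                      lambda mu: 1e-9, tol_decay=1e-2)
```

The expensive true-error check is only triggered once the cheap bound has stagnated, so the extra cost of the validation step stays small.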

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variables $\mu = (Q, t_{\mathrm{in}})$ are chosen optimally in a reasonable parameter domain to maximize the production rate $\mathrm{Pr}(\mu) = s(\mu) Q / t_{\mathrm{cyc}}$ while respecting the requirement on the recovery yield $\mathrm{Rec}(\mu) = s(\mu) / \big(t_{\mathrm{in}} (c^f_a + c^f_b)\big)$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\,dt$, and $c_{z,O}(t;\mu) = c_z(t, 1;\mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the following optimization problem for batch chromatography:

$$\begin{aligned}
&\min_{\mu\in\mathcal{P}} -\mathrm{Pr}(\mu) \\
&\ \text{s.t.}\ \ \mathrm{Rec}_{\min} - \mathrm{Rec}(\mu) \le 0,\ \mu \in \mathcal{P}, \\
&\qquad\ c_z(\mu),\, q_z(\mu)\ \text{are the solutions to the system (3)-(5)},\ z = a, b. \qquad (34)
\end{aligned}$$

Notice that when solving the system (3)-(5), the time step size has to be taken relatively small so that the cutting points $t_i$, $i = 1,\ldots,4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes difficulties in the error estimation and in the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variables $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}_{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$, $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $\mathcal{N}$ of the spatial discretization in the FOM (12) is taken as 1500.
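Once the cutting times are known, evaluating $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ reduces to two quadratures over the outlet trajectories; a sketch with trapezoidal quadrature and toy inputs (the determination of the cutting points themselves is omitted; the function names are placeholders, not the paper's code):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def performance(t, ca_out, cb_out, cuts, Q, t_in, t_cyc, cf=(2.9, 2.9)):
    """Production rate Pr and recovery yield Rec as defined above.
    cuts = (t1, t2, t3, t4) are the (precomputed) cutting times."""
    t1, t2, t3, t4 = cuts
    in_b = (t >= t1) & (t <= t2)     # collection window of component b
    in_a = (t >= t3) & (t <= t4)     # collection window of component a
    s = _trapz(ca_out[in_a], t[in_a]) + _trapz(cb_out[in_b], t[in_b])
    return s * Q / t_cyc, s / (t_in * (cf[0] + cf[1]))

# Toy usage: constant unit concentrations, so s = (t4-t3) + (t2-t1) = 2
t = np.arange(21) * 0.5
Pr, Rec = performance(t, np.ones_like(t), np.ones_like(t),
                      cuts=(1.0, 2.0, 3.0, 4.0), Q=0.1, t_in=1.0, t_cyc=10.0)
```

This also makes explicit why a fine time grid is needed: both the windows $[t_1,t_2]$, $[t_3,t_4]$ and the quadrature accuracy depend on the time resolution of the outlet trajectory.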

Table 1: Model parameters and operating conditions for the chromatographic model.

  Column dimensions [cm]                              2.6 x 10.5
  Column porosity ε [-]                               0.4
  Peclet number Pe [-]                                2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]      0.1
  Feed concentrations c_z^f, z = a, b [g/l]           2.9

Table 2: Coefficients of the adsorption isotherm equation.

  H_{a1} [-]    2.69      H_{b1} [-]    3.73
  H_{a2} [-]    0.1       H_{b2} [-]    0.3
  K_{a1} [l/g]  0.0336    K_{b1} [l/g]  0.0446
  K_{a2} [l/g]  1.0       K_{b2} [l/g]  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection (ASS) technique, we compare the runtime for the generation of the RB and CRB with different threshold values $\mathrm{tol}_{\mathrm{ASS}}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{\mathrm{in}})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $\mathrm{tol}_{\mathrm{ASS}}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of the CRBs ($W_a$, $W_b$) at the same error tolerance ($\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$) with different thresholds. $M' = 1$ is the number of basis vectors used for error estimation.

           tol_ASS         Res(ξ^a_{M+M'}, ξ^b_{M+M'})    M (W_a, W_b)   Runtime [h]
  no ASS   -               9.2e-8, 8.5e-8                 146, 152       62.5 (-)
  ASS      1.0e-4          9.6e-8, 8.1e-8                 147, 152       6.05 (-90.3%)
  ASS      1.0e-3          8.7e-8, 9.9e-8                 147, 152       3.62 (-94.2%)
  ASS      1.0e-2          9.4e-8, 6.2e-8                 144, 150       2.70 (-95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that for the ASS case the CRB is precomputed with $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{\mathrm{RB}} = 1.0 \times 10^{-6}$, $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $\mathrm{tol}_{\mathrm{RB}}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithm          Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (-51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, $n = 0,\ldots,K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g. Algorithm 4 or 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\bar{\eta}^{n+1}_{N,M,c_z}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error indicator in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu\in\mathcal{P}_{\mathrm{train}}} \max_{z\in\{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \bar{\eta}^n_{N,M,c_z}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu\in\mathcal{P}_{\mathrm{train}}} e_N(\mu)$, where $e_N(\mu) = \max_{z\in\{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \tilde{c}^n_{z,O}(\mu)\|$, and $\tilde{c}^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where $\mathrm{tol}_{\mathrm{decay}} = 0.03$. With the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm; the size of a circle shows how frequently the same parameter is selected for the RB extension.
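The indicator and the reference error above are plain reductions over time steps, components and training parameters, e.g. (with toy data, not values from the paper):

```python
import numpy as np

def greedy_indicator(bounds):
    """max over parameters and components of the time-averaged output
    error bound; bounds[mu][z] holds the per-time-step bounds for z."""
    return max(max(float(np.mean(b)) for b in comp.values())
               for comp in bounds.values())

bounds = {
    "mu1": {"a": np.array([1.0, 2.0, 3.0, 4.0]), "b": np.array([0.5] * 4)},
    "mu2": {"a": np.array([0.1] * 4), "b": np.array([2.0] * 4)},
}
indicator = greedy_indicator(bounds)   # picked by the greedy selection
```

Averaging over the time steps, rather than taking the bound at the final time, dampens the effect of the accumulation discussed in Section 7.3.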

We want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates; this will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $\mathcal{P}_{\mathrm{val}}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max. error} = \max_{\mu\in\mathcal{P}_{\mathrm{val}}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance $\mathrm{tol}_{\mathrm{RB}}$. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter $\mu = (Q, t_{\mathrm{in}}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (field variable error bound, output error bound and true output error plotted over the size $N$ of the RB). The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu\in\mathcal{P}_{\mathrm{train}}} \max_{z\in\{a,b\}} \eta_{N,M,c_z}(\mu)$, where $\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.]


[Figure 3: Error bound decay during the RB extension using the early-stop technique, Algorithm 5, and the corresponding maximal true output error (output error bound and true output error plotted over the size $N$ of the RB).]

[Figure 4: Parameter selection in the generation of the RB (injection period $t_{\mathrm{in}}$ plotted over the feed flow rate $Q$). The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.]


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal{P}_{\mathrm{val}}$ with 600 random sample points. Tolerances for the generation of the ROM: $\mathrm{tol}_{\mathrm{CRB}} = 1.0 \times 10^{-7}$, $\mathrm{tol}_{\mathrm{ASS}} = 1.0 \times 10^{-4}$, $\mathrm{tol}_{\mathrm{RB}} = 1.0 \times 10^{-6}$.

  Simulation               Max. error     Average runtime [s]   SpF
  FOM (N = 1500)           -              312.13 (-)            -
  ROM (POD-Greedy)         3.79e-7        6.3                   50
  ROM (ASS-POD-Greedy)     4.58e-7        6.3                   50

[Figure 5: Dimensionless concentrations at the outlet of the column over dimensionless time, using the FOM ($\mathcal{N} = 1500$) and the ROM ($N = 47$) at the parameter $\mu = (Q, t_{\mathrm{in}}) = (0.1018, 1.3487)$; curves: $c_a$ (FOM), $c_b$ (FOM), $c_a$ (ROM), $c_b$ (ROM).]


8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned}
&\min_{\mu\in\mathcal{P}} -\widehat{\mathrm{Pr}}(\mu) \\
&\ \text{s.t.}\ \ \mathrm{Rec}_{\min} - \widehat{\mathrm{Rec}}(\mu) \le 0, \\
&\qquad\ \tilde{c}^n_z(\mu),\, \tilde{q}^n_z(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b.
\end{aligned}$$

Here $\widehat{\mathrm{Pr}}(\mu)$ and $\widehat{\mathrm{Rec}}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2,\ldots$. When $\|\mu^{k+1} - \mu^k\| < \mathrm{tol}_{\mathrm{opt}}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $\mathrm{tol}_{\mathrm{opt}} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the FOM-based optimization is significantly reduced: the speed-up factor (SpF) is 54.
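The paper uses NLopt's NLOPT_GN_DIRECT_L; since the DIRECT-L algorithm itself is unconstrained, a common way to sketch the loop is to fold the yield constraint into a penalty. Below the same locally biased DIRECT idea is illustrated with SciPy's `direct` solver and a toy `rom_solve` stand-in (smooth production rate, always-feasible recovery yield); the paper's actual setup may differ.

```python
import numpy as np
from scipy.optimize import direct

def make_objective(rom_solve, rec_min=0.8, penalty=1e3):
    """-Pr(mu), with the recovery-yield constraint folded in as a penalty."""
    def f(mu):
        Pr, Rec = rom_solve(mu)
        return -Pr + penalty * max(0.0, rec_min - Rec)
    return f

def rom_solve(mu):
    """Toy stand-in for the reduced simulation: quadratic Pr with a
    maximum at (Q, t_in) = (0.10, 1.0); Rec is always feasible here."""
    Q, t_in = mu
    Pr = 1.0 - 100.0 * (Q - 0.10) ** 2 - (t_in - 1.0) ** 2
    return Pr, 1.0

bounds = [(0.0667, 0.1667), (0.5, 2.0)]   # parameter domain P
res = direct(make_objective(rom_solve), bounds, locally_biased=True)
```

Each objective evaluation here costs only a reduced solve, which is what makes the many function calls of a global, gradient-free method affordable.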

Note that if the offline cost, i.e. the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one for constructing and using a surrogate ROM, the other for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

  Simulation       Objective (Pr)   Opt. solution (μ)    N_it ¹   Runtime [h]   SpF
  FOM-based opt.   0.020264         (0.07964, 1.05445)   202      33.88         -
  ROM-based opt.   0.020266         (0.07964, 1.05445)   202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. Empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over time. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a good candidate and is under current investigation.

References[1] Z Bai Krylov subspace techniques for reduced-order modeling of large-scale dy-

namical systems Applied Numerical Mathematics 43 (2002) pp 9ndash44

[2] M Barrault Y Maday N C Nguyen and A T Patera An lsquoempir-ical interpolationrsquo method application to efficient reduced-basis discretization ofpartial differential equations Comptes Rendus Mathematique Academy ScienceParis Series I 339 (2004) pp 667ndash672

[3] U Baur C Beattie P Benner and S Gugercin Interpolatory projectionmethods for parameterized model reduction SIAM Journal on Scientific Comput-ing 33 (2011) pp 2489ndash2518

[4] P Benner L Feng S Li and Y zhang Reduced-order modeling and rom-based optimization of batch chromatography in ENUMATH 2013 Proceedingsaccepted

[5] P Benner S Gugercin and K Willcox A survey of model reductionmethods for parametric systems MPI Magdeburg Preprints (2013)

[6] P Benner E Sachs and S Volkwein Model Order Reduction for PDEConstrained Optimization Preprints Konstanzer Online-Publikations-System(KOPS) (2014)

[7] L T Biegler O Ghattas M Heinkenschloss D Keyes and B vanBloemen Waanders Real-Time PDE-constrained Optimization Society for In-dustrial and Applied Mathematics 2007

27

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.


[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.


[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


set. The "optimal" training set means that the original manifold $\mathcal{M} = \{u(\mu) \mid \mu \in \mathcal{P}\}$ can be well represented by the submanifold $\mathcal{M}_{\mathrm{train}} = \{u(\mu) \mid \mu \in \mathcal{P}_{\mathrm{train}}\}$ induced by the sample set $\mathcal{P}_{\mathrm{train}}$, with its size as small as possible.

As aforementioned, for time-dependent problems, if the dynamics is of interest, the solution at all time instances should be collected as snapshots. In such a case, even for an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. Such problems arise from, e.g., chemical engineering, fluid dynamics, and aerodynamics. A large number of snapshots means that it is time-consuming to generate the reduced basis, because the POD mode in Step 4 of Algorithm 1 becomes hard to compute: the singular value decomposition must be applied to the snapshot matrix $U$, whose size grows with the number of snapshots. This is also true for the generation of the CRB if the operator to be approximated is time-dependent. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick out the solutions at certain time instances (e.g., every two or several time steps). However, the results might be of low accuracy, because important information may be lost during such a trivial snapshot selection.
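To make the cost driver concrete: extracting the first POD mode amounts to computing the dominant left singular vector of the snapshot matrix. The following is a toy, self-contained Python sketch (made-up snapshot data; power iteration on the small Gram matrix $U^T U$ stands in for a full SVD, which is the expensive step when the number of snapshots is large):

```python
import math

# Snapshot matrix U with columns u^0, u^1, u^2 (toy data, 4 spatial dofs).
# The first two snapshots are nearly parallel -- typical redundancy.
U = [[1.0, 0.9, 0.0],
     [2.0, 1.8, 0.0],
     [0.0, 0.1, 1.0],
     [0.0, 0.0, 2.0]]

m = len(U[0])
# Small Gram matrix G = U^T U (m x m), cheap compared with the full SVD of U.
G = [[sum(U[k][i] * U[k][j] for k in range(len(U))) for j in range(m)]
     for i in range(m)]

y = [1.0] * m                     # power iteration for the dominant eigenvector
for _ in range(200):
    y = [sum(G[i][j] * y[j] for j in range(m)) for i in range(m)]
    nrm = math.sqrt(sum(c * c for c in y))
    y = [c / nrm for c in y]

# First POD mode = normalized U * y (the dominant left singular vector of U).
pod1 = [sum(U[k][j] * y[j] for j in range(m)) for k in range(len(U))]
nrm = math.sqrt(sum(c * c for c in pod1))
pod1 = [c / nrm for c in pod1]
```

Here the dominant direction is essentially $(1, 2, 0, 0)^T/\sqrt{5}$, carried by the two redundant snapshots; the sketch only illustrates why discarding near-parallel snapshots before the POD step saves work.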

For an "optimal" or a selected training set, we propose to select the snapshots adaptively, according to the variation of the trajectory of the solution $\{u^n(\mu)\}_{n=0}^{K}$. The idea is to discard the redundant ("close to" linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors $v_1$ and $v_2$ can be read off from the angle $\theta$ between them: they are linearly dependent if and only if $|\cos(\theta)| = 1$ ($\theta = 0$ or $\pi$), where $\cos(\theta) = \frac{\langle v_1, v_2 \rangle}{\|v_1\|\,\|v_2\|}$. In other words, the value $1 - |\cos(\theta)|$ is large if the correlation between the two vectors is weak. This implies that the quantity $1 - \frac{|\langle v_1, v_2 \rangle|}{\|v_1\|\,\|v_2\|}$ is a good indicator for the linear dependency of $v_1$ and $v_2$.

Given a parameter $\mu$ and the initial vector $u^0(\mu)$, the numerical solution $u^n(\mu)$ ($n = 1, \ldots, K$) can be obtained, e.g., by using the evolution scheme (7). Define the indicator

\[ \mathrm{Ind}\left(u^n(\mu), u^m(\mu)\right) = 1 - \frac{\left|\langle u^n(\mu), u^m(\mu) \rangle\right|}{\|u^n(\mu)\|\,\|u^m(\mu)\|}, \]

which measures the linear dependency of the two vectors: when $\mathrm{Ind}(u^n(\mu), u^m(\mu))$ is large, the correlation between $u^n(\mu)$ and $u^m(\mu)$ is weak. Algorithm 3 shows the realization of the ASS: $u^n(\mu)$ is taken as a new snapshot only when $u^n(\mu)$ and $u^{n_j}(\mu)$ are "sufficiently" linearly independent, which is checked by testing whether $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu))$ is large enough. Here $u^{n_j}(\mu)$ is the last selected snapshot.

Remark 5.1. The inner product $\langle \cdot, \cdot \rangle: \mathcal{W}^{\mathcal{N}} \times \mathcal{W}^{\mathcal{N}} \to \mathbb{R}$ used above is defined properly according to the solution space $\mathcal{W}^{\mathcal{N}}$, and the norm $\|\cdot\|$ is induced by the inner product correspondingly. Therefore, the ASS technique is applicable to any snapshot-based MOR method for time-dependent problems, and it is independent of the discretization method.

Remark 5.2. For the linear dependency, it is also possible to check the angle between the tested vector $u^n(\mu)$ and the subspace spanned by the already selected snapshots $S_A$. More redundant information can be discarded this way, but at higher cost. Since the data will be compressed further anyway, e.g., by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3. Note that the tolerance $tol_{ASS}$ is prespecified and problem-dependent, and a value of $O(10^{-4})$ gives good results for the numerical


Algorithm 3 Adaptive snapshot selection (ASS)
Input: initial vector $u^0(\mu)$, $tol_{ASS}$
Output: selected snapshot matrix $S_A = [u^{n_1}(\mu), u^{n_2}(\mu), \ldots, u^{n_\ell}(\mu)]$
1: Initialization: $j = 1$, $n_j = 0$, $S_A = [u^{n_j}(\mu)]$
2: for $n = 1, \ldots, K$ do
3:   Compute the vector $u^n(\mu)$
4:   if $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu)) > tol_{ASS}$ then
5:     $j = j + 1$
6:     $n_j = n$
7:     $S_A = [S_A, u^{n_j}(\mu)]$
8:   end if
9: end for

examples studied in Section 8, based on our observation.

The ASS technique can easily be combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1); there is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $tol_{RB}$ ($< 1$)
Output: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu_{\max} = \mu^0$, $\eta(\mu_{\max}) = 1$
2: while $\eta_N(\mu_{\max}) > tol_{RB}$ do
3:   Compute the trajectory $S_{\max} = \{u^n(\mu_{\max})\}_{n=0}^{K}$, adaptively select snapshots using Algorithm 3, and get $S_{\max}^{A} = \{u^{n_1}(\mu_{\max}), \ldots, u^{n_\ell}(\mu_{\max})\}$
4:   Enrich the RB, e.g., $V = [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar{U}^{A} = [\bar{u}^{n_1}, \ldots, \bar{u}^{n_\ell}]$ with $\bar{u}^{n_s} = u^{n_s}(\mu_{\max}) - \Pi_{\mathcal{W}_N}[u^{n_s}(\mu_{\max})]$, $s = 1, \ldots, \ell$, $\ell \ll K$; here $\Pi_{\mathcal{W}_N}[u]$ is the projection of $u$ onto the current space $\mathcal{W}_N = \mathrm{span}\{V_1, \ldots, V_N\}$
5:   $N = N + 1$
6:   Find $\mu_{\max} = \arg\max_{\mu \in \mathcal{P}_{\mathrm{train}}} \eta_N(\mu)$
7: end while
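The selection rule of Algorithm 3 is straightforward to implement. Below is a minimal, self-contained Python sketch of the adaptive snapshot selection; the trajectory is a tiny synthetic list of vectors (not the chromatographic solution), and `tol_ass` plays the role of $tol_{ASS}$:

```python
import math

def ind(u, v):
    """Linear-dependency indicator: 1 - |<u, v>| / (||u|| ||v||)."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return 1.0 - abs(dot) / (nu * nv)

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Keep u^n only if it is 'sufficiently' independent of the last kept snapshot."""
    selected = [trajectory[0]]          # initialization: keep u^0
    last = trajectory[0]
    for u in trajectory[1:]:
        if ind(u, last) > tol_ass:      # weak correlation -> new snapshot
            selected.append(u)
            last = u
    return selected

# Toy trajectory: the second vector is nearly parallel to the first (redundant),
# the third points in a clearly different direction (kept).
traj = [[1.0, 0.0], [0.999999, 0.0001], [0.0, 1.0]]
kept = adaptive_snapshot_selection(traj, tol_ass=1e-4)
```

On this toy data the nearly parallel second vector is discarded and only two snapshots survive, which is exactly the compression effect exploited before the POD step.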

6 RB scheme for batch chromatography

Reduced basis methods are used to obtain a rapid solution of the PDEs. The RBM based on FV discretization for evolution equations was proposed in [24]. In this section, we show the derivation of the FOM based on the FV discretization for the batch chromatographic model (3)−(5), and the efficient generation of the ROM.

6.1 Full-order model based on FV discretization

As mentioned in Section 1, we use the FV discretization for the batch chromatographic model (3)−(5). More specifically, we use the Lax-Friedrichs flux [28] for the convection contribution, central difference approximation for the diffusion terms, and the Crank-Nicolson scheme for the time discretization. The fully discrete FV formulation for the system (3)−(4) can be written as follows:

\[
A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n, \qquad
q_z^{n+1} = q_z^n + \Delta t\, h_z^n, \tag{12}
\]

where $c_z^n = c_z^n(\mu) = (c_z^{n,1}, \ldots, c_z^{n,\mathcal{N}})^T$, $q_z^n = q_z^n(\mu) = (q_z^{n,1}, \ldots, q_z^{n,\mathcal{N}})^T \in \mathbb{R}^{\mathcal{N}}$, $z = a, b$, indicate the solutions of the field variables $c_z$ and $q_z$ at the time instance $t = t^n$ ($n = 0, \ldots, K$). $A$ and $B$ are tridiagonal constant matrices; $d_z^n$ and $h_z^n$ are parameter- and time-dependent:

\[
d_z^n = d_0^n\, e_1, \qquad h_z^n = (h_z^{n,1}, \ldots, h_z^{n,\mathcal{N}})^T,
\]

with $d_0^n = \Delta x\, \mathrm{Pe} \left(\frac{\lambda}{2} + \nu\right) \chi_{[0,\, t_{\mathrm{in}}]}(t^n)$, $\lambda = \frac{\Delta t}{\Delta x}$, $\nu = \frac{\Delta t}{\mathrm{Pe}\, \Delta x^2}$, $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{\mathcal{N}}$, and

\[
h_z^{n,j} = h_z\!\left(c_a^{n,j}, c_b^{n,j}, q_z^{n,j}\right) = \frac{L}{Q/(\varepsilon A_c)}\, \kappa_z \left(f_z(c_a^{n,j}, c_b^{n,j}) - q_z^{n,j}\right), \quad j = 1, \ldots, \mathcal{N}.
\]
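The time-stepping structure of (12) — an implicit solve for the concentration and an explicit update for the adsorbed phase — can be sketched generically. The snippet below is a toy illustration with tiny made-up matrices standing in for the tridiagonal FV matrices, and an already-evaluated kinetic term `h`; it is not the chromatographic discretization itself:

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (stand-in for a
    tridiagonal solver; the FV matrices A, B are tridiagonal in practice)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def step(A, B, c, q, d, h, dt, eps):
    """One step of scheme (12): solve A c_{n+1} = B c_n + d - tau*h implicitly,
    then update q explicitly; tau = (1-eps)/eps * dt."""
    tau = (1.0 - eps) / eps * dt
    rhs = [sum(B[i][j] * c[j] for j in range(len(c))) + d[i] - tau * h[i]
           for i in range(len(c))]
    c_next = solve(A, rhs)
    q_next = [q[i] + dt * h[i] for i in range(len(q))]
    return c_next, q_next

# Tiny 3-cell illustration with hypothetical data.
A = [[2.0, -0.5, 0.0], [-0.5, 2.0, -0.5], [0.0, -0.5, 2.0]]
B = [[1.0, 0.2, 0.0], [0.2, 1.0, 0.2], [0.0, 0.2, 1.0]]
c, q = [1.0, 0.5, 0.0], [0.1, 0.1, 0.1]
d, h = [0.3, 0.0, 0.0], [0.05, 0.02, 0.01]
c1, q1 = step(A, B, c, q, d, h, dt=0.01, eps=0.4)
```

The point of the sketch is only the cost structure: every time step requires a linear solve of dimension $\mathcal{N}$ in the full model, which is what the ROM later avoids.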

6.2 Reduced-order model

Let $N \in \mathbb{N}^{+}$ be the number of the RB vectors for $c_z$ and $q_z$, and $M \in \mathbb{N}^{+}$ be the number of the CRB vectors for the operators $h_a$ and $h_b$. Here, for simplicity of analysis, we use the same dimension $N$ of the RB for $c_a$, $c_b$, $q_a$ and $q_b$, but one can certainly take different dimensions for the RB; this also applies to $h_a$ and $h_b$. Assume that $W_z \in \mathbb{R}^{\mathcal{N} \times M}$ is the CRB for the nonlinear operator $h_z$, and $V_{c_z}, V_{q_z} \in \mathbb{R}^{\mathcal{N} \times N}$ ($V_{c_z}^T V_{c_z} = I$, $V_{q_z}^T V_{q_z} = I$) are the RB for the field variables $c_z$ and $q_z$, respectively, i.e.,

\[
h_z^n \approx W_z \beta_z^n, \quad c_z^n \approx \hat{c}_z^n = V_{c_z} a_{c_z}^n, \quad q_z^n \approx \hat{q}_z^n = V_{q_z} a_{q_z}^n, \quad n = 0, \ldots, K. \tag{13}
\]

Applying Galerkin projection and empirical operator interpolation, we formulate the ROM for the FOM (12) as follows:

\[
\hat{A}_{c_z} a_{c_z}^{n+1} = \hat{B}_{c_z} a_{c_z}^n + d_0^n\, \hat{d}_{c_z} - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, \hat{H}_{c_z} \beta_z^n, \qquad
a_{q_z}^{n+1} = a_{q_z}^n + \Delta t\, \hat{H}_{q_z} \beta_z^n, \tag{14}
\]

where $a_{c_z}^n = a_{c_z}^n(\mu) = (a_{c_z}^{n,1}, \ldots, a_{c_z}^{n,N})^T$ and $a_{q_z}^n = a_{q_z}^n(\mu) = (a_{q_z}^{n,1}, \ldots, a_{q_z}^{n,N})^T \in \mathbb{R}^{N}$ are the reduced state vectors of the ROM, and $\hat{A}_{c_z} = V_{c_z}^T A V_{c_z}$, $\hat{B}_{c_z} = V_{c_z}^T B V_{c_z}$, $\hat{d}_{c_z} = V_{c_z}^T e_1$, $\hat{H}_{c_z} = V_{c_z}^T W_z$, $\hat{H}_{q_z} = V_{q_z}^T W_z$ are the reduced matrices.


Note that $\beta_z^n = \beta_z^n(\mu) = (\beta_z^{n,1}, \ldots, \beta_z^{n,M})^T \in \mathbb{R}^{M}$ are the vectors of coefficients for the empirical interpolation of the nonlinear operator $h_z^n$, and are parameter- and time-dependent. The evaluation of $\beta_z^n$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

\[
\sum_{i=1}^{M} \beta_z^{n,i}\, W_z^i(x_j) = \hat{h}_z^{n,j}, \quad j = 1, \ldots, M.
\]

Here the evaluation of $\hat{h}_z^{n,j}$ only needs the $j$-th entries ($\hat{c}_a^{n,j}$, $\hat{c}_b^{n,j}$ and $\hat{q}_z^{n,j}$) of the approximate solution vectors ($\hat{c}_a^n$, $\hat{c}_b^n$ and $\hat{q}_z^n$), i.e., $\hat{h}_z^{n,j} = h_z(\hat{c}_a^{n,j}, \hat{c}_b^{n,j}, \hat{q}_z^{n,j})$. For general operator empirical interpolation, the value of the operator at the interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors). For more details, refer to [11, 25].
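The small $M \times M$ interpolation solve above can be sketched generically. The snippet below is a hypothetical empirical-interpolation example, not the chromatographic CRB: the collateral basis consists of monomials on a 5-point grid, the interpolation indices are hand-picked, and the interpolated function happens to lie in the span of the basis, so the reconstruction is exact:

```python
def solve(A, b):
    """Small dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Collateral basis W: 5 grid points, M = 3 modes w_i(x) = x**i.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
W = [[x ** i for i in range(3)] for x in xs]
idx = [0, 2, 4]                                  # interpolation points x_j

# Nonlinear-term values are needed at the interpolation points ONLY.
# Here h(x) = (1 + x)**2 lies in span{1, x, x**2}, so interpolation is exact.
h_at_idx = [(1.0 + xs[j]) ** 2 for j in idx]

beta = solve([W[j] for j in idx], h_at_idx)      # M x M system for beta
h_interp = [sum(W[k][i] * beta[i] for i in range(3)) for k in range(5)]
```

The design point mirrors the text: only $M$ entries of the nonlinear term are ever evaluated online, while the interpolant is available at all $\mathcal{N}$ grid points.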

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition strategy, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices, and all $\mathcal{N}$-dependent terms are computed and stored; in the online stage, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sample sets $\mathcal{P}_{\mathrm{train}}^{\mathrm{crb}}$ and $\mathcal{P}_{\mathrm{train}}$ (they can be chosen differently), Algorithm 2 is implemented to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal{N}$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $\hat{A}_{c_z}$, $\hat{B}_{c_z}$, $\hat{d}_{c_z}$, $\hat{H}_{c_z}$ and $\hat{H}_{q_z}$), and the $\mathcal{N}$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in \mathcal{P}$, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered by (13).
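The offline-online split can be illustrated with a deliberately tiny, hypothetical example (a 3×3 linear "FOM", a one-dimensional orthonormal basis, and a parameter that only scales the right-hand side): the projection $V^T A V$ is done once offline, and the online phase touches only reduced quantities.

```python
import math

# "FOM" data (toy, parameter enters only through the right-hand side mu * b0).
A  = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b0 = [1.0, 1.0, 1.0]

# Offline: one orthonormal basis vector and the projected (reduced) operators.
v   = [1.0 / math.sqrt(3.0)] * 3
Ar  = sum(v[i] * A[i][j] * v[j] for i in range(3) for j in range(3))  # V^T A V
br0 = sum(v[i] * b0[i] for i in range(3))                             # V^T b0

def online(mu):
    """Online phase: all work is in the reduced dimension (here 1)."""
    a = mu * br0 / Ar              # solve the reduced system Ar * a = mu * V^T b0
    return [a * vi for vi in v]    # lift back: u_hat = V a
```

By construction the lifted solution satisfies the Galerkin condition: the full residual $A\hat{u} - \mu b_0$ is orthogonal to the basis vector, which is the defining property of the projection — independent of how coarse the one-dimensional space is.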

7 Output-oriented error estimation

It is crucial to have a sharp, rigorous and inexpensive a posteriori error bound [34], which enables reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all simulations are done in a finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than that in the operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response $y(u^{\mathcal{N}})$ is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error indicator $\eta_N(\mu_{\max})$ should be an estimate of the error of the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2 \rangle = z_1^T z_2$, $\forall z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$. The induced norm $\|\cdot\|$ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by using the finite element method, the solution to the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm will be the correspondingly induced norm.

7.1 Output error estimation for the reduced order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall that in Section 3, $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ are linear; the evolution scheme (7) can be rewritten as follows in the vector space:

\[
A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\left(u^n(\mu); \mu\right), \tag{15}
\]

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ are constant matrices, and $g(u^n(\mu); \mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0, \ldots, K-1$.

Given a parameter $\mu \in \mathcal{P}$, let $\hat{u}^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and $\hat{g}^n(\mu) = I_M[g(\hat{u}^n(\mu))] = W \beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N} \times N}$ and $W \in \mathbb{R}^{\mathcal{N} \times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^{N}$, $\beta^n(\mu) \in \mathbb{R}^{M}$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\hat{u}^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$, and write $u^n$, $\hat{u}^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual:

\[
r^{n+1}(\mu) = B^{(n)} \hat{u}^n + I_M[g(\hat{u}^n)] - A^{(n)} \hat{u}^{n+1}. \tag{16}
\]

With simple computation, we get the norm of the residual:

\[
\begin{aligned}
\left\|r^{n+1}(\mu)\right\|^2 = \left\langle r^{n+1}(\mu),\, r^{n+1}(\mu)\right\rangle
&= (a^n)^T V^T (B^{(n)})^T B^{(n)} V a^n + (\beta^n)^T W^T W \beta^n \\
&\quad + (a^{n+1})^T V^T (A^{(n)})^T A^{(n)} V a^{n+1} + 2\,(\beta^n)^T W^T B^{(n)} V a^n \\
&\quad - 2\,(a^n)^T V^T (B^{(n)})^T A^{(n)} V a^{n+1} - 2\,(\beta^n)^T W^T A^{(n)} V a^{n+1}.
\end{aligned} \tag{17}
\]
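The value of (17) is that every matrix product in it is parameter-independent and can be assembled offline; online, only small contractions with $a^n$, $a^{n+1}$ and $\beta^n$ remain. A minimal numeric check of this identity (toy 2×2 matrices and single basis vectors, so all reduced quantities are scalars; not the chromatographic data):

```python
# Toy stand-ins: N = 2, one RB vector v (column of V), one collateral vector w.
A = [[2.0, 0.3], [0.1, 1.5]]
B = [[1.0, 0.2], [0.0, 0.8]]
v = [0.6, 0.8]
w = [1.0, -1.0]

def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

Bv, Av = mat_vec(B, v), mat_vec(A, v)

# "Offline": the parameter-independent Gram quantities appearing in (17).
g_BB, g_ww, g_AA = dot(Bv, Bv), dot(w, w), dot(Av, Av)
g_wB, g_BA, g_wA = dot(w, Bv), dot(Bv, Av), dot(w, Av)

def res_norm_sq_online(a_n, a_np1, beta):
    """Cheap online evaluation of ||r||^2 via the precomputed Gram terms."""
    return (a_n * g_BB * a_n + beta * g_ww * beta + a_np1 * g_AA * a_np1
            + 2.0 * beta * g_wB * a_n - 2.0 * a_n * g_BA * a_np1
            - 2.0 * beta * g_wA * a_np1)

def res_norm_sq_direct(a_n, a_np1, beta):
    """Full-dimension reference: assemble r = B v a_n + w beta - A v a_np1."""
    r = [Bv[i] * a_n + w[i] * beta - Av[i] * a_np1 for i in range(2)]
    return dot(r, r)
```

Both routes agree to machine precision; only the first one is independent of the full dimension.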

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

\[ \|g(x) - g(y)\| \le L_g\, \|x - y\|, \quad x, y \in \mathcal{W}^{\mathcal{N}}, \]

and the interpolation of $g$ is "exact" with a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

\[ I_{M+M'}[g(\hat{u}^n)] = \sum_{m=1}^{M+M'} W_m\, \beta_m^n = g(\hat{u}^n). \]

Assume again that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \hat{u}^n$ satisfies

\[
\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \left\|(A^{(k)})^{-1}\right\| \left(\prod_{j=k+1}^{n-1} G^{(j)}\right) \left(\varepsilon_{EI}^k(\mu) + \left\|r^{k+1}(\mu)\right\|\right), \tag{18}
\]

where $G^{(j)} = \|(A^{(j)})^{-1}\| \left(\|B^{(j)}\| + L_g\right)$, and $\varepsilon_{EI}^n(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

\[
\|e^n(\mu)\| \le \eta_{N,M}^n(\mu) := \sum_{k=0}^{n-1} \left(\prod_{j=k+1}^{n-1} G_F^{(j)}\right) \left(\left\|(A^{(k)})^{-1}\right\| \varepsilon_{EI}^k(\mu) + \left\|(A^{(k)})^{-1} r^{k+1}(\mu)\right\|\right), \tag{19}
\]

where $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\|$, $n = 0, \ldots, K-1$.

Proof. By forming the difference of (15) and (16), we have the error equation

\[
\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat{u}^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \left(g(u^n) - g(\hat{u}^n)\right) + \left(g(\hat{u}^n) - I_M[g(\hat{u}^n)]\right) + r^{n+1}(\mu).
\end{aligned} \tag{20}
\]

Multiplying by $(A^{(n)})^{-1}$ on both sides of (20), we obtain

\[
\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left(g(u^n) - g(\hat{u}^n)\right) \\
&\quad + (A^{(n)})^{-1} \left(g(\hat{u}^n) - I_M[g(\hat{u}^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \tag{21}
\]

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat{u}^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the property of the matrix norm, we have

\[
\left\|e^{n+1}(\mu)\right\| \le \left\|(A^{(n)})^{-1}\right\| \left(\left(\left\|B^{(n)}\right\| + L_g\right) \|e^n(\mu)\| + \varepsilon_{EI}^n(\mu) + \left\|r^{n+1}(\mu)\right\|\right), \tag{22}
\]

where $\varepsilon_{EI}^n(\mu) = \|g(\hat{u}^n) - I_M[g(\hat{u}^n)]\| = \left\|\sum_{m=M+1}^{M+M'} W_m\, \beta_m^n\right\|$. Resolving the recursion (22) with initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we re-observe the equation in (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

\[
\left\|e^{n+1}(\mu)\right\| \le \left(\left\|(A^{(n)})^{-1} B^{(n)}\right\| + L_g \left\|(A^{(n)})^{-1}\right\|\right) \|e^n(\mu)\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \tag{23}
\]

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\|\, \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\|\, \|r^{n+1}\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19). □

Remark 7.2. In many cases, the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case, the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when $G^{(j)} = \|(A^{(j)})^{-1}\| \left(\|B^{(j)}\| + L_g\right) > 1$ in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if $G_F^{(j)} = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$,

whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all the parameter-independent products in (17), e.g., $V^T (B^{(n)})^T B^{(n)} V$, $W^T W$ and $W^T B^{(n)} V$, can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. The same holds for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap, due to its independence of $\mathcal{N}$. In addition, as is shown in [11], a small $M'$ gives good results; in practice, we use $M' = 1$ in the later simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

\[ \|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}. \]

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some output. In such cases, it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the output error estimate below.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume the output of interest $y(u^n(\mu))$ can be expressed in the following form:

\[ y(u^n(\mu)) = P u^n, \tag{24} \]

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix. Then the output error $e_O^n(\mu) = P u^n - P \hat{u}^n$ satisfies

\[
\left\|e_O^{n+1}(\mu)\right\| \le \bar{\eta}_{N,M}^{n+1} := G_O^{(n)}\, \eta_{N,M}^n + \left\|P (A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|, \tag{25}
\]

where $G_O^{(n)} = \|P (A^{(n)})^{-1} B^{(n)}\| + L_g \|P (A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying by $P$ from the left on both sides of the error equation (21), we get

\[
P e^{n+1}(\mu) = P \left((A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \left(g(u^n) - g(\hat{u}^n)\right) + (A^{(n)})^{-1} \left(g(\hat{u}^n) - I_M[g(\hat{u}^n)]\right) + (A^{(n)})^{-1} r^{n+1}(\mu)\right).
\]

Applying the Lipschitz condition of $g$, and using the triangle inequality as well as the property of the matrix norm, we have

\[
\left\|e_O^{n+1}(\mu)\right\| = \left\|P e^{n+1}(\mu)\right\| \le G_O^{(n)} \|e^n(\mu)\| + \left\|P (A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|. \tag{26}
\]

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

\[
\begin{aligned}
\left\|e_O^{n+1}(\mu)\right\| = \left\|P e^{n+1}(\mu)\right\| &\le \|P\| \left\|e^{n+1}(\mu)\right\| \\
&\le \|P\| \left(G_F^{(n)} \|e^n(\mu)\| + \left\|(A^{(n)})^{-1}\right\| \varepsilon_{EI}^n(\mu) + \left\|(A^{(n)})^{-1} r^{n+1}(\mu)\right\|\right). 
\end{aligned} \tag{27}
\]

The last inequality is true due to the inequality (23). It is obvious that the bound for $\|e_O^{n+1}(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; therefore, it is desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12), following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the scheme for $c_z$ (see (12)):

\[
A c_z^{n+1} = B c_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n. \tag{28}
\]

The residual caused by the approximate solution $\hat{c}_z^n$ in (13) is

\[
r_{c_z}^{n+1}(\mu) = B \hat{c}_z^n + d_z^n - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat{c}_z^n)] - A \hat{c}_z^{n+1}. \tag{29}
\]


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28), (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition, which does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume there exists a positive constant $L_h$ such that

\[
\left\|h_z(c_a^n, c_b^n, q_z^n) - h_z(\hat{c}_a^n, \hat{c}_b^n, \hat{q}_z^n)\right\| \le L_h \left\|c_z^n - \hat{c}_z^n\right\|, \quad n = 0, \ldots, K. \tag{30}
\]

Assuming the initial projection error vanishes, $e_{c_z}^0(\mu) = 0$, we have a similar estimation for the approximation error $e_{c_z}^n(\mu) = c_z^n - \hat{c}_z^n$ ($n = 1, \ldots, K$) as follows:

\[
\left\|e_{c_z}^n(\mu)\right\| \le \sum_{k=0}^{n-1} \left\|A^{-1}\right\|^{n-k} C^{\,n-1-k} \left(\tau\, \varepsilon_{EI}^k(\mu) + \left\|r_{c_z}^{k+1}(\mu)\right\|\right), \tag{31}
\]

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

\[
\left\|e_{c_z}^n(\mu)\right\| \le \eta_{N,M,c_z}^n(\mu) := \sum_{k=0}^{n-1} \left(G_{F,c}\right)^{n-1-k} \left(\tau \left\|A^{-1}\right\| \varepsilon_{EI}^k(\mu) + \left\|A^{-1} r_{c_z}^{k+1}(\mu)\right\|\right), \tag{32}
\]

where $G_{F,c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest $e_{c_z,O}^n(\mu) = P c_z^n - P \hat{c}_z^n$ can be obtained based on the error bound of the field variable. Similar to (25), we have

\[
\left\|e_{c_z,O}^{n+1}(\mu)\right\| \le \bar{\eta}_{N,M,c_z}^{n+1}(\mu) := G_{O,c}\, \eta_{N,M,c_z}^n(\mu) + \tau \left\|P A^{-1}\right\| \varepsilon_{EI}^n(\mu) + \|P\| \left\|A^{-1} r_{c_z}^{n+1}(\mu)\right\|, \tag{33}
\]

where $G_{O,c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1 \times \mathcal{N}}$ in this model, which means the norm of the output error $e_{c_z,O}^{n+1}(\mu)$ is the absolute value of the last entry of the error vector $e_{c_z}^{n+1}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound, and therefore will not be presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta_{N,M,c_z}^n(\mu)$ for the field variable $c_z$ alone; if the output error estimation were derived by considering all the field variables together, the corresponding bound for the vector $U$ (denoted $\eta_{N,M,U}^n(\mu)$) would enter instead. Obviously, the error bound $\eta_{N,M,U}^n(\mu)$ is much rougher than the bound $\eta_{N,M,c_z}^n(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large and the weight $\tau L_h$ is still small, because the time step $\Delta t$ is typically very small.

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates over time. Since $\eta_{N,M}^n(\mu)$ (or $\eta_{N,M,c_z}^n(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that the error estimate, e.g., in (18), may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to enable an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. If the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume there is no need to include a new basis vector and the RB extension can be stopped; otherwise, the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: $\mathcal{P}_{\mathrm{train}}$, $\mu^0$, $tol_{RB}$ ($< 1$), $tol_{\mathrm{decay}}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4
2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
3:   Implement Steps 3−6 in Algorithm 4
4:   Compute the decay rate of the error bound $d_\eta = \frac{\eta_{N-1}(\mu_{\max}^{\mathrm{old}}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu_{\max}^{\mathrm{old}})}$
5:   if $d_\eta < tol_{\mathrm{decay}}$ then
6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$
7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
8:       Stop
9:     end if
10:  end if
11: end while
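In code, the decision logic of Algorithm 5 amounts to a few comparisons per greedy iteration. The sketch below is a hypothetical helper (the function name and the returned messages are ours, not from the paper); the expensive pieces — the error bound and the true output error — are passed in as numbers:

```python
def assess_extension(eta_old, eta_new, true_output_error, tol_rb, tol_decay):
    """Early-stop test of Algorithm 5: stop enriching the basis when the error
    bound stagnates AND the true output error is already below tolerance."""
    decay = (eta_old - eta_new) / eta_old            # d_eta in Step 4
    if eta_new <= tol_rb:
        return "stop: bound below tolerance"
    if decay < tol_decay and true_output_error < tol_rb:
        return "stop: bound stagnates, true error small"
    return "continue"
```

Note that the check of the true output error is performed only when the bound stagnates, so the extra cost of a full-order solve is incurred rarely.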


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance $tol_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be triggered.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{\mathrm{in}})$ is chosen optimally in a reasonable parameter domain to maximize the production rate $\mathrm{Pr}(\mu) = \frac{s(\mu)\, Q}{t_{\mathrm{cyc}}}$, while respecting the requirement on the recovery yield $\mathrm{Rec}(\mu) = \frac{s(\mu)}{t_{\mathrm{in}}\left(c_a^f + c_b^f\right)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\, dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\, dt$, where $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

\[
\begin{aligned}
\min_{\mu \in \mathcal{P}}\ & -\mathrm{Pr}(\mu) \\
\text{s.t.}\ & \mathrm{Rec}_{\min} - \mathrm{Rec}(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& c_z(\mu),\ q_z(\mu)\ \text{are the solutions to the system (3)--(5)},\ z = a, b.
\end{aligned} \tag{34}
\]
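Problem (34) is a bound-constrained optimization with one inequality constraint; any derivative-free NLP solver (the authors cite NLopt [27]) can be wrapped around the ROM evaluation of $\mathrm{Pr}$ and $\mathrm{Rec}$. As a purely hypothetical illustration of the structure — with smooth closed-form stand-ins replacing the PDE solves — a naive feasible grid search over $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$ looks like this:

```python
# Stand-in objective/constraint (NOT the chromatographic model): toy smooth
# functions of the operating variables mu = (Q, t_in).
def pr(Q, t_in):
    return Q * t_in / (1.0 + t_in)        # hypothetical "production rate"

def rec(Q, t_in):
    return 1.0 - 2.0 * Q * t_in           # hypothetical "recovery yield"

REC_MIN = 0.8

def grid_search(n=40):
    """Maximize pr over the box, rejecting points violating Rec >= Rec_min."""
    best, best_mu = None, None
    for i in range(n + 1):
        Q = 0.0667 + (0.1667 - 0.0667) * i / n
        for j in range(n + 1):
            t_in = 0.5 + (2.0 - 0.5) * j / n
            if rec(Q, t_in) < REC_MIN:    # infeasible: Rec_min - Rec > 0
                continue
            val = pr(Q, t_in)
            if best is None or val > best:
                best, best_mu = val, (Q, t_in)
    return best, best_mu

best, mu_opt = grid_search()
```

With a ROM in the loop, each evaluation of `pr`/`rec` would be a cheap reduced solve of (14), which is exactly what makes such repeated evaluations affordable.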

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes difficulties in the error estimation and the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}_{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$, $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $\mathcal{N}$ in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model

Column dimensions [cm]                               2.6 × 10.5
Column porosity ε [-]                                0.4
Peclet number Pe [-]                                 2000
Mass-transfer coefficients κ_z, z = a, b [1/s]       0.1
Feed concentrations c_z^f, z = a, b [g/l]            2.9


Table 2: Coefficients of the adsorption isotherm equation

H_{a1} [-]    2.69      H_{b1} [-]    3.73
H_{a2} [-]    0.1       H_{b2} [-]    0.3
K_{a1} [l/g]  0.0336    K_{b1} [l/g]  0.0446
K_{a2} [l/g]  1.0       K_{b2} [l/g]  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the performance of the output error estimation for the generation of the RB, and finally present the results of the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtime for the generation of the RB and CRB with different threshold values $tol_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{\mathrm{in}})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^{-7}) with different thresholds. M′ = 1 is the number of basis vectors used for error estimation.

         tol_ASS         Res(ξ^a_{M+M′}), Res(ξ^b_{M+M′})    M (W_a, W_b)   Runtime [h]
no ASS   –               9.2 × 10^{-8}, 8.5 × 10^{-8}        (146, 152)     6.25 (–)
ASS      1.0 × 10^{-4}   9.6 × 10^{-8}, 8.1 × 10^{-8}        (147, 152)     0.605 (−90.3%)
ASS      1.0 × 10^{-3}   8.7 × 10^{-8}, 9.9 × 10^{-8}        (147, 152)     0.362 (−94.2%)
ASS      1.0 × 10^{-2}   9.4 × 10^{-8}, 6.2 × 10^{-8}        (144, 150)     0.270 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by usingthe POD-Greedy algorithm with and without ASS Note that the CRB is precomputed


with tol_ASS = 1.0 × 10^{-4} for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^{-7}, tol_RB = 1.0 × 10^{-6}, tol_ASS = 1.0 × 10^{-4}. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithms         Runtime [h]
POD-Greedy         16.22 ¹
ASS-POD-Greedy     7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a Workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(μ) = P c^n_z(μ), n = 0, . . . , K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator η_N(μ) in the greedy algorithm (e.g., Algorithm 4, 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η^{n+1}_{N,M,c_z}(μ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1} for a given parameter μ ∈ P. We use the following error bound in Algorithm 4:

η_N(μ_max) = max_{μ ∈ P_train} max_{z ∈ {a,b}} \bar{η}_{N,M,c_z}(μ),

where \bar{η}_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(μ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as

e^max_N = max_{μ ∈ P_train} e_N(μ),  where  e_N(μ) = max_{z ∈ {a,b}} e_{N,c_z}(μ),  e_{N,c_z}(μ) = (1/K) Σ_{n=1}^{K} ∥c^n_{z,O}(μ) − ĉ^n_{z,O}(μ)∥,

and ĉ^n_{z,O}(μ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is already very small. To circumvent the problem, Algorithm 5 is implemented to get an early-stop.
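The time-averaged true output error e_{N,c_z}(μ) defined above is cheap to evaluate once the FOM and ROM output trajectories are stored; a minimal sketch (the array layout, one row per time step, is an assumption):

```python
import numpy as np

def time_averaged_output_error(c_full, c_rom):
    """Sketch of e_{N,c_z}(mu) = (1/K) * sum_{n=1}^{K} ||c^n_{z,O} - c_rom^n_{z,O}||
    for one component z; c_full and c_rom are (K, N_O) arrays holding the
    outputs of the FOM and of the ROM at the K time instances."""
    return float(np.mean(np.linalg.norm(c_full - c_rom, axis=1)))
```

The same helper can be applied per component z = a, b and then maximized over the validation parameters to obtain e_N(μ).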

Figure 3 shows the results for Algorithm 5, where the tolerance for the early-stop criterion is 0.03. Using the early-stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make


the true output error very small, while the output error bound begins to stagnate, so that the early-stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of each circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations over a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{μ ∈ P_val} e_N(μ). It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter μ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure: semi-logarithmic plot of the maximal error over P_train versus the size N of the RB, showing the field variable error bound, the output error bound, and the true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound η_N(μ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{μ ∈ P_train} max_{z ∈ {a,b}} \bar{η}_{N,M,c_z}(μ), where \bar{η}_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(μ).


[Figure: semi-logarithmic plot of the maximal error over P_train versus the size N of the RB, showing the output error bound and the true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique, Algorithm 5, and the corresponding maximal true output error.

[Figure: scatter plot of the selected parameters in the (t_in, Q) plane, with the injection period t_in on the horizontal axis and the feed flow rate Q on the vertical axis.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^{-7}, tol_ASS = 1.0 × 10^{-4}, tol_RB = 1.0 × 10^{-6}.

Simulations             Max error        Average runtime [s]   SpF
FOM (N = 1500)          –                312.13                (–)
ROM (POD-Greedy)        3.79 × 10^{-7}   6.3                   50
ROM (ASS-POD-Greedy)    4.58 × 10^{-7}   6.3                   50

[Figure: dimensionless concentration at the column outlet versus dimensionless time, showing the curves c_a (FOM), c_b (FOM), c_a (ROM) and c_b (ROM).]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter μ = (Q, t_in) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

min_{μ ∈ P}  −\bar{Pr}(μ)
s.t.  Rec_min − \bar{Rec}(μ) ≤ 0,
      ĉ^n_z(μ), q̂^n_z(μ) are the RB approximations from the ROM (14), z = a, b.

Here \bar{Pr}(μ) and \bar{Rec}(μ) are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production and the recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let μ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, . . .. When ∥μ_{k+1} − μ_k∥ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^{-4}. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the optimization problem is significantly reduced: the speed-up factor (SpF) is 54.
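The parameter-based stopping rule ∥μ_{k+1} − μ_k∥ < tol_opt can be illustrated with a toy derivative-free search. The sketch below is only a stand-in (NLopt's GN_DIRECT_L is a different, global algorithm); it performs a simple coordinate (pattern) search and stops once neither a move nor the refined step exceeds tol_opt. The objective used in the usage example is made up.

```python
import math

def pattern_search(f, mu0, step, tol_opt=1e-4):
    """Toy derivative-free minimizer illustrating the stopping criterion
    ||mu_{k+1} - mu_k|| < tol_opt of Section 8.3. At each iteration the best
    of the incumbent and its single-coordinate neighbors (+/- step) is taken;
    when no neighbor improves, the step is halved around the incumbent."""
    mu = list(mu0)
    while True:
        candidates = [mu]
        for i in range(len(mu)):
            for s in (-step, step):
                c = list(mu)
                c[i] += s
                candidates.append(c)
        mu_new = min(candidates, key=f)
        if mu_new == mu:
            step *= 0.5          # no improvement: refine around the incumbent
            mu_new = mu
        if math.dist(mu_new, mu) < tol_opt and step < tol_opt:
            return mu_new        # parameter update and step both below tol_opt
        mu = mu_new
```

For a smooth separable objective the search walks to the minimizer and then halves the step until the stopping rule fires, mimicking the role of tol_opt in the ROM-based optimization loop.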

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is implemented for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the runtime of constructing and using a surrogate ROM, and that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM.

Simulations      Objective (Pr)   Opt. solution (μ)     N_it ¹   Runtime [h]   SpF
FOM-based Opt.   0.020264         (0.07964, 1.05445)    202      33.88         –
ROM-based Opt.   0.020266         (0.07964, 1.05445)    202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9-44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667-672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489-2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270-3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937-969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395-422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268-280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501-1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84-110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401-1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339-344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859-873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423-442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471-478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277-302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145-161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646-670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157-185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145-161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1-10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43-65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70-80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.



Algorithm 3 Adaptive snapshot selection (ASS)
Input: Initial vector u^0(μ), tol_ASS
Output: Selected snapshot matrix S_A = [u^{n_1}(μ), u^{n_2}(μ), . . . , u^{n_ℓ}(μ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(μ)]
2: for n = 1, . . . , K do
3:   Compute the vector u^n(μ)
4:   if Ind(u^n(μ), u^{n_j}(μ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(μ)]
8:   end if
9: end for
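Algorithm 3 can be sketched in a few lines. The relative-distance indicator used below is only a stand-in for the indicator Ind(·, ·) of the paper; any other choice can be passed in:

```python
import numpy as np

def adaptive_snapshot_selection(trajectory, tol_ass, indicator=None):
    """Sketch of Algorithm 3: keep a snapshot u^n only when it differs
    sufficiently from the most recently selected snapshot u^{n_j}.
    `indicator` plays the role of Ind(., .); a relative 2-norm distance is
    assumed here as a placeholder."""
    if indicator is None:
        indicator = lambda u, u_ref: (
            np.linalg.norm(u - u_ref) / max(np.linalg.norm(u_ref), 1e-14))
    selected = [trajectory[0]]            # j = 1, n_j = 0, S_A = [u^{n_j}]
    for u_n in trajectory[1:]:            # n = 1, ..., K
        if indicator(u_n, selected[-1]) > tol_ass:
            selected.append(u_n)          # n_j = n, S_A = [S_A, u^{n_j}]
    return np.column_stack(selected)
```

A slowly varying trajectory is thus compressed to a few representative columns, which is exactly the redundancy removal quantified in Section 8.1.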

examples studied in Section 8, based on our observation.

The ASS technique can be easily combined with the aforementioned algorithms for the generation of the RB and CRB. For example, Algorithm 4 shows the combination with the POD-Greedy algorithm (Algorithm 1). There is only one additional step in comparison with the original Algorithm 1.

Algorithm 4 RB generation using ASS-POD-Greedy
Input: P_train, μ^0, tol_RB (< 1)
Output: RB V = [V_1, . . . , V_N]
1: Initialization: N = 0, V = [ ], μ_max = μ^0, η(μ_max) = 1
2: while η_N(μ_max) > tol_RB do
3:   Compute the trajectory S_max = {u^n(μ_max)}_{n=0}^{K} and adaptively select snapshots using Algorithm 3, obtaining S_A^max = {u^{n_1}(μ_max), . . . , u^{n_ℓ}(μ_max)}
4:   Enrich the RB, e.g., V = [V, V_{N+1}], where V_{N+1} is the first POD mode of the matrix U_A = [ū^{n_1}, . . . , ū^{n_ℓ}], with ū^{n_s} = u^{n_s}(μ_max) − Π_{W_N}[u^{n_s}(μ_max)], s = 1, . . . , ℓ, ℓ ≤ K; Π_{W_N}[u] is the projection of u onto the current space W_N = span{V_1, . . . , V_N}
5:   N = N + 1
6:   Find μ_max = arg max_{μ ∈ P_train} η_N(μ)
7: end while

11

we show the derivation of the FOM based on the FV discretization for the batchchromatographic model (3)minus(5) and the efficient generation of the ROM

61 Full-order model based on FV discretizationAs is mentioned in Section 1 we use the FV discretization for the batch chromato-graphic model (3)minus(5) More specifically we use the Lax-Friedrichs flux [28] for theconvection contribution central difference approximation for the diffusion terms andthe Crank-Nicolson scheme for the time discretization The full discrete FV formula-tion for the system (3)minus(4) can be written as followsAcn+1

z = Bcnz + dnz minus1minus εε

∆thnz

qn+1z = qnz + ∆thnz

(12)

where cnz = cnz (micro) = (czn1 cznN )T qnz = qnz (micro) = (qzn1 qznN )T isin RN z = a bindicates the solutions of the field variables cz and qz at time instance t = tn (n =0 K) A and B are tridiagonal constant matrices dnz and hnz are parameter- andtime-dependent

dnz = dn0 e1 hnz = (hzn1 hznN )T

with dn0 = ∆xPe(λ2 + ν

)χ[0tin](tn) λ = ∆t

∆x ν = ∆tPe∆x2 e1 = (1 0 0)T isin RN

and hznj = hz(ca

nj cb

nj qz

nj ) = L

Q(εAc)κz(fz(ca

nj cb

nj )minus qznj

) j = 1 N

62 Reduced-order modelLet N isin N+ be the number of the RB vectors for cz and qz and M isin N+ be the numberof the CRB vectors for the operators ha and hb Here for simplicity of analysis we usethe same dimensionN of the RB for ca cb qa and qb but one can certainly take differentdimensions for the RB This also applies to ha and hb Assume that Wz isin RNtimesM isthe CRB for the nonlinear operator hz and Vcz

Vqzisin RNtimesN

(V Tcz

Vcz= I V Tqz

Vqz= I)

are the RB for the field variables cz and qz respectively ie

hnz asympWzβnz cnz asymp cnz = Vcz

ancz qnz asymp qnz = Vqz

anqz n = 0 K (13)

Applying Galerkin projection and empirical operator interpolation we formulate theROM for the FOM (12) as follows Acz

an+1cz

= Bczancz

+ dn0 dczminus 1minus ε

ε∆tHcz

βnz

an+1qz

= anqz+ ∆tHqz

βnz (14)

where ancz= ancz

(micro) = (aczn1 acz

nN )T anqz

= anqz(micro) =

(aqz

n1 aqz

nN

)T isin RN

are the reduced state vector of the ROM Acz= V Tcz

AVcz Bcz

= V TczBVcz

dcz= V Tcz

e1Hcz

= V TczWz Hqz

= V TqzWz are the reduced matrices

12

Note that βnz = βnz (micro) = (βzn1 βznM )T isin RM are the vectors of coefficients

for the empirical interpolation of the nonlinear operator hnz and are parameter- andtime-dependent The evaluation of βnz is essentially the same as the computation ofthe coefficients σi(micro) in (10) in Algorithm 2 More specifically it can be obtained bysolving the following system of equations

Msumi=1

βzniWzi(xj) = hz

nj j = 1 M

Here the evaluation of hznj only needs the j-th entries (canj cb

nj and qznj ) of the solution

vectors (can cb

n and qzn) ie hznj = hz(ca

nj cb

nj qz

nj ) For the general operator

empirical interpolation the value of the operator at the interpolation point (eg xj)may depend on more entries of the solution vectors (eg the j-th entries and theirneighbors) For more details refer to [11 25]

63 Offline-online decompositionThe efficiency of the RB approximation is ensured by a strategy of suitable offline-online decomposition which decouples the generation and projection of the RB ap-proximation Computation entails a possibly expensive offline phase performed onlyonce and a cheap online phase for any chosen parameter in the parameter domainDuring the offline stage the RB the CRB reduced matrices and all N -dependentterms are computed and stored in the online process for any given parameter micro allparameter-dependent coefficients and the RB approximation are evaluated rapidly

More precisely in the offline process given training sampling sets Pcrbtrain and Ptrain

(they can be chosen differently) Algorithm 2 is implemented to generate the CRBWz for the nonlinear operator hz Then Algorithm 4 is used to generate the reducedbases Vcz and Vqz Consequently all N -dependent terms are precomputed and as-sembled to construct the reduced matrices (eg Acz Bcz dcz Hcz and Hqz ) and theN -independent ROM can be formulated as in (14) For a newly given parametermicro isin P the low dimensional model (14) is solved online and the solution to the FOM(12) can be recovered by (13)

7 Output-oriented error estimationIt is crucial to get a sharp rigorous and inexpensive posteriori error bound [34] whichenables reliable and low-cost construction of the RB In the past years many effortshave been made for different problems eg [11 18 24 25 35 36] One commontechnique for the derivation of the error estimator is based on the residual In [11 25]the authors provided an error estimation for the field variable in functional space forevolution equations Since all the simulations are done in the finite dimensional vectorspace in practice in this work we derive an error estimation for the field variabledirectly in vector space which is sharper than it is in the operator form Moreoverwe derive an output-oriented error bound based on the error estimation for the field

13

variable For many applications the output response y(uN ) is of interest Henceduring the process of the greedy algorithm eg Algorithm 1 or Algorithm 4 the errorestimation ηN (micromax) should be the error estimation for the output response which isexpected to be more accurate and reasonable

In what follows the inner product is defined as 〈z1 z2〉 = zT1 z2 forallz1 z2 isin RN The induced norm middot is the standard 2-norm in the Euclidean space However ifthe discrete system of equations is obtained by using the finite element method thesolution to the discrete system is actually the coefficient vector corresponding to thebasis vectors of the solution space In such a case the inner product should be definedproperly with the mass matrix of the solution space and the norm will be the inducednorm correspondingly

71 Output error estimation for the reduced order modelFor a parametrized evolution equation (6) we derive an output error bound in thevector space for the ROM (9) Recall that in Section 3 LI(tn) and LE(tn) are linearthe evolution scheme (7) can be rewritten as follows in the vector space

A(n)un+1(micro) = B(n)un(micro) + g (un(micro) micro) (15)

where A(n) B(n) isin RNtimesN are constant matrices and g (un(micro) micro) isin RN correspondsto the nonlinear term Note that A(n) and B(n) are nonsingular for a stable scheme inpractice n = 0 K minus 1

Given a parameter micro isin P let un(micro) = V an(micro) be the RB approximation of un(micro)and gn(micro) = IM [g(un(micro)] = Wβn(micro) be the interpolant of the nonlinear termwhere V isin RNtimesN W isin RNtimesM are the precomputed parameter-independent basesan(micro) isin RN βn(micro) isin RM are parameter-dependent coefficients In the followingfor the sake of simplicity we omit the explicit expression of the dependence on micro inun(micro) un(micro) an(micro) and βn(micro) and use un un an and βn instead The following aposteriori error estimation is based on the residual

rn+1(micro) = B(n)un + IM [g(un)]minusA(n)un+1 (16)

With simple computation we get the norm of the residual∥∥rn+1(micro)∥∥2 =

langrn+1(micro) rn+1(micro)

rang= (an)T V T (B(n))TB(n)V an + (βn)T WTWβn

+(an+1)T V T (A(n))TA(n)V an+1 + 2 (βn)T WTB(n)V an

minus 2(an)TV T (B(n))TA(n)V an+1 minus 2 (βn)T WTA(n)V an+1

(17)

Proposition 71 Assume that the operator g RN rarr RN is Lipschitz continuousie there exists a positive constant Lg such that

g(x)minus g(y) le Lgxminus y x y isin WN

14

and the interpolation of g is ldquoexactrdquo with a certain dimension of W = [W1 WM+M prime ]ie

IM+M prime [g(un)] =M+M primesumm=1

Wm middot βnm = g(un)

Assume again that for all micro isin P the initial projection error is vanishing e0(micro) = 0then the approximation error en(micro) = un minus un satisfies

en(micro) lenminus1sumk=0

∥∥∥(A(k))minus1∥∥∥ nminus1prodj=k+1

G(j)

(εkEI(micro) +∥∥rk+1(micro)

∥∥) (18)

where G(j) =∥∥(A(j))minus1

∥∥ (B(j)+ Lg) εnEI(micro) is the error due to the interpolation A

sharper error bound can be given as

en(micro) le ηnNM (micro)

=nminus1sumk=0

nminus1prodj=k+1

GF(j)

(∥∥∥(A(k))minus1∥∥∥ εkEI(micro) +

∥∥∥(A(k))minus1rk+1(micro)∥∥∥) (19)

where GF(j) =

∥∥(A(j))minus1B(j)∥∥+ Lg

∥∥(A(j))minus1∥∥ n = 0 K minus 1

Proof. By forming the difference of (15) and (16), we have the error equation

A^{(n)} e^{n+1}(μ) = B^{(n)} e^n(μ) + g(u^n) − I_M[g(û^n)] + r^{n+1}(μ)
                  = B^{(n)} e^n(μ) + ( g(u^n) − g(û^n) ) + ( g(û^n) − I_M[g(û^n)] ) + r^{n+1}(μ).     (20)

Multiplying by (A^{(n)})^{−1} on both sides of (20), we obtain

e^{n+1}(μ) = (A^{(n)})^{−1} B^{(n)} e^n(μ) + (A^{(n)})^{−1} ( g(u^n) − g(û^n) ) + (A^{(n)})^{−1} ( g(û^n) − I_M[g(û^n)] ) + (A^{(n)})^{−1} r^{n+1}(μ).   (21)

Applying the Lipschitz condition of g, we have ∥g(u^n) − g(û^n)∥ ≤ L_g ∥e^n(μ)∥. Then, by the triangle inequality and the property of the matrix norm, we have

∥e^{n+1}(μ)∥ ≤ ∥(A^{(n)})^{−1}∥ ( (∥B^{(n)}∥ + L_g) ∥e^n(μ)∥ + ε_EI^n(μ) + ∥r^{n+1}(μ)∥ ),     (22)

where ε_EI^n(μ) = ∥g(û^n) − I_M[g(û^n)]∥ = ∥Σ_{m=M+1}^{M+M′} W_m β_m^n∥. Resolving the recursion (22) with the initial error ∥e^0(μ)∥ = 0 yields the error bound in (18).

To get the error bound in (19), we re-observe the equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for ∥e^{n+1}(μ)∥ is of the following form:

∥e^{n+1}(μ)∥ ≤ ( ∥(A^{(n)})^{−1} B^{(n)}∥ + L_g ∥(A^{(n)})^{−1}∥ ) ∥e^n(μ)∥ + ∥(A^{(n)})^{−1}∥ ε_EI^n(μ) + ∥(A^{(n)})^{−1} r^{n+1}(μ)∥,    (23)

since the following two inequalities are true: ∥(A^{(n)})^{−1} B^{(n)}∥ ≤ ∥(A^{(n)})^{−1}∥ ∥B^{(n)}∥ and ∥(A^{(n)})^{−1} r^{n+1}∥ ≤ ∥(A^{(n)})^{−1}∥ ∥r^{n+1}∥. Resolving the recursion (23) with the initial error ∥e^0(μ)∥ = 0 yields the proposed error bound in (19). □
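Numerically, the bound (19) is most conveniently evaluated by unrolling the recursion (23) rather than forming the sum-product expression directly. A sketch, with the three per-step sequences assumed precomputed:

```python
def error_bound_recursion(GF, invA_eps, invA_res):
    """Unrolled form of (23): eta^{n+1} = GF^{(n)} * eta^n
    + ||(A^(n))^{-1}|| * eps_EI^n + ||(A^(n))^{-1} r^{n+1}||, with eta^0 = 0.
    GF, invA_eps, invA_res are sequences indexed by the time step n."""
    eta = [0.0]
    for g, e, r in zip(GF, invA_eps, invA_res):
        eta.append(g * eta[-1] + e + r)
    return eta
```

Expanding the recursion reproduces exactly the sum-product expression in (19), and the computation is O(K) once the per-step norms are available.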

Remark 7.2. In many cases the operators L^I(t^n) and L^E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^{(n)}, B^{(n)} in (15) are constant matrices; see, e.g., (12). In such a case the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when G^{(j)} = ∥(A^{(j)})^{−1}∥ (∥B^{(j)}∥ + L_g) > 1 in (18). In the vector space this problem can be easily avoided by using (23) instead of (22) if G_F^{(j)} = ∥(A^{(j)})^{−1} B^{(j)}∥ + L_g ∥(A^{(j)})^{−1}∥ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(μ) by using (17). Note that all the parameter-independent matrix products in (17) can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of ∥(A^{(n)})^{−1} r^n(μ)∥ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of 𝒩. In addition, as is shown in [11], a small M′ gives good results in practice; we use M′ = 1 in the later simulations.

Remark 7.5. The 2-norm is applied to the above error bounds, and the 2-norm of a matrix H is its spectral norm. Therefore

∥H^{−1}∥ = σ_max(H^{−1}) = 1/σ_min(H),

so the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some outputs. In such cases, it is desired to estimate the output error so as to construct a ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the output error estimate below.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(μ)) can be expressed in the following form:

y(u^n(μ)) = P u^n,                                   (24)

where P ∈ R^{N_O×𝒩} is a constant matrix. Then the output error e_O^n(μ) = P u^n − P û^n satisfies

∥e_O^{n+1}(μ)∥ ≤ \bar{η}_{N,M}^{n+1} := G_O^{(n)} η_{N,M}^n + ∥P (A^{(n)})^{−1}∥ ε_EI^n(μ) + ∥P∥ ∥(A^{(n)})^{−1} r^{n+1}(μ)∥,     (25)

where G_O^{(n)} = ∥P (A^{(n)})^{−1} B^{(n)}∥ + L_g ∥P (A^{(n)})^{−1}∥, n = 0, . . . , K − 1.


Proof. Multiplying by P from the left on both sides of the error equation (21), we get

P e^{n+1}(μ) = P ( (A^{(n)})^{−1} B^{(n)} e^n(μ) + (A^{(n)})^{−1} ( g(u^n) − g(û^n) ) + (A^{(n)})^{−1} ( g(û^n) − I_M[g(û^n)] ) + (A^{(n)})^{−1} r^{n+1}(μ) ).

Applying the Lipschitz condition of g and using the triangle inequality as well as the property of the matrix norm, we have

∥e_O^{n+1}(μ)∥ = ∥P e^{n+1}(μ)∥ ≤ G_O^{(n)} ∥e^n(μ)∥ + ∥P (A^{(n)})^{−1}∥ ε_EI^n(μ) + ∥P∥ ∥(A^{(n)})^{−1} r^{n+1}(μ)∥.    (26)

Replacing ∥e^n(μ)∥ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once the error estimation for the field variable is obtained, e.g. (19), a trivial error bound for the output (24) can be given as

  $\|e_O^{n+1}(\mu)\| = \|P e^{n+1}(\mu)\| \le \|P\|\,\|e^{n+1}(\mu)\| \le \|P\|\big(G_F(n)\,\|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\,\varepsilon_{EI}^n(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|\big).$   (27)

The last inequality holds due to inequality (23). The bound for $\|e_O^{n+1}(\mu)\|$ in (26) is obviously sharper than that in (27), since, e.g., $\|P (A^{(n)})^{-1}\| \le \|P\|\,\|(A^{(n)})^{-1}\|$. As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).
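The sharpness claim can be checked numerically: by submultiplicativity of the 2-norm, $\|P (A^{(n)})^{-1}\| \le \|P\|\,\|(A^{(n)})^{-1}\|$ always holds. A small NumPy sketch with randomly generated stand-ins for $P$ and $A^{(n)}$ (purely illustrative data, not the matrices of this paper):

```python
import numpy as np

# Illustrative check that the constant in (26) never exceeds the one in the
# trivial bound (27); P and A below are random stand-ins.
rng = np.random.default_rng(0)
for _ in range(100):
    A = 10.0 * np.eye(8) + rng.standard_normal((8, 8))  # well-conditioned
    P = rng.standard_normal((1, 8))                     # output matrix
    sharp = np.linalg.norm(P @ np.linalg.inv(A), 2)     # constant in (26)
    trivial = np.linalg.norm(P, 2) * np.linalg.norm(np.linalg.inv(A), 2)
    assert sharp <= trivial + 1e-12
```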

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all of them.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the detailed simulation for $c_z$ (see (12)):

  $A c_z^{n+1} = B c_z^n + d_z^n - \dfrac{1-\varepsilon}{\varepsilon}\,\Delta t\, h_z^n.$   (28)

The residual caused by the approximate solution $\hat{c}_z^n$ in (13) is

  $r_{c_z}^{n+1}(\mu) = B \hat{c}_z^n + d_z^n - \dfrac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat{c}_z^n)] - A \hat{c}_z^{n+1}.$   (29)


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This means the following error bounds in (32) and (33) are relatively simple and cheap to compute.
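Because $A$ and $B$ are constant, the norm constants entering the bounds, e.g. $\|A^{-1}\|$ and $\|A^{-1}B\|$, can be precomputed once and reused for every time step and every parameter. A minimal NumPy sketch with illustrative matrices (not the actual chromatography discretization), using the spectral-norm identity from Remark 7.5:

```python
import numpy as np

# Illustrative one-time precomputation of norm constants for the bounds;
# A and B are random stand-ins for the constant FV matrices.
rng = np.random.default_rng(1)
A = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
B = 0.5 * np.eye(6)

Ainv = np.linalg.inv(A)                      # computed once, offline
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
norm_Ainv = 1.0 / sigma[-1]                  # ||A^{-1}||_2 = 1 / sigma_min(A)
norm_AinvB = np.linalg.norm(Ainv @ B, 2)     # ||A^{-1} B||_2
```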

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d_z^n$ in (28) comes from the Neumann boundary condition and does not depend on the solution $c_z^n$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c_a^n$, $c_b^n$ and $q_z^n$, we assume there exists a positive constant $L_h$ such that

  $\|h_z(c_a^n, c_b^n, q_z^n) - h_z(\hat{c}_a^n, \hat{c}_b^n, \hat{q}_z^n)\| \le L_h \|c_z^n - \hat{c}_z^n\|, \quad n = 0, \ldots, K.$   (30)

Assuming the initial projection error vanishes, $e_{c_z}^0(\mu) = 0$, we have a similar estimation for the approximation error $e_{c_z}^n(\mu) = c_z^n - \hat{c}_z^n$ $(n = 1, \ldots, K)$ as follows:

  $\|e_{c_z}^n(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \big(\tau\, \varepsilon_{EI}^k(\mu) + \|r_{c_z}^{k+1}(\mu)\|\big),$   (31)

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

  $\|e_{c_z}^n(\mu)\| \le \eta_{N,M,c_z}^n(\mu) = \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k} \big(\tau\, \|A^{-1}\|\, \varepsilon_{EI}^k(\mu) + \|A^{-1} r_{c_z}^{k+1}(\mu)\|\big),$   (32)

where $G_{F_c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest $e_{c_z,O}^n(\mu) = P c_z^n - P \hat{c}_z^n$ can be obtained based on the error bound of the field variable. Similar to (25), we have

  $\|e_{c_z,O}^{n+1}(\mu)\| \le \eta_{N,M,c_z}^{n+1}(\mu) = G_{O_c}\, \eta_{N,M,c_z}^n(\mu) + \tau\, \|P A^{-1}\|\, \varepsilon_{EI}^n(\mu) + \|P\|\, \|A^{-1} r_{c_z}^{n+1}(\mu)\|,$   (33)

where $G_{O_c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1 \times N}$ in this model, which means the norm of the output error $e_{c_z,O}^{n+1}(\mu)$ is the absolute value of the last entry of the error vector $e_{c_z}^{n+1}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system (12) depends only on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta_{N,M,c_z}^n(\mu)$ for the field variable $c_z$; if the output error estimation were instead derived by considering all the field variables together, it would involve the corresponding error bound (denoted $\eta_{N,M,U}^n(\mu)$) for the vector $U$. Obviously, the error bound $\eta_{N,M,U}^n(\mu)$ is much rougher than the bound $\eta_{N,M,c_z}^n(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large, and the weight $\tau L_h$ is still small, because the time step $\Delta t$ is typically very small.
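The bounds (25), (32) and (33) share the same recursive structure, $\eta^{n+1} = G\,\eta^n + c_1\,\varepsilon_{EI}^n + (\text{residual term})$, so they can be evaluated by a cheap forward recursion. A sketch in Python with placeholder inputs (the EI errors and residual norms below are hypothetical values):

```python
# Forward recursion shared by the error bounds; eps_EI and res_norms are
# hypothetical per-step EI error estimates and residual norms.
def accumulate_bound(G, c1, eps_EI, res_norms):
    eta = [0.0]                          # vanishing initial projection error
    for eps_n, r_next in zip(eps_EI, res_norms):
        eta.append(G * eta[-1] + c1 * eps_n + r_next)
    return eta

eta = accumulate_bound(G=1.01, c1=1e-2,
                       eps_EI=[1e-6] * 1000, res_norms=[1e-7] * 1000)
```

With $G \ge 1$ the bound grows monotonically with $n$ even for tiny per-step contributions; this is the accumulation effect that motivates the early-stop criterion of Section 7.3.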

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta_{N,M}^n(\mu)$ or $\eta_{N,M,c_z}^n(\mu)$) accumulates with time. Since $\eta_{N,M}^n(\mu)$ (respectively $\eta_{N,M,c_z}^n(\mu)$) is involved in the output error bound in (25) (respectively (33)), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that an error estimate such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to allow an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5: RB generation using ASS-POD-Greedy with early stop
Input: $P_{train}$, $\mu^0$, $tol_{RB}$ $(< 1)$, $tol_{decay}$
Output: RB $V = [V_1, \ldots, V_N]$

1: Implement Step 1 in Algorithm 4.
2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
3:   Implement Steps 3-6 in Algorithm 4.
4:   Compute the decay rate of the error bound, $d_\eta = \dfrac{\eta_{N-1}(\mu_{\max}^{old}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu_{\max}^{old})}$.
5:   if $d_\eta < tol_{decay}$ then
6:     Compute the true output error $e_N(\mu_{\max})$ at the selected parameter $\mu_{\max}$.
7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
8:       Stop.
9:     end if
10:  end if
11: end while
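The control flow of Algorithm 5 can be sketched as follows; `extend_basis`, `error_bound` and `true_output_error` are hypothetical callables standing in for Steps 3-6 of Algorithm 4, the output error estimator, and a detailed reference solve:

```python
# Sketch of the early-stop logic in Algorithm 5 (all names illustrative).
def assemble_rb_early_stop(extend_basis, error_bound, true_output_error,
                           tol_rb, tol_decay, max_iter=200):
    eta_old = None
    for _ in range(max_iter):
        mu_max = extend_basis()              # Steps 3-6 of Algorithm 4
        eta = error_bound(mu_max)
        if eta <= tol_rb:
            return "converged"               # regular stopping criterion
        if eta_old is not None:
            decay = (eta_old - eta) / eta_old
            # bound stagnates, but the true output error is already small
            if decay < tol_decay and true_output_error(mu_max) < tol_rb:
                return "early-stop"
        eta_old = eta
    return "max-iter"
```

The expensive detailed solve behind `true_output_error` is triggered only when the bound stagnates, so the validation step adds little cost in the regular case.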


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to allow for such a case, the tolerance $tol_{decay}$ should be set to a very small value, which permits some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be triggered.

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is chosen optimally in a reasonable parameter domain so as to maximize the production rate $Pr(\mu) = \frac{s(\mu) Q}{t_{cyc}}$ while respecting a requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c_a^f + c_b^f)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\,dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ $(z = a, b)$ at the outlet of the column. We consider the following optimization problem of batch chromatography:

  $\min_{\mu \in \mathcal{P}} -Pr(\mu)$
  s.t. $Rec_{\min} - Rec(\mu) \le 0$, $\mu \in \mathcal{P}$,
  $c_z(\mu)$, $q_z(\mu)$ are the solutions to the system (3)-(5), $z = a, b$.   (34)

Notice that when solving the system (3)-(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes considerable difficulties in the error estimation and the generation of the reduced basis.
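For a given $\mu$, evaluating $Pr(\mu)$ and $Rec(\mu)$ from the simulated outlet trajectories amounts to numerical quadrature over the collection windows. A hedged sketch (all names and values are illustrative; the cutting points would come from the purity requirements, and $c_a^f = c_b^f$ as in Table 1):

```python
import numpy as np

# Illustrative evaluation of s(mu), Pr(mu) and Rec(mu) from outlet trajectories.
def trapezoid(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def performance(t, c_aO, c_bO, t1, t2, t3, t4, Q, t_in, t_cyc, c_f):
    ma = (t >= t3) & (t <= t4)           # collection window of component a
    mb = (t >= t1) & (t <= t2)           # collection window of component b
    s = trapezoid(c_aO[ma], t[ma]) + trapezoid(c_bO[mb], t[mb])
    Pr = s * Q / t_cyc                   # production rate
    Rec = s / (t_in * 2.0 * c_f)         # recovery yield, c_a^f = c_b^f = c_f
    return Pr, Rec

t = np.arange(0.0, 11.0)                 # coarse integer time grid (demo only)
ones = np.ones_like(t)
Pr, Rec = performance(t, ones, ones, 1.0, 3.0, 5.0, 7.0,
                      Q=1.0, t_in=1.0, t_cyc=10.0, c_f=2.0)
```

With unit concentrations, $s = (t_4 - t_3) + (t_2 - t_1) = 4$, so $Pr = 0.4$ and $Rec = 1.0$; in practice the fine time grid discussed above is what makes this quadrature, and the location of the cutting points, accurate.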

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $N$ of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model

  Column dimensions [cm]                             2.6 x 10.5
  Column porosity ε [-]                              0.4
  Peclet number Pe [-]                               2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]     0.1
  Feed concentrations c_z^f, z = a, b [g/l]          2.9

Table 2: Coefficients of the adsorption isotherm equation

  H_a1 [-]     2.69      H_b1 [-]     3.73
  H_a2 [-]     0.1       H_b2 [-]     0.3
  K_a1 [l/g]   0.0336    K_b1 [l/g]   0.0446
  K_a2 [l/g]   1.0       K_b2 [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selection (ASS) for the generation of the RB and CRB, then show the output error estimation used for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 (2.83 GHz) and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtime for the generation of the RB and CRB with different threshold values $tol_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{in})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of the CRBs $(W_a, W_b)$ at the same error tolerance $(tol_{CRB} = 1.0 \times 10^{-7})$ with different thresholds. $M' = 1$ is the number of basis vectors used for the error estimation.

           tol_ASS      Res(ξ_a^{M+M'}), Res(ξ_b^{M+M'})   M (W_a, W_b)   Runtime [h]
  no ASS   -            9.2e-8, 8.5e-8                     146, 152       6.25 (-)
  ASS      1.0e-4       9.6e-8, 8.1e-8                     147, 152       0.605 (-90.3%)
  ASS      1.0e-3       8.7e-8, 9.9e-8                     147, 152       0.362 (-94.2%)
  ASS      1.0e-2       9.4e-8, 6.2e-8                     144, 150       0.270 (-95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that for the ASS case the CRB is precomputed with $tol_{ASS} = 1.0 \times 10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{RB} = 1.0 \times 10^{-6}$, $tol_{ASS} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early stop (Algorithm 5), with and without ASS.

  Algorithm          Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (-51.2%)

  ¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c_{z,O}^n(\mu) = P c_z^n(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g. Algorithm 4 or 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta_{N,M,c_z}^{n+1}(\mu)$ in (33) bounds the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu \in P_{train}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta_{N,M,c_z}^n(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e_N^{\max} = \max_{\mu \in P_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} \bar{e}_{N,c_z}(\mu)$, $\bar{e}_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \|c_{z,O}^n(\mu) - \hat{c}_{z,O}^n(\mu)\|$, and $\hat{c}_{z,O}^n(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.
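The goal-oriented selection in the greedy loop then reduces to picking the training parameter with the largest time-averaged output error bound. A minimal sketch with hypothetical data:

```python
# Time-averaged output error indicator and greedy parameter selection
# (the dictionary maps mu = (Q, t_in) to per-step bounds; values are made up).
def averaged_indicator(eta_per_step):
    return sum(eta_per_step) / len(eta_per_step)   # mean over n = 1..K

def select_parameter(train_bounds):
    return max(train_bounds, key=lambda mu: averaged_indicator(train_bounds[mu]))

mu_max = select_parameter({(0.10, 1.0): [1e-3, 2e-3],
                           (0.12, 1.5): [5e-3, 6e-3]})
```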

Figure 3 shows the results for Algorithm 5, where $tol_{decay} = 0.03$. With the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm; the size of each circle indicates how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations over a validation set $P_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max error} = \max_{\mu \in P_{val}} e_N(\mu)$. It is seen that the average runtime of a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 (semi-log plot): maximal error over the training set versus the size N of the RB; curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e_N^{\max}$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in P_{train}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta_{N,M,c_z}^n(\mu)$.


[Figure 3 (semi-log plot): maximal error over the training set versus the size N of the RB; curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 (scatter plot): selected parameters in the plane of the feed flow rate Q ∈ [0.0667, 0.1667] and the injection period t_in ∈ [0.5, 2.0].]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $P_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{ASS} = 1.0 \times 10^{-4}$, $tol_{RB} = 1.0 \times 10^{-6}$.

  Simulation               Max error   Average runtime [s]   SpF
  FOM (N = 1500)           -           312.13                (-)
  ROM (POD-Greedy)         3.79e-7     6.3                   50
  ROM (ASS-POD-Greedy)     4.58e-7     6.3                   50

[Figure 5: dimensionless concentration versus dimensionless time at the column outlet; curves: c_a-FOM, c_b-FOM, c_a-ROM, c_b-ROM.]

Figure 5: Concentrations at the outlet of the column computed using the FOM (N = 1500) and the ROM (N = 47) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

  $\min_{\mu \in \mathcal{P}} -\hat{Pr}(\mu)$
  s.t. $Rec_{\min} - \hat{Rec}(\mu) \le 0$,
  $\hat{c}_z^n(\mu)$, $\hat{q}_z^n(\mu)$ are the RB approximations from the ROM (14), $z = a, b$.

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6: the optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many-query simulations with varying parameters, two runtimes should be well balanced: the cost of constructing and then using a surrogate ROM on the one hand, and the cost of directly using the original FOM on the other.
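This balance admits a simple break-even estimate: the surrogate pays off once the offline investment is amortized over enough queries. A sketch using the timings reported in Tables 4-6 as rough assumptions (about 14 h offline cost with ASS, 6.3 s per reduced solve, 312.13 s per detailed solve):

```python
# Back-of-the-envelope break-even analysis for the offline/online trade-off;
# the timings are rough values taken from the tables above (assumptions).
def break_even_queries(offline_s, t_rom_s, t_fom_s):
    # smallest n with offline_s + n * t_rom_s < n * t_fom_s
    return offline_s / (t_fom_s - t_rom_s)

n = break_even_queries(offline_s=14.0 * 3600, t_rom_s=6.3, t_fom_s=312.13)
# n is about 165; the optimization run above needed 202 iterations,
# so under these assumptions the ROM already pays off within a single run
```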

Table 6: Comparison of the optimization based on the ROM and the FOM.

  Simulation       Objective (Pr)   Opt. solution (µ)    N_it ¹   Runtime [h]   SpF
  FOM-based Opt.   0.020264         (0.07964, 1.05445)   202      33.88         -
  ROM-based Opt.   0.020266         (0.07964, 1.05445)   202      0.63          54

  ¹ N_it denotes the number of iterations required to converge.

9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed so that the RB extension stops in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9-44.

[2] M. Barrault, Y. Maday, N. C. Nguyen and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academie des Sciences, Paris, Series I, 339 (2004), pp. 667-672.

[3] U. Baur, C. Beattie, P. Benner and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489-2518.

[4] P. Benner, L. Feng, S. Li and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).

[6] P. Benner, E. Sachs and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270-3288.

[11] M. Drohmann, B. Haasdonk and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937-969.

[12] J. L. Eftang, D. J. Knezevic and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395-422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universitaet Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268-280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501-1021502.

[16] L. Feng, P. Benner and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84-110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401-1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339-344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859-873.

[22] B. Haasdonk, M. Dihlmann and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423-442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471-478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277-302.

[25] B. Haasdonk, M. Ohlberger and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145-161.

[26] K.-H. Hoffmann, G. Leugering and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646-670.

[30] N.-C. Nguyen, G. Rozza and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157-185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455-462.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1-10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43-65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70-80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.




we show the derivation of the FOM based on the FV discretization for the batchchromatographic model (3)minus(5) and the efficient generation of the ROM

61 Full-order model based on FV discretizationAs is mentioned in Section 1 we use the FV discretization for the batch chromato-graphic model (3)minus(5) More specifically we use the Lax-Friedrichs flux [28] for theconvection contribution central difference approximation for the diffusion terms andthe Crank-Nicolson scheme for the time discretization The full discrete FV formula-tion for the system (3)minus(4) can be written as followsAcn+1

z = Bcnz + dnz minus1minus εε

∆thnz

qn+1z = qnz + ∆thnz

(12)

where cnz = cnz (micro) = (czn1 cznN )T qnz = qnz (micro) = (qzn1 qznN )T isin RN z = a bindicates the solutions of the field variables cz and qz at time instance t = tn (n =0 K) A and B are tridiagonal constant matrices dnz and hnz are parameter- andtime-dependent

dnz = dn0 e1 hnz = (hzn1 hznN )T

with dn0 = ∆xPe(λ2 + ν

)χ[0tin](tn) λ = ∆t

∆x ν = ∆tPe∆x2 e1 = (1 0 0)T isin RN

and hznj = hz(ca

nj cb

nj qz

nj ) = L

Q(εAc)κz(fz(ca

nj cb

nj )minus qznj

) j = 1 N

62 Reduced-order modelLet N isin N+ be the number of the RB vectors for cz and qz and M isin N+ be the numberof the CRB vectors for the operators ha and hb Here for simplicity of analysis we usethe same dimensionN of the RB for ca cb qa and qb but one can certainly take differentdimensions for the RB This also applies to ha and hb Assume that Wz isin RNtimesM isthe CRB for the nonlinear operator hz and Vcz

Vqzisin RNtimesN

(V Tcz

Vcz= I V Tqz

Vqz= I)

are the RB for the field variables cz and qz respectively ie

hnz asympWzβnz cnz asymp cnz = Vcz

ancz qnz asymp qnz = Vqz

anqz n = 0 K (13)

Applying Galerkin projection and empirical operator interpolation we formulate theROM for the FOM (12) as follows Acz

an+1cz

= Bczancz

+ dn0 dczminus 1minus ε

ε∆tHcz

βnz

an+1qz

= anqz+ ∆tHqz

βnz (14)

where ancz= ancz

(micro) = (aczn1 acz

nN )T anqz

= anqz(micro) =

(aqz

n1 aqz

nN

)T isin RN

are the reduced state vector of the ROM Acz= V Tcz

AVcz Bcz

= V TczBVcz

dcz= V Tcz

e1Hcz

= V TczWz Hqz

= V TqzWz are the reduced matrices

12

Note that βnz = βnz (micro) = (βzn1 βznM )T isin RM are the vectors of coefficients

for the empirical interpolation of the nonlinear operator hnz and are parameter- andtime-dependent The evaluation of βnz is essentially the same as the computation ofthe coefficients σi(micro) in (10) in Algorithm 2 More specifically it can be obtained bysolving the following system of equations

Msumi=1

βzniWzi(xj) = hz

nj j = 1 M

Here the evaluation of hznj only needs the j-th entries (canj cb

nj and qznj ) of the solution

vectors (can cb

n and qzn) ie hznj = hz(ca

nj cb

nj qz

nj ) For the general operator

empirical interpolation the value of the operator at the interpolation point (eg xj)may depend on more entries of the solution vectors (eg the j-th entries and theirneighbors) For more details refer to [11 25]

63 Offline-online decompositionThe efficiency of the RB approximation is ensured by a strategy of suitable offline-online decomposition which decouples the generation and projection of the RB ap-proximation Computation entails a possibly expensive offline phase performed onlyonce and a cheap online phase for any chosen parameter in the parameter domainDuring the offline stage the RB the CRB reduced matrices and all N -dependentterms are computed and stored in the online process for any given parameter micro allparameter-dependent coefficients and the RB approximation are evaluated rapidly

More precisely, in the offline process, given training sample sets P^crb_train and P_train (they can be chosen differently), Algorithm 2 is implemented to generate the CRB W_z for the nonlinear operator h_z. Then Algorithm 4 is used to generate the reduced bases V_{c_z} and V_{q_z}. Consequently, all N-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., Â_{c_z}, B̂_{c_z}, d̂_{c_z}, Ĥ_{c_z} and Ĥ_{q_z}), and the N-independent ROM can be formulated as in (14). For a newly given parameter µ ∈ P, the low-dimensional model (14) is solved online, and the solution to the FOM (12) can be recovered by (13).

7 Output-oriented error estimation

It is crucial to obtain a sharp, rigorous and inexpensive a posteriori error bound [34], which enables a reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25], the authors provided an error estimation for the field variable in functional space for evolution equations. Since, in practice, all simulations are done in a finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than the one in operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications, the output response y(u_N) is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error indicator η_N(µ_max) should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z_1, z_2⟩ = z_1^T z_2, ∀ z_1, z_2 ∈ R^N. The induced norm ‖·‖ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by the finite element method, the solution to the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm is the correspondingly induced norm.

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall that in Section 3 L^I(t^n) and L^E(t^n) are linear; the evolution scheme (7) can be rewritten in the vector space as

A^(n) u^{n+1}(µ) = B^(n) u^n(µ) + g(u^n(µ); µ),   (15)

where A^(n), B^(n) ∈ R^{N×N} are constant matrices and g(u^n(µ); µ) ∈ R^N corresponds to the nonlinear term. Note that A^(n) and B^(n) are nonsingular for a stable scheme in practice, n = 0, ..., K−1.

Given a parameter µ ∈ P, let û^n(µ) = V a^n(µ) be the RB approximation of u^n(µ), and ĝ^n(µ) = I_M[g(û^n(µ))] = W β^n(µ) be the interpolant of the nonlinear term, where V ∈ R^{N×N}, W ∈ R^{N×M} are the precomputed parameter-independent bases, and a^n(µ) ∈ R^N, β^n(µ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on µ in u^n(µ), û^n(µ), a^n(µ) and β^n(µ) and write u^n, û^n, a^n and β^n instead. The following a posteriori error estimation is based on the residual

r^{n+1}(µ) = B^(n) û^n + I_M[g(û^n)] − A^(n) û^{n+1}.   (16)

A simple computation gives the norm of the residual:

‖r^{n+1}(µ)‖² = ⟨r^{n+1}(µ), r^{n+1}(µ)⟩
= (a^n)^T V^T (B^(n))^T B^(n) V a^n + (β^n)^T W^T W β^n + (a^{n+1})^T V^T (A^(n))^T A^(n) V a^{n+1}
+ 2 (β^n)^T W^T B^(n) V a^n − 2 (a^n)^T V^T (B^(n))^T A^(n) V a^{n+1} − 2 (β^n)^T W^T A^(n) V a^{n+1}.   (17)
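The expansion (17) is what makes the residual norm cheap to evaluate: once the small matrices (BV)^T BV, W^T W, (AV)^T AV, W^T BV, (BV)^T AV and W^T AV are precomputed, evaluating ‖r^{n+1}‖² no longer touches the full dimension N. A sketch (names are ours), assuming constant A and B:

```python
import numpy as np

def precompute_gramians(A, B, V, W):
    """Offline: assemble the small Gramians appearing in (17)."""
    AV, BV = A @ V, B @ V
    return {'BB': BV.T @ BV, 'WW': W.T @ W, 'AA': AV.T @ AV,
            'WB': W.T @ BV, 'BA': BV.T @ AV, 'WA': W.T @ AV}

def residual_norm_sq(G, a_n, a_np1, beta_n):
    """Online: ||r^{n+1}||^2 from (17), using only reduced-size quantities."""
    return (a_n @ G['BB'] @ a_n + beta_n @ G['WW'] @ beta_n
            + a_np1 @ G['AA'] @ a_np1 + 2 * beta_n @ G['WB'] @ a_n
            - 2 * a_n @ G['BA'] @ a_np1 - 2 * beta_n @ G['WA'] @ a_np1)
```

A direct check against the N-dimensional residual r = B V a^n + W β^n − A V a^{n+1} confirms the algebra.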

Proposition 7.1. Assume that the operator g: R^N → R^N is Lipschitz continuous, i.e., there exists a positive constant L_g such that

‖g(x) − g(y)‖ ≤ L_g ‖x − y‖,   x, y ∈ W_N,

and the interpolation of g is "exact" with a certain dimension of W = [W_1, ..., W_{M+M'}], i.e.,

I_{M+M'}[g(û^n)] = Σ_{m=1}^{M+M'} W_m · β^n_m = g(û^n).

Assume further that for all µ ∈ P the initial projection error vanishes, ‖e^0(µ)‖ = 0. Then the approximation error e^n(µ) = u^n − û^n satisfies

‖e^n(µ)‖ ≤ Σ_{k=0}^{n−1} ‖(A^(k))^{−1}‖ [ Π_{j=k+1}^{n−1} G^(j) ] ( ε^k_EI(µ) + ‖r^{k+1}(µ)‖ ),   (18)

where G^(j) = ‖(A^(j))^{−1}‖ ( ‖B^(j)‖ + L_g ), and ε^n_EI(µ) is the error due to the interpolation. A sharper error bound can be given as

‖e^n(µ)‖ ≤ η^n_{N,M}(µ) := Σ_{k=0}^{n−1} [ Π_{j=k+1}^{n−1} G_F^(j) ] ( ‖(A^(k))^{−1}‖ ε^k_EI(µ) + ‖(A^(k))^{−1} r^{k+1}(µ)‖ ),   (19)

where G_F^(j) = ‖(A^(j))^{−1} B^(j)‖ + L_g ‖(A^(j))^{−1}‖, n = 0, ..., K−1.
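In time-stepping form, the bound (19) is simply the recursion η^{n+1} = G_F^(n) η^n + ‖(A^(n))^{−1}‖ ε^n_EI + ‖(A^(n))^{−1} r^{n+1}‖ with η^0 = 0, which is how one would accumulate it in code; a sketch with per-step scalars assumed precomputed:

```python
def accumulate_bound(GF, Ainv_norm, eps_EI, Ainv_res_norm):
    """Accumulate the error bound (19) over the time steps.
    GF[n], Ainv_norm[n], eps_EI[n], Ainv_res_norm[n] hold the per-step
    scalars ||A^{-1}B|| + Lg*||A^{-1}||, ||A^{-1}||, the EI error, and
    ||A^{-1} r^{n+1}||.  Returns the list [eta^0, ..., eta^K]."""
    eta = [0.0]
    for n in range(len(GF)):
        eta.append(GF[n] * eta[-1]
                   + Ainv_norm[n] * eps_EI[n] + Ainv_res_norm[n])
    return eta
```

Unrolling the recursion reproduces the double sum/product in (19) term by term.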

Proof. By forming the difference of (15) and (16), we have the error equation

A^(n) e^{n+1}(µ) = B^(n) e^n(µ) + g(u^n) − I_M[g(û^n)] + r^{n+1}(µ)
= B^(n) e^n(µ) + (g(u^n) − g(û^n)) + (g(û^n) − I_M[g(û^n)]) + r^{n+1}(µ).   (20)

Multiplying both sides of (20) by (A^(n))^{−1}, we obtain

e^{n+1}(µ) = (A^(n))^{−1} B^(n) e^n(µ) + (A^(n))^{−1} (g(u^n) − g(û^n)) + (A^(n))^{−1} (g(û^n) − I_M[g(û^n)]) + (A^(n))^{−1} r^{n+1}(µ).   (21)

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(û^n)‖ ≤ L_g ‖e^n(µ)‖. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

‖e^{n+1}(µ)‖ ≤ ‖(A^(n))^{−1}‖ ( ( ‖B^(n)‖ + L_g ) ‖e^n(µ)‖ + ε^n_EI(µ) + ‖r^{n+1}(µ)‖ ),   (22)

where ε^n_EI(µ) = ‖g(û^n) − I_M[g(û^n)]‖ = ‖Σ_{m=M+1}^{M+M'} W_m · β^n_m‖. Resolving the recursion (22) with initial error ‖e^0(µ)‖ = 0 yields the error bound in (18).

To get the error bound in (19), we revisit equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(µ)‖ is of the following form:

‖e^{n+1}(µ)‖ ≤ ( ‖(A^(n))^{−1} B^(n)‖ + L_g ‖(A^(n))^{−1}‖ ) ‖e^n(µ)‖ + ‖(A^(n))^{−1}‖ ε^n_EI(µ) + ‖(A^(n))^{−1} r^{n+1}(µ)‖,   (23)

since the following two inequalities hold: ‖(A^(n))^{−1} B^(n)‖ ≤ ‖(A^(n))^{−1}‖ ‖B^(n)‖ and ‖(A^(n))^{−1} r^{n+1}‖ ≤ ‖(A^(n))^{−1}‖ ‖r^{n+1}‖. Resolving the recursion (23) with initial error ‖e^0(µ)‖ = 0 yields the proposed error bound in (19).

Remark 7.2. In many cases the operators L^I(t^n) and L^E(t^n) in (7) are independent of t^n, so that the coefficient matrices A^(n), B^(n) in (15) are constant matrices; see, e.g., (12). In such a case, the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when G^(j) = ‖(A^(j))^{−1}‖ ( ‖B^(j)‖ + L_g ) > 1 in (18). In the vector space, this problem can easily be avoided by using (23) instead of (22) if G_F^(j) = ‖(A^(j))^{−1} B^(j)‖ + L_g ‖(A^(j))^{−1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(µ) by using (17). Note that all the matrix products in (17) can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of ‖A^{−1} r^n(µ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of N. In addition, as is shown in [11], a small M' gives good results; in practice we use M' = 1 in the later simulations.

Remark 7.5. The 2-norm is applied to the above error bounds, and the 2-norm of a matrix H is its spectral norm. Therefore,

‖H^{−1}‖ = σ_max(H^{−1}) = 1 / σ_min(H).
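This identity lets one evaluate constants such as ‖A^{−1}‖ without ever forming an inverse explicitly; a minimal sketch:

```python
import numpy as np

def inv_norm_2(H):
    """||H^{-1}||_2 = 1 / sigma_min(H), computed from the SVD of H,
    so no explicit inverse is required."""
    sigma = np.linalg.svd(H, compute_uv=False)  # singular values, descending
    return 1.0 / sigma[-1]
```

For a fixed matrix (cf. Remark 7.2), this is computed once offline and reused for all parameters.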

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(u^n(µ)) can be expressed in the following form:

y(u^n(µ)) = P u^n,   (24)

where P ∈ R^{N_O×N} is a constant matrix. Then the output error e^n_O(µ) = P u^n − P û^n satisfies

‖e^{n+1}_O(µ)‖ ≤ η̄^{n+1}_{N,M} := G_O^(n) η^n_{N,M} + ‖P (A^(n))^{−1}‖ ε^n_EI(µ) + ‖P‖ ‖(A^(n))^{−1} r^{n+1}(µ)‖,   (25)

where G_O^(n) = ‖P (A^(n))^{−1} B^(n)‖ + L_g ‖P (A^(n))^{−1}‖, n = 0, ..., K−1.

Proof. Multiplying both sides of the error equation (21) by P from the left, we get

P e^{n+1}(µ) = P ( (A^(n))^{−1} B^(n) e^n(µ) + (A^(n))^{−1} (g(u^n) − g(û^n)) + (A^(n))^{−1} (g(û^n) − I_M[g(û^n)]) + (A^(n))^{−1} r^{n+1}(µ) ).

Applying the Lipschitz condition of g and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

‖e^{n+1}_O(µ)‖ = ‖P e^{n+1}(µ)‖ ≤ G_O^(n) ‖e^n(µ)‖ + ‖P (A^(n))^{−1}‖ ε^n_EI(µ) + ‖P‖ ‖(A^(n))^{−1} r^{n+1}(µ)‖.   (26)

Replacing ‖e^n(µ)‖ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once an error estimation for the field variable is available, e.g., (19), a trivial error bound for the output (24) can be given as

‖e^{n+1}_O(µ)‖ = ‖P e^{n+1}(µ)‖ ≤ ‖P‖ ‖e^{n+1}(µ)‖
≤ ‖P‖ ( G_F^(n) ‖e^n(µ)‖ + ‖(A^(n))^{−1}‖ ε^n_EI(µ) + ‖(A^(n))^{−1} r^{n+1}(µ)‖ ).   (27)

The last inequality holds due to inequality (23). It is obvious that the bound for ‖e^{n+1}_O(µ)‖ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; therefore, it is desirable to generate separate reduced bases for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable c_z as an example, and recall the detailed simulation for c_z (see (12)):

A c^{n+1}_z = B c^n_z + d^n_z − ((1−ε)/ε) Δt h^n_z.   (28)

The residual caused by the approximate solution ĉ^n_z in (13) is

r^{n+1}_{c_z}(µ) = B ĉ^n_z + d^n_z − ((1−ε)/ε) Δt I_M[h_z(ĉ^n_z)] − A ĉ^{n+1}_z.   (29)

Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. This makes the following error bounds in (32) and (33) relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d^n_z in (28) comes from the Neumann boundary condition, which does not depend on the solution c^n_z. Instead of requiring a Lipschitz continuity condition for h_z as a function of c^n_a, c^n_b and q^n_z, we assume there exists a positive constant L_h such that

‖h_z(c^n_a, c^n_b, q^n_z) − h_z(ĉ^n_a, ĉ^n_b, q̂^n_z)‖ ≤ L_h ‖c^n_z − ĉ^n_z‖,   n = 0, ..., K.   (30)

Assuming the initial projection error vanishes, ‖e^0_{c_z}(µ)‖ = 0, we have a similar estimation for the approximation error e^n_{c_z}(µ) = c^n_z − ĉ^n_z (n = 1, ..., K) as follows:

‖e^n_{c_z}(µ)‖ ≤ Σ_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} ( τ ε^k_EI(µ) + ‖r^{k+1}_{c_z}(µ)‖ ),   (31)

where C = ‖B‖ + τ L_h, τ = ((1−ε)/ε) Δt. More tightly,

‖e^n_{c_z}(µ)‖ ≤ η^n_{N,M,c_z}(µ) := Σ_{k=0}^{n−1} (G_{F,c})^{n−1−k} ( τ ‖A^{−1}‖ ε^k_EI(µ) + ‖A^{−1} r^{k+1}_{c_z}(µ)‖ ),   (32)

where G_{F,c} = ‖A^{−1} B‖ + τ L_h ‖A^{−1}‖.

Analogously, the error bound for the output of interest, e^n_{c_z,O}(µ) = P c^n_z − P ĉ^n_z, can be obtained based on the error bound of the field variable. Similar to (25), we have

‖e^{n+1}_{c_z,O}(µ)‖ ≤ η̄^{n+1}_{N,M,c_z}(µ) := G_{O,c} η^n_{N,M,c_z}(µ) + τ ‖P A^{−1}‖ ε^n_EI(µ) + ‖P‖ ‖A^{−1} r^{n+1}_{c_z}(µ)‖,   (33)

where G_{O,c} = ‖P A^{−1} B‖ + τ L_h ‖P A^{−1}‖. Note that P = (0, ..., 0, 1) ∈ R^N in this model, which means the norm of the output error e^{n+1}_{c_z,O}(µ) is the absolute value of the last entry of the error vector e^{n+1}_{c_z}(µ).

Remark 7.8. The error estimates for q_a and q_b in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and therefore are not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables U = (c_a, c_b, q_a, q_b)^T by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the output error bound in (33) involves the error bound η^n_{N,M,c_z}(µ) for the field variable c_z; if the output error estimation were derived by considering all the field variables together, it would involve the corresponding error bound (denoted by η^n_{N,M,U}(µ)) for the vector U instead. Obviously, the error bound η^n_{N,M,U}(µ) is much rougher than the bound η^n_{N,M,c_z}(µ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be chosen conservatively large, and the weight τ L_h remains small because the time step Δt is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables (η^n_{N,M}(µ) or η^n_{N,M,c_z}(µ)) accumulates over time. Since η^n_{N,M}(µ) (η^n_{N,M,c_z}(µ), respectively) enters the output error bound in (25) ((33), respectively), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that error estimates such as (18) may lose sharpness when many time instances t^n, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to obtain an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate d_η of the error bound. If d_η is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter µ_max determined by the greedy algorithm. When the true output error at µ_max is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process should continue.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: P_train, µ^0, tol_RB (< 1), tol_decay
Output: RB V = [V_1, ..., V_N]
1:  Implement Step 1 in Algorithm 4.
2:  while the error η_N(µ_max) > tol_RB do
3:    Implement Steps 3–6 in Algorithm 4.
4:    Compute the decay rate of the error bound: d_η = (η_{N−1}(µ^old_max) − η_N(µ_max)) / η_{N−1}(µ^old_max).
5:    if d_η < tol_decay then
6:      Compute the true output error at the selected parameter µ_max: e_N(µ_max).
7:      if e_N(µ_max) < tol_RB then
8:        Stop.
9:      end if
10:   end if
11: end while
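The decision logic of Steps 2 and 4–10 can be isolated in a small helper; a sketch (names are ours; `eta_hist` holds the successive values of the error bound η_N(µ_max), and `true_output_error` is a user-supplied callback for the validation step):

```python
def early_stop(eta_hist, mu_max, tol_RB, tol_decay, true_output_error):
    """Return True if the RB extension should stop (cf. Algorithm 5)."""
    if eta_hist[-1] <= tol_RB:                      # regular termination
        return True
    if len(eta_hist) < 2:                           # need two values for d_eta
        return False
    d_eta = (eta_hist[-2] - eta_hist[-1]) / eta_hist[-2]
    if d_eta < tol_decay:                           # bound stagnates
        return true_output_error(mu_max) < tol_RB   # validate the true error
    return False
```

Only when the bound stagnates is the (more expensive) true output error computed, so the validation step adds little cost in the regular decaying regime.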


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to allow for such a case, the tolerance tol_decay should be set to a very small value, which permits some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variables µ = (Q, t_in) are chosen optimally in a reasonable parameter domain to maximize the production rate Pr(µ) = s(µ)Q / t_cyc, while respecting a requirement on the recovery yield Rec(µ) = s(µ) / (t_in (c^f_a + c^f_b)). Here s(µ) = ∫_{t_3}^{t_4} c_{a,O}(t; µ) dt + ∫_{t_1}^{t_2} c_{b,O}(t; µ) dt, and c_{z,O}(t; µ) = c_z(t, 1; µ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

min_{µ∈P} −Pr(µ)
s.t. Rec_min − Rec(µ) ≤ 0, µ ∈ P,
     c_z(µ), q_z(µ) are the solutions to the system (3)–(5), z = a, b.   (34)

Notice that, when solving the system (3)–(5), the time step size has to be taken relatively small so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integrals in s(µ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter µ ∈ P, which causes considerable difficulties in the error estimation and the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1; the Henry constants and thermodynamic coefficients of the isotherm equation (4) are given in Table 2. The parameter domain for the operating variables µ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(µ). To capture the dynamics precisely, the dimension of the spatial discretization N in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model

  Column dimensions [cm]                           2.6 × 10.5
  Column porosity ε [–]                            0.4
  Peclet number Pe [–]                             2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]   0.1
  Feed concentrations c^f_z, z = a, b [g/l]        2.9

Table 2: Coefficients of the adsorption isotherm equation

  H_{a,1} [–]    2.69     H_{b,1} [–]    3.73
  H_{a,2} [–]    0.1      H_{b,2} [–]    0.3
  K_{a,1} [l/g]  0.0336   K_{b,1} [l/g]  0.0446
  K_{a,2} [l/g]  1.0      K_{b,2} [l/g]  3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, and then show the output error estimation for the generation of the RB. Finally, we present the results of the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection (ASS) technique, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(µ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of µ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^−4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^−7) with different thresholds; M' = 1 is the number of additional basis vectors used for the error estimation.

  Method   tol_ASS      Res(ξ^a_{M+M'}, ξ^b_{M+M'})   M (W_a, W_b)   Runtime [h]
  no ASS   –            (9.2×10^−8, 8.5×10^−8)        (146, 152)     62.5 (–)
  ASS      1.0×10^−4    (9.6×10^−8, 8.1×10^−8)        (147, 152)     6.05 (−90.3%)
  ASS      1.0×10^−3    (8.7×10^−8, 9.9×10^−8)        (147, 152)     3.62 (−94.2%)
  ASS      1.0×10^−2    (9.4×10^−8, 6.2×10^−8)        (144, 150)     2.70 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10^−4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^−7, tol_RB = 1.0 × 10^−6, tol_ASS = 1.0 × 10^−4. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithm          Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (−51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound

As mentioned above, it is advisable to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter µ, the values of Pr(µ) and Rec(µ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(µ) = P c^n_z(µ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator η_N(µ) in the greedy algorithm (e.g., Algorithm 4 or 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η̄^{n+1}_{N,M,c_z}(µ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1}, for a given parameter µ ∈ P. We use the following error bound in Algorithm 4:

η_N(µ_max) = max_{µ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(µ),

where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^K η̄^n_{N,M,c_z}(µ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as

e^max_N = max_{µ ∈ P_train} e_N(µ),

where e_N(µ) = max_{z ∈ {a,b}} e_{N,c_z}(µ), e_{N,c_z}(µ) = (1/K) Σ_{n=1}^K ‖c^n_{z,O}(µ) − ĉ^n_{z,O}(µ)‖, and ĉ^n_{z,O}(µ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.
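The greedy indicator used here — the time-averaged output bound, maximized over the components and the training set — can be sketched as follows (names are ours; `bounds(mu)` is assumed to return the per-time-step bound sequences for z = a, b):

```python
def greedy_indicator(bounds, train_set):
    """Return (eta_N(mu_max), mu_max): the worst time-averaged output
    error bound over the training set and the parameter attaining it."""
    best_mu, best_val = None, float('-inf')
    for mu in train_set:
        # time average per component, then worst component
        val = max(sum(seq) / len(seq) for seq in bounds(mu))
        if val > best_val:
            best_mu, best_val = mu, val
    return best_val, best_mu
```

The selected µ_max is then the candidate for the next basis enrichment step.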

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. With the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of each circle shows how frequently the same parameter is selected for the RB extension.

We point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates; this will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed detailed and reduced simulations over a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{µ∈P_val} e_N(µ). It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter µ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (semi-log plot of the maximal error over P_train versus the size N of the RB; curves: field variable error bound, output error bound, true output error). The output error bound η_N(µ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{µ∈P_train} max_{z∈{a,b}} η̄_{N,M,c_z}(µ), where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^K η^n_{N,M,c_z}(µ).

Figure 3: Error bound decay during the RB extension using the early-stop technique, Algorithm 5, and the corresponding maximal true output error (semi-log plot of the maximal error over P_train versus the size N of the RB; curves: output error bound, true output error).

Figure 4: Parameter selection in the generation of the RB (feed flow rate Q versus injection period t_in). The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0×10^−7, tol_ASS = 1.0×10^−4, tol_RB = 1.0×10^−6.

  Simulation             Max error     Average runtime [s]   SpF
  FOM (N = 1500)         –             31.213                (–)
  ROM, POD-Greedy        3.79×10^−7    0.63                  50
  ROM, ASS-POD-Greedy    4.58×10^−7    0.63                  50

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, 1.3487) (dimensionless concentration versus dimensionless time; curves: c_a-FOM, c_b-FOM, c_a-ROM, c_b-ROM).


8.3 ROM-based optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

min_{µ∈P} −P̂r(µ)
s.t. Rec_min − R̂ec(µ) ≤ 0,
     ĉ^n_z(µ), q̂^n_z(µ) are the RB approximations from the ROM (14), z = a, b.

Here P̂r(µ) and R̂ec(µ) are the approximate production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let µ^k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖µ^{k+1} − µ^k‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^−4. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
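Structurally, each optimizer iterate triggers one cheap ROM evaluation of P̂r and R̂ec. The sketch below makes that coupling explicit, with a plain grid search standing in for the NLopt DIRECT-L optimizer and stub callables standing in for the ROM-based evaluations (all names are ours, for illustration only):

```python
def rom_based_optimum(pr_hat, rec_hat, rec_min, Q_range, tin_range, n=40):
    """Maximize the ROM-approximated production rate pr_hat(Q, t_in)
    subject to rec_hat(Q, t_in) >= rec_min, by exhaustive grid search
    (a stand-in for a gradient-free global optimizer)."""
    best_mu, best_val = None, float('-inf')
    for i in range(n + 1):
        Q = Q_range[0] + (Q_range[1] - Q_range[0]) * i / n
        for j in range(n + 1):
            t = tin_range[0] + (tin_range[1] - tin_range[0]) * j / n
            if rec_hat(Q, t) < rec_min:   # recovery-yield constraint violated
                continue
            val = pr_hat(Q, t)            # one cheap ROM evaluation
            if val > best_val:
                best_mu, best_val = (Q, t), val
    return best_mu, best_val
```

In the actual setup, `pr_hat` and `rec_hat` would each solve the small system (14) and post-process its output, which is why replacing the FOM by the ROM speeds up every iteration of the optimizer.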

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is used repeatedly in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the cost of constructing and using a surrogate ROM, and the cost of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

  Simulation       Objective (Pr)   Opt. solution (µ)     N_it ¹   Runtime [h]   SpF
  FOM-based Opt.   0.020264         (0.07964, 1.05445)    202      33.88         –
  ROM-based Opt.   0.020266         (0.07964, 1.05445)    202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived, based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed, which makes the RB extension stop in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal Control of Partial Differential Equations: International Conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Note that $\beta^n_z = \beta^n_z(\mu) = (\beta^n_{z,1}, \ldots, \beta^n_{z,M})^T \in \mathbb{R}^M$ are the vectors of coefficients for the empirical interpolation of the nonlinear operator $h^n_z$, and are parameter- and time-dependent. The evaluation of $\beta^n_z$ is essentially the same as the computation of the coefficients $\sigma_i(\mu)$ in (10) in Algorithm 2. More specifically, it can be obtained by solving the following system of equations:

$$\sum_{i=1}^{M} \beta^n_{z,i}\, W_{z,i}(x_j) = h^n_{z,j}, \qquad j = 1, \ldots, M.$$

Here the evaluation of $h^n_{z,j}$ only needs the $j$-th entries ($c^n_{a,j}$, $c^n_{b,j}$ and $q^n_{z,j}$) of the solution vectors ($c^n_a$, $c^n_b$ and $q^n_z$), i.e., $h^n_{z,j} = h_z(c^n_{a,j}, c^n_{b,j}, q^n_{z,j})$. For the general operator empirical interpolation, the value of the operator at an interpolation point (e.g., $x_j$) may depend on more entries of the solution vectors (e.g., the $j$-th entries and their neighbors). For more details, refer to [11, 25].
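The coefficient computation above is a small $M \times M$ linear solve. The following sketch illustrates it (helper and variable names are hypothetical, not code from the paper); in the standard EIM construction the matrix with entries $W_{z,i}(x_j)$ is typically lower triangular with unit diagonal, which makes the solve cheap:

```python
import numpy as np

def ei_coefficients(W_at_points, h_at_points):
    """Solve sum_i beta_i * W_i(x_j) = h_j for the EI coefficients beta.

    W_at_points[j, i] = W_{z,i}(x_j); by the EIM point selection this matrix
    is typically lower triangular with ones on the diagonal."""
    return np.linalg.solve(W_at_points, h_at_points)

# toy 2 x 2 example with illustrative values only
B = np.array([[1.0, 0.0],
              [0.5, 1.0]])
h = np.array([2.0, 3.0])
beta = ei_coefficients(B, h)
assert np.allclose(B @ beta, h)   # the interpolant matches h at the magic points
```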

6.3 Offline-online decomposition

The efficiency of the RB approximation is ensured by a suitable offline-online decomposition, which decouples the generation and the projection of the RB approximation. The computation entails a possibly expensive offline phase, performed only once, and a cheap online phase for any chosen parameter in the parameter domain. During the offline stage, the RB, the CRB, the reduced matrices and all $\mathcal{N}$-dependent terms are computed and stored; in the online stage, for any given parameter $\mu$, all parameter-dependent coefficients and the RB approximation are evaluated rapidly.

More precisely, in the offline phase, given training sets $P^{\mathrm{crb}}_{\mathrm{train}}$ and $P_{\mathrm{train}}$ (they can be chosen differently), Algorithm 2 is implemented to generate the CRB $W_z$ for the nonlinear operator $h_z$. Then Algorithm 4 is used to generate the reduced bases $V_{c_z}$ and $V_{q_z}$. Consequently, all $\mathcal{N}$-dependent terms are precomputed and assembled to construct the reduced matrices (e.g., $A_{c_z}$, $B_{c_z}$, $d_{c_z}$, $H_{c_z}$ and $H_{q_z}$), and the $\mathcal{N}$-independent ROM can be formulated as in (14). For a newly given parameter $\mu \in \mathcal{P}$, the low-dimensional model (14) is solved online, and the solution of the FOM (12) can be recovered by (13).
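The split can be sketched in a few lines, assuming for illustration a single $\mu$-separable linear term $\mu \cdot A$ (class and variable names are hypothetical, not the paper's code):

```python
import numpy as np

class ReducedModel:
    """Minimal sketch of the offline-online decomposition."""

    def offline(self, A_full, V):
        # expensive, N-dependent, performed only once:
        # project the full-order operator onto the reduced basis V
        self.A_red = V.T @ A_full @ V          # small and parameter-independent

    def online(self, f_red, mu):
        # cheap, N-independent, performed for every new parameter:
        # assemble and solve the small reduced system
        return np.linalg.solve(mu * self.A_red, f_red)

rng = np.random.default_rng(0)
A = 100.0 * np.eye(80) + rng.standard_normal((80, 80))   # "full-order" operator
V, _ = np.linalg.qr(rng.standard_normal((80, 6)))        # orthonormal RB, N = 6
rom = ReducedModel()
rom.offline(A, V)
a = rom.online(np.ones(6), mu=2.0)                       # reduced coefficients
```

The recovered approximation of the full solution would then be `V @ a`, mirroring the recovery step (13).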

7 Output-oriented error estimation

It is crucial to obtain a sharp, rigorous and inexpensive a posteriori error bound [34], which enables reliable and low-cost construction of the RB. In the past years, many efforts have been made for different problems, e.g., [11, 18, 24, 25, 35, 36]. One common technique for the derivation of the error estimator is based on the residual. In [11, 25] the authors provided an error estimation for the field variable in functional space for evolution equations. Since in practice all the simulations are done in the finite-dimensional vector space, in this work we derive an error estimation for the field variable directly in the vector space, which is sharper than in the operator form. Moreover, we derive an output-oriented error bound based on the error estimation for the field variable. For many applications the output response $y(u^n)$ is of interest. Hence, during the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimation $\eta_N(\mu_{\max})$ should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as $\langle z_1, z_2 \rangle = z_1^T z_2$ for all $z_1, z_2 \in \mathbb{R}^{\mathcal{N}}$. The induced norm $\|\cdot\|$ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by using the finite element method, the solution of the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm is the correspondingly induced norm.
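The two choices of inner product can be illustrated with a small helper (hypothetical names, not from the paper):

```python
import numpy as np

def inner(z1, z2, mass=None):
    """Euclidean inner product, or the mass-matrix-weighted one for FE coefficients."""
    return float(z1 @ z2) if mass is None else float(z1 @ (mass @ z2))

z = np.array([1.0, 2.0])
M_mass = np.diag([2.0, 3.0])     # toy mass matrix
assert inner(z, z) == 5.0        # plain 2-norm squared: 1 + 4
assert inner(z, z, M_mass) == 14.0   # weighted norm squared: 2*1 + 3*4
```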

7.1 Output error estimation for the reduced-order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall that in Section 3 $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ are linear; the evolution scheme (7) can therefore be rewritten in the vector space as

$$A^{(n)} u^{n+1}(\mu) = B^{(n)} u^n(\mu) + g\big(u^n(\mu); \mu\big), \qquad (15)$$

where $A^{(n)}, B^{(n)} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ are constant matrices and $g(u^n(\mu);\mu) \in \mathbb{R}^{\mathcal{N}}$ corresponds to the nonlinear term. Note that $A^{(n)}$ and $B^{(n)}$ are nonsingular for a stable scheme in practice, $n = 0, \ldots, K-1$.

Given a parameter $\mu \in \mathcal{P}$, let $\bar{u}^n(\mu) = V a^n(\mu)$ be the RB approximation of $u^n(\mu)$, and $\bar{g}^n(\mu) = I_M[g(\bar{u}^n(\mu))] = W \beta^n(\mu)$ be the interpolant of the nonlinear term, where $V \in \mathbb{R}^{\mathcal{N} \times N}$, $W \in \mathbb{R}^{\mathcal{N} \times M}$ are the precomputed parameter-independent bases, and $a^n(\mu) \in \mathbb{R}^N$, $\beta^n(\mu) \in \mathbb{R}^M$ are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on $\mu$ in $u^n(\mu)$, $\bar{u}^n(\mu)$, $a^n(\mu)$ and $\beta^n(\mu)$ and write $u^n$, $\bar{u}^n$, $a^n$ and $\beta^n$ instead. The following a posteriori error estimation is based on the residual

$$r^{n+1}(\mu) = B^{(n)} \bar{u}^n + I_M[g(\bar{u}^n)] - A^{(n)} \bar{u}^{n+1}. \qquad (16)$$

With a simple computation we get the norm of the residual:

$$\begin{aligned}
\|r^{n+1}(\mu)\|^2 &= \big\langle r^{n+1}(\mu),\, r^{n+1}(\mu) \big\rangle \\
&= (a^n)^T \underline{V^T (B^{(n)})^T B^{(n)} V}\, a^n + (\beta^n)^T \underline{W^T W}\, \beta^n + (a^{n+1})^T \underline{V^T (A^{(n)})^T A^{(n)} V}\, a^{n+1} \\
&\quad + 2\, (\beta^n)^T \underline{W^T B^{(n)} V}\, a^n - 2\, (a^n)^T \underline{V^T (B^{(n)})^T A^{(n)} V}\, a^{n+1} - 2\, (\beta^n)^T \underline{W^T A^{(n)} V}\, a^{n+1}.
\end{aligned} \qquad (17)$$

Proposition 7.1. Assume that the operator $g: \mathbb{R}^{\mathcal{N}} \to \mathbb{R}^{\mathcal{N}}$ is Lipschitz continuous, i.e., there exists a positive constant $L_g$ such that

$$\|g(x) - g(y)\| \le L_g \|x - y\|, \qquad x, y \in \mathcal{W}_{\mathcal{N}},$$

and that the interpolation of $g$ is "exact" with a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

$$I_{M+M'}[g(\bar{u}^n)] = \sum_{m=1}^{M+M'} W_m \cdot \beta^n_m = g(\bar{u}^n).$$

Assume further that for all $\mu \in \mathcal{P}$ the initial projection error vanishes, $e^0(\mu) = 0$. Then the approximation error $e^n(\mu) = u^n - \bar{u}^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \Big( \big\|(A^{(k)})^{-1}\big\| \prod_{j=k+1}^{n-1} G(j) \Big) \Big( \varepsilon^k_{EI}(\mu) + \big\|r^{k+1}(\mu)\big\| \Big), \qquad (18)$$

where $G(j) = \big\|(A^{(j)})^{-1}\big\| \big( \|B^{(j)}\| + L_g \big)$ and $\varepsilon^n_{EI}(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta^n_{N,M}(\mu) := \sum_{k=0}^{n-1} \Big( \prod_{j=k+1}^{n-1} G_F(j) \Big) \Big( \big\|(A^{(k)})^{-1}\big\| \varepsilon^k_{EI}(\mu) + \big\|(A^{(k)})^{-1} r^{k+1}(\mu)\big\| \Big), \qquad (19)$$

where $G_F(j) = \big\|(A^{(j)})^{-1} B^{(j)}\big\| + L_g \big\|(A^{(j)})^{-1}\big\|$, $n = 0, \ldots, K-1$.

Proof. By forming the difference of (15) and (16), we have the error equation

$$\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\bar{u}^n)] + r^{n+1}(\mu) \\
&= B^{(n)} e^n(\mu) + \big( g(u^n) - g(\bar{u}^n) \big) + \big( g(\bar{u}^n) - I_M[g(\bar{u}^n)] \big) + r^{n+1}(\mu).
\end{aligned} \qquad (20)$$

Multiplying by $(A^{(n)})^{-1}$ on both sides of (20), we obtain

$$\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \big( g(u^n) - g(\bar{u}^n) \big) \\
&\quad + (A^{(n)})^{-1} \big( g(\bar{u}^n) - I_M[g(\bar{u}^n)] \big) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\bar{u}^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}(\mu)\| \le \big\|(A^{(n)})^{-1}\big\| \Big( \big( \|B^{(n)}\| + L_g \big) \|e^n(\mu)\| + \varepsilon^n_{EI}(\mu) + \|r^{n+1}(\mu)\| \Big), \qquad (22)$$

where $\varepsilon^n_{EI}(\mu) = \|g(\bar{u}^n) - I_M[g(\bar{u}^n)]\| = \big\|\sum_{m=M+1}^{M+M'} W_m \cdot \beta^n_m\big\| \le \sum_{m=M+1}^{M+M'} \|W_m\| \cdot |\beta^n_m|$. Resolving the recursion (22) with initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we revisit equation (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}(\mu)\|$ is of the following form:

$$\|e^{n+1}(\mu)\| \le \Big( \big\|(A^{(n)})^{-1} B^{(n)}\big\| + L_g \big\|(A^{(n)})^{-1}\big\| \Big) \|e^n(\mu)\| + \big\|(A^{(n)})^{-1}\big\| \varepsilon^n_{EI}(\mu) + \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|, \qquad (23)$$

since the following two inequalities hold: $\|(A^{(n)})^{-1} B^{(n)}\| \le \|(A^{(n)})^{-1}\| \|B^{(n)}\|$ and $\|(A^{(n)})^{-1} r^{n+1}\| \le \|(A^{(n)})^{-1}\| \|r^{n+1}\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19). □
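Accumulating the recursion (23) is inexpensive once the norm quantities are available. The following sketch assumes constant $A$ and $B$ (cf. Remark 7.2 below); the function and its inputs are illustrative, not code from the paper:

```python
def error_bound(Ainv_norm, AinvB_norm, Lg, eps_EI, res_norms):
    """Accumulate the recursion (23),
        e_{n+1} <= GF * e_n + ||A^{-1}|| * eps_EI[n] + ||A^{-1} r^{n+1}||,
    with GF = ||A^{-1} B|| + Lg * ||A^{-1}|| for constant A, B.
    eps_EI[n] is the EI error estimate and res_norms[n] = ||A^{-1} r^{n+1}||."""
    GF = AinvB_norm + Lg * Ainv_norm
    bounds, e = [], 0.0                 # e^0 = 0: exact initial projection
    for eps, res in zip(eps_EI, res_norms):
        e = GF * e + Ainv_norm * eps + res
        bounds.append(e)
    return bounds

# GF = 0.9 + 0.1 * 0.5 = 0.95 < 1, so the accumulated bound stays controlled
b = error_bound(0.5, 0.9, 0.1, eps_EI=[0.0, 0.0], res_norms=[1e-3, 1e-3])
```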

Remark 7.2. In many cases the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}, B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11] the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (5.5) in [11]. However, the error bound may grow exponentially when $G(j) = \|(A^{(j)})^{-1}\| \big( \|B^{(j)}\| + L_g \big) > 1$ in (18). In the vector space this problem can easily be avoided by using (23) instead of (22) if $G_F(j) = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$. In addition, as is shown in [11], a small $M'$ gives good results; in practice we use $M' = 1$ in the subsequent simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$
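The spectral-norm identity above is easy to verify numerically (an illustrative check, not from the paper):

```python
import numpy as np

# ||H^{-1}||_2 = sigma_max(H^{-1}) = 1 / sigma_min(H): the quantity needed by
# the error bounds is computable from an SVD of H, without forming H^{-1}.
rng = np.random.default_rng(1)
H = 50.0 * np.eye(40) + rng.standard_normal((40, 40))   # well-conditioned test matrix
sigma_min = np.linalg.svd(H, compute_uv=False).min()
norm_inv = np.linalg.norm(np.linalg.inv(H), 2)          # explicit inverse, check only
```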

As a result, the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some outputs. In such cases it is desirable to estimate the output error, so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest $y(u^n(\mu))$ can be expressed in the following form:

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix. Then the output error $e^n_O(\mu) = P u^n - P \bar{u}^n$ satisfies

$$\|e^{n+1}_O(\mu)\| \le \eta^{n+1}_{N,M} := G_O(n)\, \eta^n_{N,M} + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|, \qquad (25)$$

where $G_O(n) = \big\|P (A^{(n)})^{-1} B^{(n)}\big\| + L_g \big\|P (A^{(n)})^{-1}\big\|$, $n = 0, \ldots, K-1$.

Proof. Multiplying by $P$ from the left on both sides of the error equation (21), we get

$$P e^{n+1}(\mu) = P \Big( (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1} \big( g(u^n) - g(\bar{u}^n) \big) + (A^{(n)})^{-1} \big( g(\bar{u}^n) - I_M[g(\bar{u}^n)] \big) + (A^{(n)})^{-1} r^{n+1}(\mu) \Big).$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the property of the matrix norm, we have

$$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O(n)\, \|e^n(\mu)\| + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

$$\begin{aligned}
\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| &\le \|P\|\, \|e^{n+1}(\mu)\| \\
&\le \|P\| \Big( G_F(n)\, \|e^n(\mu)\| + \big\|(A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\| \Big).
\end{aligned} \qquad (27)$$

The last inequality holds due to the inequality (23). It is obvious that the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution of each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the detailed simulation for $c_z$ (see (12)):

$$A c^{n+1}_z = B c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon} \Delta t\, h^n_z. \qquad (28)$$

The residual caused by the approximate solution $\bar{c}^n_z$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \bar{c}^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon} \Delta t\, I_M[h_z(\bar{c}^n_z)] - A \bar{c}^{n+1}_z. \qquad (29)$$

Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition, which does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\bar{c}^n_a, \bar{c}^n_b, \bar{q}^n_z)\| \le L_h \|c^n_z - \bar{c}^n_z\|, \qquad n = 0, \ldots, K. \qquad (30)$$

Assuming that the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \bar{c}^n_z$ $(n = 1, \ldots, K)$ as follows:

$$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{n-1-k} \Big( \tau\, \varepsilon^k_{EI}(\mu) + \big\|r^{k+1}_{c_z}(\mu)\big\| \Big), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon} \Delta t$. More tightly,

$$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F,c})^{n-1-k} \Big( \tau\, \|A^{-1}\|\, \varepsilon^k_{EI}(\mu) + \big\|A^{-1} r^{k+1}_{c_z}(\mu)\big\| \Big), \qquad (32)$$

where $G_{F,c} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest $e^n_{c_z,O}(\mu) = P c^n_z - P \bar{c}^n_z$ can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\|e^{n+1}_{c_z,O}(\mu)\| \le \eta^{n+1}_{N,M,c_z}(\mu) := G_{O,c}\, \eta^n_{N,M,c_z}(\mu) + \tau\, \|P A^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|A^{-1} r^{n+1}_{c_z}(\mu)\big\|, \qquad (33)$$

where $G_{O,c} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1 \times \mathcal{N}}$ in this model, which means that the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and therefore are not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; the same would hold for the error bound (denoted by $\eta^n_{N,M,U}(\mu)$) for the vector $U$ if the output error estimation were derived by considering all the field variables together. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be conservatively chosen large, and the weight $\tau L_h$ remains small because the time step $\Delta t$ is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates over time. Since $\eta^n_{N,M}(\mu)$ (or $\eta^n_{N,M,c_z}(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates over time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that an error estimate such as (18) may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to obtain an early stop of the extension process, as is shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: $P_{\mathrm{train}}$, $\mu^0$, $tol_{RB}$ $(< 1)$, $tol_{\mathrm{decay}}$
Output: RB $V = [V_1, \ldots, V_N]$
 1: Implement Step 1 in Algorithm 4.
 2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
 3:   Implement Steps 3−6 in Algorithm 4.
 4:   Compute the decay rate of the error bound, $d_\eta = \dfrac{\eta_{N-1}(\mu^{\mathrm{old}}_{\max}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu^{\mathrm{old}}_{\max})}$.
 5:   if $d_\eta < tol_{\mathrm{decay}}$ then
 6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
 7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
 8:       Stop.
 9:     end if
10:   end if
11: end while
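The control flow of the early-stop test can be sketched as follows. The callbacks stand in for the problem-specific routines of Algorithm 4 and are purely illustrative; the toy model at the bottom mimics a bound that stagnates while the true error keeps decreasing:

```python
def greedy_with_early_stop(train_set, extend_basis, error_bound, true_error,
                           tol_rb=1e-6, tol_decay=1e-3, max_iter=100):
    """Schematic greedy loop with the early-stop test: when the bound's decay
    rate falls below tol_decay, fall back to the true output error."""
    eta_old, n_ext = None, 0
    for _ in range(max_iter):
        mu_max, eta = max(((mu, error_bound(mu)) for mu in train_set),
                          key=lambda p: p[1])
        if eta <= tol_rb:
            break
        if eta_old is not None and (eta_old - eta) / eta_old < tol_decay:
            # bound stagnates: validate with the true output error
            if true_error(mu_max) < tol_rb:
                break
        extend_basis(mu_max)
        eta_old = eta
        n_ext += 1
    return n_ext

# toy surrogate: the bound halves per extension but stagnates at 1e-3,
# while the true error decays twice as fast and drops below tol_rb
state = {"n": 0}
bound = lambda mu: max(2.0 ** (-state["n"]), 1e-3)
truth = lambda mu: 2.0 ** (-2 * state["n"])
extend = lambda mu: state.__setitem__("n", state["n"] + 1)
n = greedy_with_early_stop([0.1, 0.2], extend, bound, truth, tol_rb=1e-6)
```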

Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance $tol_{\mathrm{decay}}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{\mathrm{in}})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $Pr(\mu) = \frac{s(\mu)\, Q}{t_{\mathrm{cyc}}}$ while respecting the requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{\mathrm{in}} (c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\, dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\, dt$, where $c_{z,O}(t;\mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ $(z = a, b)$ at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\begin{aligned}
& \min_{\mu \in \mathcal{P}} \; -Pr(\mu) \\
& \text{s.t.} \quad Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& \qquad\; c_z(\mu),\, q_z(\mu) \text{ are the solutions of the system (3)--(5)}, \; z = a, b.
\end{aligned} \qquad (34)$$

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes a lot of difficulties in the error estimation and in the generation of the reduced basis.
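Given outlet concentrations on a time grid and the cutting points, evaluating $s(\mu)$, $Pr(\mu)$ and $Rec(\mu)$ is a quadrature exercise. A minimal sketch with hypothetical names (not code from the paper; the cutting times are assumed already fixed by the purity requirements):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal quadrature (kept local to avoid library-version issues)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def performance(t, ca_out, cb_out, cuts, Q, t_in, t_cyc, cf_a, cf_b):
    """Evaluate (Pr, Rec) from the outlet concentrations on the time grid t."""
    t1, t2, t3, t4 = cuts
    wa = (t >= t3) & (t <= t4)          # collection window of component a
    wb = (t >= t1) & (t <= t2)          # collection window of component b
    s = trapezoid(ca_out[wa], t[wa]) + trapezoid(cb_out[wb], t[wb])
    return s * Q / t_cyc, s / (t_in * (cf_a + cf_b))

# toy check: unit concentrations over unit-length windows give s = 2
t = np.linspace(0.0, 10.0, 11)
Pr, Rec = performance(t, np.ones_like(t), np.ones_like(t),
                      cuts=(1.0, 2.0, 3.0, 4.0),
                      Q=1.0, t_in=1.0, t_cyc=2.0, cf_a=1.0, cf_b=1.0)
```

As the surrounding text notes, in the real model the grid must be fine enough that the cutting points and the integrals are resolved accurately.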

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $\mathcal{N}$ in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

    Column dimensions [cm]                                2.6 × 10.5
    Column porosity ε [-]                                 0.4
    Peclet number Pe [-]                                  2000
    Mass-transfer coefficients κ_z, z = a, b [1/s]        0.1
    Feed concentrations c^f_z, z = a, b [g/l]             2.9

Table 2: Coefficients of the adsorption isotherm equation.

    H_a1 [-]     2.69      H_b1 [-]     3.73
    H_a2 [-]     0.1       H_b2 [-]     0.3
    K_a1 [l/g]   0.0336    K_b1 [l/g]   0.0446
    K_a2 [l/g]   1.0       K_b2 [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and the CRB, then show the output error estimation for the generation of the RB. Finally we present the results for the ROM-based optimization of batch chromatography. All computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the adaptive snapshot selection technique, we compare the runtime for the generation of the RB and the CRB with different threshold values $tol_{ASS}$. As is shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{\mathrm{in}})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0 \times 10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of the CRBs $(W_a, W_b)$ at the same error tolerance ($tol_{CRB} = 1.0 \times 10^{-7}$) with different thresholds. $M' = 1$ is the number of basis vectors for error estimation.

              tol_ASS       Res(ξ^a_{M+M'}, ξ^b_{M+M'})    M (W_a, W_b)   Runtime [h]
    no ASS    –             9.2×10⁻⁸, 8.5×10⁻⁸             146, 152       6.25 (–)
    ASS       1.0×10⁻⁴      9.6×10⁻⁸, 8.1×10⁻⁸             147, 152       0.605 (−90.3%)
    ASS       1.0×10⁻³      8.7×10⁻⁸, 9.9×10⁻⁸             147, 152       0.362 (−94.2%)
    ASS       1.0×10⁻²      9.4×10⁻⁸, 6.2×10⁻⁸             144, 150       0.270 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that for the ASS variant the CRB is precomputed with $tol_{ASS} = 1.0 \times 10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{RB} = 1.0 \times 10^{-6}$, $tol_{ASS} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as is shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

    Algorithms          Runtime [h]
    POD-Greedy          16.22*
    ASS-POD-Greedy      7.92 (−51.2%)

    * Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error bound in Algorithm 4:

$$\eta_N(\mu_{\max}) = \max_{\mu \in P_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu),$$

where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu \in P_{\mathrm{train}}} e_N(\mu)$, where $e_N(\mu) = \max_{z \in \{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \bar{c}^n_{z,O}(\mu)\|$, and $\bar{c}^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the decay of the error bound and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to get an early stop.

Figure 3 shows the results for Algorithm 5, where $tol_{\mathrm{decay}} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and the reduced simulations using a validation set $P_{\mathrm{val}}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max.\ error} = \max_{\mu \in P_{\mathrm{val}}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter $\mu = (Q, t_{\mathrm{in}}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure: semilogarithmic plot of $\max_{\mu \in P_{\mathrm{train}}}$ error versus the size of the RB, $N$ (from 6 to 66); y-axis from $10^{-9}$ to $10^{3}$; curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the decay of the error bound during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu \in P_{\mathrm{train}}} \max_{z \in \{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K} \sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.

[Figure: semilogarithmic plot of $\max_{\mu \in P_{\mathrm{train}}}$ error versus the size of the RB, $N$ (from 6 to 56); y-axis from $10^{-7}$ to $10^{3}$; curves: output error bound, true output error.]

Figure 3: Decay of the error bound during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 near here: scatter plot of the selected parameters, feed flow rate Q versus injection period tin, over the domain [0.0667, 0.1667] × [0.5, 2.0].]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set Pval with 600 random sample points. Tolerances for the generation of the ROM: tolCRB = 1.0 × 10^-7, tolASS = 1.0 × 10^-4, tolRB = 1.0 × 10^-6.

Simulations          | Max error   | Average runtime [s] | SpF
FOM (N = 1500)       | –           | 312.13              | (-)
ROM, POD-Greedy      | 3.79 × 10^-7 | 6.3                 | 50
ROM, ASS-POD-Greedy  | 4.58 × 10^-7 | 6.3                 | 50

[Figure 5 near here: dimensionless concentration at the column outlet versus dimensionless time; curves: ca-FOM, cb-FOM, ca-ROM, cb-ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, tin) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

\[
\begin{aligned}
& \min_{\mu\in\mathcal{P}} \; -\bar{Pr}(\mu) \\
& \text{s.t.} \quad Rec_{\min} - \bar{Rec}(\mu) \le 0, \\
& \quad\;\; \bar c^{\,n}_z(\mu),\, \bar q^{\,n}_z(\mu) \text{ are the RB approximations from the ROM } (14),\ z = a, b.
\end{aligned}
\]

Here \bar{Pr}(µ) and \bar{Rec}(µ) are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ, the production and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let µk be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖µk+1 − µk‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^-4. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime is significantly reduced compared with solving the FOM-based optimization; the speed-up factor (SpF) is 54.
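For illustration, the sketch below mimics this outer loop with a simple pattern search on a toy penalized objective (pure Python; the surrogate functions `pr` and `rec` and all constants are illustrative stand-ins, not the NLopt DIRECT-L setup or the chromatography model), reusing the stopping rule ‖µk+1 − µk‖ < tol_opt:

```python
import math

def pr(mu):                      # toy "production" surrogate, peak inside the box
    q, tin = mu
    return -((q - 0.08) ** 2 + 0.01 * (tin - 1.05) ** 2)

def rec(mu):                     # toy "recovery" surrogate
    q, tin = mu
    return 0.9 - 0.5 * (q - 0.0667)

def objective(mu, rec_min=0.8, penalty=1e3):
    # penalized constraint handling: feasible iff rec(mu) >= rec_min
    return -pr(mu) + penalty * max(0.0, rec_min - rec(mu)) ** 2

def pattern_search(x0, step=0.02, tol_opt=1e-4):
    x = list(x0)
    while step > tol_opt / 2:
        best = x
        for dq, dt in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = [x[0] + dq, x[1] + dt]
            if objective(cand) < objective(best):
                best = cand
        if math.dist(best, x) < tol_opt:   # stopping rule ||mu_{k+1} - mu_k|| < tol_opt
            step /= 2                      # refine the stencil and continue
        x = best
    return x

mu_opt = pattern_search([0.1, 1.5])
print(round(mu_opt[0], 3), round(mu_opt[1], 2))
```

Any derivative-free method (DIRECT-L, compass search, etc.) fits here, since each objective evaluation only requires one cheap ROM solve.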

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one for constructing and using a surrogate ROM, the other for directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM.

Simulations    | Objective (Pr) | Opt. solution (µ)  | N_it ¹ | Runtime [h] | SpF
FOM-based Opt. | 0.020264       | (0.07964, 1.05445) | 202    | 33.88       | -
ROM-based Opt. | 0.020266       | (0.07964, 1.05445) | 202    | 0.63        | 54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique was proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal Control of Partial Differential Equations: International Conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


variable. For many applications, the output response y(uN) is of interest. Hence, during the process of the greedy algorithm, e.g., Algorithm 1 or Algorithm 4, the error estimation ηN(µmax) should be the error estimation for the output response, which is expected to be more accurate and reasonable.

In what follows, the inner product is defined as ⟨z1, z2⟩ = z1^T z2, ∀ z1, z2 ∈ R^N. The induced norm ‖·‖ is the standard 2-norm in the Euclidean space. However, if the discrete system of equations is obtained by using the finite element method, the solution to the discrete system is actually the coefficient vector corresponding to the basis vectors of the solution space. In such a case, the inner product should be defined properly with the mass matrix of the solution space, and the norm will be the correspondingly induced norm.

7.1 Output error estimation for the reduced order model

For a parametrized evolution equation (6), we derive an output error bound in the vector space for the ROM (9). Recall that in Section 3, LI(tn) and LE(tn) are linear; the evolution scheme (7) can thus be rewritten as follows in the vector space:

\[
A^{(n)} u^{n+1}(\mu) = B^{(n)} u^{n}(\mu) + g\left(u^{n}(\mu), \mu\right), \tag{15}
\]

where A^{(n)}, B^{(n)} ∈ R^{N×N} are constant matrices and g(u^n(µ), µ) ∈ R^N corresponds to the nonlinear term. Note that A^{(n)} and B^{(n)} are nonsingular for a stable scheme in practice, n = 0, ..., K − 1.
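As a concrete illustration of the scheme (15), each time step solves a linear system with A^{(n)}. The sketch below uses illustrative sizes, constant matrices, and a toy Lipschitz nonlinearity (not the chromatography operators):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 50, 10
A = np.eye(N) * 1.1                 # nonsingular, as required for a stable scheme
B = np.eye(N) * 0.9
g = lambda u: 0.01 * np.tanh(u)     # Lipschitz nonlinear term (L_g ~ 0.01)

u = np.ones(N)
for n in range(K):                  # u^{n+1} = A^{-1} (B u^n + g(u^n))
    u = np.linalg.solve(A, B @ u + g(u))
print(u.shape == (N,) and bool(np.all(np.isfinite(u))))  # → True
```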

Given a parameter µ ∈ P, let ūn(µ) = V an(µ) be the RB approximation of un(µ), and ḡn(µ) = IM[g(ūn(µ))] = W βn(µ) be the interpolant of the nonlinear term, where V ∈ R^{N×N}, W ∈ R^{N×M} are the precomputed parameter-independent bases, and an(µ) ∈ R^N, βn(µ) ∈ R^M are parameter-dependent coefficients. In the following, for the sake of simplicity, we omit the explicit dependence on µ in un(µ), ūn(µ), an(µ) and βn(µ), and write un, ūn, an and βn instead. The following a posteriori error estimation is based on the residual

\[
r^{n+1}(\mu) = B^{(n)} \bar u^{n} + I_M[g(\bar u^{n})] - A^{(n)} \bar u^{n+1}. \tag{16}
\]

With a simple computation, we get the norm of the residual:

\[
\begin{aligned}
\left\| r^{n+1}(\mu) \right\|^2 &= \left\langle r^{n+1}(\mu),\, r^{n+1}(\mu) \right\rangle \\
&= (a^{n})^T V^T (B^{(n)})^T B^{(n)} V a^{n} + (\beta^{n})^T W^T W \beta^{n} \\
&\quad + (a^{n+1})^T V^T (A^{(n)})^T A^{(n)} V a^{n+1} + 2\, (\beta^{n})^T W^T B^{(n)} V a^{n} \\
&\quad - 2\, (a^{n})^T V^T (B^{(n)})^T A^{(n)} V a^{n+1} - 2\, (\beta^{n})^T W^T A^{(n)} V a^{n+1}.
\end{aligned} \tag{17}
\]

Proposition 7.1. Assume that the operator g: R^N → R^N is Lipschitz continuous, i.e., there exists a positive constant Lg such that

\[
\| g(x) - g(y) \| \le L_g \| x - y \|, \qquad x, y \in \mathcal{W}^N,
\]

and that the interpolation of g is "exact" with a certain dimension of W = [W1, ..., W_{M+M'}], i.e.,

\[
I_{M+M'}[g(\bar u^{n})] = \sum_{m=1}^{M+M'} W_m \, \beta^{n}_m = g(\bar u^{n}).
\]

Assume again that for all µ ∈ P the initial projection error is vanishing, e^0(µ) = 0. Then the approximation error e^n(µ) = u^n − ū^n satisfies

\[
\| e^{n}(\mu) \| \le \sum_{k=0}^{n-1} \left\| (A^{(k)})^{-1} \right\| \left( \prod_{j=k+1}^{n-1} G^{(j)} \right) \left( \varepsilon^{k}_{EI}(\mu) + \left\| r^{k+1}(\mu) \right\| \right), \tag{18}
\]

where G^{(j)} = ‖(A^{(j)})^{-1}‖ (‖B^{(j)}‖ + Lg), and ε^n_EI(µ) is the error due to the interpolation. A sharper error bound can be given as

\[
\| e^{n}(\mu) \| \le \eta^{n}_{N,M}(\mu) := \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} G_F^{(j)} \right) \left( \left\| (A^{(k)})^{-1} \right\| \varepsilon^{k}_{EI}(\mu) + \left\| (A^{(k)})^{-1} r^{k+1}(\mu) \right\| \right), \tag{19}
\]

where G_F^{(j)} = ‖(A^{(j)})^{-1} B^{(j)}‖ + Lg ‖(A^{(j)})^{-1}‖, n = 0, ..., K − 1.

Proof. By forming the difference of (15) and (16), we have the error equation

\[
\begin{aligned}
A^{(n)} e^{n+1}(\mu) &= B^{(n)} e^{n}(\mu) + g(u^{n}) - I_M[g(\bar u^{n})] + r^{n+1}(\mu) \\
&= B^{(n)} e^{n}(\mu) + \left( g(u^{n}) - g(\bar u^{n}) \right) + \left( g(\bar u^{n}) - I_M[g(\bar u^{n})] \right) + r^{n+1}(\mu).
\end{aligned} \tag{20}
\]

Multiplying by (A^{(n)})^{-1} on both sides of (20), we obtain

\[
\begin{aligned}
e^{n+1}(\mu) &= (A^{(n)})^{-1} B^{(n)} e^{n}(\mu) + (A^{(n)})^{-1} \left( g(u^{n}) - g(\bar u^{n}) \right) \\
&\quad + (A^{(n)})^{-1} \left( g(\bar u^{n}) - I_M[g(\bar u^{n})] \right) + (A^{(n)})^{-1} r^{n+1}(\mu).
\end{aligned} \tag{21}
\]

Applying the Lipschitz condition of g, we have ‖g(u^n) − g(ū^n)‖ ≤ Lg ‖e^n(µ)‖. Then, by the triangle inequality and the property of the matrix norm, we have

\[
\left\| e^{n+1}(\mu) \right\| \le \left\| (A^{(n)})^{-1} \right\| \left( \left( \left\| B^{(n)} \right\| + L_g \right) \left\| e^{n}(\mu) \right\| + \varepsilon^{n}_{EI}(\mu) + \left\| r^{n+1}(\mu) \right\| \right), \tag{22}
\]

where ε^n_EI(µ) = ‖g(ū^n) − IM[g(ū^n)]‖ = ‖Σ_{m=M+1}^{M+M'} Wm β^n_m‖. Resolving the recursion (22) with the initial error ‖e^0(µ)‖ = 0 yields the error bound in (18).

To get the error bound in (19), we re-observe the equation in (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for ‖e^{n+1}(µ)‖ is of the following form:

\[
\left\| e^{n+1}(\mu) \right\| \le \left( \left\| (A^{(n)})^{-1} B^{(n)} \right\| + L_g \left\| (A^{(n)})^{-1} \right\| \right) \left\| e^{n}(\mu) \right\| + \left\| (A^{(n)})^{-1} \right\| \varepsilon^{n}_{EI}(\mu) + \left\| (A^{(n)})^{-1} r^{n+1}(\mu) \right\|, \tag{23}
\]

since the following two inequalities hold: ‖(A^{(n)})^{-1} B^{(n)}‖ ≤ ‖(A^{(n)})^{-1}‖ ‖B^{(n)}‖ and ‖(A^{(n)})^{-1} r^{n+1}‖ ≤ ‖(A^{(n)})^{-1}‖ ‖r^{n+1}‖. Resolving the recursion (23) with the initial error ‖e^0(µ)‖ = 0 yields the proposed error bound in (19). □
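The equivalence of the recursion (23) (with e^0 = 0) and the summed form (19) can be checked numerically. The sketch below uses illustrative per-step constants G_F^{(j)} and per-step contributions α_k = ‖(A^{(k)})^{-1}‖ ε^k_EI + ‖(A^{(k)})^{-1} r^{k+1}‖:

```python
import math
import random

random.seed(1)
K = 20
GF    = [0.9 + 0.05 * random.random() for _ in range(K)]   # G_F^{(j)} per step
alpha = [1e-3 * random.random() for _ in range(K)]         # per-step residual/EI terms

# recursion (23): eta^{n+1} = GF^{(n)} * eta^n + alpha_n, starting from eta^0 = 0
eta_rec = [0.0]
for n in range(K):
    eta_rec.append(GF[n] * eta_rec[-1] + alpha[n])

# closed form (19): sum over k of (prod_{j=k+1}^{n-1} GF^{(j)}) * alpha_k
def eta_closed(n):
    total = 0.0
    for k in range(n):
        prod = 1.0
        for j in range(k + 1, n):
            prod *= GF[j]
        total += prod * alpha[k]
    return total

print(all(math.isclose(eta_rec[n], eta_closed(n)) for n in range(K + 1)))  # → True
```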

Remark 7.2. In many cases, the operators LI(tn) and LE(tn) in (7) are independent of tn, so that the coefficient matrices A^{(n)}, B^{(n)} in (15) are constant matrices; see, e.g., (12). In such a case, the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when G^{(j)} = ‖(A^{(j)})^{-1}‖ (‖B^{(j)}‖ + Lg) > 1 in (18). In the vector space, this problem can be easily avoided by using (23) instead of (22) if G_F^{(j)} = ‖(A^{(j)})^{-1} B^{(j)}‖ + Lg ‖(A^{(j)})^{-1}‖ ≤ 1, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual r^n(µ) by using (17). Note that all parameter-independent matrix products in (17), e.g., V^T (B^{(n)})^T B^{(n)} V and W^T W, can be precomputed once V and W are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of ‖A^{-1} r^n(µ)‖ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of N. In addition, as is shown in [11], a small M' gives good results; in practice, we use M' = 1 in the later simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix H is its spectral norm; therefore ‖H^{-1}‖ = σmax(H^{-1}) = 1/σmin(H).

As a result, the error bounds above are computable.

In many applications, the quantity of interest is not the field variable itself but some outputs. In such cases, it is desirable to estimate the output error, so as to construct a ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the output error estimate below.
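The offline-online splitting behind Remark 7.4, together with the spectral-norm evaluation of Remark 7.5, can be illustrated as follows. This is only a sketch: random matrices stand in for A, B, V, W, and the block names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
Nfull, Nrb, M = 200, 8, 5          # illustrative sizes (full, RB, interpolation)
A = np.eye(Nfull) + 0.01 * rng.standard_normal((Nfull, Nfull))
B = np.eye(Nfull) - 0.02 * rng.standard_normal((Nfull, Nfull))
V = np.linalg.qr(rng.standard_normal((Nfull, Nrb)))[0]
W = np.linalg.qr(rng.standard_normal((Nfull, M)))[0]

# offline: parameter-independent blocks of (17), assembled once
BVtBV = V.T @ B.T @ B @ V
WtW   = W.T @ W
AVtAV = V.T @ A.T @ A @ V
WtBV  = W.T @ B @ V
BVtAV = V.T @ B.T @ A @ V
WtAV  = W.T @ A @ V

def res_norm2(a0, a1, beta):
    """||r^{n+1}||^2 via (17); cost independent of the full dimension Nfull."""
    return (a0 @ BVtBV @ a0 + beta @ WtW @ beta + a1 @ AVtAV @ a1
            + 2 * beta @ WtBV @ a0 - 2 * a0 @ BVtAV @ a1 - 2 * beta @ WtAV @ a1)

a0, a1, beta = (rng.standard_normal(Nrb), rng.standard_normal(Nrb),
                rng.standard_normal(M))
r = B @ V @ a0 + W @ beta - A @ V @ a1      # direct high-dimensional residual
print(np.isclose(res_norm2(a0, a1, beta), r @ r))                 # → True

# Remark 7.5: the spectral norm of A^{-1} equals 1/sigma_min(A)
smin = np.linalg.svd(A, compute_uv=False).min()
print(np.isclose(np.linalg.norm(np.linalg.inv(A), 2), 1 / smin))  # → True
```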

Proposition 7.6. Under the assumptions of Proposition 7.1, assume that the output of interest y(un(µ)) can be expressed in the following form:

\[
y(u^{n}(\mu)) = P u^{n}, \tag{24}
\]

where P ∈ R^{NO×N} is a constant matrix. Then the output error e^n_O(µ) = P u^n − P ū^n satisfies

\[
\left\| e^{n+1}_O(\mu) \right\| \le \bar\eta^{\,n+1}_{N,M}(\mu) := G_O^{(n)} \eta^{n}_{N,M}(\mu) + \left\| P (A^{(n)})^{-1} \right\| \varepsilon^{n}_{EI}(\mu) + \| P \| \left\| (A^{(n)})^{-1} r^{n+1}(\mu) \right\|, \tag{25}
\]

where G_O^{(n)} = ‖P (A^{(n)})^{-1} B^{(n)}‖ + Lg ‖P (A^{(n)})^{-1}‖, n = 0, ..., K − 1.

Proof. Multiplying by P from the left on both sides of the error equation (21), we get

\[
P e^{n+1}(\mu) = P \left( (A^{(n)})^{-1} B^{(n)} e^{n}(\mu) + (A^{(n)})^{-1} \left( g(u^{n}) - g(\bar u^{n}) \right) + (A^{(n)})^{-1} \left( g(\bar u^{n}) - I_M[g(\bar u^{n})] \right) + (A^{(n)})^{-1} r^{n+1}(\mu) \right).
\]

Applying the Lipschitz condition of g and using the triangle inequality as well as the property of the matrix norm, we have

\[
\left\| e^{n+1}_O(\mu) \right\| = \left\| P e^{n+1}(\mu) \right\| \le G_O^{(n)} \left\| e^{n}(\mu) \right\| + \left\| P (A^{(n)})^{-1} \right\| \varepsilon^{n}_{EI}(\mu) + \| P \| \left\| (A^{(n)})^{-1} r^{n+1}(\mu) \right\|. \tag{26}
\]

Replacing ‖e^n(µ)‖ in (26) with its bound in (19), we get the proposed output error bound in (25). □

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

\[
\begin{aligned}
\left\| e^{n+1}_O(\mu) \right\| = \left\| P e^{n+1}(\mu) \right\| &\le \| P \| \left\| e^{n+1}(\mu) \right\| \\
&\le \| P \| \left( G_F^{(n)} \left\| e^{n}(\mu) \right\| + \left\| (A^{(n)})^{-1} \right\| \varepsilon^{n}_{EI}(\mu) + \left\| (A^{(n)})^{-1} r^{n+1}(\mu) \right\| \right).
\end{aligned} \tag{27}
\]

The last inequality is true due to the inequality (23). It is obvious that the bound for ‖e^{n+1}_O(µ)‖ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).
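The submultiplicativity argument behind the comparison of (26) and (27), namely ‖P A^{-1} B‖ ≤ ‖P‖ ‖A^{-1} B‖, can be spot-checked numerically (random matrices, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
ok = True
for _ in range(20):
    A = np.eye(30) + 0.1 * rng.standard_normal((30, 30))
    Bm = rng.standard_normal((30, 30))
    P = rng.standard_normal((1, 30))
    Ainv = np.linalg.inv(A)
    # spectral norms: the constant in (26) versus the trivial one in (27)
    lhs = np.linalg.norm(P @ Ainv @ Bm, 2)
    rhs = np.linalg.norm(P, 2) * np.linalg.norm(Ainv @ Bm, 2)
    ok &= bool(lhs <= rhs + 1e-12)
print(ok)  # → True
```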

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore desirable to generate separate reduced bases for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable cz as an example, and recall the detailed simulation for cz (see (12)):

\[
A c^{n+1}_z = B c^{n}_z + d^{n}_z - \frac{1-\varepsilon}{\varepsilon} \Delta t\, h^{n}_z. \tag{28}
\]

The residual caused by the approximate solution c̄^n_z in (13) is

\[
r^{n+1}_{c_z}(\mu) = B \bar c^{\,n}_z + d^{n}_z - \frac{1-\varepsilon}{\varepsilon} \Delta t\, I_M[h_z(\bar c^{\,n}_z)] - A \bar c^{\,n+1}_z. \tag{29}
\]

Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (31)–(33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d^n_z in (28) comes from the Neumann boundary condition, which does not depend on the solution c^n_z. Instead of requiring a Lipschitz continuity condition for hz as a function of c^n_a, c^n_b and q^n_z, we assume there exists a positive constant Lh such that

\[
\left\| h_z(c_a^{n}, c_b^{n}, q_z^{n}) - h_z(\bar c_a^{\,n}, \bar c_b^{\,n}, \bar q_z^{\,n}) \right\| \le L_h \left\| c_z^{n} - \bar c_z^{\,n} \right\|, \qquad n = 0, \ldots, K. \tag{30}
\]

Assuming the initial projection error vanishes, e^0_{cz}(µ) = 0, we have a similar estimation for the approximation error e^n_{cz}(µ) = c^n_z − c̄^n_z (n = 1, ..., K), as follows:

\[
\left\| e^{n}_{c_z}(\mu) \right\| \le \sum_{k=0}^{n-1} \left\| A^{-1} \right\|^{\,n-k} C^{\,n-1-k} \left( \tau\, \varepsilon^{k}_{EI}(\mu) + \left\| r^{k+1}_{c_z}(\mu) \right\| \right), \tag{31}
\]

where C = ‖B‖ + τ Lh and τ = ((1−ε)/ε) Δt. More tightly,

\[
\left\| e^{n}_{c_z}(\mu) \right\| \le \eta^{n}_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F,c})^{\,n-1-k} \left( \tau \left\| A^{-1} \right\| \varepsilon^{k}_{EI}(\mu) + \left\| A^{-1} r^{k+1}_{c_z}(\mu) \right\| \right), \tag{32}
\]

where G_{F,c} = ‖A^{-1} B‖ + τ Lh ‖A^{-1}‖.

Analogously, the error bound for the output of interest e^n_{cz,O}(µ) = P c^n_z − P c̄^n_z can be obtained based on the error bound of the field variable. Similar to (25), we have

\[
\left\| e^{n+1}_{c_z,O}(\mu) \right\| \le \bar\eta^{\,n+1}_{N,M,c_z}(\mu) := G_{O,c}\, \eta^{n}_{N,M,c_z}(\mu) + \tau \left\| P A^{-1} \right\| \varepsilon^{n}_{EI}(\mu) + \| P \| \left\| A^{-1} r^{n+1}_{c_z}(\mu) \right\|, \tag{33}
\]

where G_{O,c} = ‖P A^{-1} B‖ + τ Lh ‖P A^{-1}‖. Note that P = (0, ..., 0, 1) ∈ R^{1×N} in this model, which means the norm of the output error e^{n+1}_{cz,O}(µ) is the absolute value of the last entry of the error vector e^{n+1}_{cz}(µ).

Remark 7.8. The error estimates for qa and qb in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on ca and cb, the error estimates for qa and qb are not needed for the output error bound and therefore are not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables U = (ca, cb, qa, qb)^T by considering hz(ca, cb, qz) as a function of the vector U. However, the output error bound in (33) involves the error bound η^n_{N,M,cz}(µ) for the field variable cz; the same would hold for the corresponding error bound (denoted by η^n_{N,M,U}(µ)) for the vector U if the output error estimation were derived by considering all the field variables together. Obviously, the error bound η^n_{N,M,U}(µ) is much rougher than the bound η^n_{N,M,cz}(µ).

The assumption (30) is easily fulfilled in practice. In fact, the constant Lh can be chosen conservatively large, and the weight τ Lh is still small, because the time step Δt is typically very small.

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables (η^n_{N,M}(µ) or η^n_{N,M,cz}(µ)) accumulates with time. Since η^n_{N,M}(µ) (or η^n_{N,M,cz}(µ), respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that the error estimate, e.g., in (18), may lose sharpness when many time instances tn, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute and may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate dη of the error bound. If dη is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter µmax determined by the greedy algorithm. When the true output error at µmax is smaller than tolRB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process should continue.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: Ptrain, µ0, tolRB (< 1), toldecay
Output: RB V = [V1, ..., VN]
 1: Implement Step 1 in Algorithm 4.
 2: while the error ηN(µmax) > tolRB do
 3:     Implement Steps 3−6 in Algorithm 4.
 4:     Compute the decay rate of the error bound, dη = (ηN−1(µmax^old) − ηN(µmax)) / ηN−1(µmax^old).
 5:     if dη < toldecay then
 6:         Compute the true output error at the selected parameter µmax: eN(µmax).
 7:         if eN(µmax) < tolRB then
 8:             Stop.
 9:         end if
10:     end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance toldecay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
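The control flow of the early-stop criterion can be sketched as follows (pure Python; the error-bound and true-error evaluations are synthetic stand-ins for the quantities computed in Algorithm 5):

```python
def greedy_with_early_stop(error_bound, true_error, tol_rb, tol_decay, max_n=100):
    """Return the RB size at which the extension stops (stand-in evaluations)."""
    eta_old = None
    for n in range(1, max_n + 1):
        eta = error_bound(n)            # eta_N(mu_max) after extending to size n
        if eta <= tol_rb:
            return n                    # regular stopping criterion
        if eta_old is not None:
            d_eta = (eta_old - eta) / eta_old
            if d_eta < tol_decay and true_error(n) < tol_rb:
                return n                # early stop: bound stagnates, true error is fine
        eta_old = eta
    return max_n

# toy behavior: the bound decays, then stagnates at 1e-4; the true error keeps decaying
stop = greedy_with_early_stop(error_bound=lambda n: max(10.0 * 0.5 ** n, 1e-4),
                              true_error=lambda n: 1e-2 * 0.5 ** n,
                              tol_rb=1e-6, tol_decay=1e-3)
print(stop)  # → 18
```

Without the early-stop branch, the loop would run to max_n even though the true error is already far below the tolerance.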

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable µ = (Q, tin) is optimally chosen in a reasonable parameter domain to maximize the production rate Pr(µ) = s(µ)Q / t_cyc, while respecting the requirement on the recovery yield Rec(µ) = s(µ) / (tin (cf_a + cf_b)). Here s(µ) = ∫_{t3}^{t4} c_{a,O}(t; µ) dt + ∫_{t1}^{t2} c_{b,O}(t; µ) dt, and c_{z,O}(t; µ) = c_z(t, 1; µ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

\[
\begin{aligned}
& \min_{\mu\in\mathcal{P}} \; -Pr(\mu) \\
& \text{s.t.} \quad Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal{P}, \\
& \quad\;\; c_z(\mu),\, q_z(\mu) \text{ are the solutions to the system } (3)-(5),\ z = a, b.
\end{aligned} \tag{34}
\]

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points ti, i = 1, ..., 4, can be determined properly and the integral in s(µ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter µ ∈ P, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.
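For illustration, the evaluation of Pr(µ) and Rec(µ) from discrete outlet profiles can be sketched with the trapezoidal rule. All profiles, cutting points, and operating values below are synthetic stand-ins, not the paper's data:

```python
# synthetic plateau profiles on a uniform time grid; cutting points as indices
dt = 0.01
K = 1000
c_a = [1.0 if 600 <= i <= 800 else 0.0 for i in range(K + 1)]
c_b = [1.0 if 200 <= i <= 400 else 0.0 for i in range(K + 1)]

def trapz(values):
    # composite trapezoidal rule on the uniform grid with step dt
    return dt * (sum(values) - 0.5 * (values[0] + values[-1]))

i1, i2, i3, i4 = 200, 400, 600, 800      # stand-ins for the cutting points t1..t4
s = trapz(c_a[i3:i4 + 1]) + trapz(c_b[i1:i2 + 1])

Q, t_in, t_cyc, cf = 0.1, 1.35, 10.0, 2.9   # illustrative operating values
Pr  = s * Q / t_cyc                          # production rate
Rec = s / (t_in * (cf + cf))                 # recovery yield
print(round(Pr, 4), round(Rec, 3))
```

In the actual computation, the profiles come from the FOM (12) or the ROM (14), and the cutting points are located from the purity requirements, which is why a small time step is needed.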

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable µ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Recmin is taken as 80.0%, and the purity requirements are specified as Pua = 95.0%, Pub = 95.0%, which determine the cutting points t2 and t3 in s(µ). To capture the dynamics precisely, the dimension of the spatial discretization, N, in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

Column dimensions [cm]                        | 2.6 × 10.5
Column porosity ε [-]                         | 0.4
Peclet number Pe [-]                          | 2000
Mass-transfer coefficients κz, z = a, b [1/s] | 0.1
Feed concentrations cf_z, z = a, b [g/l]      | 2.9

20

Table 2 Coefficients of the adsorption isotherm equation

Ha1 [-] 269 Hb1 [-] 373Ha2 [-] 01 Hb2 [-] 03Ka1 [lg] 00336 Kb1 [lg] 00446Ka2 [lg] 10 Kb2 [lg] 30

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tolASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator ηN(µmax) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of µ = (Q, tin) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tolASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tolASS = 1.0 × 10^-4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, the choice of an optimal threshold is empirical and problem-dependent.
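Algorithm 3 itself is given in Section 5 and is not reproduced here; the following is only a plausible minimal sketch of the underlying idea, namely that a snapshot is kept only if it adds sufficient new information relative to the last selected one:

```python
import math

def select_snapshots(trajectory, tol_ass):
    """trajectory: list of state vectors; returns indices of kept snapshots."""
    kept = [0]
    for n in range(1, len(trajectory)):
        last = trajectory[kept[-1]]
        diff = math.sqrt(sum((x - y) ** 2 for x, y in zip(trajectory[n], last)))
        norm = math.sqrt(sum(x * x for x in trajectory[n])) or 1.0
        if diff / norm > tol_ass:          # snapshot carries enough new information
            kept.append(n)
    return kept

# slowly varying toy trajectory: most of the 100 snapshots are redundant
traj = [[math.sin(0.01 * n), math.cos(0.01 * n)] for n in range(100)]
print(len(select_snapshots(traj, tol_ass=0.1)))  # → 10
```

With a larger threshold, fewer snapshots survive, which mirrors the runtime savings reported in Table 3 at the price of possibly discarding some information.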

Table 3: Illustration of the generation of the CRBs (Wa, Wb) at the same error tolerance (tolCRB = 1.0 × 10^-7) with different thresholds. M' = 1 is the number of basis vectors used for the error estimation.

       | tolASS      | Res(ξ^a_{M+M'}), Res(ξ^b_{M+M'}) | M (Wa, Wb) | Runtime [h]
no ASS | –           | 9.2 × 10^-8, 8.5 × 10^-8          | 146, 152   | 62.5 (-)
ASS    | 1.0 × 10^-4 | 9.6 × 10^-8, 8.1 × 10^-8          | 147, 152   | 6.05 (−90.3%)
ASS    | 1.0 × 10^-3 | 8.7 × 10^-8, 9.9 × 10^-8          | 147, 152   | 3.62 (−94.2%)
ASS    | 1.0 × 10^-2 | 9.4 × 10^-8, 6.2 × 10^-8          | 144, 150   | 2.70 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tolASS = 1.0 × 10^-4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tolCRB = 1.0 × 10^-7, tolRB = 1.0 × 10^-6, tolASS = 1.0 × 10^-4. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tolRB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

Algorithms     | Runtime [h]
POD-Greedy     | 16.22 ¹
ASS-POD-Greedy | 7.92 (−51.2%)

¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient output error estimation for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = Pc^n_z(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) bounds the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in \mathcal{P}$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu\in\mathcal{P}_{train}} \max_{z\in\{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the output error bound for $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu\in\mathcal{P}_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z\in\{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} |c^n_{z,O}(\mu) - \hat c^n_{z,O}(\mu)|$, and $\hat c^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.
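The time-averaged true output error defined above is straightforward to evaluate once the outlet trajectories of the FOM and the ROM are available; a minimal sketch (the trajectories here are illustrative stand-ins, not model data):

```python
def averaged_output_error(c_out_fom, c_out_rom):
    """Time-averaged absolute output error
    e_{N,c_z} = (1/K) * sum_n |c_out_fom[n] - c_out_rom[n]|,
    with both outlet trajectories sampled on the same time grid."""
    assert len(c_out_fom) == len(c_out_rom)
    K = len(c_out_fom)
    return sum(abs(f - r) for f, r in zip(c_out_fom, c_out_rom)) / K

# Illustrative outlet trajectories of one component.
fom = [0.0, 0.5, 0.9, 0.3]
rom = [0.0, 0.5001, 0.8999, 0.3002]
err = averaged_output_error(fom, rom)  # small when the ROM is accurate
```

Taking the maximum of this quantity over the components and over the training set gives the reference error $e^{\max}_N$ used in Figures 2 and 3.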

Figure 3 shows the results for Algorithm 5, where $tol_{decay} = 0.03$. Using the early-stop criterion, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected during the RB extension with the greedy algorithm; the size of each circle indicates how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the output error bound is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max error} = \max_{\mu\in\mathcal{P}_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of 50, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 (semi-logarithmic plot): x-axis "Size of RB N" (6 to 66); y-axis "max over μ in P_train of the error" (10^-9 to 10^3); curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu\in\mathcal{P}_{train}}\max_{z\in\{a,b\}} \bar\eta_{N,M,c_z}(\mu)$, where $\bar\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K}\eta^n_{N,M,c_z}(\mu)$.

[Figure 3 (semi-logarithmic plot): x-axis "Size of RB N" (6 to 56); y-axis "max over μ in P_train of the error" (10^-7 to 10^3); curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 (scatter plot over the parameter domain): axes "Feed flow rate Q" (0.0667 to 0.1667) and "Injection period t_in" (0.5 to 2); circles mark the selected parameters.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $\mathcal{P}_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0\times 10^{-7}$, $tol_{ASS} = 1.0\times 10^{-4}$, $tol_{RB} = 1.0\times 10^{-6}$.

  Simulation              Max error   Average runtime [s]   SpF
  FOM (N = 1500)          -           312.13                (-)
  ROM (POD-Greedy)        3.79e-7     6.3                   50
  ROM (ASS-POD-Greedy)    4.58e-7     6.3                   50

[Figure 5 (line plot): x-axis "Dimensionless Time" (0 to 11); y-axis "Dimensionless Concentration" (0 to about 0.9); curves: c_a (FOM), c_b (FOM), c_a (ROM), c_b (ROM).]

Figure 5: Concentrations at the outlet of the column using the FOM ($\mathcal{N} = 1500$) and the ROM ($N = 47$) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\min_{\mu\in\mathcal{P}} -\hat{Pr}(\mu)$$
$$\text{s.t.} \quad Rec_{\min} - \hat{Rec}(\mu) \le 0,$$
$$\hat c^n_z(\mu),\ \hat q^n_z(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b.$$

Here $\hat{Pr}(\mu)$ and $\hat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and the recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu^k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu^{k+1} - \mu^k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0\times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the optimization problem is significantly reduced by using the ROM; the speed-up factor (SpF) is 54.
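The paper solves (34) with NLOPT_GN_DIRECT_L from NLopt; that library is not reproduced here. The sketch below substitutes a toy derivative-free compass search and a penalty treatment of the recovery constraint, both of which are illustrative assumptions rather than the paper's implementation, but it implements the stated stopping rule $\|\mu^{k+1} - \mu^k\| < tol_{opt}$ literally. The surrogate Pr and Rec functions are made up for the demonstration.

```python
import math

def constrained_objective(mu, pr, rec, rec_min, penalty=1e3):
    """-Pr(mu) plus a penalty when Rec(mu) >= rec_min is violated
    (penalty handling is an assumption, not the paper's approach)."""
    viol = max(0.0, rec_min - rec(mu))
    return -pr(mu) + penalty * viol

def pattern_search(f, mu0, bounds, tol_opt=1e-4):
    """Toy derivative-free minimizer with the stopping rule
    ||mu_{k+1} - mu_k|| < tol_opt (and a fully shrunk step)."""
    mu, step = list(mu0), 0.25
    while True:
        best, best_f = list(mu), f(mu)
        for i in range(len(mu)):
            for d in (-step, step):
                cand = list(mu)
                cand[i] = min(max(cand[i] + d, bounds[i][0]), bounds[i][1])
                fc = f(cand)
                if fc < best_f:
                    best, best_f = cand, fc
        move = math.sqrt(sum((a - b) ** 2 for a, b in zip(best, mu)))
        if move < tol_opt:
            if step < tol_opt:
                return best
            step *= 0.5
        mu = best

# Toy surrogate models of Pr and Rec (illustrative only).
pr = lambda mu: -(mu[0] - 0.08) ** 2 - (mu[1] - 1.0) ** 2 + 0.02
rec = lambda mu: 1.0 - 0.05 * mu[1]
f = lambda mu: constrained_objective(mu, pr, rec, rec_min=0.8)
mu_opt = pattern_search(f, [0.1, 1.5], [(0.0667, 0.1667), (0.5, 2.0)])
```

Replacing the toy search by an NLopt call (and the toy surrogates by ROM solves) recovers the structure of the ROM-based optimization loop.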

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is that of constructing and using a surrogate ROM; the other is that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

  Simulation       Objective (Pr)   Opt. solution (μ)     N_it ¹   Runtime [h]   SpF
  FOM-based Opt    0.020264         (0.07964, 1.05445)    202      33.88         -
  ROM-based Opt    0.020266         (0.07964, 1.05445)    202      0.63          54

  ¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and have applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455–462.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Max Planck Institute Magdeburg Preprints


and the interpolation of $g$ is "exact" with a certain dimension of $W = [W_1, \ldots, W_{M+M'}]$, i.e.,

$$I_{M+M'}[g(\hat u^n)] = \sum_{m=1}^{M+M'} W_m \cdot \beta^n_m = g(\hat u^n).$$

Assume again that for all $\mu \in \mathcal{P}$ the initial projection error is vanishing, $e^0(\mu) = 0$; then the approximation error $e^n(\mu) = u^n - \hat u^n$ satisfies

$$\|e^n(\mu)\| \le \sum_{k=0}^{n-1} \big\|(A^{(k)})^{-1}\big\| \prod_{j=k+1}^{n-1} G(j)\, \Big(\varepsilon^k_{EI}(\mu) + \big\|r^{k+1}(\mu)\big\|\Big), \qquad (18)$$

where $G(j) = \big\|(A^{(j)})^{-1}\big\| \big(\|B^{(j)}\| + L_g\big)$ and $\varepsilon^n_{EI}(\mu)$ is the error due to the interpolation. A sharper error bound can be given as

$$\|e^n(\mu)\| \le \eta^n_{N,M}(\mu) := \sum_{k=0}^{n-1} \prod_{j=k+1}^{n-1} G_F(j)\, \Big(\big\|(A^{(k)})^{-1}\big\|\, \varepsilon^k_{EI}(\mu) + \big\|(A^{(k)})^{-1} r^{k+1}(\mu)\big\|\Big), \qquad (19)$$

where $G_F(j) = \big\|(A^{(j)})^{-1} B^{(j)}\big\| + L_g \big\|(A^{(j)})^{-1}\big\|$, $n = 0, \ldots, K-1$.

Proof. By forming the difference of (15) and (16), we have the error equation

$$A^{(n)} e^{n+1}(\mu) = B^{(n)} e^n(\mu) + g(u^n) - I_M[g(\hat u^n)] + r^{n+1}(\mu) = B^{(n)} e^n(\mu) + \big(g(u^n) - g(\hat u^n)\big) + \big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + r^{n+1}(\mu). \qquad (20)$$

Multiplying by $(A^{(n)})^{-1}$ on both sides of (20), we obtain

$$e^{n+1}(\mu) = (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\hat u^n)\big) + (A^{(n)})^{-1}\big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu). \qquad (21)$$

Applying the Lipschitz condition of $g$, we have $\|g(u^n) - g(\hat u^n)\| \le L_g \|e^n(\mu)\|$. Then, by the triangle inequality and the property of the matrix norm, we have

$$\big\|e^{n+1}(\mu)\big\| \le \big\|(A^{(n)})^{-1}\big\| \Big( \big(\big\|B^{(n)}\big\| + L_g\big)\, \|e^n(\mu)\| + \varepsilon^n_{EI}(\mu) + \big\|r^{n+1}(\mu)\big\| \Big), \qquad (22)$$

where $\varepsilon^n_{EI}(\mu) = \|g(\hat u^n) - I_M[g(\hat u^n)]\| = \big\|\sum_{m=M+1}^{M+M'} W_m \cdot |\beta^n_m|\big\|$. Resolving the recursion (22) with initial error $\|e^0(\mu)\| = 0$ yields the error bound in (18).

To get the error bound in (19), we re-observe the equation in (21) and see that the error bound in (22) is unnecessarily enlarged. A sharper bound for $\|e^{n+1}\|$ is of the following form:

$$\big\|e^{n+1}(\mu)\big\| \le \Big(\big\|(A^{(n)})^{-1} B^{(n)}\big\| + L_g \big\|(A^{(n)})^{-1}\big\|\Big)\, \|e^n(\mu)\| + \big\|(A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|, \qquad (23)$$

since the following two inequalities are true: $\big\|(A^{(n)})^{-1} B^{(n)}\big\| \le \big\|(A^{(n)})^{-1}\big\|\, \big\|B^{(n)}\big\|$ and $\big\|(A^{(n)})^{-1} r^{n+1}\big\| \le \big\|(A^{(n)})^{-1}\big\|\, \big\|r^{n+1}\big\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19).
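Resolving the recursion (23) numerically is a few lines of code once the norm sequences are available; the sketch below uses illustrative stand-in constants (time-invariant for brevity), chosen with $G_F < 1$ so the bound does not explode, as discussed in Remark 7.3:

```python
def error_bound_recursion(gf, a_inv_norm, eps_ei, a_inv_res, K):
    """Propagate the recursion of type (23):
    ||e^{n+1}|| <= gf * ||e^n|| + a_inv_norm * eps_ei[n] + a_inv_res[n],
    starting from ||e^0|| = 0; returns the bound at every time step."""
    eta = [0.0]
    for n in range(K):
        eta.append(gf * eta[-1] + a_inv_norm * eps_ei[n] + a_inv_res[n])
    return eta

# Stand-in constants: gf = G_F, a_inv_norm = ||(A^(n))^-1||,
# eps_ei[n] = EI error, a_inv_res[n] = ||(A^(n))^-1 r^{n+1}||.
K = 100
eta = error_bound_recursion(gf=0.99, a_inv_norm=2.0,
                            eps_ei=[1e-6] * K, a_inv_res=[1e-5] * K, K=K)
# With gf < 1 the bound saturates near the per-step contribution
# divided by (1 - gf), instead of growing exponentially.
```

The same loop with per-step sequences for `gf`, `eps_ei`, and `a_inv_res` evaluates the time-dependent bound (19).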

Remark 7.2. In many cases the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}$, $B^{(n)}$ in (15) are constant matrices; see, e.g., (12). In such a case the error bound becomes much simpler; see, e.g., (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (5.5) in [11]. However, the error bound may grow exponentially when $G(j) = \|(A^{(j)})^{-1}\| \big(\|B^{(j)}\| + L_g\big) > 1$ in (18). In the vector space this problem can be easily avoided by using (23) instead of (22) if $G_F(j) = \|(A^{(j)})^{-1} B^{(j)}\| + L_g \|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|A^{-1} r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$. In addition, as shown in [11], a small $M'$ gives good results in practice; we use $M' = 1$ in the later simulations.

Remark 7.5. The 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm. Therefore,

$$\|H^{-1}\| = \sigma_{\max}(H^{-1}) = \frac{1}{\sigma_{\min}(H)}.$$

As a result, the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some outputs. In such cases it is desirable to estimate the output error so as to construct the ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the following output error estimate.
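The identity in Remark 7.5 is what makes the bounds computable in practice. For a diagonal matrix the singular values are available by inspection, which keeps the sketch below dependency-free; for a general (sparse) matrix one would instead call an SVD or smallest-singular-value routine, which is omitted here:

```python
def inv_two_norm_diagonal(diag):
    """For a diagonal matrix H = diag(d_1, ..., d_n) the singular values
    are |d_i|, so ||H^{-1}||_2 = 1 / sigma_min(H) = 1 / min_i |d_i|."""
    sigma_min = min(abs(d) for d in diag)
    return 1.0 / sigma_min

# H = diag(2, -5, 3): sigma_min = 2, hence ||H^{-1}||_2 = 0.5.
norm_inv = inv_two_norm_diagonal([2.0, -5.0, 3.0])
```

This avoids forming $H^{-1}$ explicitly, which matters when $H$ is a large sparse discretization matrix.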

Proposition 7.6. Under the assumptions of Proposition 7.1, assume the output of interest $y(u^n(\mu))$ can be expressed in the following form:

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O \times \mathcal{N}}$ is a constant matrix; then the output error $e^n_O(\mu) = P u^n - P \hat u^n$ satisfies

$$\big\|e^{n+1}_O(\mu)\big\| \le \eta^{n+1}_{N,M}(\mu) := G_O(n)\, \eta^n_{N,M}(\mu) + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|, \qquad (25)$$

where $G_O(n) = \big\|P (A^{(n)})^{-1} B^{(n)}\big\| + L_g \big\|P (A^{(n)})^{-1}\big\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying by $P$ from the left on both sides of the error equation (21), we get

$$P e^{n+1}(\mu) = P\Big( (A^{(n)})^{-1} B^{(n)} e^n(\mu) + (A^{(n)})^{-1}\big(g(u^n) - g(\hat u^n)\big) + (A^{(n)})^{-1}\big(g(\hat u^n) - I_M[g(\hat u^n)]\big) + (A^{(n)})^{-1} r^{n+1}(\mu) \Big).$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the property of the matrix norm, we have

$$\big\|e^{n+1}_O(\mu)\big\| = \big\|P e^{n+1}(\mu)\big\| \le G_O(n)\, \|e^n(\mu)\| + \big\|P (A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

$$\big\|e^{n+1}_O(\mu)\big\| = \big\|P e^{n+1}(\mu)\big\| \le \|P\|\, \big\|e^{n+1}(\mu)\big\| \le \|P\| \Big( G_F(n)\, \|e^n(\mu)\| + \big\|(A^{(n)})^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \big\|(A^{(n)})^{-1} r^{n+1}(\mu)\big\| \Big). \qquad (27)$$

The last inequality is true due to the inequality (23). It is obvious that the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27); as a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; therefore, it is desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example and recall the detailed simulation for $c_z$ (see (12)),

$$A c^{n+1}_z = B c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\, \Delta t\, h^n_z; \qquad (28)$$

the residual caused by the approximate solution $\hat c^n_z$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \hat c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\, \Delta t\, I_M[h_z(\hat c^n_z)] - A \hat c^{n+1}_z. \qquad (29)$$

Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition, which does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\hat c^n_a, \hat c^n_b, \hat q^n_z)\| \le L_h\, \|c^n_z - \hat c^n_z\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming the initial projection error is vanishing, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \hat c^n_z$ ($n = 1, \ldots, K$) as follows:

$$\big\|e^n_{c_z}(\mu)\big\| \le \sum_{k=0}^{n-1} \big\|A^{-1}\big\|^{n-k}\, C^{n-1-k} \Big( \tau\, \varepsilon^k_{EI}(\mu) + \big\|r^{k+1}_{c_z}(\mu)\big\| \Big), \qquad (31)$$

where $C = \|B\| + \tau L_h$ and $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\big\|e^n_{c_z}(\mu)\big\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{Fc})^{n-1-k} \Big( \tau\, \big\|A^{-1}\big\|\, \varepsilon^k_{EI}(\mu) + \big\|A^{-1} r^{k+1}_{c_z}(\mu)\big\| \Big), \qquad (32)$$

where $G_{Fc} = \|A^{-1} B\| + \tau L_h \|A^{-1}\|$.
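Since $\|A^{-1}B\| \le \|A^{-1}\|\,\|B\|$ and $\|A^{-1}r\| \le \|A^{-1}\|\,\|r\|$, the constant $G_{Fc}$ in (32) never exceeds $\|A^{-1}\|\,C$ from (31), so (32) is the sharper bound. The sketch below checks this numerically with illustrative stand-in norms (not values from the chromatographic model):

```python
def bound_31(a_inv, B_norm, tau, L_h, eps, res, n):
    """Looser bound (31):
    sum_k ||A^-1||^(n-k) * C^(n-1-k) * (tau*eps_k + ||r_{k+1}||)."""
    C = B_norm + tau * L_h
    return sum(a_inv ** (n - k) * C ** (n - 1 - k) * (tau * eps[k] + res[k])
               for k in range(n))

def bound_32(a_inv_B, a_inv, tau, L_h, eps, res_ainv, n):
    """Sharper bound (32):
    sum_k G_Fc^(n-1-k) * (tau*||A^-1||*eps_k + ||A^-1 r_{k+1}||)."""
    g = a_inv_B + tau * L_h * a_inv
    return sum(g ** (n - 1 - k) * (tau * a_inv * eps[k] + res_ainv[k])
               for k in range(n))

# Illustrative norms with ||A^-1 B|| < ||A^-1||*||B|| and
# ||A^-1 r|| < ||A^-1||*||r||, as exploited in Remark 7.3.
n, tau, L_h = 50, 1e-3, 10.0
eps = [1e-6] * n
res, a_inv, B_norm = [1e-5] * n, 1.05, 1.0
res_ainv, a_inv_B = [0.9e-5] * n, 0.95
loose = bound_31(a_inv, B_norm, tau, L_h, eps, res, n)
sharp = bound_32(a_inv_B, a_inv, tau, L_h, eps, res_ainv, n)
```

With these stand-in values the loose bound grows geometrically ($\|A^{-1}\|\,C > 1$) while the sharp one saturates ($G_{Fc} < 1$), which mirrors the discussion in Remark 7.3.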

Analogously, the error bound for the output of interest, $e^n_{c_z,O}(\mu) = P c^n_z - P \hat c^n_z$, can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\big\|e^{n+1}_{c_z,O}(\mu)\big\| \le \eta^{n+1}_{N,M,c_z}(\mu) := G_{Oc}\, \eta^n_{N,M,c_z}(\mu) + \tau\, \big\|P A^{-1}\big\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \big\|A^{-1} r^{n+1}_{c_z}(\mu)\big\|, \qquad (33)$$

where $G_{Oc} = \|P A^{-1} B\| + \tau L_h \|P A^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1\times\mathcal{N}}$ in this model, which means the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and are therefore not presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; likewise, the corresponding error bound (denoted as $\eta^n_{N,M,U}(\mu)$) for the vector $U$ would be involved if the output error estimation were derived by considering all the field variables together. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large, and the weight $\tau L_h$ remains small because the time step $\Delta t$ is typically very small.

(micro) ) is accumulated with time Since ηnNM (micro)(or ηnNMcz

(micro) respectively) is involved in the output error bound in (25) (or (33)respectively) the output error bound is also accumulated with time As a resultthe output error bound at the final time steps may not reflect the true error aftera long evolution process Figure 2 in Section 82 illustrates this behavior In factsimilar phenomena are reported in the literature eg [30] where it is pointed outthat the error estimate eg in (18) may loose sharpness when many time instancestn n = 0 1 K are needed for a given time interval [0 T ] which is typical forconvection-diffusion equations with small diffusion terms However the output errorbound is cheap to compute and it may still provide a guidance for the parameterselection in the greedy algorithm

To circumvent the problem above we add a validation step to get an early-stop ofthe extension process as is shown in Algorithm 5 More precisely after Step 6 inAlgorithm 4 we compute the decay rate dη of the error bound If dη is smaller than apredefined tolerance indicating the error bound stagnates then we further check thetrue output error at the parameter micromax determined by the greedy algorithm Whenthe true output error at micromax is smaller than tolRB we assume that there is no needto include a new basis vector and the RB extension can be stopped otherwise theprocess should continue

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: $\mathcal{P}_{train}$, $\mu^0$, $tol_{RB}$ ($< 1$), $tol_{decay}$
Output: RB $V = [V_1, \ldots, V_N]$

 1: Implement Step 1 in Algorithm 4.
 2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
 3:   Implement Steps 3-6 in Algorithm 4.
 4:   Compute the decay rate of the error bound $d_\eta = \big(\eta_{N-1}(\mu^{old}_{\max}) - \eta_N(\mu_{\max})\big) / \eta_{N-1}(\mu^{old}_{\max})$.
 5:   if $d_\eta < tol_{decay}$ then
 6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
 7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
 8:       Stop.
 9:     end if
10:   end if
11: end while
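The decision logic of Steps 4-8 can be isolated in a small predicate; the numbers below are illustrative, with the decay tolerance set to 0.03:

```python
def assess_early_stop(eta_old, eta_new, true_err, tol_rb, tol_decay):
    """One stopping decision of Algorithm 5: stop when the bound meets
    tol_rb, or when the bound stagnates (decay rate below tol_decay)
    AND the true output error is already below tol_rb."""
    if eta_new <= tol_rb:
        return True                       # regular greedy termination
    d_eta = (eta_old - eta_new) / eta_old
    return d_eta < tol_decay and true_err < tol_rb

# Stagnating bound (1% decay) but tiny true error -> early stop.
stop = assess_early_stop(eta_old=1.0e-3, eta_new=9.9e-4,
                         true_err=5.0e-7, tol_rb=1.0e-6, tol_decay=0.03)
```

Checking the true output error only when the bound stagnates keeps the extra cost of the validation step negligible.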


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to allow for such a case, the tolerance $tol_{decay}$ should be set to a very small value, which permits some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be triggered.

8 Numerical experiments

In this work the RB methodology is employed to accelerate the optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $Pr(\mu) = \frac{s(\mu)\, Q}{t_{cyc}}$ while respecting the requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\, dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\, dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\min_{\mu\in\mathcal{P}} -Pr(\mu)$$
$$\text{s.t.} \quad Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in \mathcal{P},$$
$$c_z(\mu),\ q_z(\mu)\ \text{are the solutions to the system } (3)-(5),\ z = a, b. \qquad (34)$$

Notice that when solving the system (3)-(5), the time step size has to be taken relatively small so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integral in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $\mathcal{N}$ in the FOM (12) is taken as 1500.
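Given the outlet concentration trajectories and the cutting points, evaluating $Pr(\mu)$ and $Rec(\mu)$ reduces to quadrature over the collected fractions. The sketch below uses the trapezoidal rule, with made-up rectangular elution profiles and cutting times; in the paper the cutting points $t_1, \ldots, t_4$ are determined from the purity requirements, which is not reproduced here.

```python
def trapz(values, dt):
    """Composite trapezoidal rule on a uniform time grid."""
    return dt * (sum(values) - 0.5 * (values[0] + values[-1]))

def performance(ca_out, cb_out, dt, t_cuts, Q, t_in, t_cyc, cf_a, cf_b):
    """Evaluate Pr = s*Q/t_cyc and Rec = s/(t_in*(cf_a + cf_b)), where
    s integrates component a over [t3, t4] and component b over [t1, t2]."""
    t1, t2, t3, t4 = (int(round(t / dt)) for t in t_cuts)
    s = trapz(ca_out[t3:t4 + 1], dt) + trapz(cb_out[t1:t2 + 1], dt)
    return s * Q / t_cyc, s / (t_in * (cf_a + cf_b))

# Illustrative trajectories: component b elutes first, then component a.
dt, K = 0.01, 1000
cb = [1.0 if 2.0 <= i * dt <= 4.0 else 0.0 for i in range(K + 1)]
ca = [1.0 if 6.0 <= i * dt <= 8.0 else 0.0 for i in range(K + 1)]
Pr, Rec = performance(ca, cb, dt, (2.0, 4.0, 6.0, 8.0),
                      Q=0.1, t_in=1.0, t_cyc=10.0, cf_a=2.9, cf_b=2.9)
```

This also shows why a small time step matters: both the cutting-point indices and the quadrature accuracy depend directly on `dt`.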

Table 1: Model parameters and operating conditions for the chromatographic model.

  Column dimensions [cm]                            2.6 × 10.5
  Column porosity ε [-]                             0.4
  Peclet number Pe [-]                              2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]    0.1
  Feed concentrations c^f_z, z = a, b [g/l]         2.9


Table 2: Coefficients of the adsorption isotherm equation.

  H_a1 [-]      2.69      H_b1 [-]      3.73
  H_a2 [-]      0.1       H_b2 [-]      0.3
  K_a1 [l/g]    0.0336    K_b1 [l/g]    0.0446
  K_a2 [l/g]    1.0       K_b2 [l/g]    3.0
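Equation (4) itself lies outside this excerpt; assuming the bi-Langmuir isotherm form that the two-site coefficients of Table 2 suggest (an assumption to be checked against (4)), the equilibrium loading can be evaluated as follows:

```python
# Table 2 coefficients (Henry constants H and thermodynamic coefficients K).
H = {("a", 1): 2.69, ("b", 1): 3.73, ("a", 2): 0.1, ("b", 2): 0.3}
Kc = {("a", 1): 0.0336, ("b", 1): 0.0446, ("a", 2): 1.0, ("b", 2): 3.0}

def isotherm(z, ca, cb):
    """Equilibrium loading q_z^eq(c_a, c_b), assuming a bi-Langmuir form
    with two adsorption sites (the exact form of (4) is not reproduced
    in this excerpt)."""
    cz = ca if z == "a" else cb
    site1 = H[(z, 1)] * cz / (1.0 + Kc[("a", 1)] * ca + Kc[("b", 1)] * cb)
    site2 = H[(z, 2)] * cz / (1.0 + Kc[("a", 2)] * ca + Kc[("b", 2)] * cb)
    return site1 + site2

# At infinite dilution the isotherm is linear with slope H_z1 + H_z2:
# q_a ≈ (2.69 + 0.1) * c_a for small c_a.
q_a = isotherm("a", 1e-6, 0.0)
```

The nonlinearity of this coupling between $c_a$ and $c_b$ is what makes the empirical operator interpolation necessary in the ROM.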

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the performance of the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550, 2.83 GHz, RAM 4.00 GB, unless stated otherwise.

81 Performance of the adaptive snapshot selectionTo investigate the performance of the technique of adaptive snapshot selection wecompare the runtime for the generation of the RB and CRB with different thresholdvalues tolASS As is shown in Algorithm 4 in Section 5 the ASS can be combined withthe POD-Greedy algorithm and used for the generation of the RB For the computationof the error indicator ηN (micromax) in Algorithm 4 EI is involved for an efficient offline-online decomposition To efficiently generate a CRB the ASS is also employed Thetraining set for the generation of the CRB is a sample set with 25 sample points ofmicro = (Q tin) uniformly distributed in the parameter domain For each sample pointAlgorithm 3 is used to adaptively choose the snapshots for the generation of the CRBThe runtimes for the CRB generation with different choices of tolASS are shown inTable 3 It is seen that the larger threshold is used the more runtime is saved Thismeans that a lot of redundant information is discarded due to the adaptive selectionprocess Particularly with the tolerance tolASS = 10times 10minus4 the computational timeis reduced by 903 compared to that of the original algorithm without ASS Howeverhow to choose an optimal threshold is empirical and problem-dependent

Table 3: Illustration of the generation of CRBs $(W_a, W_b)$ at the same error tolerance ($tol_{CRB} = 1.0 \times 10^{-7}$) with different thresholds $tol_{ASS}$. $M' = 1$ is the number of basis vectors used for error estimation.

    tol_ASS             Res(ξ^a_{M+M′}), Res(ξ^b_{M+M′})   M of (W_a, W_b)   Runtime [h]
    no ASS   –          9.2×10⁻⁸, 8.5×10⁻⁸                 (146, 152)        6.25 (–)
    ASS      1.0×10⁻⁴   9.6×10⁻⁸, 8.1×10⁻⁸                 (147, 152)        0.605 (−90.3%)
    ASS      1.0×10⁻³   8.7×10⁻⁸, 9.9×10⁻⁸                 (147, 152)        0.362 (−94.2%)
    ASS      1.0×10⁻²   9.4×10⁻⁸, 6.2×10⁻⁸                 (144, 150)        0.270 (−95.7%)

Table 4 shows the comparison of the runtimes for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed


with $tol_{ASS} = 1.0 \times 10^{-4}$ for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{RB} = 1.0 \times 10^{-6}$, $tol_{ASS} = 1.0 \times 10^{-4}$. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

    Algorithms         Runtime [h]
    POD-Greedy         16.22 ¹
    ASS-POD-Greedy     7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As mentioned before, it is advisable to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter $\mu$, the values of $Pr(\mu)$ and $Rec(\mu)$ in (34) are determined by the concentrations at the outlet of the column, $c^n_{z,O}(\mu) = P c^n_z(\mu)$, $n = 0, \ldots, K$, $z = a, b$, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator $\eta_N(\mu)$ in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound $\eta^{n+1}_{N,M,c_z}(\mu)$ in (33) is the bound for the output error of the component $c_z$ at the time instance $t^{n+1}$ for a given parameter $\mu \in P$. We use the following error bound in Algorithm 4: $\eta_N(\mu_{\max}) = \max_{\mu\in P_{train}} \max_{z\in\{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$ is the average of the error bound for the output of $c_z$ over the whole evolution process. Accordingly, we define the reference true output error as $e^{\max}_N = \max_{\mu\in P_{train}} e_N(\mu)$, where $e_N(\mu) = \max_{z\in\{a,b\}} e_{N,c_z}(\mu)$, $e_{N,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \|c^n_{z,O}(\mu) - \hat{c}^n_{z,O}(\mu)\|$, and $\hat{c}^n_{z,O}(\mu)$ is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to obtain an early stop.

Figure 3 shows the results for Algorithm 5, where $tol_{decay} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm; the size of a circle shows how frequently the same parameter is selected for the RB extension.
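The selection of the worst-case parameter via the time-averaged indicator can be mimicked on toy data; all bound values and parameter labels below are invented.

```python
# For each training parameter and component z, average the per-step output
# error bounds over time, then take the maximum over parameters and components
# (toy stand-in for eta_N(mu_max) as defined above).
per_step_bounds = {          # (mu, z) -> [eta^n for n = 1..K], invented numbers
    ("mu1", "a"): [1e-4, 2e-4, 6e-4],
    ("mu1", "b"): [2e-4, 2e-4, 2e-4],
    ("mu2", "a"): [5e-5, 5e-5, 5e-5],
    ("mu2", "b"): [1e-4, 1e-4, 4e-4],
}
averaged = {key: sum(v) / len(v) for key, v in per_step_bounds.items()}
mu_max, z_max = max(averaged, key=averaged.get)
print(mu_max, z_max, averaged[(mu_max, z_max)])
```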

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $P_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max. error} = \max_{\mu\in P_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2: semi-logarithmic plot of the maximal error over $P_{train}$ versus the size of the RB, $N = 6, \ldots, 66$, comparing the field variable error bound, the output error bound, and the true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu\in P_{train}} \max_{z\in\{a,b\}} \bar{\eta}_{N,M,c_z}(\mu)$, where $\bar{\eta}_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.

23

[Figure 3: semi-logarithmic plot of the maximal error over $P_{train}$ versus the size of the RB, $N = 6, \ldots, 56$, comparing the output error bound and the true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4: selected parameters in the plane of the feed flow rate $Q \in [0.0667, 0.1667]$ and the injection period $t_{in} \in [0.5, 2]$; circle size indicates selection frequency.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

24

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $P_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0 \times 10^{-7}$, $tol_{ASS} = 1.0 \times 10^{-4}$, $tol_{RB} = 1.0 \times 10^{-6}$.

    Simulations            Max. error    Average runtime [s]   SpF
    FOM (N = 1500)         –             312.13                (–)
    ROM, POD-Greedy        3.79×10⁻⁷     6.3                   50
    ROM, ASS-POD-Greedy    4.58×10⁻⁷     6.3                   50

[Figure 5: dimensionless concentration at the outlet versus dimensionless time (0 to 11); curves: $c_a$-FOM, $c_b$-FOM, $c_a$-ROM, $c_b$-ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$.


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\begin{aligned} &\min_{\mu\in P} -\widehat{Pr}(\mu)\\ &\text{s.t. } Rec_{\min} - \widehat{Rec}(\mu) \le 0,\\ &\hat{c}^n_z(\mu),\ \hat{q}^n_z(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b. \end{aligned}$$

Here $\widehat{Pr}(\mu)$ and $\widehat{Rec}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let $\mu_k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu_{k+1} - \mu_k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0 \times 10^{-4}$. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime is significantly reduced compared to solving the FOM-based optimization; the speed-up factor (SpF) is 54.
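The stopping rule can be illustrated with a toy derivative-free optimizer. The pattern search below is only a stand-in for NLOPT_GN_DIRECT_L, and the quadratic objective is an invented surrogate for $-\widehat{Pr}(\mu)$ with a recovery-yield penalty; its minimizer loosely mimics the optimum reported in Table 6.

```python
def pattern_search(f, mu0, tol_opt=1e-4, step=0.1):
    """Toy derivative-free stand-in for the optimizer iterations.

    Only illustrates the stopping rule ||mu_{k+1} - mu_k|| < tol_opt; the
    study itself uses the NLOPT_GN_DIRECT_L algorithm from NLopt.
    """
    mu = list(mu0)
    while step >= tol_opt:
        best = list(mu)
        for i in range(len(mu)):          # probe +/- step in each coordinate
            for s in (step, -step):
                cand = list(best)
                cand[i] += s
                if f(cand) < f(best):
                    best = cand
        if f(best) < f(mu):
            mu = best                     # accept the improved iterate
        else:
            step *= 0.5                   # no improvement: refine the pattern
    return mu

# invented smooth objective with minimizer (0.08, 1.05)
f = lambda m: (m[0] - 0.08) ** 2 + 0.1 * (m[1] - 1.05) ** 2
sol = pattern_search(f, [0.12, 1.5])
print(sol)
```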

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is implemented for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: the cost of constructing and using a surrogate ROM on the one hand, and that of directly using the original FOM on the other.

Table 6: Comparison of the optimization based on the ROM and the FOM.

    Simulations      Objective (Pr)   Opt. solution (μ)     N_it ¹   Runtime [h]   SpF
    FOM-based Opt.   0.020264         (0.07964, 1.05445)    202      33.88         –
    ROM-based Opt.   0.020266         (0.07964, 1.05445)    202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parametrized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique was proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with the desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


since the following two inequalities are true, i.e., $\|(A^{(n)})^{-1}B^{(n)}\| \le \|(A^{(n)})^{-1}\|\,\|B^{(n)}\|$ and $\|(A^{(n)})^{-1}r^{n+1}\| \le \|(A^{(n)})^{-1}\|\,\|r^{n+1}\|$. Resolving the recursion (23) with initial error $\|e^0(\mu)\| = 0$ yields the proposed error bound in (19).

Remark 7.2. In many cases the operators $\mathcal{L}_I(t^n)$ and $\mathcal{L}_E(t^n)$ in (7) are independent of $t^n$, so that the coefficient matrices $A^{(n)}, B^{(n)}$ in (15) are constant matrices, see e.g. (12). In such a case the error bound becomes much simpler, see e.g. (31) and (33) in the next subsection.

Remark 7.3. In [11], the derivation of the error bound is based on the general operator form in the functional space. The error bound in (18) corresponds to the operator form (55) in [11]. However, the error bound may grow exponentially when $G(j) = \|(A^{(j)})^{-1}\|\,(\|B^{(j)}\| + L_g) > 1$ in (18). In the vector space this problem can easily be avoided by using (23) instead of (22) if $G_F(j) = \|(A^{(j)})^{-1}B^{(j)}\| + L_g\,\|(A^{(j)})^{-1}\| \le 1$, whereby the sharper error bound in (19) is obtained.

Remark 7.4. For the computation of the error bound in (18), we need to compute the norm of the residual $r^n(\mu)$ by using (17). Note that all terms underlined in (17) can be precomputed once $V$ and $W$ are obtained, and they are computed only once for all parameters in the training set. This is also true for the computation of $\|A^{-1}r^n(\mu)\|$ for the error bound in (19). Consequently, the evaluation of the error bound is cheap due to its independence of $\mathcal{N}$, the dimension of the FOM. In addition, as is shown in [11], a small $M'$ gives good results; in practice we use $M' = 1$ in the following simulations.

Remark 7.5. Since the 2-norm is applied in the above error bounds, and the 2-norm of a matrix $H$ is its spectral norm, we have $\|H^{-1}\| = \sigma_{\max}(H^{-1}) = 1/\sigma_{\min}(H)$. As a result, the error bounds above are computable.

In many applications the quantity of interest is not the field variable itself but some outputs. In such cases it is desirable to estimate the output error, in order to construct a ROM in a goal-oriented fashion. Based on the error estimation for the field variable above, we have the output error estimate below.

Proposition 7.6. Under the assumptions of Proposition 7.1, assume the output of interest $y(u^n(\mu))$ can be expressed in the following form:

$$y(u^n(\mu)) = P u^n, \qquad (24)$$

where $P \in \mathbb{R}^{N_O\times\mathcal{N}}$ is a constant matrix. Then the output error $e^n_O(\mu) = P u^n - P \hat{u}^n$ satisfies

$$\|e^{n+1}_O(\mu)\| \le \eta^{n+1}_{N,M} = G_O(n)\,\eta^n_{N,M} + \|P(A^{(n)})^{-1}\|\,\varepsilon^n_{EI}(\mu) + \|P\|\,\|(A^{(n)})^{-1} r^{n+1}(\mu)\|, \qquad (25)$$

where $G_O(n) = \|P(A^{(n)})^{-1}B^{(n)}\| + L_g\,\|P(A^{(n)})^{-1}\|$, $n = 0, \ldots, K-1$.


Proof. Multiplying by $P$ from the left on both sides of the error equation (21), we get

$$P e^{n+1}(\mu) = P\big((A^{(n)})^{-1}B^{(n)} e^n(\mu) + (A^{(n)})^{-1}(g(u^n) - g(\hat{u}^n)) + (A^{(n)})^{-1}(g(\hat{u}^n) - I_M[g(\hat{u}^n)]) + (A^{(n)})^{-1} r^{n+1}(\mu)\big).$$

Applying the Lipschitz condition of $g$ and using the triangle inequality as well as the submultiplicativity of the matrix norm, we have

$$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le G_O(n)\,\|e^n(\mu)\| + \|P(A^{(n)})^{-1}\|\,\varepsilon^n_{EI}(\mu) + \|P\|\,\|(A^{(n)})^{-1} r^{n+1}(\mu)\|. \qquad (26)$$

Replacing $\|e^n(\mu)\|$ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once the error estimation for the field variable is obtained, e.g. (19), a trivial error bound for the output (24) can be given as

$$\|e^{n+1}_O(\mu)\| = \|P e^{n+1}(\mu)\| \le \|P\|\,\|e^{n+1}(\mu)\| \le \|P\|\big(G_F(n)\,\|e^n(\mu)\| + \|(A^{(n)})^{-1}\|\,\varepsilon^n_{EI}(\mu) + \|(A^{(n)})^{-1} r^{n+1}(\mu)\|\big). \qquad (27)$$

The last inequality is true due to the inequality (23). It is obvious that the bound for $\|e^{n+1}_O(\mu)\|$ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived from (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; it is therefore desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Take the error bound for the field variable $c_z$ as an example, and recall the detailed simulation for $c_z$ (see (12)):

$$A c^{n+1}_z = B c^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, h^n_z. \qquad (28)$$

The residual caused by the approximate solution $\hat{c}^n_z$ in (13) is

$$r^{n+1}_{c_z}(\mu) = B \hat{c}^n_z + d^n_z - \frac{1-\varepsilon}{\varepsilon}\,\Delta t\, I_M[h_z(\hat{c}^n_z)] - A \hat{c}^{n+1}_z. \qquad (29)$$


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28), (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition, which does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\hat{c}^n_a, \hat{c}^n_b, \hat{q}^n_z)\| \le L_h\,\|c^n_z - \hat{c}^n_z\|, \qquad n = 0, \ldots, K. \qquad (30)$$

Assuming the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \hat{c}^n_z$ ($n = 1, \ldots, K$) as follows:

$$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{\,n-1-k}\big(\tau\,\varepsilon^k_{EI}(\mu) + \|r^{k+1}_{c_z}(\mu)\|\big), \qquad (31)$$

where $C = \|B\| + \tau L_h$, $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) = \sum_{k=0}^{n-1} (G_{F_c})^{n-1-k}\big(\tau\,\|A^{-1}\|\,\varepsilon^k_{EI}(\mu) + \|A^{-1} r^{k+1}_{c_z}(\mu)\|\big), \qquad (32)$$

where $G_{F_c} = \|A^{-1}B\| + \tau L_h\,\|A^{-1}\|$.

Analogously, the error bound for the output of interest $e^n_{c_z,O}(\mu) = P c^n_z - P\hat{c}^n_z$ can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\|e^{n+1}_{c_z,O}(\mu)\| \le \eta^{n+1}_{N,M,c_z}(\mu) = G_{O_c}\,\eta^n_{N,M,c_z}(\mu) + \tau\,\|PA^{-1}\|\,\varepsilon^n_{EI}(\mu) + \|P\|\,\|A^{-1} r^{n+1}_{c_z}(\mu)\|, \qquad (33)$$

where $G_{O_c} = \|PA^{-1}B\| + \tau L_h\,\|PA^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^{1\times\mathcal{N}}$ in this model, which means the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and therefore will not be presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; likewise, it would involve the corresponding bound (denoted by $\eta^n_{N,M,U}(\mu)$) for the vector $U$ if the output error estimation were derived by considering all the field variables together. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be chosen conservatively large, and the weight $\tau L_h$ is still small because the time step $\Delta t$ is typically very small.

7.3 An early-stop criterion for the greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates with time. Since $\eta^n_{N,M}(\mu)$ (or $\eta^n_{N,M,c_z}(\mu)$, respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that the error estimate, e.g. in (18), may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to allow an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4 we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: $P_{train}$, $\mu^0$, $tol_{RB}$ ($< 1$), $tol_{decay}$
Output: RB $V = [V_1, \ldots, V_N]$
 1: Implement Step 1 in Algorithm 4.
 2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
 3:   Implement Steps 3–6 in Algorithm 4.
 4:   Compute the decay rate of the error bound, $d_\eta = \frac{\eta_{N-1}(\mu^{old}_{\max}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu^{old}_{\max})}$.
 5:   if $d_\eta < tol_{decay}$ then
 6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
 7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
 8:       Stop.
 9:     end if
10:   end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance $tol_{decay}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
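A minimal sketch of the early-stop logic of Algorithm 5, with the POD-Greedy machinery of Algorithm 4 mocked by placeholder callables; the mock error behavior imitates Figures 2 and 3 and is entirely invented.

```python
def greedy_with_early_stop(estimate_bound, true_error, extend_basis,
                           tol_rb, tol_decay, max_n=100):
    """Sketch of Algorithm 5 with mocked-up ingredients.

    estimate_bound(n) -> (eta, mu_max) stands in for Steps 3-6 of
    Algorithm 4; extend_basis grows the RB; all three callables are
    placeholders for the real POD-Greedy machinery.
    """
    eta_old, n = None, 0
    while n < max_n:
        n = extend_basis(n)                 # enrich the RB
        eta, mu_max = estimate_bound(n)
        if eta <= tol_rb:
            return n, "tolerance reached"
        if eta_old is not None:
            decay = (eta_old - eta) / eta_old
            if decay < tol_decay and true_error(n, mu_max) < tol_rb:
                return n, "early stop"      # bound stagnates, true error small
        eta_old = eta
    return n, "max size reached"

# mock behavior: the bound decays, then stagnates near 1e-3, while the
# true error keeps decreasing (qualitatively as in Figures 2 and 3)
bound = lambda n: (10 ** (-0.1 * n) + 1e-3, "mu*")
true = lambda n, mu: 10 ** (-0.15 * n)
result = greedy_with_early_stop(bound, true, lambda n: n + 1,
                                tol_rb=1e-6, tol_decay=1e-3)
print(result)
```

Without the early-stop branch, the stagnating bound would drive the loop to `max_n` even though the true error is long below the tolerance.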

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $Pr(\mu) = \frac{s(\mu)Q}{t_{cyc}}$ while respecting the requirement on the recovery yield $Rec(\mu) = \frac{s(\mu)}{t_{in}(c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t;\mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t;\mu)\,dt$, and $c_{z,O}(t;\mu) = c_z(t, 1;\mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\begin{aligned} &\min_{\mu\in P} -Pr(\mu)\\ &\text{s.t. } Rec_{\min} - Rec(\mu) \le 0, \quad \mu \in P,\\ &c_z(\mu),\ q_z(\mu)\ \text{are the solutions to the system } (3)-(5),\ z = a, b. \end{aligned} \qquad (34)$$

Notice that when solving the system (3)–(5), the time step size has to be taken relatively small so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integrals in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in P$, which causes considerable difficulties in the error estimation and in the generation of the reduced basis.
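Once the outlet concentrations are available on the time grid, $s(\mu)$, $Pr(\mu)$ and $Rec(\mu)$ reduce to quadrature and two scalar formulas. The sketch below uses invented Gaussian outlet peaks, assumed cutting points and invented operating data, not chromatographic solutions.

```python
import math

def trapz(y, t):
    """Composite trapezoidal rule on a (possibly nonuniform) grid."""
    return sum(0.5 * (y[i] + y[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))

dt = 0.005
t = [k * dt for k in range(2001)]                       # grid on [0, 10]
c_a = [math.exp(-((tk - 6.0) ** 2) / 0.5) for tk in t]  # toy peak at t = 6
c_b = [math.exp(-((tk - 4.0) ** 2) / 0.5) for tk in t]  # toy peak at t = 4

t1, t2, t3, t4 = 3.0, 5.0, 5.0, 7.0                     # assumed cutting points
i1, i2, i3, i4 = (round(v / dt) for v in (t1, t2, t3, t4))
s = trapz(c_a[i3:i4 + 1], t[i3:i4 + 1]) + trapz(c_b[i1:i2 + 1], t[i1:i2 + 1])

Q, t_in, t_cyc = 0.1, 1.0, 10.0                         # invented operating data
cf_a = cf_b = 2.9
Pr = s * Q / t_cyc
Rec = s / (t_in * (cf_a + cf_b))
print(Pr, Rec)
```

The fine time grid matters here: the cutting indices are rounded to grid points, so a coarse grid misplaces the cuts and pollutes $s(\mu)$, which is the difficulty described in the text.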

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $P = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $Rec_{\min}$ is taken as 80.0%, and the purity requirements are specified as $Pu_a = 95.0\%$, $Pu_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension of the spatial discretization $N$ in the FOM (12) is taken as 1500.

Table 1 Model parameters and operating conditions for the chromatographic model

Column dimensions [cm] 26 times 105Column porosity ε [-] 04Peclet number Pe [-] 2000Mass-transfer coefficients κz z = a b [1s] 01Feed concentrations cfz z = a b [gl] 29

20

Table 2 Coefficients of the adsorption isotherm equation

Ha1 [-] 269 Hb1 [-] 373Ha2 [-] 01 Hb2 [-] 03Ka1 [lg] 00336 Kb1 [lg] 00446Ka2 [lg] 10 Kb2 [lg] 30

In this section we first illustrate the performance of the adaptive snapshot selectionfor the generation of the RB and CRB and then show the output error estimation forthe generation of the RB Finally we present the results for the ROM-based optimiza-tion of batch chromatography All the computations were done on a PC with Intel(R)Core(TM)2 Quad CPU Q9550 283GHz RAM 400GB unless stated otherwise

81 Performance of the adaptive snapshot selectionTo investigate the performance of the technique of adaptive snapshot selection wecompare the runtime for the generation of the RB and CRB with different thresholdvalues tolASS As is shown in Algorithm 4 in Section 5 the ASS can be combined withthe POD-Greedy algorithm and used for the generation of the RB For the computationof the error indicator ηN (micromax) in Algorithm 4 EI is involved for an efficient offline-online decomposition To efficiently generate a CRB the ASS is also employed Thetraining set for the generation of the CRB is a sample set with 25 sample points ofmicro = (Q tin) uniformly distributed in the parameter domain For each sample pointAlgorithm 3 is used to adaptively choose the snapshots for the generation of the CRBThe runtimes for the CRB generation with different choices of tolASS are shown inTable 3 It is seen that the larger threshold is used the more runtime is saved Thismeans that a lot of redundant information is discarded due to the adaptive selectionprocess Particularly with the tolerance tolASS = 10times 10minus4 the computational timeis reduced by 903 compared to that of the original algorithm without ASS Howeverhow to choose an optimal threshold is empirical and problem-dependent

Table 3 Illustration of the generation of CRBs (Wa Wb) at the same error tolerance(tolCRB = 10 times 10minus7) with different thresholds M prime = 1 is the number ofthe basis for error estimation

tolASS Res(ξaM+M prime ξb

M+M prime) M (Wa Wb) Runtime [h]no ASS ndash 92times 10minus8 85times 10minus8 146 152 625 (-)ASS 10times 10minus4 96times 10minus8 81times 10minus8 147 152 605 (minus903)ASS 10times 10minus3 87times 10minus8 99times 10minus8 147 152 362(minus942)ASS 10times 10minus2 94times 10minus8 62times 10minus8 144 150 270 (minus957)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10⁻⁴ for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10⁻⁷, tol_RB = 1.0 × 10⁻⁶, tol_ASS = 1.0 × 10⁻⁴. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as is shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithms         Runtime [h]
  POD-Greedy         16.22 ¹
  ASS-POD-Greedy     7.92 (−51.2%)

  ¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter µ, the values of Pr(µ) and Rec(µ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(µ) = P c^n_z(µ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator η_N(µ) in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η̄^{n+1}_{N,M,c_z}(µ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1} for a given parameter µ ∈ P. We use the following error bound in Algorithm 4: η_N(µ_max) = max_{µ∈P_train} max_{z∈{a,b}} η̄_{N,M,c_z}(µ), where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η̄^n_{N,M,c_z}(µ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as e^max_N = max_{µ∈P_train} e_N(µ), where e_N(µ) = max_{z∈{a,b}} e_{N,c_z}(µ), e_{N,c_z}(µ) = (1/K) Σ_{n=1}^{K} ‖c^n_{z,O}(µ) − c̃^n_{z,O}(µ)‖, and c̃^n_{z,O}(µ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is already very small. To circumvent the problem, Algorithm 5 is implemented to get an early stop.
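The averaged indicator used to pick the next parameter in the greedy loop is simple post-processing of the stored per-step bounds. Below is a minimal sketch; the function name and data layout are our own, and we assume the per-step output bounds have already been evaluated for every training parameter.

```python
import numpy as np

def greedy_error_indicator(bounds):
    """bounds[mu][z]: length-K sequence of per-step output error bounds
    for component z in {'a', 'b'} at training parameter mu.
    Returns (eta_max, mu_max) realizing max_mu max_z (1/K) * sum_n bounds."""
    eta_max, mu_max = -np.inf, None
    for mu, per_component in bounds.items():
        # time-averaged bound, maximized over the two components
        eta_mu = max(float(np.mean(v)) for v in per_component.values())
        if eta_mu > eta_max:
            eta_max, mu_max = eta_mu, mu
    return eta_max, mu_max
```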

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early stop, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not large. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max. error = max_{µ∈P_val} e_N(µ). It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter µ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (field variable error bound, output error bound, and true output error, plotted against the size N of the RB on a logarithmic error scale). The output error bound η_N(µ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{µ∈P_train} max_{z∈{a,b}} η_{N,M,c_z}(µ), where η_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(µ).

Figure 3: Error bound decay during the RB extension using the early-stop technique, Algorithm 5, and the corresponding maximal true output error (output error bound and true output error, plotted against the size N of the RB on a logarithmic error scale).

Figure 4: Parameter selection in the generation of the RB, over the parameter domain of the feed flow rate Q and the injection period t_in. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10⁻⁷, tol_ASS = 1.0 × 10⁻⁴, tol_RB = 1.0 × 10⁻⁶.

  Simulations             Max. error     Average runtime [s]   SpF
  FOM (N = 1500)          –              312.13                (–)
  ROM (POD-Greedy)        3.79 × 10⁻⁷    6.3                   50
  ROM (ASS-POD-Greedy)    4.58 × 10⁻⁷    6.3                   50

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, 1.3487) (dimensionless concentrations of components a and b plotted against dimensionless time).

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

  min_{µ∈P} −P̃r(µ)
  s.t. Rec_min − R̃ec(µ) ≤ 0,
       c̃^n_z(µ), q̃^n_z(µ) are the RB approximations from the ROM (14), z = a, b.

Here P̃r(µ) and R̃ec(µ) are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let µ^k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ‖µ^{k+1} − µ^k‖ < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10⁻⁴. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to solving the FOM-based optimization is significantly reduced: the speed-up factor (SpF) is 54.
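The structure of the ROM-based optimization loop can be illustrated without the NLopt dependency. The sketch below replaces NLOPT_GN_DIRECT_L with a naive derivative-free pattern search (our own simplification, not the DIRECT algorithm), handles the recovery constraint by a quadratic penalty, and keeps the stopping criterion ‖µ^{k+1} − µ^k‖ < tol_opt used above. The objective `rom_objective` is a toy stand-in for a real ROM evaluation; all names and numbers in it are illustrative.

```python
import numpy as np

def pattern_search(f, mu0, bounds, step=0.05, tol_opt=1e-4, shrink=0.5):
    """Derivative-free minimization of f over a box; stops when the
    accepted iterate moves less than tol_opt, or the mesh is exhausted."""
    mu = np.asarray(mu0, dtype=float)
    lo, hi = (np.asarray(b, dtype=float) for b in zip(*bounds))
    while True:
        candidates = [np.clip(mu + step * d, lo, hi)
                      for i in range(mu.size)
                      for d in (np.eye(mu.size)[i], -np.eye(mu.size)[i])]
        best = min(candidates, key=f)
        if f(best) < f(mu):
            if np.linalg.norm(best - mu) < tol_opt:
                return best
            mu = best
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol_opt:
                return mu

def rom_objective(mu, rec_min=0.8, penalty=1e3):
    """Toy stand-in for -Pr(mu), with Rec(mu) >= rec_min as a penalty."""
    pr = -(mu[0] - 0.08) ** 2 - (mu[1] - 1.05) ** 2 + 0.02  # toy production rate
    rec = 1.0 - 0.1 * mu[1]                                  # toy recovery yield
    return -pr + penalty * max(0.0, rec_min - rec) ** 2

mu_opt = pattern_search(rom_objective, [0.1, 1.0],
                        bounds=[(0.0667, 0.1667), (0.5, 2.0)])
```

In the actual workflow, each objective evaluation inside `pattern_search` would solve the ROM (14) once, which is where the speed-up over the FOM-based optimization comes from.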

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is the cost of constructing and using a surrogate ROM, the other is the cost of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and FOM.

  Simulations       Objective (Pr)   Opt. solution (µ)    N_it ¹   Runtime [h]   SpF
  FOM-based Opt.    0.020264         (0.07964, 1.05445)   202      33.88         –
  ROM-based Opt.    0.020266         (0.07964, 1.05445)   202      0.63          54

  ¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.
[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Académie des Sciences, Paris, Series I, 339 (2004), pp. 667–672.
[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.
[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.
[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).
[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).
[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.
[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.
[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.
[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.
[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.
[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.
[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.
[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained Optimization, Springer, 2003, pp. 268–280.
[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.
[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.
[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.
[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.
[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.
[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.
[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.
[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.
[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.
[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.
[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.
[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal Control of Partial Differential Equations: International Conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.
[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).
[28] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, vol. 31, Cambridge University Press, 2002.
[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.
[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.
[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 455–462.
[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.
[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.
[34] A. T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Version 1.0, Copyright MIT, 2006.
[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.
[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Max Planck Institute Magdeburg Preprints


Proof. Multiplying P from the left on both sides of the error equation (21), we get

  P e^{n+1}(µ) = P ( (A^{(n)})^{−1} B^{(n)} e^n(µ) + (A^{(n)})^{−1} ( g(u^n) − g(ũ^n) )
               + (A^{(n)})^{−1} ( g(ũ^n) − I_M[g(ũ^n)] ) + (A^{(n)})^{−1} r^{n+1}(µ) ).

Applying the Lipschitz condition of g and using the triangle inequality as well as the property of the matrix norm, we have

  ‖e^{n+1}_O(µ)‖ = ‖P e^{n+1}(µ)‖
                ≤ G_O^{(n)} ‖e^n(µ)‖ + ‖P (A^{(n)})^{−1}‖ ε^n_{EI}(µ) + ‖P‖ ‖(A^{(n)})^{−1} r^{n+1}(µ)‖.   (26)

Replacing ‖e^n(µ)‖ in (26) with its bound in (19), we get the proposed output error bound in (25).

Remark 7.7. Once the error estimation for the field variable is obtained, e.g., (19), a trivial error bound for the output (24) can be given as

  ‖e^{n+1}_O(µ)‖ = ‖P e^{n+1}(µ)‖
                ≤ ‖P‖ ‖e^{n+1}(µ)‖
                ≤ ‖P‖ ( G_F^{(n)} ‖e^n(µ)‖ + ‖(A^{(n)})^{−1}‖ ε^n_{EI}(µ) + ‖(A^{(n)})^{−1} r^{n+1}(µ)‖ ).   (27)

The last inequality holds due to the inequality (23). It is obvious that the bound for ‖e^{n+1}_O(µ)‖ in (26) is sharper than that in (27). As a result, the final output error bound in (25) is sharper than the trivial output error bound derived in (27).

7.2 Output error estimation for the batch chromatographic model

The above error estimates are derived for a scalar evolution equation, i.e., a single PDE. For a system of several PDEs, one can analogously derive an error estimation for the underlying system by taking all the field variables as one vector. However, the behavior of the solution to each equation might be quite different; therefore, it is desirable to generate a separate reduced basis for each field variable rather than using a unified basis for all the field variables.

Here we propose to derive an error estimation for each field variable of the underlying system (12) by following the error estimation in Section 7.1. Taking the error bound for the field variable c_z as an example, and recalling the detailed simulation for c_z (see (12)),

  A c^{n+1}_z = B c^n_z + d^n_z − ((1−ε)/ε) Δt h^n_z,   (28)

the residual caused by the approximate solution c̃^n_z in (13) is

  r^{n+1}_{c_z}(µ) = B c̃^n_z + d^n_z − ((1−ε)/ε) Δt I_M[h_z(c̃^n_z)] − A c̃^{n+1}_z.   (29)

Notice that the coefficient matrices A and B are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.

Observe that (28), (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term d^n_z in (28) comes from the Neumann boundary condition, which does not depend on the solution c^n_z. Instead of requiring a Lipschitz continuity condition for h_z as a function of c^n_a, c^n_b and q^n_z, we assume there exists a positive constant L_h such that

  ‖h_z(c^n_a, c^n_b, q^n_z) − h_z(c̃^n_a, c̃^n_b, q̃^n_z)‖ ≤ L_h ‖c^n_z − c̃^n_z‖,  n = 0, ..., K.   (30)

Assuming the initial projection error vanishes, e^0_{c_z}(µ) = 0, we have a similar estimation for the approximation error e^n_{c_z}(µ) = c^n_z − c̃^n_z (n = 1, ..., K) as follows:

  ‖e^n_{c_z}(µ)‖ ≤ Σ_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} ( τ ε^k_{EI}(µ) + ‖r^{k+1}_{c_z}(µ)‖ ),   (31)

where C = ‖B‖ + τ L_h and τ = ((1−ε)/ε) Δt. More tightly,

  ‖e^n_{c_z}(µ)‖ ≤ η^n_{N,M,c_z}(µ)
                = Σ_{k=0}^{n−1} (G_{F,c})^{n−1−k} ( τ ‖A^{−1}‖ ε^k_{EI}(µ) + ‖A^{−1} r^{k+1}_{c_z}(µ)‖ ),   (32)

where G_{F,c} = ‖A^{−1} B‖ + τ L_h ‖A^{−1}‖.

Analogously, the error bound for the output of interest e^n_{c_z,O}(µ) = P c^n_z − P c̃^n_z can be obtained based on the error bound of the field variable. Similar to (25), we have

  ‖e^{n+1}_{c_z,O}(µ)‖ ≤ η̄^{n+1}_{N,M,c_z}(µ)
                      = G_{O,c} η^n_{N,M,c_z}(µ) + τ ‖P A^{−1}‖ ε^n_{EI}(µ) + ‖P‖ ‖A^{−1} r^{n+1}_{c_z}(µ)‖,   (33)

where G_{O,c} = ‖P A^{−1} B‖ + τ L_h ‖P A^{−1}‖. Note that P = (0, ..., 0, 1) ∈ R^N in this model, which means the norm of the output error e^{n+1}_{c_z,O}(µ) is the absolute value of the last entry of the error vector e^{n+1}_{c_z}(µ).

Remark 7.8. The error estimates for q_a and q_b in (12) can also be obtained similarly by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on c_a and c_b, the error estimates for q_a and q_b are not needed for the output error bound and therefore will not be presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables U = (c_a, c_b, q_a, q_b)^T by considering h_z(c_a, c_b, q_z) as a function of the vector U. However, the error bound η^n_{N,M,c_z}(µ) for the field variable c_z is involved in the output error bound in (33), and so would be the corresponding error bound (denoted by η^n_{N,M,U}(µ)) for the vector U if the output error estimation were derived by considering all the field variables together. Obviously, the error bound η^n_{N,M,U}(µ) is much rougher than the bound η^n_{N,M,c_z}(µ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be conservatively chosen large, and the weight τ L_h is still small because the time step Δt is typically very small.
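Both bounds can be accumulated with a single forward recursion over the time steps, since (32) satisfies η^{n+1} = G_{F,c} η^n + τ‖A⁻¹‖ε^n_{EI} + ‖A⁻¹ r^{n+1}‖. A minimal sketch follows; the function name and input layout are our own, and the norm constants as well as the per-step EI-error and residual norms are assumed precomputed.

```python
def output_error_bounds(nAinv, nAinvB, nPAinv, nPAinvB, nP, L_h, tau,
                        eps_EI, Ainv_res):
    """eps_EI[n] = eps^n_EI(mu); Ainv_res[n] = ||A^{-1} r^{n+1}(mu)||.
    Returns the output bounds (33) for n = 0, ..., K-1, accumulating
    the field bound (32) on the fly."""
    G_Fc = nAinvB + tau * L_h * nAinv      # field amplification factor
    G_Oc = nPAinvB + tau * L_h * nPAinv    # output amplification factor
    eta_field, out = 0.0, []               # eta^0 = 0 (exact initial projection)
    for eps_n, r_n1 in zip(eps_EI, Ainv_res):
        # output bound at t^{n+1}, eq. (33)
        out.append(G_Oc * eta_field + tau * nPAinv * eps_n + nP * r_n1)
        # field bound recursion, eq. (32)
        eta_field = G_Fc * eta_field + tau * nAinv * eps_n + r_n1
    return out
```

The multiplicative factors G_{F,c}, G_{O,c} make the accumulation over time explicit: whenever G_{F,c} > 1, the bound grows with every step, which is the stagnation effect addressed by the early-stop criterion below.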

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables (η^n_{N,M}(µ) or η^n_{N,M,c_z}(µ)) accumulates with time. Since η^n_{N,M}(µ) (or η^n_{N,M,c_z}(µ), respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that the error estimate, e.g., in (18), may lose sharpness when many time instances t^n, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as is shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate d_η of the error bound. If d_η is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter µ_max determined by the greedy algorithm. When the true output error at µ_max is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5 RB generation using ASS-POD-Greedy with early-stop
Input: P_train, µ⁰, tol_RB (< 1), tol_decay
Output: RB V = [V_1, ..., V_N]
 1: Implement Step 1 in Algorithm 4.
 2: while the error η_N(µ_max) > tol_RB do
 3:   Implement Steps 3−6 in Algorithm 4.
 4:   Compute the decay rate of the error bound, d_η = (η_{N−1}(µ^old_max) − η_N(µ_max)) / η_{N−1}(µ^old_max).
 5:   if d_η < tol_decay then
 6:     Compute the true output error at the selected parameter µ_max, e_N(µ_max).
 7:     if e_N(µ_max) < tol_RB then
 8:       Stop.
 9:     end if
10:   end if
11: end while


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
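The early-stop logic of Algorithm 5 amounts to a small wrapper around the greedy loop. The schematic sketch below stubs the basis-extension step and the true-error check as callables; all names are our own, and `extend_basis()` is assumed to perform one greedy extension and return the pair (η_N(µ_max), µ_max).

```python
def greedy_with_early_stop(extend_basis, true_output_error,
                           tol_RB, tol_decay, max_iter=200):
    """Greedy RB extension that stops early once the error bound stagnates
    (decay rate below tol_decay) while the true output error is below tol_RB."""
    eta_old, (eta, mu_max) = None, extend_basis()
    for _ in range(max_iter):
        if eta <= tol_RB:
            break                                # regular convergence
        if eta_old is not None:
            d_eta = (eta_old - eta) / eta_old    # decay rate of the bound
            if d_eta < tol_decay and true_output_error(mu_max) < tol_RB:
                break                            # early stop: bound stagnates
        eta_old = eta
        eta, mu_max = extend_basis()             # one more greedy extension
    return eta
```

Note that the (expensive) true output error is only evaluated once the cheap bound stagnates, which keeps the extra validation cost of the early-stop criterion low.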

8 Numerical experiments

In this work the RB methodology is employed to accelerate the optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable µ = (Q, t_in) is optimally chosen in a reasonable parameter domain to maximize the production rate Pr(µ) = s(µ)Q / t_cyc while respecting the requirement on the recovery yield Rec(µ) = s(µ) / (t_in (c^f_a + c^f_b)). Here s(µ) = ∫_{t_3}^{t_4} c_{a,O}(t; µ) dt + ∫_{t_1}^{t_2} c_{b,O}(t; µ) dt, and c_{z,O}(t; µ) = c_z(t, 1; µ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

  min_{µ∈P} −Pr(µ)
  s.t. Rec_min − Rec(µ) ≤ 0, µ ∈ P,
       c_z(µ), q_z(µ) are the solutions to the system (3)−(5), z = a, b.   (34)

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integral in s(µ) can be evaluated accurately. The small step size results in a very large number (up to O(10⁴)) of total time steps for every parameter µ ∈ P, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.
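For a fixed parameter, the performance indicators are simple post-processing of the outlet trajectories. A sketch with a trapezoidal quadrature follows; the function name is our own, the cutting times are assumed to have been determined from the purity requirements, and the cycle duration t_cyc is passed in as given.

```python
import numpy as np

def performance_indicators(t, c_a_out, c_b_out, cuts, Q, t_in, c_feed, t_cyc):
    """Pr = s*Q/t_cyc and Rec = s/(t_in*(cf_a + cf_b)) with
    s = int_{t3}^{t4} c_{a,O} dt + int_{t1}^{t2} c_{b,O} dt."""
    t1, t2, t3, t4 = cuts

    def trapz_between(y, lo, hi):
        # trapezoidal rule restricted to the grid points in [lo, hi]
        mask = (t >= lo) & (t <= hi)
        ts, ys = t[mask], y[mask]
        return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(ts)))

    s = trapz_between(c_a_out, t3, t4) + trapz_between(c_b_out, t1, t2)
    return s * Q / t_cyc, s / (t_in * sum(c_feed))
```

This also makes the sensitivity to the time step visible: the cutting points t_i are snapped to the grid, so a coarse grid perturbs both integration limits and the quadrature itself.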

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable µ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(µ). To capture the dynamics precisely, the dimension N of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

  Column dimensions [cm]                           2.6 × 10.5
  Column porosity ε [-]                            0.4
  Peclet number Pe [-]                             2000
  Mass-transfer coefficients κ_z, z = a, b [1/s]   0.1
  Feed concentrations c^f_z, z = a, b [g/l]        2.9


Table 2: Coefficients of the adsorption isotherm equation.

  H_a1 [-]     2.69     H_b1 [-]     3.73
  H_a2 [-]     0.1      H_b2 [-]     0.3
  K_a1 [l/g]   0.0336   K_b1 [l/g]   0.0446
  K_a2 [l/g]   1.0      K_b2 [l/g]   3.0

In this section we first illustrate the performance of the adaptive snapshot selectionfor the generation of the RB and CRB and then show the output error estimation forthe generation of the RB Finally we present the results for the ROM-based optimiza-tion of batch chromatography All the computations were done on a PC with Intel(R)Core(TM)2 Quad CPU Q9550 283GHz RAM 400GB unless stated otherwise

81 Performance of the adaptive snapshot selectionTo investigate the performance of the technique of adaptive snapshot selection wecompare the runtime for the generation of the RB and CRB with different thresholdvalues tolASS As is shown in Algorithm 4 in Section 5 the ASS can be combined withthe POD-Greedy algorithm and used for the generation of the RB For the computationof the error indicator ηN (micromax) in Algorithm 4 EI is involved for an efficient offline-online decomposition To efficiently generate a CRB the ASS is also employed Thetraining set for the generation of the CRB is a sample set with 25 sample points ofmicro = (Q tin) uniformly distributed in the parameter domain For each sample pointAlgorithm 3 is used to adaptively choose the snapshots for the generation of the CRBThe runtimes for the CRB generation with different choices of tolASS are shown inTable 3 It is seen that the larger threshold is used the more runtime is saved Thismeans that a lot of redundant information is discarded due to the adaptive selectionprocess Particularly with the tolerance tolASS = 10times 10minus4 the computational timeis reduced by 903 compared to that of the original algorithm without ASS Howeverhow to choose an optimal threshold is empirical and problem-dependent

Table 3 Illustration of the generation of CRBs (Wa Wb) at the same error tolerance(tolCRB = 10 times 10minus7) with different thresholds M prime = 1 is the number ofthe basis for error estimation

tolASS Res(ξaM+M prime ξb

M+M prime) M (Wa Wb) Runtime [h]no ASS ndash 92times 10minus8 85times 10minus8 146 152 625 (-)ASS 10times 10minus4 96times 10minus8 81times 10minus8 147 152 605 (minus903)ASS 10times 10minus3 87times 10minus8 99times 10minus8 147 152 362(minus942)ASS 10times 10minus2 94times 10minus8 62times 10minus8 144 150 270 (minus957)

Table 4 shows the comparison of the runtime for the generation of the RB by usingthe POD-Greedy algorithm with and without ASS Note that the CRB is precomputed

21

with tolASS = 10times 10minus4 for the ASS and the corresponding runtime is not includedhere The training set is a sample set with 60 points uniformly distributed in theparameter domain Here and in the following the tolerances are chosen as tolCRB =10 times 10minus7 tolRB = 10 times 10minus6 tolASS = 10 times 10minus4 It is seen that the runtime forgenerating the ROM with the ASS is reduced by 512 compared to that without ASSat the same tolerance tolRB Moreover the accuracy of the resulting reduced modelwith ASS is almost the same as that without ASS as is shown in Table 5

Table 4 Comparison of the runtime for the RB generation using the POD-Greedyalgorithm with early-stop (Algorithm 5) with and without ASS

Algorithms Runtime [h]POD-Greedy 1622 1

ASS-POD-Greedy 792 (minus512)1 Due to the memory limitation of the PC the computation was done

on a Workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU)267 GHz RAM 1TB

82 Performance of the output error boundAs aforementioned it is wise to use an efficient error estimation for the output forthe generation of the RB In the chromatographic model given a parameter micro thevalues of Pr(micro) and Rec(micro) in (34) are determined by the concentrations at the outletof the column cnzO(micro) = Pcnz (micro) n = 0 K z = a b which constitute the outputof the FOM in (12) Consequently the output error bound will be taken as the errorindicator ηN (micro) in the greedy algorithm (eg Algorithm 4 5) for the generation of theRB which yields a goal-oriented ROM

Note that the error bound ηn+1NMcz

(micro) in (33) is the bound for the output error ofthe component cz at the time instance tn+1 for a given parameter micro isin P We use thefollowing error bound in Algorithm 4 ηN (micromax) = max

microisinPtrainmaxzisinab

macrηNMcz (micro) where

macrηNMcz(micro) = 1

K

sumKn=1 η

nNMcz

(micro) is the average of the error bound for the outputof cz in the whole evolution process In accordance we define the reference trueoutput error as emax

N = maxmicroisinPtrain

eN (micro) where eN (micro) = maxzisinab

eNcz(micro) eNcz

(micro) =1K

sumKn=1 cnzO(micro)minuscnzO(micro)|| and cnzO(micro) is the approximate output response computed

from the ROM in (14)Figure 2 shows the error decay and the true error as the RB is extended by using

Algorithm 4 It can be seen that the output error bound stagnates after certain stepsalthough the true error is very small already To circumvent the problem Algorithm 5is implemented to get an early-stop

Figure 3 shows the results for Algorithm 5, where $tol_{decay} = 0.03$. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not so big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set $P_{val}$ with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as $\text{Max. error} = \max_{\mu\in P_{val}} e_N(\mu)$. It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance $tol_{RB}$. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$ are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error (maximal error over $P_{train}$ vs. the size $N$ of the RB; curves: field variable error bound, output error bound, true output error). The output error bound $\eta_N(\mu_{\max})$ and the maximal true output error $e^{\max}_N$ are defined in Section 8.2; the field variable error bound is defined as $\eta_N = \max_{\mu\in P_{train}} \max_{z\in\{a,b\}} \eta_{N,M,c_z}(\mu)$, where $\eta_{N,M,c_z}(\mu) = \frac{1}{K}\sum_{n=1}^{K} \eta^n_{N,M,c_z}(\mu)$.

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error (maximal error over $P_{train}$ vs. the size $N$ of the RB; curves: output error bound, true output error).

Figure 4: Parameter selection in the generation of the RB (feed flow rate $Q$ vs. injection period $t_{in}$). The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set $P_{val}$ with 600 random sample points. Tolerances for the generation of the ROM: $tol_{CRB} = 1.0\times10^{-7}$, $tol_{ASS} = 1.0\times10^{-4}$, $tol_{RB} = 1.0\times10^{-6}$.

    Simulation             Max. error        Average runtime [s]   SpF
    FOM (N = 1500)         –                 312.13                (–)
    ROM, POD-Greedy        3.79 × 10^{-7}    6.3                   50
    ROM, ASS-POD-Greedy    4.58 × 10^{-7}    6.3                   50

Figure 5: Concentrations at the outlet of the column (dimensionless concentration vs. dimensionless time) computed using the FOM (N = 1500) and the ROM (N = 47) at the parameter $\mu = (Q, t_{in}) = (0.1018, 1.3487)$; curves: $c_a$-FOM, $c_b$-FOM, $c_a$-ROM, $c_b$-ROM.

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

$$\min_{\mu\in\mathcal{P}} -\hat{\mathrm{Pr}}(\mu)$$
$$\text{s.t.}\quad \mathrm{Rec}_{\min} - \hat{\mathrm{Rec}}(\mu) \le 0,$$
$$\hat c^n_z(\mu),\ \hat q^n_z(\mu)\ \text{are the RB approximations from the ROM (14)},\ z = a, b.$$

Here $\hat{\mathrm{Pr}}(\mu)$ and $\hat{\mathrm{Rec}}(\mu)$ are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter $\mu$, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let $\mu_k$ be the vector of parameters determined by the optimization procedure at the $k$-th iteration, $k = 1, 2, \ldots$. When $\|\mu_{k+1} - \mu_k\| < tol_{opt}$, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as $tol_{opt} = 1.0\times10^{-4}$. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime for solving the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is that of constructing and using a surrogate ROM; the other is that of directly using the original FOM.
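As a rough back-of-the-envelope check of this balance, using the runtimes reported in this section and ignoring the optimizer's own overhead, the break-even point between the two strategies can be estimated as:

```python
# Average simulation times from Table 5 [s] and offline costs with ASS
# from Tables 3 and 4 [h]; values as reported in this section.
t_fom = 312.13           # one detailed (FOM) simulation
t_rom = 6.3              # one reduced (ROM) simulation
offline_h = 6.05 + 7.92  # CRB generation + RB generation with ASS

# number of full-order-equivalent queries after which the ROM pays off
break_even = offline_h * 3600.0 / (t_fom - t_rom)  # roughly 164 queries
```

For the optimization run reported below (202 iterations), the offline investment is therefore already amortized within a single optimization.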

Table 6: Comparison of the optimization based on the ROM and FOM.

    Simulation        Objective (Pr)   Opt. solution (µ)     N_it^1   Runtime [h]   SpF
    FOM-based Opt.    0.020264         (0.07964, 1.05445)    202      33.88         –
    ROM-based Opt.    0.020266         (0.07964, 1.05445)    202      0.63          54

    ^1 N_it denotes the number of iterations required to converge.

9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique was proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Notice that the coefficient matrices $A$ and $B$ are independent of time, i.e., they are constant matrices. This indicates that the following error bounds in (32) and (33) are relatively simple and cheap to compute.
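This observation can be exploited in an implementation: because $A$ and $B$ are time-independent, a factorization of $A$ and the constant norms entering the bounds can be computed once and reused at every time step and for the whole training set. A small sketch, with toy matrices standing in for the FV discretization:

```python
import numpy as np

N = 50  # toy dimension; the matrices below only mimic the structure
A = np.eye(N) + 0.1 * np.diag(np.ones(N - 1), -1)  # lower bidiagonal
B = 0.9 * np.eye(N)

# Prepared once, reused for all time steps k and all parameters:
A_inv = np.linalg.inv(A)                 # in practice: a (sparse) LU factorization
norm_A_inv = np.linalg.norm(A_inv, 2)    # the constant ||A^{-1}||
G_const = np.linalg.norm(A_inv @ B, 2)   # the constant ||A^{-1}B|| in G_{F,c}

# Applying the stored inverse to the residuals r^{k+1} then costs only a
# matrix-vector product per time step:
rng = np.random.default_rng(0)
residuals = [rng.standard_normal(N) for _ in range(4)]
solved = [A_inv @ r for r in residuals]
```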

Observe that (28) and (29) correspond to (15) and (16) in the general case. Compared to the general form (15), the additional term $d^n_z$ in (28) comes from the Neumann boundary condition, which does not depend on the solution $c^n_z$. Instead of requiring a Lipschitz continuity condition for $h_z$ as a function of $c^n_a$, $c^n_b$ and $q^n_z$, we assume there exists a positive constant $L_h$ such that

$$\|h_z(c^n_a, c^n_b, q^n_z) - h_z(\hat c^n_a, \hat c^n_b, \hat q^n_z)\| \le L_h \|c^n_z - \hat c^n_z\|, \quad n = 0, \ldots, K. \qquad (30)$$

Assuming the initial projection error vanishes, $e^0_{c_z}(\mu) = 0$, we have a similar estimation for the approximation error $e^n_{c_z}(\mu) = c^n_z - \hat c^n_z$ ($n = 1, \ldots, K$) as follows:

$$\|e^n_{c_z}(\mu)\| \le \sum_{k=0}^{n-1} \|A^{-1}\|^{n-k}\, C^{\,n-1-k} \left(\tau\, \varepsilon^k_{EI}(\mu) + \|r^{k+1}_{c_z}(\mu)\|\right), \qquad (31)$$

where $C = \|B\| + \tau L_h$, $\tau = \frac{1-\varepsilon}{\varepsilon}\Delta t$. More tightly,

$$\|e^n_{c_z}(\mu)\| \le \eta^n_{N,M,c_z}(\mu) := \sum_{k=0}^{n-1} (G_{F,c})^{n-1-k} \left(\tau \|A^{-1}\|\, \varepsilon^k_{EI}(\mu) + \|A^{-1} r^{k+1}_{c_z}(\mu)\|\right), \qquad (32)$$

where $G_{F,c} = \|A^{-1}B\| + \tau L_h \|A^{-1}\|$.

Analogously, the error bound for the output of interest, $e^{n}_{c_z,O}(\mu) = P c^n_z - P \hat c^n_z$, can be obtained based on the error bound of the field variable. Similar to (25), we have

$$\|e^{n+1}_{c_z,O}(\mu)\| \le \bar\eta^{n+1}_{N,M,c_z}(\mu) := G_{O,c}\, \eta^n_{N,M,c_z}(\mu) + \tau \|PA^{-1}\|\, \varepsilon^n_{EI}(\mu) + \|P\|\, \|A^{-1} r^{n+1}_{c_z}(\mu)\|, \qquad (33)$$

where $G_{O,c} = \|PA^{-1}B\| + \tau L_h \|PA^{-1}\|$. Note that $P = (0, \ldots, 0, 1) \in \mathbb{R}^N$ in this model, which means the norm of the output error $e^{n+1}_{c_z,O}(\mu)$ is the absolute value of the last entry of the error vector $e^{n+1}_{c_z}(\mu)$.

Remark 7.8. The error estimates for $q_a$ and $q_b$ in (12) can also be obtained similarly, by following the derivation in Section 7.1. As the output of interest for the system in (12) only depends on $c_a$ and $c_b$, the error estimates for $q_a$ and $q_b$ are not needed for the output error bound and therefore will not be presented here.

Remark 7.9. As mentioned above, it is possible to derive an error estimation for the field variables $U = (c_a, c_b, q_a, q_b)^T$ by considering $h_z(c_a, c_b, q_z)$ as a function of the vector $U$. However, the output error bound in (33) involves the error bound $\eta^n_{N,M,c_z}(\mu)$ for the field variable $c_z$; if the output error estimation is instead derived by considering all the field variables together, it involves the corresponding error bound (denoted as $\eta^n_{N,M,U}(\mu)$) for the vector $U$. Obviously, the error bound $\eta^n_{N,M,U}(\mu)$ is much rougher than the bound $\eta^n_{N,M,c_z}(\mu)$.

The assumption (30) is easily fulfilled in practice. In fact, the constant $L_h$ can be conservatively chosen large and the weight $\tau L_h$ is still small, because the time step $\Delta t$ is typically very small.
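In an implementation, the bound (32) need not be re-summed at every step: it satisfies the recursion $\eta^{n}_{N,M,c_z} = G_{F,c}\,\eta^{n-1}_{N,M,c_z} + \tau\|A^{-1}\|\varepsilon^{n-1}_{EI} + \|A^{-1}r^{n}_{c_z}\|$. A sketch with hypothetical inputs (in practice the EI errors and residual norms come from the actual ROM simulation):

```python
def field_error_bound(norm_Ainv, G_Fc, tau, eps_EI, r_norms):
    """Evaluate eta^n in (32) for n = 1..K via its recursive form.

    eps_EI[k]  ~ EI error      eps_EI^k(mu),            k = 0..K-1
    r_norms[k] ~ residual term ||A^{-1} r^{k+1}(mu)||,  k = 0..K-1
    """
    eta, out = 0.0, []
    for k in range(len(eps_EI)):
        # eta^{k+1} = G_Fc * eta^k + new EI and residual contributions
        eta = G_Fc * eta + tau * norm_Ainv * eps_EI[k] + r_norms[k]
        out.append(eta)
    return out

# hypothetical data: constant EI error, decaying residual norms
etas = field_error_bound(norm_Ainv=2.0, G_Fc=1.1, tau=0.01,
                         eps_EI=[1e-6] * 3, r_norms=[1e-5, 1e-6, 1e-7])
```

The recursion also makes the accumulation discussed below explicit: whenever $G_{F,c} \ge 1$, the bound can only grow with $n$.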

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables ($\eta^n_{N,M}(\mu)$ or $\eta^n_{N,M,c_z}(\mu)$) accumulates with time. Since $\eta^n_{N,M}(\mu)$ (respectively $\eta^n_{N,M,c_z}(\mu)$) is involved in the output error bound in (25) (respectively (33)), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g., [30], where it is pointed out that the error estimate, e.g., in (18), may lose sharpness when many time instances $t^n$, $n = 0, 1, \ldots, K$, are needed for a given time interval $[0, T]$, which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate $d_\eta$ of the error bound. If $d_\eta$ is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter $\mu_{\max}$ determined by the greedy algorithm. When the true output error at $\mu_{\max}$ is smaller than $tol_{RB}$, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process should continue.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop
Input: $P_{train}$, $\mu^0$, $tol_{RB}$ $(< 1)$, $tol_{decay}$
Output: RB $V = [V_1, \ldots, V_N]$
1: Implement Step 1 in Algorithm 4.
2: while the error $\eta_N(\mu_{\max}) > tol_{RB}$ do
3:   Implement Steps 3−6 in Algorithm 4.
4:   Compute the decay rate of the error bound, $d_\eta = \frac{\eta_{N-1}(\mu^{old}_{\max}) - \eta_N(\mu_{\max})}{\eta_{N-1}(\mu^{old}_{\max})}$.
5:   if $d_\eta < tol_{decay}$ then
6:     Compute the true output error at the selected parameter $\mu_{\max}$: $e_N(\mu_{\max})$.
7:     if $e_N(\mu_{\max}) < tol_{RB}$ then
8:       Stop.
9:     end if
10:  end if
11: end while
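Steps 4–8 can be sketched as a small helper (illustrative names; the true-error evaluation would run the FOM and the ROM at $\mu_{\max}$):

```python
def early_stop(eta_prev, eta_new, true_error_at, mu_max, tol_decay, tol_RB):
    """Return True if the greedy extension should stop early: the bound
    stagnates (small decay rate) AND the true output error at the
    selected parameter is already below tol_RB."""
    d_eta = (eta_prev - eta_new) / eta_prev     # Step 4: decay rate
    if d_eta < tol_decay:                       # Step 5: stagnation detected
        return true_error_at(mu_max) < tol_RB   # Steps 6-7: validate
    return False

# stagnating bound (1% decay) with a tiny true error -> stop
stop = early_stop(1.0e-2, 0.99e-2, lambda mu: 1e-8, (0.1, 1.0),
                  tol_decay=0.03, tol_RB=1e-6)
```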


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance $tol_{decay}$ should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate the optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable $\mu = (Q, t_{in})$ is optimally chosen in a reasonable parameter domain to maximize the production rate $\mathrm{Pr}(\mu) = \frac{s(\mu)\, Q}{t_{cyc}}$ while respecting the requirement on the recovery yield $\mathrm{Rec}(\mu) = \frac{s(\mu)}{t_{in}(c^f_a + c^f_b)}$. Here $s(\mu) = \int_{t_3}^{t_4} c_{a,O}(t; \mu)\,dt + \int_{t_1}^{t_2} c_{b,O}(t; \mu)\,dt$, and $c_{z,O}(t; \mu) = c_z(t, 1; \mu)$ is the concentration of component $z$ ($z = a, b$) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

$$\min_{\mu\in\mathcal{P}} -\mathrm{Pr}(\mu)$$
$$\text{s.t.}\quad \mathrm{Rec}_{\min} - \mathrm{Rec}(\mu) \le 0,\ \mu \in \mathcal{P},$$
$$c_z(\mu),\ q_z(\mu)\ \text{are the solutions to the system (3)−(5)},\ z = a, b. \qquad (34)$$

Notice that when solving the system (3)−(5), the time step size has to be taken relatively small, so that the cutting points $t_i$, $i = 1, \ldots, 4$, can be determined properly and the integral in $s(\mu)$ can be evaluated accurately. The small step size results in a very large number (up to $O(10^4)$) of total time steps for every parameter $\mu \in \mathcal{P}$, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.
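To make the evaluation of the objective concrete, here is a sketch of computing $s(\mu)$, $\mathrm{Pr}(\mu)$ and $\mathrm{Rec}(\mu)$ from stored outlet trajectories. The function signature and the cut-point handling are illustrative; in the paper the cutting times are determined by the purity requirements:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def performance(t, ca_out, cb_out, cuts, Q, t_in, cf_a, cf_b, t_cyc):
    """Evaluate s(mu), Pr(mu) = s*Q/t_cyc and Rec(mu) = s/(t_in*(cf_a+cf_b))."""
    t1, t2, t3, t4 = cuts
    wa = (t >= t3) & (t <= t4)  # collection window of component a
    wb = (t >= t1) & (t <= t2)  # collection window of component b
    s = _trapz(ca_out[wa], t[wa]) + _trapz(cb_out[wb], t[wb])
    return s, s * Q / t_cyc, s / (t_in * (cf_a + cf_b))

# toy check with constant unit concentrations: s = (t4-t3) + (t2-t1) = 2
t = np.linspace(0.0, 10.0, 1001)
ones = np.ones_like(t)
s, Pr, Rec = performance(t, ones, ones, (1.0, 2.0, 3.0, 4.0),
                         Q=0.1, t_in=1.0, cf_a=2.9, cf_b=2.9, t_cyc=10.0)
```

The fine time grid mentioned above is what makes each of these evaluations expensive with the FOM, and cheap once the trajectories come from the ROM.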

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$. The minimum recovery yield $\mathrm{Rec}_{\min}$ is taken as 80.0%, and the purity requirements are specified as $\mathrm{Pu}_a = 95.0\%$, $\mathrm{Pu}_b = 95.0\%$, which determine the cutting points $t_2$ and $t_3$ in $s(\mu)$. To capture the dynamics precisely, the dimension $\mathcal{N}$ of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

    Column dimensions [cm]                             2.6 × 10.5
    Column porosity ε [−]                              0.4
    Peclet number Pe [−]                               2000
    Mass-transfer coefficients κ_z, z = a, b [1/s]     0.1
    Feed concentrations c^f_z, z = a, b [g/l]          2.9


Table 2: Coefficients of the adsorption isotherm equation.

    H_{a1} [−]     2.69      H_{b1} [−]     3.73
    H_{a2} [−]     0.1       H_{b2} [−]     0.3
    K_{a1} [l/g]   0.0336    K_{b1} [l/g]   0.0446
    K_{a2} [l/g]   1.0       K_{b2} [l/g]   3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results of the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550, 2.83 GHz, RAM 4.00 GB, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values $tol_{ASS}$. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator $\eta_N(\mu_{\max})$ in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of $\mu = (Q, t_{in})$ uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of $tol_{ASS}$ are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance $tol_{ASS} = 1.0\times10^{-4}$, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.
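The idea behind the adaptive selection can be sketched as follows: walk along the trajectory and keep a snapshot only when it carries information not already represented by the snapshots kept so far. The relative-residual test below is one natural realization for illustration, not necessarily the exact criterion of Algorithm 3:

```python
import numpy as np

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Keep a snapshot only if its relative residual w.r.t. the span of
    the snapshots already selected exceeds tol_ass (Gram-Schmidt test)."""
    basis = []
    for u in trajectory:
        r = u.astype(float).copy()
        for q in basis:                    # project out selected directions
            r -= np.dot(q, r) * q
        if np.linalg.norm(r) > tol_ass * max(np.linalg.norm(u), 1e-30):
            basis.append(r / np.linalg.norm(r))
    return basis

# a nearly redundant trajectory: the middle snapshot is discarded
snaps = [np.array([1.0, 0.0]), np.array([1.0, 1e-9]), np.array([0.0, 1.0])]
kept = adaptive_snapshot_selection(snaps, tol_ass=1e-4)
```

A larger `tol_ass` discards more snapshots, which is exactly the trade-off between runtime and accuracy reported in Table 3.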

Table 3: Illustration of the generation of the CRBs $(W_a, W_b)$ at the same error tolerance ($tol_{CRB} = 1.0\times10^{-7}$) with different thresholds. $M' = 1$ is the number of basis vectors used for error estimation.

             tol_ASS         Res(ξ^a_{M+M'}), Res(ξ^b_{M+M'})     M (W_a, W_b)   Runtime [h]
    no ASS   –               9.2 × 10^{-8}, 8.5 × 10^{-8}         146, 152       62.5 (−)
    ASS      1.0 × 10^{-4}   9.6 × 10^{-8}, 8.1 × 10^{-8}         147, 152       6.05 (−90.3%)
    ASS      1.0 × 10^{-3}   8.7 × 10^{-8}, 9.9 × 10^{-8}         147, 152       3.62 (−94.2%)
    ASS      1.0 × 10^{-2}   9.4 × 10^{-8}, 6.2 × 10^{-8}         144, 150       2.70 (−95.7%)

Table 4 shows a comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that for the ASS variant, the CRB is precomputed with $tol_{ASS} = 1.0\times10^{-4}$, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as $tol_{CRB} = 1.0\times10^{-7}$, $tol_{RB} = 1.0\times10^{-6}$, $tol_{ASS} = 1.0\times10^{-4}$. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS, at the same tolerance $tol_{RB}$. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

    Algorithm          Runtime [h]
    POD-Greedy         16.22^1
    ASS-POD-Greedy     7.92 (−51.2%)

    ^1 Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.


[4] P Benner L Feng S Li and Y zhang Reduced-order modeling and rom-based optimization of batch chromatography in ENUMATH 2013 Proceedingsaccepted

[5] P Benner S Gugercin and K Willcox A survey of model reductionmethods for parametric systems MPI Magdeburg Preprints (2013)

[6] P Benner E Sachs and S Volkwein Model Order Reduction for PDEConstrained Optimization Preprints Konstanzer Online-Publikations-System(KOPS) (2014)

[7] L T Biegler O Ghattas M Heinkenschloss D Keyes and B vanBloemen Waanders Real-Time PDE-constrained Optimization Society for In-dustrial and Applied Mathematics 2007

27

[8] L T Biegler O Ghattas M Heinkenschloss and B van BloemenWaanders Large-scale PDE-constrained Optimization Springer 2003

[9] T Bui-Thanh Model-constrained optimization methods for reduction of param-eterized large-scale systems PhD thesis Massachusetts Institute of Technology2007

[10] T Bui-Thanh K Willcox and O Ghattas Model reduction for large-scalesystems with high-dimensional parametric input space SIAM Journal on ScientificComputing 30 (2008) pp 3270ndash3288

[11] M Drohmann B Haasdonk and M Ohlberger Reduced basis approxima-tion for nonlinear parametrized evolution equations based on empirical operatorinterpolation SIAM Journal on Scientific Computing 34 (2012) pp 937ndash969

[12] J L Eftang D J Knezevic and A T Patera An hp certified reduced ba-sis method for parametrized parabolic partial differential equations Mathematicaland Computer Modelling of Dynamical Systems 17 (2011) pp 395ndash422

[13] M Fahl Trust-region methods for flow control based on reduced order modellingPhD thesis Universitat Trier 2000

[14] M Fahl and E W Sachs Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition in Large-scalePDE-constrained optimization Springer 2003 pp 268ndash280

[15] L Feng and P Benner A robust algorithm for parametric model order reduc-tion Proceedings in Applied Mathematics and Mechanics 7 (2008) pp 1021501ndash1021502

[16] L Feng P Benner and J G Korvink Subspace recycling accelerates theparametric macro-modeling of MEMS International Journal for Numerical Meth-ods in Engineering 94 (2013) pp 84ndash110

[17] W Gao and S Engell Iterative set-point optimization of batch chromatogra-phy Computers amp Chemical Engineering 29 (2005) pp 1401ndash1409

[18] M A Grepl Reduced-basis approximation a posteriori error estimation forparabolic partial differential equations PhD thesis Massachusetts Institute ofTechnology 2005

[19] D Gromov S Li and J Raisch A hierarchical approach to optimal con-trol of a hybrid chromatographic batch process in Advanced Control of ChemicalProcesses vol 7 2009 pp 339ndash344

[20] G Guiochon A Felinger D G Shirazi and A M Katti Fundamentalsof Preparative and Nonlinear Chromatography Academic Press 2006

[21] B Haasdonk Convergence rates of the POD-Greedy method ESAIM Mathe-matical Modelling and Numerical Analysis 47 (2013) pp 859ndash873

28

[22] B Haasdonk M Dihlmann and M Ohlberger A training set and multiplebases generation approach for parameterized model reduction based on adaptivegrids in parameter space Mathematical and Computer Modelling of DynamicalSystems 17 (2011) pp 423ndash442

[23] B Haasdonk and M Ohlberger Adaptive basis enrichment for the reducedbasis method applied to finite volume schemes in Proc 5th International Sympo-sium on Finite Volumes for Complex Applications 2008 pp 471ndash478

[24] Reduced basis method for finite volume approximations of parametrized lin-ear evolution equations ESAIM Mathematical Modelling and Numerical Analy-sis 42 (2008) pp 277ndash302

[25] B Haasdonk M Ohlberger and G Rozza A reduced basis method forevolution schemes with parameter-dependent explicit operators Electronic Trans-actions on Numerical Analysis 32 (2008) pp 145ndash161

[26] K-H Hoffmann G Leugering and F Troltzsch Optimal control ofpartial differential equations international conference in Chemnitz GermanyApril 20-25 1998 vol 133 Springer 1999

[27] S G Johnson The NLopt nonlinear-optimization package httpab-initiomitedunlopt (2010)

[28] R J LeVeque Finite volume methods for hyperbolic problems vol 31 Cam-bridge University Press 2002

[29] A Manzoni A Quarteroni and G Rozza Shape optimization for viscousflows by reduced basis methods and free-form deformation International Journalfor Numerical Methods in Fluids 70 (2012) pp 646ndash670

[30] N-C Nguyen G Rozza and A T Patera Reduced basis approximationand a posteriori error estimation for the time-dependent viscous Burgersrsquo equationCalcolo 46 (2009) pp 157ndash185

[31] A K Noor and J M Peters Reduced basis technique for nonlinear analysisof structures AIAA Journal 18 (1980) pp 145ndash161

[32] M Ohlberger and M Schaefer Reduced basis method for parameter op-timization of multiscale problems in Proceedings of ALGORITMY 2012 2012pp 1ndash10

[33] I B Oliveira and A T Patera Reduced-basis techniques for rapid reliableoptimization of systems described by affinely parametrized coercive elliptic partialdifferential equations Optimization and Engineering 8 (2007) pp 43ndash65

[34] A T Patera and G Rozza Reduced basis approximation and a posteriorierror estimation for parametrized partial differential equations Version 10 Copy-right MIT 2006

29

[35] C Prudrsquohomme D V Rovas K Veroy L Machiels Y Maday A TPatera and G Turinici Reliable real-time solution of parametrized partialdifferential equations Reduced-basis output bound methods Journal of Fluids En-gineering 124 (2002) pp 70ndash80

[36] D V Rovas Reduced-basis output bound methods for parametrized partial dif-ferential equations PhD thesis Massachusetts Institute of Technology 2003

30

31

Max Planck Institute Magdeburg Preprints

  • Introduction
  • Problem statement
  • Reduced basis methods
  • Empirical interpolation
  • Adaptive snapshot selection
  • RB scheme for batch chromatography
    • Full-order model based on FV discretization
    • Reduced-order model
    • Offline-online decomposition
      • Output-oriented error estimation
        • Output error estimation for the reduced order model
        • Output error estimation for the batch chromatographic model
        • An early-stop criterion for the Greedy algorithm
          • Numerical experiments
            • Performance of the adaptive snapshot selection
            • Performance of the output error bound
            • ROM-based Optimization
              • Conclusions
Page 21: Accelerating PDE constrained optimization by the reduced ...€¦ · tions (PDE constrained optimization, for short), has emerged as a challenging research area. It has increasingly

variables together. Obviously, the error bound η^n_{N,M,U}(μ) is much rougher than the bound η^n_{N,M,c_z}(μ).

The assumption (30) is easily fulfilled in practice. In fact, the constant L_h can be conservatively chosen large, and the weight τL_h is still small, because the time step Δt is typically very small.

7.3 An early-stop criterion for the Greedy algorithm

From the expression of the error estimator above, it is seen that the error bound for the field variables (η^n_{N,M}(μ) or η^n_{N,M,c_z}(μ)) accumulates with time. Since η^n_{N,M}(μ) (or η^n_{N,M,c_z}(μ), respectively) is involved in the output error bound in (25) (or (33), respectively), the output error bound also accumulates with time. As a result, the output error bound at the final time steps may not reflect the true error after a long evolution process; Figure 2 in Section 8.2 illustrates this behavior. In fact, similar phenomena are reported in the literature, e.g. [30], where it is pointed out that the error estimate, e.g. in (18), may lose sharpness when many time instances t^n, n = 0, 1, ..., K, are needed for a given time interval [0, T], which is typical for convection-diffusion equations with small diffusion terms. However, the output error bound is cheap to compute, and it may still provide guidance for the parameter selection in the greedy algorithm.

To circumvent the problem above, we add a validation step to get an early stop of the extension process, as shown in Algorithm 5. More precisely, after Step 6 in Algorithm 4, we compute the decay rate d_η of the error bound. If d_η is smaller than a predefined tolerance, indicating that the error bound stagnates, then we further check the true output error at the parameter μ_max determined by the greedy algorithm. When the true output error at μ_max is smaller than tol_RB, we assume that there is no need to include a new basis vector and the RB extension can be stopped; otherwise the process continues.

Algorithm 5: RB generation using ASS-POD-Greedy with early-stop

Input: P_train, μ^0, tol_RB (< 1), tol_decay
Output: RB V = [V_1, ..., V_N]

 1: Implement Step 1 in Algorithm 4
 2: while the error η_N(μ_max) > tol_RB do
 3:     Implement Steps 3–6 in Algorithm 4
 4:     Compute the decay rate of the error bound:
        d_η = (η_{N−1}(μ_max^old) − η_N(μ_max)) / η_{N−1}(μ_max^old)
 5:     if d_η < tol_decay then
 6:         Compute the true output error at the selected parameter μ_max: e_N(μ_max)
 7:         if e_N(μ_max) < tol_RB then
 8:             Stop
 9:         end if
10:     end if
11: end while
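The control flow of Algorithm 5 can be sketched in a few lines of Python. Here `extend_basis`, `error_bound`, and `true_output_error` are hypothetical stand-ins for the problem-specific routines of Algorithm 4, so this is an illustration of the early-stop logic only, not the paper's implementation:

```python
def greedy_with_early_stop(train_set, tol_rb, tol_decay,
                           extend_basis, error_bound, true_output_error,
                           max_iter=100):
    """POD-Greedy loop with the early-stop test of Algorithm 5 (sketch).

    error_bound(mu)       -- cheap output error bound eta_N(mu)
    true_output_error(mu) -- expensive true output error e_N(mu)
    extend_basis(mu)      -- enrich the reduced basis with data at mu
    """
    eta_old = None
    mu_max = None
    for _ in range(max_iter):
        # greedy selection: worst parameter over the training set
        mu_max = max(train_set, key=error_bound)
        eta = error_bound(mu_max)
        if eta <= tol_rb:
            break                                  # desired accuracy certified
        if eta_old is not None and (eta_old - eta) / eta_old < tol_decay:
            # the bound stagnates: validate against the true error once
            if true_output_error(mu_max) < tol_rb:
                break                              # early stop
        extend_basis(mu_max)
        eta_old = eta
    return mu_max
```

Note that the expensive true error is evaluated only when the bound stagnates, so the validation step adds little cost in the usual case.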


Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.

8 Numerical experiments

In this work, the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable μ = (Q, t_in) is chosen optimally in a reasonable parameter domain to maximize the production rate Pr(μ) = s(μ)Q / t_cyc, while respecting the requirement on the recovery yield Rec(μ) = s(μ) / (t_in (c_a^f + c_b^f)). Here s(μ) = ∫_{t_3}^{t_4} c_{a,O}(t; μ) dt + ∫_{t_1}^{t_2} c_{b,O}(t; μ) dt, and c_{z,O}(t; μ) = c_z(t, 1; μ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

    min_{μ∈P} −Pr(μ)
    s.t. Rec_min − Rec(μ) ≤ 0, μ ∈ P,                                  (34)
         c_z(μ), q_z(μ) are the solutions to the system (3)–(5), z = a, b.

Notice that when solving the system (3)–(5), the time step size has to be taken relatively small so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integral in s(μ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter μ ∈ P, which causes many difficulties in the error estimation and the generation of the reduced basis.
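For illustration, the collected amount s(μ) and the resulting performance indicators can be approximated from the discrete outlet concentrations by a composite trapezoidal rule. The function names and the convention that the cutting times are given as indices into a uniform time grid are assumptions of this sketch, not the paper's actual code:

```python
import numpy as np

def trapezoid_uniform(y, dx):
    """Composite trapezoidal rule on a uniform grid with spacing dx."""
    y = np.asarray(y, dtype=float)
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def performance_indicators(c_a_out, c_b_out, dt, cuts, Q, t_in, t_cyc, cf_a, cf_b):
    """Evaluate s(mu), Pr(mu), Rec(mu) from sampled outlet concentrations.

    cuts = (i1, i2, i3, i4) are the grid indices of the cutting times:
    product b is collected in [t1, t2], product a in [t3, t4].
    """
    i1, i2, i3, i4 = cuts
    s = (trapezoid_uniform(c_a_out[i3:i4 + 1], dt)
         + trapezoid_uniform(c_b_out[i1:i2 + 1], dt))
    pr = s * Q / t_cyc                   # production rate Pr(mu)
    rec = s / (t_in * (cf_a + cf_b))     # recovery yield Rec(mu)
    return s, pr, rec
```

The accuracy of this quadrature is what forces the small time step mentioned above: the cutting indices can only be located up to one grid cell.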

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable μ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(μ). To capture the dynamics precisely, the dimension N of the spatial discretization in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

    Column dimensions [cm]                          2.6 × 10.5
    Column porosity ε [-]                           0.4
    Peclet number Pe [-]                            2000
    Mass-transfer coefficients κ_z, z = a, b [1/s]  0.1
    Feed concentrations c_z^f, z = a, b [g/l]       2.9


Table 2: Coefficients of the adsorption isotherm equation.

    H_{a1} [-]     2.69      H_{b1} [-]     3.73
    H_{a2} [-]     0.1       H_{b2} [-]     0.3
    K_{a1} [l/g]   0.0336    K_{b1} [l/g]   0.0446
    K_{a2} [l/g]   1.0       K_{b2} [l/g]   3.0

In this section, we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, then show the output error estimation for the generation of the RB, and finally present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 (2.83 GHz, RAM 4.00 GB), unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(μ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of μ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^{-4}, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, choosing an optimal threshold is empirical and problem-dependent.

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^{-7}) with different thresholds. M′ = 1 is the number of the basis vectors for error estimation.

              tol_ASS          Res(ξ^a_{M+M′}), Res(ξ^b_{M+M′})   M (W_a, W_b)   Runtime [h]
    no ASS    –                9.2 × 10^{-8}, 8.5 × 10^{-8}       146, 152       6.25 (–)
    ASS       1.0 × 10^{-4}    9.6 × 10^{-8}, 8.1 × 10^{-8}       147, 152       0.605 (−90.3%)
    ASS       1.0 × 10^{-3}    8.7 × 10^{-8}, 9.9 × 10^{-8}       147, 152       0.362 (−94.2%)
    ASS       1.0 × 10^{-2}    9.4 × 10^{-8}, 6.2 × 10^{-8}       144, 150       0.270 (−95.7%)
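The savings in Table 3 come from discarding snapshots that add little new information. A generic version of such an adaptive selection — keep a snapshot only when its relative distance to the span of the snapshots already kept exceeds the threshold — can be sketched as follows; the criterion shown is illustrative and not necessarily the exact indicator of Algorithm 3:

```python
import numpy as np

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Return the snapshots whose relative projection error onto the span of
    the previously kept snapshots exceeds tol_ass (illustrative criterion)."""
    kept, basis = [], None
    for c in trajectory:
        c = np.asarray(c, dtype=float)
        nrm = np.linalg.norm(c)
        if nrm == 0.0:
            continue                               # skip zero states
        if basis is None:
            kept.append(c)
            basis = (c / nrm).reshape(-1, 1)       # start the orthonormal basis
            continue
        residual = c - basis @ (basis.T @ c)       # Gram-Schmidt projection error
        if np.linalg.norm(residual) > tol_ass * nrm:
            kept.append(c)
            q = residual / np.linalg.norm(residual)
            basis = np.hstack([basis, q.reshape(-1, 1)])
    return kept
```

With a larger tol_ass, more snapshots are discarded, which is exactly the runtime/accuracy trade-off observed in Table 3.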

Table 4 shows the comparison of the runtime for the generation of the RB using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10^{-4} for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^{-7}, tol_RB = 1.0 × 10^{-6}, and tol_ASS = 1.0 × 10^{-4}. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

    Algorithms          Runtime [h]
    POD-Greedy          16.22 ¹
    ASS-POD-Greedy      7.92 (−51.2%)

¹ Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter μ, the values of Pr(μ) and Rec(μ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(μ) = P c^n_z(μ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator η_N(μ) in the greedy algorithm (e.g., Algorithm 4 or 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η^{n+1}_{N,M,c_z}(μ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1}, for a given parameter μ ∈ P. In Algorithm 4 we use the following error bound: η_N(μ_max) = max_{μ∈P_train} max_{z∈{a,b}} η̄_{N,M,c_z}(μ), where η̄_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(μ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as e^max_N = max_{μ∈P_train} e_N(μ), where e_N(μ) = max_{z∈{a,b}} e_{N,c_z}(μ), e_{N,c_z}(μ) = (1/K) Σ_{n=1}^{K} ||c^n_{z,O}(μ) − ĉ^n_{z,O}(μ)||, and ĉ^n_{z,O}(μ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is already very small. To circumvent this problem, Algorithm 5 is implemented to get an early stop.
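Since the output here is a scalar per time step (the outlet concentration), the time-averaged bound η̄_{N,M,c_z}(μ) and the averaged true error e_{N,c_z}(μ) reduce to plain means over the K steps. A sketch with hypothetical array names:

```python
import numpy as np

def averaged_output_error_quantities(eta_per_step, out_fom, out_rom):
    """Time averages used for the goal-oriented greedy indicator (sketch).

    eta_per_step -- eta^n_{N,M,c_z}(mu) for n = 1, ..., K
    out_fom      -- FOM outputs c^n_{z,O}(mu)
    out_rom      -- corresponding ROM output approximations
    """
    eta_bar = float(np.mean(eta_per_step))            # averaged error bound
    e_bar = float(np.mean(np.abs(np.asarray(out_fom, dtype=float)
                                 - np.asarray(out_rom, dtype=float))))
    return eta_bar, e_bar
```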

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early-stop criterion, the greedy algorithm can be terminated in time, and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small, and the output error bound begins to stagnate, so the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of a circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and the reduced simulations over a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max. error = max_{μ∈P_val} e_N(μ). It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed using the FOM and the ROM at a given parameter, μ = (Q, t_in) = (0.1018, 1.3487), are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 appears here: semilogarithmic plot of the maximal error over P_train versus the size N of the RB, showing the field variable error bound, the output error bound, and the true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound η_N(μ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η̄_N = max_{μ∈P_train} max_{z∈{a,b}} η̄_{N,M,c_z}(μ), where η̄_{N,M,c_z}(μ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(μ).


[Figure 3 appears here: semilogarithmic plot of the maximal error over P_train versus the size N of the RB, showing the output error bound and the true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 appears here: scatter plot over the parameter domain, with the feed flow rate Q on one axis and the injection period t_in on the other, and circles of varying size at the selected parameters.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process; the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^{-7}, tol_ASS = 1.0 × 10^{-4}, tol_RB = 1.0 × 10^{-6}.

    Simulations              Max. error        Average runtime [s]   SpF
    FOM (N = 1500)           –                 312.13                (–)
    ROM (POD-Greedy)         3.79 × 10^{-7}    6.3                   50
    ROM (ASS-POD-Greedy)     4.58 × 10^{-7}    6.3                   50

[Figure 5 appears here: dimensionless concentration versus dimensionless time at the column outlet, comparing c_a and c_b computed with the FOM and with the ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47), at the parameter μ = (Q, t_in) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

    min_{μ∈P} −P̂r(μ)
    s.t. Rec_min − R̂ec(μ) ≤ 0,
         ĉ^n_z(μ), q̂^n_z(μ) are the RB approximations from the ROM (14), z = a, b.

Here P̂r(μ) and R̂ec(μ) are the approximated production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open library NLopt [27], to solve the optimization problems. Let μ^k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ||μ^{k+1} − μ^k|| < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^{-4}. The optimization results are shown in Table 6. The optimal solution of the ROM-based optimization converges to that of the FOM-based one. Furthermore, compared with solving the FOM-based optimization, the runtime is significantly reduced: the speed-up factor (SpF) is 54.
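To illustrate the stopping rule ||μ^{k+1} − μ^k|| < tol_opt, the sketch below wraps it around a toy pattern-search optimizer. The real computations use NLOPT_GN_DIRECT_L from NLopt, so both the search strategy and the names here are stand-ins:

```python
import numpy as np

def pattern_search(objective, mu0, bounds, tol_opt=1e-4, max_iter=1000):
    """Toy gradient-free stand-in for a global optimizer, stopped when the
    iterates satisfy ||mu_{k+1} - mu_k|| < tol_opt."""
    mu = np.asarray(mu0, dtype=float)
    lo = np.asarray([b[0] for b in bounds], dtype=float)
    hi = np.asarray([b[1] for b in bounds], dtype=float)
    step = 0.25 * (hi - lo)
    for _ in range(max_iter):
        mu_new, f_new = mu.copy(), objective(mu)
        for i in range(mu.size):
            for s in (1.0, -1.0):                     # poll both directions
                cand = mu_new.copy()
                cand[i] = min(max(cand[i] + s * step[i], lo[i]), hi[i])
                f_cand = objective(cand)
                if f_cand < f_new:
                    mu_new, f_new = cand, f_cand
        if np.array_equal(mu_new, mu):
            step *= 0.5                               # no progress: refine mesh
            if np.linalg.norm(step) < tol_opt:
                return mu
        else:
            if np.linalg.norm(mu_new - mu) < tol_opt:
                return mu_new
            mu = mu_new
    return mu
```

In the paper's setting, `objective` would evaluate −P̂r(μ) by solving the ROM (14), with the recovery constraint handled by the optimizer or by a penalty term.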

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For simulations with varying parameters, the following two runtimes should be well balanced: the runtime for constructing and using a surrogate ROM, and the runtime for directly using the original FOM.
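Using the runtimes reported above (one FOM-based optimization: 33.88 h; one ROM-based optimization: 0.63 h; ROM totals including the offline cost: 14.6 h with ASS, 79.35 h without), the balance can be quantified by a break-even count, i.e., the number of optimization runs after which building the ROM pays off. A back-of-the-envelope check:

```python
import math

def break_even_runs(offline_h, online_h, fom_h):
    """Smallest n with offline_h + n * online_h < n * fom_h."""
    return math.floor(offline_h / (fom_h - online_h)) + 1

# offline cost = reported total cost minus the single online optimization run
print(break_even_runs(14.6 - 0.63, 0.63, 33.88))   # with ASS -> 1
print(break_even_runs(79.35 - 0.63, 0.63, 33.88))  # without ASS -> 3
```

With ASS the ROM already pays off at the first optimization run; without ASS it pays off only from the third run onwards, consistent with the comparison above.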

Table 6: Comparison of the optimization based on the ROM and the FOM.

    Simulations       Objective (Pr)   Opt. solution (μ)      N_it ¹   Runtime [h]   SpF
    FOM-based Opt.    0.020264         (0.07964, 1.05445)     202      33.88         –
    ROM-based Opt.    0.020266         (0.07964, 1.05445)     202      0.63          54

¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM, and the empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique was proposed for an efficient generation of the RB and/or CRB, by which the offline time was significantly reduced with negligible loss of accuracy. An output-oriented error estimation was derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to stop the RB extension in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a promising candidate and is under current investigation.

References[1] Z Bai Krylov subspace techniques for reduced-order modeling of large-scale dy-

namical systems Applied Numerical Mathematics 43 (2002) pp 9ndash44

[2] M Barrault Y Maday N C Nguyen and A T Patera An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations Comptes Rendus Mathematique Academie des Sciences Paris Series I 339 (2004) pp 667–672

[3] U Baur C Beattie P Benner and S Gugercin Interpolatory projectionmethods for parameterized model reduction SIAM Journal on Scientific Comput-ing 33 (2011) pp 2489ndash2518

[4] P Benner L Feng S Li and Y Zhang Reduced-order modeling and ROM-based optimization of batch chromatography in ENUMATH 2013 Proceedings accepted

[5] P Benner S Gugercin and K Willcox A survey of model reductionmethods for parametric systems MPI Magdeburg Preprints (2013)

[6] P Benner E Sachs and S Volkwein Model Order Reduction for PDEConstrained Optimization Preprints Konstanzer Online-Publikations-System(KOPS) (2014)

[7] L T Biegler O Ghattas M Heinkenschloss D Keyes and B vanBloemen Waanders Real-Time PDE-constrained Optimization Society for In-dustrial and Applied Mathematics 2007

27

[8] L T Biegler O Ghattas M Heinkenschloss and B van BloemenWaanders Large-scale PDE-constrained Optimization Springer 2003

[9] T Bui-Thanh Model-constrained optimization methods for reduction of param-eterized large-scale systems PhD thesis Massachusetts Institute of Technology2007

[10] T Bui-Thanh K Willcox and O Ghattas Model reduction for large-scalesystems with high-dimensional parametric input space SIAM Journal on ScientificComputing 30 (2008) pp 3270ndash3288

[11] M Drohmann B Haasdonk and M Ohlberger Reduced basis approxima-tion for nonlinear parametrized evolution equations based on empirical operatorinterpolation SIAM Journal on Scientific Computing 34 (2012) pp 937ndash969

[12] J L Eftang D J Knezevic and A T Patera An hp certified reduced ba-sis method for parametrized parabolic partial differential equations Mathematicaland Computer Modelling of Dynamical Systems 17 (2011) pp 395ndash422

[13] M Fahl Trust-region methods for flow control based on reduced order modelling PhD thesis Universität Trier 2000

[14] M Fahl and E W Sachs Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition in Large-scalePDE-constrained optimization Springer 2003 pp 268ndash280

[15] L Feng and P Benner A robust algorithm for parametric model order reduc-tion Proceedings in Applied Mathematics and Mechanics 7 (2008) pp 1021501ndash1021502

[16] L Feng P Benner and J G Korvink Subspace recycling accelerates theparametric macro-modeling of MEMS International Journal for Numerical Meth-ods in Engineering 94 (2013) pp 84ndash110

[17] W Gao and S Engell Iterative set-point optimization of batch chromatogra-phy Computers amp Chemical Engineering 29 (2005) pp 1401ndash1409

[18] M A Grepl Reduced-basis approximation a posteriori error estimation forparabolic partial differential equations PhD thesis Massachusetts Institute ofTechnology 2005

[19] D Gromov S Li and J Raisch A hierarchical approach to optimal con-trol of a hybrid chromatographic batch process in Advanced Control of ChemicalProcesses vol 7 2009 pp 339ndash344

[20] G Guiochon A Felinger D G Shirazi and A M Katti Fundamentalsof Preparative and Nonlinear Chromatography Academic Press 2006

[21] B Haasdonk Convergence rates of the POD-Greedy method ESAIM Mathe-matical Modelling and Numerical Analysis 47 (2013) pp 859ndash873

28

[22] B Haasdonk M Dihlmann and M Ohlberger A training set and multiplebases generation approach for parameterized model reduction based on adaptivegrids in parameter space Mathematical and Computer Modelling of DynamicalSystems 17 (2011) pp 423ndash442

[23] B Haasdonk and M Ohlberger Adaptive basis enrichment for the reducedbasis method applied to finite volume schemes in Proc 5th International Sympo-sium on Finite Volumes for Complex Applications 2008 pp 471ndash478

[24] B Haasdonk and M Ohlberger Reduced basis method for finite volume approximations of parametrized linear evolution equations ESAIM Mathematical Modelling and Numerical Analysis 42 (2008) pp 277–302

[25] B Haasdonk M Ohlberger and G Rozza A reduced basis method forevolution schemes with parameter-dependent explicit operators Electronic Trans-actions on Numerical Analysis 32 (2008) pp 145ndash161

[26] K-H Hoffmann G Leugering and F Tröltzsch Optimal control of partial differential equations international conference in Chemnitz Germany April 20–25 1998 vol 133 Springer 1999

[27] S G Johnson The NLopt nonlinear-optimization package http://ab-initio.mit.edu/nlopt (2010)

[28] R J LeVeque Finite volume methods for hyperbolic problems vol 31 Cam-bridge University Press 2002

[29] A Manzoni A Quarteroni and G Rozza Shape optimization for viscousflows by reduced basis methods and free-form deformation International Journalfor Numerical Methods in Fluids 70 (2012) pp 646ndash670

[30] N-C Nguyen G Rozza and A T Patera Reduced basis approximationand a posteriori error estimation for the time-dependent viscous Burgersrsquo equationCalcolo 46 (2009) pp 157ndash185

[31] A K Noor and J M Peters Reduced basis technique for nonlinear analysisof structures AIAA Journal 18 (1980) pp 145ndash161

[32] M Ohlberger and M Schaefer Reduced basis method for parameter op-timization of multiscale problems in Proceedings of ALGORITMY 2012 2012pp 1ndash10

[33] I B Oliveira and A T Patera Reduced-basis techniques for rapid reliableoptimization of systems described by affinely parametrized coercive elliptic partialdifferential equations Optimization and Engineering 8 (2007) pp 43ndash65

[34] A T Patera and G Rozza Reduced basis approximation and a posteriorierror estimation for parametrized partial differential equations Version 10 Copy-right MIT 2006

29

[35] C Prud'homme D V Rovas K Veroy L Machiels Y Maday A T Patera and G Turinici Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods Journal of Fluids Engineering 124 (2002) pp 70–80

[36] D V Rovas Reduced-basis output bound methods for parametrized partial dif-ferential equations PhD thesis Massachusetts Institute of Technology 2003

30

31

Max Planck Institute Magdeburg Preprints

  • Introduction
  • Problem statement
  • Reduced basis methods
  • Empirical interpolation
  • Adaptive snapshot selection
  • RB scheme for batch chromatography
    • Full-order model based on FV discretization
    • Reduced-order model
    • Offline-online decomposition
  • Output-oriented error estimation
    • Output error estimation for the reduced order model
    • Output error estimation for the batch chromatographic model
    • An early-stop criterion for the Greedy algorithm
  • Numerical experiments
    • Performance of the adaptive snapshot selection
    • Performance of the output error bound
    • ROM-based Optimization
  • Conclusions

Remark 7.10. It can happen that the error bound stagnates for a while but then decays again. In order to monitor such a case, the tolerance tol_decay should be set to a very small value, which allows some steps of stagnation. If after some steps of stagnation the error bound still does not decay, then Step 5 in Algorithm 5 will be implemented.
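The stagnation check described in the remark can be sketched as follows. This is a minimal illustration with hypothetical names (`early_stop`, `n_stag`); the actual Algorithm 5 operates on the error bound maximized over the training set and also updates the RB in each iteration:

```python
def early_stop(error_bounds, tol_rb, tol_decay, n_stag=3):
    """Decide whether the greedy RB extension should stop.

    error_bounds: history of the maximal error bound over the training
    set, one entry per greedy iteration (most recent last).
    Stops when the target tolerance is reached, or when the bound has
    stagnated (relative decay below tol_decay) for n_stag iterations.
    """
    if error_bounds[-1] <= tol_rb:
        return True  # desired accuracy reached
    if len(error_bounds) <= n_stag:
        return False  # not enough history to judge stagnation
    # relative decay of the bound over the last n_stag steps
    decay = 1.0 - error_bounds[-1] / error_bounds[-1 - n_stag]
    return decay < tol_decay  # stagnation detected: stop early

# example: the bound decays, then stagnates around 1e-5
history = [1e2, 1e-1, 1e-3, 1.05e-5, 1.048e-5, 1.046e-5, 1.045e-5]
print(early_stop(history, tol_rb=1e-6, tol_decay=0.03))  # prints True
```

Allowing a few steps of stagnation before stopping, as the remark suggests, corresponds to choosing n_stag > 1 together with a very small tol_decay.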

8 Numerical experiments

In this work the RB methodology is employed to accelerate optimization with nonlinear PDE constraints. As a case study, we investigate the optimal operation of batch chromatography. More specifically, the operating variable µ = (Q, t_in) is optimally chosen in a reasonable parameter domain to maximize the production rate Pr(µ) = s(µ)Q / t_cyc while respecting the requirement on the recovery yield Rec(µ) = s(µ) / (t_in (c_a^f + c_b^f)). Here

    s(µ) = ∫_{t_3}^{t_4} c_a^O(t; µ) dt + ∫_{t_1}^{t_2} c_b^O(t; µ) dt,

and c_z^O(t; µ) = c_z(t, 1; µ) is the concentration of component z (z = a, b) at the outlet of the column. We consider the optimization problem of batch chromatography as follows:

    min_{µ ∈ P}  −Pr(µ)
    s.t.  Rec_min − Rec(µ) ≤ 0,                                              (34)
          c_z(µ), q_z(µ) are the solutions to the system (3)–(5), z = a, b.

Notice that when solving the system (3)–(5), the time step size has to be taken relatively small, so that the cutting points t_i, i = 1, ..., 4, can be determined properly and the integral in s(µ) can be evaluated accurately. The small step size results in a very large number (up to O(10^4)) of total time steps for every parameter µ ∈ P, which causes a lot of difficulties in the error estimation and the generation of the reduced basis.
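For illustration, the evaluation of Pr(µ) and Rec(µ) from a computed outlet trajectory can be sketched as below. This is a simplified stand-in: the cutting points t_1, ..., t_4 are passed in directly, whereas in the paper t_2 and t_3 are determined from the purity requirements; all names and the toy trajectories are assumptions for this sketch:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule on a (possibly non-uniform) grid."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def performance_criteria(t, ca_out, cb_out, Q, t_in, t_cyc, cf_a, cf_b,
                         t1, t2, t3, t4):
    """Production rate Pr and recovery yield Rec from outlet concentrations."""
    win_b = (t >= t1) & (t <= t2)   # collection window of component b
    win_a = (t >= t3) & (t <= t4)   # collection window of component a
    s = trapezoid(ca_out[win_a], t[win_a]) + trapezoid(cb_out[win_b], t[win_b])
    return s * Q / t_cyc, s / (t_in * (cf_a + cf_b))   # Pr, Rec

# toy outlet trajectories: two well-separated Gaussian peaks
t = np.linspace(0.0, 10.0, 1001)
ca = np.exp(-0.5 * ((t - 6.0) / 0.5) ** 2)
cb = np.exp(-0.5 * ((t - 3.0) / 0.5) ** 2)
Pr, Rec = performance_criteria(t, ca, cb, Q=0.1, t_in=1.0, t_cyc=10.0,
                               cf_a=2.9, cf_b=2.9, t1=2.0, t2=4.0, t3=5.0, t4=7.0)
```

The sketch also makes the remark above concrete: the quadrature windows are resolved only up to the time step, which is why a fine time grid is needed to locate the cutting points and evaluate s(µ) accurately.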

The model parameters and operating conditions are presented in Table 1. The Henry constants and thermodynamic coefficients in the isotherm equation (4) are given in Table 2. The parameter domain for the operating variable µ is P = [0.0667, 0.1667] × [0.5, 2.0]. The minimum recovery yield Rec_min is taken as 80.0%, and the purity requirements are specified as Pu_a = 95.0%, Pu_b = 95.0%, which determine the cutting points t_2 and t_3 in s(µ). To capture the dynamics precisely, the dimension of the spatial discretization N in the FOM (12) is taken as 1500.

Table 1: Model parameters and operating conditions for the chromatographic model.

    Column dimensions [cm]                            2.6 × 10.5
    Column porosity ε [-]                             0.4
    Peclet number Pe [-]                              2000
    Mass-transfer coefficients κ_z, z = a, b [1/s]    0.1
    Feed concentrations c_z^f, z = a, b [g/l]         2.9


Table 2: Coefficients of the adsorption isotherm equation.

    H_a1 [-]      2.69      H_b1 [-]      3.73
    H_a2 [-]      0.1       H_b2 [-]      0.3
    K_a1 [l/g]    0.0336    K_b1 [l/g]    0.0446
    K_a2 [l/g]    1.0       K_b2 [l/g]    3.0

In this section we first illustrate the performance of the adaptive snapshot selection for the generation of the RB and CRB, and then show the output error estimation for the generation of the RB. Finally, we present the results for the ROM-based optimization of batch chromatography. All the computations were done on a PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4.00 GB RAM, unless stated otherwise.

8.1 Performance of the adaptive snapshot selection

To investigate the performance of the technique of adaptive snapshot selection, we compare the runtime for the generation of the RB and CRB with different threshold values tol_ASS. As is shown in Algorithm 4 in Section 5, the ASS can be combined with the POD-Greedy algorithm and used for the generation of the RB. For the computation of the error indicator η_N(µ_max) in Algorithm 4, EI is involved for an efficient offline-online decomposition. To efficiently generate a CRB, the ASS is also employed. The training set for the generation of the CRB is a sample set with 25 sample points of µ = (Q, t_in) uniformly distributed in the parameter domain. For each sample point, Algorithm 3 is used to adaptively choose the snapshots for the generation of the CRB. The runtimes for the CRB generation with different choices of tol_ASS are shown in Table 3. It is seen that the larger the threshold used, the more runtime is saved. This means that a lot of redundant information is discarded by the adaptive selection process. In particular, with the tolerance tol_ASS = 1.0 × 10^-4, the computational time is reduced by 90.3% compared to that of the original algorithm without ASS. However, how to choose an optimal threshold is empirical and problem-dependent.
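The idea behind the adaptive selection can be sketched as follows. This is a simplified stand-in for Algorithm 3, using the relative projection error onto the span of the already selected snapshots as the redundancy indicator; the function name and interface are hypothetical:

```python
import numpy as np

def adaptive_snapshot_selection(snapshots, tol_ass):
    """Discard snapshots that carry (almost) redundant information.

    snapshots: list of state vectors along one trajectory (sketch).
    A snapshot is kept only if its relative projection error onto the
    span of the already selected snapshots exceeds tol_ass.
    """
    selected, Q = [], None   # Q: orthonormal basis of the selected snapshots
    for x in snapshots:
        residual = x if Q is None else x - Q @ (Q.T @ x)  # projection error
        if np.linalg.norm(residual) > tol_ass * np.linalg.norm(x):
            selected.append(x)
            q = residual / np.linalg.norm(residual)
            Q = q[:, None] if Q is None else np.column_stack([Q, q])
    return selected
```

A snapshot that is (nearly) contained in the span of its predecessors contributes (almost) nothing to the subsequent compression, so skipping it loses little accuracy; this is the source of the runtime savings reported in Table 3.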

Table 3: Illustration of the generation of CRBs (W_a, W_b) at the same error tolerance (tol_CRB = 1.0 × 10^-7) with different thresholds. M' = 1 is the number of basis vectors used for error estimation.

              tol_ASS        Res(ξ_a^{M+M'}), Res(ξ_b^{M+M'})    M (W_a, W_b)    Runtime [h]
    no ASS    –              9.2 × 10^-8, 8.5 × 10^-8            146, 152        62.5 (–)
    ASS       1.0 × 10^-4    9.6 × 10^-8, 8.1 × 10^-8            147, 152        6.05 (−90.3%)
    ASS       1.0 × 10^-3    8.7 × 10^-8, 9.9 × 10^-8            147, 152        3.62 (−94.2%)
    ASS       1.0 × 10^-2    9.4 × 10^-8, 6.2 × 10^-8            144, 150        2.70 (−95.7%)

Table 4 shows the comparison of the runtime for the generation of the RB by using the POD-Greedy algorithm with and without ASS. Note that the CRB is precomputed with tol_ASS = 1.0 × 10^-4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^-7, tol_RB = 1.0 × 10^-6, tol_ASS = 1.0 × 10^-4. It is seen that the runtime for generating the ROM with ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as is shown in Table 5.
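Schematically, the POD-Greedy loop used here proceeds as follows. This is a condensed sketch with placeholder interfaces: `solve` returns the snapshot matrix of a trajectory (one column per time step) and `error_indicator` stands for the error bound evaluated on the current basis; in the ASS variant, `solve` would return only the adaptively selected snapshots:

```python
import numpy as np

def pod_greedy(train_set, solve, error_indicator, tol_rb, n_max=100):
    """Greedy RB generation (sketch): at each iteration pick the worst
    parameter, solve its trajectory, and extend the basis by the dominant
    POD mode of the projection error of that trajectory.
    """
    V = None  # reduced basis, columns orthonormal
    for _ in range(n_max):
        mu_max = max(train_set, key=lambda mu: error_indicator(mu, V))
        if error_indicator(mu_max, V) <= tol_rb:
            break  # desired accuracy reached on the training set
        X = solve(mu_max)                            # snapshot matrix
        E = X if V is None else X - V @ (V.T @ X)    # projection error trajectory
        u = np.linalg.svd(E, full_matrices=False)[0][:, :1]  # dominant POD mode
        V = u if V is None else np.linalg.qr(np.column_stack([V, u]))[0]
    return V
```

In the experiments reported here this loop is additionally terminated by the early-stop criterion of Algorithm 5 once the error bound stagnates.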

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

    Algorithms          Runtime [h]
    POD-Greedy          16.22 ¹
    ASS-POD-Greedy      7.92 (−51.2%)

    ¹ Due to the memory limitation of the PC, this computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU) @ 2.67 GHz and 1 TB RAM.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter µ, the values of Pr(µ) and Rec(µ) in (34) are determined by the concentrations at the outlet of the column, c_{z,O}^n(µ) = P c_z^n(µ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound is taken as the error indicator η_N(µ) in the greedy algorithm (e.g., Algorithms 4 and 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η_{N,M,c_z}^{n+1}(µ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1} for a given parameter µ ∈ P. We use the following error bound in Algorithm 4: η_N(µ_max) = max_{µ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(µ), where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η_{N,M,c_z}^n(µ) is the average of the error bound for the output of c_z over the whole evolution process. Accordingly, we define the reference true output error as e_N^max = max_{µ ∈ P_train} e_N(µ), where e_N(µ) = max_{z ∈ {a,b}} e_{N,c_z}(µ), e_{N,c_z}(µ) = (1/K) Σ_{n=1}^{K} ||c_{z,O}^n(µ) − ĉ_{z,O}^n(µ)||, and ĉ_{z,O}^n(µ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after a certain number of steps, although the true error is already very small. To circumvent the problem, Algorithm 5 is implemented to get an early stop.
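The aggregation just described — time-averaging the bound per component, then maximizing over the components and the training set — can be sketched as follows (hypothetical interface: `bound_trajectory(mu)` stands for the evaluation of the per-time-step output bounds from the ROM):

```python
import numpy as np

def select_worst_parameter(train_set, bound_trajectory):
    """Return (mu_max, eta), where eta is the maximal time-averaged
    output error bound over the training set.

    bound_trajectory(mu) is assumed to return a dict mapping each
    component z in {'a', 'b'} to its K per-time-step output bounds.
    """
    def eta_bar(mu):
        bounds = bound_trajectory(mu)                       # z -> length-K array
        return max(np.mean(bounds[z]) for z in ("a", "b"))  # average over time,
                                                            # worst component
    mu_max = max(train_set, key=eta_bar)                    # worst parameter
    return mu_max, eta_bar(mu_max)
```

Time-averaging (rather than taking the final-time bound) damps the effect of the error accumulation over the K time steps on the greedy selection.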

Figure 3 shows the results for Algorithm 5, where tol_decay = 0.03. Using the early-stop criterion, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make the true output error very small while the output error bound begins to stagnate, so that the early-stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circle shows how frequently the same parameter is selected for the RB extension.

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not very big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{µ ∈ P_val} e_N(µ). It is seen that the average runtime for a detailed simulation is sped up by a factor of about 50, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter µ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).

[Figure 2 omitted: semi-log plot of the maximal error over P_train versus the size N of the RB (N = 6 to 66; errors between 10^-9 and 10^3), with three curves: field variable error bound, output error bound, and true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound η_N(µ_max) and the maximal true output error e_N^max are defined in Section 8.2; the field variable error bound is defined as η_N = max_{µ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(µ), where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η_{N,M,c_z}^n(µ).


[Figure 3 omitted: semi-log plot of the maximal error over P_train versus the size N of the RB (N = 6 to 56; errors between 10^-7 and 10^3), with two curves: output error bound and true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 omitted: scatter plot of the selected parameters in the domain P = [0.0667, 0.1667] × [0.5, 2.0]; horizontal axis: feed flow rate Q, vertical axis: injection period t_in.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.


Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^-7, tol_ASS = 1.0 × 10^-4, tol_RB = 1.0 × 10^-6.

    Simulations             Max error       Average runtime [s]    SpF
    FOM (N = 1500)          –               312.13                 (–)
    ROM (POD-Greedy)        3.79 × 10^-7    6.3                    50
    ROM (ASS-POD-Greedy)    4.58 × 10^-7    6.3                    50

[Figure 5 omitted: dimensionless outlet concentration versus dimensionless time (0 to 11), with four curves: c_a (FOM), c_b (FOM), c_a (ROM), c_b (ROM).]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, 1.3487).


8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

    min_{µ ∈ P}  −P̃r(µ)
    s.t.  Rec_min − R̃ec(µ) ≤ 0,
          ĉ_z^n(µ), q̂_z^n(µ) are the RB approximations from the ROM (14), z = a, b.

Here P̃r(µ) and R̃ec(µ) are the approximate production rate and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ the production rate and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let µ^k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ||µ^{k+1} − µ^k|| < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^-4. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, compared with the FOM-based optimization, the runtime is significantly reduced: the speed-up factor (SpF) is 54.
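For illustration, the outer loop and the stopping test ||µ^{k+1} − µ^k|| < tol_opt can be sketched as below. This is not the DIRECT-L algorithm: a simple box-constrained compass search is used as a stand-in, and the recovery-yield constraint is handled by a quadratic penalty, which is one common (but here assumed) way to pass such a constraint to a global optimizer; all names are hypothetical:

```python
import numpy as np

def constrained_search(f, g, lb, ub, mu0, tol_opt=1e-4, rho=1e3):
    """Minimize f(mu) subject to g(mu) <= 0 on the box [lb, ub].
    Toy stand-in for NLOPT_GN_DIRECT_L; constraint via quadratic penalty."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    def penalized(mu):
        return f(mu) + rho * max(0.0, g(mu)) ** 2
    mu = np.asarray(mu0, float)
    step = 0.25 * (ub - lb)
    while True:
        best = mu.copy()
        for i in range(mu.size):                 # compass search on each coordinate
            for s in (-step[i], step[i]):
                cand = mu.copy()
                cand[i] = np.clip(cand[i] + s, lb[i], ub[i])
                if penalized(cand) < penalized(best):
                    best = cand
        if np.linalg.norm(best - mu) < tol_opt:  # stopping test of the paper
            if np.all(step < tol_opt):
                return mu
            step *= 0.5                          # otherwise refine and continue
        mu = best
```

With the actual model, f(µ) = −P̃r(µ) and g(µ) = Rec_min − R̃ec(µ), so every evaluation of f and g requires one ROM solve; this per-evaluation cost is exactly what the surrogate ROM reduces.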

Note that if the offline cost, i.e. the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is even larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g. the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is that of constructing and using a surrogate ROM; the other is that of directly using the original FOM.

Table 6: Comparison of the optimization based on the ROM and the FOM.

    Simulations       Objective (Pr)    Opt. solution (µ)     N_it ¹    Runtime [h]    SpF
    FOM-based Opt.    0.020264          (0.07964, 1.05445)    202       33.88          –
    ROM-based Opt.    0.020266          (0.07964, 1.05445)    202       0.63           54

    ¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation over the time evolution. To circumvent the stagnation of the error bound, an early-stop criterion was proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model order reduction for PDE constrained optimization, Preprints, Konstanzer Online-Publikations-System (KOPS) (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.

30

31

Max Planck Institute Magdeburg Preprints

  • Introduction
  • Problem statement
  • Reduced basis methods
  • Empirical interpolation
  • Adaptive snapshot selection
  • RB scheme for batch chromatography
    • Full-order model based on FV discretization
    • Reduced-order model
    • Offline-online decomposition
      • Output-oriented error estimation
        • Output error estimation for the reduced order model
        • Output error estimation for the batch chromatographic model
        • An early-stop criterion for the Greedy algorithm
          • Numerical experiments
            • Performance of the adaptive snapshot selection
            • Performance of the output error bound
            • ROM-based Optimization
              • Conclusions
Page 23: Accelerating PDE constrained optimization by the reduced ...€¦ · tions (PDE constrained optimization, for short), has emerged as a challenging research area. It has increasingly

Table 2 Coefficients of the adsorption isotherm equation

Ha1 [-] 269 Hb1 [-] 373Ha2 [-] 01 Hb2 [-] 03Ka1 [lg] 00336 Kb1 [lg] 00446Ka2 [lg] 10 Kb2 [lg] 30

In this section we first illustrate the performance of the adaptive snapshot selectionfor the generation of the RB and CRB and then show the output error estimation forthe generation of the RB Finally we present the results for the ROM-based optimiza-tion of batch chromatography All the computations were done on a PC with Intel(R)Core(TM)2 Quad CPU Q9550 283GHz RAM 400GB unless stated otherwise

81 Performance of the adaptive snapshot selectionTo investigate the performance of the technique of adaptive snapshot selection wecompare the runtime for the generation of the RB and CRB with different thresholdvalues tolASS As is shown in Algorithm 4 in Section 5 the ASS can be combined withthe POD-Greedy algorithm and used for the generation of the RB For the computationof the error indicator ηN (micromax) in Algorithm 4 EI is involved for an efficient offline-online decomposition To efficiently generate a CRB the ASS is also employed Thetraining set for the generation of the CRB is a sample set with 25 sample points ofmicro = (Q tin) uniformly distributed in the parameter domain For each sample pointAlgorithm 3 is used to adaptively choose the snapshots for the generation of the CRBThe runtimes for the CRB generation with different choices of tolASS are shown inTable 3 It is seen that the larger threshold is used the more runtime is saved Thismeans that a lot of redundant information is discarded due to the adaptive selectionprocess Particularly with the tolerance tolASS = 10times 10minus4 the computational timeis reduced by 903 compared to that of the original algorithm without ASS Howeverhow to choose an optimal threshold is empirical and problem-dependent

Table 3 Illustration of the generation of CRBs (Wa Wb) at the same error tolerance(tolCRB = 10 times 10minus7) with different thresholds M prime = 1 is the number ofthe basis for error estimation

tolASS Res(ξaM+M prime ξb

M+M prime) M (Wa Wb) Runtime [h]no ASS ndash 92times 10minus8 85times 10minus8 146 152 625 (-)ASS 10times 10minus4 96times 10minus8 81times 10minus8 147 152 605 (minus903)ASS 10times 10minus3 87times 10minus8 99times 10minus8 147 152 362(minus942)ASS 10times 10minus2 94times 10minus8 62times 10minus8 144 150 270 (minus957)

Table 4 shows the comparison of the runtime for the generation of the RB by usingthe POD-Greedy algorithm with and without ASS Note that the CRB is precomputed

21

with tolASS = 10times 10minus4 for the ASS and the corresponding runtime is not includedhere The training set is a sample set with 60 points uniformly distributed in theparameter domain Here and in the following the tolerances are chosen as tolCRB =10 times 10minus7 tolRB = 10 times 10minus6 tolASS = 10 times 10minus4 It is seen that the runtime forgenerating the ROM with the ASS is reduced by 512 compared to that without ASSat the same tolerance tolRB Moreover the accuracy of the resulting reduced modelwith ASS is almost the same as that without ASS as is shown in Table 5

Table 4 Comparison of the runtime for the RB generation using the POD-Greedyalgorithm with early-stop (Algorithm 5) with and without ASS

Algorithms Runtime [h]POD-Greedy 1622 1

ASS-POD-Greedy 792 (minus512)1 Due to the memory limitation of the PC the computation was done

on a Workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU)267 GHz RAM 1TB

82 Performance of the output error boundAs aforementioned it is wise to use an efficient error estimation for the output forthe generation of the RB In the chromatographic model given a parameter micro thevalues of Pr(micro) and Rec(micro) in (34) are determined by the concentrations at the outletof the column cnzO(micro) = Pcnz (micro) n = 0 K z = a b which constitute the outputof the FOM in (12) Consequently the output error bound will be taken as the errorindicator ηN (micro) in the greedy algorithm (eg Algorithm 4 5) for the generation of theRB which yields a goal-oriented ROM

Note that the error bound ηn+1NMcz

(micro) in (33) is the bound for the output error ofthe component cz at the time instance tn+1 for a given parameter micro isin P We use thefollowing error bound in Algorithm 4 ηN (micromax) = max

microisinPtrainmaxzisinab

macrηNMcz (micro) where

macrηNMcz(micro) = 1

K

sumKn=1 η

nNMcz

(micro) is the average of the error bound for the outputof cz in the whole evolution process In accordance we define the reference trueoutput error as emax

N = maxmicroisinPtrain

eN (micro) where eN (micro) = maxzisinab

eNcz(micro) eNcz

(micro) =1K

sumKn=1 cnzO(micro)minuscnzO(micro)|| and cnzO(micro) is the approximate output response computed

from the ROM in (14)Figure 2 shows the error decay and the true error as the RB is extended by using

Algorithm 4 It can be seen that the output error bound stagnates after certain stepsalthough the true error is very small already To circumvent the problem Algorithm 5is implemented to get an early-stop

Figure 3 shows the results for Algorithm 5 where tolASS = 003 Using the early-stop the greedy algorithm can be terminated in time and the dimension of the RB canbe kept small without loosing accuracy It can be seen that 47 RB vectors already make

22

the true output error very small and the output error bound begins to stagnate so thatthe early-stop gives a reasonable stopping criterion Figure 4 shows the parametersselected in the RB extension with the greedy algorithm The size of the circle showshow frequently the same parameter is selected for the RB extension

Here we want to point out that the difference between the error bound for the fieldvariable and the output error bound is not so big This is not surprising because thederivation of the error bound for the output is based on that of the field variable Thetechnique of using the dual system could be employed to improve the error estimateswhich will be investigated in the future

Finally to further assess the reliability and efficiency of the ROM we performed thedetailed and reduced simulation using a validation set Pval with 600 random samplepoints in the parameter domain Table 5 shows the average runtime over the validationset and the maximal error defined as Maxerror = max

microisinPval

eN (micro) It is seen that theaverage runtime for a detailed simulation is sped up by a factor of 53 and the maximaltrue error is below the prespecified tolerance tolRB In addition the concentrationsat the outlet of the column computed by using the FOM and the ROM at a givenparameter micro = (Q tin) = (01018 13487) are plotted in Figure 5 which shows thatthe ROM (14) reproduced the dynamics of the original FOM (12)

6 16 26 36 46 56 6610

minus9

10minus7

10minus5

10minus3

10minus1

101

103

Size of RB N


Max Planck Institute Magdeburg Preprints


with tol_ASS = 1.0 × 10^-4 for the ASS, and the corresponding runtime is not included here. The training set is a sample set with 60 points uniformly distributed in the parameter domain. Here and in the following, the tolerances are chosen as tol_CRB = 1.0 × 10^-7, tol_RB = 1.0 × 10^-6, tol_ASS = 1.0 × 10^-4. It is seen that the runtime for generating the ROM with the ASS is reduced by 51.2% compared to that without ASS at the same tolerance tol_RB. Moreover, the accuracy of the resulting reduced model with ASS is almost the same as that without ASS, as is shown in Table 5.

Table 4: Comparison of the runtime for the RB generation using the POD-Greedy algorithm with early-stop (Algorithm 5), with and without ASS.

  Algorithms       | Runtime [h]
  -----------------|---------------
  POD-Greedy       | 16.22 (1)
  ASS-POD-Greedy   | 7.92 (-51.2%)

  (1) Due to the memory limitation of the PC, the computation was done on a workstation with 4 Intel Xeon E7-8837 CPUs (8 cores per CPU), 2.67 GHz, RAM 1 TB.

8.2 Performance of the output error bound

As aforementioned, it is wise to use an efficient error estimation for the output for the generation of the RB. In the chromatographic model, given a parameter µ, the values of Pr(µ) and Rec(µ) in (34) are determined by the concentrations at the outlet of the column, c^n_{z,O}(µ) = P c^n_z(µ), n = 0, ..., K, z = a, b, which constitute the output of the FOM in (12). Consequently, the output error bound will be taken as the error indicator η_N(µ) in the greedy algorithm (e.g., Algorithm 4, 5) for the generation of the RB, which yields a goal-oriented ROM.

Note that the error bound η^{n+1}_{N,M,c_z}(µ) in (33) is the bound for the output error of the component c_z at the time instance t^{n+1}, for a given parameter µ ∈ P. We use the following error bound in Algorithm 4:

  η_N(µ_max) = max_{µ ∈ P_train} max_{z ∈ {a,b}} η̄_{N,M,c_z}(µ),

where η̄_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(µ) is the average of the error bound for the output of c_z in the whole evolution process. In accordance, we define the reference true output error as

  e^max_N = max_{µ ∈ P_train} ē_N(µ),

where ē_N(µ) = max_{z ∈ {a,b}} ē_{N,c_z}(µ), ē_{N,c_z}(µ) = (1/K) Σ_{n=1}^{K} ||c^n_{z,O}(µ) - ĉ^n_{z,O}(µ)||, and ĉ^n_{z,O}(µ) is the approximate output response computed from the ROM in (14).

Figure 2 shows the error decay and the true error as the RB is extended by using Algorithm 4. It can be seen that the output error bound stagnates after certain steps, although the true error is very small already. To circumvent the problem, Algorithm 5 is implemented to get an early stop.
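To make the greedy selection concrete, the following sketch (not the authors' code; all numerical values are made-up placeholders) computes the time-averaged output error indicator η̄_{N,M,c_z}(µ) and picks the training parameter maximizing it over both components:

```python
# Sketch: time-averaged output error indicator for the greedy selection.
# All numerical values are made-up placeholders, not results from the paper.

def averaged_indicator(eta_per_step):
    # eta_bar_{N,M,c_z}(mu) = (1/K) * sum_{n=1}^{K} eta^n_{N,M,c_z}(mu)
    return sum(eta_per_step) / len(eta_per_step)

def select_mu_max(eta):
    # eta_N(mu_max) = max over mu in P_train and z in {a, b} of the
    # time-averaged bound; returns the maximizing parameter and the value.
    best_mu, best_val = None, float("-inf")
    for mu, per_component in eta.items():
        val = max(averaged_indicator(steps) for steps in per_component.values())
        if val > best_val:
            best_mu, best_val = mu, val
    return best_mu, best_val

# toy training set: two parameters mu = (Q, t_in), components a and b, K = 4
eta = {
    (0.08, 1.0): {"a": [1e-3, 2e-3, 2e-3, 3e-3], "b": [1e-3] * 4},
    (0.12, 1.5): {"a": [4e-3] * 4, "b": [2e-3] * 4},
}
mu_max, eta_max = select_mu_max(eta)  # mu_max == (0.12, 1.5), eta_max ~= 4e-3
```

The selected µ_max is the parameter whose snapshots would extend the basis next.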

Figure 3 shows the results for Algorithm 5, where tol_ASS = 0.03. Using the early stop, the greedy algorithm can be terminated in time and the dimension of the RB can be kept small without losing accuracy. It can be seen that 47 RB vectors already make


the true output error very small, and the output error bound begins to stagnate, so that the early stop gives a reasonable stopping criterion. Figure 4 shows the parameters selected in the RB extension with the greedy algorithm. The size of the circle shows how frequently the same parameter is selected for the RB extension.
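The early-stop logic can be sketched as a loop over basis extensions that monitors the indicator; the stagnation test used below (relative decrease over a sliding window below a threshold) is an illustrative assumption, not necessarily the exact criterion of the paper's Algorithm 5:

```python
# Sketch of a greedy RB extension loop with an early stop on stagnation.
# The stagnation test (relative decrease over `window` steps below `stag_tol`)
# is an illustrative assumption, not necessarily Algorithm 5's exact criterion.

def greedy_size(indicator_trace, tol=1e-6, window=3, stag_tol=0.03):
    history = []
    for n, eta in enumerate(indicator_trace, start=1):
        history.append(eta)
        if eta <= tol:
            return n, "tolerance reached"
        if len(history) > window:
            old = history[-1 - window]
            if (old - eta) / old < stag_tol:   # bound no longer decays
                return n, "early stop"
    return len(history), "budget exhausted"

# toy indicator trace: the bound decays, then stagnates near 1e-3
trace = [1e0, 1e-1, 1e-2, 1e-3, 9.9e-4, 9.8e-4, 9.75e-4]
n_stop, reason = greedy_size(trace)  # stops at n_stop == 7 with "early stop"
```

With such a test the extension halts as soon as further basis vectors no longer reduce the (conservative) bound, even though the prescribed tolerance has not been reached.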

Here we want to point out that the difference between the error bound for the field variable and the output error bound is not so big. This is not surprising, because the derivation of the error bound for the output is based on that of the field variable. The technique of using the dual system could be employed to improve the error estimates, which will be investigated in the future.

Finally, to further assess the reliability and efficiency of the ROM, we performed the detailed and reduced simulations using a validation set P_val with 600 random sample points in the parameter domain. Table 5 shows the average runtime over the validation set and the maximal error, defined as Max error = max_{µ ∈ P_val} ē_N(µ). It is seen that the average runtime for a detailed simulation is sped up by a factor of 53, and the maximal true error is below the prespecified tolerance tol_RB. In addition, the concentrations at the outlet of the column computed by using the FOM and the ROM at a given parameter µ = (Q, t_in) = (0.1018, 1.3487) are plotted in Figure 5, which shows that the ROM (14) reproduces the dynamics of the original FOM (12).
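The two validation quantities can be stated in a few lines; the error values below are placeholders, while the timings mimic the averages reported in Table 5:

```python
# Sketch: maximal true output error over a validation set and speed-up factor.
# The per-parameter errors are made-up placeholders; the timings mimic Table 5
# (312.13 s FOM, 6.3 s ROM on average).

def validate(errors_over_pval, t_fom_avg, t_rom_avg):
    max_error = max(errors_over_pval)   # max over mu in P_val of e_N(mu)
    speed_up = t_fom_avg / t_rom_avg    # SpF
    return max_error, speed_up

max_err, spf = validate([3.1e-7, 4.58e-7, 2.0e-7],
                        t_fom_avg=312.13, t_rom_avg=6.3)
# max_err == 4.58e-7; spf rounds to 50
```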

[Figure 2 (plot omitted): x-axis "Size of RB N" (6 to 66); y-axis "max_{µ ∈ P_train} error" (10^-9 to 10^3); curves: field variable error bound, output error bound, true output error.]

Figure 2: Illustration of the error bound decay during the RB extension using Algorithm 4, and the corresponding true output error. The output error bound η_N(µ_max) and the maximal true output error e^max_N are defined in Section 8.2; the field variable error bound is defined as η_N = max_{µ ∈ P_train} max_{z ∈ {a,b}} η_{N,M,c_z}(µ), where η_{N,M,c_z}(µ) = (1/K) Σ_{n=1}^{K} η^n_{N,M,c_z}(µ).

[Figure 3 (plot omitted): x-axis "Size of RB N" (6 to 56); y-axis "max_{µ ∈ P_train} error" (10^-7 to 10^3); curves: output error bound, true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure 4 (plot omitted): scatter plot over the parameter domain; feed flow rate Q from 0.0667 to 0.1667, injection period t_in from 0.5 to 2.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

Table 5: Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = 1.0 × 10^-7, tol_ASS = 1.0 × 10^-4, tol_RB = 1.0 × 10^-6.

  Simulations          | Max error     | Average runtime [s] | SpF
  ---------------------|---------------|---------------------|-----
  FOM (N = 1500)       | -             | 312.13              | (-)
  ROM, POD-Greedy      | 3.79 × 10^-7  | 6.3                 | 50
  ROM, ASS-POD-Greedy  | 4.58 × 10^-7  | 6.3                 | 50

[Figure 5 (plot omitted): x-axis "Dimensionless Time" (0 to 11); y-axis "Dimensionless Concentration" (0 to 0.909); curves: c_a FOM, c_b FOM, c_a ROM, c_b ROM.]

Figure 5: Concentrations at the outlet of the column using the FOM (N = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, 1.3487).

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

  min_{µ ∈ P}  -P̂r(µ)
  s.t.  Rec_min - R̂ec(µ) ≤ 0,
        ĉ^n_z(µ), q̂^n_z(µ) are the RB approximations from the ROM (14), z = a, b.

Here P̂r(µ) and R̂ec(µ) are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter µ, the production and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work, we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm in the open library NLopt [27], to solve the optimization problems. Let µ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ||µ_{k+1} - µ_k|| < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^-4. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime compared to the FOM-based optimization is significantly reduced; the speed-up factor (SpF) is 54.
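A minimal sketch of the ROM-based optimization loop is given below. It is assumption-laden: `solve_rom` is a made-up stand-in for a reduced-basis solve, the recovery constraint is imposed through a penalty (one common option; this chunk does not detail how the constraint is handled in NLopt), and a simple compass search replaces NLOPT_GN_DIRECT_L so that the example stays self-contained. Only the stopping logic driven by a step tolerance of the order of tol_opt mirrors the text.

```python
# Sketch only: a stand-in for the ROM-based optimization. The paper uses
# NLOPT_GN_DIRECT_L from NLopt; here a crude compass search and a penalized
# objective keep the example self-contained. `solve_rom` is a toy surrogate,
# not the batch chromatography ROM.

def solve_rom(mu):
    # toy "ROM": production peaks at mu = (0.08, 1.05); recovery is constant
    q, t_in = mu
    pr = 1.0 - (q - 0.08) ** 2 - (t_in - 1.05) ** 2
    rec = 0.9
    return pr, rec

def objective(mu, rec_min=0.8, penalty=1e3):
    pr, rec = solve_rom(mu)
    # minimize -Pr(mu) subject to Rec_min - Rec(mu) <= 0, via a penalty term
    return -pr + penalty * max(0.0, rec_min - rec)

def compass_search(mu0, step=0.1, tol_opt=1e-4):
    # derivative-free search; the step size bounds ||mu_{k+1} - mu_k||, so
    # "step <= tol_opt" plays the role of the stopping test in the text
    mu = list(mu0)
    while step > tol_opt:
        improved = False
        for i in range(len(mu)):
            for s in (step, -step):
                cand = list(mu)
                cand[i] += s
                if objective(cand) < objective(mu):
                    mu, improved = cand, True
        if not improved:
            step *= 0.5
    return tuple(mu)

mu_opt = compass_search((0.10, 1.20))  # converges near (0.08, 1.05)
```

Each candidate evaluation costs one cheap ROM solve, which is exactly why the surrogate makes a global, sampling-heavy method such as DIRECT-L affordable.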

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours if there is no ASS for the generation of the CRB and RB. It is even longer than the runtime for directly solving the FOM-based optimization; the latter is just 33.88 hours. However, when the technique of ASS is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is repeatedly used in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For many simulations with varying parameters, the following two runtimes should be well balanced: one is constructing and using a surrogate ROM, the other is directly using the original FOM.
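One way to make this balance concrete is a break-even count: the number of parameter queries beyond which building the ROM once pays off. The sketch below uses only the RB-generation time from Table 4 and the average solve times from Table 5 as cost components; this is an assumption, since the full offline cost also contains the CRB generation.

```python
# Sketch: break-even number of simulations for "build ROM once + ROM solves"
# versus "FOM solves only". Assumes offline cost = RB generation time only
# (Table 4); the true offline cost also includes CRB construction.
import math

def break_even(t_offline_hours, t_fom_s, t_rom_s):
    # smallest n with t_offline + n * t_rom < n * t_fom
    gain_per_solve = t_fom_s - t_rom_s            # seconds saved per query
    return math.ceil(t_offline_hours * 3600.0 / gain_per_solve)

n_star = break_even(7.92, 312.13, 6.3)  # ASS-POD-Greedy offline time
```

Under these (partial) cost assumptions the surrogate pays off after roughly a hundred queries, which is quickly reached in an optimization run with hundreds of iterations.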

Table 6: Comparison of the optimization based on the ROM and FOM.

  Simulations     | Objective (Pr) | Opt. solution (µ)  | N_it (1) | Runtime [h] | SpF
  ----------------|----------------|--------------------|----------|-------------|-----
  FOM-based Opt.  | 0.020264       | (0.07964, 1.05445) | 202      | 33.88       | -
  ROM-based Opt.  | 0.020266       | (0.07964, 1.05445) | 202      | 0.63        | 54

  (1) N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system is a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academie des Sciences, Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt, (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.



Page 25: Accelerating PDE constrained optimization by the reduced ...€¦ · tions (PDE constrained optimization, for short), has emerged as a challenging research area. It has increasingly

the true output error very small and the output error bound begins to stagnate so thatthe early-stop gives a reasonable stopping criterion Figure 4 shows the parametersselected in the RB extension with the greedy algorithm The size of the circle showshow frequently the same parameter is selected for the RB extension

Here we want to point out that the difference between the error bound for the fieldvariable and the output error bound is not so big This is not surprising because thederivation of the error bound for the output is based on that of the field variable Thetechnique of using the dual system could be employed to improve the error estimateswhich will be investigated in the future

Finally to further assess the reliability and efficiency of the ROM we performed thedetailed and reduced simulation using a validation set Pval with 600 random samplepoints in the parameter domain Table 5 shows the average runtime over the validationset and the maximal error defined as Maxerror = max

microisinPval

eN (micro) It is seen that theaverage runtime for a detailed simulation is sped up by a factor of 53 and the maximaltrue error is below the prespecified tolerance tolRB In addition the concentrationsat the outlet of the column computed by using the FOM and the ROM at a givenparameter micro = (Q tin) = (01018 13487) are plotted in Figure 5 which shows thatthe ROM (14) reproduced the dynamics of the original FOM (12)

6 16 26 36 46 56 6610

minus9

10minus7

10minus5

10minus3

10minus1

101

103

Size of RB N

max

micro isin

Ptr

ain e

rror

Field variable error bound

Output error bound

True output error

Figure 2 Illustration of the error bound decay during the RB extension using Algo-rithm 4 and the corresponding true output error The output error boundηN (micromax) and the maximal true output error emax

N are defined in Section 82the field variable error bound is defined as ηN = max

microisinPtrainmaxzisinab

ηNMcz (micro)

where ηNMcz(micro) = 1

K

sumKn=1 η

nNMcz

(micro)

23

6 16 26 36 46 5610

minus7

10minus5

10minus3

10minus1

101

103

Size of RB N

max

micro isin

Ptr

ain e

rror

Output error bound

True output error

Figure 3 Error bound decay during the RB extension using the early-stop techniqueAlgorithm 5 and the corresponding maximal true output error

00667 00867 01067 01267 01467 0166705

0667

0834

1001

1168

1335

1502

1669

1836

2

Injection period tin

Feed flow rate Q

Figure 4 Parameter selection in the generation of the RB The size of a circle indicateshow frequently the parameter is selected during the process The bigger thecircle the more often the parameter is selected

24

Table 5 Runtime comparison of the detailed and reduced simulations over a validationset Pval with 600 random sample points Tolerances for the generation of theROM tolCRB = 10times 10minus7 tolASS = 10times 10minus4 tolRB = 10times 10minus6

Simulations Max error Average runtime [s]SpFFOM (N = 1500) ndash 31213(-)ROM POD-Greedy 379times 10minus7 63 50ROM ASS-POD-Greedy 458times 10minus7 63 50

0 1 2 3 4 5 6 7 8 9 10 110

03

06

0909

Dim

ensio

nle

ss C

oncentr

ation

Dimensionless Time

caminusFOM

cbminusFOM

caminusROM

cbminusROM

Figure 5 Concentrations at the outlet of the column using the FOM (N = 1500) andthe ROM (N = 47) at the parameter micro = (Q tin) = (01018 13487)

25

83 ROM-based OptimizationOnce the ROM (14) is obtained the original FOM-based optimization problem (34)can be approximated by the following ROM-based optimization problem

minmicroisinPminusP r(micro)

st Recmin minus Rec(micro) le 0cnz (micro) qnz (micro) are the RB approximations from the ROM (14) z = a b

Here P r(micro) and Rec(micro) are the approximated production and recovery yield respec-tively More specifically at each iteration of the optimization process for a selectedparameter micro the production and recovery yield are evaluated by solving the ROM(14) rather than the original FOM (12)

In this work we use the global optimizer NLOPT GN DIRECT L an efficientgradientndashfree algorithm in the open library NLopt [27] to solve the optimization prob-lems Let microk be the vector of parameters determined by the optimization procedureat the kth iteration k = 1 2 When microk+1 minus microk lt tolopt the optimizationprocess is stopped and the optimal solution is obtained The tolerance is specifiedas tolopt = 10 times 10minus4 The optimization results are shown in Table 83 The op-timal solution to the ROM-based optimization converges to that of the FOM-basedone Furthermore the runtime for solving the FOM-based optimization is significantlyreduced The speed-up factor (SpF) is 54

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization problem is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization problem, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization problem is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is much larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For simulations with varying parameters, the following two runtimes should be well balanced: the time for constructing and then using a surrogate ROM, and the time for directly using the original FOM.
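This balance can be made concrete with a simple break-even estimate: building the ROM pays off once the per-query saving has amortized the offline cost. The sketch below uses the average per-query runtimes from Table 5; the two offline costs are hypothetical round numbers of the right order for this application, not measured values.

```python
import math

T_FOM = 312.13   # average FOM runtime per parameter query [s] (Table 5)
T_ROM = 6.3      # average ROM runtime per parameter query [s] (Table 5)

def break_even_queries(offline_cost_s):
    """Smallest number of parameter queries n for which
    offline_cost + n * T_ROM < n * T_FOM."""
    per_query_saving = T_FOM - T_ROM
    return math.ceil(offline_cost_s / per_query_saving)

# Hypothetical offline costs (in seconds), for illustration only:
with_ass = 14.0 * 3600     # assumed offline cost when ASS is used
without_ass = 78.7 * 3600  # assumed offline cost without ASS
```

With these assumptions the ROM amortizes after a few hundred queries with ASS, versus nearly a thousand without it, which is why reducing the offline time matters whenever the number of parameter queries is moderate.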

Table 6: Comparison of the optimization based on the ROM and on the FOM.

  Simulations      Objective (Pr)   Opt. solution (µ)     N_it^1   Runtime [h]   SpF
  FOM-based Opt.   0.020264         (0.07964, 1.05445)    202      33.88         –
  ROM-based Opt.   0.020266         (0.07964, 1.05445)    202      0.63          54

  ^1 N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and have applied the approach to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique has been proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation has been derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion has been proposed, which makes the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain, and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate, and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academy Science Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] ——, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20–25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.


Max Planck Institute Magdeburg Preprints

• Introduction
• Problem statement
• Reduced basis methods
• Empirical interpolation
• Adaptive snapshot selection
• RB scheme for batch chromatography
  • Full-order model based on FV discretization
  • Reduced-order model
  • Offline-online decomposition
• Output-oriented error estimation
  • Output error estimation for the reduced order model
  • Output error estimation for the batch chromatographic model
  • An early-stop criterion for the Greedy algorithm
• Numerical experiments
  • Performance of the adaptive snapshot selection
  • Performance of the output error bound
  • ROM-based Optimization
• Conclusions

[Figure: semi-log plot of the maximal error over µ ∈ P_train versus the size of the RB, N (error axis from 10^-7 to 10^3); curves: output error bound and true output error.]

Figure 3: Error bound decay during the RB extension using the early-stop technique (Algorithm 5), and the corresponding maximal true output error.

[Figure: scatter of selected parameters in the plane of injection period t_in (from 0.0667 to 0.1667) versus feed flow rate Q (from 0.5 to 2); circle size indicates selection frequency.]

Figure 4: Parameter selection in the generation of the RB. The size of a circle indicates how frequently the parameter is selected during the process: the bigger the circle, the more often the parameter is selected.

24

Table 5 Runtime comparison of the detailed and reduced simulations over a validationset Pval with 600 random sample points Tolerances for the generation of theROM tolCRB = 10times 10minus7 tolASS = 10times 10minus4 tolRB = 10times 10minus6

Simulations Max error Average runtime [s]SpFFOM (N = 1500) ndash 31213(-)ROM POD-Greedy 379times 10minus7 63 50ROM ASS-POD-Greedy 458times 10minus7 63 50

0 1 2 3 4 5 6 7 8 9 10 110

03

06

0909

Dim

ensio

nle

ss C

oncentr

ation

Dimensionless Time

caminusFOM

cbminusFOM

caminusROM

cbminusROM

Figure 5 Concentrations at the outlet of the column using the FOM (N = 1500) andthe ROM (N = 47) at the parameter micro = (Q tin) = (01018 13487)

25

83 ROM-based OptimizationOnce the ROM (14) is obtained the original FOM-based optimization problem (34)can be approximated by the following ROM-based optimization problem

minmicroisinPminusP r(micro)

st Recmin minus Rec(micro) le 0cnz (micro) qnz (micro) are the RB approximations from the ROM (14) z = a b

Here P r(micro) and Rec(micro) are the approximated production and recovery yield respec-tively More specifically at each iteration of the optimization process for a selectedparameter micro the production and recovery yield are evaluated by solving the ROM(14) rather than the original FOM (12)

In this work we use the global optimizer NLOPT GN DIRECT L an efficientgradientndashfree algorithm in the open library NLopt [27] to solve the optimization prob-lems Let microk be the vector of parameters determined by the optimization procedureat the kth iteration k = 1 2 When microk+1 minus microk lt tolopt the optimizationprocess is stopped and the optimal solution is obtained The tolerance is specifiedas tolopt = 10 times 10minus4 The optimization results are shown in Table 83 The op-timal solution to the ROM-based optimization converges to that of the FOM-basedone Furthermore the runtime for solving the FOM-based optimization is significantlyreduced The speed-up factor (SpF) is 54

Note that if the offline cost ie the runtime for constructing the ROM is taken intoaccount the total cost of solving the ROM-based optimization is 7935 hours if thereis no ASS for the generation of the CRB and RB It is even longer than the runtime fordirectly solving the FOM-based optimization The latter is just 3388 hours Howeverwhen the technique of ASS was implemented for the construction of the ROM thetotal cost of solving the ROM-based optimization is only 146 hours which is less thanhalf of the runtime for solving the FOM-based one Needless to say the gain is muchmore when the ROM is repeatedly used for multi-query context

In fact although the offline cost is usually not considered in the RB communitythe total cost is an issue for many applications eg the ROM-based optimization inthis paper For many simulations with varying parameters the following two runtimesshould be well balanced one is constructing and using a surrogate ROM the other isdirectly using the original FOM

Table 6 Comparison of the optimization based on the ROM and FOMSimulations Objective (Pr) Opt solution (micro) N it 1 Runtime [h]SpFFOM-based Opt 0020264 (007964 105445) 202 3388 -ROM-based Opt 0020266 (007964 105445) 202 063 541 N it denotes the number of iterations required to converge

26

9 ConclusionsWe have discussed how to use a surrogate ROM to solve an optimization problemconstrained by parameterized PDEs with nonlinearity and applied it to batch chro-matography

As a robust PMOR method the RBM serves to generate the ROM The empiricaloperator interpolation has been employed for an efficient offline-online decompositionThe ASS technique is proposed for an efficient generation of the RB andor CRBby which the offline time was significantly reduced with negligible loss of accuracyAn output-oriented error estimation is derived based on the residual in the vectorspace However the output error bound is conservative due to the error accumulationwith time evolution To circumvent the stagnation of the error bound an early-stopcriterion was proposed to make the RB extension stop in time with a desired accuracyThe resulting goal-oriented ROM is reliable and efficient over the whole parameterdomain and is qualified for optimization To avoid the error accumulation in the errorbound output error estimation using the dual system should be a good candidate andis under current investigation

References[1] Z Bai Krylov subspace techniques for reduced-order modeling of large-scale dy-

namical systems Applied Numerical Mathematics 43 (2002) pp 9ndash44

[2] M Barrault Y Maday N C Nguyen and A T Patera An lsquoempir-ical interpolationrsquo method application to efficient reduced-basis discretization ofpartial differential equations Comptes Rendus Mathematique Academy ScienceParis Series I 339 (2004) pp 667ndash672

[3] U Baur C Beattie P Benner and S Gugercin Interpolatory projectionmethods for parameterized model reduction SIAM Journal on Scientific Comput-ing 33 (2011) pp 2489ndash2518

[4] P Benner L Feng S Li and Y zhang Reduced-order modeling and rom-based optimization of batch chromatography in ENUMATH 2013 Proceedingsaccepted

[5] P Benner S Gugercin and K Willcox A survey of model reductionmethods for parametric systems MPI Magdeburg Preprints (2013)

[6] P Benner E Sachs and S Volkwein Model Order Reduction for PDEConstrained Optimization Preprints Konstanzer Online-Publikations-System(KOPS) (2014)

[7] L T Biegler O Ghattas M Heinkenschloss D Keyes and B vanBloemen Waanders Real-Time PDE-constrained Optimization Society for In-dustrial and Applied Mathematics 2007

27

[8] L T Biegler O Ghattas M Heinkenschloss and B van BloemenWaanders Large-scale PDE-constrained Optimization Springer 2003

[9] T Bui-Thanh Model-constrained optimization methods for reduction of param-eterized large-scale systems PhD thesis Massachusetts Institute of Technology2007

[10] T Bui-Thanh K Willcox and O Ghattas Model reduction for large-scalesystems with high-dimensional parametric input space SIAM Journal on ScientificComputing 30 (2008) pp 3270ndash3288

[11] M Drohmann B Haasdonk and M Ohlberger Reduced basis approxima-tion for nonlinear parametrized evolution equations based on empirical operatorinterpolation SIAM Journal on Scientific Computing 34 (2012) pp 937ndash969

[12] J L Eftang D J Knezevic and A T Patera An hp certified reduced ba-sis method for parametrized parabolic partial differential equations Mathematicaland Computer Modelling of Dynamical Systems 17 (2011) pp 395ndash422

[13] M Fahl Trust-region methods for flow control based on reduced order modellingPhD thesis Universitat Trier 2000

[14] M Fahl and E W Sachs Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition in Large-scalePDE-constrained optimization Springer 2003 pp 268ndash280

[15] L Feng and P Benner A robust algorithm for parametric model order reduc-tion Proceedings in Applied Mathematics and Mechanics 7 (2008) pp 1021501ndash1021502

[16] L Feng P Benner and J G Korvink Subspace recycling accelerates theparametric macro-modeling of MEMS International Journal for Numerical Meth-ods in Engineering 94 (2013) pp 84ndash110

[17] W Gao and S Engell Iterative set-point optimization of batch chromatogra-phy Computers amp Chemical Engineering 29 (2005) pp 1401ndash1409

[18] M A Grepl Reduced-basis approximation a posteriori error estimation forparabolic partial differential equations PhD thesis Massachusetts Institute ofTechnology 2005

[19] D Gromov S Li and J Raisch A hierarchical approach to optimal con-trol of a hybrid chromatographic batch process in Advanced Control of ChemicalProcesses vol 7 2009 pp 339ndash344

[20] G Guiochon A Felinger D G Shirazi and A M Katti Fundamentalsof Preparative and Nonlinear Chromatography Academic Press 2006

[21] B Haasdonk Convergence rates of the POD-Greedy method ESAIM Mathe-matical Modelling and Numerical Analysis 47 (2013) pp 859ndash873

28

[22] B Haasdonk M Dihlmann and M Ohlberger A training set and multiplebases generation approach for parameterized model reduction based on adaptivegrids in parameter space Mathematical and Computer Modelling of DynamicalSystems 17 (2011) pp 423ndash442

[23] B Haasdonk and M Ohlberger Adaptive basis enrichment for the reducedbasis method applied to finite volume schemes in Proc 5th International Sympo-sium on Finite Volumes for Complex Applications 2008 pp 471ndash478

[24] Reduced basis method for finite volume approximations of parametrized lin-ear evolution equations ESAIM Mathematical Modelling and Numerical Analy-sis 42 (2008) pp 277ndash302

[25] B Haasdonk M Ohlberger and G Rozza A reduced basis method forevolution schemes with parameter-dependent explicit operators Electronic Trans-actions on Numerical Analysis 32 (2008) pp 145ndash161

[26] K-H Hoffmann G Leugering and F Troltzsch Optimal control ofpartial differential equations international conference in Chemnitz GermanyApril 20-25 1998 vol 133 Springer 1999

[27] S G Johnson The NLopt nonlinear-optimization package httpab-initiomitedunlopt (2010)

[28] R J LeVeque Finite volume methods for hyperbolic problems vol 31 Cam-bridge University Press 2002

[29] A Manzoni A Quarteroni and G Rozza Shape optimization for viscousflows by reduced basis methods and free-form deformation International Journalfor Numerical Methods in Fluids 70 (2012) pp 646ndash670

[30] N-C Nguyen G Rozza and A T Patera Reduced basis approximationand a posteriori error estimation for the time-dependent viscous Burgersrsquo equationCalcolo 46 (2009) pp 157ndash185

[31] A K Noor and J M Peters Reduced basis technique for nonlinear analysisof structures AIAA Journal 18 (1980) pp 145ndash161

[32] M Ohlberger and M Schaefer Reduced basis method for parameter op-timization of multiscale problems in Proceedings of ALGORITMY 2012 2012pp 1ndash10

[33] I B Oliveira and A T Patera Reduced-basis techniques for rapid reliableoptimization of systems described by affinely parametrized coercive elliptic partialdifferential equations Optimization and Engineering 8 (2007) pp 43ndash65

[34] A T Patera and G Rozza Reduced basis approximation and a posteriorierror estimation for parametrized partial differential equations Version 10 Copy-right MIT 2006

29

[35] C Prudrsquohomme D V Rovas K Veroy L Machiels Y Maday A TPatera and G Turinici Reliable real-time solution of parametrized partialdifferential equations Reduced-basis output bound methods Journal of Fluids En-gineering 124 (2002) pp 70ndash80

[36] D V Rovas Reduced-basis output bound methods for parametrized partial dif-ferential equations PhD thesis Massachusetts Institute of Technology 2003

30

31

Max Planck Institute Magdeburg Preprints

  • Introduction
  • Problem statement
  • Reduced basis methods
  • Empirical interpolation
  • Adaptive snapshot selection
  • RB scheme for batch chromatography
    • Full-order model based on FV discretization
    • Reduced-order model
    • Offline-online decomposition
      • Output-oriented error estimation
        • Output error estimation for the reduced order model
        • Output error estimation for the batch chromatographic model
        • An early-stop criterion for the Greedy algorithm
          • Numerical experiments
            • Performance of the adaptive snapshot selection
            • Performance of the output error bound
            • ROM-based Optimization
              • Conclusions
Page 27: Accelerating PDE constrained optimization by the reduced ...€¦ · tions (PDE constrained optimization, for short), has emerged as a challenging research area. It has increasingly

Table 5 Runtime comparison of the detailed and reduced simulations over a validationset Pval with 600 random sample points Tolerances for the generation of theROM tolCRB = 10times 10minus7 tolASS = 10times 10minus4 tolRB = 10times 10minus6

Simulations Max error Average runtime [s]SpFFOM (N = 1500) ndash 31213(-)ROM POD-Greedy 379times 10minus7 63 50ROM ASS-POD-Greedy 458times 10minus7 63 50

0 1 2 3 4 5 6 7 8 9 10 110

03

06

0909

Dim

ensio

nle

ss C

oncentr

ation

Dimensionless Time

caminusFOM

cbminusFOM

caminusROM

cbminusROM

Figure 5 Concentrations at the outlet of the column using the FOM (N = 1500) andthe ROM (N = 47) at the parameter micro = (Q tin) = (01018 13487)

25

83 ROM-based OptimizationOnce the ROM (14) is obtained the original FOM-based optimization problem (34)can be approximated by the following ROM-based optimization problem

minmicroisinPminusP r(micro)

st Recmin minus Rec(micro) le 0cnz (micro) qnz (micro) are the RB approximations from the ROM (14) z = a b

Here P r(micro) and Rec(micro) are the approximated production and recovery yield respec-tively More specifically at each iteration of the optimization process for a selectedparameter micro the production and recovery yield are evaluated by solving the ROM(14) rather than the original FOM (12)

In this work we use the global optimizer NLOPT GN DIRECT L an efficientgradientndashfree algorithm in the open library NLopt [27] to solve the optimization prob-lems Let microk be the vector of parameters determined by the optimization procedureat the kth iteration k = 1 2 When microk+1 minus microk lt tolopt the optimizationprocess is stopped and the optimal solution is obtained The tolerance is specifiedas tolopt = 10 times 10minus4 The optimization results are shown in Table 83 The op-timal solution to the ROM-based optimization converges to that of the FOM-basedone Furthermore the runtime for solving the FOM-based optimization is significantlyreduced The speed-up factor (SpF) is 54

Note that if the offline cost ie the runtime for constructing the ROM is taken intoaccount the total cost of solving the ROM-based optimization is 7935 hours if thereis no ASS for the generation of the CRB and RB It is even longer than the runtime fordirectly solving the FOM-based optimization The latter is just 3388 hours Howeverwhen the technique of ASS was implemented for the construction of the ROM thetotal cost of solving the ROM-based optimization is only 146 hours which is less thanhalf of the runtime for solving the FOM-based one Needless to say the gain is muchmore when the ROM is repeatedly used for multi-query context

In fact although the offline cost is usually not considered in the RB communitythe total cost is an issue for many applications eg the ROM-based optimization inthis paper For many simulations with varying parameters the following two runtimesshould be well balanced one is constructing and using a surrogate ROM the other isdirectly using the original FOM

Table 6 Comparison of the optimization based on the ROM and FOMSimulations Objective (Pr) Opt solution (micro) N it 1 Runtime [h]SpFFOM-based Opt 0020264 (007964 105445) 202 3388 -ROM-based Opt 0020266 (007964 105445) 202 063 541 N it denotes the number of iterations required to converge

26

9 ConclusionsWe have discussed how to use a surrogate ROM to solve an optimization problemconstrained by parameterized PDEs with nonlinearity and applied it to batch chro-matography

As a robust PMOR method the RBM serves to generate the ROM The empiricaloperator interpolation has been employed for an efficient offline-online decompositionThe ASS technique is proposed for an efficient generation of the RB andor CRBby which the offline time was significantly reduced with negligible loss of accuracyAn output-oriented error estimation is derived based on the residual in the vectorspace However the output error bound is conservative due to the error accumulationwith time evolution To circumvent the stagnation of the error bound an early-stopcriterion was proposed to make the RB extension stop in time with a desired accuracyThe resulting goal-oriented ROM is reliable and efficient over the whole parameterdomain and is qualified for optimization To avoid the error accumulation in the errorbound output error estimation using the dual system should be a good candidate andis under current investigation

References[1] Z Bai Krylov subspace techniques for reduced-order modeling of large-scale dy-

namical systems Applied Numerical Mathematics 43 (2002) pp 9ndash44

[2] M Barrault Y Maday N C Nguyen and A T Patera An lsquoempir-ical interpolationrsquo method application to efficient reduced-basis discretization ofpartial differential equations Comptes Rendus Mathematique Academy ScienceParis Series I 339 (2004) pp 667ndash672

[3] U Baur C Beattie P Benner and S Gugercin Interpolatory projectionmethods for parameterized model reduction SIAM Journal on Scientific Comput-ing 33 (2011) pp 2489ndash2518

[4] P Benner L Feng S Li and Y zhang Reduced-order modeling and rom-based optimization of batch chromatography in ENUMATH 2013 Proceedingsaccepted

[5] P Benner S Gugercin and K Willcox A survey of model reductionmethods for parametric systems MPI Magdeburg Preprints (2013)

[6] P Benner E Sachs and S Volkwein Model Order Reduction for PDEConstrained Optimization Preprints Konstanzer Online-Publikations-System(KOPS) (2014)

[7] L T Biegler O Ghattas M Heinkenschloss D Keyes and B vanBloemen Waanders Real-Time PDE-constrained Optimization Society for In-dustrial and Applied Mathematics 2007

27

[8] L T Biegler O Ghattas M Heinkenschloss and B van BloemenWaanders Large-scale PDE-constrained Optimization Springer 2003

[9] T Bui-Thanh Model-constrained optimization methods for reduction of param-eterized large-scale systems PhD thesis Massachusetts Institute of Technology2007

[10] T Bui-Thanh K Willcox and O Ghattas Model reduction for large-scalesystems with high-dimensional parametric input space SIAM Journal on ScientificComputing 30 (2008) pp 3270ndash3288

[11] M Drohmann B Haasdonk and M Ohlberger Reduced basis approxima-tion for nonlinear parametrized evolution equations based on empirical operatorinterpolation SIAM Journal on Scientific Computing 34 (2012) pp 937ndash969

[12] J L Eftang D J Knezevic and A T Patera An hp certified reduced ba-sis method for parametrized parabolic partial differential equations Mathematicaland Computer Modelling of Dynamical Systems 17 (2011) pp 395ndash422

[13] M Fahl Trust-region methods for flow control based on reduced order modellingPhD thesis Universitat Trier 2000

[14] M Fahl and E W Sachs Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition in Large-scalePDE-constrained optimization Springer 2003 pp 268ndash280

[15] L Feng and P Benner A robust algorithm for parametric model order reduc-tion Proceedings in Applied Mathematics and Mechanics 7 (2008) pp 1021501ndash1021502

[16] L Feng P Benner and J G Korvink Subspace recycling accelerates theparametric macro-modeling of MEMS International Journal for Numerical Meth-ods in Engineering 94 (2013) pp 84ndash110

[17] W Gao and S Engell Iterative set-point optimization of batch chromatogra-phy Computers amp Chemical Engineering 29 (2005) pp 1401ndash1409

[18] M A Grepl Reduced-basis approximation a posteriori error estimation forparabolic partial differential equations PhD thesis Massachusetts Institute ofTechnology 2005

[19] D Gromov S Li and J Raisch A hierarchical approach to optimal con-trol of a hybrid chromatographic batch process in Advanced Control of ChemicalProcesses vol 7 2009 pp 339ndash344

[20] G Guiochon A Felinger D G Shirazi and A M Katti Fundamentalsof Preparative and Nonlinear Chromatography Academic Press 2006

[21] B Haasdonk Convergence rates of the POD-Greedy method ESAIM Mathe-matical Modelling and Numerical Analysis 47 (2013) pp 859ndash873

28

8.3 ROM-based Optimization

Once the ROM (14) is obtained, the original FOM-based optimization problem (34) can be approximated by the following ROM-based optimization problem:

    min_{μ ∈ P}  −Pr(μ)

    s.t.  Rec_min − Rec(μ) ≤ 0,

where c_z^n(μ), q_z^n(μ), z = a, b, are the RB approximations from the ROM (14).

Here Pr(μ) and Rec(μ) are the approximated production and recovery yield, respectively. More specifically, at each iteration of the optimization process, for a selected parameter μ, the production and recovery yield are evaluated by solving the ROM (14) rather than the original FOM (12).

In this work we use the global optimizer NLOPT_GN_DIRECT_L, an efficient gradient-free algorithm from the open-source library NLopt [27], to solve the optimization problems. Let μ_k be the vector of parameters determined by the optimization procedure at the k-th iteration, k = 1, 2, .... When ||μ_{k+1} − μ_k|| < tol_opt, the optimization process is stopped and the optimal solution is obtained. The tolerance is specified as tol_opt = 1.0 × 10^{-4}. The optimization results are shown in Table 6. The optimal solution to the ROM-based optimization converges to that of the FOM-based one. Furthermore, the runtime is significantly reduced compared with the FOM-based optimization: the speed-up factor (SpF) is 54.
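The structure of this loop can be sketched as follows. This is only an illustration, not the setup used in the paper: the actual solver is NLOPT_GN_DIRECT_L from NLopt, whereas the sketch below uses a simple stdlib pattern search; `solve_rom`, the mock Pr/Rec formulas, the starting point, and `REC_MIN` are all hypothetical stand-ins for the batch-chromatography ROM quantities.

```python
import math

# Hypothetical stand-in for evaluating production and recovery yield by
# solving the ROM (14) at a given parameter vector mu (mock formulas only).
def solve_rom(mu):
    q, t_in = mu
    pr = 0.02 - ((q - 0.08) ** 2 + (t_in - 1.05) ** 2)  # mock production Pr(mu)
    rec = 0.9 - 0.5 * abs(q - 0.08)                     # mock recovery Rec(mu)
    return pr, rec

REC_MIN = 0.8       # assumed recovery constraint Rec_min
TOL_OPT = 1.0e-4    # stopping tolerance, as in the paper

def objective(mu):
    """Minimize -Pr(mu) subject to Rec_min - Rec(mu) <= 0."""
    pr, rec = solve_rom(mu)
    if REC_MIN - rec > 0:
        return float("inf")   # reject infeasible parameters
    return -pr

def pattern_search(mu0, step=0.1):
    """Derivative-free search; a crude substitute for NLOPT_GN_DIRECT_L."""
    mu = list(mu0)
    f = objective(mu)
    while step > TOL_OPT / 10:
        best_mu, best_f = mu, f
        # probe +/- step along each coordinate direction
        for i in range(len(mu)):
            for s in (step, -step):
                cand = mu.copy()
                cand[i] += s
                fc = objective(cand)
                if fc < best_f:
                    best_mu, best_f = cand, fc
        moved = math.dist(mu, best_mu)
        mu, f = best_mu, best_f
        if moved < TOL_OPT:   # ||mu_{k+1} - mu_k|| < tol_opt: refine the step
            step /= 2
    return mu, -f             # optimal parameters and achieved Pr
```

With the mock ROM above, `pattern_search([0.05, 1.0])` converges to the unconstrained optimum (0.08, 1.05) of the mock production; in the paper the same stopping test is applied to the iterates of the DIRECT-L solver driven by the real ROM.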

Note that if the offline cost, i.e., the runtime for constructing the ROM, is taken into account, the total cost of solving the ROM-based optimization is 79.35 hours when no ASS is used for the generation of the CRB and RB. This is even longer than the runtime for directly solving the FOM-based optimization, which is just 33.88 hours. However, when the ASS technique is employed for the construction of the ROM, the total cost of solving the ROM-based optimization is only 14.6 hours, which is less than half of the runtime for solving the FOM-based one. Needless to say, the gain is even larger when the ROM is reused in a multi-query context.

In fact, although the offline cost is usually not considered in the RB community, the total cost is an issue for many applications, e.g., the ROM-based optimization in this paper. For simulations with varying parameters, the following two runtimes should be well balanced: the runtime for constructing and then using a surrogate ROM, and the runtime for directly using the original FOM.
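This trade-off can be made concrete with a small break-even calculation. The runtimes below are taken from the text and Table 6; the break-even framing itself (how many optimization runs are needed before the surrogate pays off) is our illustration, not a computation performed in the paper.

```python
# Runtimes reported in the paper, in hours.
t_fom_opt = 33.88                     # one FOM-based optimization
t_rom_opt = 0.63                      # one ROM-based optimization (online only)
offline_no_ass = 79.35 - t_rom_opt    # ROM construction without ASS
offline_ass = 14.6 - t_rom_opt        # ROM construction with ASS

def total_rom_cost(offline, n_queries):
    """Offline cost is paid once; each query adds only the online cost."""
    return offline + n_queries * t_rom_opt

def break_even(offline):
    """Smallest number of optimization runs for which the ROM route is cheaper."""
    n = 1
    while total_rom_cost(offline, n) >= n * t_fom_opt:
        n += 1
    return n
```

With ASS the surrogate already pays off for a single optimization run (14.6 h vs. 33.88 h); without ASS it only pays off from the third run onward, which matches the observation above that the total cost then exceeds that of the direct FOM-based optimization.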

Table 6: Comparison of the optimization based on the ROM and FOM

  Simulations     | Objective (Pr) | Opt. solution (μ)  | N_it¹ | Runtime [h] | SpF
  FOM-based Opt.  | 0.020264       | (0.07964, 1.05445) | 202   | 33.88       | -
  ROM-based Opt.  | 0.020266       | (0.07964, 1.05445) | 202   | 0.63        | 54

  ¹ N_it denotes the number of iterations required to converge.


9 Conclusions

We have discussed how to use a surrogate ROM to solve an optimization problem constrained by parameterized PDEs with nonlinearity, and applied it to batch chromatography.

As a robust PMOR method, the RBM serves to generate the ROM. The empirical operator interpolation has been employed for an efficient offline-online decomposition. The ASS technique is proposed for an efficient generation of the RB and/or CRB, by which the offline time is significantly reduced with negligible loss of accuracy. An output-oriented error estimation is derived based on the residual in the vector space. However, the output error bound is conservative due to the error accumulation with time evolution. To circumvent the stagnation of the error bound, an early-stop criterion is proposed to make the RB extension stop in time with a desired accuracy. The resulting goal-oriented ROM is reliable and efficient over the whole parameter domain and is qualified for optimization. To avoid the error accumulation in the error bound, output error estimation using the dual system should be a good candidate and is under current investigation.

References

[1] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9–44.

[2] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathematique, Academie des Sciences, Paris, Series I, 339 (2004), pp. 667–672.

[3] U. Baur, C. Beattie, P. Benner, and S. Gugercin, Interpolatory projection methods for parameterized model reduction, SIAM Journal on Scientific Computing, 33 (2011), pp. 2489–2518.

[4] P. Benner, L. Feng, S. Li, and Y. Zhang, Reduced-order modeling and ROM-based optimization of batch chromatography, in ENUMATH 2013 Proceedings, accepted.

[5] P. Benner, S. Gugercin, and K. Willcox, A survey of model reduction methods for parametric systems, MPI Magdeburg Preprints, (2013).

[6] P. Benner, E. Sachs, and S. Volkwein, Model Order Reduction for PDE Constrained Optimization, Preprints, Konstanzer Online-Publikations-System (KOPS), (2014).

[7] L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, Real-Time PDE-constrained Optimization, Society for Industrial and Applied Mathematics, 2007.

[8] L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, Large-scale PDE-constrained Optimization, Springer, 2003.

[9] T. Bui-Thanh, Model-constrained optimization methods for reduction of parameterized large-scale systems, PhD thesis, Massachusetts Institute of Technology, 2007.

[10] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model reduction for large-scale systems with high-dimensional parametric input space, SIAM Journal on Scientific Computing, 30 (2008), pp. 3270–3288.

[11] M. Drohmann, B. Haasdonk, and M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation, SIAM Journal on Scientific Computing, 34 (2012), pp. 937–969.

[12] J. L. Eftang, D. J. Knezevic, and A. T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 395–422.

[13] M. Fahl, Trust-region methods for flow control based on reduced order modelling, PhD thesis, Universität Trier, 2000.

[14] M. Fahl and E. W. Sachs, Reduced order modelling approaches to PDE-constrained optimization based on proper orthogonal decomposition, in Large-scale PDE-constrained optimization, Springer, 2003, pp. 268–280.

[15] L. Feng and P. Benner, A robust algorithm for parametric model order reduction, Proceedings in Applied Mathematics and Mechanics, 7 (2008), pp. 1021501–1021502.

[16] L. Feng, P. Benner, and J. G. Korvink, Subspace recycling accelerates the parametric macro-modeling of MEMS, International Journal for Numerical Methods in Engineering, 94 (2013), pp. 84–110.

[17] W. Gao and S. Engell, Iterative set-point optimization of batch chromatography, Computers & Chemical Engineering, 29 (2005), pp. 1401–1409.

[18] M. A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2005.

[19] D. Gromov, S. Li, and J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process, in Advanced Control of Chemical Processes, vol. 7, 2009, pp. 339–344.

[20] G. Guiochon, A. Felinger, D. G. Shirazi, and A. M. Katti, Fundamentals of Preparative and Nonlinear Chromatography, Academic Press, 2006.

[21] B. Haasdonk, Convergence rates of the POD-Greedy method, ESAIM: Mathematical Modelling and Numerical Analysis, 47 (2013), pp. 859–873.

[22] B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Mathematical and Computer Modelling of Dynamical Systems, 17 (2011), pp. 423–442.

[23] B. Haasdonk and M. Ohlberger, Adaptive basis enrichment for the reduced basis method applied to finite volume schemes, in Proc. 5th International Symposium on Finite Volumes for Complex Applications, 2008, pp. 471–478.

[24] B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Mathematical Modelling and Numerical Analysis, 42 (2008), pp. 277–302.

[25] B. Haasdonk, M. Ohlberger, and G. Rozza, A reduced basis method for evolution schemes with parameter-dependent explicit operators, Electronic Transactions on Numerical Analysis, 32 (2008), pp. 145–161.

[26] K.-H. Hoffmann, G. Leugering, and F. Tröltzsch, Optimal control of partial differential equations: international conference in Chemnitz, Germany, April 20-25, 1998, vol. 133, Springer, 1999.

[27] S. G. Johnson, The NLopt nonlinear-optimization package. http://ab-initio.mit.edu/nlopt (2010).

[28] R. J. LeVeque, Finite volume methods for hyperbolic problems, vol. 31, Cambridge University Press, 2002.

[29] A. Manzoni, A. Quarteroni, and G. Rozza, Shape optimization for viscous flows by reduced basis methods and free-form deformation, International Journal for Numerical Methods in Fluids, 70 (2012), pp. 646–670.

[30] N.-C. Nguyen, G. Rozza, and A. T. Patera, Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation, Calcolo, 46 (2009), pp. 157–185.

[31] A. K. Noor and J. M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA Journal, 18 (1980), pp. 145–161.

[32] M. Ohlberger and M. Schaefer, Reduced basis method for parameter optimization of multiscale problems, in Proceedings of ALGORITMY 2012, 2012, pp. 1–10.

[33] I. B. Oliveira and A. T. Patera, Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations, Optimization and Engineering, 8 (2007), pp. 43–65.

[34] A. T. Patera and G. Rozza, Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, 2006.

[35] C. Prud'homme, D. V. Rovas, K. Veroy, L. Machiels, Y. Maday, A. T. Patera, and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods, Journal of Fluids Engineering, 124 (2002), pp. 70–80.

[36] D. V. Rovas, Reduced-basis output bound methods for parametrized partial differential equations, PhD thesis, Massachusetts Institute of Technology, 2003.
