
Engineering Optimization
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/geno20

An adaptive multiquadric radial basis function method for expensive black-box mixed-integer nonlinear constrained optimization
Kashif Rashid a, Saumil Ambani b & Eren Cetinkaya c
a Schlumberger-Doll Research, Cambridge, MA, 02139, USA
b Mechanical Engineering, University of Michigan, 2350 Hayward, Ann Arbor, MI, 48109, USA
c IOE and Ross School of Business, University of Michigan, 1205 Beal Avenue, Ann Arbor, MI, 48109, USA
Published online: 23 Apr 2012.

To cite this article: Kashif Rashid, Saumil Ambani & Eren Cetinkaya (2013) An adaptive multiquadric radial basis function method for expensive black-box mixed-integer nonlinear constrained optimization, Engineering Optimization, 45:2, 185-206, DOI: 10.1080/0305215X.2012.665450

To link to this article: http://dx.doi.org/10.1080/0305215X.2012.665450



Engineering Optimization, Vol. 45, No. 2, February 2013, 185–206

An adaptive multiquadric radial basis function method for expensive black-box mixed-integer nonlinear constrained optimization

Kashif Rashid a*, Saumil Ambani b and Eren Cetinkaya c

a Schlumberger-Doll Research, Cambridge, MA 02139, USA; b Mechanical Engineering, University of Michigan, 2350 Hayward, Ann Arbor, MI 48109, USA; c IOE and Ross School of Business, University of Michigan, 1205 Beal Avenue, Ann Arbor, MI 48109, USA

(Received 8 July 2011; final version received 16 January 2012)

Many real-world optimization problems comprise objective functions that are based on the output of one or more simulation models. As these underlying processes can be time and computation intensive, the objective function is deemed expensive to evaluate. While methods to alleviate this cost in the optimization procedure have been explored previously, less attention has been given to the treatment of expensive constraints. This article presents a methodology for treating expensive simulation-based nonlinear constraints alongside an expensive simulation-based objective function using adaptive radial basis function techniques. Specifically, a multiquadric radial basis function approximation scheme is developed, together with a robust training method, to model not only the costly objective function, but also each expensive simulation-based constraint defined in the problem. The article presents the methodology developed for expensive nonlinear constrained optimization problems comprising both continuous and integer variables. Results from various test cases, both analytical and simulation-based, are presented.

Keywords: mixed-integer; nonlinear; constrained optimization; expensive functions; radial basis functions

1. Introduction

Optimization of simulation-based objective functions is a challenging task as little may be known about the function in terms of its continuity, differentiability and convexity, amongst other properties. Owing to this lack of transparency, such simulation-dependent functions are often referred to as black-box models. Furthermore, as the solution can depend on several computationally intensive processes, such models are, more often than not, expensive to evaluate. For example, consider a model composed of mechanical, thermal and electromagnetic sub-systems, each evaluated numerically using computationally demanding finite element analysis, such as those used in the automotive and aerospace industries for combined structural and aerodynamic modelling (Jones et al. 1998, McDonald et al. 2007). Similarly, numerical models used in the oil industry to simulate multi-phase flow through porous media to predict transient reservoir behaviour are also computationally intensive (Eclipse 2006, Avocet-IAM 2007). Thus, as these simulation-based models can take hours or days to evaluate, optimization of such models, which necessarily requires hundreds of objective function evaluations, can be considerably costly in both time and CPU requirement, especially when constraints are included.

*Corresponding author. Email: [email protected]

ISSN 0305-215X print / ISSN 1029-0273 online
© 2013 Taylor & Francis
http://dx.doi.org/10.1080/0305215X.2012.665450
http://www.tandfonline.com



While the means to mitigate the high computational burden expected in the optimization process have been previously explored through the use of proxy (or surrogate) models (Ishikawa and Matsunami 1997, Booker et al. 1999, Bazan and Russenschuck 2000, Jones 2001, Lebensztajn et al. 2004, Regis and Shoemaker 2007b), less attention has been paid to the treatment of expensive nonlinear constraints (Regis and Shoemaker 2005). These are simulation-dependent nonlinear constraints that can only be obtained as a consequence of the outcome of the simulation model or one of its sub-systems. Thus, these constraints are also non-trivial and expensive to evaluate, and must therefore be suitably modelled alongside the objective function. Note that where nonlinear constraints have been included in the optimization procedure, either derivative-free methods have been exploited, with and without the use of proxy models (Käck 2004, Couët et al. 2010, Djikpesse et al. 2011, Regis 2011), or a modified penalty function is accommodated that has limited utility in practice (Käck 2004, Holmström and Quttineh 2008).

More recently, it has been recognized that to improve the process for constrained optimization, each expensive nonlinear constraint must be individually modelled alongside the objective function (Käck 2004, Couët et al. 2010, Kleijnen et al. 2010, Regis 2011). This has led to the use of cubic radial basis function (RBF) models (Käck 2004, Regis 2011), neural networks (NN) (Couët et al. 2010) and kriging models (Kleijnen et al. 2010) for each of the expensive nonlinear quantities in the problem.

This work, similarly, proposes the construction of a multiquadric RBF model for the expensive objective function and each expensive inequality constraint using a specifically devised training method. In addition, expensive equalities are managed by relaxation and the method is designed to handle both continuous and integer variables using a mixed-integer nonlinear programming (MINLP) solver.

Two optimization problems are solved at each iteration, one seeking the best possible solution with a minimum separation requirement around known samples, and the other giving the best possible solution subject to a larger separation distance. These exclusion constraints also serve to prevent singularity of the RBF system matrix in each case. Thus, the proposed scheme is broader than the kriging scheme by Kleijnen et al. (2010), intended for stochastic simulation models with integer control variables only, and the cubic RBF scheme for inequality constrained problems based on a local metric stochastic search procedure for continuous constraints by Regis (2011).

In general, a number of methods are available to approximate expensive functions for optimization purposes, ranging from simple statistical response surface methods yielding hand-crafted local models, to more generalized schemes eminently better suited for high-dimensional and multi-modal function approximation. The latter commonly include the use of kriging, radial basis functions, wavelets and neural network models, and variants thereof (Poggio and Girosi 1990, Booker et al. 1999, Bazan and Russenschuck 2000, Björkman and Holmström 2000, Gutmann 2001, Jones 2001, Bazan et al. 2002, Regis and Shoemaker 2007a, Holmström 2008, Villemonteix et al. 2009).

While the mathematical model, training scheme and update procedures may differ amongst these methods, they actually share a common underpinning other than the key objective of providing an interpolation model over sparse scattered data that can be evaluated at a fraction of the cost in comparison to the real objective function (Fasshauer 2007). This is presented in the form of the response function realized, namely a linear combination of selected nonlinear processing units that enables a multi-variate nonlinear function to be modelled arbitrarily well under certain specified conditions (Powell 1992, Bishop 2004). For example, the kriging method uses uncertainty minimization measures for new sample selection, while one particular RBF scheme employs pseudo-optimization schemes to address the same issue (Björkman and Holmström 2000, Gutmann 2001). Similarly, while the function approximation capabilities of NNs are well reported in the literature, the architecture is not well defined a priori, as it is in the case of the RBF method, without further investigation (Powell 1987, Bishop 2004). This not only complicates NN procedures, but as a larger set of model parameters must necessarily be managed as part of the regression problem for training (as the first layer weights can vary), this often leads to models with poorer responses in comparison to the RBF scheme (in which the first layer weights can be deemed fixed and the parameters are obtained uniquely by linear inversion). The salient characteristics of RBF and NN are exploited in collective RBFNN methods (Bazan et al. 2002). However, these still tend to suffer from the design requirements imposed on NN-based models and the procedures necessary to mitigate them. It is worth noting that iterative adaptive procedures are often used to minimize the expected number of function evaluations further in pursuit of an optimal solution. The means to identify new sample candidates and resolve the appropriate balance between exploration and exploitation of the search space are method dependent, and often based on heuristics developed from empirical evidence. This includes the sampling conditions imposed to generate the initial model (Jones et al. 1998, Booker et al. 1999, Regis and Shoemaker 2007a, Kleijnen et al. 2010, Regis 2011).



This article presents a method to treat expensive black-box simulation-based nonlinear constrained optimization problems, including integer variables, using adaptive radial basis function (RBF) approximations. The article is laid out in the following manner. Firstly, the RBF method is presented, followed by the specific training procedure developed. Subsequently, the overall methodology is described, before various analytical and simulation-based test cases are presented, in which both the objective function and the nonlinear constraints applied are considered expensive to evaluate.

2. RBF method

An expensive function can be approximated using a radial basis function model and a dataset describing the system under investigation (Hardy 1971, Powell 1992, Fasshauer 2007). Conceptually, the RBF model comprises an input layer of nodes (equal to the dimensionality of the problem), a hidden layer of nodes (equal to the number of samples available) and a single output node (see Figure 1).

Figure 1. An RBF network model with input (green) and hidden (blue) node layers, and a single output node (orange).


The approximating function is a linear combination of nonlinear processing units, the radial basis functions. Thus, if M samples of the real function are obtained over the domain of interest (with jth data point Dj = [X F(X)], X ∈ R^N and F(X) ∈ R), then an RBF model is defined by the following:

    V(X) = Σ_{j=1}^{M} Cj φ(rj, pj),                        (1)

where Cj is the coefficient of the jth radial basis function φ(rj, pj), with radius rj = ‖X − Xj‖ and tuning parameter pj. Note that rj defines the Euclidean distance of a given point X from the centre Xj of the jth basis function, while pj is a measure of the spread or influence of that particular RBF. Note that as the RBF model is designed to have as many centres as known data points, the size of the hidden layer is known a priori. However, this needn't be a strict requirement if prototypical samples are extracted for use from the dataset using clustering methods.

If all centres adopt the same value for the tuning parameter pj, the number of unknowns is M + 1, and if they are allowed to vary, the number of unknowns is 2M (with M tuning parameters and M coefficients). For the former case, a simple linear relationship between the number of parameters and hidden nodes results. In addition, the coefficients Cj are chosen such that the approximating function has the same value as the modelled function at the known sample points (i.e. V(Xj) = F(Xj) for all Xj ∈ D) and are obtained from the solution of the following linear system if the tuning parameter is pre-defined:

    C = A⁻¹F,                        (2)

where C is the column of coefficients (∈ R^{M×1}), F is the column of known target function values (∈ R^{M×1}) and A ∈ R^{M×M} is a symmetric square matrix of radial basis functions, in which φij (the element in the ith row and jth column) is the radial basis function evaluated with centre Xj from centre Xi. Note that matrix A (with φij = φji) has the remarkable property of being conditionally positive definite depending on the choice of the RBF (Powell 1992, Fasshauer 2007). This ensures that the matrix is non-singular as long as no coincident points exist in the dataset and that the linear system can be solved to give a unique solution satisfying the interpolation conditions for almost all common RBF types (Hardy 1971, Powell 1992). From a network perspective, evaluation of the coefficients of the RBF model is equivalent to finding the weight from each hidden node to the output node (see Figure 1). The radial basis function φ (or, more generally, the node function) that is often used, owing to its cited robustness, is the multiquadric (MQ) RBF (Hardy 1971, Alotto et al. 1996, Rippa 1999), given by the following:

    Multiquadric:    φ(r, p) = √(r² + p²).                        (3)

Other common RBF types include Linear, Cubic, Gaussian and Inverse-multiquadric functions (Powell 1987). However, the MQ-RBF is used in this work due to its exhibited effectiveness in a range of problems, and particularly in the adaptive optimization procedure. Note that as the approximating function is closed-form, analytic and twice continuously differentiable (Hardy 1971, Powell 1992), the first- and second-order derivative information can be extracted for use by gradient-based solvers in the optimization procedure (see Appendix A) (Ambani and Rashid 2010). In practice, the use of first-order information with pseudo-Hessian update procedures is more robust than the second-order sensitivity information elicited from the model, and is therefore recommended for use in the optimization procedure.
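
To make the construction above concrete, the following minimal Python sketch (assuming only NumPy; the names fit_mq_rbf and evaluate_mq_rbf are illustrative and not from the article) builds the multiquadric matrix of Equation (3), solves the linear system of Equation (2) for the coefficients and evaluates the interpolant of Equation (1):

import numpy as np

def fit_mq_rbf(X, F, p):
    """Solve A C = F for the coefficients of an MQ-RBF interpolant.

    X : (M, N) array of sample points, F : (M,) array of function values,
    p : scalar multiquadric shift (tuning) parameter shared by all centres.
    """
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    A = np.sqrt(r**2 + p**2)                                    # multiquadric matrix, Eq. (3)
    C = np.linalg.solve(A, F)                                   # coefficients, Eq. (2)
    return C

def evaluate_mq_rbf(Xq, X, C, p):
    """Evaluate V(Xq) = sum_j Cj * sqrt(||Xq - Xj||^2 + p^2), Eq. (1)."""
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.sqrt(r**2 + p**2) @ C

# Usage: interpolate a toy 2-D function from 10 scattered samples.
rng = np.random.default_rng(0)
X = rng.uniform(size=(10, 2))
F = np.sin(3 * X[:, 0]) + X[:, 1]**2
C = fit_mq_rbf(X, F, p=1.0)
print(evaluate_mq_rbf(X, X, C, p=1.0) - F)   # ~0 at the training points

At the training points the interpolant reproduces the data to within round-off, which is the interpolation condition V(Xj) = F(Xj) stated above.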

The main advantages of an RBF approximation scheme include that the model coefficients can be obtained simply by linear inversion, the training scheme is fast with a low computational cost, and the function values are exact at the training points; as a result, good approximation models are often obtained. The disadvantages that exist include the selection of the radial basis function and its tuning parameter, singularity of the definitive linear system if duplicate (i.e. coincident) points are present in the dataset, and the possible effect of the dataset size on the inversion performance. The last point can be remedied through the use of an efficient linear solver, while the use of the MQ-RBF, the proposed training scheme (designed to establish a suitable value for the tuning parameter) and the addition of auxiliary search constraints help overcome the other issues identified.



The robust training scheme developed for the adaptive RBF method is presented next.

3. RBF training method

Nearly all radial basis function (φ) types are defined as a function of the distance r and a parameter p. The latter, which enables the influence area of the RBF to be tuned, can complicate matters as its choice is not straightforward. For the Gaussian RBF, this parameter is a measure of spread, whereas it is known as the shift parameter for the multiquadric RBF. In establishing an RBF model, one can either choose to use the same value of the tuning parameter (a non-stationary approach) or allow each and every RBF to take a particular value derived from suitable training (in a stationary approach) (Fasshauer 2007). In the stationary case, the radius of influence of each RBF is determined by some measure of the distance to its closest neighbour, while in the non-stationary case all the points are assumed to have a common radius of influence. Non-stationary RBFs are often favoured as they are simple to define and easy to train, without the additional complexity exhibited by stationary approaches, even if the choice of parameter is far from optimal. Stationary methods tend to be harder to train and exhibit greater variability in the response (Rashid 2009).

In this work, a non-stationary scheme is presented, in which the same p value is sought for each and every RBF in the model. However, setting a single p value is not completely trivial (Rashid 2009). Heuristic measures tend to exploit the density of samples in the dataset (Franke 1982) or exploit some mean distance measure between them (Hardy 1971). Such approaches are largely based on rule-of-thumb and are not guaranteed to yield good results. Alternatively, the network modelling approach comprises partitioning the dataset into a training and checking set, in which the former is used for modelling and the latter for validation purposes only (Bishop 2004). This, however, assumes that the dataset (D) is sufficiently large to provide a suitably sized sub-set of data for testing, which is not usually the case when dealing with expensive functions. Hence this scheme suffers when there is a limited number of samples, as is invariably the case when pursuing an adaptive scheme with an expensive objective function commencing from the minimum possible number of samples (e.g. using N + 1 samples for a linear approximation, where N is the model dimensionality). The leave-one-out cross-validation (LOOCV) scheme overcomes the limitation imposed by a lack of data by removing one sample point at a time and using that as a means for validation (Rippa 1999, Fasshauer and Zhang 2007, Kleijnen et al. 2010). The value of p minimizing a norm of the vector of error measures is selected. However, as the samples are not treated collectively, this approach can lead to anomalies in the response.
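
For context, a brute-force version of the LOOCV idea can be sketched as follows (Python with NumPy; a naive illustration only, refitting the model once per held-out sample over a hypothetical grid of p values, and not the training scheme proposed in this work):

import numpy as np

def loocv_error(X, F, p):
    errs = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        r = np.linalg.norm(X[keep][:, None] - X[keep][None, :], axis=-1)
        C = np.linalg.solve(np.sqrt(r**2 + p**2), F[keep])      # fit without sample i
        ri = np.linalg.norm(X[i] - X[keep], axis=-1)
        errs.append(np.sqrt(ri**2 + p**2) @ C - F[i])           # held-out residual
    return np.linalg.norm(errs)

X = np.random.default_rng(2).uniform(size=(12, 2))
F = np.cos(2 * X[:, 0]) * X[:, 1]
p_grid = [0.1, 0.5, 1.0, 2.0, 5.0]
print(min(p_grid, key=lambda p: loocv_error(X, F, p)))   # p with the smallest LOOCV error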

The response of an RBF method is determined generally by the number and location of the samples, and specifically by the value of the tuning parameter p selected and the weights of the resulting linear system (Rippa 1999). While the latter are obtained by linear inversion, the former tuning parameter must be assigned a priori, as discussed earlier. Empirical studies show that, particularly for the MQ RBF type, higher p values tend to produce smoother response surfaces, while lower values tend to give rise to sharper, more jagged responses (Ambani et al. 2009). Thus, it appears advantageous, in the interest of smoothness, to use higher values of p, while ensuring that the selected value of p is not so big as to make the RBF system ill-conditioned or nearly ill-conditioned. This will either break the linear system as the matrix becomes singular, or, even if it remains invertible, the solution obtained at the training points may be quite poor, and particularly so at all other points. With these conditions in mind, a training scheme is proposed that aims to establish the highest value of p such that the model error, the mis-match between the RBF approximation and the known samples, is within a certain tolerance. This training procedure, referred to as the HighP scheme, can be achieved in one of two ways: either by setting the p value directly according to a requirement imposed on the condition number of the RBF matrix, or by minimizing the mis-match function to within an upper bound on the desired level of model accuracy. In the first method, the condition number, defined as the ratio of the largest to the smallest eigenvalue, effectively defines the stability of the linear system and the results consequently obtained after inversion. A condition number of one indicates a stable system, while higher values tending towards infinity are indicative of a highly ill-conditioned, and possibly singular, matrix (Golub and Loan 1996). As the condition number is both problem and machine dependent, it can be difficult to assign a suitably relaxed value in order to obtain the desired value of the tuning parameter p. On the other hand, providing a relaxed upper bound on the training error is more readily manageable. However, while a specified condition number can be solved for with a root-finding procedure (as the relationship of the condition number with p is monotonically increasing), the same is not true for the model error with p (see Figure 2) (Rippa 1999). Thus, in the latter case, an iterative procedure is adopted in which the error measure is evaluated at an upper bound of p and slowly reduced by a factor (α = 0.9) until the upper acceptable error tolerance level (M^upr_err), specified as 3.5e−11, is met. That is, the highest value of p is obtained that ensures the sample points can be reproduced to within a specified error tolerance (as shown in Figure 2). Evidently, the p value obtained will indicate the expected condition number for the requirements imposed. This method (see Algorithm 4) is shown to work particularly well when a limited amount of data is available and is thus particularly suitable in the adaptive optimization procedure for expensive functions (Ambani et al. 2009). This is demonstrated in later sections.


Figure 2. Model error versus condition number and tuning parameter.


Set pmin = 1e−2, pmax = 1e3, α = 0.9
Set M^upr_err = 3.5e−11, ε = 1e12, p = pmax

while (ε > M^upr_err) evaluate:
    system matrix A
    coefficient array C                                            (4)
    model response V(X)
    error norm ε = ‖V(X) − F(X)‖2
    if (ε > M^upr_err): p = αp, continue
    else: popt = p, Copt = C, stop.


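
A compact Python rendering of scheme (4) is sketched below (NumPy assumed; starting from p = pmax reflects the description above of beginning at the upper bound, and the singular-matrix guard is an added safeguard rather than part of the published scheme):

import numpy as np

def high_p_training(X, F, p_min=1e-2, p_max=1e3, alpha=0.9, tol=3.5e-11):
    p = p_max
    while p >= p_min:
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        A = np.sqrt(r**2 + p**2)                     # multiquadric system matrix
        try:
            C = np.linalg.solve(A, F)                # coefficient array
            eps = np.linalg.norm(A @ C - F)          # error norm at the training points
        except np.linalg.LinAlgError:
            eps = np.inf                             # numerically singular: treat as failure
        if eps > tol:
            p *= alpha                               # reduce p and try again
        else:
            return p, C                              # highest acceptable p found
    raise RuntimeError("no p in [p_min, p_max] met the error tolerance")

X = np.random.default_rng(1).uniform(size=(8, 2))
F = X[:, 0]**2 - X[:, 1]
p_opt, C_opt = high_p_training(X, F)
print(p_opt)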

4. Problem definition

The optimization problem of interest is given by the following:

    min  F(X, Y)
    s.t. G(X, Y) ≤ 0
         H(X, Y) = 0
    with X ∈ R^N, Y ∈ N^M,                        (5)

where X is the vector of continuous variables and Y is the vector of integer (binary or discrete) variables. Also, G(X, Y) is the set of J inequality constraints and H(X, Y) is the set of K equality constraints. The constraints in problem (5) can be partitioned into inexpensive and expensive constraints depending on the computational cost necessary to evaluate them (Djikpesse et al. 2011). The former can be evaluated trivially (i.e. are based on analytical expressions), while the latter are simulation dependent and are therefore deemed costly to evaluate (e.g. depend on a computationally intensive reservoir simulation). Hence, this leads to the following definition:

    min  F(X, Y)
    s.t. GI(X, Y) ≤ 0
         GE(X, Y) ≤ 0
         HI(X, Y) = 0
         HE(X, Y) = 0
    with X ∈ R^N, Y ∈ N^M,                        (6)

where the subscripts I and E refer to inexpensive and expensive constraints, respectively. Note that the objective function F(X, Y) is necessarily considered expensive to evaluate in (5) and (6).

In the methodology developed, each expensive equality constraint is replaced by two associated inequality constraints using partial relaxation, as these are more easily managed by the solver. Thus, the following two expensive inequalities are defined for the original expensive equality constraint hi(X, Y):

    gi1(X, Y):  hi(X, Y) − ε ≤ 0
    gi2(X, Y):  −ε − hi(X, Y) ≤ 0                        (7)

with ε = 5e−5.
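
The relaxation of Equation (7) amounts to replacing each equality by a pair of inequality functions, as in the following illustrative Python helper (the name relax_equalities is hypothetical):

def relax_equalities(equality_funcs, eps=5e-5):
    """Return a list of inequality functions g(Z) <= 0 equivalent to |h(Z)| <= eps."""
    inequalities = []
    for h in equality_funcs:
        inequalities.append(lambda Z, h=h: h(Z) - eps)      # h(Z) <= eps
        inequalities.append(lambda Z, h=h: -eps - h(Z))     # h(Z) >= -eps
    return inequalities

# Usage: a single equality h(Z) = Z[0] + Z[1] - 1 = 0 becomes two inequalities.
gs = relax_equalities([lambda Z: Z[0] + Z[1] - 1.0])
print([g([0.5, 0.5]) for g in gs])   # both values <= 0, so the point is feasible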

Hence, the general problem concerns optimization of an expensive objective function with expensive and inexpensive inequality constraints, together with inexpensive equality constraints, depending on the problem specification, given by the following:

    min  F(X, Y)
    s.t. GI(X, Y) ≤ 0
         GE(X, Y) ≤ 0
         HI(X, Y) = 0
    with X ∈ R^N, Y ∈ N^M,                        (8)

where F(X, Y) and GE(X, Y) are the RBF approximations of the objective function and each of the expensive inequality constraints, respectively. GI(X, Y) and HI(X, Y) are the inexpensive inequality and equality constraints, respectively.


Note that the bound and integrality requirements are included in the set of inexpensive constraints, given by the following:

    X^l_i − X_i ≤ 0
    X_i − X^u_i ≤ 0        i ∈ {1, . . . , N}                        (9)

and

    Y_j ∈ {Y^l_j, . . . , Y^u_j}        j ∈ {1, . . . , M},    Y_j, Y^l_j, Y^u_j ∈ N,

where X^l_i and X^u_i are the lower and upper bounds of the ith continuous variable, and similarly, Y^l_j and Y^u_j are the lower and upper bounds of the jth integer variable, respectively. Note that, in the foregoing, the removal of the integer variables Y reduces the mixed-integer nonlinear program (MINLP) (5) to a standard nonlinear programming (NLP) problem. For convenience and generality, Z is defined as the set of all variables [X Y] in the following.

5. Methodology

The high-level adaptive procedure for expensive constrained optimization is shown in Figure 3 and is the same as that developed in earlier works for expensive function optimization (Jones et al. 1998, Booker et al. 1999, Regis and Shoemaker 2005). The differences result primarily from the procedures implemented in Steps 0–4. For example, the adaptive procedure can be initialized (Step 0) by evaluation of an initial sample set based on uniform, quasi-random or random sampling, or the use of Latin Hypercubes or Design of Experiments (Regis and Shoemaker 2005, 2007a, Kleijnen et al. 2010, Regis 2011). As these samples are independent, they can be evaluated in parallel, if the provision exists, giving rise to a tabular set comprising the sample point Z = [X Y], function value F and each of the nonlinear constraints G(Z) in the problem. Note that pre-existing model data could also be retrieved for use.

At the approximation stage (Step 1), an MQ-RBF proxy model is created for the objective function and for each expensive nonlinear constraint in the problem using the available model data.

Figure 3. High-level methodology.


Previous schemes have utilized kriging models, neural networks and various RBF models using cubic and thin-plate spline functions, amongst others (Bazan and Russenschuck 2000, Björkman and Holmström 2000, Gutmann 2001, Lebensztajn et al. 2004, Kleijnen et al. 2010, Regis 2011). Subsequently, these proxy models are used in the optimization step (Step 2) in place of the actual simulation-based model, with the use of an appropriate solver. In this work, a MINLP solver is employed to manage the problem defined in (8).

The optimal solution and some additional samples are evaluated (in parallel) using the real simulation model (Step 3). The convergence tests are made in Step 4 and all three conditions specified must be met. These are norms on the change in the best objective function value and the associated solution in search space over consecutive iterations, alongside the differences between the proxy and the actual model at the current iterate. Stopping conditions based on a no-progress count are also implemented (Rashid et al. 2009a). If the convergence or stopping conditions are not met, the database is updated and the procedure is repeated from Step 1 using all the available model data. Otherwise, the best known solution is returned as the answer to the optimization procedure.

The detailed steps of the methodology are presented in Table 1 (Rashid et al. 2009b). Firstly, the adaptive iterative procedure is initialized with a starting dataset based on N + 2 samples. These are selected to include the base point (at which all the variables take their lower values), the extreme point (at which all variables take their highest values) and the N points that result when one variable is set to its upper value from the base point. Note that with N = 2, this scheme is the same as corner point sampling and has been adopted to provide a minimum number of samples while ensuring a nonlinear approximation is obtained from the outset. Note also that if any point in the initial dataset is infeasible with respect to the inexpensive constraints present, it is replaced by a feasible sample generated by the solution of an auxiliary optimization problem (using a genetic algorithm) (Cetinkaya et al. 2009). That is, a constraint satisfaction problem is solved, composed of the set of all inexpensive constraints, and each infeasible sample is replaced by a feasible one. Hence, the points in the starting dataset are feasible with respect to the inexpensive constraints defined.
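
The initial design described above can be sketched as follows (Python/NumPy; illustrative only, and the feasibility repair via the auxiliary genetic-algorithm problem is not shown):

import numpy as np

def initial_samples(lower, upper):
    """Base point, extreme point, and N points with one variable at its upper bound."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    samples = [lower.copy(), upper.copy()]
    for i in range(n):
        z = lower.copy()
        z[i] = upper[i]            # one variable raised to its upper bound
        samples.append(z)
    return np.array(samples)       # shape (n + 2, n)

print(initial_samples([0.0, 0.0], [1.0, 2.0]))
# With N = 2 this reproduces corner-point sampling: (0,0), (1,2), (1,0), (0,2).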

The initial sample set is evaluated with calls to the actual simulation model. Note that, throughout, it is assumed that a single black-box (simulation) evaluation returns both the objective and the expensive nonlinear constraint values. Collectively, the data gathered are representative of the system under investigation and are stored in a tabular set [Z F(Z) GE(Z) GI(Z) HI(Z)]. These data are subsequently used to build RBF approximations of the desired expensive quantities using the proposed training scheme (4) and also to evaluate the following penalty function:

    P(Z) = F(Z) + λ [ Σ_{j=1}^{J} wj max(0, gj(Z))² + Σ_{k=1}^{K} wk hk(Z)² ],                        (10)

where J is the total number of inequality constraints (expensive and inexpensive) and K is the total number of inexpensive equality constraints. λ is the penalty multiplier, while wj and wk are the associated weights of the inequality and equality constraints, respectively. In this article, λ = 1e6, wj = 1 and wk = 1; these are invariant in the present scheme, but could be modified if necessary. The purpose of P(Z) is to provide a metric of comparison and it is not intended to be used as the basis of approximation. Where the latter was attempted, approximation of the penalty function gave a very poor performance, particularly for cases in which the constraints are active. This is due to the fact that an active constraint boundary is difficult to model and consequently is much less likely to be met (Käck 2004, Holmström and Quttineh 2008, Regis 2011). Hence, in this work, the penalty values are used for comparative purposes alone and an approximation of each expensive constraint function is made using multiquadric RBF models along with the expensive objective function. The RBF parameter values are established using the proposed HighP training scheme described previously, with pmax = 1e3, pmin = 1e−2 and reduction factor α = 0.9 (Ambani et al. 2009).
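
A direct transcription of Equation (10) into Python is sketched below for reference (NumPy assumed; the constraint values in the usage lines are made up for illustration):

import numpy as np

def penalty_metric(F, g_values, h_values, lam=1e6, wj=1.0, wk=1.0):
    """P = F + lam * ( sum_j wj*max(0, g_j)^2 + sum_k wk*h_k^2 ), Eq. (10)."""
    g = np.asarray(g_values, float)
    h = np.asarray(h_values, float)
    return F + lam * (np.sum(wj * np.maximum(0.0, g)**2) + np.sum(wk * h**2))

# A feasible sample (all g <= 0, h = 0) keeps its objective value; an infeasible
# one is heavily penalized and therefore ranks last.
print(penalty_metric(3.2, g_values=[-0.1, -2.0], h_values=[0.0]))   # 3.2
print(penalty_metric(1.0, g_values=[0.05], h_values=[0.0]))         # 1.0 + 1e6*0.0025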


Table 1. The adaptive RBF method.

Step Description

0A    Select initial sample set (e.g. N + 2 samples)
0B    if (nHEconsts > 0) replace each expensive equality constraint with two inequalities
0C    if (nGIconsts > 0 or nHIconsts > 0)
          Solve auxiliary constraint satisfaction problem
          Set feasible samples (Z)
0D    Evaluate samples using simulation and store model data: F(Z) GE(Z) GI(Z) HI(Z)
0E    Set iteration counter k = 1 and stopping criteria: ε1 = 1e−3, ε2 = 1e−2 and ε3 = 1e−2
      Set NoProgress = 0 and NoProgressMax ∈ [10, 50]
      Set Dcon = 0.2 (largest search space distance)

while (not converged)
1     Evaluate penalty metric P(Z) for each sample
      Establish best solution Zbest given ranked P(Z) values
2     Establish RBF model of objective function F(Z)
      Establish RBF models of inequality constraints G(Z)
3     Solve Local Search (LS) optimization problem (ZL)
      Solve Expansive Search (ES) optimization problem (ZE)
4     if (‖ZL − ZE‖ ≤ δ) select ZL (npts = 1)
      else select ZL and ZE (npts = 2)
5     Evaluate new sample(s) F(Z) GE(Z) GI(Z) HI(Z)
      Evaluate penalty metric for each new sample P(Z)
6     if (npts = 1) Znext = ZL
      else Znext = argmin{P(ZL), P(ZE)}
7     Update Zbest according to ranked penalty metric values
      Update NoProgress if best solution is unchanged
8     Evaluate convergence criteria:
          ‖Zbest_k − Zbest_{k−1}‖ ≤ ε1
          ‖F(Zbest_k) − F(Zbest_{k−1})‖ ≤ ε2
          ‖Fproxy(Z) − F(Z)‖ ≤ ε3 (RBF proxy versus actual value at the current iterate)
          NoProgress = NoProgressMax
9     if (not converged) Store new sample data
          Update λ, wj, wk, Dcon, itn and go to Step 1
      else Return best solution Zbest, F(Zbest), GE(Zbest), GI(Zbest), HI(Zbest) and P(Zbest)
          stop
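
The sketch below ties the main steps of Table 1 together for a purely continuous toy problem (Python with NumPy and SciPy). It is illustrative only: SciPy's SLSQP stands in for the MINLP solver, a fixed tuning parameter p replaces the HighP training, only the local-search subproblem (11) is solved, and the toy black_box function, stopping test and all names are assumptions rather than the authors' implementation.

import numpy as np
from scipy.optimize import minimize

def mq_fit(X, F, p=1.0):
    A = np.sqrt(np.sum((X[:, None] - X[None, :])**2, -1) + p**2)
    return np.linalg.solve(A, F)

def mq_eval(z, X, C, p=1.0):
    return np.sqrt(np.sum((z - X)**2, -1) + p**2) @ C

def black_box(z):                                   # stand-in for the expensive simulation
    f = (z[0] - 0.7)**2 + (z[1] - 0.2)**2           # objective
    g = 0.5 - z[0] - z[1]                           # expensive constraint, g(z) <= 0
    return f, g

lower, upper = np.zeros(2), np.ones(2)
X = np.array([lower, upper, [1, 0], [0, 1]], float)          # N + 2 initial samples
data = np.array([black_box(z) for z in X])                    # columns: F, g
d_min = 1e-3

for k in range(20):
    Cf = mq_fit(X, data[:, 0])                                # proxy of the objective
    Cg = mq_fit(X, data[:, 1])                                # proxy of the constraint
    cons = [
        {"type": "ineq", "fun": lambda z: -mq_eval(z, X, Cg)},                           # proxy g <= 0
        {"type": "ineq", "fun": lambda z: np.min(np.linalg.norm(X - z, axis=1)) - d_min} # exclusion
    ]
    best = X[np.argmin(data[:, 0] + 1e6 * np.maximum(0, data[:, 1])**2)]                 # penalty rank
    res = minimize(lambda z: mq_eval(z, X, Cf), best, method="SLSQP",
                   bounds=list(zip(lower, upper)), constraints=cons)
    f_new, g_new = black_box(res.x)                           # evaluate the real model
    if np.min(np.linalg.norm(X - res.x, axis=1)) > 1e-9:      # guard against duplicates
        X = np.vstack([X, res.x])
        data = np.vstack([data, [f_new, g_new]])
    if np.linalg.norm(res.x - best) < 1e-3:                   # simple stopping test
        break

print(best, black_box(best))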

The coefficient array of the RBF system of each modelled quantity is subsequently obtained and stored for use in the optimization step. In particular, two optimization problems are solved, defined by the following:

    (Local Search)      min  F(Z)
                        s.t. GI(Z) ≤ 0
                             GE(Z) ≤ 0
                             HI(Z) = 0
                             Dmin − d(Z) ≤ 0,                        (11)

    (Expansive Search)  min  F(Z)
                        s.t. GI(Z) ≤ 0
                             GE(Z) ≤ 0
                             HI(Z) = 0
                             Dcon(k) − d(Z) ≤ 0
                             Dmin ≤ Dcon(k) ≤ Dmax,                  (12)


where F(Z) and GE(Z) represent the RBF approximations of the expensive objective function and the set of expensive inequalities, respectively. Dcon(k) is the distance constraint value indicating the radius of exclusion specified at iteration k, with lower bound Dmin = 1e−3 and upper bound Dmax defined as 0.2 (maximum span of the normalized search space). d(Z) is the minimum distance of the solution point Z to any other point in the dataset. Thus, a single inexpensive nonlinear constraint is specified regardless of the number of samples.

The first problem (11) optimizes the approximation of the objective function F(Z) with one supplementary constraint to limit the distance of the solution point to any existing point by a minimum separation distance. This prevents the RBF matrix from becoming singular due to the existence of a coincident sample point in the dataset. In the second optimization problem (12), the same problem is solved but with a larger radius of exclusion. This serves the purpose of providing a more expansive search by ensuring the selection of points farther away from all other sample points, while also preventing the matrix from becoming singular, as previously. Note that, in the expansive search, the exclusion radius is made to decay towards the minimum acceptable separation distance with each iteration, at which juncture only the local search problem is considered. The decay schedule is exponentially decreasing, but could take any other form, including linear. In the extreme case that either posed problem has no feasible solution, the point that minimizes the level of constraint violation is instead returned. This can be construed as the minimum of a penalty function in which no feasible region exists.
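
The exclusion machinery can be illustrated as follows (Python/NumPy sketch; the exponential decay-rate constant is an assumption, as the article does not specify the schedule's parameters):

import numpy as np

def min_distance(z, samples):
    """d(Z): distance from candidate z to its nearest existing sample."""
    return np.min(np.linalg.norm(np.asarray(samples) - np.asarray(z), axis=1))

def exclusion_radius(k, d_min=1e-3, d_max=0.2, rate=0.3):
    """Dcon(k): exponential decay from d_max towards d_min with the iteration count."""
    return d_min + (d_max - d_min) * np.exp(-rate * k)

samples = [[0.0, 0.0], [1.0, 1.0]]
print(min_distance([0.1, 0.0], samples))                     # 0.1
print([round(exclusion_radius(k), 4) for k in range(5)])     # shrinking exclusion radius
# A candidate z is acceptable in the expansive search when
# exclusion_radius(k) - min_distance(z, samples) <= 0.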

The two optimization problems serve to identify the current minimum (11) and introduce new information from the exploratory search (12) concurrently at each iteration. This provision is managed by Regis and Shoemaker (2005) through the use of a similar exclusion requirement and a sequential cycling procedure. A single optimization problem is solved at each iteration in which the exclusion step-length parameter (pre-defined by a search pattern) is varied from 1 to 0 over a fixed number of steps and is applied to the maximum distance of a point to an existing sample. Thus, when the parameter is 1, an expansive search is made, and, when it reaches 0, a local search is made. The latter, however, does not prevent the same solution from being obtained and also complicates the notion of convergence, as the solution will vary depending on the cycle state.

In this work, problems (11) and (12) are solved using the open source mixed-integer nonlinear programming (MINLP) solver bonmin in order to accommodate the integer variables in the problem (Bonami et al. 2008, Bonami and Lee 2009, COIN-OR 2010). In addition, a multistart procedure is adopted to overcome the local nature of the gradient-based solver.

Note that while direct optimization can be applied to expensive black-box NLP problems with derivative-free or gradient-based methods, using numerical gradients if necessary, these can be computationally prohibitive (see Table 2). Also, in practice, a simulation-based MINLP problem can only be made tractable by way of a proxy model. This is because the MINLP solver expects derivatives of the objective and constraint functions (with respect to the variables in the problem) over the entire search space by continuous relaxation. This assumes that a designated integer variable can be treated continuously, and that the objective and constraint functions can be evaluated at those intermediate values. Clearly, this requirement may not be possible for simulation-based problems, e.g. the number of stages in a compressor can only be assigned positive integer values. However, a proxy model of the function does indeed lend itself to continuous relaxation, and is thus a necessary requirement. This is one of the motivations of this work. Now, a sequence of relaxed problems can be solved in which tighter (or stricter) constraints are added at each iteration of the MINLP solver using the branch-and-bound method (Floudas 1995, Bonami et al. 2008, Kleijnen et al. 2010). The solution finally returned is integer compliant as per the original problem posed (8).

The utility of the proposed approach is demonstrated by the results presented in Table 2. Various optimization schemes are applied to four test cases (see Appendix B) for comparative purposes (Michalewicz and Fogel 2002).


Table 2. Comparative test results.

                                    No. of evaluations of             Objective
Test   Solver         Proxy    F       ∇F      G       ∇G        Fopt

g04 (ncont = 5, mNLI = 6)
1      LEX            none     181     n/a     n/a     n/a       −30665.0
2      NLP (real)     none     11      12      12      11        −30665.5
3      NLP (num)      none     131     n/a     122     n/a       −30665.5
4      LEX            NN       155     n/a     n/a     n/a       −30654.3
5      LEX            RBF      23      n/a     n/a     n/a       −30665.6
6      NLP            NN       22      n/a     n/a     n/a       −30665.3
7      NLP            RBF      15      n/a     n/a     n/a       −30665.5

g04B (ncont = 2, nint = 3, mNLI = 6)
8      MINLP (real)   none     222     238     223     233       −30606.0
9      MINLP (num)    none     2603    n/a     2564    n/a       −30606.0
10     MINLP          NN       19      n/a     n/a     n/a       −30606.6
11     MINLP          RBF      17      n/a     n/a     n/a       −30606.5

g05 (ncont = 4, mLI = 2, mNLI = 6)
12     LEX            none     3843    n/a     n/a     n/a       5126.8
13     NLP (real)     none     9       10      10      9         5126.5
14     NLP (num)      none     89      n/a     82      n/a       5126.5
15     NLP            RBF      105     n/a     n/a     n/a       5126.5

g07 (ncont = 10, mLI = 3, mNLI = 5)
16     LEX            none     14575   n/a     n/a     n/a       24.3066
17     NLP (real)     none     16      17      17      16        24.3074
18     NLP (num)      none     356     n/a     337     n/a       24.3062
19     LEX            RBF      146     n/a     n/a     n/a       24.3212
20     NLP            RBF      83      n/a     n/a     n/a       24.3074

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; and mNLE = no. of nonlinear equalities. NLP and MINLP refer to the open source solver bonmin (Bonami et al. 2008). LEX refers to the derivative-free lexicographic method by Djikpesse et al. (2011). LEX results for g05 and g06 are the average values reported by Djikpesse et al. (2011). ‘(real)’ and ‘(num)’ refer to the use of real or numerically evaluated derivatives. For consistency, the proxy cases are initiated with N + 1 samples and expansive search is off.

In the table, Column 1 indicates the test number. Columns 2–3 indicate the solver and proxy used, if any. Columns 4–7 indicate the number of evaluations made of the actual objective function (F) and its derivative (∇F), and similarly the actual constraint set (G) and its derivative (∇G), respectively, as a measure of the cost of the procedure. The optimal solution is reported in the last column of the table. The solvers considered include the derivative-free downhill simplex method with a lexicographic penalty handling procedure (LEX) by Djikpesse et al. (2011) and the bonmin MINLP solver by Bonami et al. (2008). When no integer variables are assigned in the given problem, the latter is simply labelled NLP. The two proxy schemes considered include the RBF method of this work and the neural network scheme presented by Couët et al. (2010). Cases 1–7 concern problem g04, comprising five variables and six nonlinear inequality constraints (see Appendix B). Derivative-free optimization with LEX (Case 1) requires 181 objective function evaluations. NLP with real derivatives makes 11 objective and 12 constraint evaluations, respectively, and requires a similar number of gradient evaluations. NLP with numerical derivatives requires 131 objective and 122 constraint evaluations to reach the same solution. Clearly, without access to the actual derivatives, the procedure can be extremely costly if each evaluation of the objective and constraint set is considered expensive. Cases 4–7 demonstrate the computational advantage resulting from the use of NN and RBF proxy models. The NLP–RBF combination (Case 7) performs particularly well and, notably, nearly as effectively as the case with actual derivatives. However, this information is typically unavailable for the simulation-based black-box problems of interest. The MINLP problem (g04B) is examined in Cases 8–11. Case 8 shows the ideal case with derivative availability, while Case 9 shows the cost when numerical derivatives are used. The number of objective and constraint evaluations in the latter is significant, and especially costly if each is considered expensive to evaluate. Cases 10–11 indicate the results with NN and RBF proxy use, respectively. Similar results are demonstrated for problems g05 and g07 in the table. Clearly, the MINLP–RBF combination is of practical utility when dealing with simulation-based problems. The method is effective in comparison to derivative-free or gradient-based methods using numerical derivatives, and is also competitive with existing proxy-based schemes.



It is worth stressing that function approximation and optimization in a single step is fraught with complication and is not considered in this work. Such an approach depends on the number of samples selected and their location. Poor location selection or too few samples often leads to poor results, while dense sampling to ensure good function approximation is extremely costly, especially in high dimensions. Moreover, the approach is severely limited if expensive constraints are required to be resolved, the behaviour of which cannot be known a priori. Thus, an adaptive scheme commencing from a smaller number of samples, in which additional samples are added each iteration to provide pertinent new information, is more effective in practice and well established by past research (Booker et al. 1999, Jones 2001, Regis and Shoemaker 2005, Holmström 2008).

The results presented in Table 2 (and those to follow) can also be compared with the results presented by Regis (2011). This article compares a number of different optimization schemes and solvers on many of the same benchmark problems presented here (Michalewicz and Fogel 2002). The methods considered include a pattern search procedure, a genetic algorithm, MatLab®'s FMINCON solver, COBYLA, NOMAD-DACE and four particular constrained RBF proxy schemes. Unfortunately, only a qualitative assessment can be made as the results are presented in the article in the form of performance plots. None the less, comparing the number of evaluations required by the present RBF scheme to reach a complete and converged solution in each case suggests that the method is competitive in performance, while also able to tackle problems with expensive equality constraints and mixed-integer variables (as demonstrated for example by problems g05 and g04B in Table 2, respectively).

Returning to the adaptive procedure, the optimization step yields two solutions, a local one (ZL) and a more expansive one (ZE). If ZL and ZE are sufficiently far apart, given the minimum separation distance requirement (Dmin), two new solution candidates are obtained. Otherwise, only the local solution is returned for evaluation.

The new candidate points are evaluated to give the associated objective function and constraint values, respectively (see Step 5 in Table 1). It is assumed that a single black-box function evaluation returns all expensive quantities of interest. Next, the calculated penalty metric values are used to identify the best of the current iterate (Znext), given as argmin(P(ZL), P(ZE)). This solution is compared to the best known solution over all samples (Zbest) in the convergence step, and is updated accordingly. A no-progress counter is incremented if the best solution identified remains unchanged. Note that the best solution in the dataset, [Zbest Fbest GIbest GEbest HIbest Pbest], is by definition representative of the black-box simulation model as it is an actual sample point.

At the end of the iteration, the dataset [Z F(Z) GE(Z) GI(Z) HI(Z) P(Z)] is updated to include the new solutions. The penalty factor and constraint multipliers (λ, wj, wk) can also be updated if necessary, which additionally necessitates the clearance of the P(Z) array. The performance metrics are stored and the procedure repeats with re-evaluation of the penalty metric. However, if the convergence conditions have been met, based on an 'and' condition over the specified norms together with a limit on the no-progress count, the best solution identified (Zbest) is returned as shown in Table 1. The methodology is demonstrated on several analytical and simulation-based test cases in the next section.
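
The convergence test of Step 8 can be expressed compactly as below (illustrative Python; the argument names and the example values are assumptions, while the tolerances follow Table 1):

import numpy as np

def converged(z_best, z_prev, f_best, f_prev, f_proxy_at_best,
              no_progress, no_progress_max=25,
              eps1=1e-3, eps2=1e-2, eps3=1e-2):
    small_step   = np.linalg.norm(np.asarray(z_best) - np.asarray(z_prev)) <= eps1
    small_change = abs(f_best - f_prev) <= eps2
    good_proxy   = abs(f_proxy_at_best - f_best) <= eps3      # proxy matches the actual model
    # All three norm conditions must hold, or the no-progress limit must be reached.
    return (small_step and small_change and good_proxy) or no_progress >= no_progress_max

print(converged([0.5, 0.5], [0.5, 0.5005], -3.10, -3.105, -3.102, no_progress=4))  # True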


6. Test results

While the actual black-box function may be non-smooth, non-continuous and non-differentiable, the RBF proxy model provides a closed-form analytical representation that is smooth, continuous and differentiable (see Appendix A). Hence, well-established gradient-based methods can be applied with ease to solve the proxy-based constrained optimization problem, even though they may be limited for practical reasons on the actual problem. Accordingly, the de facto workhorse for constrained nonlinear problems, the Sequential Quadratic Programming (SQP) method, is employed, and, in addition, the application of a mixed-integer nonlinear programming (MINLP) solver is made possible, which otherwise would be clearly impractical for use with simulation-based models (Bonami et al. 2008, Rashid and Wilkinson 2010). Note that the MINLP solver uses an Interior Point method for solution of the NLP problem when no integrality requirements are imposed. Thus, this solver can effectively handle all possible cases and is recommended for this purpose. The use of these solvers is demonstrated, using the methodology described, on various analytical and simulation-based test cases.

6.1. Analytical problems

In this sub-section, results for various analytical benchmark test problems are presented (see Appendix B) (Manne 1986, Floudas and Pardalos 1990, Tawarmalani and Sahinidis 2001, Michalewicz and Fogel 2002). In Table 3, all the constraints are assumed simple (inexpensive) to evaluate, while in Table 4 the constraints are treated as expensive. The latter is in consideration of constraints that are simulation-dependent and will therefore be costly to evaluate. Note that the objective function is considered expensive to evaluate in all cases. In these tables, the first two columns indicate the test number and name. Columns 3 and 4 indicate the number of continuous and discrete variables, respectively. The number of linear and nonlinear inequality and equality constraints defined in the problem are given in Columns 5 to 8, respectively.

Table 3. Analytical problems with simple constraints.

Test                 Variables        Constraints                 Solution
No.   Name       ncont  nint  mLI  mNLI  mLE  mNLE        %OptGap   Fevals   Solver

1     Alan       4  4  5  2                                 0         47       m
2     ex1223a    3  4  5  4                                 0         25       m
3     g06        2  2                                       0         8        m
4     g06B       1  1  2                                    0         11       m
5     g09        7  4                                       0.04      78       m
6     g11        2  1                                       0         6        m
7     gbd        1  3  4                                    0         6        m
8     gear       4                                          9.6e−2    21       m
9     nvs03      2  1  1                                    0         11       m
10    rosenCU    2                                          0         41       m
11    rosenCC    2  1                                       0         29       m
12    rosenIU    1  1                                       0.6       35       m
13    rosenIC    1  1  1                                    0         43       m
14    TP1        2  3  3  2                                 0         11       m
15    g04        5  6                                       1.2e−4    22       m
16    g04B       2  3  6                                    0         17       m
17    TP3        5  4  2                                    0         18       m

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; and mNLE = no. of nonlinear equalities. %OptGap = percentage optimality gap; Fevals = no. of function evaluations; and m = MINLP solver. Note that a unity offset has been applied to problems with a known zero optimum (gear & rosenCU) to enable %OptGap evaluation.


Table 4. Analytical problems with expensive constraints.

Test                 Variables        Constraints                 Solution
No.   Name       ncont  nint  mLI  mNLI  mLE  mNLE        %OptGap   Fevals   Solver

1     g05        4  2  3                                    3.70e−5   38       m
2     g06        2  2                                       0         38       s
3     g08        2  2                                       0.07      55       m
4     g11        2  1                                       0         27       m
5     g04        5  6                                       1.1e−4    36       m

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; and mNLE = no. of nonlinear equalities. %OptGap = percentage optimality gap; Fevals = no. of function evaluations; m = MINLP solver; and s = SQP solver. Note that a single black-box function evaluation returns both the objective and constraint function values.

The results are presented in the last three columns, including the percentage optimality gap from the known solution (%OptGap) in Column 9, the number of actual black-box function evaluations required (Fevals) in Column 10 and the solver used in Column 11. Notably, the gradient-based MINLP solver does well to solve all the problems, with only a small optimality gap, and in only a fraction of the number of function evaluations that would be required by direct optimization approaches. Note that the number of function evaluations required for problem g06 increases from 8 (in Table 3) to 38 (in Table 4) when the constraints are treated as expensive, while still returning the expected solution. The same is true for problems g04 and g11. The success in solving these various analytical benchmark MINLP problems imparts confidence when dealing with the simulation-based problems, presented in the following sub-section, for which the optimal solution is not known.

6.2. Simulation-based problems

In this sub-section, the proposed methodology is demonstrated using a multi-phase flow simulator (PipeSim 2007) that is used to model gathering networks of 2, 4 and 26 interconnected wells, each with a single gas-lift valve and a single sub-surface choke for flow control management. For present purposes, details concerning fluid compositions and boundary conditions are omitted, though, in general, the wells use a black oil definition with varying water-cut and gas-to-oil ratio values. The 26-well network is shown in Figure 4.

The network models are considered for gas-lift allocation, in which a fixed amount of lift-gas is distributed over a number of wells in order to improve the overall production at the gathering sink (Brown 1982). The injection of lift-gas at high pressure inside the well-bore reduces the density of the fluid column, effectively lowering the bottom-hole pressure and increasing the pressure differential induced across the sandface (the connection point between the reservoir and the well), allowing more fluid to flow to the surface. However, too much lift-gas injection increases the frictional pressure drop and reduces the fluid production possible. Furthermore, as the wells are interconnected and other operating constraints may be imposed (e.g. owing to separator handling limits), an optimal solution of the resulting NLP is desired (Rashid 2010). Moreover, if chokes, which can be binary (off/on), discrete (fixed position) or continuous in nature, are used to adjust the well flow rates to meet the operating constraints, a MINLP problem invariably arises (Rashid et al. 2011). Thus, more generally, the solution of the MINLP problem (5) is desired.
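To make the structure of such a black-box MINLP concrete, the toy sketch below pairs continuous lift-gas rates with discrete choke settings around a stand-in simulator; the simulate function, the limits and the brute-force search are illustrative placeholders only (they are not the PipeSim interface, formulation (5) or the proposed adaptive method).

```python
import itertools

def simulate(lift_gas, chokes):
    """Stand-in for one expensive black-box simulation: returns the objective
    (e.g. liquid rate at the sink) and the expensive constraint values."""
    liquid = sum(10.0 * g / (1.0 + g) * c for g, c in zip(lift_gas, chokes))
    gas_at_sink = sum(0.5 * g for g in lift_gas)
    return liquid, [gas_at_sink]

def evaluate(lift_gas, chokes, total_gas=4.0, gas_limit=1.5):
    """Inexpensive linear budget check followed by the expensive nonlinear check."""
    if sum(lift_gas) > total_gas:            # linear inequality: lift-gas budget
        return False, None
    obj, (gas_at_sink,) = simulate(lift_gas, chokes)
    return gas_at_sink <= gas_limit, obj     # nonlinear inequality from the simulator

# Brute-force illustration on a 2-well toy case: lift-gas on a coarse grid,
# chokes restricted to the binary (0-1) settings discussed in the text.
best = None
grid = [i * 0.5 for i in range(9)]           # 0.0 .. 4.0 MMscfd per well
for g1, g2 in itertools.product(grid, repeat=2):
    for chokes in itertools.product([0, 1], repeat=2):
        ok, obj = evaluate([g1, g2], list(chokes))
        if ok and (best is None or obj > best[0]):
            best = (obj, (g1, g2), chokes)
print(best)
```

The enumeration above is only there to expose the mixed continuous/discrete decision space and the split between cheap and expensive constraints; the article's point is precisely that such exhaustive evaluation is unaffordable when each simulate call is costly.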

In the cases examined, the chokes are considered as having either 2 positions (the 0–1 case) or 9 positions (with 1/8 settings). If the choke is not used as a control variable, it is set to the fully open position (2 inch diameter) and assumed invariant over the course of the optimization.


Figure 4. Schematic of the 26-well gathering network. (Key: ◦ wells; • manifold nodes; | branches; ↑ transport line to the delivery sink ◦. The boundary conditions are all pressure specified.)

Table 5. PipeSim 26-variable model operating constraints.

No. Constraint Manifold Type Value Unit

1 GE1(X) mA Gas 20 MMscfd
2 GE2(X) mB Gas 18 MMscfd
3 GE3(X) mD Gas 12 MMscfd
4 GE4(X) mC Gas 15 MMscfd
5 GE5(X) mA Liquid 14,000 STB
6 GE6(X) mB Liquid 12,000 STB
7 GE7(X) mD Liquid 12,000 STB
8 GE8(X) mC Liquid 15,000 STB
9 GE9(X) Sink Liquid 41,000 STB
10 GE10(X) Sink Oil 36,000 STB
11 GE11(X) Sink Water 8,000 STB
12 GE12(X) Sink Gas 48 MMscfd
13 GI1(X) Network Lift-gas 45 MMscfd

Note: mA connects Wells W01–W03; mB connects Wells W04–W13; mD connects Wells W14–W19; mC connects Wells W20–W26. See Figure 4.

The available lift-gas quantity is 2, 4 and 45 MMscfd (Million standard cubic feet per day) for the 2-, 4- and 26-well cases, respectively. The additional nonlinear constraints imposed on the 26-well model are shown in Table 5. Here, 12 operating constraints are introduced at the manifold and sink level. In particular, gas and liquid constraints are imposed on each manifold (the internal nodes connecting wells), while gas, liquid, oil and water handling constraints are imposed at the sink level (the terminating node) shown in Figure 4. Each of the nonlinear constraints is simulation dependent and thus expensive to evaluate. Only the single linear inequality constraint, indicating the available lift-gas quantity, is inexpensive.


Table 6. Simulation-based problems with simple constraints.

Test Variables Constraints Solution

No. Name ncont nint mLI mNLI mLE mNLE Fopt Fevals Solver Notes

1 Net2 2 1 2835 8 m LIQ
2 Net4 4 1 5763 10 m LIQ
3 Net26 26 1 45911 54 s LIQ
4 Net26 26 1 38824 58 s OIL
5 Net26 26 1 2.5806 66 s PFT
6 Net4 4 5089 16 m LIQ 0–1
7 Net4 4 5096 39 m LIQ 1/8
8 Net26 8 41391 46 m LIQ 0–1
9 Net26 8 41399 45 m LIQ 1/8
10 Net26 8 35359 50 m OIL 0–1
11 Net26 8 35390 43 m OIL 1/8
12 Net26 8 2.447 49 m PFT 0–1
13 Net26 8 2.449 52 m PFT 1/8
14 Net26 8 8 1 42119 39 m LIQ 0–1
15 Net26 8 8 1 35909 52 m OIL 0–1
16 Net26 8 8 1 2.497 87 m PFT 0–1

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; and mNLE = no. of nonlinear equalities; Fopt = optimal objective function; Fevals = no. of simulation evaluations; m = MINLP solver; s = SQP solver; LIQ = liquid objective in STB; OIL = oil objective in STB; PFT = profit objective in $M; 0–1 = two-position choke; and 1/8 = nine discrete-position choke.

Lastly, the objective function concerns maximization of either the liquid or oil rate at the network sink and, for demonstrative purposes, the 26-well model is also optimized for profit using the following cost factors: oil price = 68.0 $/STB, gas price = 1.068 $/MMscfd, water cost = 8.568 $/STB and gas cost = 2.568 $/MMscfd.
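The exact profit expression is not reproduced here; the snippet below sketches one plausible revenue-minus-cost form built from the quoted factors, so the function name, the choice of rates that are netted and the $M (million-dollar) scaling are assumptions made for illustration only.

```python
def profit_musd(oil_stb, gas_mmscfd, water_stb, lift_gas_mmscfd):
    """Toy daily profit in $M from the quoted cost factors:
    oil 68.0 $/STB, gas 1.068 $/MMscfd, water 8.568 $/STB, lift-gas 2.568 $/MMscfd."""
    revenue = 68.0 * oil_stb + 1.068 * gas_mmscfd
    cost = 8.568 * water_stb + 2.568 * lift_gas_mmscfd
    return (revenue - cost) / 1.0e6

# Rates in the same ballpark as the Table 6 sink values give a profit of
# roughly 2.4 $M, which is consistent with the tabulated PFT results.
print(round(profit_musd(36000, 48, 6000, 45), 3))
```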

In Table 6, Tests 1–5 concern the gas-lift allocation problem with a single linear inequality. Tests 6–7 concern the four-well model with two-position chokes and, alternatively, with nine discrete-position chokes, respectively. The objective value is higher in the latter, with finer choke control permissible, indicating the value of multi-position chokes over simple block valves. Similar behaviour is noted in Tests 8–9 for the 26-well model (in which only Wells 1–8 are under control) for the liquid objective, and again in Tests 10–13 for the oil and profit objectives, respectively. Lastly, in Tests 14–16, the 26-well model is optimized for each objective using continuous gas-lift and a discrete two-position choke as variables in Wells 1–8. Notably, the introduction of lift-gas improves the overall objective values.

Table 7. Simulation-based problems with expensive constraints.

Test Variables Constraints Solution

No. Name ncont nint mLI mNLI mLE mNLE Fopt Fevals Solver Notes

1 Net26 26 1 12 40866 62 s LIQ
2 Net26 26 1 12 35858 60 s OIL
3 Net26 26 1 12 2.394 64 m PFT
4 Net26 8 8 1 12 41001 66 m LIQ 0–1
5 Net26 8 8 1 12 41001 66 m LIQ 1/8
6 Net26 8 8 1 12 35586 61 m OIL 0–1
7 Net26 8 8 1 12 35584 51 m OIL 1/8
8 Net26 8 8 1 12 2.475 69 m PFT 0–1
9 Net26 8 8 1 12 2.467 53 m PFT 1/8

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; mNLE = no. of nonlinear equalities; Fopt = optimal objective function; Fevals = no. of simulation evaluations; m = MINLP solver; s = SQP solver; LIQ = liquid objective in STB; OIL = oil objective in STB; PFT = profit objective in $M; 0–1 = two-position choke; and 1/8 = nine discrete-position choke. Note that a single black-box simulation evaluation returns both the objective and constraint function values.


In Tests 1–3 in Table 7, the gas-lift allocation problem is solved with the addition of the 12 expensive nonlinear operating constraints listed in Table 5. The objective values are lower (compared to Tests 3–5 in Table 6) in accordance with the constraints imposed, but the solutions are all constraint feasible. Tests 4–9 show results for the liquid, oil and profit objectives with continuous gas-lift injection and either two-position or nine-position chokes.

7. Conclusions

A methodology for treating mixed-integer nonlinear optimization problems with an expensive objective function together with expensive nonlinear equality and inequality constraints has been presented. Adaptive MQ-RBF techniques with a suitable non-stationary training method have been employed to provide approximations of the objective function and all expensive constraints. These are subsequently used in place of the actual simulation model in the optimization process to significantly reduce the overall computational burden.

Two optimization problems are solved at each iteration of the adaptive scheme. Both problems are posed with the inclusion of a constraint to ensure that a minimum separation distance (the radius of exclusion) is met between the solution and the existing samples in the dataset. This has the benefit of ensuring that the RBF matrix remains non-singular while, in addition, allowing a search for the minimum of the proxy-model with a small radius of exclusion or enabling an expansive search with a broader radius of exclusion. In the latter case, the radius of exclusion is made to decay exponentially with each iteration, but remains unchanged at its lowest value for the local search. As two solutions are obtained at each iteration, a Lagrangian penalty function is adopted as a measure of quality of the solutions obtained and also to identify the best solution in the dataset. The procedure repeats until the convergence conditions are met or an upper limit on the number of permissible iterations is reached.
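A minimal sketch of these two bookkeeping ingredients is given below, assuming a simple exponential decay law for the exclusion radius and a generic penalty-based merit value for ranking candidates; the decay rate, radius floor and weights are illustrative choices, not the article's settings.

```python
import math

def exclusion_radius(iteration, r0=0.5, r_min=0.05, decay=0.35):
    """Radius of exclusion for the expansive search: exponential decay towards a
    floor value; the local search keeps the floor value throughout."""
    return max(r_min, r0 * math.exp(-decay * iteration))

def penalty_merit(f, ineq, eq, rho=10.0):
    """Penalty-type merit used to rank candidate solutions: objective plus
    weighted violations, with inequalities written as g(x) <= 0 and equalities
    as h(x) = 0."""
    viol_i = sum(rho * max(0.0, g) for g in ineq)
    viol_e = sum(rho * abs(h) for h in eq)
    return f + viol_i + viol_e

print([round(exclusion_radius(k), 3) for k in range(5)])
print(penalty_merit(1.2, ineq=[-0.1, 0.02], eq=[0.0]))  # 1.2 + 10*0.02 = 1.4
```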

Various analytical and network simulation test cases comprising expensive constraints were presented. The known solutions to the analytical test cases were efficiently obtained, while demonstrably constraint-feasible, if not provably optimal, network simulation solutions were obtained. Notably, the procedure requires a greater amount of time for high-dimensional problems with many expensive constraints specified. This is the focus of future research.

In summary, the results obtained are promising and the adaptive RBF scheme is shown to be an effective tool for solving black-box simulation-dependent NLP and MINLP optimization problems with expensive constraints present.

Acknowledgement

The authors would like to thank Benoît Couët (Schlumberger-Doll Research) for his valued comments.

Notes

1. N + 1 is the minimum number of samples required to create a linear approximation.
2. The RBF models are constructed with normalized inputs and scaled target values.
3. bonmin is an open source MINLP solver from COmputational INfrastructure for Operations Research (COIN-OR).
4. These problems can be solved simultaneously if multi-core architecture is exploited, e.g. most computers are at least dual-core these days.
5. This is important for the convergence tests implemented, which compare the best solution of all samples (Zbest) to the best of the current iterate (Znext) in one of the metrics employed (Rashid et al. 2009a).
6. Refers to the FMINCON solver in MatLab® from Mathworks®.
7. Refers to the bonmin MINLP solver from COIN-OR (Bonami et al. 2008, COIN-OR 2010).


References

Alotto, P., et al., 1996. A multiquadrics-based algorithm for the acceleration of simulated annealing optimization procedures. IEEE Transactions on Magnetics, 32 (3), 1198–1201.
Ambani, S., Cetinkaya, E., and Rashid, K., 2009. Training of adaptive radial basis functions for black-box function optimization. Research Note OFSR-rn-2009-137-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
Ambani, S. and Rashid, K., 2010. Application of gradient information from network proxy models in black-box function optimization. Research Note OFSR-rn-2010-134-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
Avocet-IAM, 2007. Integrated asset modeller user manual. Technical report. Schlumberger Information Solutions, Calgary, Canada.
Bazan, M., Aleksa, M., and Russenschuck, S., 2002. An improved method using radial basis function neural network to speed up optimization algorithms. IEEE Transactions on Magnetics, 38 (2), 1081–1084.
Bazan, M. and Russenschuck, S., 2000. Using neural networks to speed up optimization algorithms. European Physical Journal – Applied Physics, 12 (2), 109–115.
Bishop, C., 2004. Neural networks for pattern recognition. Oxford: Oxford University Press.
Björkman, M. and Holmström, K., 2000. Global optimization of costly nonconvex functions using radial basis functions. Optimization and Engineering, 1 (4), 373–397.
Bonami, P., et al., 2008. An algorithmic framework for convex mixed integer nonlinear programs. Discrete Optimization, 5 (2), 186–204.
Bonami, P. and Lee, J., 2009. bonmin v1.3 user's manual, IBM.
Booker, A., et al., 1999. A rigorous framework for optimization of expensive functions by surrogates. Structural Optimization, 17 (1), 1–13.
Brown, K., 1982. Overview of artificial lift systems. Journal of Petroleum Technology, 2384–2396.
Cetinkaya, E., Ambani, S., and Rashid, K., 2009. Optimization of expensive black-box functions using RBF approximations with mixed-integer and continuous variables. Research Note OFSR-rn-2009-146-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
COIN-OR, 2010. Computational infrastructure for operations research. Available from: www.coin-or.org [Accessed 17 April 2012].
Couët, B., et al., 2010. Production optimization through integrated asset modeling optimization (Paper number 135901). In: SPE production and operations conference and exhibition, 8–10 June, Tunis, Tunisia. Houston, TX: Society of Petroleum Engineers.
Djikpesse, H., Couët, B., and Wilkinson, D., 2011. A practical sequential approach for derivative-free black-box constrained optimization. Engineering Optimization, 43 (7), 721–739.
Eclipse, 2006. Reservoir simulator reference manual. Technical report. Schlumberger Information Solutions, Abingdon, UK.
Fasshauer, G.E., 2007. Meshfree approximation methods with MatLab. Interdisciplinary mathematical sciences, Vol. 6. Singapore: World Scientific.
Fasshauer, G. and Zhang, J., 2007. On choosing optimal shape parameters for RBF approximation. Numerical Algorithms, 45 (1–4), 345–368.
Floudas, C., 1995. Nonlinear and mixed-integer optimization: fundamentals and applications. Oxford: Oxford University Press.
Floudas, C. and Pardalos, P., 1990. A collection of test problems for constrained global optimization algorithms. Berlin: Springer-Verlag.
Franke, R., 1982. Scattered data interpolation: tests of some methods. Mathematics and Computation, 38 (157), 181–200.
Golub, G. and Loan, C.V., 1996. Matrix computations. 3rd ed. Baltimore, MD: Johns Hopkins University Press.
Gutmann, H., 2001. A radial basis function method for global optimization. Journal of Global Optimization, 19 (3), 201–227.
Hardy, R., 1971. Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76 (8), 1905–1910.
Holmström, K., 2008. An adaptive radial basis algorithm (ARBF) for expensive black-box global optimization. Journal of Global Optimization, 41 (3), 447–464.
Holmström, K. and Quttineh, N., 2008. An adaptive radial basis algorithm (ARBF) for expensive black-box mixed-integer constrained global optimization. Optimization and Engineering, 9 (4), 311–339.
Ishikawa, T. and Matsunami, M., 1997. An optimization method based on radial basis function. IEEE Transactions on Magnetics, 33 (2), 1868–1871.
Jones, D., 2001. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21 (4), 345–383.
Jones, D., Schonlau, M., and Welch, W., 1998. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13 (4), 455–492.
Käck, J., 2004. Constrained global optimization with radial basis functions. Research report MdH-IMa-2004. Department of Mathematics and Physics, Mälardalen University, Sweden.
Kleijnen, J., van Beers, W., and van Nieuwenhuyse, I., 2010. Constrained optimization in expensive simulation: novel approach. European Journal of Operational Research, 202 (1), 164–174.
Lebensztajn, L., et al., 2004. Kriging: a useful tool for electromagnetic device optimization. IEEE Transactions on Magnetics, 40 (2), 1196–1199.


Manne, A.S., 1986. GAMS/MINOS: three examples. Technical report. Department of Operations Research, Stanford University, CA.
McDonald, D., et al., 2007. Global and local optimization using radial basis function response surface models. Applied Mathematical Modelling, 31 (10), 2095–2110.
Michalewicz, Z. and Fogel, D., 2002. How to solve it: modern heuristics. 3rd ed. Berlin, Germany: Springer-Verlag.
PipeSim, 2007. Network simulator user manual. Technical report. Schlumberger Information Solutions, Abingdon, UK.
Poggio, T. and Girosi, F., 1990. Networks for approximation and learning. Proceedings of the Institute of Electrical and Electronics Engineers, 78 (9), 1481–1497.
Powell, M., 1987. Radial basis functions for multivariable interpolation: a review of algorithms for approximation. 3rd ed. New York: Clarendon Press.
Powell, M.J.D., 1992. The theory of radial basis function approximation in 1990. In: W.A. Light, ed. Advances in numerical analysis II: wavelets, subdivisions algorithms and radial basis functions. Oxford: Oxford University Press, 105–210.
Rashid, K., 2009. A comparison of two training schemes for adaptive radial basis function model design. Research Note OFSR-rn-2009-155-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
Rashid, K., 2010. Optimal allocation procedure for gas-lift optimization. Industrial & Engineering Chemistry Research, 49 (5), 2286–2294.
Rashid, K., Ambani, S., and Cetinkaya, E., 2009. Convergence tests for adaptive RBF model optimization. Technical report. Schlumberger-Doll Research Center, Cambridge, MA.
Rashid, K., Ambani, S., and Cetinkaya, E., 2009. An adaptive RBF method for expensive simulation based constraints. Research Note OFSR-rn-2009-149-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
Rashid, K., Demirel, S., and Couët, B., 2011. Gas-lift optimization with choke control using a mixed-integer nonlinear formulation. Industrial & Engineering Chemistry Research, 50 (5), 2971–2980.
Rashid, K. and Wilkinson, D., 2010. Implementation and application of bonmin: an open source mixed-integer nonlinear programming solver. Research Note OFSR-rn-2010-135-MM-C. Schlumberger-Doll Research Center, Cambridge, MA.
Regis, R., 2011. Stochastic radial basis function algorithms for large scale optimization involving expensive black-box objective and constraint functions. Computers and Operations Research, 38 (5), 837–853.
Regis, R. and Shoemaker, C., 2005. Constrained global optimization of expensive black box functions using radial basis functions. Journal of Global Optimization, 31 (1), 153–171.
Regis, R. and Shoemaker, C., 2007. Improved strategies for radial basis function methods for global optimization. Journal of Global Optimization, 37 (1), 113–135.
Regis, R. and Shoemaker, C., 2007. A stochastic radial basis function method for the global optimization of expensive functions. INFORMS Journal on Computing, 19 (4), 497–509.
Rippa, S., 1999. An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Advances in Computational Mathematics, 11 (2–3), 193–210.
Tawarmalani, M. and Sahinidis, N., 2001. Exact algorithms for global optimization of mixed-integer nonlinear programs (Chap. 2). In: P. Pardalos and H. Romeijn, eds. Handbook of global optimization. Vol. 2. Dordrecht: Kluwer Academic.
Villemonteix, J., Vazquez, E., and Walter, E., 2009. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44 (4), 509–534.

Appendix A

The partial derivatives of a trained RBF model (1) are given by the following:

\[
\frac{\partial V(X)}{\partial x_i} \;=\; \sum_{j=1}^{M} C_j\, \frac{\partial \Phi(r_j, p_j)}{\partial r_j}\, \frac{\partial r_j}{\partial x_i}, \tag{A1}
\]

where x_i is the ith component of point X and r_j is its distance from the jth centre X_j. Thus

\[
\frac{\partial r_j}{\partial x_i} \;=\; \frac{x_i - x_i^{\,j}}{r_j} \tag{A2}
\]

and the derivative of the multiquadric RBF (3), in particular, is given by

\[
\frac{\partial \Phi(r_j, p_j)}{\partial r_j} \;=\; \frac{r_j}{\left(r_j^2 + p_j^2\right)^{1/2}}. \tag{A3}
\]

Then, the first-order derivatives of the model are given by

\[
\frac{\partial V(X)}{\partial x_i} \;=\; \sum_{j=1}^{M} \frac{C_j\left(x_i - x_i^{\,j}\right)}{\left(r_j^2 + p_j^2\right)^{1/2}} \tag{A4}
\]


and the second-order information by the following (Note 7):

\[
\frac{\partial^2 V(X)}{\partial x_i \,\partial x_k} \;=\; \sum_{j=1}^{M} \frac{C_j\left(x_i^{\,j} - x_i\right)\left(x_k - x_k^{\,j}\right)}{\left(r_j^2 + p_j^2\right)^{3/2}}. \tag{A5}
\]

Although the denominator in (A2) and the numerator in (A3) cancel in the multiquadric case, ∂r_j/∂x_i is not defined when X is the same as a training sample. Hence, the derivative of the RBF model is not defined at the given training points. This problem can be avoided by setting r_j to a very small value (say 1e−8) when the derivative is to be evaluated at a training point, which can be construed as enforcing a tiny separation requirement around an existing centre. However, for the multiquadric case, it is evident that the partial derivative is well-defined as long as p is greater than zero. This requirement is implicitly met by the proposed training scheme owing to the lower bound imposed (with pmin > 0) (Ambani et al. 2009). Lastly, note that, for the present purposes, it is assumed that all p_j take the same value.
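As a concrete illustration, the short sketch below evaluates (A4) and (A5) (the latter for the mixed case i ≠ k) for a multiquadric model with a single shape parameter p, including the small-distance safeguard described above; the array-based interface is an assumed convenience, not code from the article.

```python
import numpy as np

def mq_rbf_gradient(x, centres, coeffs, p, eps=1e-8):
    """First derivatives (A4) of a multiquadric RBF model V(X) = sum_j C_j Phi(r_j, p),
    with r_j nudged away from zero when X coincides with a training centre."""
    d = x - centres                           # rows hold x_i - x_i^j for each centre j
    r2 = np.maximum(np.sum(d * d, axis=1), eps**2)
    w = coeffs / np.sqrt(r2 + p**2)           # C_j / (r_j^2 + p^2)^(1/2)
    return d.T @ w                            # gradient vector of length N

def mq_rbf_cross_second(x, centres, coeffs, p, i, k, eps=1e-8):
    """Cross second derivative (A5) with respect to x_i and x_k (i != k)."""
    d = x - centres
    r2 = np.maximum(np.sum(d * d, axis=1), eps**2)
    return np.sum(coeffs * (-d[:, i]) * d[:, k] / (r2 + p**2) ** 1.5)

# Tiny usage example with three random centres in two dimensions.
rng = np.random.default_rng(0)
X, C = rng.random((3, 2)), rng.random(3)
print(mq_rbf_gradient(np.array([0.3, 0.7]), X, C, p=0.5))
print(mq_rbf_cross_second(np.array([0.3, 0.7]), X, C, p=0.5, i=0, k=1))
```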

Appendix B Test problems

RosenCC

\[
\min\; F(X) = (1 - x_1)^2 + 100\left(x_2 - x_1^2\right)^2
\]
\[
\text{s.t.}\quad \sqrt{x_1^2 + x_2^2} - 1 \le 0, \qquad -2 \le x_i \le 2,\quad x_i \in \mathbb{R},\quad i \in \{1, 2\}
\]
\[
X_{\mathrm{opt}} = [0.788 \;\; 0.616], \qquad F_{\mathrm{opt}} = 0.046
\]

Table B1. Test case description.

Test Variables Constraints
Name ncont nint mLI mNLI mLE mNLE Source

Alan 4 4 5 2 Manne (1986)
ex1223a 3 4 5 4 Floudas and Pardalos (1990)
g04 5 6 Michalewicz and Fogel (2002)
g04B 2 3 6 g04 variant
g05 4 2 3 Michalewicz and Fogel (2002)
g06 2 2 Michalewicz and Fogel (2002)
g06B 1 1 2 g06 variant
g07 10 3 5 Michalewicz and Fogel (2002)
g08 2 2 Michalewicz and Fogel (2002)
g09 7 4 Michalewicz and Fogel (2002)
g11 2 1 Michalewicz and Fogel (2002)
gbd 1 3 4 MINOPT Model Library¹
gear 4 GAMS Model Library²
nvs03 2 1 1 Tawarmalani and Sahinidis (2001)
rosenCU 2 Rosenbrock Test Function
rosenCC 2 1 Rosenbrock variant (see below)
rosenIU 1 1 Rosenbrock variant (see below)
rosenIC 1 1 1 Rosenbrock variant (see below)
TP1 2 3 3 2 Floudas and Pardalos (1990)
TP3 5 4 2 Floudas and Pardalos (1990)
Net2 2 1 Two-well network model
Net4 4 1 Four-well network model
Net4 4 Net4 variant
Net26 26 1 26-well network model
Net26 8 Net26 variant
Net26 8 8 1 Net26 variant
Net26 26 1 12 Net26 variant
Net26 8 8 1 12 Net26 variant

Key: ncont = no. of continuous variables; nint = no. of integer variables; mLI = no. of linear inequalities; mNLI = no. of nonlinear inequalities; mLE = no. of linear equalities; and mNLE = no. of nonlinear equalities. ¹MINOPT Model Library (C.A. Floudas & C.A. Schweiger). ²GAMS Model Library (http://www.gamsworld.org/minlp/minlplib/gear.htm).


RosenIU

\[
\min\; F(X) = (1 - x_1)^2 + 100\left(x_2 - x_1^2\right)^2
\]
\[
\text{s.t.}\quad -2 \le x_i \le 2,\quad i \in \{1, 2\}; \qquad x_1 \in \mathbb{R},\quad x_2 \in \{-2, -1.6, -1.2, \ldots, 1.6, 2.0\}
\]
\[
X_{\mathrm{opt}} = [1.095 \;\; 1.2], \qquad F_{\mathrm{opt}} = 0.009
\]

RosenIC

\[
\min\; F(X) = (1 - x_1)^2 + 100\left(x_2 - x_1^2\right)^2
\]
\[
\text{s.t.}\quad \sqrt{x_1^2 + x_2^2} - 1 \le 0, \qquad -2 \le x_i \le 2,\quad i \in \{1, 2\}; \qquad x_1 \in \mathbb{R},\quad x_2 \in \{-2, -1.6, -1.2, \ldots, 1.6, 2.0\}
\]
\[
X_{\mathrm{opt}} = [0.6348 \;\; 0.4], \qquad F_{\mathrm{opt}} = 0.134
\]
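The listed optima for the Rosenbrock variants can be sanity-checked numerically; the short script below evaluates the objective and the unit-circle constraint at the tabulated points (a verification aid under the stated problem definitions).

```python
import math

def rosenbrock(x1, x2):
    """Rosenbrock objective used by all rosen* variants."""
    return (1 - x1) ** 2 + 100 * (x2 - x1 ** 2) ** 2

def circle(x1, x2):
    """Unit-circle constraint of rosenCC and rosenIC; feasible when <= 0."""
    return math.hypot(x1, x2) - 1.0

points = {"rosenCC": (0.788, 0.616), "rosenIU": (1.095, 1.2), "rosenIC": (0.6348, 0.4)}
for name, (x1, x2) in points.items():
    print(name, round(rosenbrock(x1, x2), 3), round(circle(x1, x2), 3))
# Objective values agree with the tabulated Fopt to within the rounding of Xopt,
# and rosenCC/rosenIC satisfy the circle constraint (rosenIU is unconstrained).
```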
