
Mathematical Programming

Models and Algorithms

for Engineering Design Optimization ?

J. Herskovits

COPPE - Federal University of Rio de Janeiro, Mechanical Engineering Program

Caixa Postal 68503, 21945 970 Rio de Janeiro, Brazil

Abstract

Mathematical Programming provides general tools for Engineering Design Optimization. We present numerical models for Simultaneous Analysis and Design Optimization (SAND) and Multidisciplinary Design Optimization (MDO), represented by Mathematical Programs, together with numerical techniques to solve these models. These techniques are based on the Feasible Arc Interior Point Algorithm (FAIPA) for Nonlinear Constrained Optimization. Even though MDO leads to very large optimization problems, our approach considerably reduces the computational effort. Several tools for very large problems are also presented. The present approach is robust and efficient for real industrial applications and can easily interact with existing engineering simulation codes.

Key words: Engineering Design Optimization, MDO - Multidisciplinary Design Optimization, SAND - Simultaneous Analysis and Design, Nonlinear Programming, Interior Point Algorithms

1 Introduction

The Feasible Arc Interior Point Algorithm, FAIPA, is a new technique for nonlinear inequality and equality constrained optimization [28]. FAIPA requires an initial point in the interior of the inequality constraints and generates a sequence of interior points. When the problem has only inequality constraints, the objective function is reduced at each iteration. An auxiliary potential function is employed when there are also equality constraints.

? This research was partially supported by CNPq, the Brazilian Research Council, and FAPERJ, the Rio de Janeiro State Research Council.
Email address: [email protected] (J. Herskovits). URL: www.optimize.ufrj.br (J. Herskovits).

Preprint submitted to Elsevier Science, 9 February 2004

The fact that FAIPA generates interior points, even when the constraints are nonlinear, makes it an efficient tool for engineering design optimization, where function evaluations are in general very expensive. Since any intermediate design can be employed, the iterations can be stopped when the objective reduction per iteration becomes small enough. Interior point algorithms are essential to solve problems whose objective function or constraints are not defined at infeasible points. This occurs in several applications, in particular size and shape structural optimization. When applying interior point algorithms to real-time process optimization, since feasibility is maintained and the cost reduced, the controls can be activated at each iteration [10,11].

FAIPA, which is an extension of the Feasible Directions Interior Point Algorithm [17-19,21,42], integrates ideas coming from modern Interior Point Algorithms for Linear Programming with Feasible Directions Methods. At each point, FAIPA defines a "Feasible Descent Arc". It then finds on the arc a new interior point with a lower objective. Newton, quasi-Newton and first order versions of FAIPA can be obtained. FAIPA is supported by strong theoretical results. In particular, the search along an arc ensures superlinear convergence for the quasi-Newton version, even when there are highly nonlinear constraints, avoiding the so-called "Maratos effect" [38]. The present method, which is simple to code, does not require the solution of quadratic programs, and it is neither a penalty nor a barrier method. It merely requires the solution of three linear systems with the same matrix per iteration. Several practical applications of the present and previous versions of FAIPA, as well as numerous numerical results, show that the present technique is robust and efficient for engineering design optimization [1,2,4-8,24,25,34,36,47]. A complete description of the application of FAIPA to shape optimization in the aeronautics industry can be found in the book by Laporte and Le Tallec [33]. The present approach was also applied to stress analysis problems involving mathematical programming, such as contact problems [3,35,53], and nonlinear limit analysis [55]. Algorithms for Nonlinear Least Squares Problems and for Linear and Convex Quadratic Programming are described in [23,51].

The classical model for design optimization requires, at each iteration, the solution of the state equation. When this equation is solved iteratively, the whole process can be computationally very expensive. The SAND technique solves the state equation and the optimization problem simultaneously [26]. For a given initial set of design variables and state variables, the optimal design and the corresponding state variables are obtained iteratively. We remark that the intermediate state variables do not necessarily satisfy the state equations.


Multidisciplinary Design Optimization, MDO, "can be described as a methodology for design of complex engineering systems that are governed by mutually interacting physical phenomena and made up of distinct interacting subsystems" [48].

Modern design techniques require numerical models of each part of the system and of each interacting physical phenomenon. These models were generally developed independently, as were the simulation codes based on them. From a practical point of view, to be successful, MDO must be based on existing codes as they are. It is not reasonable to ask engineers and scientists working in the different disciplines to modify their mathematical and numerical models and the corresponding computer codes to adapt them to MDO.

MDO problems are naturally very large. They normally deal with a large number of design variables and include the state variables and constraints coming from all disciplines, as well as the interactions between disciplines. Several techniques were developed to overcome this difficulty [12,15,30,32,46,50,54]. Most of them try to decompose the problem into smaller subproblems or to work with reduced models for analysis and/or optimization. A recent survey can be found in [49].

FAIPA MDO is a numerical optimization algorithm for MDO that works with a model considering the complete problem, without reductions, decompositions or simplifications. This goal is very ambitious given the size and complexity of the problems, but it can be a way to obtain robust and efficient tools for MDO [27].

Our model works with linking variables and equality constraints to introduce the interaction between disciplines. State equations can be treated implicitly, as in the classical optimization model, or, alternatively, included in the mathematical program as in SAND optimization.

FAIPA SAND and FAIPA MDO reduce the state variables and the equality constraints given by the state equations. An original point of our technique is that these reductions are carried out without the need to solve the state equations. The formulation requires the solution of systems of equations given by a linearized equilibrium equation, which can be performed very efficiently by employing the solvers of engineering simulation codes.

FAIPA includes several techniques for very large problems, such as an extension to constrained optimization of the Limited Memory quasi-Newton Method and iterative numerical techniques for the solution of the internal linear systems of FAIPA.

In the next section, Mathematical Programming models for classical engineering design optimization and for SAND optimization are described. We also introduce our model for MDO. The basic ideas involved in FAIPA are described in Section 3. The subsequent sections are devoted to line search techniques, procedures to solve the internal linear systems of FAIPA, and First Order, Newton and quasi-Newton versions of FAIPA. In Section 7 a basic version of FAIPA is stated. Sections 8 and 9 are dedicated to the FAIPA versions for SAND and MDO, respectively.

2 Numerical Models for Engineering Design Optimization

We consider the optimal design of engineering systems described by a state equation e(x, u) = 0, e ∈ ℝ^r, where u ∈ ℝ^r represents the state variables. In most applications of structural optimization, the state variables are the nodal displacements and the state equation is given by the equilibrium. The state equation also depends on the design variables vector x ∈ ℝ^n, whose components are the unknowns of the design problem.

The Optimal Design Problem consists in finding the value of x that minimizes a cost function f(x, u(x)) subject to inequality constraints g(x, u(x)) ≤ 0 and equality constraints h(x, u(x)) = 0, where g ∈ ℝ^m, h ∈ ℝ^p, and u(x) satisfies e(x, u(x)) = 0. This problem is represented by the following Mathematical Program:

    min_x  f(x, u(x))
    s. t.  g(x, u(x)) ≤ 0        (1)
           h(x, u(x)) = 0

We assume that f, g, h and e are continuous, as well as their first derivatives.

In engineering applications, Problem (1) is a Nonlinear Program and is solved by iterating on the design variables. At each iteration, the state equation must be solved and the sensitivity of the state variables must be computed for the current design.
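To make the nested process concrete, the sketch below iterates on a single design variable for a hypothetical toy problem: the state equation k(x)·u = p is solved at every design iteration, and the design sensitivity is obtained from the linearized state equation by the chain rule. The problem, names and step size are illustrative assumptions, not taken from the paper.

```python
# Hypothetical toy problem (illustrative, not from the paper): the design
# variable x scales a stiffness k(x) = x, the state equation is k(x)*u = p,
# and the cost f(x, u) = x + u**2 mimics weight plus a compliance-like term.
p = 1.0

def solve_state(x):
    # Solve the state equation k(x) * u = p for the current design.
    return p / x

def total_derivative(x, u):
    # Chain rule: df/dx = 1 + 2*u*du/dx, with du/dx obtained from the
    # linearized state equation x*du/dx + u = 0, i.e. du/dx = -u/x.
    return 1.0 + 2.0 * u * (-u / x)

x = 1.0
for _ in range(200):            # iterations on the design variable
    u = solve_state(x)          # state equation solved at every iteration
    x -= 0.1 * total_derivative(x, u)

# Analytic optimum of f(x) = x + (p/x)**2 is x* = (2*p**2)**(1/3).
x_star = (2.0 * p ** 2) ** (1.0 / 3.0)
```

Each design iteration pays for one state solve plus one sensitivity evaluation, which is the cost structure that the SAND model below avoids by treating the state equation as additional equality constraints.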

There is an extensive literature on numerical methods for Problem (1). An engineering view is given in [13,16,31,52] and a mathematical approach in [9,29,37,40,43]. A condensed description of nonlinear optimization methods can be found in [20].

To solve the analysis and optimization problems simultaneously, the state variables are included among the unknowns of the optimization problem and the state equation is considered as a set of additional equality constraints. The Mathematical Program for SAND Optimization is stated as follows:

    min_{x,u}  f(x, u)
    s. t.      g(x, u) ≤ 0
               h(x, u) = 0        (2)
               e(x, u) = 0

This model for engineering optimization is very advantageous in the case of nonlinear systems but, on the other hand, the size of the Mathematical Program is greatly increased. In general, the number of state variables is much larger than the number of design variables.

We now consider the MDO of an engineering system comprising ne subsystems and/or disciplines. The state variables of the subsystems are u ≡ (u1, u2, ..., u_ne), u_i ∈ ℝ^{r_i}. We define a vector of linking variables z ∈ ℝ^s that represents the physical interactions between all the disciplines. The state equations of the disciplines can then be written as e1(x, z, u1) = 0, e2(x, z, u2) = 0, ..., e_ne(x, z, u_ne) = 0, with e_i ∈ ℝ^{r_i}. We include conditions that impose compatibility of the interactions between disciplines in the set of equality constraints h(x, z, u) = 0 and call them "Compatibility Conditions".

Then, we propose the following Mathematical Program as a model for Multidisciplinary Design Optimization:

    min_{x,z,u}  f(x, z, u)
    s. t.        g(x, z, u) ≤ 0
                 h(x, z, u) = 0
                 e1(x, z, u1) = 0        (3)
                 e2(x, z, u2) = 0
                 ...
                 e_ne(x, z, u_ne) = 0

As an example, we consider the structural and aerodynamic multidisciplinary optimization of airplanes and their components. The aerodynamic loads acting on the structure can be included among the linking variables. These loads are computed by the aerodynamic analysis code as functions of the aerodynamic state variables, which represent velocities. In the present model, the equality constraints include conditions imposing that, at the optimal design, the loads employed as inputs for the structural analysis are equal to those computed by the aerodynamic analysis.

We remark that one or more disciplines can be treated implicitly in the MDO Problem (3), with the corresponding state equations solved at each iteration, as in the classical model for optimal design (1).

3 FAIPA - The Feasible Arc Interior Point Algorithm

FAIPA is an iterative algorithm to solve the Nonlinear Programming Problem

    min_x  f(x)
    s. t.  g(x) ≤ 0        (4)
           h(x) = 0,

where x ∈ ℝ^n, f ∈ ℝ, g ∈ ℝ^m and h ∈ ℝ^p.

Let Ω ≡ {x ∈ ℝ^n : g(x) ≤ 0}. The following assumptions on the problem are required:

Assumption 3.1. The functions f(x), g(x) and h(x) are continuous in Ω, as well as their first derivatives.

Assumption 3.2. (Regularity Condition) For all x ∈ Ω, the vectors ∇g_i(x), for i such that g_i(x) = 0, and ∇h_i(x), for i = 1, 2, ..., p, are linearly independent.

This condition must be checked in practical applications. There are several situations in which the previous assumption does not hold. For instance, when there are sets of constraints that take the same values due to symmetries of the problem, only one constraint of each set must be considered.

We also introduce the definitions:

Definition 3.1. d ∈ ℝ^n is a Descent Direction for a smooth function φ : ℝ^n → ℝ at x ∈ ℝ^n if, for some ψ > 0, φ(x + td) < φ(x) for all t ∈ (0, ψ]. It can be proved that this condition holds if d^T ∇φ(x) < 0.

Definition 3.2. d ∈ ℝ^n is a Feasible Direction (with respect to the inequality constraints) for Problem (4) at x such that g(x) ≤ 0 if, for some θ > 0, we have g(x + td) ≤ 0 for all t ∈ [0, θ]. This condition holds if d^T ∇g_i(x) < 0 for i = 1, 2, ..., m.

At each point, FAIPA defines a "feasible descent arc". A search is then performed along this arc to obtain a new interior point with a lower potential function.

We denote by ∇g(x) ∈ ℝ^{n×m} and ∇h(x) ∈ ℝ^{n×p} the matrices of derivatives of g and h, respectively, and call λ ∈ ℝ^m and µ ∈ ℝ^p the corresponding vectors of Lagrange multipliers. The Lagrangian is

    l(x, λ, µ) = f(x) + λ^T g(x) + µ^T h(x)

and

    L(x, λ, µ) = ∇²f(x) + Σ_{i=1}^m λ_i ∇²g_i(x) + Σ_{i=1}^p µ_i ∇²h_i(x)

its Hessian. G(x) = diag(g(x)) denotes a diagonal matrix such that G_ii(x) = g_i(x).

Let us consider the Karush-Kuhn-Tucker (KKT) first order optimality conditions:

∇f(x) + ∇g(x)λ + ∇h(x)µ = 0        (5)

G(x)λ = 0        (6)

h(x) = 0        (7)

λ ≥ 0        (8)

g(x) ≤ 0        (9)

A point x* is a Stationary Point if there exist λ* and µ* such that (5)-(7) hold, and a KKT Point if the KKT conditions (5)-(9) hold.

The KKT conditions constitute a nonlinear system of equations and inequalities in the unknowns (x, λ, µ). It could be solved by computing the set of solutions of the nonlinear system of equations (5)-(7) and then looking for those solutions such that (8) and (9) are true. However, this procedure is useless in practice.

FAIPA performs Newton-like iterations to solve the nonlinear equations (5)-(7) in the primal and dual variables. To ensure convergence to KKT points, the system is solved in such a way that the inequalities (8) and (9) are satisfied at each iteration.


Let S = L(x, λ, µ). A Newton iteration for the solution of (5)-(7) is defined by the following linear system:

    [ S           ∇g(x)   ∇h(x) ] [ x0 − x ]       [ ∇f(x) + ∇g(x)λ + ∇h(x)µ ]
    [ Λ∇g^T(x)    G(x)    0     ] [ λ0 − λ ]  = −  [ G(x)λ                    ]        (10)
    [ ∇h^T(x)     0       0     ] [ µ0 − µ ]       [ h(x)                     ]

where (x, λ, µ) is the current point and (x0, λ0, µ0) is the new estimate. We call Λ = diag(λ).

We can also take S ≡ B, a quasi-Newton approximation of L(x, λ, µ), or S ≡ I. Depending on how S is defined, (10) is a Newton, a quasi-Newton, or a First Order Newton-like iteration; see [21].

Iterative methods for nonlinear problems in general include a local search procedure to force global convergence to a solution of the problem. This is the case of Line Search and Trust Region Algorithms for nonlinear optimization [37,40]. The present method includes a line search procedure, in the space of the primal variables x only, that forces the new iterate to be closer to the solution.

Let d0 ∈ ℝ^n be such that d0 = x0 − x. From (10), we have

    S d0 + ∇g(x)λ0 + ∇h(x)µ0 = −∇f(x)
    Λ∇g^T(x) d0 + G(x)λ0 = 0        (11)
    ∇h^T(x) d0 = −h(x)

which is independent of the current value of µ. Then, (11) gives a direction in the space of the primal variables x and new estimates of the Lagrange multipliers.

Consider the potential function

    φ(c, x) = f(x) + Σ_{i=1}^p c_i |h_i(x)|,        (12)

where, at iteration k, c_i^k is such that

    sg[h_i(x^k)] (c_i + µ_{0i}^k) < 0;  i = 1, 2, ..., p,        (13)

where sg(·) = (·)/|(·)|. It is proved in [19,21] that d0^k is a descent direction of φ(c^k, x).

However, d0 is not useful as a search direction, since it is not necessarily feasible. This is due to the fact that, as any constraint goes to zero, it follows from (11) that d0 tends to a direction tangent to the feasible set [21].

To obtain a feasible direction, a negative vector −ρλ is added to the right-hand side of (11). A perturbed linear system in d and λ is then obtained:

    S d + ∇g(x)λ + ∇h(x)µ = −∇f(x)
    Λ∇g^T(x) d + G(x)λ = −ρλ        (14)
    ∇h^T(x) d = 0

where ρ ∈ ℝ is positive. The new direction is d, and λ and µ are the new estimates of the Lagrange multipliers. Now d is a feasible direction, since ∇g_i^T(x) d = −ρ < 0 for the active constraints.

The addition of a negative number to the right-hand side of (11) produces a deflection of d0, proportional to ρ, towards the interior of the feasible region. To ensure that d is also a descent direction, we establish an upper bound on ρ in order to have

    d^T ∇φ(c, x) ≤ α d0^T ∇φ(c, x)        (15)

with α ∈ (0, 1), which implies d^T ∇φ(c, x) < 0. Thus, d is a descent direction of the potential function.

In general, the rate of descent of φ along d will be smaller than along d0. This is the price we pay to obtain a feasible descent direction.

To obtain the upper bound on ρ, we solve the auxiliary linear system in (d1, λ1):

    S d1 + ∇g(x)λ1 + ∇h(x)µ1 = 0
    Λ∇g^T(x) d1 + G(x)λ1 = −λ        (16)
    ∇h^T(x) d1 = 0

It follows from (11), (14) and (16) that d = d0 + ρ d1. Then, (15) is true for any ρ > 0 if d1^T ∇φ(c, x) < 0. Otherwise, we take

    ρ < (α − 1) d0^T ∇φ(c, x) / (d1^T ∇φ(c, x))        (17)

and (15) holds.
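The deflection logic of (14)-(17) can be sketched on toy data of the same kind as before (inequality constraints only, so the potential φ reduces to f; S = I, and all numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy problem: min f(x) = x1^2 + x2^2   s.t.   g(x) = 1 - x1 <= 0
x   = np.array([2.0, 1.0])
lam = np.array([1.0])
S   = np.eye(2)
grad_f = 2.0 * x                            # here grad(phi) = grad(f)
grad_g = np.array([[-1.0], [0.0]])
g      = np.array([1.0 - x[0]])
G, Lam = np.diag(g), np.diag(lam)
alpha  = 0.7

M = np.block([[S, grad_g], [Lam @ grad_g.T, G]])

# (11): d0, a descent direction that may be tangent to the feasible set
d0 = np.linalg.solve(M, np.concatenate([-grad_f, np.zeros(1)]))[:2]
# (16): d1, the deflection direction
d1 = np.linalg.solve(M, np.concatenate([np.zeros(2), -lam]))[:2]

# (17): choose rho so the deflected d = d0 + rho*d1 still satisfies (15)
if d1 @ grad_f < 0:
    rho = 1.0                               # any rho > 0 preserves (15)
else:
    rho = 0.5 * (alpha - 1.0) * (d0 @ grad_f) / (d1 @ grad_f)

d = d0 + rho * d1
```

The factor 0.5 below the bound (17) is an arbitrary illustrative choice; the resulting d satisfies the descent condition (15) by construction.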

The Feasible Directions Interior Point Algorithm, FDIPA, described in [21], employs d as a search direction. The line search procedure looks for a step-length t ensuring that the new point (x + td) satisfies the inequality constraints with a "reasonable" decrease of the potential function.

FDIPA is globally convergent to a Karush-Kuhn-Tucker point for any way of updating S and λ, provided the following assumptions hold.

Assumption 3.3. There exist positive numbers λ^I, λ^S and ḡ such that 0 < λ_i ≤ λ^S, i = 1, ..., m, and λ_i ≥ λ^I for any i such that g_i(x) ≥ −ḡ.

Assumption 3.4. There exist positive numbers σ1 and σ2 such that

    σ1 ‖d‖₂² ≤ d^T B d ≤ σ2 ‖d‖₂²  for any d ∈ ℝ^n.

Superlinear asymptotic convergence of the quasi-Newton version of FDIPA was also proved, provided that the step-length obtained in the line search goes to one as the iterates approach the solution of the problem.

However, when there are highly nonlinear constraints, the feasible segment supported by the feasible descent search direction d may not be long enough to accept a step equal to one. This fact is similar to the Maratos effect observed in [38] when employing the Sequential Quadratic Programming Method [20,44,45].

The Maratos effect was rarely observed in the numerical tests carried out with FDIPA. But, when it occurred, the number of iterations required to solve the problem to a given precision was greatly increased. A robust optimization technique for real engineering applications must avoid the Maratos effect, since superlinear convergence can be broken by a single highly nonlinear constraint.

The basic idea to avoid this problem consists in performing the line search along a second order arc, tangent to the feasible descent direction d and with a curvature "close" to the curvature of the boundary of the feasible set. The arc at x is given by the following expression:

    x(t) := x + t d + t² d̃


where d̃ is obtained by solving

    S d̃ + ∇g(x)λ̃ + ∇h(x)µ̃ = 0
    Λ∇g^T(x) d̃ + G(x)λ̃ = −Λ ω^I        (18)
    ∇h^T(x) d̃ = −ω^E

This linear system is similar to (16), with ω_i^I ≈ ½ d^T ∇²g_i(x) d and ω_i^E ≈ ½ d^T ∇²h_i(x) d computed as follows:

    ω_i^I = g_i(x + d) − g_i(x) − ∇g_i^T(x) d;  i = 1, ..., m
    ω_i^E = h_i(x + d) − h_i(x) − ∇h_i^T(x) d;  i = 1, ..., p
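A small sketch can verify the key property of the ω terms: they capture second order information using function and gradient values only, and the finite-difference formula is exact when the constraint is quadratic. The constraint below is an illustrative assumption, not from the paper.

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # Hessian of the toy constraint

def g(x):
    return 0.5 * x @ A @ x - 1.0           # illustrative quadratic g_i

def grad_g(x):
    return A @ x

x = np.array([0.2, -0.1])
d = np.array([0.5, 0.3])

# omega as computed in FAIPA: function and gradient values only
omega = g(x + d) - g(x) - grad_g(x) @ d
# the second order quantity it approximates (exact here, g quadratic)
omega_exact = 0.5 * d @ A @ d
```

No second derivatives of g are ever formed, which is what makes the arc correction affordable in engineering applications.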

The arc search was proposed in [39] and [41] for the Sequential Quadratic Programming algorithm, and it was also employed in an algorithm based on FDIPA, described in [42]. However, in both references the computation of d̃ requires the solution of a Quadratic Programming Problem, while FAIPA merely solves an additional linear system with the same matrix.

4 Line Search Techniques for Interior Point Algorithms

The Feasible Arc Interior Point Algorithm requires at each iteration a constrained line search, looking for a step-length corresponding to a feasible point with a lower potential. The first idea consists in solving the following optimization problem in t:

    min_t  φ(x^k + t d^k + t² d̃^k)
    s. t.  g(x^k + t d^k + t² d̃^k) ≤ 0        (19)

Instead of performing an exact minimization in t, it is more efficient to employ inexact line search techniques [20,29,37,40]. These include acceptance criteria for the step-length, compatible with a formal proof of global convergence, and a numerical procedure to find a value of t that satisfies these criteria. We extend well-known procedures employed in unconstrained optimization to the constrained line search.


4.1 Armijo’s Line Search

Armijo's line search defines a procedure to find a step-length ensuring a reasonable decrease of the potential function. In our case, we add the condition of feasibility of the inequality constraints. Armijo's search is stated as follows:

Algorithm 1. Armijo's Constrained Line Search

Define the step-length t as the first number of the sequence 1, ν, ν², ν³, ... satisfying

    φ(x + t d + t² d̃) ≤ φ(x) + t η1 ∇φ^T(x) d        (20)

and

    g(x + t d + t² d̃) ≤ 0,        (21)

where η1 ∈ (0, 1) and ν ∈ (0, 1). □

Condition (20) imposes an upper bound on t. It ensures a reasonable decrease of the potential function, at least η1 times the reduction obtained with a linear function tangent to φ at x. Let us call tmax the greatest t that satisfies (20) and tmin = inf(1, ν tmax). It is easy to prove that t ∈ [tmin, tmax].
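A minimal sketch of Algorithm 1 in code, for a toy problem with φ = f, a single inequality constraint and d̃ = 0; the data and parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def armijo_constrained(phi, grad_phi, g, x, d, d_tilde, eta1=0.1, nu=0.5):
    # Algorithm 1: first t in the sequence 1, nu, nu^2, ... satisfying the
    # sufficient-decrease condition (20) and the feasibility condition (21).
    t = 1.0
    slope = grad_phi(x) @ d                     # directional derivative < 0
    while True:
        x_new = x + t * d + t**2 * d_tilde
        if phi(x_new) <= phi(x) + t * eta1 * slope and np.all(g(x_new) <= 0):
            return t
        t *= nu

phi      = lambda x: x @ x                      # potential (here phi = f)
grad_phi = lambda x: 2.0 * x
g        = lambda x: np.array([1.0 - x[0]])     # inequality: x1 >= 1

x = np.array([2.0, 1.0])                        # interior point, g(x) = -1
d = np.array([-2.0, -2.0])                      # feasible descent direction
t = armijo_constrained(phi, grad_phi, g, x, d, np.zeros(2))
```

Here t = 1 is rejected because the full step leaves the feasible region, and the search accepts t = 0.5, which satisfies both (20) and (21).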

4.2 Wolfe's and Goldstein's Constrained Line Search Criteria

Wolfe's and Goldstein's criteria define intervals of acceptance of the step-length. According to our extension of Wolfe's criterion, the step-length t is accepted if (20) and (21) are true and at least one of the following m + 1 conditions holds:

    ∇φ^T(x + t d + t² d̃) d ≥ η2 ∇φ^T(x) d        (22)

or

    g_i(x + t d + t² d̃) ≥ γ g_i(x);  i = 1, 2, ..., m        (23)

where η1 ∈ (0, 1/2), η2 ∈ (η1, 1) and γ ∈ (0, 1). □


Condition (22) imposes a decrease of the directional derivative of the potential function. However, if the exact minimum of Problem (19) is not interior, the existence of a t such that (22) holds is not guaranteed. This is the motivation to include (23). Conditions (20) and (21) define upper bounds on the step-length, while (22) and (23) define lower bounds.

Wolfe's criterion is very strong because it guarantees a reduction of the potential function and of its directional derivative. However, it requires the computation of the directional derivative of the potential function. This is avoided by an extension of Goldstein's criterion that, instead of (22), employs the following condition:

    φ(x + t d + t² d̃) ≥ φ(x) + t η2 ∇φ^T(x) d,        (24)

with η2 ∈ (η1, 1).

A step-length satisfying Wolfe's or Goldstein's criterion for interior point algorithms can be obtained iteratively, in a similar way as in [29]. Given an initial t, if it is too short, extrapolations are carried out until a good or a too long step is obtained. Once a too long step is obtained, interpolations based on the longest known short step and the shortest known long step are carried out until the criterion is satisfied. The interpolations and extrapolations are based on polynomial approximations of the potential function and the inequality constraints. As the acceptance criterion is quite wide, the process generally requires few iterations. A line search procedure based on Wolfe's and Goldstein's criteria is stated next.

Algorithm 2. Wolfe's and Goldstein's Constrained Line Search

Parameters. η1 ∈ (0, 0.5), η2 ∈ (η1, 1) and γ ∈ (0, 1).

Data. Define an initial estimate of the step-length, t > 0. Set tR = tL = 0.

Step 1. Test for the upper bound on t.
If

    φ(x + t d + t² d̃) ≤ φ(x) + t η1 ∇φ^T(x) d

and

    g(x + t d + t² d̃) ≤ 0,

go to Step 2. Else, go to Step 4.

Step 2. Test for the lower bound on t.
If

    ∇φ^T(x + t d + t² d̃) d ≥ η2 ∇φ^T(x) d

for Wolfe's criterion, or

    φ(x + t d + t² d̃) ≥ φ(x) + t η2 ∇φ^T(x) d

for Goldstein's criterion, or any

    g_i(x + t d + t² d̃) ≥ γ g_i(x) for i = 1, 2, ..., m,

then t satisfies the line search criterion; STOP. Else, go to Step 3.

Step 3. Get a longer t.
Set tL = t.
i) If tR = 0, find a new t by extrapolation based on (0, t).
ii) If tR > 0, find a new t by interpolation in (t, tR).
Return to Step 1.

Step 4. Get a shorter t.
Set tR = t.
Find a new t by interpolation in (tL, t). Return to Step 1. □
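A simplified sketch of Algorithm 2 with Goldstein's criterion; for brevity, extrapolation doubles t and interpolation bisects the bracket instead of using the polynomial models mentioned above, and the toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def goldstein_constrained(phi, grad_phi, g, x, d, d_tilde,
                          eta1=0.1, eta2=0.7, gamma=0.5, t=1.0):
    tL, tR = 0.0, 0.0
    slope = grad_phi(x) @ d
    for _ in range(100):
        x_new = x + t * d + t**2 * d_tilde
        ok_upper = (phi(x_new) <= phi(x) + t * eta1 * slope   # (20)
                    and np.all(g(x_new) <= 0))                # (21)
        if ok_upper:                                   # Step 1 passed
            if (phi(x_new) >= phi(x) + t * eta2 * slope       # (24)
                    or np.any(g(x_new) >= gamma * g(x))):     # (23)
                return t                               # Step 2: accept
            tL = t                                     # Step 3: too short
            t = 2.0 * t if tR == 0.0 else 0.5 * (t + tR)
        else:                                          # Step 4: too long
            tR = t
            t = 0.5 * (tL + t)
    raise RuntimeError("line search failed")

phi      = lambda x: x @ x
grad_phi = lambda x: 2.0 * x
g        = lambda x: np.array([-x[0]])                 # x1 >= 0

x = np.array([2.0, 1.0])
d = np.array([-2.0, -2.0])
t = goldstein_constrained(phi, grad_phi, g, x, d, np.zeros(2), t=4.0)
```

Starting from the deliberately long estimate t = 4, the bracket shrinks through Step 4 twice before the step t = 1 is accepted in Step 2.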

5 Solving FAIPA’s Internal Linear Systems

The matrix of the linear systems (11), (16) and (18), called the "Primal-Dual Matrix", is neither symmetric nor positive definite. However, it was proved in [42] that the solution is unique, provided the problem satisfies the Regularity Condition.

It follows from (11) that

    d0 = −S⁻¹ [∇f(x) + ∇g(x)λ0 + ∇h(x)µ0]        (25)

and

    [Λ∇g^T(x) S⁻¹ ∇g(x) − G(x)] λ0 + Λ∇g^T(x) S⁻¹ ∇h(x) µ0 = −Λ∇g^T(x) S⁻¹ ∇f(x)        (26)
    ∇h^T(x) S⁻¹ ∇g(x) λ0 + ∇h^T(x) S⁻¹ ∇h(x) µ0 = −∇h^T(x) S⁻¹ ∇f(x) + h(x)


Then, instead of (11), we can obtain λ0 and µ0 by solving (26), and d0 by substitution in (25). Similar expressions, involving linear systems with the same matrix as in (26), called the "Dual Matrix", can be obtained to solve (16) and (18).

The Dual Matrix is symmetric and positive definite when the regularity condition holds [17,19]. However, the condition number becomes worse as some of the components of λ grow. To compute the matrices we need S⁻¹, or the products S⁻¹∇g(x) and S⁻¹∇f(x). In quasi-Newton algorithms, this can easily be obtained by working with an approximation of L⁻¹(x, λ, µ).

In a similar way as before, we deduce from (11) that

    λ0 = −G⁻¹(x) Λ ∇g^T(x) d0        (27)

and

    [S − ∇g(x) G⁻¹(x) Λ ∇g^T(x)] d0 + ∇h(x)µ0 = −∇f(x)        (28)
    ∇h^T(x) d0 = −h(x)

The matrix S − ∇g(x)G⁻¹(x)Λ∇g^T(x) is symmetric and positive definite, since S is positive definite and ∇g(x)G⁻¹(x)Λ∇g^T(x) is negative semidefinite at x such that g(x) ≤ 0. When there are only inequality constraints, (28) gives d0 directly and the corresponding matrix is called the "Primal Matrix". Also in this case, similar expressions are obtained to solve (16) and (18).
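The reduction (27)-(28) can be checked numerically against the full system (11) on a toy problem with one inequality and one equality constraint (S = I; all data are illustrative assumptions, not from the paper):

```python
import numpy as np

x = np.array([2.0, 1.0])
S = np.eye(2)
grad_f = 2.0 * x
grad_g = np.array([[-1.0], [0.0]]); g = np.array([1.0 - x[0]])    # x1 >= 1
grad_h = np.array([[1.0], [1.0]]);  h = np.array([x[0] + x[1] - 3.0])
lam = np.array([1.0])
G, Lam = np.diag(g), np.diag(lam)
Z = np.zeros((1, 1))

# Full primal-dual system (11)
M = np.block([[S,              grad_g, grad_h],
              [Lam @ grad_g.T, G,      Z     ],
              [grad_h.T,       Z,      Z     ]])
full = np.linalg.solve(M, np.concatenate([-grad_f, np.zeros(1), -h]))
d0_full, lam0_full, mu0_full = full[:2], full[2:3], full[3:]

# Reduced system (28), with the inequality multipliers eliminated
P = S - grad_g @ np.linalg.inv(G) @ Lam @ grad_g.T
K = np.block([[P,        grad_h],
              [grad_h.T, Z     ]])
red = np.linalg.solve(K, np.concatenate([-grad_f, -h]))
d0, mu0 = red[:2], red[2:]
lam0 = -np.linalg.inv(G) @ Lam @ grad_g.T @ d0    # back-substitution (27)
```

Both routes give the same (d0, λ0, µ0); the reduced system only has n + p unknowns instead of n + m + p.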

6 First Order, Newton and Quasi-Newton Iterations

In the present method, S can be taken equal to the second derivative of the Lagrangian, to a quasi-Newton estimate, or to the identity matrix.

6.1 A First Order Algorithm

Taking S = I, the present method becomes an extension of the gradient method. In particular, when all the constraints are active, the search direction is equal to the projected gradient [20]. Up to now, we have no theoretical results about the speed of convergence, but the rate of convergence is probably no better than linear. Since the computational effort and memory storage are smaller, this algorithm can be efficient in engineering applications that do not need a very precise solution, or when the computation of the involved functions and derivatives is not expensive.

6.2 A Newton Algorithm

To have a Newton algorithm, we take S = L(x^k, λ^k, µ^k) as in the iteration (10). However, as mentioned above, S must be positive definite. Since this is not always true, the Newton version of the present method can be obtained for particular applications only. This is the case of the algorithm for Nonlinear Limit Analysis and of the algorithm for Linear Stress Analysis with Contact described in references [55] and [3].

A study of the rate of convergence in this case is still required. Newton's method for nonlinear systems has quadratic local convergence. However, we iterate in the primal and dual variables, and in practice we are interested in the rate of convergence in terms of the design variables x.

6.3 A Quasi - Newton Algorithm

In quasi-Newton methods for constrained optimization, S = B, an approximation of the Hessian of the Lagrangian L(x, λ, µ). Then, it should be possible to obtain B with the same updating rules as in unconstrained optimization, but taking ∇ₓl(x, λ, µ) instead of ∇f(x). However, since L(x, λ, µ) is not necessarily positive definite at a KKT point, it is not always possible to get B positive definite, as required by the present technique. Powell, [44,45], proposed a modification of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) updating rule that gives positive definite quasi-Newton matrices.

BFGS Updating Rule for Constrained Optimization

Take
\[
\delta = x^{k+1} - x^k
\]
and
\[
\gamma = \nabla_x l(x^{k+1}, \lambda_0^k, \mu_0^k) - \nabla_x l(x^k, \lambda_0^k, \mu_0^k).
\]
If
\[
\delta^t\gamma < 0.2\,\delta^t B^k \delta,
\]
then compute
\[
\varphi = \frac{0.8\,\delta^t B^k \delta}{\delta^t B^k \delta - \delta^t\gamma}
\]
and take
\[
\gamma := \varphi\,\gamma + (1 - \varphi)\,B^k\delta.
\]
Set
\[
B^{k+1} := B^k + \frac{\gamma\gamma^t}{\delta^t\gamma} - \frac{B^k\delta\,\delta^t B^k}{\delta^t B^k \delta}. \qquad \Box
\]

In [21] it was proved that the convergence of the present algorithm is two-step superlinear, provided that a unit step length is obtained after a finite number of iterations.

It is also interesting to build a quasi-Newton approximation of the inverse of the Hessian matrix, H ≈ L⁻¹(x, λ₀, µ₀). An updating rule for H is obtained by employing the Sherman-Morrison formula, [40], to get the inverse of Bᵏ⁺¹:

\[
H^{k+1} := H^k + \left(1 + \frac{\gamma^t H^k \gamma}{\gamma^t\delta}\right)\frac{\delta\delta^t}{\delta^t\gamma}
- \frac{\delta\gamma^t H^k + H^k\gamma\,\delta^t}{\gamma^t\delta} \tag{29}
\]
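As an illustration, Powell's damped update can be sketched in a few lines (NumPy assumed; the function name and the test data are ours, not the paper's):

```python
import numpy as np

def damped_bfgs_update(B, delta, gamma):
    """Powell's modification of the BFGS update: damp gamma so that
    delta^t gamma >= 0.2 delta^t B delta, which keeps B positive definite."""
    Bd = B @ delta
    dBd = delta @ Bd
    if delta @ gamma < 0.2 * dBd:
        phi = 0.8 * dBd / (dBd - delta @ gamma)
        gamma = phi * gamma + (1.0 - phi) * Bd
    return B + np.outer(gamma, gamma) / (delta @ gamma) - np.outer(Bd, Bd) / dBd
```

With a negative-curvature pair (δᵗγ < 0) the damping keeps the updated matrix positive definite; with a well-behaved pair the rule reduces to the standard BFGS update and satisfies the secant equation Bᵏ⁺¹δ = γ.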

6.4 Limited Memory quasi-Newton Method

Quasi-Newton methods, when applied to constrained optimization, construct an approximation of the second derivative of the Lagrange function [20,37,40]. They are very efficient, since they avoid computing the second derivatives of the objective function and the constraints. However, they require the storage of the quasi-Newton matrix, which is full. Limited memory quasi-Newton methods avoid the storage of this matrix, [14,40]. They were first developed for unconstrained optimization and then extended to problems with box constraints. FAIPA is particularly adapted to employ this technique in problems with any kind of nonlinear constraints.

The limited memory method efficiently computes the product of the quasi-Newton matrix Hᵏ⁺¹ by a vector v ∈ ℝⁿ, or by a matrix, without the explicit assembly and storage of Hᵏ⁺¹, requiring only the storage of the q most recent pairs of vectors δ and γ.

Updating rule (29) for H can be expressed as follows:

\[
H^{k+1} = H^{k-q} + [\Delta \;\; H^{k-q}\Gamma]\, E \,[\Delta \;\; H^{k-q}\Gamma]^t \tag{30}
\]

where
\[
\Delta = [\delta^{k-q}, \delta^{k-q+1}, \ldots, \delta^{k-1}] \in \Re^{n\times q},
\qquad
\Gamma = [\gamma^{k-q}, \gamma^{k-q+1}, \ldots, \gamma^{k-1}] \in \Re^{n\times q},
\]
\[
E = \begin{bmatrix}
R^{-t}(D + \Gamma^t H^{k-q}\Gamma)R^{-1} & -R^{-t}\\
-R^{-1} & 0
\end{bmatrix} \in \Re^{2q\times 2q},
\]
\[
R = \mathrm{upper}(\Delta^t\Gamma) \in \Re^{q\times q}, \qquad D = \mathrm{diag}(R).
\]

We write A = upper(B) when Aᵢⱼ = Bᵢⱼ for j ≥ i and Aᵢⱼ = 0 otherwise.

The limited memory method takes Hᵏ⁻ᵠ = I. Then, the following expression for Hᵏ⁺¹v is obtained:

\[
H^{k+1}v = v + [\Delta \;\; \Gamma]\,E\,[\Delta \;\; \Gamma]^t v. \tag{31}
\]

Now, the coefficient matrices and right-hand sides of the Dual System (26) can be obtained without computing and storing the quasi-Newton matrix. A limited memory formulation for the BFGS matrix B can be obtained in a similar way and employed to solve the Primal System (28).

Even in very large problems, taking q ≈ 10, the number of iterations with limited memory and with the original quasi-Newton algorithm remains similar.

Iterative methods for linear systems generally compute the product of the coefficient matrix by a vector at each iteration. The limited memory formulation can also be applied to solve iteratively the internal linear systems of FAIPA without storing the quasi-Newton matrix. This can be extremely efficient when the constraint derivatives are sparse.
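The compact form (30)-(31) is easy to exercise numerically. The sketch below (NumPy assumed; names are ours) computes Hᵏ⁺¹v from the q stored pairs only, and can be checked against q dense applications of update (29) started from H = I:

```python
import numpy as np

def hv_limited_memory(deltas, gammas, v):
    """Product H^{k+1} v via the compact representation (30)-(31),
    with H^{k-q} = I, storing only the q most recent (delta, gamma) pairs."""
    D = np.column_stack(deltas)            # Delta in R^{n x q}, oldest pair first
    G = np.column_stack(gammas)            # Gamma in R^{n x q}
    R = np.triu(D.T @ G)                   # R = upper(Delta^t Gamma)
    Rinv = np.linalg.inv(R)
    Dg = np.diag(np.diag(R))               # D = diag(R)
    q = D.shape[1]
    E = np.block([[Rinv.T @ (Dg + G.T @ G) @ Rinv, -Rinv.T],
                  [-Rinv, np.zeros((q, q))]])
    W = np.hstack([D, G])                  # [Delta  H^{k-q} Gamma] with H^{k-q} = I
    return v + W @ (E @ (W.T @ v))
```

Only the 2q vectors and the small 2q × 2q matrix E are formed; the n × n matrix Hᵏ⁺¹ never is, which is the point of the method for large n.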

7 Statement of FAIPA

Algorithm 3. Feasible Arc Interior Point Algorithm

Parameters. (α, η, ν₁, ν₂, γ) ∈ (0, 1), ϕ > 0 and c ∈ ℝᵖ, c > 0.
Data. Initial values for x ∈ ℝⁿ such that g(x) < 0, λ ∈ ℝᵐ, λ > 0, and S ∈ ℝ^{n×n} symmetric and positive definite.

Step 1. Computation of a feasible descent direction.


(i) Solve the linear systems:

\[
\begin{cases}
S d_0 + \nabla g(x)\lambda_0 + \nabla h(x)\mu_0 = -\nabla f(x)\\
\Lambda\nabla g^t(x)\,d_0 + G(x)\lambda_0 = 0\\
\nabla h^t(x)\,d_0 = -h(x)
\end{cases} \tag{32}
\]

and

\[
\begin{cases}
S d_1 + \nabla g(x)\lambda_1 + \nabla h(x)\mu_1 = 0\\
\Lambda\nabla g^t(x)\,d_1 + G(x)\lambda_1 = -\lambda\\
\nabla h^t(x)\,d_1 = 0
\end{cases} \tag{33}
\]

Let the potential function be
\[
\phi_c(x) = f(x) + \sum_{i=1}^{p} c_i\,|h_i(x)|. \tag{34}
\]

(ii) If ci < −1.2µ0(i), then set ci = −2µ0(i); i = 1, ..., p.

(iii) If d₁ᵗ∇φc(x) > 0, set
\[
\rho = \min\left[\varphi\,\|d_0\|_2^2;\;(\alpha - 1)\,d_0^t\nabla\phi_c(x)\big/ d_1^t\nabla\phi_c(x)\right]. \tag{35}
\]
Otherwise, set
\[
\rho = \varphi\,\|d_0\|_2^2. \tag{36}
\]

(iv) Compute the feasible descent direction: d = d0 + ρd1.

Step 2. Computation of a "restoring direction" d̃.

Compute:
\[
\omega_i^I = g_i(x + d) - g_i(x) - \nabla g_i^t(x)\,d; \quad i = 1, \ldots, m
\]
\[
\omega_i^E = h_i(x + d) - h_i(x) - \nabla h_i^t(x)\,d; \quad i = 1, \ldots, p
\]

Solve:
\[
\begin{cases}
S\tilde d + \nabla g(x)\tilde\lambda + \nabla h(x)\tilde\mu = 0\\
\Lambda\nabla g^t(x)\,\tilde d + G(x)\tilde\lambda = -\Lambda\,\omega^I\\
\nabla h^t(x)\,\tilde d = -\omega^E
\end{cases} \tag{37}
\]

Step 3. Arc search.

Employ a line search procedure to get a step length t based on the potential function φc(x + td + t²d̃).

Step 4. Updates.

(i) Set the new point: x := x + td + t²d̃.

(ii) Define new values for λ > 0 and S symmetric and positive definite.

(iii) Go back to Step 1. □
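The arc search in Step 3 is not fully specified here; as an illustration, the sketch below (NumPy assumed) uses Armijo-style backtracking along the arc x(t) = x + t d + t² d̃. The acceptance test and the constants are our assumptions, not the paper's, and the strict-feasibility check on the inequality constraints that FAIPA also imposes is omitted:

```python
import numpy as np

def arc_search(phi, grad_phi, x, d, d_tilde, eta=0.1, nu=0.7):
    """Backtracking search along the arc x(t) = x + t*d + t^2*d_tilde.

    Accepts the largest t in {1, nu, nu^2, ...} giving sufficient decrease
    of the potential function phi (Armijo rule on the descent direction d).
    """
    phi0 = phi(x)
    slope = grad_phi(x) @ d          # < 0 since d is a descent direction
    t = 1.0
    while phi(x + t * d + t * t * d_tilde) > phi0 + eta * t * slope:
        t *= nu
    return t
```

The quadratic term t² d̃ bends the search path back toward the feasible region, which is what distinguishes the arc search from an ordinary line search along d.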

The present algorithm is very general, in the sense that it converges to a Karush-Kuhn-Tucker point of the problem from any initial interior point. This is true no matter how λ and S are updated in Step 4, provided Assumptions 1 and 2 are respected. We work with the following updating rule for λ.

Updating Rule for λ

Set, for i = 1, ..., m,

\[
\lambda_i := \max\left[\lambda_{0i};\; \epsilon\,\|d_0\|_2^2\right]. \tag{38}
\]

If gᵢ(x) ≥ −ḡ and λᵢ < λᴵ, set λᵢ = λᴵ. □

The parameters ε, ḡ and λᴵ are taken positive. In this rule, λᵢ is a second order perturbation of λ₀ᵢ, given by the Newton iteration (10). If ḡ and λᴵ are taken small enough, then after a finite number of iterations λᵢ becomes equal to λ₀ᵢ for the active constraints.
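A direct transcription of this rule (NumPy assumed; the parameter values are illustrative, not the paper's):

```python
import numpy as np

def update_multipliers(lam0, d0, g, eps=1e-8, g_bar=1e-6, lam_min=1e-6):
    """Updating rule (38): lambda_i := max[lambda_0i; eps * ||d0||^2],
    with a positive floor lam_min for the nearly active constraints."""
    lam = np.maximum(lam0, eps * (d0 @ d0))
    lam[(g >= -g_bar) & (lam < lam_min)] = lam_min
    return lam
```

Since eps * ||d0||² > 0 away from a solution, the rule always returns strictly positive multipliers, as required by the algorithm's Data assumptions.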

8 A Quasi-Newton Method for Simultaneous Analysis and Design

When FAIPA is applied to solve the SAND Problem (2), the sizes of the internal linear systems and of the quasi-Newton matrix are much increased, since the number of degrees of freedom of the state equation is in general much larger than the number of design variables.

We present a quasi-Newton algorithm, FAIPA SAND, that reduces these sizes to the same values as in the classical design optimization model. Our technique can be considered a generalization of the Reduced Gradient method. Existing reduced algorithms restore the equality constraints at each iteration. The present formulation avoids this procedure, which is equivalent to solving the state equation at each iteration.

Consider the Mathematical Program for the SAND problem:

\[
\begin{array}{ll}
\min\limits_{x,\,u} & f(x, u)\\[2pt]
\text{s.t.} & g(x, u) \le 0\\
& h(x, u) = 0\\
& e(x, u) = 0
\end{array} \tag{39}
\]

The first linear system (11) of FAIPA becomes

\[
\begin{cases}
B_{xx} d_{0x} + B_{xu} d_{0u} + \nabla_x g(x,u)\lambda_0 + \nabla_x h(x,u)\mu_0 + \nabla_x e(x,u)\nu_0 = -\nabla_x f(x,u)\\
B_{ux} d_{0x} + B_{uu} d_{0u} + \nabla_u g(x,u)\lambda_0 + \nabla_u h(x,u)\mu_0 + \nabla_u e(x,u)\nu_0 = -\nabla_u f(x,u)\\
\Lambda\nabla_x g^t(x,u)\,d_{0x} + \Lambda\nabla_u g^t(x,u)\,d_{0u} + G(x,u)\lambda_0 = 0\\
\nabla_x h^t(x,u)\,d_{0x} + \nabla_u h^t(x,u)\,d_{0u} = -h(x,u)\\
\nabla_x e^t(x,u)\,d_{0x} + \nabla_u e^t(x,u)\,d_{0u} = -e(x,u)
\end{cases} \tag{40}
\]

where
\[
B = \begin{bmatrix} B_{xx} & B_{xu}\\ B_{ux} & B_{uu} \end{bmatrix},
\qquad
d_0 = \begin{bmatrix} d_{0x}\\ d_{0u} \end{bmatrix}
\]
and ν₀ ∈ ℝʳ represents the Lagrange multipliers corresponding to the state equation.

Now, we assume that ∇ueᵗ(x, u) is nonsingular and define

\[
\delta u \equiv [\nabla_u e^t(x,u)]^{-1}\,e(x,u); \qquad \delta u \in \Re^{r} \tag{41}
\]
and
\[
\Delta u \equiv [\nabla_u e^t(x,u)]^{-1}\,\nabla_x e^t(x,u); \qquad \Delta u \in \Re^{r\times n}. \tag{42}
\]


We remark that [∇ueᵗ(x, u)] is the so-called "Tangent Matrix" and it has a particular structure depending on the application. Numerical analysis codes in different disciplines usually include techniques to solve the tangent equation that take advantage of this structure; these can be employed to compute δu and ∆u.
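For illustration, a toy sketch (NumPy, random data standing in for a discretized state equation) of the two solves (41)-(42). The same coefficient matrix serves both right-hand sides, so in practice a single factorization of the tangent matrix covers all n + 1 solves:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 3, 6                      # design and state dimensions (toy sizes)
T = rng.standard_normal((r, r)) + r * np.eye(r)   # tangent matrix, nonsingular
e = rng.standard_normal(r)                        # state-equation residual e(x, u)
A = rng.standard_normal((r, n))                   # grad_x e^t(x, u), r x n

du = np.linalg.solve(T, e)       # delta_u of (41), in R^r
Du = np.linalg.solve(T, A)       # Delta_u of (42), in R^{r x n}
```

In a real code T would come from the analysis module (e.g. a stiffness matrix) together with its own sparse or structured solver.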

Let us define M = [I  −∆uᵗ], M ∈ ℝ^{n×(n+r)}, Ix = [I_{n×n}  0], Ix ∈ ℝ^{n×(n+r)}, and Iu = [I_{r×r}  0], Iu ∈ ℝ^{r×(n+r)}. By elimination of d0u and ν0 from (40), the following reduced system is obtained:

\[
\begin{cases}
\bar B\, d_{0x} + [\nabla_x g(x,u) - \Delta u^t\nabla_u g(x,u)]\lambda_0 + [\nabla_x h(x,u) - \Delta u^t\nabla_u h(x,u)]\mu_0 = b\\
\Lambda[\nabla_x g(x,u) - \Delta u^t\nabla_u g(x,u)]^t d_{0x} + G(x,u)\lambda_0 = -\Lambda\nabla_u g^t(x,u)\,\delta u\\
[\nabla_x h(x,u) - \Delta u^t\nabla_u h(x,u)]^t d_{0x} = -[h(x,u) + \nabla_u h^t(x,u)\,\delta u]
\end{cases} \tag{43}
\]

where B̄ = MBMᵗ, B̄ ∈ ℝ^{n×n}, is a reduced quasi-Newton matrix and
\[
b = -\nabla_x f(x,u) + \Delta u^t\nabla_u f(x,u) - (I_x + \Delta u^t I_u)\,B\, I_u^t\,\delta u.
\]

We employ the limited memory quasi-Newton formulation to compute the matrix B̄ and the vector b without computing and storing the full quasi-Newton matrix B. With the present formulation, the internal linear systems have the same size as in the classical model for engineering optimization.
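The reduction of the quasi-Newton matrix can be checked block-wise. The sketch below (NumPy, random data; the expansion is our algebra, not quoted from the paper) verifies that B̄ = M B Mᵗ equals Bxx − ∆uᵗBux − Bxu∆u + ∆uᵗBuu∆u and stays symmetric positive definite:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 3, 5
B = rng.standard_normal((n + r, n + r))
B = B @ B.T + np.eye(n + r)               # symmetric positive definite quasi-Newton matrix
Du = rng.standard_normal((r, n))          # Delta_u from (42)

M = np.hstack([np.eye(n), -Du.T])         # M = [I  -Delta_u^t], n x (n+r)
B_bar = M @ B @ M.T                       # reduced quasi-Newton matrix, n x n

# Block-wise expansion of the same product
Bxx, Bxu = B[:n, :n], B[:n, n:]
Bux, Buu = B[n:, :n], B[n:, n:]
expansion = Bxx - Du.T @ Bux - Bxu @ Du + Du.T @ Buu @ Du
assert np.allclose(B_bar, expansion)
```

Since M has full row rank (its leading block is the identity), B̄ inherits positive definiteness from B, so the reduced system keeps the structure FAIPA requires.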

9 FAIPA MDO: A Mathematical Programming Algorithm for MDO

We now consider Problem (3), which represents a model for Multidisciplinary Design Optimization. This is a particular case of Problem (2), where u ≡ (u₁, u₂, ..., u_{ne}) and

e(x, z, u) ≡ [e₁(x, z, u₁), e₂(x, z, u₂), ..., e_{ne}(x, z, u_{ne})].

It follows that δu = (δu₁, δu₂, ..., δu_{ne}) and ∆u = (∆u₁, ∆u₂, ..., ∆u_{ne}), where
\[
\delta u_i = [\nabla_{u_i} e_i^t(x, u_i)]^{-1}\,e_i(x, u_i); \qquad \delta u_i \in \Re^{r_i} \tag{44}
\]
and
\[
\Delta u_i = [\nabla_{u_i} e_i^t(x, u_i)]^{-1}\,\nabla_x e_i^t(x, u_i); \qquad \Delta u_i \in \Re^{r_i\times n}. \tag{45}
\]

Then, the tangent state equations corresponding to the subsystems have to be solved at each iteration of FAIPA MDO.
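Because e is block-separable, the tangent solves (44)-(45) of the different disciplines are independent and can run in parallel. A minimal sketch with random stand-in data for two disciplines (all names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, sizes = 2, [3, 4]                      # design variables; state sizes r_i per discipline
deltas_u, Deltas_u = [], []
for r in sizes:                           # each pass is one independent discipline solve
    T = rng.standard_normal((r, r)) + r * np.eye(r)  # tangent matrix of discipline i
    e = rng.standard_normal(r)                        # residual e_i(x, u_i)
    A = rng.standard_normal((r, n))                   # grad_x e_i^t(x, u_i)
    deltas_u.append(np.linalg.solve(T, e))            # (44)
    Deltas_u.append(np.linalg.solve(T, A))            # (45)
```

Each iteration of the loop touches only one discipline's data, which is what allows existing single-discipline analysis codes to do these solves unchanged.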


Given initial values for the Design Variables x, the State Variables uᵢ corresponding to all the subsystems, and the Linking Variables z, FAIPA MDO generates a sequence converging to an optimal point that verifies the State Equations and the Compatibility Equations. The initial point only needs to verify the inequality constraints.

The stopping criterion of the main algorithm includes the verification, within a given tolerance, of the Karush-Kuhn-Tucker optimality conditions and of the state equations of all the subsystems. The KKT conditions are easily checked since FAIPA produces estimates of the Lagrange Multipliers.

In what follows, a simplified description of the iterative process of FAIPA MDO is presented.

Algorithm 4. FAIPA MDO

Data: Initial values for x ∈ ℝⁿ, z ∈ ℝˢ and u ≡ (u₁, u₂, ..., u_{ne}), uᵢ ∈ ℝ^{rᵢ}, such that g(x, z, u) < 0; λ ∈ ℝᵐ, λ > 0; and S ∈ ℝ^{n×n} symmetric and positive definite.

Step 1. For each discipline i compute:

The State Equation

eᵢ(x, z, u)

The State Equation Sensitivities

∇xeᵢ(x, z, u), ∇zeᵢ(x, z, u), ∇ueᵢ(x, z, u)

Step 2. For each discipline i solve:

\[
[\nabla_{u_i} e_i^t(x, u_i)]\,\delta u_i = e_i(x, u_i)
\]
and
\[
[\nabla_{u_i} e_i^t(x, u_i)]\,\Delta u_i = \nabla_x e_i^t(x, u_i)
\]

Step 3. Compute a Feasible Descent Direction

d ≡ (dx, dz, du1 , du2 , ..., dune)

and estimates of the Lagrange Multipliers

λ0, µ0, ν01 , ν02 , ..., ν0ne


Step 4. For each discipline i compute:

The State Equation

eᵢ(x + dx, z + dz, uᵢ + duᵢ)

Step 5. Compute a Restoring Direction

d̃ ≡ (d̃x, d̃z, d̃u₁, d̃u₂, ..., d̃u_{ne})

Step 6. Arc search.

Find a step length t based on the Potential Function φc(x + td + t²d̃)

Step 7. Updates.

(i) Set the new point:

x := x + t dx + t² d̃x
z := z + t dz + t² d̃z
uᵢ := uᵢ + t duᵢ + t² d̃uᵢ; for i = 1, 2, ..., ne

(ii) Define new values for λ > 0 and S symmetric and positive definite.

(iii) Go back to Step 1. □

FAIPA MDO solves three linear systems with the same matrix per iteration. The size of these systems depends only on the number of design variables and constraints. The matrices of the additional systems required to compute δuᵢ and ∆uᵢ can be computed by the engineering simulation codes, and these systems can also be solved by those codes. In this way, existing analysis codes and their solvers can be employed for MDO without changes. We remark that FAIPA and FAIPA MDO have strong theoretical bases that ensure global convergence to KKT points of Problem (3).

References

[1] Araujo, A. L., Mota Soares, Herskovits, J. and Pedersen, P., Development of a Finite Element Model for the Identification of Material and Piezoelectric Properties Through Gradient Optimization and Experimental Vibration Data, Composite Structures, Elsevier, v.58, pp. 307-318, 2002.

[2] Araujo, A. L., Mota Soares, C. M., Moreira de Freitas, M. J., Pedersen, P. and Herskovits, J., Combined Numerical-Experimental Model for the Identification of Mechanical Properties of Laminated Structures, Composite Structures, Elsevier, v.50, pp. 363-372, 2000.

[3] Auatt, S. S., Borges, L. A., and Herskovits, J., An Interior Point Optimization Algorithm for Contact Problems in Linear Elasticity, Numerical Methods in Engineering '96, Edited by J. A. Desideri, P. Le Tallec, E. Onate, J. Periaux and E. Stein, John Wiley and Sons, New York, New York, pp. 855-861, 1996.

[4] Baron, F. J., Constrained Shape Optimization of Coupled Problems with Electromagnetic Waves and Fluid Mechanics, PhD Thesis, University of Malaga, Malaga, Spain, 1994 (in Spanish).

[5] Baron, F. J., Some Shape Optimization Problems in Electromagnetism and Fluid Mechanics, DSc Thesis, Paris VI University, Paris, France, 1998 (in French).

[6] Baron, F. J., Duffa, G., Carrere, F., and Le Tallec, P., Optimisation de Forme en Aerodynamique, CHOCS, Revue Scientifique et Technique de la Direction des Applications Militaires du CEA, France, No. 12, pp. 47-56, 1994 (in French).

[7] Baron, F. J., and Pironneau, O., Multidisciplinary Optimal Design of a Wing Profile, Proceedings of Structural Optimization 93, Rio de Janeiro, Brazil, Edited by J. Herskovits, Vol. 2, pp. 61-68, 1993.

[8] Bijan, M., and Pironneau, O., New Tools for Optimum Shape Design, CFD Review, Special Issue, 1995.

[9] Dennis, J. E., and Schnabel, R., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, New Jersey, 1983.

[10] Dew, M. C., A First Order Feasible Directions Algorithm for Constrained Optimization, Technical Report No. 153, Numerical Optimization Centre, The Hatfield Polytechnic, College Lane, Hatfield, Hertfordshire AL10 9AB, U.K., 1985.

[11] Dew, M. C., A Feasible Directions Method for Constrained Optimization Based on the Variable Metric Principle, Technical Report No. 155, Numerical Optimization Centre, Hatfield Polytechnic, Hatfield, Hertfordshire, England, 1985.

[12] Eschenauer, H. A. and Beer, R., Multidisciplinary Optimization of Cast Components Regarding Process Characteristics, Structural and Multidisciplinary Optimization, Springer-Verlag, v.16, pp. 212-225, 1998.

[13] Fox, R. L., Optimization Methods for Engineering Design, Addison-Wesley, Reading, Massachusetts, 1971.


[14] Goldfeld, P., Duarte, A. and Herskovits, J., A Limited Memory Interior Points Technique for Nonlinear Optimization, Proceedings of ECCOMAS, European Congress on Computational Methods in Applied Science and Engineering, Paris, Sept. 1996.

[15] Gu, X., Renaud, J. E., Batill, S. M., Brach, R. M. and Budhiraja, A. S., Worst Case Propagated Uncertainty of Multidisciplinary Systems in Robust Design Optimization, Structural and Multidisciplinary Optimization, Springer, v.20, pp. 190-213, 2000.

[16] Haftka, R. and Gurdal, Z., Elements of Structural Optimization, Kluwer, 1992.

[17] Herskovits, J., Development of a Numerical Method for Nonlinear Optimization, Dr. Ing. Thesis, Paris IX University, INRIA Rocquencourt, 1982.

[18] Herskovits, J., A Two-Stage Feasible Directions Algorithm Including Variable Metric Techniques for Nonlinear Optimization, Research Report 118, INRIA, Le Chesnay, France, 1982.

[19] Herskovits, J., A Two-Stage Feasible Directions Algorithm for Nonlinear Constrained Optimization, Mathematical Programming, Vol. 36, pp. 19-38, 1986.

[20] Herskovits, J., A View on Nonlinear Optimization, in Advances in Structural Optimization, Edited by J. Herskovits, Kluwer Academic Publishers, Dordrecht, Holland, pp. 71-117, 1995.

[21] Herskovits, J., A Feasible Directions Interior Point Technique for Nonlinear Optimization, JOTA, Journal of Optimization Theory and Applications, Vol. 99, N. 1, pp. 121-146, Plenum, London, 1998.

[22] Herskovits, J., Dias, G., Falcon, G. and Mota Soares, C. M., Shape Structural Optimization with an Interior Point Mathematical Programming Algorithm, Structural and Multidisciplinary Optimization, Springer, v.20, pp. 107-115, 2000.

[23] Herskovits, J., Dubeux, V., Mota Soares, C. M. and Araujo, A., Interior Point Algorithms for Nonlinear Constrained Least Squares Problems, Inverse Problems in Engineering, Taylor & Francis, London, 2004 (to be published).

[24] Herskovits, J., Lapporte, E., Le Tallec, P., and Santos, G., A Quasi-Newton Interior Point Algorithm Applied for Constrained Optimum Design in Computational Fluid Dynamics, European Journal of Finite Elements, Vol. 5, pp. 595-617, 1996.

[25] Herskovits, J., Leontiev, A., Dias, G. and Falcon, G. S., Contact Shape Optimization: A Bilevel Programming Approach, Structural and Multidisciplinary Optimization, Springer, v.20, n.3, pp. 214-221, 2000.


[26] Herskovits, J., Mappa, P. and Juillen, L., FAIPA SAND: An Interior Point Algorithm for Simultaneous ANalysis and Design Optimization, WCSMO4, Fourth World Congress of Structural and Multidisciplinary Optimization, Dalian, China, June 2001.

[27] Herskovits, J., Mappa, P. and Mota Soares, C. M., FAIPA MDO: A Mathematical Programming Algorithm for Multidisciplinary Design Optimization, WCSMO5, Fifth World Congress of Structural and Multidisciplinary Optimization, Lido di Jessolo, Italy, May 2003.

[28] Herskovits, J. and Santos, G., Feasible Arc Interior Point Algorithm for Nonlinear Optimization, Computational Mechanics, New Trends and Applications, Edited by S. Idelsohn, E. Onate and E. Dvorkin, CIMNE, Barcelona, 1998.

[29] Hiriart-Urruty, J. B., and Lemarechal, C., Convex Analysis and Minimization Algorithms, Springer Verlag, Berlin, Germany, 1993.

[30] Hulme, K. F. and Bloebaum, C. L., A Simulation-Based Comparison of Multidisciplinary Design Optimization Solution Strategies Using CASCADE, Structural and Multidisciplinary Optimization, Springer, v.19, pp. 17-35, 2000.

[31] Kirsch, U., Structural Optimization: Fundamentals and Applications, Springer-Verlag, 1993.

[32] Kovacs, B., Szabo, F. J. and Szota, Gy., A Generalized Shape Optimization Procedure for the Solution of Linear Partial Differential Equations with Application to Multidisciplinary Optimization, Structural and Multidisciplinary Optimization, Springer, v.21, pp. 327-331, 2001.

[33] Laporte, E. and Le Tallec, P., Numerical Methods in Sensitivity and Shape Optimization, Birkhauser, 2002.

[34] Leontiev, A. and Herskovits, J., Interior Point Techniques for Optimal Control of Variational Inequality, Structural and Multidisciplinary Optimization, Springer, v.14, n.2-3, pp. 100-107, 1997.

[35] Leontiev, A., Herskovits, J. and Eboli, C., Optimization Theory Application to Slitted Plate Bending Problem, International Journal of Solids and Structures, United Kingdom, Vol. 35, pp. 2679-2694, 1997.

[36] Leontiev, A., Huacasi, W. and Herskovits, J., An Optimization Technique for the Solution of the Signorini Problem Using the Boundary Element Method, Structural and Multidisciplinary Optimization, Springer-Verlag, v.24, n.1, pp. 72-77, 2002.

[37] Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Addison-Wesley, London, 1984.

[38] Maratos, N., Exact Penalty Function Algorithms for Finite Dimensional Optimization Problems, PhD Thesis, Imperial College of Science and Technology, London, England, 1978.


[39] Mayne, D. Q., and Polak, E., Feasible Directions Algorithms for Optimization Problems with Equality and Inequality Constraints, Mathematical Programming, Vol. 11, pp. 67-80, 1976.

[40] Nocedal, J. and Wright, S. J., Numerical Optimization, Springer-Verlag, New York, 1999.

[41] Panier, E. R., and Tits, A. L., A Superlinearly Convergent Algorithm for Constrained Optimization Problems, SIAM Journal on Control and Optimization, Vol. 25, pp. 934-950, 1987.

[42] Panier, E. R., Tits, A. L., and Herskovits, J., A QP-Free, Globally Convergent, Locally Superlinearly Convergent Algorithm for Inequality Constrained Optimization, SIAM Journal on Control and Optimization, Vol. 26, pp. 788-810, 1988.

[43] Polak, E., Optimization: Algorithms and Consistent Approximations, Springer-Verlag, 1997.

[44] Powell, M. J. D., The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations, Nonlinear Programming 3, Edited by O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, Academic Press, London, England, pp. 27-64, 1978.

[45] Powell, M. J. D., Variable Metric Methods for Constrained Optimization, Mathematical Programming - The State of the Art, Edited by A. Bachem, M. Grotschel, and B. Korte, Springer Verlag, Berlin, Germany, pp. 288-311, 1983.

[46] Qian, Z., Xue, C. and Pan, S., FEA Agent for Multidisciplinary Optimization, Structural and Multidisciplinary Optimization, Springer, v.22, pp. 373-383, 2001.

[47] Romero, J., Mappa, P. C., Herskovits, J. and Mota Soares, C. M., Optimal Truss Design Including Plastic Collapse Constraints, Structural and Multidisciplinary Optimization, Springer, to be published.

[48] Sobieszczanski-Sobieski, J., Multidisciplinary Design Optimization: An Emerging New Engineering Discipline, in Advances in Structural Optimization, Edited by J. Herskovits, Kluwer Academic Publishers, Dordrecht, pp. 483-496, 1995.

[49] Sobieszczanski-Sobieski, J., and Haftka, R. T., Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments, Structural and Multidisciplinary Optimization, Springer, v.14, n.1, pp. 1-23, 1997.

[50] Tappeta, R. V., Nagendra, S. and Renaud, J. E., A Multidisciplinary Design Optimization Approach for High Temperature Aircraft Engine Components, Structural and Multidisciplinary Optimization, Springer, v.18, pp. 134-145, 1999.

[51] Tits, A. L., and Zhou, J. L., A Simple, Quadratically Convergent Interior Point Algorithm for Linear Programming and Convex Quadratic Programming, Large Scale Optimization: State of the Art, Edited by W. W. Hager, D. W. Hearn and P. M. Pardalos, Kluwer Academic Publishers, Holland, pp. 411-427, 1993.

[52] Vanderplaats, G. N., Numerical Optimization Techniques for Engineering Design: With Applications, McGraw-Hill, 1984.

[53] Vautier, I., Salaun, M. and Herskovits, J., Application of an Interior Point Algorithm to the Modeling of Unilateral Contact Between Spot-Welded Shells, Proceedings of Structural Optimization 93, Rio de Janeiro, Brazil, Edited by J. Herskovits, Vol. 2, pp. 293-300, 1993.

[54] Venter, G. and Sobieszczanski-Sobieski, J., Multidisciplinary Optimization of a Transport Aircraft Wing Using Particle Swarm Optimization, Structural and Multidisciplinary Optimization, Springer, to be published.

[55] Zouain, N. A., Herskovits, J., Borges, L. A., and Feijoo, R., An Iterative Algorithm for Limit Analysis with Nonlinear Yield Functions, International Journal of Solids and Structures, Vol. 30, pp. 1397-1417, 1993.
