This article was downloaded by: [National Dong Hwa University] on 29 March 2014, at 09:14.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Optimization: A Journal of Mathematical Programming and Operations Research
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gopt20

To cite this article: Le Thi Hoai An, Pham Dinh Tao, Nguyen Canh Nam & Le Dung Muu (2010) Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program, Optimization: A Journal of Mathematical Programming and Operations Research, 59:1, 77-93, DOI: 10.1080/02331930903500290

To link to this article: http://dx.doi.org/10.1080/02331930903500290

Published online: 12 Feb 2010.
Optimization, Vol. 59, No. 1, January 2010, 77–93
Methods for optimizing over the efficient and weakly efficient sets
of an affine fractional vector optimization program
Le Thi Hoai An (a)(*), Pham Dinh Tao (b), Nguyen Canh Nam (c) and Le Dung Muu (d)

(a) Laboratory of Theoretical and Applied Computer Science (LITA), University Paul Verlaine – Metz, Ile du Saulcy, 57045 Metz Cedex, France; (b) Laboratory of Modelling, Optimization & Operations Research (LMI), National Institute for Applied Sciences – Rouen, Place Emile Blondel – BP 08, F76131 Mont Saint Aignan Cedex, France; (c) Department of Mathematics, Technical University of Denmark, Building 303S, DK-2800 Kgs. Lyngby, Denmark; (d) Institute of Mathematics, 18 Hoang Quoc Viet Road, 10307 Hanoi, Vietnam
(Received 5 March 2008; final version received 10 February 2009)
Both the efficient and the weakly efficient sets of an affine fractional vector optimization problem are, in general, neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem as a standard smooth mathematical programming problem, which allows applying available methods for smooth programming. When the objective function is linear, we investigate a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.
Keywords: affine fractional programming; Pareto efficiency; optimization over the efficient set; branch-and-bound; Lagrangian bound; simplicial bisection
AMS Subject Classifications: 65K10; 90C25
1. Introduction
Affine fractional vector programming (AFVP) plays an important role in vector optimization (see, e.g. [11,21,24,26]). Pareto efficiency is a fundamental concept widely used in vector optimization. Unlike solving a scalar mathematical program, where the goal is to find an optimal solution, in vector optimization finding a single Pareto efficient solution is not always meaningful. The problem of generating all of the Pareto efficient solutions of a vector mathematical programming problem is thus important. There exist a few algorithms that allow generating the whole Pareto efficient set of a linear [18,31] or convex [20] multiobjective programming problem.
*Corresponding author. Email: [email protected]
ISSN 0233–1934 print/ISSN 1029–4945 online
© 2010 Taylor & Francis
DOI: 10.1080/02331930903500290
http://www.informaworld.com
However, these algorithms work well only for problems of medium size, since the efficient set, even in the linear case, is in general not convex. In [4], an outer approximation algorithm is proposed for generating all efficient extreme points in the outcome set of a linear vector programming problem. This algorithm can solve problems with many decision variables, since its outer approximation is performed in the outcome space. For AFVP, as far as we know, no algorithm is available for generating the whole Pareto efficient set. In order to avoid generating the whole set of Pareto efficient solutions, the optimization problem over the Pareto efficient set has recently attracted much attention from researchers. This problem was first introduced by Philip [23], but it was only after the appearance of the remarkable paper [5] by Benson that it was intensively studied, and many algorithms for optimizing (globally or locally) a real-valued function over the efficient and weakly efficient sets of a linear vector program were developed (see, e.g. [1,2,5–7,13,17,19,22] and the references therein). These algorithms strongly exploit the fact that the efficient set is a connected union of some faces of the polyhedral convex constraint set. They also use the duality theory for linear vector programming, which allows formulating the efficient set as an equation defined by a DC function [1,2]. Unfortunately, these properties no longer hold for linearly constrained AFVPs, so the available algorithms developed for linear vector programming cannot be applied to AFVPs.
The problem of optimizing a real-valued function over the efficient set of a
linearly constrained AFVP has been studied in [12,21,30]. In [12], Choo and Atkins propose a parametric algorithm for optimizing a linear function over the efficient set of a bicriteria AFVP. This parametric algorithm works well for bicriteria cases, but it does not extend to three or more criteria. In [21], Malivert gives a necessary and sufficient condition for efficiency and weak efficiency in AFVP and uses it together with penalty techniques to reformulate the problem as a standard mathematical programming problem that, in general, is neither convex nor smooth. No solution method is discussed in [21] for the penalized problem. In [30], a branch-and-bound algorithm is proposed for globally optimizing a linear function over the efficient set of an AFVP. From a computational point of view, the algorithm proposed in [30] has the disadvantage that computing lower bounds requires solving minimax subproblems rather than linear ones.
In this article, we attempt to develop local and global optimization approaches
for optimization over the Pareto efficient and weakly efficient sets of an AFVP. Namely, we use a regularization function to reformulate the problem under consideration as a standard smooth mathematical programming problem in which the derivatives of the functions involved can be computed by solving a strongly convex quadratic program. This formulation allows well-developed methods of smooth optimization, such as penalty and gradient methods (see, e.g. [10]), to be applied. In order to find a global optimal solution, we propose a branch-and-bound algorithm using Lagrangian duality for bounding and a simplicial bisection for branching. Lagrangian bounds are commonly used in nonconvex optimization (see, e.g. [9,14,25,27,28] and the references therein). For linearly constrained AFVPs, thanks to the linear structure of the functions involved, lower bounds can be computed by solving linear programs, whereas the subdivision takes place on a simplex in the criteria space. As expected, the computational results obtained show that the proposed algorithm is efficient for
problems with a moderate number of criteria; the number of decision variables may be much larger.
This article is organized as follows. In the next section, we formulate the optimization problems over the efficient and weakly efficient sets as standard smooth mathematical programs. In Section 3, we describe relaxation and branch-and-bound algorithms for approximating global optimal solutions to these problems. The last section is devoted to computational experience and numerical results.
2. Smooth optimization formulations
Throughout this article, let K ⊂ IR^n be a nonempty bounded polyhedral convex set (polytope) and let f_i : K → IR, i ∈ I := {1, ..., p}, be affine fractional functions of the form

    f_i(x) := (a_i^T x + α_i) / (b_i^T x + β_i)   (i = 1, ..., p),

where a_i, b_i ∈ IR^n, α_i, β_i ∈ IR and p ≥ 1 is an integer. As usual, we suppose that b_i^T x + β_i > 0 for every i and every x ∈ K. Thus each f_i (i = 1, ..., p) is continuous on K. Consider the affine fractional vector optimization problem

    inf { f(x) := (f_1(x), ..., f_p(x)) : x ∈ K },    (VP)

where the minimum is understood in the sense of Pareto, as made precise by the following definition.
Definition 2.1  A vector x ∈ K is said to be a Pareto (resp. weakly Pareto) solution to (VP) if there is no y ∈ K such that f(y) ≤ f(x) and f(y) ≠ f(x) (resp. f(y) < f(x)), the inequalities being understood componentwise.
A Pareto (resp. weakly Pareto) solution to (VP) is also called an efficient (resp. weakly efficient) solution to (VP).
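Computationally, Definition 2.1 amounts to a componentwise dominance test. The sketch below (plain Python on a hypothetical toy instance; all names and data are ours, purely illustrative, not part of the paper's algorithms) shows the two notions:

```python
def dominates(fy, fx):
    """True if fy <= fx componentwise with fy != fx (Pareto dominance)."""
    return all(a <= b for a, b in zip(fy, fx)) and any(a < b for a, b in zip(fy, fx))

def strictly_dominates(fy, fx):
    """True if fy < fx componentwise (the test relevant to weak Pareto solutions)."""
    return all(a < b for a, b in zip(fy, fx))

def is_pareto(x, candidates, f):
    """x is Pareto among the candidates iff no candidate y has f(y) dominating f(x)."""
    fx = f(x)
    return not any(dominates(f(y), fx) for y in candidates if y != x)

# Toy bicriteria objective f(x) = (x1, 1 - x1) on three sample points of a segment.
f = lambda x: (x[0], 1.0 - x[0])
pts = [(0.0,), (0.5,), (1.0,)]
flags = [is_pareto(p, pts, f) for p in pts]
```

On this toy segment the two objectives trade off exactly, so every sample point is efficient (`flags` is all `True`).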
Let E(K, f) (resp. WE(K, f)) denote the efficient set (resp. weakly efficient set) of (VP). Since K is compact, E(K, f) ≠ ∅, but it may be neither closed nor open, whereas WE(K, f) is closed [11]. The problems considered in this article are those of minimizing a real-valued function f_0 over E(K, f) and over WE(K, f). These problems can be written as

    inf { f_0(x) : x ∈ E(K, f) }    (1)

and

    inf { f_0(x) : x ∈ WE(K, f) },    (2)

respectively. Two main difficulties of these problems are that the efficient and weakly efficient sets are not given as a system of inequalities and/or equalities and that, in general, these sets are not convex. Thus these problems, even when f_0 is linear, are truly difficult. Note that, unlike the linear case, problems (1) and (2), even with f_0 linear, do not necessarily attain their optimal value at a vertex of K. Benson [5] has introduced a merit function that has been successfully used to analyse the structure of the efficient and weakly efficient sets of a vector linear problem and to develop algorithms for optimizing over these sets (see, e.g. [1,2,5,6,19]). We can then
investigate characterizations and obtain an explicit form of the efficient and weakly efficient sets.
In this section, following Benson's idea [5], we use the merit function approach to handle the efficient and weakly efficient sets of an AFVP. Namely, we use a regularization function to describe these sets explicitly as equations defined by smooth functions. To this end, let Δ denote the standard simplex in IR^p, that is,

    Δ := { λ = (λ_1, ..., λ_p) ∈ IR^p_+ : Σ_{i=1}^p λ_i = 1 },

and let Δ⁰ be the relative interior of Δ. Hence

    Δ⁰ = { λ = (λ_1, ..., λ_p) > 0 : Σ_{i=1}^p λ_i = 1 }.
The following theorem, due to Malivert [21], gives a characterization of efficiency and weak efficiency for affine fractional vector optimization problems.

THEOREM 2.1 [21]  A vector x ∈ K is a weakly efficient solution of problem (VP) if and only if there exists λ := (λ_1, ..., λ_p) ∈ Δ such that

    ⟨ Σ_{j=1}^p λ_j [ (b_j^T x + β_j) a_j − (a_j^T x + α_j) b_j ], x − y ⟩ ≤ 0   for all y ∈ K.    (3)

The vector x is an efficient solution to (VP) if and only if there exists λ := (λ_1, ..., λ_p) ∈ Δ⁰ such that (3) holds.
For each fixed λ, let M(λ) denote the n × n matrix

    M(λ) := Σ_{i=1}^p λ_i (a_i b_i^T − b_i a_i^T),    (4)

and let q(λ) ∈ IR^n be defined by

    q(λ) = Σ_{i=1}^p λ_i (β_i a_i − α_i b_i) = [ Σ_{i=1}^p (β_i a_i − α_i b_i) e_i^{(p)T} ] λ,    (5)

where { e_i^{(p)} : i = 1, ..., p } is the canonical basis of IR^p.
The mappings M : IR^p → IR^{n×n} and q : IR^p → IR^n are linear and enjoy the following properties, which will be used later to devise solution algorithms for (1) and (2).

LEMMA 2.2

(i) M(λ) is skew-symmetric: M(λ)^T = −M(λ).
(ii) M(λ)x is bilinear in u = (x, λ) and

    M(λ)x = N(x)λ,    (6)

where N(x) is the n × p matrix given by

    N(x) := Σ_{i=1}^p (a_i b_i^T − b_i a_i^T) x e_i^{(p)T}.    (7)
(iii) q(λ)^T x is bilinear in u = (x, λ) and

    q(λ)^T x = Φ(x)^T λ,    (8)

where Φ(x) ∈ IR^p is linear in x and defined by

    Φ(x) := Σ_{i=1}^p (β_i a_i − α_i b_i)^T x e_i^{(p)} = [ Σ_{i=1}^p e_i^{(p)} (β_i a_i − α_i b_i)^T ] x.    (9)

Proof  Immediate from elementary matrix computations. □
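Lemma 2.2 can be checked numerically. The sketch below (plain Python on a small random instance; the helper names `M`, `N`, `q`, `Phi` are ours) builds the objects of (4)-(9) and verifies the skewness in (i) and the bilinearity identities (6) and (8):

```python
import random

random.seed(0)
n, p = 4, 3
a = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(p)]
b = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(p)]
alpha = [random.uniform(-1, 1) for _ in range(p)]
beta = [random.uniform(1, 2) for _ in range(p)]

def M(lam):
    # M(lam) = sum_i lam_i (a_i b_i^T - b_i a_i^T), an n-by-n skew-symmetric matrix.
    return [[sum(lam[i] * (a[i][r] * b[i][c] - b[i][r] * a[i][c])
                 for i in range(p)) for c in range(n)] for r in range(n)]

def q(lam):
    # q(lam) = sum_i lam_i (beta_i a_i - alpha_i b_i), a vector in IR^n.
    return [sum(lam[i] * (beta[i] * a[i][r] - alpha[i] * b[i][r])
                for i in range(p)) for r in range(n)]

def N(x):
    # N(x): n-by-p matrix whose i-th column is (a_i b_i^T - b_i a_i^T) x.
    return [[sum((a[i][r] * b[i][c] - b[i][r] * a[i][c]) * x[c]
                 for c in range(n)) for i in range(p)] for r in range(n)]

def Phi(x):
    # Phi(x): vector in IR^p with entries (beta_i a_i - alpha_i b_i)^T x.
    return [sum((beta[i] * a[i][c] - alpha[i] * b[i][c]) * x[c]
                for c in range(n)) for i in range(p)]

def matvec(A, v):
    return [sum(A[r][c] * v[c] for c in range(len(v))) for r in range(len(A))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

lam = [0.2, 0.3, 0.5]
x = [random.uniform(0, 1) for _ in range(n)]
Ml = M(lam)
skew_err = max(abs(Ml[r][c] + Ml[c][r]) for r in range(n) for c in range(n))
id6_err = max(abs(u - v) for u, v in zip(matvec(Ml, x), matvec(N(x), lam)))
id8_err = abs(dot(q(lam), x) - dot(Phi(x), lam))
```

All three residuals vanish up to rounding, as Lemma 2.2 asserts.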
We can write (3) in the matrix form

    ⟨M(λ)x, y − x⟩ + ⟨q(λ), y − x⟩ ≥ 0   for all y ∈ K.    (10)

Now consider the function F(u) = F(x, λ) defined on IR^n × IR^p by

    F(u) = F(x, λ) := ( M(λ)x + q(λ), 0 ) ∈ IR^n × IR^p    (11)
and the related variational inequality problem:

    Find u = (x, λ) ∈ K × Δ such that ⟨F(u), v − u⟩ ≥ 0 for all v = (y, μ) ∈ K × Δ.    (VIP)

It is easy to see that

    x ∈ WE(K, f)  ⟺  u = (x, λ) solves (VIP) for some λ ∈ Δ,    (12)

i.e. WE(K, f) is the projection onto K of the solution set of (VIP). Similarly, we obtain the analogous property for E(K, f) by replacing Δ with Δ⁰ in (12):

    x ∈ E(K, f)  ⟺  u = (x, λ) solves (VIP) for some λ ∈ Δ⁰.    (13)

Hence problems (1) and (2) are special cases of mathematical programs with equilibrium constraints, for which reformulations by ordinary constraints are usually preferred. To this end, we consider the gap function φ_r (r being a fixed nonnegative parameter), as in [3] for r = 0 and in [15] for r > 0:
    φ_r(u) ≡ φ_r(x, λ) := inf { ⟨F(u), v − u⟩ + (r/2)||v − u||² : v = (y, μ) ∈ K × Δ }
              = inf { ⟨M(λ)x + q(λ), y − x⟩ + (r/2)||y − x||² + (r/2)||μ − λ||² : v = (y, μ) ∈ K × Δ }
              = inf { ⟨M(λ)x + q(λ), y − x⟩ + (r/2)||y − x||² : y ∈ K } + min { (r/2)||μ − λ||² : μ ∈ Δ }.    (14)

Since λ ∈ Δ, the last minimum is attained at μ = λ and vanishes. Hence

    φ_r(u) = φ_r(x, λ) = inf { ψ_r(x, λ, y) := ⟨M(λ)x + q(λ), y − x⟩ + (r/2)||y − x||² : y ∈ K }.    (15)

In fact, the matrix M(λ) being skew-symmetric, we have ⟨M(λ)x, x⟩ = 0, and (15) takes the simpler form

    φ_r(u) = φ_r(x, λ) = inf { ψ_r(x, λ, y) := ⟨M(λ)x, y⟩ + ⟨q(λ), y − x⟩ + (r/2)||y − x||² : y ∈ K }.    (16)
Since K is a nonempty bounded polyhedral convex set, problem (14) admits an optimal solution. More precisely, if r = 0, (14) is a linear program, while for r > 0 its solution set reduces to P_{K×Δ}(u − (1/r)F(u)), the Euclidean projection onto K × Δ of u − (1/r)F(u), which is exactly { (P_K(x − (1/r)(M(λ)x + q(λ))), λ) }. In the next section, for simplicity, we use the notation φ for the function φ_r with r = 0. Reformulating (VIP) as an ordinary constraint relies on the following result.
PROPOSITION 2.3

(i) −∞ < φ_r(u) = φ_r(x, λ) ≤ 0 for every u = (x, λ) ∈ K × Δ.
(ii) x ∈ WE(K, f) (resp. E(K, f)) if and only if φ_r(x, λ) = 0 for some λ ∈ Δ (resp. λ ∈ Δ⁰).
(iii) (Smoothing by regularization) For r > 0 the function φ_r is continuously differentiable and its derivative is given by

    ∇φ_r(u) = ∇φ_r(x, λ) = ( ∇_x ψ_r(x, λ, y(u)), ∇_λ ψ_r(x, λ, y(u)) ),    (17)

where y(u) is the unique solution of the problem (15) defining φ_r(u) and

    ∇_x ψ_r(x, λ, y(u)) := −M(λ)y(u) − q(λ) − r(y(u) − x),
    ∇_λ ψ_r(x, λ, y(u)) := N(x)^T y(u) + Φ(y(u) − x).    (18)

Proof

(i) Trivial from the definition of φ_r (take v = u).
(ii) Follows from (12), (13), (15) and the optimality conditions for the convex quadratic program (15). It can also be regarded as a consequence of general results related to the reformulations (VIP)-(12) (see [3,15]).
(iii) Since the mapping u = (x, λ) ↦ M(λ)x + q(λ) is continuously differentiable on IR^n × IR^p, according to [3,15] the function φ_r is continuously differentiable too, and (17) holds. It remains to show (18). From (16) it follows that

    ψ_r(x, λ, y) = ⟨M(λ)x, y⟩ + ⟨q(λ), y − x⟩ + (r/2)||y − x||²,    (19)

and using (i) of Lemma 2.2 we get

    ψ_r(x, λ, y) = −⟨x, M(λ)y⟩ + ⟨q(λ), y − x⟩ + (r/2)||y − x||².    (20)

The first equation of (18) is then immediate. On the other hand, (ii) and (iii) of Lemma 2.2 imply that

    ψ_r(x, λ, y) = ⟨λ, N(x)^T y⟩ + ⟨λ, Φ(y − x)⟩ + (r/2)||y − x||²,

and the proof is complete. □
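For r > 0, the inner problem (15) is a strongly convex quadratic program, and when K is a box (as in the test problems of Section 4) the minimizer y(u) is obtained by coordinatewise clipping. A minimal sketch (plain Python on hypothetical data; the vector `c` abstracts M(λ)x + q(λ)):

```python
def gap_value_box(x, c, r, b):
    """phi_r at u for the box K = [0, b]^n:
    inf over y in K of <c, y - x> + (r/2)||y - x||^2,
    where c stands for M(lambda)x + q(lambda).
    The minimizer y(u) is the clipped point P_K(x - c/r)."""
    y = [min(max(xi - ci / r, 0.0), bi) for xi, ci, bi in zip(x, c, b)]
    val = sum(ci * (yi - xi) + 0.5 * r * (yi - xi) ** 2
              for ci, yi, xi in zip(c, y, x))
    return val, y

# Hypothetical data: n = 2, box [0, 1]^2, r = 1.
x = [0.5, 0.5]
c = [1.0, -2.0]
phi, y = gap_value_box(x, c, 1.0, [1.0, 1.0])
```

By Proposition 2.3(i) the value is always nonpositive (y = x is feasible and gives 0); a weakly efficient pair is detected exactly when the value is zero.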
By virtue of Proposition 2.3, problems (1) and (2) can be reformulated as

    inf f_0(x) subject to (x, λ) ∈ K × Δ⁰, φ_r(x, λ) = 0    (21)
and

    inf f_0(x) subject to (x, λ) ∈ K × Δ, φ_r(x, λ) = 0,    (22)

respectively. For r > 0, these two problems are smooth mathematical programs whenever f_0 is differentiable, and they are therefore more tractable by available methods for smooth programming.
Clearly, when the vector optimization problem is linear, i.e. b_i = 0 and β_i = 1 for all i = 1, ..., p, the function φ_r takes the form

    φ_r(x, λ) = inf_{u ∈ K} { ⟨q(λ), u − x⟩ + (r/2)||u − x||² }.

By Proposition 2.3, the efficient and weakly efficient sets of a linear vector problem can be described by ordinary constraints as follows:

    E(K, f)  = { x : (x, λ) ∈ K × Δ⁰ and φ_r(x, λ) = 0 for some λ },
    WE(K, f) = { x : (x, λ) ∈ K × Δ  and φ_r(x, λ) = 0 for some λ }.
3. A global optimization approach
3.1. Optimization over the weakly efficient set
We first consider problem (2), optimization over the weakly efficient set of an AFVP. According to Lemma 2.2 and (10), we can write this problem in the following equivalent form:

    inf f_0(x) subject to (x, λ) ∈ K × Δ,
        ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0 for all u ∈ K.    (23)

Let V(K) denote the vertex set of the polytope K. Since

    ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0 for all u ∈ K

if and only if

    ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0 for all u ∈ V(K),

we can rewrite problem (23) in the form

    (P(Δ))  f_0(Δ) := inf f_0(x) subject to (x, λ) ∈ K × Δ,
            ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0 for all u ∈ V(K).    (24)
Defining problem (P(Δ)) requires that all vertices of the polytope K be known in advance, so this formulation is recommended when the vertices of K can be computed easily. A special case has already appeared in some applications (see [24,26]) where each component x_i of the decision variable x represents the ratio of the ith quantity to be determined. In this case K is the simplex

    K = { x = (x_1, ..., x_n) : Σ_{i=1}^n x_i = 1, x_i ≥ 0 for all i },    (25)

and V(K) is the canonical basis { e_i^{(n)} : i = 1, ..., n } of IR^n.
In general, finding all vertices of a polytope may be very costly. The algorithm we
are going to describe is a relaxation procedure in which vertices of K are generated
iteratively by solving related linear programs. If it happens that the currently
generated vertex is not a new one, then the solution of the relaxed problem is also
a global optimal solution of the original one. Otherwise, the newly generated vertex
helps to build the new relaxed subproblem for the next iteration. In this way one may
avoid computing all vertices of K.
Algorithm 1 (a general relaxation scheme for globally solving (P(Δ)))

Step 1. Choose some distinct vertices u^1, ..., u^q of K. Let V_ℓ := {u^1, ..., u^q}, ℓ := 0.

Step 2. Solve the problem

    (P(Δ, V_ℓ))  f_0(Δ, V_ℓ) := inf f_0(x) subject to (x, λ) ∈ K × Δ,
                 ⟨M(λ)u^i, x⟩ + ⟨q(λ), x − u^i⟩ ≤ 0  (i = 1, ..., q)

to obtain a global optimal solution (x^ℓ, λ^ℓ) of (P(Δ, V_ℓ)).

Step 3. Solve the linear program

    sup { ⟨−M(λ^ℓ)x^ℓ − q(λ^ℓ), u⟩ : u ∈ K }    (L_ℓ)

to obtain a basic optimal solution u^{q+1} ∈ V(K).

(a) If

    ⟨M(λ^ℓ)u^{q+1}, x^ℓ⟩ + ⟨q(λ^ℓ), x^ℓ − u^{q+1}⟩ ≤ 0,

then (x^ℓ, λ^ℓ) is a global optimal solution to (P(Δ)).

(b) Otherwise, i.e. if

    ⟨M(λ^ℓ)u^{q+1}, x^ℓ⟩ + ⟨q(λ^ℓ), x^ℓ − u^{q+1}⟩ > 0,

set V_{ℓ+1} := V_ℓ ∪ {u^{q+1}}, q := q + 1, ℓ := ℓ + 1 and go back to Step 2.
Finite convergence  The algorithm terminates after a finite number of repetitions of Step 2, yielding a global optimal solution to (P(Δ)).

Proof  First note that, for each q, the feasible set of (P(Δ)) is contained in that of (P(Δ, V_ℓ)); thus f_0(Δ, V_ℓ) ≤ f*, where f* is the optimal value of (P(Δ)). Moreover, the global optimal solution (x^ℓ, λ^ℓ) of (P(Δ, V_ℓ)) also solves (P(Δ)) if it is feasible for the latter, i.e. if

    ⟨−M(λ^ℓ)x^ℓ − q(λ^ℓ), u⟩ ≤ ⟨−q(λ^ℓ), x^ℓ⟩ for all u ∈ K.

Since u^{q+1} maximizes the left-hand side over K, this holds if and only if

    ⟨M(λ^ℓ)u^{q+1}, x^ℓ⟩ + ⟨q(λ^ℓ), x^ℓ − u^{q+1}⟩ ≤ 0.

Otherwise,

    ⟨M(λ^ℓ)u^{q+1}, x^ℓ⟩ + ⟨q(λ^ℓ), x^ℓ − u^{q+1}⟩ > 0,

and so u^{q+1} ∉ V_ℓ. Since |V(K)| is finite, Algorithm 1 must terminate after a finite number of repetitions of Step 2. □
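For the box K = {x : 0 ≤ x ≤ b} used in the experiments of Section 4, the linear program (L_ℓ) of Step 3 can be solved coordinatewise, and the cut-generation test becomes a few lines of code. The sketch below (plain Python; the helper name and the data are ours, with M(λ)x and q(λ) supplied as plain vectors) is an illustration of Step 3 under that box assumption, not the authors' implementation:

```python
def separate_box(Mx, qv, x, b):
    """Step 3 of Algorithm 1 for the box K = [0, b]^n.
    Maximizes <-M(lam)x - q(lam), u> over K, attained at a vertex u*
    (u*_i = b_i if the i-th coefficient is positive, else 0), then tests the
    feasibility condition <-M(lam)x - q(lam), u*> <= <-q(lam), x>.
    Returns (feasible, u*); if infeasible, u* is the vertex to add to V_l."""
    coeff = [-(m + qi) for m, qi in zip(Mx, qv)]
    u = [bi if ci > 0 else 0.0 for ci, bi in zip(coeff, b)]  # optimal vertex of (L_l)
    lhs = sum(ci * ui for ci, ui in zip(coeff, u))
    rhs = -sum(qi * xi for qi, xi in zip(qv, x))
    return lhs <= rhs + 1e-12, u

# Hypothetical 2-d instance on the box [0, 1]^2.
feasible, u = separate_box([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
```

When the test fails, the returned vertex plays the role of u^{q+1} in Step 3(b).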
Here and in what follows, by 'an ε-global optimal solution to problem (P(Δ))' we mean a feasible point (x*, λ*) such that |f_0(x*) − γ*| ≤ ε(|f_0(x*)| + 1), where ε ≥ 0 and γ* denotes the global optimal value of problem (P(Δ)).
Clearly, if in Algorithm 1 (x^ℓ, λ^ℓ) is an ε-global optimal solution to (P(Δ, V_ℓ)), then the algorithm terminates at an ε-global optimal solution of (P(Δ)). The main subproblem to be solved in Algorithm 1 is the relaxed problem (P(Δ, V_ℓ)). This problem can be solved by a branch-and-bound procedure, which we briefly describe as follows.
The procedure starts with R_0 := {Δ}. At the beginning of each iteration k = 0, 1, ..., one has the following:

- R_k: a family of full-dimensional subsimplices of the simplex Δ. To each member S ∈ R_k is assigned a real number β(S, V_ℓ), which serves as a lower bound for the optimal value of the problem (P(Δ, V_ℓ)) restricted to S, that is (with V_ℓ := {u^i : i = 1, ..., q}),

    (P(S, V_ℓ))  f_0(S, V_ℓ) := inf f_0(x) subject to (x, λ) ∈ K × S,
                 ⟨M(λ)u^i, x⟩ + ⟨q(λ), x − u^i⟩ ≤ 0  (i = 1, ..., q).

- (x^k, λ^k): the currently best feasible point for (P(Δ, V_ℓ)).
- γ_k = f_0(x^k): the currently smallest upper bound for f_0(Δ, V_ℓ).
- β_k: the currently largest lower bound for f_0(Δ, V_ℓ). Hence β_k ≤ f_0(Δ, V_ℓ) ≤ γ_k.

The algorithm terminates when the difference between the upper and lower bounds is less than or equal to a given tolerance.
In this algorithm, as in all branch-and-bound algorithms, the most important and difficult task is computing the lower bound β(S, V_ℓ) for each partition set S in such a way that the algorithm is convergent and efficient. We use Lagrangian duality to find lower bounds, which seems well suited, at least when the objective function f_0 is linear, to the affine fractional structure of the problem being solved. Below we develop this approach for a linear objective f_0.
The Lagrangian bounding operation  We suppose that the nonempty bounded polyhedral convex set K is given as

    K := { x ≥ 0 : Ax ≤ b }    (26)

and that f_0 is a linear function, f_0(x) := d^T x. In this case problem (P(S, V_ℓ)) takes the form

    f_0(S, V_ℓ) := min d^T x subject to Ax ≤ b, x ≥ 0, λ ∈ S,
                   ⟨M(λ)u^j, x⟩ + ⟨q(λ), x − u^j⟩ ≤ 0  (j = 1, ..., q),    (27)

where V_ℓ = {u^j : j = 1, ..., q} ⊂ V(K) is defined in Algorithm 1.
For each λ ∈ Δ, let A(λ) (resp. b(λ)) denote the q × n matrix (resp. the vector in IR^q) defined by ({ e_j^{(q)} : j = 1, ..., q } being the canonical basis of IR^q)

    A(λ) := Σ_{j=1}^q e_j^{(q)} [ M(λ)u^j + q(λ) ]^T    (28)
and

    b(λ) := Σ_{j=1}^q q(λ)^T u^j e_j^{(q)}.    (29)

Now consider the (m + q) × n matrix H(λ) and the vector h(λ) ∈ IR^{m+q} given by

    H(λ) := [ A ; A(λ) ],   h(λ) := [ b ; b(λ) ],    (30)

where the rows of A are stacked over those of A(λ), and b over b(λ). It follows from Lemma 2.2 that the mappings H and h are affine in the variable λ. Problem (27) can be rewritten in the form

    (P(S, V_ℓ))  f_0(S, V_ℓ) := inf d^T x subject to x ≥ 0, λ ∈ S, H(λ)x − h(λ) ≤ 0.
Set

    η_{V_ℓ}(λ) := inf { d^T x : x ≥ 0, H(λ)x − h(λ) ≤ 0 },    (31)

which is a linear program, and let

    Γ(λ) := { x ∈ IR^n : x ≥ 0, H(λ)x − h(λ) ≤ 0 }
          = { x ∈ IR^n : x ≥ 0, Ax ≤ b, A(λ)x − b(λ) ≤ 0 }

by virtue of (28) and (29). Since the mappings A(·) and b(·) are affine and the polyhedral convex set K = { x ∈ IR^n : x ≥ 0, Ax ≤ b } is supposed to be bounded, Γ(λ) is a bounded polyhedral convex set. According to [3], the multivalued mapping Γ is upper semicontinuous, and the marginal function η_{V_ℓ} is therefore continuous on S. Clearly (it remains to show that problem (36) below is solvable),

    f_0(S, V_ℓ) = inf_{λ ∈ S} η_{V_ℓ}(λ).    (32)
Now we define the Lagrangian function of problem (P(S, V_ℓ)) with respect to the constraint H(λ)x − h(λ) ≤ 0, that is,

    L(x, λ, w) := d^T x + ⟨w, H(λ)x − h(λ)⟩.    (33)

By Lagrangian duality for the linear program (31) we have the relation

    inf_{x ≥ 0} sup_{w ≥ 0} L(x, λ, w) = sup_{w ≥ 0} inf_{x ≥ 0} L(x, λ, w) = η_{V_ℓ}(λ) for all λ ∈ S,    (34)

which implies that

    inf_{λ ∈ S} sup_{w ≥ 0} inf_{x ≥ 0} L(x, λ, w) = inf_{λ ∈ S} inf_{x ≥ 0} sup_{w ≥ 0} L(x, λ, w) = inf_{λ ∈ S} η_{V_ℓ}(λ) = f_0(S, V_ℓ)    (35)

and

    sup_{w ≥ 0} inf_{λ ∈ S} inf_{x ≥ 0} L(x, λ, w) ≤ inf_{λ ∈ S} sup_{w ≥ 0} inf_{x ≥ 0} L(x, λ, w).
Thus

    β(S, V_ℓ) := sup_{w ≥ 0} inf_{λ ∈ S} inf_{x ≥ 0} L(x, λ, w) ≤ f_0(S, V_ℓ),    (36)

which means that β(S, V_ℓ) is a lower bound for the optimal value of problem (P(S, V_ℓ)). The following proposition shows that this lower bound can be computed by maximizing a piecewise linear concave function subject to linear constraints, which in turn is a linear program.
PROPOSITION 3.1

(i) Let S and S' be two subsimplices of Δ such that S ⊂ S'; then β(S', V_ℓ) ≤ β(S, V_ℓ).
(ii) Let V(S) := { λ^i : i = 1, ..., p } be the vertex set of the simplex S. Then

    β(S, V_ℓ) = sup { g_{S,V_ℓ}(w) : w ≥ 0, H(λ^i)^T w + d ≥ 0, i = 1, ..., p },

where

    g_{S,V_ℓ}(w) := min_{i=1,...,p} { −h(λ^i)^T w }

is a piecewise linear concave function.
Proof

(i) Immediate from (36).
(ii) By definition,

    β(S, V_ℓ) = sup_{w ≥ 0} inf_{λ ∈ S} inf_{x ≥ 0} [ d^T x + w^T (H(λ)x − h(λ)) ] = sup_{w ≥ 0} inf_{λ ∈ S} θ_{V_ℓ}(λ, w),    (37)

where

    θ_{V_ℓ}(λ, w) := inf_{x ≥ 0} L(x, λ, w) = inf_{x ≥ 0} [ ⟨d + H(λ)^T w, x⟩ − ⟨w, h(λ)⟩ ].    (38)

Since

    θ_{V_ℓ}(λ, w) = −⟨w, h(λ)⟩  if d + H(λ)^T w ≥ 0,   and   θ_{V_ℓ}(λ, w) = −∞  otherwise,

there holds

    β(S, V_ℓ) = sup { inf_{λ ∈ S} { −w^T h(λ) } : w ≥ 0, d + H(λ)^T w ≥ 0 for all λ ∈ S }.

As mentioned above, the mappings H and h are affine in the variable λ; hence the constraint d + H(λ)^T w ≥ 0 for all λ ∈ S holds if and only if it holds at the vertices λ^1, ..., λ^p of S, and the inner infimum is attained at a vertex of S. Therefore

    β(S, V_ℓ) = sup { g_{S,V_ℓ}(w) : w ≥ 0, H(λ^i)^T w + d ≥ 0, i = 1, ..., p },

where

    g_{S,V_ℓ}(w) := inf_{λ ∈ S} { −w^T h(λ) } = min_{i=1,...,p} { −h(λ^i)^T w }. □
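Being the pointwise minimum of finitely many linear functions, g_{S,V_ℓ} is piecewise linear and concave. The sketch below (plain Python on hypothetical h(λ^i) vectors of our own choosing) evaluates it and checks midpoint concavity along a segment:

```python
def g(w, h_vertices):
    """g_{S,V_l}(w) = min over the vertices lam^i of S of -h(lam^i)^T w."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return min(-dot(h, w) for h in h_vertices)

# Hypothetical h(lam^i) for a simplex S with three vertices, w in IR^2.
H = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
w1, w2 = [1.0, 2.0], [3.0, 0.0]
wm = [(u + v) / 2 for u, v in zip(w1, w2)]
concave_ok = g(wm, H) >= (g(w1, H) + g(w2, H)) / 2 - 1e-12
```

A minimum of linear functions always satisfies this inequality, which is why (40) below can maximize g_{S,V_ℓ} by introducing a single scalar variable t.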
Remark 1

(i) It is worth noting that

    β(S, V_ℓ) = sup_{w ≥ 0} inf_{λ ∈ S} θ_{V_ℓ}(λ, w) ≤ inf_{λ ∈ S} sup_{w ≥ 0} θ_{V_ℓ}(λ, w) = inf_{λ ∈ S} η_{V_ℓ}(λ) = f_0(S, V_ℓ).    (39)

(ii) β(S, V_ℓ) can be computed by solving the linear program

    sup_{(t, w)} { t : w^T h(λ^i) + t ≤ 0, w ≥ 0, d + H(λ^i)^T w ≥ 0, i = 1, ..., p }.    (40)

Denote by w_S the point at which β(S, V_ℓ) is attained, and let λ_S be the vertex of S corresponding to w_S, i.e. β(S, V_ℓ) = −h(λ_S)^T w_S. Having this bounding operation, we can describe the algorithm in detail as follows.
Algorithm 2 (for globally solving problem (P(Δ, V_ℓ)) in Step 2 of Algorithm 1)

Initialization  Choose a tolerance ε ≥ 0. Compute

    β_1 := inf { f_0(x) : Ax ≤ b, x ≥ 0 }.

Use the Lagrangian bounding operation described above to compute β(Δ, V_ℓ) as a lower bound for the optimal value f_0(Δ, V_ℓ) of problem (P(Δ, V_ℓ)). Let β_0 := max { β_1, β(Δ, V_ℓ) }. Take λ^0 as the vertex of Δ corresponding to β(Δ, V_ℓ).
Solve the linear program

    inf d^T x subject to Ax ≤ b, x ≥ 0,
        ⟨M(λ^0)u^j, x⟩ + ⟨q(λ^0), x − u^j⟩ ≤ 0  (j = 1, ..., q)

to obtain an optimal solution x^0 (hence (x^0, λ^0) is feasible for (P(Δ, V_ℓ))). Take γ_0 := d^T x^0 and

    R_0 := {Δ}  if γ_0 − β_0 > ε(|γ_0| + 1);   R_0 := ∅  otherwise.

Let k := 0.
Iteration k (k = 0, 1, ...)

Step 1 (Checking optimality)  If R_k = ∅, terminate: x^k is an ε-global optimal solution. Otherwise go to Step 2.

Step 2 (Selection)  Choose S_k ∈ R_k such that

    β_k := β(S_k, V_ℓ) = min { β(S, V_ℓ) : S ∈ R_k }.

Step 3 (Branching)  Take a longest edge [v_k, v'_k] of S_k and let λ_k := (v_k + v'_k)/2. Define

    S_{k1} := co( (V(S_k) \ {v_k}) ∪ {λ_k} ),   S_{k2} := co( (V(S_k) \ {v'_k}) ∪ {λ_k} )

(co stands for the convex hull).

Step 4 (Bounding)  For each newly generated simplex S_{k1}, S_{k2}, use the Lagrangian bounding operation to compute β(S_{kj}, V_ℓ) as a lower bound for f_0(S_{kj}, V_ℓ) (j = 1, 2) and to obtain the corresponding vertices λ_{S_{k1}} and λ_{S_{k2}}.
For each j = 1, 2, solve the linear program

    inf d^T x subject to Ax ≤ b, x ≥ 0,
        ⟨M(λ_{S_{kj}})u^i, x⟩ + ⟨q(λ_{S_{kj}}), x − u^i⟩ ≤ 0  (i = 1, ..., q)

to obtain x^{kj} (hence (x^{kj}, λ^{kj}) with λ^{kj} := λ_{S_{kj}} is feasible for problem (P(Δ, V_ℓ))). Update the upper bound by taking

    γ_{k+1} := min { γ_k, d^T x^{k1}, d^T x^{k2} }

and let (x^{k+1}, λ^{k+1}) ∈ { (x^k, λ^k), (x^{k1}, λ^{k1}), (x^{k2}, λ^{k2}) } be such that γ_{k+1} = d^T x^{k+1}. Take

    R'_k := (R_k \ {S_k}) ∪ { S_{k1}, S_{k2} },
    R_{k+1} := { S ∈ R'_k : γ_{k+1} − β(S, V_ℓ) > ε(|γ_{k+1}| + 1) },

and go to Step 1 of the next iteration k + 1.
Convergence theorem  If Algorithm 2 terminates at some iteration k, then (x^k, λ^k) is an ε-global optimal solution to (P(Δ, V_ℓ)). Otherwise, it generates an infinite sequence {(x^k, λ^k)} such that γ_k ↓ f_0(Δ, V_ℓ), β_k ↑ f_0(Δ, V_ℓ), and any cluster point of the sequence {(x^k, λ^k)} is a global optimal solution to (P(Δ, V_ℓ)).

Proof  By construction, the algorithm terminates at iteration k if and only if γ_k − β_k ≤ ε(|γ_k| + 1). Since β_k ≤ f_0(Δ, V_ℓ) ≤ γ_k = d^T x^k and (x^k, λ^k) is feasible for (P(Δ, V_ℓ)), it follows that (x^k, λ^k) is an ε-global optimal solution to (P(Δ, V_ℓ)).
Now suppose that the algorithm does not terminate. For every k, since S_k = S_{k1} ∪ S_{k2}, by the rule for computing the lower bounds we have

    β_k = β(S_k, V_ℓ) ≤ β(S_{k+1}, V_ℓ) = β_{k+1} ≤ f_0(Δ, V_ℓ) for all k.

Also, by the definition of the upper bound γ_k, we have f_0(Δ, V_ℓ) ≤ γ_{k+1} ≤ γ_k for every k. Thus

    β_k ≤ f_0(Δ, V_ℓ) ≤ γ_k for all k.

Hence both β_∞ := lim_k β_k and γ_∞ := lim_k γ_k exist and satisfy

    β_∞ ≤ f_0(Δ, V_ℓ) ≤ γ_∞.

Since the algorithm does not terminate, it generates an infinite nested sequence of subsimplices which, for simplicity of notation, we also denote by {S_k}. Since the subdivision is exhaustive, this sequence shrinks to a singleton, say λ* ∈ Δ [16]. Since γ_k is the currently smallest upper bound at iteration k, we have γ_{k+1} ≤ η_{V_ℓ}(λ_{S_k}). As λ_{S_k} → λ*, by the continuity of η_{V_ℓ} it follows that

    γ_∞ ≤ η_{V_ℓ}(λ*).    (41)

Using the duality theorem for the linear program defining η_{V_ℓ}(λ*), we have

    η_{V_ℓ}(λ*) = sup_{w ≥ 0} inf_{x ≥ 0} L(x, λ*, w).    (42)
On the other hand, by the definition of β_k, we can write

    β_k = sup_{w ≥ 0} inf_{λ ∈ S_k} inf_{x ≥ 0} L(x, λ, w).

Letting k → +∞, we obtain

    β_∞ ≥ sup_{w ≥ 0} inf_{x ≥ 0} L(x, λ*, w).    (43)

Since β_∞ ≤ γ_∞, it follows from (41)-(43) that

    β_∞ = η_{V_ℓ}(λ*) = f_0(Δ, V_ℓ) = γ_∞.

Let (x̄, λ̄) be a cluster point of the sequence {(x^k, λ^k)}; for simplicity of notation, we suppose (x^k, λ^k) → (x̄, λ̄). Since (x^k, λ^k) is feasible for (P(Δ, V_ℓ)), we have

    x^k ∈ K,  λ^k ∈ Δ,  H(λ^k)x^k − h(λ^k) ≤ 0.

By the closedness of K and Δ and the continuity of H and h, it follows that

    x̄ ∈ K,  λ̄ ∈ Δ,  H(λ̄)x̄ − h(λ̄) ≤ 0,

which means that (x̄, λ̄) is feasible for (P(Δ, V_ℓ)). On the other hand, since γ_k = d^T x^k, we obtain in the limit γ_∞ = d^T x̄. Hence (x̄, λ̄) is a global optimal solution to (P(Δ, V_ℓ)). □
Remark 2  In Algorithm 2, we may use the following branching rule. Choose an integer N ≥ 1. At iteration k, if k is a multiple of N, bisect S_k via the midpoint of a longest edge of S_k; otherwise, bisect S_k via the midpoint of any edge of S_k. The convergence of the algorithm remains valid, according to Tuy [29]. This rule is important for improving the efficiency of the algorithm: in an implementation, when the lower bounds improve slowly, it suggests changing the 'direction' of the search by choosing another edge for branching.
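The longest-edge bisection of Step 3, to which Remark 2 refers, takes only a few lines. The sketch below (plain Python, vertices as tuples; the function name is ours) splits a simplex at the midpoint of a longest edge, producing the two subsimplices S_{k1} and S_{k2}:

```python
def bisect_longest_edge(simplex):
    """Split a simplex (list of vertex tuples) at the midpoint of a longest edge,
    returning the two subsimplices S_k1 and S_k2 of Step 3 of Algorithm 2."""
    sq = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    # Pick a pair of vertex indices (i, j) realizing a longest edge.
    i, j = max(((i, j) for i in range(len(simplex))
                for j in range(i + 1, len(simplex))),
               key=lambda e: sq(simplex[e[0]], simplex[e[1]]))
    mid = tuple((ui + vi) / 2 for ui, vi in zip(simplex[i], simplex[j]))
    s1 = [v for k, v in enumerate(simplex) if k != i] + [mid]
    s2 = [v for k, v in enumerate(simplex) if k != j] + [mid]
    return s1, s2

# Standard simplex Delta in IR^3 (p = 3 criteria).
S = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
S1, S2 = bisect_longest_edge(S)
```

Repeated bisection via longest edges is exhaustive, i.e. nested subsimplices shrink to a point, which is the property used in the convergence proof above.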
Remark 3  In the general case where f0 is not linear, β(S, V_ℓ) given by (36) is still a lower bound for f0(S, V_ℓ). However, Lemma 3.1 is no longer valid, and hence β(S, V_ℓ) cannot be computed by (40).
3.2. Optimization over the efficient set
Let us now consider the optimization problem over the efficient set E(K, f). It is obtained from (24) by just replacing the closed standard simplex Δ with its relative interior Δ^0:
(P(Δ^0))   f0(Δ^0) := inf d^T x  subject to  (x, λ) ∈ K × Δ^0,
           ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0  ∀u ∈ V(K).   (44)
Like (24) we have

f0(Δ^0) = inf{φ_{V(K)}(λ) : λ ∈ Δ^0},

where

φ_{V(K)}(λ) := inf{d^T x : x ∈ K, ⟨M(λ)u, x⟩ + ⟨q(λ), x − u⟩ ≤ 0 ∀u ∈ V(K)}.
As noticed previously, the marginal function φ_{V(K)} is continuous, and since Δ is the closure of Δ^0, we have

f0(Δ^0) = f0(Δ),

and the solution set of (P(Δ^0)) is contained in that of (P(Δ)). More precisely, we have

P^0 := {(x*, λ*) ∈ P : λ* ∈ Δ^0},   (45)

where P (resp. P^0) denotes the solution set of (P(Δ)) (resp. (P(Δ^0))). It follows that an approximate global solution (x*, λ*) of (P(Δ)) computed by the global algorithm is also an approximate global solution of (P(Δ^0)) if λ* ∈ Δ^0.   ∎
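In floating-point arithmetic, the acceptance test λ* ∈ Δ^0 amounts to checking that λ* lies on the unit simplex with every component strictly positive up to a tolerance. A minimal sketch (the tolerance value is our own choice, not specified in the paper):

```python
import numpy as np

def in_relative_interior(lam, tol=1e-9):
    """Check lam ∈ Δ^0: the components sum to 1 (up to tol) and every
    component is strictly positive (greater than tol)."""
    lam = np.asarray(lam, dtype=float)
    return abs(lam.sum() - 1.0) <= tol and bool((lam > tol).all())
```

For instance, in_relative_interior([0.2, 0.3, 0.5]) holds, while a boundary point such as [0.0, 0.4, 0.6] is rejected, so the corresponding (x*, λ*) would only be known to solve (P(Δ)).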
4. Computational experience and results
We have tested the proposed algorithms on a Sony Vaio laptop with 1 GB of main memory and a 2 GHz processor. We used CPLEX 7.5 to solve the linear subprograms.
For Algorithm 1 we take the constraint set K = {x ∈ ℝ^n : 0 ≤ x ≤ b}. The vector b ∈ ℝ^n and the affine fractional functions are randomly generated. The obtained results are reported in Table 1, where
Iter: the number of iterations, which is the number of generated vertices;
MaxIterBB and AvgIterBB: the maximum and average numbers of iterations in the branch-and-bound procedure, respectively;
LB and UB: lower and upper bounds, respectively.
From the results in Table 1 we can observe that the number of generated vertices is very small compared with 2^n (the number of vertices of the box) and, as could be expected, most of the runtime was spent on globally solving the problems (P(Δ, V_ℓ)) in Step 2 of Algorithm 1.
To test Algorithm 2, we take

K := {x ∈ ℝ^n : x ≥ 0, Σ_{i=1}^{n} x_i ≤ 1}.
The affine fractional functions are generated randomly as before. The computational results are reported in Table 2.
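The paper does not spell out its random generator; a plausible sketch (our own assumption) produces p affine fractional criteria f_i(x) = (a_i^T x + α_i)/(b_i^T x + β_i) whose denominators stay positive on the feasible set above:

```python
import numpy as np

def random_instance(n, p, rng=np.random.default_rng(0)):
    """Generate p random affine fractional criteria on
    K = {x >= 0, sum(x) <= 1}: numerators a_i^T x + alpha_i and
    denominators b_i^T x + beta_i. Taking b_i >= 0 and beta_i >= 1
    keeps every denominator positive on K (an assumption of this
    sketch, not a rule stated in the paper)."""
    a = rng.uniform(-10.0, 10.0, size=(p, n))
    alpha = rng.uniform(-10.0, 10.0, size=p)
    b = rng.uniform(0.0, 10.0, size=(p, n))   # nonnegative denominator slopes
    beta = rng.uniform(1.0, 10.0, size=p)     # intercepts bounded away from 0
    return a, alpha, b, beta

def evaluate(a, alpha, b, beta, x):
    """Evaluate the p criteria at a point x."""
    return (a @ x + alpha) / (b @ x + beta)
```

On K, since x ≥ 0 and b_i ≥ 0, each denominator is at least β_i ≥ 1, so the criteria are well defined over the whole feasible set.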
Table 1. Numerical results – Box.

Problem   n  p  Iter        LB         UB  Times (s)  MaxIterBB  AvgIterBB
P1.2     25  3    24  -1695.535  -1614.846     525.13       1439        489
P2.2     30  3    16   -323.675   -308.265     444.19       3851        641
P3.2     35  3    20  -2889.520  -2752.147     318.11       1326        313
P4.2     40  3    53  -3139.805  -3015.015    8774.37       5551       1541
P5.2     50  3    42  -5413.317  -5155.584    5589.95       3351       1240
P6.2     60  3    27  -1568.517  -1493.857    4479.24       7977       1749
P7.2     20  4     5   -231.462   -221.384      93.64       1887        654
P8.2     25  4     5    -43.824    -41.739     120.98       2156        792
P9.2     35  4     4  -1293.226  -1260.000     281.45       7783       1946
P10.2    50  4     8  -1428.000  -1428.000     137.03        664        367
From the results reported in Table 2 we can conclude the following:
. The proposed Algorithm 2 is efficient for problems with up to eight criteria, while the number of decision variables may be fairly large.
. As expected, since the subdivision takes place in the criteria space, the computational time is very sensitive to the number of criteria.
5. Conclusion
We have developed methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program. By using a regularization function, we reformulate the problem as a standard smooth mathematical programming problem, which allows applying available local methods for smooth nonconvex programming. In the case where the objective function is linear, we proposed a global algorithm based upon a branch-and-bound procedure, which uses the Lagrangian bound coupled with a simplicial bisection in the criteria space. The preliminary computational results reported show that the global algorithm is promising. We are currently investigating the case where f0 is convex (and thereafter a DC function) and a combination of local approaches with branch-and-bound techniques in order to improve the related global algorithm.
References
[1] L.T. Hoai An, P.D. Tao, and L.D. Muu, Numerical solution for optimization over the efficient set by DC optimization algorithms, Oper. Res. Lett. 19 (1996), pp. 117–126.
[2] L.T. Hoai An, P.D. Tao, and L.D. Muu, Simplicially constrained DC optimization over the efficient and weakly efficient sets, J. Optim. Theory Appl. 117 (2003), pp. 503–521.
[3] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, 1976.
[4] H. Benson, An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem, J. Global Optim. 13 (1998), pp. 1–24.
[5] H. Benson, Optimization over the efficient set, J. Math. Anal. Appl. 98 (1984), pp. 562–580.
[6] H. Benson, An all-linear programming relaxation algorithm for optimizing over the efficient set, J. Global Optim. 1 (1991), pp. 83–104.
Table 2. Numerical results – Standard simplex.

Problem    n  p  Iter       LB       UB  Times (s)
P1.1     100  3   139  -29.815  -28.412     140.86
P1.2     120  3    54  -23.732  -23.281     100.17
P1.3      90  4   137  -50.685  -48.321     137.25
P1.4      40  5    66  -28.241  -28.125      25.08
P1.5      70  5    73  -98.750  -98.750      58.72
P1.6      50  6   111  -39.273  -39.273      38.59
P1.7      60  6   283  -49.947  -49.947     147.25
P1.8      50  7   314  -48.640  -48.640     125.06
P1.9      40  8   195  -30.200  -29.000      53.91
P1.10     50  8    68  -99.157  -96.500      27.34
[7] H. Benson, A finite, nonadjacent extreme point search algorithm for optimization over the efficient set, J. Optim. Theory Appl. 73 (1992), pp. 47–64.
[8] A. Ben-Tal et al., Global minimization by reducing the duality gap, Math. Program. 63 (1994), pp. 193–212.
[9] C. Berge, Topological Spaces, MacMillan, New York, 1968.
[10] D.P. Bertsekas, Nonlinear Programming, 2nd ed., Athena Scientific, Nashua, USA, 2003.
[11] E.U. Choo and D.R. Atkins, Connectedness in multiple linear fractional programming, Management Sci. 29 (1983), pp. 250–255.
[12] E.U. Choo and D.R. Atkins, Bicriteria linear fractional programming, J. Optim. Theory Appl. 38 (1982), pp. 203–220.
[13] J. Ecker and J. Song, Optimizing a linear function over the efficient set, J. Optim. Theory Appl. 83 (1994), pp. 541–563.
[14] J.E. Falk, Lagrange multipliers and nonconvex programs, SIAM J. Control 7 (1969), pp. 534–545.
[15] M. Fukushima, A new merit function and a successive quadratic programming algorithm for variational inequality problems, SIAM J. Optim. 6 (1996), pp. 703–713.
[16] R. Horst and H. Tuy, Global Optimization (Deterministic Approaches), 3rd ed., Springer, Berlin, 1996.
[17] J.M. Jorge, A bilinear algorithm for optimizing a linear function over the efficient set of a multiple objective linear programming problem, J. Global Optim. 31 (2005), pp. 1–16.
[18] N.T.B. Kim and D.T. Luc, Normal cones to a polyhedral convex set and generating efficient faces in linear multiobjective programming, Acta Math. Vietnam. 25 (2000), pp. 100–124.
[19] L.T. Luc and L.D. Muu, A global optimization approach to optimization over the efficient set, in Recent Advances in Optimization, P. Gritzmann, R. Horst, E. Sachs, and R. Tichatschke, eds., Springer-Verlag, Berlin, 1997, pp. 183–195.
[20] D.T. Luc, T.Q. Phong, and M. Volle, Scalarizing functions for generating the weakly efficient solution set in convex multiobjective problems, SIAM J. Optim. 15 (2005), pp. 987–1001.
[21] C. Malivert, Multicriteria fractional optimization, in Proceedings of the 12th Catalan Days on Applied Mathematics, 1995, pp. 189–198.
[22] L.D. Muu and W. Oettli, Optimization over equilibrium sets, Optimization 49 (2000), pp. 179–189.
[23] J. Philip, Algorithms for the vector maximization problem, Math. Program. 1 (1972), pp. 207–228.
[24] S. Schaible, Fractional programming: Applications and algorithms, Eur. J. Oper. Res. 75 (1981), pp. 111–120.
[25] N.Z. Shor and P.I. Stetsyuk, Lagrangian bounds in multiextremal polynomial and discrete optimization problems, J. Global Optim. 23 (2002), pp. 1–41.
[26] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation and Application, John Wiley and Sons, New York, 1996.
[27] N.N. Thoai, Convergence and application of a decomposition method using duality bounds for nonconvex global optimization, J. Optim. Theory Appl. 113 (2002), pp. 165–193.
[28] H. Tuy, On solving nonconvex optimization problems by reducing the duality gap, J. Global Optim. 32 (2005), pp. 349–365.
[29] H. Tuy, Effect of subdivision strategy on convergence and efficiency of some global optimization algorithms, J. Global Optim. 1 (1991), pp. 23–36.
[30] H.Q. Tuyen and L.D. Muu, Biconvex programming approach to optimization over the efficient set of a multiple objective affine fractional problem, Oper. Res. Lett. 28 (2001), pp. 81–92.
[31] M. Zeleny, Linear Multiobjective Programming, Springer-Verlag, Berlin, 1974.