European Journal of Operational Research 197 (2009) 685–692


Decision Support

g-dominance: Reference point based dominance for multiobjective metaheuristics

Julián Molina c,*, Luis V. Santana a, Alfredo G. Hernández-Díaz b, Carlos A. Coello Coello a, Rafael Caballero c

a CINVESTAV-IPN, Computer Science Department, Mexico
b Pablo de Olavide University, Seville, Spain
c University of Málaga, Applied Economics (Mathematics), Campus El Ejido s/n, 29071, Málaga, Spain

Article history: Received 1 February 2007; Accepted 7 July 2008; Available online 25 July 2008

Keywords: Multiple-criteria decision making; Interactive methods; Preference information; Reference point

0377-2217/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2008.07.015

* Corresponding author. Tel.: +34 952 13 11 71; fax: +34 952 13 20 61. E-mail address: [email protected] (J. Molina).

One of the main tools for including decision maker (DM) preferences in the multiobjective optimization (MO) literature is the use of reference points and achievement scalarizing functions [A.P. Wierzbicki, The use of reference objectives in multiobjective optimization, in: G. Fandel, T. Gal (Eds.), Multiple-Criteria Decision Making Theory and Application, Springer-Verlag, New York, 1980, pp. 469–486]. The core idea in these approaches is converting the original MO problem into a single-objective optimization problem through the use of a scalarizing function based on a reference point. As a result, a single efficient point adapted to the DM's preferences is obtained. However, a single solution can be less interesting than an approximation of the efficient set around this area, as stated for example by Deb in [K. Deb, J. Sundar, N. Udaya Bhaskara Rao, S. Chaudhuri, Reference point based multiobjective optimization using evolutionary algorithms, International Journal of Computational Intelligence Research 2 (3) (2006) 273–286]. In this paper, we propose a variation of the concept of Pareto dominance, called g-dominance, which is based on the information included in a reference point and designed to be used with any MO evolutionary method or any MO metaheuristic. This concept will let us approximate the efficient set around the area of the most preferred point without using any scalarizing function. On the other hand, we will show how it can easily be used with any MO evolutionary method or any MO metaheuristic (just changing the dominance concept) and, to exemplify its use, we will show some results with some state-of-the-art methods and some test problems.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Multiple-criteria optimization naturally appears in most real-world applications, and the term MultiObjective Programming (MOP) problem refers to such problems. The first difficulty that we face when dealing with Multiobjective Optimization (MO) is that the notion of "optimum" changes. In this case, rather than aiming to find the global optimum, we look for good trade-offs among the objectives, which are obtained by using the definition of Pareto efficiency. Such a definition will lead us to obtain not one, but a set of (Pareto) efficient solutions (the Pareto front, PF).

Solving a multiobjective optimization problem is understood as helping a human Decision Maker (DM) to consider the multiple criteria simultaneously and to find a Pareto efficient solution that pleases him/her the most. More details about the resolution of a MOP can be found in Ref. [31].

The common element in all MOP techniques is the need to find a sufficiently wide and representative set of efficient points where the DM is able to find an alternative adjusted to his/her preferences. A commonly adopted approach to find this type of solutions


are the so-called Interactive MultiObjective methods (see Miettinen [25]), which assume that the DM is able to provide consistent feedback regarding which preferences to include in the resolution process. This interaction can guide the search towards the most preferred areas of the Pareto front and avoids exploring non-interesting solutions. These methods are very useful in real-world cases, as they help the DM to find the most preferred solutions in a consistent and reliable way.

The main problem when solving a real application is that some of the existing methods generate the entire Pareto set (most of the MO metaheuristics) whilst others produce a single point (most of the Interactive MultiObjective methods). Our aim in this paper is to produce something in between. Thus, we will show how the use of g-dominance within a MO metaheuristic lets us produce a (reduced) set of efficient points adapted to the DM's preferences, instead of the entire Pareto set or a single efficient solution.

One of the main tools for expressing preference information is the use of reference points [33]. Reference points consist of aspiration levels reflecting desirable values for the objective functions. This is a natural way of expressing preference information and lets the DM express hopes about his/her most preferred solutions. The reference point is projected onto the Pareto front by minimizing a so-called achievement scalarizing function [33], outlined in Section 3. Reference points and achievement scalarizing functions play the main role in some of the most commonly adopted methods, such as the light beam search [19], the visual interactive approach [21], the Pareto Race [22], the STOM method [26] and the NIMBUS method [24].

When solving real-world optimization problems, classical methods encounter great difficulty in dealing with the complexity involved in these situations and cannot offer a reliable solution. We can find many real applications in fields such as economics, engineering or science where methods with ample mathematical support (ensuring the optimality of solutions under ideal conditions) cannot obtain a solution, or cannot obtain one in a reasonable time. These facts led researchers to develop metaheuristic methods to solve these very complex models. The success of these types of strategies produced enormous interest in their study, giving rise to an active community and a number of very efficient metaheuristic algorithms for multiobjective optimization. Such approaches, which are generically called MultiObjective Meta-Heuristics (MOMH), are very popular nowadays, as shown in several surveys such as [20,15] or [5]. However, most of the MOMH focus on the approximation of the Pareto front without including the DM's preferences. As shown before, the determination or approximation of the Pareto front is not enough: the DM's preferences have to be incorporated in order to determine the solution that best represents them. But very few works can be found using MOMH which incorporate the DM's preferences (as shown in Section 2), and a common fact in all of them is that many modifications of the main architecture have to be made in order to include the DM's preferences into the MOMH.

This is an important fact when dealing with a complex problem, because not every MOMH is suitable or efficient for any given problem. In the MOMH literature, one can find efficient MOMH for nonlinear continuous problems, for combinatorial problems, for problems where evaluating the objective functions is very expensive, for vehicle routing problems, and for many other types of complex problems. Thus, we can say that, regardless of the type of problem to be solved, one can find an efficient MOMH method to deal with it. However, in most cases, these MOMH methods are not designed to include the DM's preferences, and modifying them for such an aim may be cumbersome, as we will see when reviewing MOMH methods including preferences.

In this paper, we propose a new concept of dominance that allows us to easily include the DM's preferences into any MOMH, without having to modify the main architecture of the specific search engine adopted. This concept combines the traditional Pareto efficiency (defined in Section 1.1) with the use of reference points (described in Section 3), and is designed to be used together with a MO metaheuristic in order to let it easily include the DM's preferences. As mentioned before, one of the main advantages with respect to the existing attempts to include preferences when using a MO metaheuristic (described in Section 2) is that g-dominance can be used without having to modify the main architecture of the method, as will be shown in Section 4. Finally, in Section 5 we validate our proposed approach by implementing g-dominance in two different metaheuristics: the NSGA-II [13], which is a MOEA representative of the state of the art in the area, and the DEMORS method [30], which is a hybrid of a differential evolution method with a Rough Sets tool.

1.1. Pareto efficiency

Given the MultiObjective Programming problem (MOP):

$$(\mathrm{MOP})\quad \min\ (f_1(x),\, f_2(x),\, \ldots,\, f_p(x)) \quad \text{s.t.}\ x \in X,$$

where x = (x_1, x_2, ..., x_n) are the decision variables, X is the set of feasible solutions, f_i are the objective functions, and f = (f_1, f_2, ..., f_p) is called the vector objective function.

A feasible solution x* ∈ X is (Pareto) efficient for the MOP problem if there does not exist any other solution x ∈ X such that

$$f_i(x) \le f_i(x^*) \quad \forall i = 1, \ldots, p,$$

with at least one j ∈ {1, ..., p} such that f_j(x) < f_j(x*). If this is not the case, the solution x* is said to be dominated by the solution x. The set of all the efficient solutions of the MOP is called the Pareto front.
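The efficiency definition above amounts to a componentwise comparison. As a minimal illustration (a sketch, not code from the paper), a Pareto dominance check for minimization can be written as:

```python
def dominates(w, w_prime):
    """True if w Pareto-dominates w_prime (minimization): w is no worse
    in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(w, w_prime))
            and any(a < b for a, b in zip(w, w_prime)))
```

Under this check, (1, 2) dominates (2, 2), while (1, 3) and (3, 1) are mutually non-dominated.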

2. Including preferences with a MultiObjective Metaheuristic

As indicated before, only a few works can be found using MOMH and including the DM's preferences. In Refs. [3,4], we can find a survey on including preferences when using a multiobjective evolutionary algorithm (MOEA). The following are some of the methods reviewed therein.

In Ref. [16], we can find the earliest attempt to incorporate preferences: the proposal was to use MOGA (introduced in the same paper) together with goal information as an additional criterion to assign ranks to the population. Greenwood and Hu [17] adopt utility functions to perform ranking of attributes, and also incorporate preference information into the survival criteria. Cvetkovic and Parmee [6,7] use binary preference relations (translated into weights) to narrow the search. These weights are used in different ways to modify the concept of dominance. Rekiek et al. [29] use the PROMETHEE method to generate weights for a MOEA. Massebeuf et al. [23] use PROMETHEE II in an a posteriori form: a MOEA generates efficient solutions and PROMETHEE II selects some of them based on the DM's preferences. In [8,12], Deb uses variations of Compromise Programming to bias the search of a MOEA. Finally, in Refs. [10,11], Deb requires the DM to provide specific goals for each objective.

More recently, some other approaches can be found, as in Ref. [27], where Phelps and Koksalan use pairwise comparisons to include the DM's preferences into the fitness function. In the Guided MultiObjective Evolutionary Algorithm (G-MOEA) proposed in Ref. [2], user preferences are taken into account using trade-offs, supplied by the DM, in order to modify the definition of dominance. In Ref. [1], Branke and Deb propose two schemes to include preference information when using a MOEA (they use the NSGA-II [13] for validation purposes): (1) modifying the definition of dominance (using the Guided Dominance Principle of G-MOEA) and (2) using a biased crowding distance based on weights. In Deb et al. [14], preferences are included through the use of reference points. In this paper, the authors claim that "a single solution does not provide a good idea of the properties of solutions near the desired region of the front" and that "by providing a (reference point), the decision-maker is not usually looking for a single solution, rather she/he is interested in knowing the properties of solutions which correspond to the optimum and near-optimum solutions respecting the (reference point)." But the approach followed to get this approximation is based on rankings, and thus it can only be applied with ranking-based methods, such as the NSGA-II. Also, even then, important modifications of the main algorithm have to be carried out.

In Ref. [28] (using a tabu search and simulated annealing method) and in Ref. [32] (using a simulated annealing method), the authors ask the DM to provide levels for each of the objectives at each iteration, and use such levels to constrain the solution space to be explored. Finally, Hapke et al. [18] (using a simulated annealing method) compute the approximation of the Pareto front and then invoke an interactive procedure to find the most preferred solution within that set.

Fig. 1. Including DM’s preferences.

Fig. 2. Projection onto the Pareto front.

Fig. 3. Sample around the projection.


Summarizing, all of these methods require important modifications to the MOMH used as a search engine in order to generate the Pareto front, and then it becomes more difficult to introduce further modifications for incorporating the DM's preferences. In general, a change in the main architecture of the MOMH is required for incorporating the user's preferences. This makes things difficult for practitioners, who are normally interested only in a small set of efficient solutions rather than the entire Pareto front. Thus, to solve a MOP problem, one must be able to find efficient solutions (i.e., resolution capabilities) and must be able to interact with the DM (i.e., interaction capabilities) in order to incorporate his/her preferences during the search process. But, as shown in Fig. 1, the most suitable method (Evolutionary Algorithms (EMO), Tabu Search (TS), Scatter Search (SS), etc.) to solve a given problem can be very difficult to modify in order to incorporate interaction, and then one can be forced to change the MOMH adopted.

3. Reference points and achievement scalarizing functions

Achievement scalarizing functions (asf) were first proposed in Ref. [33] and nowadays are part of many MOP methods. The achievement (scalarizing) function projects any given (feasible or infeasible) reference point g ∈ R^p onto the Pareto front. Also, as shown in Ref. [25], any efficient solution can be found using an asf. This approach transforms a MultiObjective Optimization problem (MOP) into the following single-objective problem (ASFP):

$$(\mathrm{ASFP})\quad \min\ s_g(f(x)) = \max_{i=1,\ldots,p}\{\omega_i\,(f_i(x) - g_i)\} + \rho \sum_{i=1}^{p} (f_i(x) - g_i) \quad \text{s.t.}\ x \in X,$$

where ρ > 0 is a small augmentation coefficient and ω_1, ..., ω_p are weights. Fig. 2 shows how an asf projects reference points onto the Pareto front. See Ref. [25] for more details about the asf, the role of the weights, the augmentation coefficient, etc.

That is, asfs let us transform a MOP into a single-objective problem and obtain a single solution, but one adapted to the DM's preferences. In most cases, however, an iterative method is required to reach the most preferred solutions, as the DM learns about his/her preferences and the problem throughout the interaction process, mainly by changing the reference point or the weights in the asf. An approximation of the Pareto front around the projected solution could then be more interesting than just the projected solution itself, as a wider set of alternatives could be shown, all of them adapted to the DM's preferences, as shown in Fig. 3.

Fig. 4. Flags based on g1.

Fig. 5. Infeasible reference point.

Fig. 6. Feasible reference point.


As indicated before, this can be done by changing the reference point or the parameters in the asf and performing multiple runs. This requires the use of a single-objective optimizer instead of a multiobjective solver, and as a result a fixed number of solutions around the reference point is obtained. The main issue with this approach is how to manage the parameters in order to obtain a spread (but not too wide) approximation of the area of interest of the efficient front, that is, a representative sample of the area around the projection.

On the other hand, our proposal consists of modifying the Pareto dominance definition in order to directly obtain an approximation of the Pareto front around the projection, using a multiobjective solver and without setting or varying any parameter. Our proposed approach has the advantage of being very easy to implement and to couple with any MOMH. This aims to give users the freedom of choosing the MOMH they consider the most appropriate for the problem at hand, without having to worry about possible modifications to the architecture of the search engine as a requirement to incorporate their preferences.

4. g-dominance

Given a reference point v ∈ R^p and a point w ∈ R^p, we define Flag_v(w) in the following way:

$$\mathrm{Flag}_v(w) = \begin{cases} 1 & \text{if } w_i \le v_i \ \ \forall i = 1, \ldots, p, \\ 1 & \text{if } v_i \le w_i \ \ \forall i = 1, \ldots, p, \\ 0 & \text{otherwise.} \end{cases}$$

That is, given a reference point g1, we divide the space in the following way (Fig. 4).

Based on these flags, we propose the following dominance relation (g-dominance).

Fig. 7. New reference point.


Given two points w, w′ ∈ R^p, w′ is g-dominated by w if:

1. Flag_g(w) > Flag_g(w′), or
2. Flag_g(w) = Flag_g(w′) and

$$w_i \le w'_i \quad \forall i = 1, \ldots, p,$$

with at least one j such that w_j < w′_j.

This will drive the search naturally to the desired area of the efficient front (it does not matter whether the reference point is feasible or not), as shown in Figs. 5 and 6.
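Putting the flag and the two conditions together, a direct g-dominance check might look as follows (a minimal sketch for minimization, not code from the paper):

```python
def flag(v, w):
    """Flag_v(w): 1 if w <= v componentwise or v <= w componentwise, else 0."""
    if all(wi <= vi for wi, vi in zip(w, v)):
        return 1
    if all(vi <= wi for wi, vi in zip(w, v)):
        return 1
    return 0

def g_dominates(w, w_prime, g):
    """True if w g-dominates w_prime with respect to the reference point g."""
    fw, fw_prime = flag(g, w), flag(g, w_prime)
    if fw > fw_prime:
        return True  # condition 1: w lies in a flag-1 region, w_prime does not
    if fw == fw_prime:
        # condition 2: ordinary Pareto dominance within the same flag value
        return (all(a <= b for a, b in zip(w, w_prime))
                and any(a < b for a, b in zip(w, w_prime)))
    return False
```

For example, with g = (2, 2) the point (3, 3) g-dominates (1, 3): the former lies beyond the reference point in every objective (flag 1), while the latter straddles it (flag 0), even though (1, 3) Pareto-dominates (3, 3).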

Our proposed g-dominance can be easily implemented into any MOMH, by just changing the dominance-checking function or by changing the way in which the objective functions are evaluated. The latter is the simplest way to implement g-dominance in an existing code, as it only requires the modification of the module

Fig. 8. Efficient solutions generated by the NSGA-II (left) and DEMORS (right) for the ZDT1 problem.

evaluating the objective functions. For our problem (minimization) and given a reference point g, g-dominance can be introduced by evaluating the functions as shown in Algorithm 1, where M is a big number. This is based on a simple idea: penalize solutions with Flag_g(f) = 0 with a big amount M, in order to make any solution with Flag_g(f) = 0 be dominated by any solution with Flag_g(f) = 1. Computing the flags is very simple too, as illustrated in Algorithm 2.

Algorithm 1. Function: evaluate f(x)

1: Evaluate f_i(x), i = 1, ..., p
2: Compute Flag_g(f)
3: if Flag_g(f) = 0 then
4:   f_i(x) = f_i(x) + M, i = 1, ..., p
5: end if



Algorithm 2. Function: Compute Flagg(f)

1: Flag_g(f) = 1
2: for i = 1, ..., p do
3:   if f_i(x) > g_i then
4:     Flag_g(f) = 0
5:   end if
6: end for
7: if Flag_g(f) = 0 then
8:   Flag_g(f) = 1
9:   for i = 1, ..., p do
10:    if f_i(x) < g_i then
11:      Flag_g(f) = 0
12:    end if
13:  end for
14: end if
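As an illustration, Algorithms 1 and 2 can be sketched compactly in Python (logically equivalent to the two loops of Algorithm 2); the penalty constant M below is an arbitrary illustrative value:

```python
M = 1e9  # the "big number" penalty of Algorithm 1 (illustrative value)

def compute_flag(f, g):
    """Algorithm 2: Flag_g(f) is 1 if f <= g or f >= g componentwise, else 0."""
    if all(fi <= gi for fi, gi in zip(f, g)):
        return 1
    if all(fi >= gi for fi, gi in zip(f, g)):
        return 1
    return 0

def evaluate(x, objectives, g):
    """Algorithm 1: evaluate the objectives and penalize flag-0 solutions,
    so that any flag-1 solution dominates them."""
    f = [obj(x) for obj in objectives]
    if compute_flag(f, g) == 0:
        f = [fi + M for fi in f]
    return f
```

Plugging this `evaluate` in place of the original objective evaluation is the only change an existing MOMH code would need.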


Fig. 9. Efficient solutions generated by the NSGA-II (left) and DEMORS (right) for the deb32 problem.

This simple modification makes it possible to use g-dominance with any MOMH. In the next section, we describe how to use g-dominance integrated into a generic interactive scheme, in order to let the DM iteratively reach his/her most preferred solution.

4.1. Using g-dominance in an interactive way

Our proposed g-dominance can be used in an interactive scheme, where the DM is guided iteratively to the most preferred solution. At each iteration, preferences are included by changing the current reference point or by selecting a solution from the sample shown. Then, the DM is shown a set of efficient solutions adapted to this new information. That is, the interaction is carried out as shown in Algorithm 3, where, once a reference point g^t is provided at iteration t, the set of g-efficient solutions is called PF_{g^t}, and the sample from this set selected to be shown to the DM is called RS^t.

Algorithm 3. Interaction

1: t = 0. Ask the DM to provide a reference point g^0.
2: while the DM is not satisfied do
3:   Compute the set PF_{g^t} of g^t-efficient solutions.
4:   Select rs representative solutions from PF_{g^t}: RS^t = {s^1_{g^t}, ..., s^{rs}_{g^t}}.
5:   Show the set RS^t to the DM.
6:   if the DM is not satisfied with any of these solutions then
7:     if the DM wants to provide a new reference point g^{t+1} then
8:       Ask the DM to provide the new reference point g^{t+1}.
9:     end if
10:    if the DM wants to select a solution in RS^t then
11:      Ask the DM to choose the most preferred solution in RS^t.
12:      Use this information to compute the new reference point g^{t+1}.
13:    end if
14:    t = t + 1
15:  end if
16: end while

In other words, at each iteration the DM is shown a set of solutions adapted to a reference point g^t. If he/she is not satisfied with any of these solutions, he/she can modify the reference point in order to refine the preferences, or he/she can select a solution s^k_{g^t} in RS^t, and a new reference point g^{t+1} will be computed using this information.

In this last case, the new reference point g^{t+1} is computed as a convex combination of s^k_{g^t} and g^t:

$$g^{t+1} = (1 - \theta)\, g^t + \theta\, s^k_{g^t},$$

where θ is a parameter in (0, 1) that represents the speed of convergence of the algorithm. The closer θ is to 1, the closer the new reference point is to s^k_{g^t}, and hence the closer the new set of g^{t+1}-efficient solutions is to the area around s^k_{g^t}. This effect is shown in Fig. 7.

The construction of RS^t is not trivial or simple. Some important questions arise, such as, for example, the number of solutions to include. Quite a lot of literature on Interactive Methods could be used to deal with these questions, and we consider it an interesting future research path.
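The convex-combination update is one line per component; as a sketch (the default θ below is an illustrative value):

```python
def update_reference(g, s, theta=0.5):
    """Compute g_{t+1} = (1 - theta) * g_t + theta * s, where s is the
    solution selected by the DM and theta in (0, 1) sets the convergence speed."""
    return tuple((1 - theta) * gi + theta * si for gi, si in zip(g, s))
```

With θ close to 1, the new reference point sits almost on the selected solution, so the next sample concentrates tightly around it.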

The way we propose to select rs representative solutions from PF_{g^t} is by using a clustering procedure. What we try to do here is to show the DM a number (rs > p) of representative solutions from which to choose the most preferred ones. These rs reference solutions at iteration t will be the representative items of clusters in PF_{g^t}. Given an iteration t and its corresponding set of solutions PF_{g^t}, the following procedure is used to choose the reference solutions.

Algorithm 4. Building the set RS^t

1: for i = 1, ..., p do
2:   Choose the best solution in PF_{g^t} for criterion i.
3:   Include it in RS^t.
4: end for
5: while #(RS^t) < rs do
6:   Choose the solution in PF_{g^t} \ RS^t maximizing the distance from RS^t.
7:   Include it in RS^t.
8: end while

Thus, this set contains a representative sample of PF_{g^t}, including its p extreme points and rs − p diverse compromise solutions. As mentioned above, this is only one possible way to build this set, and many questions remain open at this point.
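As one possible reading of Algorithm 4 (the Euclidean metric and the min-distance aggregation are our assumptions; the paper only says "maximizing the distance from RS^t"), a max-min selection in objective space can be sketched as:

```python
import math

def build_rs(pf, rs):
    """Select rs representative points from pf (a list of objective vectors):
    first the best point for each criterion, then repeatedly the point
    maximizing its minimum Euclidean distance to the already selected set."""
    p = len(pf[0])
    selected = []
    for i in range(p):  # extreme points: best solution per criterion
        best = min(pf, key=lambda f: f[i])
        if best not in selected:
            selected.append(best)
    while len(selected) < rs:
        candidates = [f for f in pf if f not in selected]
        if not candidates:
            break
        farthest = max(candidates,
                       key=lambda f: min(math.dist(f, s) for s in selected))
        selected.append(farthest)
    return selected
```

On five evenly spaced points of a linear front, for instance, this picks the two extremes and then the middle point.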

5. Computational experiments

In order to validate our proposed approach, we coupled g-dominance to two different metaheuristics: the NSGA-II [13], which is a MOEA representative of the state of the art in the area, and the DEMORS method [30], which is a hybrid of a differential evolution method with a Rough Sets tool. We used two test problems for our experiments: ZDT1 from the ZDT set [34] and deb32 from the Deb set [9]. Each problem is solved for three different reference points, both feasible and infeasible.

Figs. 8 and 9 show how both methods (i.e., the NSGA-II and DEMORS) are able to find a set of efficient points adapted to the information contained in the reference points. Neither of them required a deep modification of its structure, and both worked for the feasible and the infeasible case.

6. Conclusions

In this paper, we propose a new concept of dominance, which we call g-dominance. This concept lets us approximate the efficient set around the area of the most preferred point without using any scalarizing function. This kind of dominance is independent of the MOMH used and can be easily coupled to any of them (either evolutionary or not) without any deep modification to the main structure of the method chosen.

We propose the use of g-dominance in an interactive scheme, where the DM is guided iteratively to the most preferred solution. Preferences are included at each iteration by changing the current reference point or by selecting a solution from the sample shown, after which the DM is shown a set of efficient solutions adapted to this new information. This kind of interaction is easy and intuitive for the DM and, together with the possibility of choosing any available MOMH, we believe that it may become an efficient tool to deal with real-world problems.

On the other hand, some related aspects deserve a deeper analysis in the future. This is the case of the construction of the representative sample to be shown to the DM, and of the performance of this approach when the number of objectives is increased.

Acknowledgements

The authors thank the anonymous reviewers for their valuable comments, which greatly helped them to improve the contents of this paper.

The second author acknowledges support from CONACyT through a scholarship to pursue graduate studies at the Computer Science Department of CINVESTAV-IPN. The fourth author acknowledges support from CONACyT through project number 45683-Y. This research has also been partially funded by the research projects of the Andalusian Regional Government and the Spanish Ministry of Education and Science.

References

[1] J. Branke, K. Deb, Integrating user preferences into evolutionary multi-objective optimization, in: Y. Jin (Ed.), Knowledge Incorporation in Evolutionary Computation, Springer, Heidelberg, 2005, ISBN 3-540-22902-7, pp. 461–477.

[2] J. Branke, T. Kaußler, H. Schmeck, Guidance in evolutionary multi-objective optimization, Advances in Engineering Software 32 (2001) 499–507.

692 J. Molina et al. / European Journal of Operational Research 197 (2009) 685–692

[3] C.A. Coello Coello, Handling preferences in evolutionary multiobjectiveoptimization: A survey, 2000 Congress on Evolutionary Computation, vol.1,IEEE Service Center, Piscataway, New Jersey, 2000, pp. 30–37. July.

[4] C.A. Coello Coello, G.B. Lamont, D.A. Van Veldhuizen, Evolutionary Algorithmsfor Solving Multi-Objective Problems, 2nd ed., Springer, New York, 2007, ISBN978-0-387-33254-3. September.

[5] C.A. Coello Coello, D.A. Van Veldhuizen, G.B. Lamont, Evolutionary Algorithmsfor Solving Multi-Objective Problems, Kluwer Academic Publishers, New York,2002, ISBN 0-3064-6762-3.

[6] D. Cvetkovic, I.C. Parmee, Genetic algorithm-based multi-objectiveoptimisation and conceptual engineering design, Congress on EvolutionaryComputation - CEC99, vol.1, IEEE, Washington DC, USA, 1999, pp. 29–36.

[7] D. Cvetkovic, I.C. Parmee, Preferences and their application in evolutionarymultiobjective optimisation, IEEE Transactions on Evolutionary Computation 6(1) (2002) 42–57. February.

[8] K. Deb, Multi-Objective Evolutionary Algorithms: Introducing Bias AmongPareto-Optimal Solutions, KanGAL Report 99002, Indian Institute ofTechnology, Kanpur, India, 1999.

[9] K. Deb, Multi-Objective Genetic Algorithms: Problem Difficulties andConstruction of Test Problems, Evolutionary Computation 7 (3) (1999) 205–230. Fall.

[10] K. Deb, Solving goal programming problems using multi-objective geneticalgorithms, in: 1999 Congress on Evolutionary Computation, IEEE ServiceCenter, Washington, DC, 1999, pp. 77–84. July.

[11] K. Deb, Nonlinear goal programming using multi-objective genetic algorithms,Journal of the Operational Research Society 52 (3) (2001) 291–302.

[12] K. Deb, Multi-objective evolutionary algorithms: Introducing bias amongpareto-optimal solutions, in: A. Ghosh, S. Tsutsui (Eds.), Advances inEvolutionary Computing. Theory and Applications, Springer, Berlin, 2003, pp.263–292.

[13] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjectivegenetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6(2) (2001) 182–197. April.

[14] K. Deb, J. Sundar, N. Udaya Bhaskara Roa, S. Chaudhuri, Reference point basedmulti-objective optimization using evolutionary algorithms, InternationalJournal of Computational Intelligence Research 2 (3) (2006) 273–286.

[15] M. Ehrgott, X. Gandibleux, A survey and annotated bibliography ofmultiobjective combinatorial optimization, OR Spektrum 22 (2000)425–460.

[16] C.M. Fonseca, P.J. Fleming, Genetic algorithms for multiobjective optimization:Formulation, discussion and generalization, in: S. Forrest (Ed.), Proceedings ofthe Fifth International Conference on Genetic Algorithms, University of Illinoisat Urbana-Champaign, Morgan Kauffman Publishers, San Mateo, California,1993, pp. 416–423.

[17] G.W. Greenwood, X.S. Hu, J.G. D’Ambrosio, Fitness functions for multipleobjective optimization problems: Combining preferences with paretorankings, in: R.K. Belew, M.D. Vose (Eds.), Foundations of GeneticAlgorithms, vol. 4, Morgan Kaufmann, San Mateo, California, 1997, pp. 437–455.

[18] M. Hapke, A. Jaszkiewicz, R. Slowinski, Interactive analysis of multiple-criteriaproject scheduling problems, European Journal of Operational Research 107(2) (1998) 315–324.

[19] A. Jaszkiewicz, R. Slowinski, The light beam search approach – an overview ofmethodology and applications, European Journal of Operational Research 113(2) (1999) 300–314.

[20] D. Jones, S. Mirrazavi, M. Tamiz, Multi-objective metaheuristics: An overviewof the current state-of-the-art, European Journal of Operational Research 137(1) (2002) 1–9. February.

[21] P. Korhonen, J. Laakso, A visual interactive method for solving the multiplecriteria problem, European Journal of Operational Research 24 (2) (1986) 277–287.

[22] P. Korhonen, J. Wallenius, A pareto race, Naval Research Logistics 35 (6) (1988)615–623.

[23] S. Massebeuf, C. Fonteix, L.N. Kiss, I. Marc, F. Pla, K. Zaras, Multicriteriaoptimization and decision engineering of an extrusion process aided by adiploid genetic algorithm, in: 1999 Congress on Evolutionary Computation,IEEE Service Center, Washington, DC, 1999, pp. 14–21. July.

[24] K. Miettinen, M.M. Mäkelä, Synchronous approach in interactivemultiobjective optimization, European Journal of Operational Research 170(3) (2006) 909–922.

[25] K.M. Miettinen, Nonlinear Multiobjective Optimization, Kluwer AcademicPublishers, Boston, Massachusetts, 1999.

[26] H. Nakayama, Y. Sawaragi, Satisficing trade-off method for multiobjectiveprogramming, in: M. Grauer, A. Wierzbicki (Eds.), Interactive DecisionAnalysis. Proceedings of the International Workshop on Interactive DecisionAnalysis and Interpretative Computer Intelligence. Lecture Notes in Economicsand Mathematical Systems, vol. 229, Springer-Verlag, 1984, pp. 113–122.

[27] S. Phelps, M. Koksalan, An interactive evolutionary metaheuristic formultiobjective combinatorial optimization, Management Science 49 (12)(2003) 1726–1738. December.

[28] M. João Alves, João Clímaco, An Interactive method for 0–1 multiobjectiveproblems using simulated annealing and tabu search, Journal of Heuristics 6(3) (2000) 385–403. August.

[29] B. Rekiek, P.D. Lit, F. Pellichero, T. L’Eglise, E. Falkenauer, A. Delchambre,Dealing with user’s preferences in hybrid assembly lines design. in:Proceedings of the MCPL’2000 Conference, 2000.

[30] L.V. Santana-Quintero, N. Ramírez-Santiago, C.A. Coello Coello, J. Molina Luque,A.G. Hernández-Díaz, A new proposal for multiobjective optimization usingparticle swarm optimization and rough sets theory, in: T.P. Runarsson, H.-G.Beyer, E. Burke, J.J. Merelo-Guervós, L.D. Whitley, X. Yao (Eds.), ParallelProblem Solving from Nature – PPSN IX, 9th International Conference,Springer. Lecture Notes in Computer Science, vol. 4193, Reykjavik, Iceland,September 2006, pp. 483–492.

[31] R.E. Steuer, Multiple Criteria Optimization: Theory Computation andApplication, John Wiley, New York, 1986.

[32] E. Ulungu, J. Teghem, C. Ost, Efficiency of interactive multi-objective simulatedannealing through a case study, Journal of the Operational Research Society 49(1998) 1044–1050.

[33] A.P. Wierzbicki, The use of reference objectives in multiobjective optimization,in: G. Fandel, T. Gal (Eds.), Multiple Criteria Decision Making Theory andApplication, Springer-Verlag, New York, 1980, pp. 469–486.

[34] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionaryalgorithms: Empirical results, Evolutionary Computation 8 (2) (2000) 173–195. Summer.

