Page 1: An Algorithm for Many-Objective Optimization With Reduced Objective Computations: A Study in Differential Evolution

1089-778X (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TEVC.2014.2332878, IEEE Transactions on Evolutionary Computation


An Algorithm for Many-Objective Optimization with Reduced Objective Computations: A Study in Differential Evolution

Sanghamitra Bandyopadhyay, Senior Member, IEEE, and Arpan Mukherjee

Abstract—In the present work we develop an algorithm for Many-Objective Optimization (MaOO) problems that works faster than existing ones while delivering competitive performance. The algorithm periodically reorders the objectives based on their conflict status and selects a subset of conflicting objectives for further processing. We take Differential Evolution for Multi-Objective Optimization (DEMO) as the underlying metaheuristic evolutionary algorithm, and implement the technique of selecting a subset of conflicting objectives using a correlation-based ordering of objectives. The resultant method is called α-DEMO, where α is a parameter determining the number of conflicting objectives to be selected. We also propose a new form of elitism that restricts the number of higher-ranked solutions selected into the next population. α-DEMO with the revised elitism is referred to as α-DEMO-revised. Extensive results on five DTLZ functions show that the number of objective computations required by the proposed algorithm is much lower than in existing algorithms, while the convergence measures are competitive or often better. Statistical significance testing is also performed. A real-life application to the structural optimization of a factory shed truss is demonstrated.

Keywords: Many-Objective Optimization, Correlation-based ordering, Differential Evolution, Elitism, Structural optimization.

I. INTRODUCTION

A Multi-Objective Optimization (MOO) problem requires the simultaneous optimization of more than one objective, where the objectives are typically in conflict with each other. Evolutionary Algorithms (EAs) [1], [2], [3] have been a popular choice for solving MOO problems. When the number of objectives becomes larger (more than four) [4], as in most practical problems, both the chance of poor convergence to the true Pareto-optimal front and the computational cost increase. The Pareto-dominance selection process makes the individuals mutually non-dominated at early generations. The search ability of conventional MOEAs decreases with an increasing number of objectives, since the selection pressure towards the true Pareto front gets weaker. EMO has a clear advantage for bi-objective optimization problems, but with more objectives the desired accuracy is not achieved. We have therefore tried to develop an algorithm that works better than the existing MOEAs and is comparable with present dimensionality reduction techniques.

Sanghamitra Bandyopadhyay is with the Machine Intelligence Unit, Indian Statistical Institute, Kolkata-700108, India. Email: sanghami@isical.ac.in

Arpan Mukherjee is with the Statistical Quality Control and Operations Research Unit, Indian Statistical Institute, Kolkata-700108, India. Email: arpanmukherjee.isi@gmail.com

The difficulties with MaOO have been discussed by Farina and Amato [5], who redefined Pareto-optimality for large numbers of objectives. Studies [4], [6], [7], [8] have shown the lack of usefulness of EMO algorithms for Many-Objective Optimization problems. Three MOEAs, namely PESA, SPEA2 and NSGA-II, were applied to four scalable functions with the number of objectives ranging from 2 to 8 [7]; the convergence obtained by PESA was better, NSGA-II proved to be the fastest, and NSGA-II and SPEA2 showed equal diversity of solutions. Techniques like Multiple Single Objective Pareto Sampling (MSOPS) and Repeated Single Objective (RSO) approaches have been shown to outperform NSGA-II [9] for large numbers of objectives.

In MaOO problems, fitness function computation is quite time consuming [10], [11]. Coello et al. [11] have identified real-world engineering optimization problems whose objective functions are highly non-linear and high-dimensional, and hence require expensive evaluations. In this scenario there is a need to develop an MOEA that solves an MaOO problem using less computational time while preserving the convergence and diversity of the solutions obtained. Hughes [12] has developed an extension of MSOPS [13] that uses scalarizing functions to evaluate the fitness of each solution. The resultant method has been shown to perform better than the original MSOPS when tested on a highly constrained 2D test function and 5D constrained optimization functions. A fuzzy definition of optimality, and hence the algorithm FDGA [14], has been developed and implemented on a 10-objective DTLZ function. The algorithm was compared with NSGA-II based on a metric similar to the purity of solutions [15]; the results obtained were similar to those of NSGA-II. Zitzler et al. have developed a hypervolume-based algorithm, HypE [16], and have used Monte-Carlo simulations to estimate the hypervolumes for MaOO problems. The algorithm has been tested on problems with up to 50 objectives and showed significant results compared to other contemporary MOEAs with respect to the Hypervolume Indicator value. In subsequent sections we will define the Hypervolume Indicator and discuss how it can capture both the convergence and the diversity of a solution set. There are other algorithms [17], [18], [19] developed to improve solution characteristics in MaOO problems. DMOEA [17] uses the concepts of entropy and density to develop a dynamic MOEA. It has been shown to perform well in terms of hypervolume value on three- to nine-objective test problems with respect to MSOPS [13], MSOPS-II [12] and NSGA-II. UPS-MOEA [18] uses the concept of an Unrestricted Population,


and each time generates a population of only non-dominated solutions, promising a better convergence result. However, such a method can lead to poorer convergence and a lack of diversity as the number of objectives increases. Coello et al. developed two new algorithms, CEGA and MDFA, and showed better convergence values on test problems with 5 to 50 objectives compared to NSGA-II, MSOPS, HypE, and DMO [19]. More recently, there have been several modifications to the standard NSGA-II algorithm [20], [21], [22] that can improve its performance. These improved algorithms focus on improving the diversity of the solutions by changing the elitism selection mechanism. Coello et al. have also designed two novel approaches, CEGA and MDFA, to improve solution diversity and have tested them on test problems with dimensionality ranging from 5 to 50 [23].

With several new algorithms developed, and modified definitions of optimality and dominance proposed [5], [14], [17], [24], [25], [26], [27], the focus then shifted to the reduction of objectives for reaching solutions faster. Researchers have thus tried to identify conflicting objective functions [28], [29], which leads to objective reduction and hence reduces the time required to reach optimality. Several objective reduction techniques have been suggested [30], [31], [32], [33], [34], [35], [36], demonstrating the reduction in time spent on objective computations. In our work, we select objectives based on the correlation among them. The proposed approach is integrated with Differential Evolution for Multi-Objective Optimization, or DEMO [37], and is called α-DEMO. We then propose a new way of selecting solutions for the next generation such that higher-ranked solutions cannot fill up the entire population even if they are numerous. This technique is referred to as α-DEMO-revised. We present our experiments on several standard scalable test functions. In Section II we first formulate the MOO and MaOO problems, followed by a description of the DEMO algorithm in Section III. Thereafter, the proposed algorithms are described in detail in Section IV. The experimental setup and test problems are discussed in Section V. Section VI provides the experimental results, where we apply our algorithm and compare it with other algorithms. Section VII contains a real-life application of the proposed technique to structural optimization. Finally, Section VIII concludes the article.

II. PROBLEM FORMULATION

Problems with a large number of objectives are common in many engineering disciplines, like aeronautics and structural engineering [11], [38]. In this section, we discuss the basic formulations of multi- and many-objective optimization and define some important terms associated with them.

A. Multi-Objective Optimization

Consider an objective function $F : \mathbb{R}^n \rightarrow \mathbb{R}^M$ involving $n$ decision variables $\vec{X} = [x_1, x_2, \ldots, x_n]^T$. The MOO problem can be formulated [39] as follows:

Minimize

$$F(\vec{X}) = \{f_1(\vec{X}), f_2(\vec{X}), \ldots, f_M(\vec{X})\}^T$$

subject to constraints (if any)

$$g_j(\vec{X}) \geq 0, \quad j = 1 \text{ to } J$$

and

$$h_k(\vec{X}) = 0, \quad k = 1 \text{ to } K \tag{1}$$

where $x_i^L \leq x_i \leq x_i^U$, $i = 1$ to $n$.

Here $n$ is the number of decision variables and $M$ is the number of objectives. The $g_j(\vec{X})$ are the inequality constraints and the $h_k(\vec{X})$ are the equality constraints, while $J$ and $K$ are the numbers of inequality and equality constraints, respectively. The intersection of these constraints gives the decision variable space $S$. $x_i^L$ and $x_i^U$ are the lower and upper bounds of each decision variable $x_i$.

B. Definitions

1) Dominance [40]: With reference to a minimization problem, a decision vector $\vec{x} = (x_1, x_2, \ldots, x_n)$ is said to dominate another decision vector $\vec{y} = (y_1, y_2, \ldots, y_n)$ (or $\vec{x} \preceq \vec{y}$) iff $\forall i \in 1$ to $M$, $f_i(\vec{x}) \leq f_i(\vec{y})$, and $\exists i \in 1$ to $M$, $f_i(\vec{x}) < f_i(\vec{y})$. The solution set $F(\vec{x})$ is said to weakly dominate another solution set $F(\vec{y})$, or $F(\vec{x}) \preceq F(\vec{y})$, if $\vec{x} \preceq \vec{y}$.

A decision vector $\vec{x} = (x_1, x_2, \ldots, x_n)$ is said to strongly dominate another decision vector $\vec{y} = (y_1, y_2, \ldots, y_n)$ (or $\vec{x} \prec\prec \vec{y}$) iff $\forall i \in 1$ to $M$, $f_i(\vec{x}) < f_i(\vec{y})$. Similar to the previous definition, the solution set $F(\vec{x})$ is said to strongly dominate another solution set $F(\vec{y})$, or $F(\vec{x}) \prec\prec F(\vec{y})$, if $\vec{x} \prec\prec \vec{y}$.

2) Non-Dominated Solution Set [40]: A solution vector $\vec{x} \in S$ is said to be a non-dominated solution if $\nexists$ any solution vector $\vec{y} \in S$ such that $\vec{y} \preceq \vec{x}$. A Non-Dominated Solution Set $F$ is defined as

$$F = \{\vec{x} \in S \mid \vec{x} \text{ is a non-dominated solution}\} \tag{2}$$

3) Pareto Optimal Set [40]: A decision vector $\vec{x} \in S$ is said to be a Pareto optimum if $\nexists \vec{y} \in S$ such that $F(\vec{y}) \preceq F(\vec{x})$. A Pareto Optimal Set $PO$ is defined as

$$PO = \{\vec{x} \in S \mid \vec{x} \text{ is a Pareto optimum}\} \tag{3}$$

4) Non-Conflicting Objective Set: Two objective functions $f_i(\vec{x})$ and $f_j(\vec{x})$ are said to exhibit conflict [28] between them if $\exists (\vec{a}, \vec{b})$ such that $(f_i(\vec{a}) < f_i(\vec{b})) \wedge (f_j(\vec{a}) > f_j(\vec{b}))$. Zitzler et al. [33] have modified the idea of conflict and explained it based on induced Pareto dominance. Two objective sets $F_i$ and $F_j$ are said to exhibit conflict between themselves if the weak Pareto dominance relations induced by them differ, i.e., $\preceq_{F_i} \neq \preceq_{F_j}$, where $\preceq_{F_i}$ denotes the weak Pareto dominance induced by $F_i$. Thus a Non-Conflicting Objective Set $F' \subseteq F$ is a subset of objectives with $2 \leq |F'| \leq |F|$, such that $\preceq_{F'} = \preceq_{F}$, i.e., the exclusion of the objectives $F - F'$ from the EMO does not change the induced weak Pareto dominance.
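The dominance relations above translate directly into code; a minimal Python sketch for minimization (the function names are ours, for illustration only):

```python
def dominates(fx, fy):
    """Pareto dominance for minimization: fx is no worse than fy in
    every objective and strictly better in at least one (Definition 1)."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def strongly_dominates(fx, fy):
    """Strong dominance: fx strictly better in every objective."""
    return all(a < b for a, b in zip(fx, fy))

def non_dominated_set(points):
    """Points not dominated by any other point (cf. Eq. 2)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the objective vectors (1, 2), (2, 1) and (2, 2), the first two are mutually non-dominated while (2, 2) is dominated by (1, 2).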


C. Many Objective Optimization: Basic Principles and Issues

A problem involving a large number of objectives ($M > 4$) [4] is generally said to be a Many-Objective Optimization (MaOO) problem. As pointed out earlier, the efficiency of conventional MOEAs decreases with increasing $M$. Firstly, the computation of a large number of objectives takes a lot of time. Secondly, the selection rule created by Pareto dominance makes the solutions non-dominated with respect to each other at an early stage of the EMO [4], [5]. Let us consider a solution set of size $N$ of an $M$-objective ($M \geq 4$) MaOO problem. Let us assume that each of the solutions is distinct in all $M$ objectives, and that each of the objective values is a continuous variable.

The expected number of non-dominated solutions [41] is given by:

$$A(N, M) = \sum_{k=1}^{N} (-1)^{k+1} \binom{N}{k} \frac{1}{k^{M-1}} \tag{4}$$

To give an interpretable expression we divide the above by the size $N$ of the solution set, and hence obtain:

$$P(N, M) = \frac{A(N, M)}{N} = \frac{\sum_{k=1}^{N} (-1)^{k+1} \binom{N}{k} \frac{1}{k^{M-1}}}{N} \tag{5}$$

It has been found that, for different values of $N$, the value of $P(N, M)$ converges to 1 with increasing $M$. The convergence of the expression is graphically illustrated in Fig. 1.

[Figure: curves of $P(N, M)$ for $N = 75$ and $N = 100$, rising from roughly 0.4 towards 1 as the number of objectives grows from 10 to 50.]

Fig. 1. Variation of $P(N, M)$ with the number of objectives
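Equations (4) and (5) can be evaluated exactly with rational arithmetic; a sketch (the alternating sum with large binomial coefficients is numerically unstable in floating point, so we use `fractions.Fraction`; the function names are ours):

```python
from fractions import Fraction
from math import comb

def expected_nondominated(N, M):
    """A(N, M): expected number of non-dominated solutions among N
    random, distinct solutions of an M-objective problem (Eq. 4)."""
    return sum(Fraction((-1) ** (k + 1) * comb(N, k), k ** (M - 1))
               for k in range(1, N + 1))

def nondominated_fraction(N, M):
    """P(N, M) = A(N, M) / N (Eq. 5)."""
    return expected_nondominated(N, M) / N
```

As a sanity check, for $N = 2$ and $M = 1$ the expectation is exactly 1 (a single objective always has one best solution), and for $N = 100$ the fraction $P(N, M)$ climbs towards 1 as $M$ grows, matching Fig. 1.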

The value of $P(N, M)$ approaching 1 indicates that, if we follow the selection rule defined by Pareto dominance, the chance of a solution being non-dominated increases as we increase the number of objectives $M$. This problem can be dealt with through two approaches. The first is to change the dominance criteria. In this direction, Farina et al. have developed the idea of (1-k)-dominance and fuzzy dominance [5], while Wang et al. have proposed a fuzzy-dominance-based algorithm for MaOO problems [14]. Zou et al. have defined L-optimality and formulated an algorithm based on it [17]. Other methods like controlling the dominance area [24], the concept of E-optimality [25], ε-dominance [26] and g-dominance [27] have been further additions to the development of this field.

The second approach is to select a subset of objectives and perform the EMO based on those objectives only. In the context of objective reduction, a principal component analysis (PCA) based algorithm has been suggested by Deb et al. [30]. Another technique, PCSEA [32], searches for the corners of the Pareto front instead of the whole front, and the corner solutions are then used for dimensionality reduction. Zitzler et al. [33] defined δ-conflict as a measure of conflict among objective functions, and used it to select a subset $F'$ of $F$ that preserves the weak Pareto dominance induced by the original set of objectives $F$. Later on, the correlation coefficient was taken as the basis for identifying conflicting objectives, thus aiding dimension reduction [34], [35]. Coello et al. have developed an algorithm, KOSSA [35], in which correlation is used as a measure of conflict among objectives, and a k-sized subset of the objective functions is obtained by stage-wise reduction of the number of objectives over successive generations.

III. DIFFERENTIAL EVOLUTION FOR MULTI-OBJECTIVE OPTIMIZATION

Differential Evolution for Multi-Objective Optimization (DEMO) [37] is an extension of the Price & Storn model [42]. Differential Evolution is a population-based algorithm which updates itself in each generation through mutation, crossover and selection. Let $\vec{x}_{i,G}$ be an $n$-dimensional vector representing the $i$th individual in iteration $G$ of the DE process. The mutant vector $\vec{v}_{i,G+1}$ corresponding to $\vec{x}_{i,G}$ is generated by

$$\vec{v}_{i,G+1} = \vec{x}_{r_1,G} + R \cdot (\vec{x}_{r_2,G} - \vec{x}_{r_3,G}) \tag{6}$$

where $r_1, r_2, r_3 \in \{1, 2, \ldots, N\}$ are three mutually different random indices, none equal to $i$. $R$ is a real positive number in $(0, 2)$ which controls the amplification of the difference $(\vec{x}_{r_2,G} - \vec{x}_{r_3,G})$ [42]. The Crossover Ratio ($CR$) decides which parts of the mutant vector are to be considered for creating the $i$th trial vector $\vec{u}_{i,G+1}$. The decision is taken according to the following formulation:

$$u^j_{i,G+1} = \begin{cases} v^j_{i,G+1} & \text{if } rand(0,1) \leq CR \\ x^j_{i,G} & \text{else} \end{cases} \qquad j = 1, 2, \ldots, n \tag{7}$$

Here $u^j_{i,G+1}$ and $v^j_{i,G+1}$ represent the $j$th decision variable of the $i$th trial and mutant vector, respectively. The assignment of $\vec{x}_{i,G+1}$ is made after comparing the trial vector $\vec{u}_{i,G+1}$ with $\vec{x}_{i,G}$. For a minimization problem $F : \mathbb{R}^n \rightarrow \mathbb{R}^M$, the selection is made as

$$\vec{x}_{i,G+1} = \begin{cases} \vec{x}_{i,G} & \text{if } \vec{x}_{i,G} \preceq \vec{u}_{i,G+1} \\ \vec{u}_{i,G+1} & \text{if } \vec{u}_{i,G+1} \preceq \vec{x}_{i,G} \end{cases} \tag{8}$$

If $\vec{x}_{i,G}$ and $\vec{u}_{i,G+1}$ are incomparable, then $\vec{u}_{i,G+1}$ gets added to the population. The above model is also termed DE/rand/1/bin, which is a specialization of DE/x/y/z [42].
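The mutation and crossover operators of Eqs. (6) and (7) can be sketched in a few lines of Python (an illustration only; the dominance-based selection of Eq. (8) would follow, and, as in Eq. (7) as stated, no component is forced to come from the mutant):

```python
import random

def de_rand_1_bin(pop, i, R=0.5, CR=0.8):
    """Produce the trial vector u_{i,G+1} for individual i using
    DE/rand/1/bin: mutation (Eq. 6) then binomial crossover (Eq. 7).
    `pop` is a list of equal-length decision vectors."""
    N, n = len(pop), len(pop[i])
    # three mutually different random indices, all different from i
    r1, r2, r3 = random.sample([j for j in range(N) if j != i], 3)
    # mutant vector: v = x_r1 + R * (x_r2 - x_r3)
    v = [pop[r1][j] + R * (pop[r2][j] - pop[r3][j]) for j in range(n)]
    # binomial crossover: take each component from v with probability CR
    return [v[j] if random.random() <= CR else pop[i][j] for j in range(n)]
```

With $CR = 1$ the trial vector equals the mutant vector; with a degenerate population of identical individuals the difference term vanishes and the trial vector reduces to the parent.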


IV. PROPOSED ALGORITHM

The proposed technique is based on the concept of optimizing a set of conflicting objectives in MaOO. Let $F_{conflict}$ and $F_{non\text{-}conflict}$ denote the sets of conflicting and non-conflicting objectives, respectively. Thus,

$$F_{conflict} \cup F_{non\text{-}conflict} = F \tag{9}$$

where $F$ is the set of all the $M$ objectives.

In the proposed α-DEMO, each objective is first ordered based on its conflict with the remaining objectives. Inspired by KOSSA [35], Pearson's correlation coefficient is used here as the measure of conflict among the objectives. Thereafter the top α fraction of the objectives are selected as conflicting, and the method proceeds with this set. Consequently,

$$|F_{conflict}| = [\alpha * M] \tag{10}$$

where $[\alpha * M]$ means the greatest integer less than or equal to $\alpha * M$ and $0 < \alpha < 1$. Note that the initial set of conflicting objectives depends on the choice of the initial population. Again, as more and more solutions are generated, the information about the conflict may also change. Therefore the set $F_{conflict}$ is recomputed at regular intervals, and the algorithm is also executed multiple times from different initial populations.

α-DEMO works in two phases. In the first phase, the set $F_{conflict}$ is identified and the computation proceeds for a fixed number of iterations. This is followed by a phase where all the objectives are computed and the normal DEMO steps are executed for a certain number of iterations. Following this, the set $F_{conflict}$ is updated. These two phases are illustrated in Fig. 2. The main function of the algorithm starts by initialising the first population and finding the initial solutions to the MOO problem. It then selects a subset of $[\alpha M]$ objectives based on the correlation among the function values. At each iteration, we generate a new population based on the DE/rand/1/bin formulation [42], and then sort the population with respect to the selected objectives. At the end of a certain number of iterations (say $K_0$) we compute the rest of the objective values. For the next $K_1$ iterations all the objectives are integrated and the DEMO is executed. Thereafter the $[\alpha M]$ objectives are re-computed based on the present conflict status in the current population. The process iterates in this fashion. The complete algorithm is shown in Algorithm 1.

A. Selection of Objectives based on Correlation Coefficients

Algorithm 2 computes the set $F_{conflict}$. Given the matrix $Funct_{Popsize \times M}$, first the correlation matrix is calculated by taking the Pearson's correlation coefficient between each pair of columns of the matrix $Funct$. Thus the $ij$th element of the correlation matrix contains the Pearson's correlation coefficient between the $i$th and the $j$th columns of the matrix $Funct$. Next, the matrix $CorrF_{M \times M}$ is prepared by sorting each column of the correlation matrix in ascending order. A random objective (say the $i$th) is considered, and the first $[\alpha * M] - 1$ objectives in that column are noted. The set of $[\alpha * M]$ objectives thus includes $i$ and the first $[\alpha * M] - 1$ objectives corresponding to the $i$th column of the correlation

[Figure: flowchart of the two phases. αM phase: compute $F_{conflict}$ and initialise loop counter $i$; run DEMO on $F_{conflict}$ and increment $i$ until $i = K_0$. M phase: integrate all objectives and initialise loop counter $j$; run DEMO and increment $j$ until $j = K_1$.]

Fig. 2. The two phases of α-DEMO

Algorithm 1 Algorithm for α-DEMO

Require: Population Size Popsize, Number of Variables numvar, Number of Objectives numobjs, Number of α-Objectives alpha, Number of Iterations Gmax
1: Generate initial Population Pop from the Decision Space $(S_{i,min}, S_{i,max})$, $i = 1$ to $n$
2: Obtain the initial Function Values Funct for all the objectives
3: Obtain the initial set of alpha-objectives: $F_{conflict}$ ← setAlphaOrder(Funct, alpha)
4: initialise loop counter count = 1
5: while count do
6:   Program follows the αM-phase as shown in Fig. 2
7:   count = count + $K_0$
8:   Program follows the M-phase as shown in Fig. 2
9:   count = count + $K_1$
10:  if count reaches Gmax then
11:    Exit the while loop
12:  end if
13: end while

Fig. 3. Algorithm for α-DEMO

matrix. Algorithm 2 shows the pseudocode for this subroutine. The $i$th column is selected randomly before each run of the αM-phase so as to obtain a different $F_{conflict}$ each time, and hence preserve the diversity of the final solution set. An alternative selection of $i$ was experimented with, namely selecting the objective with maximum conflict. However, the results indicated that it behaves poorly compared to the random choice, possibly because it makes the algorithm greedy.


Algorithm 2 Algorithm for setAlphaOrder(Funct, alpha)

Require: $Funct_{Popsize \times M}$ matrix, alpha ∈ [0, 1]
1: CorrF ← $M \times M$ Correlation Matrix of Funct, with each column sorted in ascending order.
2: The $i$th column is selected randomly
3: $F_{conflict}$ includes the $i$th objective and the first $[\alpha * M] - 1$ objectives of the $i$th column in CorrF

Fig. 4. Algorithm for setAlphaOrder(Funct, alpha)
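Algorithm 2 can be sketched with NumPy as follows (our own rendering, not the authors' code; the most conflicting partners of the reference objective $i$ are those with the smallest, i.e. most negative, correlation to it, and the optional `i` argument is an assumption added for testability):

```python
import numpy as np

def set_alpha_order(funct, alpha, i=None, rng=None):
    """Select [alpha*M] conflicting objectives from the Popsize x M
    matrix of objective values (cf. Algorithm 2)."""
    M = funct.shape[1]
    k = int(alpha * M)                       # [alpha * M]
    corr = np.corrcoef(funct, rowvar=False)  # M x M Pearson correlations
    if i is None:                            # random reference objective
        i = (rng or np.random.default_rng()).integers(M)
    order = np.argsort(corr[:, i])           # ascending: most conflicting first
    partners = [j for j in order if j != i][:k - 1]
    return [int(i)] + [int(j) for j in partners]
```

For example, with three objectives where the second is perfectly anti-correlated with the first, choosing $i = 0$ and $\alpha$ such that $[\alpha M] = 2$ yields the conflicting pair {0, 1}.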

B. Selection of the New Population

The way of generating a new population in DEMO is described in Section III. In the process, the number of individuals in a population might exceed Popsize. The idea of the subsequent selection step is to reduce the population to a pre-defined size, keeping the best possible solutions as determined by Pareto optimality criteria. In the conventional approach, the higher-ranked solutions are first allowed to enter the new population, followed by lower-ranked ones if space permits. This is the approach used in NSGA-II [1] as well as in our proposed α-DEMO. A problem with this approach is that if there are more than Popsize solutions in the first rank, then the lower-ranked solutions have no scope for survival. Results in Section II-C have shown that this is indeed the case for Many-Objective problems. In subsequent generations, there are chances that the EMO may get stuck at a local optimum. The alternative approach allows us to have a limited number of solutions from a few of the other ranks in the final population. Algorithm 3 shows our proposed technique, where the new population generated is sorted and reduced to the size of the initial population (Popsize). We limit the rank-one solutions to a maximum of a β1 percentage of Popsize, and the solutions of other ranks to a β2 percentage, on the basis of crowding distance [1]. This is done to preserve the diversity of the solutions and prevent the search from getting stuck at a local optimum. To differentiate between the two algorithms, we denote the α-DEMO with the revised selection technique as α-DEMO-revised.

V. EXPERIMENTAL SETUP AND TEST PROBLEMS

To test our proposed algorithms we have taken the well-known DTLZ1, DTLZ2, DTLZ3, DTLZ4 and DTLZ7 [43] as our test problems, and compared the results with other MOEAs based on dimensionality reduction: KOSSA [35], MOEA/D [31] and HypE [16]. We have also compared our algorithms with NSGA-II, DEMO and one of the improved versions of NSGA-II, MO-NSGA-II [21]. For the observations listed in Tables I, II and III, DTLZη M, α denotes that the EMO is tested on the DTLZη test problem with M objectives, where α is the fraction of the reduced subset of objectives. It was observed that PCA-NSGA-II could not find a reduced set of important objectives, as the Pareto front in DTLZ2 involves all the objectives together [30]. Thus we do not show the results of PCA-NSGA-II in our comparison

Algorithm 3 Algorithm for Sorting(Pop, Funct, Fconflict)

1: for i = 1 to Popsize do
2:    Calculate the number of candidate solutions that are dominated by or that dominate Pi, based on the objectives specified in Fconflict
3:    Form the rank-one solution set R1, containing those Pi for which the number of solutions dominating Pi is zero
      // Ri is the set of rank-i solutions
4: end for
5: Limit the rank-one solutions to a percentage β1 of Popsize, on the basis of crowding distance, forming F1
   // Fi is the final solution set up to the ith rank
6: Initialise the front counter: i = 1
7: while |Fi| < Popsize do
8:    Form the set of solutions of rank i + 1
9:    Limit the solutions of rank i + 1 to a percentage β2 of |Ri+1|, based on crowding distance, and put them in Ri+1
10:   if |Fi| + |Ri+1| ≥ Popsize then
11:      Limit the length of Ri+1 to Popsize − |Fi|, based on crowding distance
12:   end if
13:   Form Fi+1 = Fi ∪ Ri+1 and set i = i + 1
14: end while

Fig. 5. Algorithm for Sorting(Pop, Funct, Fconflict)
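The rank-limited selection of Algorithm 3 can be sketched in Python as below. This is an illustrative sketch, not the authors' implementation: the helper name `truncate_by_crowding` is our own, the fronts are assumed to be pre-computed lists of objective vectors, and the crowding distance follows the NSGA-II definition [1].

```python
import math

def truncate_by_crowding(front, k):
    """Keep the k most spread-out solutions of a front (NSGA-II crowding distance)."""
    if k >= len(front):
        return list(front)
    m = len(front[0])
    dist = [0.0] * len(front)
    for obj in range(m):
        order = sorted(range(len(front)), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = math.inf   # boundary solutions always kept
        span = front[order[-1]][obj] - front[order[0]][obj] or 1.0
        for j in range(1, len(order) - 1):
            dist[order[j]] += (front[order[j + 1]][obj] - front[order[j - 1]][obj]) / span
    keep = sorted(range(len(front)), key=lambda i: -dist[i])[:k]
    return [front[i] for i in keep]

def revised_selection(fronts, popsize, beta1=0.75, beta2=0.75):
    """Rank-limited elitism: cap rank one at beta1*popsize and each later
    rank at a beta2 fraction of its size, so lower ranks can survive."""
    selected = truncate_by_crowding(fronts[0], int(beta1 * popsize))
    for front in fronts[1:]:
        if len(selected) >= popsize:
            break
        cap = min(int(beta2 * len(front)) or 1, popsize - len(selected))
        selected += truncate_by_crowding(front, cap)
    return selected[:popsize]
```

With a rank-one front of ten solutions and Popsize = 8, only six (75%) rank-one solutions enter, leaving room for two rank-two solutions.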

table. For each problem we have used 100 individuals and 2000 generations. The crossover ratio (CR) is taken as 0.8 and the scaling factor (F) as 0.1. Both β1 and β2 in α-DEMO-revised have been taken as 0.75. All the other MOEAs have been run under the parameter settings prescribed by their respective authors. 50 simulations have been performed for each combination of the aforesaid parameters and, assuming the results to follow a Gaussian distribution, their 95% confidence intervals are given in Section VI.

A. Test Problem DTLZ1

The definition of the test problem DTLZ1 [43] is as follows:

Minimize f1(X) = (1/2) x1 x2 · · · xM−1 (1 + g(XM)),
Minimize f2(X) = (1/2) x1 x2 · · · xM−2 (1 − xM−1) (1 + g(XM)),
    ...
Minimize fM−1(X) = (1/2) x1 (1 − x2) (1 + g(XM)),
Minimize fM(X) = (1/2) (1 − x1) (1 + g(XM)),
0 ≤ xi ≤ 1, for i = 1, 2, . . . , n,

where g(XM) = 100 [ |XM| + Σ_{xi ∈ XM} ( (xi − 0.5)^2 − cos(20π(xi − 0.5)) ) ]   (11)

The Pareto-optimal solutions to the above problem satisfy Σ_{i=1}^{M} fi(X) = 0.5. As per the recommendation in [43] we have taken k = |XM| = 5. Thus the total number of variables is n = M + k − 1 = M + 4.
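A direct (unoptimized) evaluation of the DTLZ1 objectives defined above can be written as follows. The function name and the decision-vector layout (the last k variables forming XM) are our own illustrative choices:

```python
import math

def dtlz1(x, M):
    """Evaluate the M DTLZ1 objectives at decision vector x, len(x) = M + k - 1.
    The last k variables form X_M; the first M - 1 shape the simplex front."""
    xm = x[M - 1:]                       # X_M
    g = 100.0 * (len(xm) + sum((xi - 0.5) ** 2 - math.cos(20 * math.pi * (xi - 0.5))
                               for xi in xm))
    f = []
    for i in range(M):                   # f_1 ... f_M
        val = 0.5 * (1.0 + g)
        for j in range(M - 1 - i):       # product of the leading x_j factors
            val *= x[j]
        if i > 0:
            val *= 1.0 - x[M - 1 - i]    # the (1 - x) factor for f_2 ... f_M
        f.append(val)
    return f
```

On the Pareto front (all variables of XM equal to 0.5, so g = 0), the objectives sum to 0.5, matching the condition stated above.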

B. Test Problem DTLZ2

The definition of the test problem DTLZ2 [43] is as follows:

Minimize f1(X) = (1 + g(XM)) cos(x1 π/2) cos(x2 π/2) · · · cos(xM−2 π/2) cos(xM−1 π/2),
Minimize f2(X) = (1 + g(XM)) cos(x1 π/2) cos(x2 π/2) · · · cos(xM−2 π/2) sin(xM−1 π/2),
    ...
Minimize fM−1(X) = (1 + g(XM)) cos(x1 π/2) sin(x2 π/2),
Minimize fM(X) = (1 + g(XM)) sin(x1 π/2),
0 ≤ xi ≤ 1, for i = 1, 2, . . . , n,

where g(XM) = Σ_{xi ∈ XM} (xi − 0.5)^2   (12)

The Pareto-optimal solutions to the above problem satisfy Σ_{i=1}^{M} fi^2(X) = 1. As per the recommendation in [43] we have taken k = |XM| = 10. Thus the total number of variables is n = M + k − 1 = M + 9.

C. Test Problem DTLZ3

The definition of the test problem DTLZ3 [43] is as follows:

Minimize f1(X) = (1 + g(XM)) cos(x1 π/2) cos(x2 π/2) · · · cos(xM−2 π/2) cos(xM−1 π/2),
Minimize f2(X) = (1 + g(XM)) cos(x1 π/2) cos(x2 π/2) · · · cos(xM−2 π/2) sin(xM−1 π/2),
    ...
Minimize fM−1(X) = (1 + g(XM)) cos(x1 π/2) sin(x2 π/2),
Minimize fM(X) = (1 + g(XM)) sin(x1 π/2),
0 ≤ xi ≤ 1, for i = 1, 2, . . . , n,

where g(XM) = 100 [ |XM| + Σ_{xi ∈ XM} ( (xi − 0.5)^2 − cos(20π(xi − 0.5)) ) ]   (13)

The Pareto-optimal solutions to the above problem satisfy Σ_{i=1}^{M} fi^2(X) = 1. As per the recommendation in [43] we have taken k = |XM| = 10. Thus the total number of variables is n = M + k − 1 = M + 9.

D. Test Problem DTLZ4

The definition of the test problem DTLZ4 [43] is as follows:

Minimize f1(X) = (1 + g(XM)) cos(x1^α π/2) cos(x2^α π/2) · · · cos(xM−2^α π/2) cos(xM−1^α π/2),
Minimize f2(X) = (1 + g(XM)) cos(x1^α π/2) cos(x2^α π/2) · · · cos(xM−2^α π/2) sin(xM−1^α π/2),
    ...
Minimize fM−1(X) = (1 + g(XM)) cos(x1^α π/2) sin(x2^α π/2),
Minimize fM(X) = (1 + g(XM)) sin(x1^α π/2),
0 ≤ xi ≤ 1, for i = 1, 2, . . . , n,

where g(XM) = Σ_{xi ∈ XM} (xi − 0.5)^2   (14)

The Pareto-optimal solutions to the above problem satisfy Σ_{i=1}^{M} fi^2(X) = 1. As per the recommendation in [43] we have taken k = |XM| = 10 and α = 100. Thus the total number of variables is n = M + k − 1 = M + 9.

E. Test Problem DTLZ7

The definition of the test problem DTLZ7 [43] is as follows:

Minimize f1(X) = x1,
Minimize f2(X) = x2,
    ...
Minimize fM−1(X) = xM−1,
Minimize fM(X) = (1 + g(XM)) h(f1, f2, . . . , fM−1, g),
0 ≤ xi ≤ 1, for i = 1, 2, . . . , n,

where g(XM) = 1 + (9 / |XM|) Σ_{xi ∈ XM} xi,
h(f1, f2, . . . , fM−1, g) = M − Σ_{i=1}^{M−1} [ (fi / (1 + g)) (1 + sin(3π fi)) ]   (15)

The Pareto-optimal front of this problem is disconnected and corresponds to xi = 0 for all xi ∈ XM, i.e., g = 1. As per the recommendation in [43] we have taken k = |XM| = 10. Thus the total number of variables is n = M + k − 1 = M + 9.
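The DTLZ7 objectives, with their separate g and h functions, can likewise be transcribed directly. As with the earlier sketch, the function name and decision-vector layout are our own illustrative choices:

```python
import math

def dtlz7(x, M):
    """Evaluate the M DTLZ7 objectives; the last k variables form X_M."""
    xm = x[M - 1:]
    g = 1.0 + 9.0 / len(xm) * sum(xm)
    f = list(x[:M - 1])                     # f_1 ... f_{M-1} are simply x_1 ... x_{M-1}
    h = M - sum(fi / (1.0 + g) * (1.0 + math.sin(3 * math.pi * fi)) for fi in f)
    f.append((1.0 + g) * h)                 # f_M
    return f
```

Setting all variables to zero gives g = 1, h = M and hence f_M = 2M, which is a quick sanity check on the transcription.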

VI. PERFORMANCE MEASURES AND TEST RESULTS

We have selected convergence [1], the hypervolume indicator [44], [45] and minimal spacing [15] as our performance measures to compare the six algorithms NSGA-II, KOSSA, MO-NSGA-II, HypE, MOEA/D and DEMO with our algorithms α-DEMO and α-DEMO-revised. Detailed descriptions of these measures are given in the following subsections.

A. Convergence

The convergence measure [1] can only be applied to standard, established multi-objective problems where the


true Pareto front is known to us. To measure the convergence δ, we consider a non-dominated solution set F and take a set H of several (say 500) random points on the true Pareto front. For each solution belonging to F, we compute the minimum Euclidean distance di to the points of H. The measure δ is the average of all these distances.

δF = (1 / |F|) Σ_{i=1}^{|F|} di   (16)

For two solution sets A and B, δA < δB implies that A is better than B. A value of δA = 0 implies that A lies on the true Pareto front.
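The measure of Eq. (16) is straightforward to implement. The sketch below is illustrative; the function name is our own, and F and H are assumed to be lists of objective vectors:

```python
import math

def convergence_delta(F, H):
    """Average, over solutions in F, of the minimum Euclidean distance
    to a sample H of points on the true Pareto front (Eq. 16)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(f, h) for h in H) for f in F) / len(F)
```

A set F lying entirely inside the sampled front H yields δ = 0, consistent with the statement above.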

Table I gives the convergence values for the different problem cases that we have taken up. We have calculated the mean δ and σδ and obtained the 95% confidence interval for each test case. The results show that DTLZ3 and DTLZ7 are associated with poorer convergence values than the other test problems for all the MOEAs. An earlier study provided similar results for DTLZ3 even with three objectives [43]. In contrast, for DTLZ2 and DTLZ4 the convergence values are generally quite good (low). This indicates that it is possibly difficult to attain the true Pareto front in the case of DTLZ3 and DTLZ7, while for DTLZ2 and DTLZ4 it is relatively easier. We observe that α-DEMO-revised shows better performance on DTLZ1, DTLZ2 and DTLZ7. On the other test problems, MOEA/D and MO-NSGA-II show better performance.

B. Hypervolume Indicator

The hypervolume indicator [16], [44] (IH) is a measure which incorporates both the optimality of a solution set and its spread in the objective space. To calculate it for a non-dominated set F, we select an arbitrary reference point r and generate a set S of a large number of random points from within the hypercube formed by r and the origin, using the Monte-Carlo simulation technique. We calculate the hypervolume of the portion of the hypercube weakly dominated by the set F and express it as a fraction (or percentage) of the whole hypercube. For this purpose an attainment function [45] is used, defined as follows:

αA(z) = 1 if A weakly dominates {z}, and 0 otherwise   (17)

This function returns 1 if the solution set A weakly dominates the point z, where z ∈ S. The average of αF(z) over all z ∈ S gives the measure IH(F). Since we have considered a minimization problem, r has to be taken considerably far from the origin so that the solution set falls between the origin and r. Also, as r is taken farther from the origin of the coordinate axes, the value of IH increases irrespective of the solution set. Thus taking the reference point too far away is not advisable, as it yields high values of IH (almost equal to 1), making the solution sets incomparable. For two solution sets A and B, IH(A) > IH(B) implies that A is better than B.
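The Monte-Carlo estimate described above can be sketched as follows. The function name, sample count, and fixed seed are our own illustrative choices; the attainment test implements Eq. (17) for minimization:

```python
import random

def hypervolume_mc(F, ref, n_samples=20000, seed=0):
    """Monte-Carlo estimate of the fraction of the hypercube [0, ref]
    weakly dominated by the minimization front F (attainment function, Eq. 17)."""
    rng = random.Random(seed)
    M = len(ref)
    hits = 0
    for _ in range(n_samples):
        z = [rng.uniform(0.0, ref[m]) for m in range(M)]
        # alpha_F(z) = 1 if some solution of F weakly dominates z
        if any(all(f[m] <= z[m] for m in range(M)) for f in F):
            hits += 1
    return hits / n_samples
```

A front containing the origin dominates the entire hypercube (IH = 1), while a front lying wholly outside the hypercube scores 0, which matches the remark above about zero hypervolume values.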

Table II lists the IH values obtained after running the MOEAs on the different problem cases taken up. While deciding on the reference point, we made repeated trials with different locations of r and observed that, with higher values, the measure is unable to distinguish between the different MOEAs. In this article, the reference point has been taken as {3, 3, 3, . . . , M times}. We have calculated the mean IH and σIH and obtained the 95% confidence interval for each test case. It may be noted that some of the NSGA-II and DEMO hypervolume values are 0, which may happen when all the solutions fall outside the hypercube defined by the origin and the reference point. KOSSA, MOEA/D, α-DEMO and α-DEMO-revised exhibit comparable performance. Although KOSSA and MOEA/D have shown better performance than α-DEMO and α-DEMO-revised, especially on DTLZ1, DTLZ2 and DTLZ4, the difference in the IH values is very small. Besides, α-DEMO and α-DEMO-revised have exhibited positive values of IH in cases where the IH values obtained from the other MOEAs are 0 or almost 0. This gives an advantage to our algorithms when performing the statistical testing. The test described in Section VI-D has shown that α-DEMO-revised performs better than the other MOEAs. However, a study in [46] pointed out that maximizing this indicator can yield a bad approximation of the front for small solution sets.

C. Minimal Spacing

Minimal spacing [15], a modification of spacing [47], is a measure of the spread of the solutions in the non-dominated front. For every ith solution in a non-dominated Pareto front F, di represents the normalized sum, over the objectives, of the differences between the objective values of the ith solution and those of the solution closest to it. Thus,

di = min_{j ∈ F, j ≠ i} Σ_{m=1}^{M} |f^i_m − f^j_m| / (f^max_m − f^min_m)   (18)

To compute the di's, we mark a solution once a distance computation involving it is done. Initially all solutions are considered unmarked. We take a solution, mark it, and compute the nearest distance from it, considering only the solutions that are still unmarked. In this way we avoid repeating distance computations. Once all the di's are computed, we compute the minimal spacing Sm by the following formula:

Sm(F) = sqrt( (1 / (|F| − 1)) Σ_{i=1}^{|F|} (di − d̄)^2 )   (19)

For two solution sets A and B, Sm(A) < Sm(B) implies that A is better than B.
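Eqs. (18)-(19) can be sketched as below. For brevity this naive version recomputes distances over the full set rather than using the marking scheme described above; the function name is our own:

```python
import math

def minimal_spacing(F):
    """Minimal spacing S_m of a non-dominated set F (Eqs. 18-19):
    per-solution nearest-neighbour distance under the normalized L1 metric,
    then the standard deviation of those distances."""
    M = len(F[0])
    fmin = [min(f[m] for f in F) for m in range(M)]
    fmax = [max(f[m] for f in F) for m in range(M)]
    span = [(fmax[m] - fmin[m]) or 1.0 for m in range(M)]   # guard against zero range
    d = [min(sum(abs(F[i][m] - F[j][m]) / span[m] for m in range(M))
             for j in range(len(F)) if j != i)
         for i in range(len(F))]
    dbar = sum(d) / len(d)
    return math.sqrt(sum((di - dbar) ** 2 for di in d) / (len(F) - 1))
```

Evenly spaced solutions along a front give identical nearest-neighbour distances and hence Sm = 0, the ideal spread.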

Table III lists the Sm values obtained after running the MOEAs on the different problem cases taken up. We have calculated the mean Sm and σSm and obtained the 95% confidence interval for each test case. The results largely give α-DEMO and α-DEMO-revised as the winners here. For DTLZ1 we find KOSSA to exhibit better spread than the other algorithms, but on the other test problems α-DEMO and α-DEMO-revised outperform the other MOEAs for most of the test set-ups. Thus the MOEAs have shown comparable performances in terms of IH, Sm and convergence. However, we do find that α-DEMO and α-DEMO-revised have shown better performance in most of the test set-ups. Thus in the next subsection, a statistical


TABLE I
CONVERGENCE VALUES OF DIFFERENT MOEA ALGORITHMS USED ON DTLZ1, DTLZ2, DTLZ3, DTLZ4 AND DTLZ7 TEST-PROBLEMS
(NSGA-II, MO-NSGA-II, HypE, MOEA/D and DEMO do not use the reduced objective subset, so one value per M is reported; blank cells repeat the value from the first row of the same group.)

Problem M,α | NSGA-II | KOSSA | MO-NSGA-II | HypE | MOEA/D | DEMO | α-DEMO | α-DEMO-revised
DTLZ1 10,0.5 | 225.4502 ± 5.9816 | 2.5645 ± 0.0405 | 54.3038 ± 10.5281 | 146.3039 ± 2.2147 | 2.48 ± 1.0351 | 142.2519 ± 3.1073 | 3.0023 ± 0.0466 | 0.1098 ± 0.0201
DTLZ1 20,0.25 | 176.2357 ± 3.66 | 5.9679 ± 0.0942 | 92.798 ± 13.4227 | 305.1945 ± 9.7488 | 3.2397 ± 1.1651 | 143.5408 ± 2.7434 | 3.9898 ± 0.0636 | 1.7912 ± 0.0323
DTLZ1 20,0.5 | | 5.6876 ± 0.0898 | | | | | 2.9895 ± 0.0537 | 1.2356 ± 0.021
DTLZ1 20,0.75 | | 5.9863 ± 0.0945 | | | | | 4.1256 ± 0.072 | 2.115 ± 0.0365
DTLZ1 50,0.1 | 142.4226 ± 5.5999 | 8.9653 ± 0.1415 | 173.0272 ± 2.8899 | 148.0675 ± 4.7363 | 5.5714 ± 1.4302 | 135.4134 ± 3.2615 | 1.6989 ± 0.0281 | 3.1123 ± 0.0538
DTLZ1 50,0.2 | | 6.0654 ± 0.0958 | | | | | 4.2587 ± 0.0665 | 2.8978 ± 0.0505
DTLZ1 50,0.3 | | 7.9884 ± 0.1261 | | | | | 5.1654 ± 0.0946 | 2.5569 ± 0.043
DTLZ1 50,0.5 | | 7.1659 ± 0.1131 | | | | | 5.6599 ± 0.0881 | 3.0112 ± 0.0529
DTLZ2 10,0.5 | 1.4716 ± 0.0317 | 2.1365 ± 0.0337 | 1.143 ± 0.0232 | 1.3979 ± 0.0156 | 0.7419 ± 0.0101 | 1.3891 ± 0.0161 | 1.3678 ± 0.0235 | 0.0741 ± 0.0012
DTLZ2 20,0.25 | 1.9273 ± 0.0224 | 2.6649 ± 0.0421 | 1.6054 ± 0.0247 | 1.924 ± 0.0144 | 1.3116 ± 0.005 | 1.9009 ± 0.0092 | 1.3115 ± 0.0213 | 0.8912 ± 0.015
DTLZ2 20,0.5 | | 2.9689 ± 0.0469 | | | | | 1.3769 ± 0.0242 | 1.1112 ± 0.0172
DTLZ2 20,0.75 | | 2.7512 ± 0.0434 | | | | | 1.1154 ± 0.0196 | 1.3446 ± 0.0208
DTLZ2 50,0.1 | 2.0082 ± 0.0284 | 2.1779 ± 0.0344 | 1.943 ± 0.0375 | 1.9943 ± 0.0134 | 1.4211 ± 0.0007 | 1.9836 ± 0.0091 | 1.3656 ± 0.0227 | 1.4251 ± 0.0238
DTLZ2 50,0.2 | | 1.5536 ± 0.0245 | | | | | 1.3989 ± 0.0236 | 1.1425 ± 0.0184
DTLZ2 50,0.3 | | 1.6393 ± 0.0259 | | | | | 1.4547 ± 0.0235 | 1.3215 ± 0.022
DTLZ2 50,0.5 | | 1.4733 ± 0.0233 | | | | | 1.798 ± 0.0296 | 1.0200 ± 0.0167
DTLZ3 10,0.5 | 1048.074 ± 39.3631 | 516.89 ± 8.1608 | 521.717 ± 81.9968 | 409.5137 ± 3.987 | 24.8627 ± 4.5587 | 939.7426 ± 9.8824 | 412.5689 ± 7.3166 | 5.7898 ± 0.0922
DTLZ3 20,0.25 | 978.349 ± 44.9975 | 357.266 ± 5.6406 | 607.4955 ± 67.6193 | 911.8077 ± 5.5582 | 37.8409 ± 7.2125 | 1024.4046 ± 12.5577 | 369.698 ± 6.607 | 45.5263 ± 0.7134
DTLZ3 20,0.5 | | 365.597 ± 5.7721 | | | | | 306.7451 ± 5.4608 | 94.6363 ± 1.526
DTLZ3 20,0.75 | | 56989 ± 89.9755 | | | | | 412.0075 ± 7.111 | 77.1109 ± 1.2507
DTLZ3 50,0.1 | 914.58 ± 38.5882 | 226598 ± 357.7579 | 858.3404 ± 62.1469 | 931.3156 ± 11.4185 | 31.9017 ± 4.7018 | 1074.2981 ± 9.7319 | 975.859 ± 17.8662 | 42.363 ± 0.7513
DTLZ3 50,0.2 | | 211012 ± 333.1504 | | | | | 910.8545 ± 15.672 | 62.6879 ± 1.0576
DTLZ3 50,0.3 | | 206989 ± 326.7988 | | | | | 877.6123 ± 14.6986 | 65.8745 ± 1.0293
DTLZ3 50,0.5 | | 1.365 ± 0.0216 | | | | | 992.4127 ± 16.6515 | 53.0001 ± 0.9263
DTLZ4 10,0.5 | 1.1784 ± 0.0264 | 1.5898 ± 0.0251 | 0.491 ± 0.0304 | 0.8914 ± 0.0106 | 0.7461 ± 0.0102 | 1.2663 ± 0.0347 | 1.0011 ± 0.0172 | 0.5112 ± 0.0089
DTLZ4 20,0.25 | 1.4337 ± 0.0309 | 1.0266 ± 0.0162 | 0.5600 ± 0.0302 | 0.9572 ± 0.0077 | 1.0818 ± 0.007 | 1.6816 ± 0.037 | 1.3177 ± 0.0203 | 0.8557 ± 0.0136
DTLZ4 20,0.5 | | 1.9888 ± 0.0314 | | | | | 1.225 ± 0.022 | 0.7701 ± 0.0127
DTLZ4 20,0.75 | | 1.6366 ± 0.0258 | | | | | 1.564 ± 0.0253 | 1.1291 ± 0.0192
DTLZ4 50,0.1 | 1.5907 ± 0.0287 | 1.4252 ± 0.0225 | 0.3569 ± 0.0360 | 1.1681 ± 0.0083 | 1.2577 ± 0.0031 | 1.8166 ± 0.019 | 0.9001 ± 0.0149 | 0.771 ± 0.013
DTLZ4 50,0.2 | | 1.3696 ± 0.0216 | | | | | 1.5901 ± 0.0291 | 0.4686 ± 0.0076
DTLZ4 50,0.3 | | 1.4893 ± 0.0235 | | | | | 1.711 ± 0.0311 | 0.9944 ± 0.0183
DTLZ4 50,0.5 | | 1.367 ± 0.0216 | | | | | 1.9898 ± 0.034 | 1.159 ± 0.0187
DTLZ7 10,0.5 | 42.6764 ± 0.7278 | 42.969 ± 0.6784 | 42.3306 ± 0.5151 | 40.0715 ± 0.2762 | 2.3922 ± 0.1161 | 41.6292 ± 0.2632 | 21.6543 ± 0.3655 | 5.8112 ± 0.0972
DTLZ7 20,0.25 | 78.0439 ± 0.4853 | 77.6963 ± 1.2267 | 85.9443 ± 1.7999 | 82.4481 ± 0.3383 | 6.8244 ± 0.5152 | 84.4237 ± 0.5181 | 26.6599 ± 0.4115 | 6.4997 ± 0.1147
DTLZ7 20,0.5 | | 80.5656 ± 1.272 | | | | | 31.269 ± 0.5643 | 7.1126 ± 0.1227
DTLZ7 20,0.75 | | 76.9879 ± 1.2562 | | | | | 35.1646 ± 0.5421 | 5.6797 ± 0.0989
DTLZ7 50,0.1 | 207.0405 ± 0.8223 | 211.697 ± 3.3423 | 207.0405 ± 0.8223 | 212.7212 ± 1.1756 | 7.9443 ± 0.5696 | 216.0864 ± 1.0918 | 45.644 ± 0.7635 | 8.1001 ± 0.1378
DTLZ7 50,0.2 | | 236.5896 ± 3.7353 | | | | | 44.595 ± 0.8069 | 7.7759 ± 0.1313
DTLZ7 50,0.3 | | 212.6369 ± 3.3572 | | | | | 46.6457 ± 0.7903 | 6.1156 ± 0.0976
DTLZ7 50,0.5 | | 232.4569 ± 3.6701 | | | | | 40.4777 ± 0.6485 | 5.7642 ± 0.095

comparison is reported, which provides greater insight into the relative performance of the different methods.

D. Statistical Test

A t-test has been conducted to compare the observations provided above. The null hypothesis for this test is that there is no significant difference between the performance measures of α-DEMO-revised and the other MOEAs; the alternative hypothesis is that the performance measures for α-DEMO-revised are better. Note that each dataset involved in the statistical test is large, with sample sizes of 750 or 2000, making the degrees of freedom for the t-test sufficiently large. In such cases the observed p-values are either close to 0 or close to 1: even when the difference in the means of the datasets is small, the p-values can still be near 0 or 1 because of the sample sizes. Table IV lists the p-values obtained for each of the two-sample tests. The results show that α-DEMO-revised significantly outperforms almost all the other algorithms in terms of the performance metrics. For convergence and minimal spacing, MOEA/D and NSGA-II respectively are observed to exhibit better performance, but if we look at the actual values in Tables I and III, we find that the difference in values is not very significant.

Preliminary Results on MOMBI [48]: As part of our analysis, we have made a few test runs of a recent MOEA named MOMBI, which has been found to perform satisfactorily on MaOO test problems. We performed this analysis on the 10-dimensional DTLZ problems on which we ran the other MOEAs. MOMBI shows good performance on DTLZ1, where the δ obtained is 0.6944, IH is 0.9906 and Sm is 0.2381. For DTLZ2, δ is 1.1719, IH is 0.9936 and Sm is 0.3069. Similarly, the values of δ, IH and Sm are 6.1454, 0.0048 and 0.1857 for DTLZ3; 0.8055, 0.9928 and 0.1434 for DTLZ4; and 13.6311, 0 and 0.0254 for DTLZ7. We find that α-DEMO-revised gives better δ values than MOMBI for these test cases, and better Sm values in four out of the five test cases. In terms of IH, MOMBI outperforms α-DEMO-revised in three of the five test cases, although the difference is not significant. However, we wish to conduct more test runs and take up more test cases to compare MOMBI with our proposed algorithms.

Other Performance Metrics: We also look into other performance metrics to strengthen our test results. There are several existing metrics that can be used to compare the performances of the MOEAs, such as the R2 indicator [48], [49], Generational Distance (GD) [37] and others. Brockhoff et al. [49] showed that the hypervolume value and the R2 indicator exhibit a positive correlation, while GD is very similar to, or a replacement of, the convergence metric already discussed above [37]. We have chosen another performance metric, named the purity of non-dominated solutions [15], and show some preliminary results based on it. Consider N (N > 2) MOEAs to be compared, each run on the same MaOO problem. We consider the non-dominated solution sets Ri, i = 1, . . . , N obtained from these MOEAs, with ri = |Ri|. Although the solution sets may have solutions in common, we define the Ri's to be mutually exclusive sets. Let us define R∗ as

R∗ = ∪_{i=1}^{N} Ri   (20)


TABLE II
HYPERVOLUME INDICATOR VALUES OF DIFFERENT MOEA ALGORITHMS USED ON DTLZ1, DTLZ2, DTLZ3, DTLZ4 AND DTLZ7 TEST-PROBLEMS
(NSGA-II, MO-NSGA-II, HypE, MOEA/D and DEMO do not use the reduced objective subset, so one value per M is reported; blank cells repeat the value from the first row of the same group.)

Problem M,α | NSGA-II | KOSSA | MO-NSGA-II | HypE | MOEA/D | DEMO | α-DEMO | α-DEMO-revised
DTLZ1 10,0.5 | 0.0044 ± 0.0061 | 0.9635 ± 0.0152 | 0.0027 ± 0.0048 | 0 ± 0 | 0.8132 ± 0.0984 | 0 ± 0 | 0.8113 ± 0.015 | 0.9536 ± 0.0166
DTLZ1 20,0.25 | 0 ± 0 | 0.9556 ± 0.0151 | 0.0033 ± 0.0064 | 0 ± 0 | 0.7233 ± 0.1172 | 0 ± 0 | 0.8798 ± 0.0154 | 0.9155 ± 0.0165
DTLZ1 20,0.5 | | 0.9756 ± 0.0154 | | | | | 0.8963 ± 0.0148 | 0.9779 ± 0.0176
DTLZ1 20,0.75 | | 0.9553 ± 0.0151 | | | | | 0.9112 ± 0.0148 | 0.9146 ± 0.0142
DTLZ1 50,0.1 | 0 ± 0 | 0.9112 ± 0.0144 | 0 | 0 ± 0 | 0.5083 ± 0.1322 | 0 ± 0 | 0.9122 ± 0.0148 | 0.9540 ± 0.0167
DTLZ1 50,0.2 | | 0.9235 ± 0.0146 | | | | | 0.9753 ± 0.0170 | 0.8932 ± 0.0152
DTLZ1 50,0.3 | | 0.9339 ± 0.0147 | | | | | 0.8922 ± 0.0165 | 0.8464 ± 0.0132
DTLZ1 50,0.5 | | 0.9112 ± 0.0144 | | | | | 0.8465 ± 0.0138 | 0.9656 ± 0.0169
DTLZ2 10,0.5 | 0.8399 ± 0.0079 | 0.8332 ± 0.0132 | 0.671 ± 0.0184 | 0.9514 ± 0.0034 | 1 ± 0 | 0.8863 ± 0.0059 | 0.9989 ± 0.0162 | 0.911 ± 0.0155
DTLZ2 20,0.25 | 0.828 ± 0.007 | 0.8963 ± 0.0142 | 0.6964 ± 0.0202 | 0.9372 ± 0.0019 | 1 ± 0 | 0.8487 ± 0.0059 | 0.9355 ± 0.0147 | 0.9560 ± 0.0159
DTLZ2 20,0.5 | | 0.9033 ± 0.0143 | | | | | 0.9988 ± 0.0175 | 0.9213 ± 0.015
DTLZ2 20,0.75 | | 0.9412 ± 0.0149 | | | | | 0.9781 ± 0.0157 | 0.9145 ± 0.016
DTLZ2 50,0.1 | 0.7645 ± 0.0144 | 0.9453 ± 0.0149 | 0.6559 ± 0.0362 | 0.9253 ± 0.0052 | 1 ± 0 | 0.8231 ± 0.0083 | 0.9788 ± 0.0155 | 0.9232 ± 0.0145
DTLZ2 50,0.2 | | 0.9441 ± 0.0149 | | | | | 0.9153 ± 0.0155 | 0.9212 ± 0.016
DTLZ2 50,0.3 | | 0.9224 ± 0.0146 | | | | | 0.9024 ± 0.0149 | 0.9455 ± 0.0161
DTLZ2 50,0.5 | | 0.9446 ± 0.0149 | | | | | 0.9900 ± 0.0174 | 0.921 ± 0.0145
DTLZ3 10,0.5 | 0 ± 0 | 0.0423 ± 0.0007 | 0 | 0 ± 0 | 0.0235 ± 0.0388 | 0 ± 0 | 0.0658 ± 0.0011 | 0.1269 ± 0.0021
DTLZ3 20,0.25 | 0 ± 0 | 0.0301 ± 0.0005 | 0 | 0 ± 0 | 0.0301 ± 0.0391 | 0 ± 0 | 0.0913 ± 0.0015 | 0.1565 ± 0.0025
DTLZ3 20,0.5 | | 0.0356 ± 0.0006 | | | | | 0.5516 ± 0.0095 | 0.3213 ± 0.0055
DTLZ3 20,0.75 | | 0.0124 ± 0 | | | | | 0.3487 ± 0.006 | 0.2954 ± 0.005
DTLZ3 50,0.1 | 0 ± 0 | 0 ± 0 | 0 | 0 ± 0 | 0 ± 0.0001 | 0 ± 0 | 0.3547 ± 0.0063 | 0.4547 ± 0.0076
DTLZ3 50,0.2 | | 0 ± 0 | | | | | 0.5590 ± 0.0091 | 0.4490 ± 0.008
DTLZ3 50,0.3 | | 0 ± 0 | | | | | 0.5069 ± 0.0083 | 0.6569 ± 0.0104
DTLZ3 50,0.5 | | 0 ± 0 | | | | | 0.4811 ± 0.0084 | 0.7021 ± 0.0116
DTLZ4 10,0.5 | 0.9765 ± 0.0056 | 0.9512 ± 0.015 | 0.6431 ± 0.0146 | 0.8741 ± 0.0169 | 1 ± 0 | 0.9956 ± 0.0012 | 0.9766 ± 0.0164 | 0.97 ± 0.0155
DTLZ4 20,0.25 | 0.9914 ± 0.003 | 0.9636 ± 0.0152 | 0.6646 ± 0.019 | 0.8963 ± 0.0103 | 1 ± 0 | 0.9829 ± 0.0111 | 0.981 ± 0.0152 | 0.9718 ± 0.015
DTLZ4 20,0.5 | | 0.9112 ± 0.0144 | | | | | 0.9862 ± 0.0178 | 0.9758 ± 0.0173
DTLZ4 20,0.75 | | 0.9345 ± 0.0148 | | | | | 0.9868 ± 0.0172 | 0.9768 ± 0.0158
DTLZ4 50,0.1 | 0.9986 ± 0.0004 | 0.9112 ± 0.0144 | 0.6742 ± 0.0136 | 0.9601 ± 0.0044 | 1 ± 0 | 0.9759 ± 0.0188 | 0.9896 ± 0.0166 | 0.9769 ± 0.0153
DTLZ4 50,0.2 | | 0.9321 ± 0.0147 | | | | | 0.971 ± 0.0177 | 0.9714 ± 0.0167
DTLZ4 50,0.3 | | 0.9635 ± 0.0152 | | | | | 0.9827 ± 0.016 | 0.9785 ± 0.0176
DTLZ4 50,0.5 | | 0.9412 ± 0.0149 | | | | | 0.9749 ± 0.0151 | 0.9771 ± 0.0163
DTLZ7 10,0.5 | 0 ± 0 | 0.0012 ± 0 | 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.01200 ± 0.0002 | 0.0116 ± 0.0002
DTLZ7 20,0.25 | 0 ± 0 | 0 ± 0 | 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.0074 ± 0.0001 | 0.0075 ± 0.0001
DTLZ7 20,0.5 | | 0 ± 0 | | | | | 0.0115 ± 0.0002 | 0.0085 ± 0.0002
DTLZ7 20,0.75 | | 0.0112 ± 0.0002 | | | | | 0.0104 ± 0.0002 | 0.0103 ± 0.0002
DTLZ7 50,0.1 | 0 ± 0 | 0 ± 0 | 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.0116 ± 0.0002 | 0.0082 ± 0.0001
DTLZ7 50,0.2 | | 0 ± 0 | | | | | 0.0117 ± 0.0002 | 0.0072 ± 0.0001
DTLZ7 50,0.3 | | 0.001 ± 0 | | | | | 0.0078 ± 0.0001 | 0.0119 ± 0.0002
DTLZ7 50,0.5 | | 0 ± 0 | | | | | 0.0102 ± 0.0002 | 0.0113 ± 0.0002

Hence, by definition,

r∗ = Σ_{i=1}^{N} ri   (21)

Next we sort the set R∗ by the principle of non-dominated sorting and obtain the solution set R∗1, the rank-one solutions with respect to the set R∗. Let us define R∗i as

R∗i = {γ | γ ∈ Ri ∧ γ ∈ R∗1},  i = 1, . . . , N   (22)

Subsequently we define r∗i = |R∗i|. The purity of the kth algorithm (k ∈ {1, 2, . . . , N}) is given as

pk = r∗k / rk   (23)

Clearly, pk lies between 0 and 1, and a value close to 1 indicates better performance. This metric indicates how well an MOEA performs with respect to non-dominated sorting of the combined solution set R∗; a solution set having good convergence may still not have a high value of pk. Since Σ_{i=1}^{N} pi need not equal 1, it may happen that pi = 1 for all i.
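Eqs. (20)-(23) can be sketched as below for minimization problems. The function names are our own illustrative choices, and the sets are assumed to be lists of objective vectors with no duplicates across MOEAs, matching the mutual-exclusivity assumption above:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def purity(solution_sets):
    """Purity (Eq. 23) of each MOEA's non-dominated set: the fraction of its
    solutions that remain rank-one in the union R* of all sets."""
    r_star = [s for R in solution_sets for s in R]          # R*, Eq. (20)
    rank_one = [s for s in r_star
                if not any(dominates(t, s) for t in r_star)]  # R*_1
    return [sum(1 for s in R if s in rank_one) / len(R) for R in solution_sets]
```

For example, a set whose solutions all survive non-dominated sorting of R* gets purity 1, while a set with half its solutions dominated by another MOEA's solutions gets 0.5.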

We have provided the results of a preliminary study comparing the proposed method with HypE using the purity measure. We chose HypE because it performs well relative to the other MOEAs in Tables I, II and III. Table V shows the purity values of HypE and α-DEMO-revised for the different test problems considered earlier. We find that α-DEMO-revised clearly outperforms

TABLE V
PURITY VALUES OF HYPE AND α-DEMO-REVISED

Test Problem | Number of Objectives | HypE | α-DEMO-revised
DTLZ1 | 10 | 0.6700 | 0.9900
DTLZ1 | 20 | 0.8800 | 0.9800
DTLZ1 | 50 | 0.3700 | 0.7746
DTLZ2 | 10 | 0.7654 | 0.8000
DTLZ2 | 20 | 0.6560 | 0.8600
DTLZ2 | 50 | 0.8600 | 0.3130
DTLZ3 | 10 | 0.7600 | 0.8530
DTLZ3 | 20 | 0.8900 | 1.0000
DTLZ3 | 50 | 0.8800 | 0.9900
DTLZ4 | 10 | 0.0700 | 0.9932
DTLZ4 | 20 | 0.1800 | 0.9900
DTLZ4 | 50 | 0.9900 | 1.0000
DTLZ7 | 10 | 0.4800 | 0.9166
DTLZ7 | 20 | 0.9900 | 1.0000
DTLZ7 | 50 | 1.0000 | 1.0000

HypE. However, this has been a preliminary test with respect to only one algorithm. As part of future work, we may consider other performance metrics, or compare the performance with other MOEAs.


TABLE III
MINIMAL SPACING VALUES OF DIFFERENT MOEA ALGORITHMS USED ON DTLZ1, DTLZ2, DTLZ3, DTLZ4 AND DTLZ7 TEST-PROBLEMS
(NSGA-II, MO-NSGA-II, HypE, MOEA/D and DEMO do not use the reduced objective subset, so one value per M is reported; blank cells repeat the value from the first row of the same group.)

Problem M,α | NSGA-II | KOSSA | MO-NSGA-II | HypE | MOEA/D | DEMO | α-DEMO | α-DEMO-revised
DTLZ1 10,0.5 | 0.2058 ± 0.0056 | 0.0412 ± 0.0007 | 0.6608 ± 0.1089 | 0.2145 ± 0.0101 | 0.0936 ± 0.0072 | 0.1756 ± 0.0081 | 3.0023 ± 0.0483 | 0.0081 ± 0.0001
DTLZ1 20,0.25 | 0.4441 ± 0.0341 | 0.0879 ± 0.0014 | 1.1304 ± 0.1508 | 0.5771 ± 0.0328 | 0.2903 ± 0.0267 | 0.4418 ± 0.0284 | 0.1635 ± 0.0028 | 0.6182 ± 0.0099
DTLZ1 20,0.5 | | 0.0905 ± 0.0014 | | | | | 0.1477 ± 0.0027 | 0.6766 ± 0.0132
DTLZ1 20,0.75 | | 0.0973 ± 0.0015 | | | | | 0.1014 ± 0.0016 | 0.7402 ± 0.0113
DTLZ1 50,0.1 | 1.3551 ± 0.1071 | 0.2112 ± 0.0033 | 3.4323 ± 0.1347 | 1.2316 ± 0.1105 | 0.8932 ± 0.1005 | 1.4098 ± 0.1128 | 0.3353 ± 0.0059 | 2.0462 ± 0.0408
DTLZ1 50,0.2 | | 0.2361 ± 0.0037 | | | | | 0.2362 ± 0.0042 | 1.8531 ± 0.0366
DTLZ1 50,0.3 | | 0.2323 ± 0.0037 | | | | | 0.2123 ± 0.0037 | 2.4863 ± 0.0573
DTLZ1 50,0.5 | | 0.2775 ± 0.0044 | | | | | 0.3093 ± 0.0053 | 2.9283 ± 0.035
DTLZ2 10,0.5 | 0.1585 ± 0.0076 | 0.0382 ± 0.0006 | 0.2422 ± 0.0213 | 0.1149 ± 0.0047 | 0.0653 ± 0.0031 | 0.1432 ± 0.0063 | 0.31 ± 0.0051 | 0.0691 ± 0.0011
DTLZ2 20,0.25 | 0.3475 ± 0.0188 | 0.0958 ± 0.0015 | 0.6449 ± 0.0663 | 0.4757 ± 0.0207 | 0.3375 ± 0.021 | 0.3741 ± 0.0199 | 0.6707 ± 0.0121 | 0.0503 ± 0.0011
DTLZ2 20,0.5 | | 0.0856 ± 0.0014 | | | | | 0.5865 ± 0.0099 | 0.0517 ± 0.0011
DTLZ2 20,0.75 | | 0.0872 ± 0.0014 | | | | | 0.5874 ± 0.0102 | 0.0626 ± 0.001
DTLZ2 50,0.1 | 0.7077 ± 0.0759 | 0.3151 ± 0.005 | 1.7519 ± 0.2126 | 1.5492 ± 0.1667 | 1.3281 ± 0.1246 | 1.2048 ± 0.0928 | 0.1781 ± 0.0031 | 0.1178 ± 0.0018
DTLZ2 50,0.2 | | 0.3689 ± 0.0058 | | | | | 0.161 ± 0.0028 | 0.1017 ± 0.0056
DTLZ2 50,0.3 | | 0.2745 ± 0.0043 | | | | | 0.2285 ± 0.0042 | 0.3203 ± 0.0022
DTLZ2 50,0.5 | | 0.2669 ± 0.0042 | | | | | 0.3231 ± 0.0059 | 0.1075 ± 0.0035
DTLZ3 10,0.5 | 0.1715 ± 0.0078 | 0.5696 ± 0.009 | 0.5397 ± 0.1145 | 0.2042 ± 0.017 | 0.1051 ± 0.0058 | 0.1516 ± 0.0057 | 0.3215 ± 0.0055 | 0.1553 ± 0.0026
DTLZ3 20,0.25 | 0.3722 ± 0.0268 | 0.9122 ± 0.0144 | 1.1103 ± 0.1969 | 0.5278 ± 0.0277 | 0.4797 ± 0.0366 | 0.3261 ± 0.0181 | 0.5489 ± 0.0096 | 0.1372 ± 0.0086
DTLZ3 20,0.5 | | 0.8659 ± 0.0137 | | | | | 1.6781 ± 0.0259 | 0.6915 ± 0.0036
DTLZ3 20,0.75 | | 1.0123 ± 0.0016 | | | | | 0.3532 ± 0.0058 | 0.2318 ± 0.0083
DTLZ3 50,0.1 | 1.3118 ± 0.1669 | 3.6969 ± 0.0058 | 2.8434 ± 0.3207 | 1.9071 ± 0.1567 | 1.7331 ± 0.1186 | 1.0212 ± 0.0769 | 2.0687 ± 0.0375 | 0.8651 ± 0.0128
DTLZ3 50,0.2 | | 2.1022 ± 0.0033 | | | | | 1.4707 ± 0.0251 | 0.5397 ± 0.0131
DTLZ3 50,0.3 | | 3.0003 ± 0.0047 | | | | | 2.0114 ± 0.0343 | 0.7966 ± 0.0123
DTLZ3 50,0.5 | | 0.1101 ± 0.0017 | | | | | 1.8518 ± 0.0303 | 0.5255 ± 0.0098
DTLZ4 10,0.5 | 0.1774 ± 0.0132 | 0.1356 ± 0.0021 | 0.5875 ± 0.0341 | 0.2118 ± 0.0065 | 0.1107 ± 0.009 | 0.1807 ± 0.0119 | 0.0897 ± 0.0015 | 0.1089 ± 0.0019
DTLZ4 20,0.25 | 0.2227 ± 0.0217 | 0.1758 ± 0.0028 | 0.9238 ± 0.0488 | 0.3199 ± 0.0075 | 0.14 ± 0.0055 | 0.383 ± 0.0259 | 0.1651 ± 0.0027 | 0.1020 ± 0.002
DTLZ4 20,0.5 | | 0.2312 ± 0.0037 | | | | | 0.122 ± 0.0022 | 0.1254 ± 0.0018
DTLZ4 20,0.75 | | 0.9899 ± 0.0156 | | | | | 0.1541 ± 0.0028 | 0.1018 ± 0.0017
DTLZ4 50,0.1 | 0.4194 ± 0.0081 | 0.7025 ± 0.0111 | 1.7656 ± 0.14 | 0.4955 ± 0.0059 | 0.4759 ± 0.0079 | 0.8686 ± 0.058 | 0.4214 ± 0.0071 | 0.2749 ± 0.0057
DTLZ4 50,0.2 | | 0.6523 ± 0.0103 | | | | | 0.4742 ± 0.0078 | 0.2722 ± 0.005
DTLZ4 50,0.3 | | 0.4758 ± 0.0075 | | | | | 0.6529 ± 0.0102 | 0.2801 ± 0.0057
DTLZ4 50,0.5 | | 0.6123 ± 0.0097 | | | | | 0.6901 ± 0.0123 | 0.2854 ± 0.0064
DTLZ7 10,0.5 | 0.099 ± 0.0029 | 0.1253 ± 0.002 | 0.1945 ± 0.0131 | 0.0327 ± 0.0012 | 0.3639 ± 0.0139 | 0.0857 ± 0.0065 | 0.0779 ± 0.0012 | 0.0121 ± 0.0002
DTLZ7 20,0.25 | 0.2598 ± 0.009 | 1.2356 ± 0.0195 | 0.4697 ± 0.0797 | 0.0494 ± 0.0012 | 1.1909 ± 0.0503 | 0.1162 ± 0.0021 | 0.0405 ± 0.0007 | 0.0344 ± 0.0004
DTLZ7 20,0.5 | | 1.2525 ± 0.0198 | | | | | 0.1492 ± 0.0025 | 0.0358 ± 0.0003
DTLZ7 20,0.75 | | 0.9986 ± 0.0158 | | | | | 0.0678 ± 0.0012 | 0.0402 ± 0.0006
DTLZ7 50,0.1 | 0.412 ± 0.0153 | 2.5698 ± 0.0406 | 0.412 ± 0.0153 | 0.1051 ± 0.0016 | 3.2089 ± 0.3373 | 0.1671 ± 0.0025 | 0.0934 ± 0.0014 | 0.1192 ± 0.0017
DTLZ7 50,0.2 | | 4.8556 ± 0.0767 | | | | | 0.1017 ± 0.0016 | 0.1112 ± 0.0017
DTLZ7 50,0.3 | | 4.9239 ± 0.0777 | | | | | 0.1052 ± 0.0019 | 0.1094 ± 0.0017
DTLZ7 50,0.5 | | 4.112 ± 0.0649 | | | | | 0.0918 ± 0.0016 | 0.0993 ± 0.0019

TABLE IV
p-VALUES PRODUCED BY THE TWO-SAMPLE T-TEST BETWEEN α-DEMO-REVISED AND OTHER MOEAS INCLUDING α-DEMO

Performance Measure | NSGA-II | KOSSA | MO-NSGA-II | HypE | MOEA/D | DEMO
Convergence | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | 1 | ≪ 0.005
Hypervolume Indicator | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005
Minimal Spacing | 1 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005 | ≪ 0.005

E. Performance over other Scalable Test Functions

To strengthen our standpoint on the performance of our algorithm, we have taken up two other standard scalable test functions, WFG1 and WFG2 [50]. The WFG problem toolkit provides a methodology by which a user can define a problem according to his/her needs. To limit the scope of the current work, however, we have run α-DEMO-revised on WFG1 and WFG2 only, and recorded the performance in terms of convergence, the hypervolume indicator, minimal spacing and purity. We have compared the performance with HypE, because of its performance in Tables I, II and III. Table VI gives the convergence, hypervolume indicator, minimal spacing and purity metrics for α-DEMO-revised and HypE on WFG1 and WFG2. Although these preliminary results show comparable performance in terms of convergence values, α-DEMO-revised excels over HypE on several occasions in terms of the other performance metrics. Detailed studies based on the other WFG functions and other variants developed with the toolkit can be a subject of our future work.

F. Comparison of the Number of Objective Computations

It is natural to expect a large amount of time gain when anobjective reduction technique is integrated with any MOEA.It is stated earlier that objective function calculation takes upmaximum time in an optimization operation. When we aredealing with set of 20 or 50 objectives, a reduced objectiveset of 5 or 10 objectives saves much time. Since, the numberof objective computations for both α-DEMO and α-DEMO-revised are the same, as also for NSGA-II and DEMO, thuswe present a graphical representation of the comparison ofthe number of objective computations as required by DEMO,KOSSA and α-DEMO. KOSSA reduces the objective set stepby step, and integrates all the objectives in the end. In contrast,α-DEMO takes up the α subset of objectives and after agiven number of iterations computes the remaining 1 − αobjectives to recompute the full set. Figs. 6 and 7 showthe comparison of the number of objective computations ofKOSSA and α-DEMO (or α-DEMO-revised), the two algo-rithms which are based on objective reduction. The horizontal


TABLE VI
CONVERGENCE, HYPERVOLUME INDICATOR, MINIMAL SPACING AND PURITY METRICS FOR α-DEMO-REVISED AND HYPE ON WFG1 AND WFG2
(HypE values are reported once per problem and objective dimension; "-" marks cells not reported separately.)

Problem  M,α      Convergence                             Hypervolume                         Minimal Spacing                     Purity
                  α-DEMO-revised     HypE                 α-DEMO-revised   HypE               α-DEMO-revised   HypE               α-DEMO-rev.  HypE
WFG1     10,0.5   81.1794 ± 0.0086   81.4258 ± 0.1852     0.9401 ± 0.0110  0.7501 ± 0.0013    0.5908 ± 0.0120  0.4451 ± 0.0110    1            1
WFG1     20,0.25  371.3509 ± 0.1537  371.7120 ± 0.4120    0.8241 ± 0.1001  0.6041 ± 0.1200    0.4210 ± 0.0012  0.7512 ± 0.0012    0.9894       0.5267
WFG1     20,0.5   371.5801 ± 0.0812  -                    0.8488 ± 0.0100  -                  0.6427 ± 0.0541  -                  -            -
WFG1     20,0.75  371.6301 ± 0.6029  -                    0.8430 ± 0.0120  -                  1.1240 ± 0.0151  -                  -            -
WFG1     50,0.1   1547.70 ± 0.0210   1557.55 ± 0.4120     0.8820 ± 0.0100  0.7720 ± 0.4123    0.7220 ± 0.0108  0.6061 ± 0.0010    1            0.99
WFG1     50,0.2   1547.70 ± 0.0336   -                    0.8120 ± 0.0110  -                  1.201 ± 0.0010   -                  -            -
WFG1     50,0.3   1547.70 ± 0.1391   -                    0.9010 ± 0.1301  -                  0.8081 ± 0.1240  -                  -            -
WFG1     50,0.4   1547.70 ± 0.1232   -                    0.8011 ± 0.1212  -                  1.1220 ± 0.0130  -                  -            -
WFG2     10,0.5   81.0541 ± 0.0086   81.2251 ± 0.1852     0.9088 ± 0.0001  0.8084 ± 0.0377    0.0126 ± 0.0001  0.1142 ± 0.0010    0.7654       0.01
WFG2     20,0.25  371.7711 ± 0.1537  371.8500 ± 0.4439    0.9913 ± 0.0004  0.7654 ± 0.0057    0.0101 ± 0.0012  0.1957 ± 0.0284    0.4303       0.001
WFG2     20,0.5   371.3061 ± 0.0955  -                    0.9917 ± 0.0003  -                  0.0111 ± 0.0007  -                  -            -
WFG2     20,0.75  371.6301 ± 0.6029  -                    0.9948 ± 0.0019  -                  0.0128 ± 0.0151  -                  -            -
WFG2     50,0.1   1547.50 ± 0.0222   1555.55 ± 0.4123     0.9816 ± 0.0313  0.4175 ± 0.0330    0.0163 ± 0.0108  0.7844 ± 0.1840    0.7513       0.001
WFG2     50,0.2   1547.50 ± 0.0336   -                    0.9906 ± 0.0092  -                  0.0133 ± 0.0010  -                  -            -
WFG2     50,0.3   1547.50 ± 0.1391   -                    0.9823 ± 0.0151  -                  0.0113 ± 0.0090  -                  -            -
WFG2     50,0.4   1547.50 ± 0.1232   -                    0.9665 ± 0.0080  -                  0.0123 ± 0.0031  -                  -            -

[Fig. 6 is a bar chart: x-axis, length of the subset of objectives (5, 10, 15); y-axis, number of objective computations (in 1000s). Bar heights: α-DEMO 24, 30, 35; KOSSA 36.4, 37.2, 38.68; DEMO 40 in each case.]

Fig. 6. Comparison among the objective computations for DEMO, KOSSA and α-DEMO for 20 objectives and reduced sets of 5, 10 and 15 objectives. For DEMO there is no objective reduction.

line corresponding to DEMO provides the number of objective computations required in the base or exhaustive case. MO-NSGA-II does not work by reducing the objective set, and hence we do not present its number of objective computations. The other algorithms (HypE, MOEA/D) are designed to work on MaOO problems; we wish to present a detailed study of their objective computations in a future work. It is seen that, in terms of the number of objective computations, α-DEMO (or α-DEMO-revised) comprehensively outperforms the base model DEMO as well as KOSSA. This implies that the time required is also less for α-DEMO than for the other algorithms. Fig. 8 shows the maximum time gain of α-DEMO (or α-DEMO-revised) with respect to DEMO and KOSSA. It is calculated as the difference between the number of objective computations in α-DEMO and in the other MOEA, divided by the number of objective computations for the other MOEA. As can be seen, the proposed approach has a clear gain with respect to the

[Fig. 7 is a bar chart: x-axis, length of the subset of objectives (5, 10, 15, 20); y-axis, number of objective computations (in 1000s). Bar heights: α-DEMO 55, 60, 65, 75; KOSSA 89.2, 90.52, 90.88, 94.12; DEMO 100 in each case.]

Fig. 7. Comparison among the objective computations for KOSSA and α-DEMO for 50 objectives and reduced sets of 5, 10, 15 and 25 objectives. For DEMO there is no objective reduction.

two existing algorithms. The gain in time can be explained by the large number of objective computations avoided while running the proposed EMO.
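The bookkeeping behind these figures can be sketched as follows. The population size, generation count and refresh interval (`pop_size`, `generations`, `refresh_every`) are illustrative assumptions, not the paper's experimental settings; the sketch only reproduces the counting scheme of evaluating ⌈αM⌉ objectives per generation, periodically restoring the full set, and expressing the gain as the difference in evaluation counts divided by the other algorithm's count.

```python
import math

def demo_evaluations(pop_size, generations, n_obj):
    """Plain DEMO: every objective is evaluated for every candidate."""
    return pop_size * generations * n_obj

def alpha_demo_evaluations(pop_size, generations, n_obj, alpha, refresh_every):
    """alpha-DEMO-style count: only ceil(alpha*M) objectives per generation,
    with the remaining objectives recomputed once every refresh_every
    generations to restore the full objective set."""
    reduced = math.ceil(alpha * n_obj)
    per_generation = pop_size * reduced
    refresh_cost = (generations // refresh_every) * pop_size * (n_obj - reduced)
    return generations * per_generation + refresh_cost

def time_gain(other, ours):
    """Gain expressed as a proportion of the other algorithm's count."""
    return (other - ours) / other

full = demo_evaluations(100, 200, 20)                                      # 400000
ours = alpha_demo_evaluations(100, 200, 20, alpha=0.25, refresh_every=10)  # 130000
print(time_gain(full, ours))                                               # 0.675
```

Even with the periodic full refresh included, the reduced-objective run needs roughly a third of the evaluations of the exhaustive case in this illustrative setting.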

VII. APPLICATION IN STRUCTURAL OPTIMIZATION

As explained earlier, EMOs are fast techniques, easy to apply and problem-independent. Thus they find application in a plethora of engineering problems. Structural optimization forms a branch of problems that involves the optimal design of a given structure. In other words, the structure should be able to sustain the load applied to it and at the same time meet other design criteria, such as reducing the total weight and limiting the number of different section shapes used in it. Cheng et al. [51] have used a Pareto GA and a fuzzy penalty function to solve a four-bar pyramid truss, a 72-bar space truss and a four-bar plane truss, all of them involving two objective functions. Azarm et al. [52] have taken up a two-bar truss and a vibrating


[Fig. 8 is a bar chart: x-axis, number of objectives (10, 20, 50); y-axis, gain in time expressed as a proportion (0.2 to 0.8), with one bar series for KOSSA/α-DEMO and one for the base model DEMO/α-DEMO.]

Fig. 8. Pairwise comparison of the maximum time gain, in terms of objective computations, between α-DEMO and the other MOEAs (KOSSA and DEMO).

platform as their examples, each involving two objectives. The Multi-Objective GA (MOGA) was improved with filtering and mating restrictions, and statistical stopping criteria were introduced. Similar works [53], [54], [55] involving two or three objectives were also formulated and solved using EMOs. Sandgren [56] has coupled goal programming with a GA to solve MaOO problems (with more than four objectives): several linkage design problems, each with nine objectives, and a ten-bar truss design problem with seven objectives. In this section we take up the problem of designing a common factory-shed truss with six objective functions. Fig. 9 gives the front view of the truss. It is a Pratt roof truss with 32 nodes and 61 members.

[Fig. 9 shows the truss outline with point-load labels of 1 and 1/2 at the nodes.]

Fig. 9. Front view of the factory-shed truss.

The truss is supported by a pin joint at the left and a roller joint at the right. The section members are to be designed and hence are the decision variables for this problem. The design has to be made as per the Indian Standards code on Steel Tubes for Structural Purposes, IS 1161:1998. We have taken up different load cases, namely self-weight, wind loads and their combinations, and analysed the structure to find the stress level in the members, σ, and the deflections, δ. Fig. 9 shows the truss under unit vertical and oblique point loads at its nodes. The vertical loads can be visualized as a combination of dead and live load, while the oblique loads can be visualized as the wind load. The wind force is assumed to act on the left side of the truss. The six objectives we have taken up for optimization are listed below:

1) Minimize the volume of the whole structure, Σ(length × area of cross-section).
2) Minimize the maximum stress developed in a section, σmax.
3) Minimize the maximum deflection developed in a section, δmax.
By varying the angle of application of the load and the magnitude of the load (±10%):
4) Minimize the range of stress, σmax − σmin.
5) Minimize the range of deflection, δmax − δmin.
6) Minimize the number of different sections required.

Under the given load conditions a simple finite element analysis has been performed to calculate the force, deflection and stress developed in each section member. For good performance of the structure, σ and δ have to be minimized, as well as their variation across load cases. From an economic point of view, the total weight of the structure and, simultaneously, the number of different sections used have to be minimized. We have neglected buckling failure of the axial members, thermal stresses, etc., to simplify the problem. We have used α-DEMO-revised to solve this problem and compared the results with three other algorithms: KOSSA, MO-NSGA-II and MOEA/D. Each MOEA has been run 50 times on the test problem. The results are shown in Table VII. Since MOEA/D has shown better results than HypE, the latter has not been included in the truss optimization. The values do not differ greatly across the MOEAs, but α-DEMO-revised has performed better than the others.
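A sketch of how the six objectives could be assembled from a black-box structural analysis is given below. The `run_fea` callable, the member dictionary layout and the treatment of the perturbed load cases are all illustrative assumptions; the paper uses its own finite element routine and section data per IS 1161:1998.

```python
def truss_objectives(members, load_cases, run_fea):
    """Assemble the six objective values for one candidate design.

    members:    list of dicts with 'length', 'area', 'section_id'.
    load_cases: the load cases obtained by perturbing the load angle and
                magnitude (here, opaque tokens passed through to run_fea).
    run_fea:    callable(load_case) -> (member_stresses, nodal_deflections).
    """
    volume = sum(m["length"] * m["area"] for m in members)      # objective 1
    max_stress, max_defl = [], []
    for case in load_cases:
        stresses, deflections = run_fea(case)
        max_stress.append(max(abs(s) for s in stresses))
        max_defl.append(max(abs(d) for d in deflections))
    return (
        volume,                                 # 1) total material volume
        max(max_stress),                        # 2) sigma_max
        max(max_defl),                          # 3) delta_max
        max(max_stress) - min(max_stress),      # 4) stress range over cases
        max(max_defl) - min(max_defl),          # 5) deflection range over cases
        len({m["section_id"] for m in members}) # 6) distinct section types
    )
```

Objectives 1 and 6 depend only on the design, while 2 through 5 require one FEA solve per load case; this is exactly why reducing the number of objective computations pays off in this application.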

TABLE VII
COMPARISON OF OBSERVATIONAL VALUES FOR HYPERVOLUME INDICATOR IH AND MINIMAL SPACING Sm

     MO-NSGA-II       KOSSA            MOEA/D           α-DEMO-revised
IH   0.8127 ± 0.0159  0.7214 ± 0.1033  0.8532 ± 0.0021  0.8845 ± 0.1760
Sm   0.0932 ± 0.0034  0.1669 ± 0.0612  0.1016 ± 0.0607  0.0909 ± 0.0457

VIII. CONCLUSIONS AND FUTURE SCOPE OF WORK

The correlation-based ordering used in selecting the [αM] set of objectives provides a gain in computation time for MaOO problems. The ordering and selection of a subset of objectives could have been applied to any other MOO strategy, but DEMO has been shown to outperform other EMOs [37], and hence we have incorporated our selection technique in it. The technique is applied to eight cases of five DTLZ functions, with the number of objectives varying from 10 to 50 and the value of α varying from 0.1 to 0.6. The proposed method is also applied to a six-objective problem of designing a factory-shed truss. We have deliberately ignored some constraints in the truss problem, as our proposed method is currently developed for unconstrained problems only; our objective has been to show the applicability of the method to standard engineering problems. Fig. 8 shows that as the dimension of the objective space increases, the time gain in computation also increases. In addition, the revised algorithm, α-DEMO-revised, helps preserve the convergence and distribution of solutions over the objective space.
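The correlation-based selection idea can be sketched as follows, under the simplifying assumption that the objectives with the lowest mean correlation to the rest are the most conflicting (negative correlation indicating conflict, high positive correlation indicating redundancy). This is an illustration of the general idea only, with hypothetical helper names; α-DEMO's actual ordering and conflict test are defined earlier in the paper.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_conflicting(objective_values, alpha):
    """objective_values: M lists, one per objective, holding that objective's
    value for each population member. Returns the indices of the ceil(alpha*M)
    objectives with the lowest mean correlation to the remaining objectives."""
    m = len(objective_values)
    mean_corr = [
        sum(pearson(objective_values[i], objective_values[j])
            for j in range(m) if j != i) / (m - 1)
        for i in range(m)
    ]
    k = math.ceil(alpha * m)
    return sorted(sorted(range(m), key=lambda i: mean_corr[i])[:k])
```

For example, with three objectives where the third moves opposite to the first two, the third (most conflicting) objective is always retained in the selected subset.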


With respect to MOEA/D, HypE, KOSSA and NSGA-II, α-DEMO-revised shows comparable values of the performance measures. The two-sample t-test results displayed in Table IV show superior performance of α-DEMO-revised in terms of convergence values, and comparable performance with respect to the hypervolume indicator and minimal spacing values.
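The kind of significance testing reported in Table IV can be sketched as follows. Welch's unequal-variance form of the two-sample t statistic is shown as an assumption, since the exact variant is specified with the table rather than here; converting the statistic to a p-value needs a t-distribution CDF (scipy.stats.ttest_ind performs both steps), which is omitted to keep the sketch dependency-free. The sample values below are illustrative, not the paper's data.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample (n-1) variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

t, dof = welch_t([1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 3.0, 4.0, 5.0, 6.0])
print(t, dof)  # -1.0 8.0
```

The p-values in Table IV would then come from comparing |t| against the t distribution with the computed degrees of freedom.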

It has been pointed out earlier [33] that a dimensionality reduction technique should be one that guarantees preservation of the dominance relation. Zitzler et al. have shown that it is possible to select a subset of objectives that preserves the dominance relation obtained by taking all of the objectives together. Our algorithm is based on the assumption that if an objective is highly correlated with another one, then it may not be considered important. The correlation-based objective reduction technique shows good performance for problems with high dimensionality, but it is yet to be verified whether the selected subset of objectives preserves the dominance relation. It is, however, clear that solutions that are non-dominated in the smaller objective space are actually non-dominated in the full space. Moreover, the effect of the revised sorting technique is to be investigated by incorporating it in other EMOs such as NSGA-II or KOSSA.
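A check of this property on a given archive can be sketched directly from the definition of Pareto dominance. The point set below is illustrative; the formal conditions under which an objective subset preserves the dominance relation are those of Brockhoff and Zitzler [33].

```python
def dominates(a, b, idx):
    """True if a dominates b on the objectives listed in idx (minimization)."""
    return all(a[i] <= b[i] for i in idx) and any(a[i] < b[i] for i in idx)

def nondominated(points, idx):
    """Members of points not dominated by any other member on idx."""
    return {p for p in points
            if not any(dominates(q, p, idx) for q in points if q != p)}

pts = [(1.0, 4.0, 2.0), (2.0, 3.0, 1.0), (3.0, 5.0, 3.0)]
front_full = nondominated(pts, [0, 1, 2])
front_reduced = nondominated(pts, [0, 1])   # third objective dropped
print(front_reduced <= front_full)          # True
```

Here the front obtained in the reduced space is contained in the full-space front, illustrating (ties aside) the carry-over of non-dominance claimed above; the converse, that the reduced front captures the whole full-space front, is what still needs verification.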

In this article, we have compared the performance of α-DEMO with a few MaOO techniques. Besides these, we have also identified some recent algorithms [57], [48], [58] that have shown good performance on MaOO problems. Of these, we have performed a preliminary comparative study with MOMBI [48]; a detailed comparison with many other methods needs to be carried out in the future. There are also more recent works on advanced MaOO algorithms that are to be published in future editions [59], [60], [61], [62], [63]. Martin et al. have proposed MOPNAR [59], which works with a reduced set of objectives. Deb et al. have proposed another modified version of NSGA-II (called NSGA-III) for unconstrained optimization problems [60]. The method has also been tested on constrained optimization problems and has exhibited satisfactory performance [61]. The concept of fuzzy dominance has been incorporated into NSGA-II and SPEA2 for solving many-objective (up to 20 dimensions) DTLZ and WFG test problems by He et al. [62]. Karshenas et al. [63] have incorporated the idea of Bayesian network clustering into an MOEA to solve MaOO problems of up to 20 dimensions. These algorithms have been shown to perform satisfactorily; a detailed comparison with these methods also needs to be carried out in the future.

IX. ACKNOWLEDGEMENT

SB gratefully acknowledges Project Grant No.DST/INT/MEX/RPO-04/2008.

REFERENCES

[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

[2] D. W. Corne, J. D. Knowles, and M. J. Oates, "The Pareto envelope-based selection algorithm for multiobjective optimization," in Parallel Problem Solving from Nature VI Conf., Berlin, Germany, 2002, pp. 839–848.

[3] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm," 2001.

[4] D. W. Corne and J. D. Knowles, "Techniques for highly multiobjective optimisation: some nondominated points are better than others," in Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. ACM, 2007, pp. 773–780.

[5] M. Farina and P. Amato, "On the optimal solution definition for many-criteria optimization problems," in Proceedings of the NAFIPS-FLINT International Conference, Piscataway, New Jersey, June 2002, pp. 233–238.

[6] R. C. Purshouse and P. J. Fleming, "Evolutionary many-objective optimisation: An exploratory analysis," vol. 3, pp. 2066–2073, 2003.

[7] V. Khare, X. Yao, and K. Deb, "Performance scaling of multi-objective evolutionary algorithms," in Evolutionary Multi-Criterion Optimization. Springer, 2003, pp. 376–390.

[8] H. Ishibuchi, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization," in Genetic and Evolving Systems, 2008. GEFS 2008. 3rd International Workshop on. IEEE, 2008, pp. 47–52.

[9] E. J. Hughes, "Evolutionary many-objective optimisation: many once or one many?" in Evolutionary Computation, 2005. The 2005 IEEE Congress on, vol. 1. IEEE, 2005, pp. 222–227.

[10] I. Alberto, C. Azcarate, F. Mallor, and P. M. Mateo, "Optimization with simulation and multiobjective analysis in industrial decision-making: A case study," European Journal of Operational Research, vol. 140, no. 2, pp. 373–383, 2002.

[11] L. V. Santana-Quintero, "A review of techniques for handling expensive functions in evolutionary multi-objective optimization," Computational Intelligence in Expensive Optimization Problems, pp. 29–59, 2010.

[12] E. J. Hughes, "MSOPS-II: A general-purpose many-objective optimiser," in Evolutionary Computation, 2007. CEC 2007. IEEE Congress on. IEEE, 2007, pp. 3944–3951.

[13] ——, "Multiple single objective Pareto sampling," in Evolutionary Computation, 2003. CEC '03. The 2003 Congress on, vol. 4. IEEE, 2003, pp. 2678–2684.

[14] G. Wang and J. Wu, "A new fuzzy dominance GA applied to solve many-objective optimization problem," in Innovative Computing, Information and Control, 2007. ICICIC '07. Second International Conference on. IEEE, 2007, pp. 617–617.

[15] S. Bandyopadhyay, S. K. Pal, and B. Aruna, "Multiobjective GAs, quantitative indices, and pattern classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 5, pp. 2088–2099, 2004.

[16] J. Bader and E. Zitzler, "HypE: An algorithm for fast hypervolume-based many-objective optimization," Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011.

[17] X. Zou, Y. Chen, M. Liu, and L. Kang, "A new evolutionary algorithm for solving many-objective optimization problems," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 38, no. 5, pp. 1402–1412, 2008.

[18] T. Aittokoski and K. Miettinen, "Efficient evolutionary approach to approximate the Pareto-optimal set in multiobjective optimization, UPS-EMOA," Optimisation Methods & Software, vol. 25, no. 6, pp. 841–858, 2010.

[19] M. Garza-Fabre, G. Toscano-Pulido, and C. A. C. Coello, "Two novel approaches for many-objective optimization," in Evolutionary Computation (CEC), 2010 IEEE Congress on. IEEE, 2010, pp. 1–8.

[20] S. F. Adra and P. J. Fleming, "Diversity management in evolutionary many-objective optimization," Evolutionary Computation, IEEE Transactions on, vol. 15, no. 2, pp. 183–195, 2011.

[21] K. Deb and H. Jain, "Handling many-objective problems using an improved NSGA-II procedure," in Evolutionary Computation (CEC), 2012 IEEE Congress on. IEEE, 2012, pp. 1–8.

[22] H. Jain and K. Deb, "An improved adaptive approach for elitist non-dominated sorting genetic algorithm for many-objective optimization," in EMO, 2013, pp. 307–321.

[23] M. Garza-Fabre, G. Toscano-Pulido, and C. A. C. Coello, "Two novel approaches for many-objective optimization," in Evolutionary Computation (CEC), 2010 IEEE Congress on. IEEE, 2010, pp. 1–8.

[24] A. B. de Carvalho and A. Pozo, "The control of dominance area in particle swarm optimization algorithms for many-objective problems," in Neural Networks (SBRN), 2010 Eleventh Brazilian Symposium on. IEEE, 2010, pp. 140–145.

[25] Z. Kang, L. Kang, C. Li, Y. Chen, and M. Liu, "Convergence properties of e-optimality algorithms for many objective optimization problems," in Evolutionary Computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence). IEEE Congress on. IEEE, 2008, pp. 472–477.

[26] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.

[27] J. Molina, L. V. Santana, A. G. Hernandez-Diaz, C. A. Coello Coello, and R. Caballero, "g-dominance: Reference point based dominance for multiobjective metaheuristics," European Journal of Operational Research, vol. 197, no. 2, pp. 685–692, 2009.

[28] R. C. Purshouse and P. J. Fleming, "Conflict, harmony, and independence: Relationships in evolutionary multi-criterion optimisation," in Evolutionary Multi-Criterion Optimization. Springer, 2003, pp. 16–30.

[29] ——, "On the evolutionary optimization of many conflicting objectives," Evolutionary Computation, IEEE Transactions on, vol. 11, no. 6, pp. 770–784, 2007.

[30] K. Deb and D. K. Saxena, "On finding Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems," KanGAL Report 2005011, 2005.

[31] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," Evolutionary Computation, IEEE Transactions on, vol. 11, no. 6, pp. 712–731, 2007.

[32] H. K. Singh, A. Isaacs, and T. Ray, "A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems," Evolutionary Computation, IEEE Transactions on, vol. 15, no. 4, pp. 539–556, 2011.

[33] D. Brockhoff and E. Zitzler, "Objective reduction in evolutionary multiobjective optimization: theory and applications," Evolutionary Computation, vol. 17, no. 2, pp. 135–166, 2009.

[34] T. Murata and A. Taki, "Examination of the performance of objective reduction using correlation-based weighted-sum for many objective knapsack problems," in Hybrid Intelligent Systems (HIS), 2010 10th International Conference on. IEEE, 2010, pp. 175–180.

[35] A. L. Jaimes, C. A. C. Coello, and J. E. U. Barrientos, "Online objective reduction to deal with many-objective problems," in Evolutionary Multi-Criterion Optimization. Springer, 2009, pp. 423–437.

[36] A. Sinha, D. K. Saxena, K. Deb, and A. Tiwari, "Using objective reduction and interactive procedure to handle many-objective optimization problems," Applied Soft Computing, 2012.

[37] T. Robic and B. Filipic, "DEMO: Differential evolution for multiobjective optimization," in Evolutionary Multi-Criterion Optimization. Springer, 2005, pp. 520–533.

[38] A. A. Montano, C. A. C. Coello, and E. Mezura-Montes, "Multiobjective evolutionary algorithms in aeronautical and aerospace engineering," IEEE Transactions on Evolutionary Computation, vol. 16, no. 5, pp. 662–694, 2012.

[39] K. Deb, Multi-objective Optimization Using Evolutionary Algorithms, 2001.

[40] C. A. Coello Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems. Springer, 2007.

[41] C. Buchta, "On the average number of maxima in a set of vectors," Information Processing Letters, vol. 33, no. 2, pp. 63–65, 1989.

[42] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[43] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," in Proceedings of the Congress on Evolutionary Computation (CEC-2002), Honolulu, USA, 2002, pp. 825–830.

[44] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. Da Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review," Evolutionary Computation, IEEE Transactions on, vol. 7, no. 2, pp. 117–132, 2003.

[45] E. Zitzler, D. Brockhoff, and L. Thiele, "The hypervolume indicator revisited: On the design of Pareto-compliant indicators via weighted integration," in Evolutionary Multi-Criterion Optimization. Springer, 2007, pp. 862–876.

[46] K. Bringmann and T. Friedrich, "Approximation quality of the hypervolume indicator," Artificial Intelligence, vol. 195, pp. 265–290, 2013.

[47] J. R. Schott, "Fault tolerant design using single and multicriteria genetic algorithm optimization," Ph.D. dissertation, 1995.

[48] R. Hernandez and C. Coello, "MOMBI: A new metaheuristic for many-objective optimization based on the R2 indicator," in 2013 IEEE Congress on Evolutionary Computation, vol. 1, June 20-23 2013, pp. 2488–2495.

[49] D. Brockhoff, T. Wagner, and H. Trautmann, "On the properties of the R2 indicator," in Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation. ACM, 2012, pp. 465–472.

[50] S. Huband, L. Barone, L. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Evolutionary Multi-Criterion Optimization. Springer, 2005, pp. 280–295.

[51] F. Y. Cheng and D. Li, "Multiobjective optimization design with Pareto genetic algorithm," Journal of Structural Engineering, vol. 123, no. 9, pp. 1252–1261, 1997.

[52] S. Narayanan and S. Azarm, "On improving multiobjective genetic algorithms for design optimization," Structural Optimization, vol. 18, no. 2-3, pp. 146–155, 1999.

[53] W. A. Crossley, A. M. Cook, and D. W. Fanjoy, "Using the two-branch tournament genetic algorithm for multiobjective design," AIAA Journal, vol. 37, no. 2, pp. 261–267, 1999.

[54] C. Dhaenens, J. Lemesre, N. Melab, M. Mezmaz, and E.-G. Talbi, "Parallel exact methods for multiobjective combinatorial optimization," Parallel Combinatorial Optimization, pp. 187–210, 2006.

[55] S. R. Norris and W. A. Crossley, "Pareto-optimal controller gains generated by a genetic algorithm," in AIAA 36th Aerospace Sciences Meeting and Exhibit, 1998, pp. 98–0010.

[56] E. Sandgren, "Multicriteria design optimization by goal programming," Advances in Design Optimization, pp. 225–265, 1994.

[57] S. Yang, M. Li, X. Liu, and J. Zheng, "A grid-based evolutionary algorithm for many-objective optimization," 2013.

[58] R. Denysiuk, L. Costa, and I. Espirito Santo, "Many-objective optimization using differential evolution with variable-wise mutation restriction," in Proceedings of the Fifteenth Annual Conference on Genetic and Evolutionary Computation. ACM, 2013, pp. 591–598.

[59] D. Martin, A. Rosete, J. Alcala-Fdez, and F. Herrera, "A new multiobjective evolutionary algorithm for mining a reduced set of interesting positive and negative quantitative association rules."

[60] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point based non-dominated sorting approach, part I: Solving problems with box constraints."

[61] H. Jain and K. Deb, "An evolutionary many-objective optimization algorithm using reference-point based non-dominated sorting approach, part II: Handling constraints and extending to an adaptive approach."

[62] Z. He, G. G. Yen, and J. Zhang, "Fuzzy-based Pareto optimality for many-objective evolutionary algorithms."

[63] H. Karshenas, R. Santana, C. Bielza, and P. Larranaga Mugica, "Multi-objective estimation of distribution algorithm based on joint modeling of objectives and variables," 2012.


LIST OF FIGURES

1 Variation of P(N, M) with the number of objectives
2 The two phases of α-DEMO
3 Algorithm for α-DEMO
4 Algorithm for setAlphaOrder(Funct, alpha)
5 Algorithm for Sorting(Pop, Funct, Fconflict)
6 Comparison among the objective computations for DEMO, KOSSA and α-DEMO for 20 objectives and reduced sets of 5, 10 and 15 objectives. For DEMO there is no objective reduction.
7 Comparison among the objective computations for KOSSA and α-DEMO for 50 objectives and reduced sets of 5, 10, 15 and 25 objectives. For DEMO there is no objective reduction.
8 Pairwise comparison of the maximum time gain in terms of objective computations between α-DEMO and other MOEAs (KOSSA and DEMO)
9 Front view of the factory-shed truss

LIST OF TABLES

I Convergence values of different MOEA algorithms used on DTLZ1, DTLZ2, DTLZ3, DTLZ4 and DTLZ7 test problems
II Hypervolume Indicator values of different MOEA algorithms used on DTLZ1, DTLZ2, DTLZ3, DTLZ4 and DTLZ7 test problems
III Minimal Spacing values of different MOEA algorithms used on DTLZ1, DTLZ2, DTLZ3, DTLZ4 and DTLZ7 test problems
IV p-values produced by two-sample t-test between α-DEMO-revised and other MOEAs including α-DEMO
V Purity values of HypE and α-DEMO-revised
VI Convergence, Hypervolume Indicator, Minimal Spacing and Purity metrics for α-DEMO-revised and HypE on WFG1 and WFG2
VII Comparison of observational values for Hypervolume Indicator IH and Minimal Spacing Sm

Sanghamitra Bandyopadhyay (SM'05) received the B.Tech., M.Tech., and Ph.D. degrees in computer science from the University of Calcutta, Calcutta, India, the Indian Institute of Technology Kharagpur (IITK), Kharagpur, India, and the Indian Statistical Institute (ISI), Kolkata, India, respectively. She is currently a Professor at the ISI. She has co-authored five books and more than 250 technical articles. Her research interests include bioinformatics, soft and evolutionary computation, pattern recognition, and data mining. Dr. Bandyopadhyay received the Dr. Shanker Dayal Sharma Gold Medal and the Institute Silver Medal from IITK in 1994, the Young Scientist Awards of the Indian National Science Academy and the Indian Science Congress Association in 2000, the Young Engineer Award of the Indian National Academy of Engineering in 2002, the Swarnajayanti Fellowship from the Department of Science and Technology in 2007, the Humboldt Fellowship from Germany in 2009, and the prestigious Shanti Swarup Bhatnagar Prize in Engineering Science in 2010. She was selected as a Senior Associate of ICTP, Italy, in 2013. She is a fellow of the National Academy of Sciences, Allahabad, India, and the Indian National Academy of Engineering.

Arpan Mukherjee received his Bachelors in Civil Engineering from Jadavpur University, Kolkata, in 2011 and his Masters in Statistical Quality Control and Operations Research from the Indian Statistical Institute, Kolkata, in 2013. He is currently a Ph.D. student in the Mechanical and Aerospace Engineering Department at the University at Buffalo-SUNY. His research interests are optimization, uncertainty quantification and non-linear dynamics.

