
Iterated Multi-Swarm: A Multi-Swarm Algorithm Based on Archiving Methods

Andre Britto
CAPES Foundation, Ministry of Education of Brazil
Brasilia, DF, Brazil
[email protected]

Sanaz Mostaghim
Institute AIFB, Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
[email protected]

Aurora Pozo
Computer Science Department, Federal University of Parana
Curitiba, PR, Brazil
[email protected]

ABSTRACT

Multi-Objective Evolutionary Algorithms usually face serious challenges in handling many-objective problems. This work presents a new Particle Swarm Optimization algorithm, called Iterated Multi-Swarm (I-Multi Swarm), which explores specific characteristics of PSO to face Many-Objective Problems. The algorithm takes advantage of a multi-swarm approach to combine different archiving methods, aiming to improve both convergence to the Pareto-optimal front and diversity of the non-dominated solutions. I-Multi Swarm is evaluated through an empirical analysis that uses a set of many-objective problems, quality indicators, and statistical tests.

Categories and Subject Descriptors

I.2.m [Computing Methodologies]: Artificial Intelligence—Miscellaneous

Keywords

Many-Objective Optimization; Particle Swarm Optimization; Multi-Objective Optimization

1. INTRODUCTION

Multi-Objective Particle Swarm Optimization (MOPSO) algorithms are a class of Multi-Objective Evolutionary Algorithms (MOEAs) that have been successfully applied to solve Multi-objective Optimization Problems (MOPs) [19]. MOPs involve the simultaneous optimization of two or more objectives subject to certain constraints [5]. In such problems, the objectives to be optimized are usually in conflict, which means that there is no single best solution, but a set of non-dominated solutions. Most MOEAs modify evolutionary algorithms by incorporating a selection mechanism based on Pareto optimality and adopting a diversity preservation mechanism that avoids convergence to a single solution.


Recently, research efforts have been made to investigate the scalability of these algorithms with respect to the number of objectives [4] [12] [20]. In this situation, MOEAs face serious difficulties, mainly because of the exponential increase in the number of non-dominated solutions in each population. This severely weakens the Pareto dominance-based selection pressure toward the Pareto-optimal front. As a consequence, the convergence of MOEAs severely deteriorates [6] [18]. MOPs with four or more objectives are often referred to as Many-Objective Problems (MaOPs). Until fairly recently, most research focused on a small group of algorithms, often based on Genetic Algorithms. In the literature, some works deal with MaOPs using PSO algorithms, e.g., [3, 15].

One challenge in MOPSO concerns the selection of the so-called global best solution from an archive of non-dominated solutions [16, 19]. This has a great impact on the convergence and diversity of solutions. However, it is even more critical in MaOPs, as the number of non-dominated solutions stored in the archive can increase drastically. Therefore, it is reasonable to consider a maximum fixed size for the archive. If the number of non-dominated solutions exceeds this maximum size, some of them have to be deleted using an archiving method such as the crowding distance method [7] or clustering [16]. In [3], different archiving methods such as Ideal and Multi-level Grid (MGA) are explored and tested on MaOPs. It has been shown that archiving methods influence the results to a large extent. For instance, the Ideal method contributes to convergence, while the MGA archiving increases the diversity of the obtained solutions. These are precisely the two challenges algorithms face when dealing with MaOPs: convergence to the Pareto-optimal front and diversity of the non-dominated solutions.

In this paper, we combine different archiving methods in a multi-swarm approach. The algorithm initially performs a diversified search using the MGA archiving method; thereby, we aim to cover a large area of the objective space. Then we propose to use sub-swarms around the solutions found by the MGA. We use a MOPSO with the Ideal archiving method in the sub-swarms and, by this means, focus on convergence. In this way, the multi-swarm is defined by diverse populations running independently while exchanging information with each other during the search [10]. This approach allows a search that covers different areas of the objective space.

I-Multi Swarm is evaluated through an empirical analysis that observes how the algorithm performs in many-objective scenarios, in terms of convergence to the Pareto-optimal front and diversity of the non-dominated solutions, using the DTLZ many-objective family of benchmark problems [8]. Furthermore, I-Multi Swarm is compared with the algorithms MGA-SMPSO and Ideal-SMPSO presented in [3]. Different quality indicators are used to measure the quality of the non-dominated solutions of the algorithms. The results indicate that we can improve the quality of solutions in terms of both diversity and convergence.

The remaining sections of this paper are organized as follows. Section 2 presents the main aspects of Multi-Objective Particle Swarm Optimization and discusses related work. The I-Multi Swarm approach is presented in Section 3, and Section 4 presents the experiments and the empirical analysis. Finally, Section 5 concludes the paper.

2. MOPSO AND MANY-OBJECTIVE OPTIMIZATION

In Multi-Objective Particle Swarm Optimization (MOPSO), we consider a population (set) of candidate solutions called particles, which move in the search space by updating their positions using velocity vectors. A typical MOPSO algorithm [17] contains four basic steps: initialization of the particles, computation of the velocities, updating of the positions, and archiving.

The position of a particle x(t+1) ∈ R^n at time t+1 is obtained by adding a velocity v(t+1) ∈ R^n to x(t), as shown in Equation 1, where n denotes the dimension of the search space. The velocity v(t+1) is computed based on the best position the particle has obtained so far, called the personal best pbest(t), and the best position obtained by the population, called the global best and denoted by Rh(t).

$$x_j(t+1) = x_j(t) + v_j(t+1)$$

$$v_j(t+1) = \omega \cdot v_j(t) + (C_1 \cdot \phi_1)\,(p_{j,\mathrm{best}}(t) - x_j(t)) + (C_2 \cdot \phi_2)\,(R_{j,h}(t) - x_j(t)) \tag{1}$$

In Equation 1, j denotes the index of the corresponding dimension in the search space, φ1 and φ2 are random values in [0, 1], and C1 and C2 are constants. The coefficient ω is called the inertia weight and determines the influence of the old velocity vector. Since in MOPSO we have a set of optimal solutions, Rh(t) must be selected from a set of non-dominated solutions, which are typically stored in an archive that contains the best non-dominated solutions found so far. Each particle in the population must select one of the archive members as Rh(t). There are several strategies for this selection mechanism, e.g., [2, 16, 19], indicating the impact of the archive on the quality of results. Often the size of the archive is limited to a maximum size; when the number of non-dominated solutions exceeds this size, an archiving method is used to select a subset of non-dominated solutions.
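For illustration, here is a minimal Python sketch of the update in Equation 1; the function name, the default coefficient values, and the way pbest and Rh are passed in are our own conventions, not the exact SMPSO settings described below.

    import numpy as np

    def update_particle(x, v, p_best, r_h, omega=0.4, c1=1.5, c2=1.5, rng=None):
        # One application of Equation 1 for a single particle.
        # x, v, p_best, r_h are length-n arrays (one entry per dimension j).
        rng = rng or np.random.default_rng()
        phi1 = rng.uniform(0.0, 1.0, size=x.shape)  # random factors in [0, 1]
        phi2 = rng.uniform(0.0, 1.0, size=x.shape)
        v_new = omega * v + c1 * phi1 * (p_best - x) + c2 * phi2 * (r_h - x)
        x_new = x + v_new                            # position update
        return x_new, v_new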

In this paper, we take the SMPSO (Speed-constrained Multi-objective PSO) algorithm [17], as it has been shown to find very good solutions. In this approach, the velocity of a particle is limited by a constriction factor χ that varies based on the values of C1 and C2. In addition, SMPSO introduces a mechanism that constrains the accumulated velocity in each dimension j. The global best particle Rh is chosen by binary tournament in a fully connected neighborhood, and the size of the archive is kept below a maximum value by using the Crowding Distance approach from [7]. Since the global best particles are selected from the archive, the size and elements of the archive, and therefore the approach for maintaining it (the archiving method), have a great impact on the results, particularly for MaOPs.
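A sketch of the two speed-control mechanisms, following our reading of the SMPSO description in [17] (the helper names are ours): the constriction factor derived from C1 and C2, and the per-dimension velocity bound.

    import numpy as np

    def constriction_factor(c1, c2):
        # Constriction is only active when rho = c1 + c2 exceeds 4;
        # otherwise the velocity is left unchanged (factor 1).
        rho = c1 + c2
        if rho <= 4.0:
            return 1.0
        return 2.0 / (2.0 - rho - np.sqrt(rho * rho - 4.0 * rho))

    def constrain_velocity(v, lower, upper):
        # Bound the accumulated velocity in each dimension j to
        # [-delta_j, delta_j], with delta_j = (upper_j - lower_j) / 2.
        delta = (np.asarray(upper) - np.asarray(lower)) / 2.0
        return np.clip(v, -delta, delta)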

The paper [3] compares several archiving methods using SMPSO on Many-Objective Optimization Problems. The work presented in [3] investigates two archiving methods, called Ideal and Multi-level Grid Archiving (MGA), from [13]. The Ideal archiving method keeps the solutions closest to the Ideal point in the archive, which increases the convergence of the non-dominated solutions to the Pareto-optimal front. The MGA approach combines the Adaptive Grid scheme with ε-Pareto Archiving, which leads to a very good diversity of solutions.

MOPSO with multiple populations has been studied in [14]. There, a multi-swarm algorithm is presented in which the population is split into a number of sub-swarms that are executed in a parallel fashion. A non-dominated solution from the archive is selected as a leader for each sub-swarm. Knowing the leader, each sub-swarm defines its individual search space as a neighborhood of the leader, i.e., the search of each swarm is limited to a smaller region. In this process, the search of the sub-swarm focuses on a region of the non-dominated front close to the location of the leader. The algorithm executes a predefined number of iterations and then the results of all sub-swarms (the non-dominated solutions found so far) are returned to a central server. The central server selects the new leaders for the next iteration and the search restarts. In [14], two different forms of selecting the guide are proposed: a Cluster-based sub-swarm MOPSO and a Hypervolume-based sub-swarm MOPSO. The Cluster-based algorithm will be discussed in the next section.

3. ITERATED MULTI-SWARM

The Iterated Multi-Swarm (I-Multi Swarm) approach is designed to deal with many-objective problems. As already mentioned, one difficulty in many-objective optimization is to achieve good convergence of solutions while keeping good diversity. Since in many-objective optimization most of the population members can be non-dominated, it is particularly difficult for MOPSO methods to obtain good convergence. From previous studies, it was possible to observe that SMPSO using the Ideal archiver presented good convergence of solutions but lost diversity [3]. On the other hand, the results of SMPSO using the MGA archiver are good in terms of diversity [3]. The design question is how to combine these two archivers. At this point, the multi-swarm approach is added to cover diverse areas of the objective space and facilitate convergence towards the Pareto-optimal front.

I-Multi Swarm assumes that the search can be improved if first a diversified search is performed and, after that, a search aiming at more convergence is executed. Hence, the proposed algorithm is divided into two phases: diversity search and multi-swarm search. The first phase performs the diversity search using SMPSO with the MGA archiver [3]. This algorithm performs an initial diversified search to generate well-distributed non-dominated solutions (called the basis front) for multi-swarm initialization. The basis front is used to assign a seed to each sub-swarm. Around each seed, a population of solutions is generated, which is called a sub-swarm. Three different approaches are used to allocate seeds to sub-swarms: the Cluster-based sub-swarm presented in [14], a random approach, and a new method guided by the extreme solutions. These approaches are discussed further in this section.

Algorithm 1: Iterated Multi-Swarm Approach

    Input: search space
    Output: a set of non-dominated solutions
    // Phase 1: diversity search
    BasisFront = Run-MGA-SMPSO(SearchSpace)
    // Phase 2: multi-swarm search
    for i = 1 to SplitIterations
        for k = 1 to NumberOfSwarms
            Seed(k) = SelectSeed(BasisFront)
            Pop(k) = InitializePop(Seed(k), SearchSpace)
            Front(k) = Run-Ideal-SMPSO(Pop(k), Front(k))
            BasisFront = Update(Front(k), BasisFront)
        end for
    end for
    return BasisFront
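To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch. The run_*, select_seed and initialize_pop callables are hypothetical stand-ins for the components described in the text; only the dominance filtering (the Update step) is spelled out.

    def dominates(a, b):
        # Pareto dominance for minimization: a is no worse than b in every
        # objective and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_filter(solutions, objectives):
        # Keep only non-dominated solutions; `objectives` maps a solution
        # to its objective vector.
        return [s for s in solutions
                if not any(dominates(objectives(t), objectives(s))
                           for t in solutions if t is not s)]

    def i_multi_swarm(search_space, split_iterations, n_swarms,
                      run_mga_smpso, run_ideal_smpso,
                      select_seed, initialize_pop, objectives):
        # Phase 1: diversity search with the MGA archiver (basis front).
        basis_front = run_mga_smpso(search_space)
        fronts = [[] for _ in range(n_swarms)]  # per-sub-swarm search history
        # Phase 2: iterated multi-swarm search with the Ideal archiver.
        for _ in range(split_iterations):
            for k in range(n_swarms):
                seed = select_seed(basis_front, k)
                pop = initialize_pop(seed, search_space)  # solutions near the seed
                fronts[k] = run_ideal_smpso(pop, fronts[k])
                # Merge the sub-swarm front back, keeping only non-dominated points.
                basis_front = pareto_filter(basis_front + fronts[k], objectives)
        return basis_front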

Algorithm 1 presents a description of I-Multi Swarm. The first step is the execution of SMPSO using the MGA archiver, performing an initial diversified search. At the end of this step, we have a set of non-dominated solutions called the basis front, from which we select the seeds for the sub-swarms. After finding the seeds, a population (Pop(k)) of solutions is randomly generated in a predefined fixed area around each seed in the search space. We apply SMPSO using the Ideal archiver to the population of each sub-swarm (indicated by k) and find a set of non-dominated solutions denoted as Front(k). This non-dominated front is integrated into the basis front so that only non-dominated solutions are stored in it. The multi-swarm phase is an iterated process: after the execution of each sub-swarm, the generated sets of solutions are joined into a new basis front; then the basis front is split again, the seeds are defined, and a new cycle starts. Multiple split iterations are used both to allow the swarms to follow their leaders independently and to make them cooperate by reorganizing the search. This process enables indirect communication between the sub-swarms. When a new split iteration starts, the sub-swarms select their seeds from the updated archive stored in the basis front. At the beginning of a split iteration, each sub-swarm re-initializes its population using a new seed, but keeps its set of non-dominated solutions (Front(k)) in order to preserve its search history. At the end of all split iterations, the result of the algorithm is the joined set of solutions, i.e., the generated approximation of the Pareto-optimal front. One important feature of the algorithm is the method used to define the seed and the archive of each sub-swarm. The variants used here are called I-Multi Centroid, I-Multi Extremes and I-Multi Random:

I-Multi Centroid. In this approach [14], also called Cluster-based sub-swarm MOPSO, the basis front is clustered into groups and the centroid of each group is chosen as the seed of a sub-swarm. The solutions of each cluster are stored in the archive of the corresponding sub-swarm. The number of clusters is set equal to the number of swarms. In this strategy, the archive is updated with all solutions belonging to the same cluster as the seed.

I-Multi Extremes. Here, the non-dominated solutions that are the so-called extreme points in terms of the objective values are selected as seeds for the sub-swarms. The extreme points are those closest to each objective axis. For example, for a 5-objective problem we can select 5 solutions from a set of non-dominated solutions, each being the best in terms of one objective function. The archives of the sub-swarms are filled with solutions using the same policy. With this method, the proposed algorithm tries to cover areas near each dimension of the objective space. If the number of swarms is greater than the number of objective functions, the remaining seeds are selected randomly from the basis front; if the opposite occurs, some dimensions will not be explored by the algorithm. In this strategy, the algorithm chooses the solutions closest to the seed to update the archive; the number of solutions chosen is equal to the archive size.

I-Multi Random. This is the simplest approach, where all seeds are selected randomly from the basis front. The archives of the sub-swarms are also filled randomly: the archive is updated with random solutions from the basis front, and the number of solutions chosen is equal to the archive size.
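A minimal sketch of the Extremes and Random seed-selection rules described above (the Centroid variant additionally needs the clustering of [14], e.g., a k-means over the basis front, which we omit here); the `objectives` callable and the function names are our own conventions.

    import random
    import numpy as np

    def extreme_seeds(basis_front, objectives, n_swarms):
        # One seed per objective axis: the non-dominated solution closest to
        # that axis, i.e., with the smallest norm over the other objectives.
        objs = np.array([objectives(s) for s in basis_front])
        m = objs.shape[1]
        seeds = []
        for i in range(min(m, n_swarms)):
            rest = np.delete(objs, i, axis=1)
            seeds.append(basis_front[int(np.argmin(np.linalg.norm(rest, axis=1)))])
        # Any remaining seeds are drawn at random from the basis front.
        while len(seeds) < n_swarms:
            seeds.append(random.choice(basis_front))
        return seeds

    def random_seeds(basis_front, n_swarms):
        # The simplest rule: every seed is a random basis-front member.
        return [random.choice(basis_front) for _ in range(n_swarms)]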

3.1 Discussion

The proposed I-Multi Swarm approach adds some parameters to the basic SMPSO. First, the issues related to the sub-swarms, such as the size of the search region for the sub-swarms and the number of sub-swarms, must be specified. The next important parameter is the number of split iterations, which determines the number of information exchanges between the sub-swarms. Furthermore, in order to increase diversity, it is desirable that the sub-swarms overlap as little as possible. This is determined by the way the seeds for the sub-swarms are selected and by the size of the individual search region of each sub-swarm. In the I-Multi Swarm approach, the seeds are selected from a set of non-dominated solutions produced by an initial run of SMPSO with MGA as the archiving mechanism. This step can additionally affect the diversity and the eventual overlap between the sub-swarms. Among the proposed methods for selecting the seeds, I-Multi Extremes and I-Multi Centroid can lead to less overlap between the sub-swarms than I-Multi Random. These parameters are analyzed in the experiments.

4. EXPERIMENTS

This section presents the evaluation of the proposed I-Multi Swarm approach. In the experiments, we perform an analysis of the main parameters of I-Multi Swarm, namely the number of split iterations, the size of the search region for the sub-swarms, and the number of swarms. Furthermore, the three proposed variants of I-Multi Swarm are compared with each other, and their results are compared with SMPSO using the Ideal and MGA archiving methods [3]. The number of objective function evaluations is used as the execution budget for each algorithm, allowing an equal comparison. 30 independent runs per test have been executed.

4.1 Methodology

The basic parameters of SMPSO used in this set of experiments are as follows: ω is randomly selected in the interval [0, 0.8], while φ1 and φ2 are randomly selected in [0, 1]. C1 and C2 randomly vary over the interval [1.5, 2.5]. The initial search phase uses the MGA archiving method from [13] with 100 generations, a population of 100 individuals, and an archive size of 200 solutions. These parameters were configured through an experimental analysis and through the parameter configuration of the MGA-SMPSO algorithm presented in [3]. In the multi-swarm search, each sub-swarm uses the Ideal archiving method [3] and has four important parameters: search region size, number of split iterations, number of swarms, and population size, discussed in Sections 4.2, 4.3 and 4.4. Each sub-swarm executes 100 generations.

In these experiments, four many-objective problems of the DTLZ family [8] are tested. These problems can be scaled to any number of objectives (M) and decision variables (n), and the true Pareto-optimal front is known analytically. For each problem, the variable k = n − M + 1 represents the complexity of the search (n is the number of variables, M the number of objectives). In this study, the algorithms are applied to problems with 3, 5, 10, 20 and 30 objective functions. The DTLZ2 problem can be used to investigate the ability of the algorithms to scale their performance to large numbers of objectives. The DTLZ4 problem is used to investigate the ability to maintain a good distribution of solutions: DTLZ4 generates more solutions near the fM–f1 plane, so algorithms tend to concentrate their solutions in this region. DTLZ6 is a variation of DTLZ2 where the Pareto-optimal front is defined by a curve instead of a sphere; this problem also presents 3^k − 1 local optima. Finally, the DTLZ7 problem has 2^(M−1) disconnected Pareto-optimal regions in the search space. This problem evaluates an algorithm's ability to maintain subpopulations in different Pareto-optimal regions.
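As a reference point for the benchmark, here is a compact sketch of the scalable DTLZ2 objectives following the definition in [8] (the function signature is ours):

    import numpy as np

    def dtlz2(x, m):
        # DTLZ2 with m objectives; x holds n decision variables in [0, 1],
        # of which the last k = n - m + 1 are distance variables.
        x = np.asarray(x, dtype=float)
        g = np.sum((x[m - 1:] - 0.5) ** 2)   # distance term
        theta = x[:m - 1] * np.pi / 2.0
        f = np.full(m, 1.0 + g)
        for i in range(m):
            f[i] *= np.prod(np.cos(theta[:m - 1 - i]))   # leading cosines
            if i > 0:
                f[i] *= np.sin(theta[m - 1 - i])          # one trailing sine
        return f

On the Pareto-optimal front g = 0 and the objective vector lies on the unit sphere, i.e., the squared objectives sum to 1.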

The Pareto-optimal front of each problem is generated analytically. The size of each front is defined based on the literature ([8]) and on experimental analysis. This process seeks to build a Pareto front that covers different regions of each problem. The generation of the true Pareto fronts will be further investigated in future work, in order to determine the ideal number of solutions that represents well the function defining each problem's front.

The experimental study investigates the behavior of the proposed approach, especially in terms of convergence to the Pareto-optimal front and diversity of the non-dominated solutions, as well as scalability with respect to the number of objectives. The selection of experimental measures is a major obstacle in many-objective optimization research, and different quality indicators are used in the literature [1] [20]. In our experiments, since we are dealing with benchmark problems whose Pareto-optimal front can be obtained analytically, we combine a set of indicators that measure the distance to the Pareto-optimal front. This set of quality indicators is described as follows:

The Generational Distance (GD) [5] measures the distance between the generated approximation set (PFapprox) and the true Pareto-optimal front of the problem (PFtrue). It is a minimization measure, and it allows us to observe whether the algorithm converges to some region of the Pareto-optimal front. The Convergence Measure, presented in [11], measures the distance from the best-converged solution to PFtrue, i.e., from the solution with the minimum distance to the Pareto-optimal front. This measure complements the GD analysis in observing whether the algorithm converged to some region of PFtrue. The Inverse Generational Distance (IGD) measures the minimum distance of each point of PFtrue to the points of PFapprox. IGD allows us to observe whether PFapprox converges to the Pareto-optimal front and also whether this set is well diversified. It is important to perform a joint analysis of the GD and IGD indicators: if only GD is considered, it is not possible to notice whether the solutions are distributed over the entire Pareto-optimal front; on the other hand, if only IGD is considered, a sub-optimal set could be taken for a good one.

However, it is not always possible to obtain a complete view of the diversity of the obtained solutions using only the IGD. Therefore, we propose here a new quality indicator called Largest Distance (LD), which is the opposite of the Convergence Measure: instead of the minimum distance to PFtrue, the maximum distance from PFtrue to PFapprox is calculated. This represents the largest single contribution to the IGD computation. Thus, a PFapprox restricted to a specific region of the Pareto-optimal front will lead to a high value of LD, since it is far from the other regions of PFtrue. Likewise, a well-diversified PFapprox will have a small LD, since it is close to several regions of the Pareto-optimal front.
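To fix the definitions, a small sketch of the four indicators under the usual Euclidean-distance reading (GD is shown as a plain mean of nearest distances; published variants differ in the exact averaging):

    import numpy as np

    def _nearest(a, b):
        # For each row of a, the Euclidean distance to its nearest row of b.
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).min(axis=1)

    def gd(pf_approx, pf_true):
        # Mean distance from the approximation set to the true front.
        return _nearest(pf_approx, pf_true).mean()

    def convergence_measure(pf_approx, pf_true):
        # Distance of the best-converged solution to the true front.
        return _nearest(pf_approx, pf_true).min()

    def igd(pf_approx, pf_true):
        # Mean distance from each true-front point to the approximation set.
        return _nearest(pf_true, pf_approx).mean()

    def largest_distance(pf_approx, pf_true):
        # LD: the largest single contribution to IGD; high when PFapprox
        # misses whole regions of PFtrue, small when it covers them all.
        return _nearest(pf_true, pf_approx).max()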

It is important to use all of the indicators and to combine their results in order to perform a more complete analysis of the convergence to the Pareto-optimal front and the diversity of the obtained solutions. The quality indicators are compared using the Friedman test [9] at the 5% significance level. The test is applied to the raw values of each metric. The post-hoc test of the Friedman test indicates whether there is a statistically significant difference between the analyzed data sets; the mean values are then used to identify which algorithm has the best values.

4.2 Search region

In the I-Multi Swarm approach, each sub-swarm is defined by a seed selected from a set of non-dominated solutions. This seed defines the center of a smaller search region for the sub-swarm in the decision variable space. The size of this search region is a parameter of the algorithm. Here, a short analysis of the influence of this parameter is performed; due to space limits, future work will extend this analysis. The I-Multi Centroid variant was chosen for this analysis. The algorithm is applied to the DTLZ2 problem with 5 objective functions. The search space of this problem is limited to the range between 0 and 1. The parameter was given four different values: 0.5, 0.25, 0.1 and 0.05, i.e., from 50% down to 5% of the search space size. The results are presented in Figure 1(a), which plots values of GD versus values of IGD.

It can be observed that with the largest search region, 0.5, the best IGD and the worst GD were obtained, since the algorithm prioritized diversity over convergence. The inverse is also true: with the smaller search region sizes, 0.05 and 0.1, better GD values and worse IGD values were generated, i.e., convergence was prioritized over diversity. For a search region size of 0.25, the algorithm obtained intermediate values of GD and IGD. So, the larger the search region, the more diversified the search performed by the algorithm; as this size decreases, the algorithm limits its search to a smaller region and obtains better convergence towards the Pareto-optimal front. Aiming at both convergence and diversity, a dynamic strategy is adopted.

[Figure 1: Mean values of GD and IGD for I-Multi Centroid with (a) different sizes of the search region for the sub-swarms (0.05, 0.1, 0.25, 0.5) and (b) different numbers of split iterations (1, 5, 10, 20). Both panels plot GD against IGD.]

In this strategy, the algorithm varies the size of the search region during the execution. At the beginning of the search, diversity is favored and the search region is set to 0.5. At each split iteration, the size of the search region is uniformly decreased, prioritizing convergence, until the lowest value of 0.1 is reached at the last split iteration.
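Read as a linear schedule, the shrinking search region can be sketched as follows (our interpretation of "uniformly decreased"; the function name is ours):

    def region_size(split_iter, n_splits, start=0.5, end=0.1):
        # Shrink the sub-swarm search region linearly from `start` at the
        # first split iteration (index 0) to `end` at the last one.
        if n_splits <= 1:
            return end
        return start + (end - start) * split_iter / (n_splits - 1)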

4.3 Number of split iterations

The number of split iterations represents the number of information exchanges between swarms, i.e., the number of times that the information of the multi-swarm is combined. This number also determines how many times seeds are re-selected and the swarms re-initialized; the split iterations represent the steps of the I-Multi Swarm search. The same configuration as in the previous parameter analysis is used: I-Multi Centroid applied to DTLZ2 with 5 objective functions. Four different numbers of split iterations are analyzed: 1, 5, 10 and 20. It is important to highlight that the higher the number of split iterations, the higher the number of objective function evaluations. Figure 1(b) presents the results of GD versus IGD.

The results (Figure 1(b)) show that as the number of split iterations grows, I-Multi Swarm loses diversity and gains convergence. However, with 20 split iterations the algorithm starts to deteriorate and obtained worse average values of GD than with 10 iterations. For the rest of the experiments, 5 split iterations are chosen, as this number presented a good compromise between convergence and diversity.

4.4 Number of swarms

This subsection investigates whether the number of swarms influences the PSO search as the number of objectives grows. Differently from the previous parameter analyses, here we extend the study to all three I-Multi Swarm variants: Centroid, Extremes and Random. The analysis is performed on DTLZ2 with 10 objective functions; the number of objectives is selected to be higher than in the previous experiments in order to introduce more complexity to the search and to allow a better analysis of this parameter.

To perform a fair comparison between the different numbers of swarms, the number of objective function evaluations must be the same. Four configurations are defined: 3 swarms with 250 particles, 5 with 150 particles, 10 with 75 particles, and 30 with 25 particles. The influence of the population size of each sub-swarm will be discussed in future work. The results are analyzed through the charts of GD versus IGD presented in Figures 2(a), 2(b) and 3.

[Figure 2: Mean values of GD and IGD on the DTLZ2 problem for different numbers of sub-swarms (3, 5, 10, 30). Panels: (a) I-Multi Centroid; (b) I-Multi Extremes. Both panels plot GD against IGD.]

For I-Multi Centroid (Figure 2(a)), it can be observed that the higher the number of swarms, the better the results of the search: with more swarms, the produced PFapprox is better in terms of both convergence and coverage. For I-Multi Extremes (Figure 2(b)), the 30-swarm configuration had the best results. The Extremes variant performed best with a larger number of swarms; moreover, when the number of swarms is smaller than the number of objectives, the algorithm did not perform well. This can be observed in the configurations with 3 and 5 swarms, whose results are worse than those with 10 and 30 swarms. Finally, for I-Multi Random (Figure 3), the 30-swarm configuration obtained the best results in terms of convergence and diversity, with the best values for both GD and IGD. The higher the number of swarms, the better the search. Furthermore, the random process of selecting the seeds for each swarm was sufficient for the algorithm to generate a diversified PFapprox.

[Figure 3: Mean values of GD and IGD using I-Multi Random on the DTLZ2 problem for different numbers of sub-swarms (3, 5, 10, 30), plotting GD against IGD.]

In summary, the configurations with the highest numbers of swarms obtained the best results for the DTLZ2 problem. These configurations obtained the best values of GD and IGD even with a small number of particles in each swarm.

4.5 Multi-Swarm Comparison

The main goal of this set of experiments is to observe whether, by combining different archiving methods in a multi-swarm approach, we can improve the results of a PSO algorithm in Many-Objective Optimization. Here, the three variants of I-Multi Swarm are compared with each other and with SMPSO using the Ideal and MGA archivers [3]. For simplicity, these two algorithms will be denoted as Ideal and MGA, respectively.

The original SMPSO algorithm is not used here, since its results were outperformed by the algorithms above, as presented in [3]. The algorithms are executed with the following parameters: the search region is initially set to 0.5 and reduced proportionally to 0.1 over the split iterations; the number of split iterations is set to 5; the number of swarms for I-Multi Swarm is set to 30 for all optimization problems. The only exception is I-Multi Centroid on the DTLZ4 problem, because it was observed empirically that 5 swarms presented the best results. Each swarm has its external archive limited to 200 solutions.

The Ideal and MGA algorithms are executed with the same set of parameters, differing only in the archiving method. The population and the external archive are limited to 200 solutions. These algorithms are executed with the same number of objective function evaluations performed by I-Multi Swarm. This number is derived from the configurations described in the preceding paragraph: a total of 385850 fitness evaluations.

Table 1 presents the mean values of each indicator for the different algorithms and problems. In the original layout, the best results according to the Friedman test are marked in light gray.

We observe that for the DTLZ2 problem, Ideal obtained the best results in terms of the Convergence (Con) measure for all numbers of objectives. This algorithm favors convergence over diversity, so good values of GD and the Convergence Measure are expected; it obtains the worst results in terms of LD, which confirms this behavior. On the other hand, the I-Multi Random approach obtains best-to-good IGD and partially good GD. The results of I-Multi Extremes and I-Multi Random can be highlighted: I-Multi Extremes obtained very good results for 3, 5 and 10 objectives but, as discussed before, loses performance when the number of objectives approaches the number of swarms. The MGA algorithm obtained very poor values of GD and Convergence. Analyzing the diversity measures, IGD and LD, I-Multi Random and I-Multi Extremes obtained the best values for both, but, again, I-Multi Extremes loses performance for high numbers of objective functions. I-Multi Centroid also obtained good values of LD, similar to the other variants for almost all numbers of objectives.

For the second problem, DTLZ4, Ideal obtained the best values of GD and Convergence for almost all numbers of objective functions, but low values for the diversity measures, showing that the algorithm was trapped in the denser region of the Pareto-optimal front. The I-Multi Swarm algorithms also obtained good values of GD and Convergence. Both Ideal and I-Multi Swarm controlled the deterioration of convergence (the values of the measures did not change much as the number of objectives grew). The three variants of I-Multi Swarm obtained the best values of IGD and LD and could better control the deterioration of diversity. I-Multi Random obtained the best results of IGD and LD for almost all numbers of objective functions, especially in the higher dimensions. I-Multi Extremes had good results, but loses performance as the number of objectives grows. I-Multi Centroid obtained results similar to I-Multi Random, but was outperformed in the higher dimensions. MGA has the worst values of GD and is outperformed by the I-Multi Swarm approaches in terms of diversity. In summary, I-Multi Random was the best algorithm, closely followed by I-Multi Centroid.

DTLZ6 presented the major challenge for the I-Multi Swarm approaches. The algorithms are able to obtain diversity, but suffer from the multiple local optima. None of the three variants obtained good values of GD and Convergence; they obtained similar values of GD, worse than Ideal but better than MGA. For GD, Ideal obtained the best values for almost all numbers of objectives. Observing the diversity measures, IGD and LD, I-Multi Swarm obtained the best results for all numbers of objectives, and its results are equivalent to those of MGA in the highest dimensions. In summary, the I-Multi Swarm approaches obtained results comparable to Ideal and MGA: they are outperformed in terms of convergence by Ideal, but obtained better diversity, while I-Multi Swarm and MGA obtained equivalent results in terms of diversity.

Finally, DTLZ7 allows us to observe how each algorithm deals with disconnected Pareto-optimal fronts. Here, the I-Multi Swarm approaches outperformed both Ideal and MGA. I-Multi Swarm obtained the best values of GD and Convergence for almost all numbers of objectives. For the diversity measures, again, I-Multi Swarm presented better results than the other algorithms. Both I-Multi Centroid and I-Multi Extremes obtained the best values of IGD and LD for almost all numbers of objective functions, covering a greater number of the disconnected fronts (especially I-Multi Centroid).

Table 1: Summary of mean indicator values per algorithm and problem.

                          DTLZ2                                       DTLZ4
Obj  Algorithm    GD        IGD       LD        Con        GD        IGD       LD        Con
3    I-Extremes   5.53E-04  2.75E-04  8.15E-02  4.50E-03   4.39E-03  2.57E-04  2.45E-01  1.01E-03
     I-Centroid   1.05E-03  4.12E-04  1.04E-01  7.72E-03   3.33E-03  2.29E-04  2.26E-01  7.25E-04
     I-Random     5.95E-04  3.49E-04  1.24E-01  4.80E-03   3.90E-03  2.66E-04  2.41E-01  1.08E-03
     Ideal        3.25E-04  6.20E-03  1.22E+00  7.29E-04   1.70E-03  2.13E-03  9.12E-01  5.09E-04
     MGA          1.41E-03  5.99E-04  1.69E-01  4.37E-03   4.45E-03  2.92E-04  3.12E-01  5.21E-04
5    I-Extremes   1.50E-03  9.08E-04  3.23E-01  1.61E-02   6.93E-03  3.52E-04  5.21E-01  2.34E-03
     I-Centroid   2.03E-03  1.11E-03  3.05E-01  2.28E-02   7.29E-01  4.08E-04  5.47E-01  1.83E-03
     I-Random     1.54E-03  9.76E-04  3.71E-01  1.66E-02   6.30E-03  3.54E-04  5.18E-01  2.12E-03
     Ideal        1.44E-03  5.82E-03  1.21E+00  1.45E-03   2.93E-03  1.84E-03  1.02E+00  9.04E-04
     MGA          1.32E-02  1.90E-03  4.50E-01  2.34E-02   1.04E-02  4.28E-04  5.63E-01  1.25E-03
10   I-Extremes   2.82E-03  1.42E-03  6.10E-01  1.83E-02   7.14E-03  5.13E-04  7.29E-01  3.75E-03
     I-Centroid   2.88E-03  1.57E-03  6.70E-01  3.06E-02   9.47E-03  6.04E-04  7.60E-01  2.54E-03
     I-Random     2.68E-03  1.44E-03  6.55E-01  2.15E-02   7.46E-03  5.21E-04  7.27E-01  3.66E-03
     Ideal        3.28E-03  6.55E-03  1.28E+00  2.56E-03   4.12E-03  1.84E-03  1.17E+00  1.60E-03
     MGA          3.34E-02  2.51E-03  8.87E-01  2.79E-02   1.87E-02  7.07E-04  7.53E-01  2.65E-03
20   I-Extremes   3.42E-03  1.56E-03  6.55E-01  2.21E-02   8.53E-03  1.05E-03  8.21E-01  7.14E-03
     I-Centroid   2.97E-03  1.64E-03  6.61E-01  3.34E-02   7.39E-03  1.14E-03  8.59E-01  5.21E-03
     I-Random     2.81E-03  1.46E-03  6.41E-01  1.94E-02   7.19E-03  1.03E-03  8.09E-01  6.75E-03
     Ideal        5.33E-03  5.54E-03  1.19E+00  3.05E-03   3.49E-03  3.42E-03  1.30E+00  1.44E-03
     MGA          3.91E-02  2.53E-03  8.88E-01  3.32E-02   2.56E-02  1.48E-03  8.90E-01  3.64E-03

                          DTLZ6                                       DTLZ7
Obj  Algorithm    GD        IGD       LD        Con        GD        IGD       LD        Con
3    I-Extremes   2.79E-06  1.47E-05  9.34E-03  3.11E-08   5.11E-04  5.38E-03  1.39E+00  1.96E-04
     I-Centroid   1.86E-06  1.72E-05  1.28E-02  2.06E-08   3.51E-04  2.39E-04  2.03E-01  1.33E-04
     I-Random     2.77E-06  2.73E-04  5.34E-02  4.49E-08   9.69E-04  9.09E-03  1.98E+00  2.38E-04
     Ideal        7.76E-06  2.30E-03  3.94E-01  4.49E-07   2.84E-04  2.24E-02  3.56E+00  2.54E-04
     MGA          7.62E-06  5.92E-05  2.52E-02  3.00E-07   6.18E-04  1.01E-03  4.33E-01  4.77E-04
5    I-Extremes   1.85E-01  1.14E-03  2.92E-01  1.50E-06   2.67E-03  4.69E-03  1.89E+00  1.65E-02
     I-Centroid   1.70E-01  1.62E-04  6.35E-02  2.63E-07   2.38E-03  2.31E-03  9.64E-01  1.33E-02
     I-Random     1.28E-01  9.73E-04  2.59E-01  1.25E-06   3.25E-03  8.59E-03  2.80E+00  1.59E-02
     Ideal        3.17E-04  1.89E-03  4.12E-01  5.85E-07   6.11E-03  3.89E-02  6.51E+00  2.32E-02
     MGA          3.48E-01  1.52E-04  5.32E-02  7.80E-07   7.73E-03  5.15E-03  1.82E+00  2.39E-02
10   I-Extremes   1.08E-01  3.48E-03  6.92E-01  2.95E-05   1.14E-02  8.52E-03  3.23E+00  2.04E-01
     I-Centroid   9.50E-02  1.07E-03  2.86E-01  3.97E-06   9.00E-03  8.10E-03  3.52E+00  1.90E-01
     I-Random     1.15E-01  3.56E-03  6.65E-01  9.80E-05   1.43E-02  1.15E-02  4.49E+00  2.09E-01
     Ideal        4.90E-04  8.37E-03  1.40E+00  7.50E-05   5.48E-02  6.67E-02  1.10E+01  2.80E-01
     MGA          3.99E-01  1.11E-03  2.67E-01  1.45E-05   5.42E-02  1.35E-02  5.55E+00  3.21E-01
20   I-Extremes   8.03E-02  3.42E-03  7.40E-01  3.74E-02   2.33E-02  1.62E-02  4.40E+00  6.84E-01
     I-Centroid   8.18E-02  1.19E-03  3.23E-01  8.50E-03   2.02E-02  1.75E-02  5.95E+00  6.91E-01
     I-Random     1.01E-01  4.08E-03  7.01E-01  1.00E-01   3.01E-02  1.92E-02  6.95E+00  7.13E-01
     Ideal        7.83E-04  8.19E-03  1.36E+00  1.03E-02   1.52E-01  9.45E-02  1.73E+01  1.41E+00
     MGA          4.02E-01  1.18E-03  2.75E-01  9.71E-04   1.53E-01  2.85E-02  8.78E+00  1.02E+00

However, as the number of objectives grows, the number of disconnected fronts becomes too high: since there are many fronts to be covered, the number of swarms is insufficient to cover the entire Pareto-optimal front, and I-Multi Swarm presented results equivalent to MGA in terms of diversity. The three variants of I-Multi Swarm deteriorated in the high dimensions, especially in terms of LD (which means that the PFapprox is far from some regions of PFtrue). In summary, I-Multi Centroid obtained the best results in terms of diversity: when there are disconnected fronts, the clustering strategy proves efficient. Furthermore, I-Multi Extremes, which also extracts information from the generated solutions to guide the search, performed better than the random approach.

In summary, using a cooperative multi-swarm strategy that combines the qualities of the MGA and Ideal archivers proves to be a good way of dealing with Many-Objective Optimization. Across the different many-objective scenarios, the proposed algorithm, I-Multi Swarm, outperformed the algorithms that use only the Ideal archiving method or only the MGA. Combining the diversity properties of MGA (to create a spread-out initial set of solutions) with a multi-swarm search using the Ideal archiving method (aiming at more convergence) improved the results both in terms of diversity of the obtained solutions and of convergence to the Pareto-optimal front.

5. CONCLUSIONS

This work presents a cooperative Particle Swarm Optimization algorithm designed to deal with Many-Objective Problems. The proposed multi-swarm algorithm, I-Multi Swarm, has as its main characteristic the combination of different archiving methods, aiming to improve the search both in terms of convergence to the Pareto-optimal front and diversity of the non-dominated solutions. The algorithm takes advantage of the multi-swarm approach to improve the results of the archiving methods.

I-Multi Swarm is evaluated through an empirical analysis using a set of many-objective optimization problems of the DTLZ family. The influence of parameters such as the number of swarms, the number of split iterations and the size of the search region is analyzed. Furthermore, the three methods for selecting seeds and archives for the sub-swarms are analyzed and compared with each other. Moreover, the I-Multi Swarm variants are compared with the Ideal-SMPSO and MGA-SMPSO algorithms. In general, the I-Multi Swarm methods obtain promising results: the three proposed variants performed well, each standing out in different scenarios. I-Multi Random was the most effective on the DTLZ2 problem; it is also very good on DTLZ4, where the random approach helps to avoid the denser region. In addition, I-Multi Centroid obtained very good results on DTLZ4, similar to I-Multi Random. The I-Multi Centroid approach stands out on DTLZ7; it is characterized by the strategy of clustering the generated solutions and defining the seed of each swarm as the centroid of its cluster, which is effective in promoting a better coverage of the disconnected Pareto-optimal fronts. I-Multi Extremes, in general, obtains very good results; however, it suffers a great deterioration when the number of objectives is close to the number of swarms, since only the extremes are covered in that situation. The main shortcoming of I-Multi Swarm appeared on the DTLZ6 problem: all strategies were unable to avoid the local optima and generated a PFapprox with low convergence toward PFtrue.

However, in general, the results of I-Multi outperformed MGA-SMPSO and Ideal-SMPSO in almost all comparisons. Future work includes the comparison of I-Multi with other many-objective algorithms, like the ones proposed in [6] and [20], the proposal of new methods to initialize the sub-swarms, and the promotion of effective communication between sub-swarms to improve their search.

Acknowledgments

This work was sponsored by the CAPES Foundation, Process 0192-12-0, and CNPq productivity grant 305986/2012-0.

6. REFERENCES

[1] S. Adra and P. Fleming. Diversity management in evolutionary many-objective optimization. IEEE Transactions on Evolutionary Computation, 15(2):183–195, April 2011.
[2] J. Branke and S. Mostaghim. About selecting the personal best in multi-objective particle swarm optimization. In Parallel Problem Solving from Nature (PPSN IX), volume LNCS 4193, pages 523–532, 2006.
[3] A. Britto and A. Pozo. Using archiving methods to control convergence and diversity for many-objective problems in particle swarm optimization. In IEEE Congress on Evolutionary Computation (CEC 2012), pages 605–612, June 2012.
[4] A. B. de Carvalho and A. Pozo. Measuring the convergence and diversity of CDAS multi-objective particle swarm optimization algorithms: A study of many-objective problems. Neurocomputing, 75:43–51, January 2012.
[5] C. A. C. Coello, G. B. Lamont, and D. A. V. Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[6] K. Deb and H. Jain. Handling many-objective problems using an improved NSGA-II procedure. In IEEE Congress on Evolutionary Computation (CEC 2012), pages 1–8, June 2012.
[7] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, August 2002.
[8] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable multi-objective optimization test problems. In Congress on Evolutionary Computation (CEC 2002), pages 825–830, 2002.
[9] J. Demsar. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7:1–30, 2006.
[10] M. El-Abd and M. S. Kamel. A taxonomy of cooperative particle swarm optimizers. International Journal of Computational Intelligence Research, 4(2):137–144, 2008.
[11] M. Garza-Fabre, G. T. Pulido, and C. A. Coello. Ranking methods for many-objective optimization. In Proceedings of the 8th Mexican International Conference on Artificial Intelligence (MICAI '09), pages 633–645, Berlin, Heidelberg, 2009. Springer-Verlag.
[12] H. Ishibuchi, N. Tsukamoto, and Y. Nojima. Evolutionary many-objective optimization: A short review. In IEEE Congress on Evolutionary Computation (CEC 2008), pages 2419–2426, 2008.
[13] M. Laumanns and R. Zenklusen. Stochastic convergence of random search methods to fixed size Pareto front approximations. European Journal of Operational Research, 213(2):414–421, 2011.
[14] S. Mostaghim, J. Branke, and H. Schmeck. Multi-objective particle swarm optimization on computer grids. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO '07), pages 869–875, New York, NY, USA, 2007. ACM.
[15] S. Mostaghim and H. Schmeck. Distance based ranking in many-objective particle swarm optimization. In G. Rudolph, T. Jansen, S. Lucas, C. Poloni, and N. Beume, editors, Parallel Problem Solving from Nature (PPSN X), volume 5199 of Lecture Notes in Computer Science, pages 753–762. Springer Berlin / Heidelberg, 2008.
[16] S. Mostaghim and J. Teich. Strategies for finding good local guides in multi-objective particle swarm optimization. In Swarm Intelligence Symposium, pages 26–33, 2003.
[17] A. Nebro, J. Durillo, J. Garcia-Nieto, C. A. C. Coello, F. Luna, and E. Alba. SMPSO: A new PSO-based metaheuristic for multi-objective optimization. In IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), pages 66–73, 2009.
[18] R. C. Purshouse, C. Jalba, and P. J. Fleming. Preference-driven co-evolutionary algorithms show promise for many-objective optimisation. In Proceedings of the 6th International Conference on Evolutionary Multi-Criterion Optimization (EMO '11), pages 136–150, Berlin, Heidelberg, 2011. Springer-Verlag.
[19] M. Reyes-Sierra and C. A. C. Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. International Journal of Computational Intelligence Research, 2(3):287–308, 2006.
[20] O. Schutze, A. Lara, and C. A. C. Coello. On the influence of the number of objectives on the hardness of a multiobjective optimization problem. IEEE Transactions on Evolutionary Computation, 15(4):444–455, 2011.
