I-MOPSO: A Suitable PSO Algorithm for Many-Objective Optimization
Andre Britto
Federal University of Parana
Curitiba, Parana, Brazil 81531-980
Aurora Pozo
Federal University of Parana
Curitiba, Parana, Brazil 81531-980
Abstract—Multi-Objective Optimization Problems (MOPs) are problems with more than one objective function. In the literature, there are several Multi-Objective Evolutionary Algorithms (MOEAs) that deal with MOPs, including Multi-Objective Particle Swarm Optimization (MOPSO). However, these algorithms scale poorly as the number of objectives grows. Many-Objective Optimization researches methods to decrease the negative effects of applying MOEAs to problems with more than three objective functions. Here, a new PSO algorithm is proposed, called I-MOPSO, which explores specific aspects of MOPSO to deal with Many-Objective Problems. This algorithm takes advantage of an archiving method to introduce more convergence and of the leader selection strategy to introduce diversity into the search. I-MOPSO is evaluated through an empirical analysis aiming to observe how it works in Many-Objective scenarios in terms of convergence and diversity with respect to the Pareto front. The proposed algorithm is compared to other MOEAs from the literature through the use of quality indicators and statistical tests.
Keywords—Multi-Objective Particle Swarm Optimization; Multi-Objective Optimization; Many-Objective Optimization
I. INTRODUCTION
Multi-Objective Particle Swarm Optimization (MOPSO) is a
population based Multi-Objective Meta-Heuristic that has been
used to solve several Multi-Objective Optimization Problems
(MOPs) [1]. MOPs involve the simultaneous optimization of
two or more conflicting objective functions subject to certain
constraints. MOPSO algorithms are specifically designed to
provide robust and scalable solutions. In these algorithms, each
element, called particle, uses simple local rules to govern its
actions and by means of the interactions of the entire group,
the swarm achieves its objectives.
However, in spite of the good results of MOEAs, includ-
ing MOPSO algorithms [2], these algorithms scale poorly
when dealing with problems with more than 3 objective
functions [3], [4]. These problems are called Many-Objective
Optimization Problems (MaOPs). One of the main challenges
faced by MOEAs with many objectives is the deterioration
of the search ability. This deterioration occurs due to the
increase of the number of non-dominated solutions with the
number of objectives and, consequently, there is no pressure
towards the Pareto front. Many-Objective Optimization is
the search for new techniques with the goal of overcoming
these limitations [5].
In the literature, MaOPs have been tackled through different
approaches like: decomposition strategies, the proposal of
new preference relations, dimensionality reduction, among
others [3]. Our goal is to explore MOPSO in Many-Objective
Optimization, a topic little explored in the literature. However,
a different approach is taken here: specific features of
MOPSO are considered. A new algorithm is proposed, called
I-MOPSO (Ideal Point Guided MOPSO). This algorithm has
two main aspects: an archiving method, which introduces more
convergence into the search, and a leader selection method
to deal with diversity. The archiving method uses the idea of
guiding the solutions in the archive to a specific area of the
objective space near the ideal point [6]. For the leader's selection,
the NWSum method [7] was chosen, which introduces more
diversity into the search and avoids the concentration of the
solutions in a small region of the Pareto front.
The I-MOPSO algorithm is evaluated through an empirical
analysis. The algorithm is compared to another MOPSO algorithm
designed for Many-Objective Optimization, called CDAS-
SMPSO [8]. Furthermore, two MOEAs from the literature
are also compared: SMPSO [2] and NSGA-II [9]. In this
comparison, the algorithms solve the DTLZ2 many-objective
benchmark problem [10]. Also, a set of quality indicators is
applied to investigate how these algorithms scale up in terms
of convergence and diversity in many-objective scenarios:
Generational Distance (GD), Inverse Generational Distance
(IGD) and Spacing. In addition, the distribution of the
Tchebycheff distance over the "knee" of the Pareto front [11]
is analyzed.
The rest of this paper is organized as follows: Section II
describes the main concepts of Many-Objective Optimization.
The Multi Objective Particle Swarm Optimization is presented
in Section III. After, the proposed algorithm is discussed
in Section IV. Section V presents the empirical analysis
performed to evaluate I-MOPSO. Finally, Section VI presents
the conclusions and future work.
II. MANY-OBJECTIVE OPTIMIZATION
A Multi-Objective problem (MOP) involves the simultane-
ous satisfaction of two or more objective functions. Further-
more, in such problems, the objectives to be optimized are
usually in conflict, which means that there is no single
best solution, but a set of solutions. To find this set of solutions,
2012 Brazilian Symposium on Neural Networks
1522-4899/12 $26.00 © 2012 IEEE
DOI 10.1109/SBRN.2012.20
Pareto Optimality Theory [12] is used. The general multi-
objective minimization problem, without constraints, can be
stated as (1).
Minimize f(x) = (f1(x), f2(x), ..., fm(x))   (1)
subject to x ∈ Ω, where: x ∈ Ω is a feasible solution vector,
Ω is the feasible region of the problem, m is the number
of objectives and fi(x) is the i-th objective function of the
problem.
In this case, the purpose is to optimize m objective functions
simultaneously, with the goal of finding a trade-off set of
solutions that represents the best compromise between the
objectives. So, given f(x) = (f1(x), f2(x), ..., fm(x)) and
f(y) = (f1(y), f2(y), ..., fm(y)), f(x) dominates f(y), de-
noted by f(x) ≺ f(y), if and only if (minimization):

∀i ∈ {1, 2, ..., m} : fi(x) ≤ fi(y), and ∃i ∈ {1, 2, ..., m} : fi(x) < fi(y)
f(x) is non-dominated if there is no f(y) that dominates f(x).
Also, if there is no solution y that dominates x, x is called
Pareto Optimal and f(x) is a non-dominated objective vector.
The set of all Pareto Optimal solutions is called Pareto Optimal
Set, denoted by P*, and the set of all non-dominated objective
vectors is called Pareto Front, denoted by PF*.
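The dominance relation above can be sketched as a small predicate (an illustrative Python sketch, not code from the paper; objective vectors are plain lists, and minimization is assumed):

```python
def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization)."""
    # fx must be no worse in every objective...
    if any(a > b for a, b in zip(fx, fy)):
        return False
    # ...and strictly better in at least one.
    return any(a < b for a, b in zip(fx, fy))
```

For example, `dominates([1, 2], [2, 3])` is true, while two identical vectors never dominate each other.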
In MOP, the MOEAs modify Evolutionary Algorithms by
incorporating a selection mechanism based on Pareto opti-
mality and adopting a diversity preservation mechanism that
avoids the convergence to a single solution [12]. However,
since in most applications the search for the Pareto optimal set
is NP-hard, the MOEAs focus on finding an approximation
Pareto front, as close as possible to the true Pareto front.
Recently, research efforts have been oriented to investigate
the scalability of these algorithms with respect to the number
of objectives [3] [4]. Many-Objective Optimization is the area
that studies new techniques for problems that have more than
3 objectives, called Many-Objective Problems (MaOPs).
In the literature, some studies have shown that
MOEAs scale poorly in many-objective optimization prob-
lems [3] [8] [5] [13]. The main reason for this is the number
of non-dominated solutions, which increases exponentially with
the number of objectives. As a consequence: first, the search
ability deteriorates because it is not possible to impose
preferences for selection purposes; second, the number of so-
lutions required for approximating the entire Pareto front also
increases; and finally, there is difficulty in the visualization
of solutions. Currently, these issues have been tackled mainly
through the adaptation of preference relations that induce a
finer order on the objective space, dimensionality reduction
and decomposition strategies [3].
Our main goal is to propose a PSO algorithm suitable for
Many-Objective Problems. The idea is to explore the method
used to store the non-dominated solutions in the external archive
and the leader selection procedure. Until fairly recently, most of
the research was concentrated on a small group of algorithms,
often Genetic Algorithms.
III. MULTI-OBJECTIVE PARTICLE SWARM OPTIMIZATION
Particle Swarm Optimization (PSO) is a cooperative
population-based heuristic inspired by the social behavior of
birds flocking to find food [1]. The set of possible solutions
is a set of particles, called a swarm, which moves in the
search space, in a cooperative search procedure. In PSO, a
set of solutions searches for optimal solutions by updating
generations. These movements are performed by the velocity
operator that is guided by a local and a social component.
In Multi-Objective Particle Swarm Optimization (MOPSO),
the Pareto dominance relation is
adopted to establish preferences among solutions to be consid-
ered as leaders. By exploring the Pareto dominance concepts,
each particle in the swarm could have different leaders, but
only one may be selected to update the velocity.
This set of leaders is stored in an external archive (or
repository) that contains the best non-dominated solutions
found so far. Normally, this archive is bounded and has a
maximum size. So, two important features of PSO are: the
method to archive the solutions in the repository and how
each particle will choose its leader (leader’s selection).
The basic steps of a MOPSO algorithm are: initialization of
the particles, computation of the velocity, position update and
update of leader’s archive.
Each particle pi, at a time step t, has a position x(t) ∈ R^n
that represents a possible solution. The position of the particle
at time t+1 is obtained by adding its velocity, v(t) ∈ R^n,
to x(t), Equation 2. The velocity of a particle pi is based on
the best position already found by the particle, pbest(t), and
the best position already found by the set of neighbors of pi,
Rh(t), which is a leader taken from the repository, see Equation 3.

x(t+1) = x(t) + v(t+1)   (2)

v(t+1) = ω · v(t) + (C1 · φ1) · (pbest(t) − x(t)) + (C2 · φ2) · (Rh(t) − x(t))   (3)

The variables φ1 and φ2 in (3) are coefficients, randomly
drawn in each iteration, that weigh the influence of the
personal best and the leader. Constants C1 and C2 indicate
how much each component influences the velocity. The
coefficient ω is the inertia of the particle and controls how
much the previous velocity affects the current one. The local
leader, pbest(t), is the best position ever achieved by the
particle. If the new position and the current pbest(t) are
non-dominated with respect to each other, the new value is
chosen randomly between these two vectors. Rh is a particle
from the repository, chosen as a guide for pi, obtained through
a global neighborhood (totally connected).
In the literature, some works deal with MaOPs using PSO
algorithms. The work presented in [8] can be highlighted.
This work studied the influence of the Control of Dominance
Area of Solutions [14] in different MOPSO algorithms. The study
showed that the technique improves the results of MOPSO for
problems with many objectives. It proposes a new algorithm,
called CDAS-SMPSO, that outperformed the SMPSO algo-
rithm in Many-Objective scenarios. In [15], a PSO algorithm
handles many objectives using a Gradual Pareto dominance
relation to overcome the problem of finding non-dominated
solutions when the number of objectives grows. Mostaghim
and Schmeck [16] presented an overview of MOPSO with
many objectives; two variants of MOPSO were also proposed,
based on the ranking of non-dominated solutions.
IV. I-MOPSO
It is known that Pareto based algorithms have several limita-
tions when dealing with MaOPs, but it is possible to introduce
new features into traditional MOEAs to avoid these problems.
As discussed in Section II, the Many-Objective Optimization
literature concentrates its work on tasks like the proposal
of new preference relations, dimensionality reduction, among
others. Here, our interest is to explore specific characteristics
of PSO algorithms in order to reduce the limitations observed
in Many-Objective Optimization.
A new Multi-Objective Particle Swarm Optimization al-
gorithm, called I-MOPSO (Ideal Point Guided MOPSO), is
proposed. This algorithm has two main features: the archiving
process, which introduces more convergence, and the leader's
selection method, which provides diversity to the search.
I-MOPSO is based on the SMPSO algorithm [2]. It uses
the procedure that limits the velocity of each particle. The
velocity of the particle is limited by a constriction factor χ,
which varies based on the values of C1 and C2. Besides,
SMPSO introduces a mechanism that bounds the accumulated
velocity of each variable j (in each particle).
Also, after the velocity of each particle has been updated,
a mutation operation is applied: a polynomial mutation [9]
is applied to 15% of the population, randomly selected.
The proposed algorithm differs from SMPSO in the archiv-
ing method and the leader's selection strategy. In I-MOPSO,
the archiving method introduces more convergence towards
the Pareto front. The Ideal archiver, presented in [6], is used.
This archiving method guides the solutions in the archive to
a specific area of the objective space: the ideal point [12] is
selected as guide. The ideal point is a vector with the best
value for every objective, obtained at each iteration from the
points in the external archive. In this approach, the distance
to the ideal point defines which solutions will remain in the
archive. When the archive becomes full and a new solution
tries to enter, the following procedure is executed: first, the
ideal point over all solutions in the archive plus the new
solution is obtained; second, the Euclidean distance from each
point to the ideal point is calculated; finally, the point with
the highest distance is removed. The main idea of this archiver
is that guiding the selection of the points in the archive to a
region close to the ideal point will increase the convergence
of the search to the Pareto front and will place the solutions
in a good area of the objective space.
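The truncation step described above can be sketched as follows (an illustrative Python sketch assuming minimization and an archive stored as a list of objective vectors; the non-dominance filtering that precedes insertion is omitted):

```python
import math

def ideal_archive_insert(archive, new_point, max_size):
    """Ideal archiver step: when the archive overflows, drop the point
    farthest (Euclidean distance) from the ideal point computed over
    the archive plus the new solution."""
    candidates = archive + [new_point]
    if len(candidates) <= max_size:
        return candidates
    # Ideal point: best (minimum) value per objective over all candidates.
    ideal = [min(p[j] for p in candidates) for j in range(len(new_point))]
    dist = lambda p: math.sqrt(sum((a - b) ** 2 for a, b in zip(p, ideal)))
    # Remove the candidate with the highest distance to the ideal point.
    candidates.remove(max(candidates, key=dist))
    return candidates
```

For instance, with archive [[0, 1], [1, 0]] of capacity 2, the entering point [2, 2] is itself discarded, since it lies farthest from the ideal point [0, 0].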
However, this process of guiding the solutions to a region
near the ideal point could introduce a lack of diversity into
the PSO search. So, to avoid the concentration of the
generated approximation front in a small region, a leader
selection method that introduces diversity into the search
is chosen.
The NWSum method proposed in [7] is used. This method
consists in guiding each particle towards the objective
dimension to which it is closer (dcloser). The selected leader
will be the particle in the repository that is closest to dcloser.
With this method, it is possible to guide the particles closer
to the axis of each dimension, preventing them from being
located only near the ideal point. The method calculates
weights from the objective values of the particle and gives
more weight to those objectives in which the particle has good
values. It is defined by Equation 4, where xi represents the
position of particle i and pi is a possible leader for xi:

F = Σj ( fj(xi) / Σk fk(xi) ) · fj(pi)   (4)

The particle pi that generates the greatest weighted sum is
used for the update, aiming to push each particle towards the
axis it is already close to.
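The selection of Equation 4 can be sketched as follows (an illustrative Python sketch; `x_objs` and `repository` hold objective vectors and are names assumed for the example):

```python
def nwsum_leader(x_objs, repository):
    """NWSum leader selection (Eq. 4): pick the repository member that
    maximizes the weighted sum, where each normalized objective value of
    the particle weighs the corresponding objective of the candidate."""
    total = sum(x_objs)
    weights = [f / total for f in x_objs]  # fj(xi) / sum_k fk(xi)
    score = lambda p: sum(w * fp for w, fp in zip(weights, p))
    return max(repository, key=score)
```

With particle objectives [1, 3], the weights favor the second objective, so a candidate strong in that dimension is selected.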
V. EMPIRICAL ANALYSIS
To evaluate the proposed algorithm, I-MOPSO, it was
compared with some state-of-the-art MOEAs and with
algorithms specifically designed for Many-Objective Problems.
The MOEAs chosen were the SMPSO algorithm [2], which has
very good results when compared to other MOPSO algorithms,
and NSGA-II [9], which has very good results for MOPs and is
often used as a baseline in the MOP literature. Furthermore,
I-MOPSO was compared with the CDAS-SMPSO algorithm [8],
which was designed for MaOPs.
Each algorithm executed 50000 fitness evaluations. For
SMPSO, CDAS-SMPSO and I-MOPSO, the population was
limited to 250 particles. In each iteration, ω varied randomly
in the interval [0, 0.8], φ1 and φ2 varied randomly in [0, 1],
and C1 and C2 varied randomly over the interval [1.5, 2.5]. For
CDAS-SMPSO, the parameter that controls the dominance area,
Si, was set to 0.25, 0.30, 0.35, 0.40 and 0.45, values that
obtained the best results in [8]. The archive was limited to
250 solutions. NSGA-II was executed with a population of
250 individuals.
The algorithms were applied to the DTLZ2 many-objective
problem [10]. This problem can be scaled to any number of
objectives (m) and decision variables (n) and the global Pareto
front is known analytically. The DTLZ2 problem can be used
to investigate the ability of the algorithms to scale their
performance up to large numbers of objectives. In this analysis,
the problem was scaled to 2, 3, 5, 10, 15 and 20 objectives.
Here, our goal is to observe aspects like convergence
towards the Pareto front and the diversity of the approxi-
mation of the Pareto front generated by each algorithm. To
measure convergence, the Generational Distance (GD) [12]
was used, which measures how far the approximation Pareto
Fig. 1: Mean of GD values for all 30 executions for all
algorithms
Front (PFapprox) generated by each algorithm is from the true
Pareto front of the problem, PFtrue. It is a minimization
measure. To observe whether PFapprox is well distributed
over the Pareto front, the Inverse Generational Distance (IGD)
was applied. IGD measures the minimum distance of each
point of PFtrue to the points of PFapprox. IGD allows us to
observe whether PFapprox converges to the true Pareto front
and also whether this set is well diversified. It is important
to perform a joint analysis of the GD and IGD indicators
because, if only GD is considered, it is not possible to notice
whether the solutions are distributed over the entire Pareto
front. Finally, the Spacing quality indicator [12] was used,
which measures the range variance between neighboring
solutions in the front. If the value of this metric is 0, all
solutions are equally distributed in the objective space. The
Hypervolume metric is not used here because it has some
limitations when applied to Many-Objective Optimization,
such as giving higher values to PFapprox sets near the edges
in this context, as discussed in [11].
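Under one common formulation (mean of nearest-neighbor Euclidean distances; other variants exist in the literature), GD and IGD can be sketched as:

```python
import math

def _min_dist(point, front):
    """Euclidean distance from a point to its nearest neighbor in a front."""
    return min(math.dist(point, q) for q in front)

def gd(approx, true_front):
    """Generational Distance: mean distance from each approximation point
    to the nearest point of the true Pareto front (convergence)."""
    return sum(_min_dist(p, true_front) for p in approx) / len(approx)

def igd(approx, true_front):
    """Inverse Generational Distance: mean distance from each true-front
    point to the nearest approximation point (convergence + diversity)."""
    return sum(_min_dist(q, approx) for q in true_front) / len(true_front)
```

Note how an approximation set lying exactly on the true front gives GD = 0, while IGD still penalizes it if it leaves parts of the true front uncovered.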
Besides the previous quality indicators, a methodology
presented in [11] was also used. Since one of the problems
of Many-Objective Optimization is the visualization of the
approximation set, one way to tackle this issue is the use of
histograms. So, seeking to observe where the approximation
set generated by each algorithm is located, the distribution
of the Tchebycheff distance is analyzed. This methodology
compares the Tchebycheff distance of each point of PFapprox
to the ideal point (or knee) of the Pareto front. The
distributions of the Tchebycheff distance for all solutions are
presented in distribution charts, for all analyzed numbers of
objectives.
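The Tchebycheff distance used in this methodology is the largest per-objective deviation from the ideal point; a minimal sketch (the optional weight vector is an assumption for illustration):

```python
def tchebycheff(point, ideal, weights=None):
    """Tchebycheff distance from an objective vector to the ideal point:
    the largest (optionally weighted) per-objective deviation."""
    if weights is None:
        weights = [1.0] * len(point)
    return max(w * abs(p - z) for w, p, z in zip(weights, point, ideal))
```

Solutions near the knee thus produce small values, which show up as peaks near the origin of the distribution charts.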
Every algorithm was executed thirty times. The quality
indicators are compared using the Friedman test at a 5% sig-
nificance level. The test is applied to the raw values of each
metric. The post-test of the Friedman test indicates whether
there is any statistically significant difference between the
analyzed data sets; to identify which data set has the best
values, boxplot charts are used. The boxplot gives information
about the location, spread, skewness and tails of the data.
Due to space
Fig. 2: Mean of IGD values for all 30 executions for all
algorithms
Fig. 3: Mean of Spacing values for all 30 executions for all
algorithms
limitations, the boxplots were omitted in this paper.
The results are presented in Figures 1 to 6 and Table I.
Figures 1 to 3 present the mean values of the GD, IGD and
Spacing for each algorithm. Every curve in each chart repre-
sents the GD, IGD and Spacing values evolution for different
objective numbers. Figures 4 to 6 present the distribution of
the Tchebycheff distance. Table I presents a summary of the
best algorithms according to the Friedman test.
First, GD is analyzed to observe whether the algorithms
converged to the Pareto front. In Table I, it can be observed
that only the algorithms designed for MaOPs obtained the best
results, according to the Friedman test. The CDAS-SMPSO
algorithm obtained the best results for 2, 3 and 5 objectives,
with different Si values. However, for high numbers of
objectives, the proposed I-MOPSO algorithm obtained the
best results. This result is expected, since both algorithms
are specially designed to introduce more convergence into the
search, especially I-MOPSO, which guides the search to a
region near the ideal point. In Figure 1, it can be observed that
I-MOPSO had very good GD values for all numbers of
objectives. In contrast, the MOEAs from the literature, SMPSO
TABLE I: Best algorithms according to the Friedman test for the DTLZ2 problem (numeric entries denote CDAS-SMPSO Si configurations)

Obj | GD                           | IGD                          | Spacing
 2  | 0.25, 0.30 and 0.35          | NSGA-II                      | 0.25, 0.30 and 0.35
 3  | 0.25, 0.40 and 0.45          | 0.40, I-MOPSO, NSGA-II       | 0.25, 0.45 and SMPSO
 5  | 0.25, 0.35, 0.40 and I-MOPSO | 0.35, 0.40 and I-MOPSO       | 0.25 and 0.45
10  | I-MOPSO                      | 0.35 and 0.40                | 0.25 and I-MOPSO
15  | I-MOPSO                      | 0.30, 0.35 and 0.40          | 0.45 and I-MOPSO
20  | I-MOPSO                      | 0.30, 0.35, 0.40 and I-MOPSO | 0.45
and NSGA-II, suffer a huge deterioration when the number of
objectives grows.
IGD is analyzed to observe the diversity properties of each
algorithm. Again, the algorithms with Many-Objective tech-
niques obtained the best results, especially when the number
of objectives grows. I-MOPSO obtained the best result,
along with CDAS-SMPSO, for 5 and 20 objectives.
CDAS-SMPSO obtained the best diversity for almost all
numbers of objectives, with different Si values. NSGA-II had
the best result only for two objectives, and SMPSO did not
obtain any best result. Through Figure 2, again, it can be
observed that I-MOPSO did not suffer a great deterioration
when the number of objectives grows, but has worse values
for high numbers of objectives than for low numbers. This
occurs because the search is directed to a region near the
ideal point, favoring convergence over diversity. The results
of NSGA-II were omitted from the chart, since it obtained
very high IGD values for high numbers of objectives.
Fig. 4: Tchebycheff distance distribution, 10 objective func-
tions
For Spacing, the I-MOPSO algorithm generated similar
values for all objectives and did not deteriorate when the
number of objectives grows. It obtained the best values for 10
and 15 objectives. CDAS-SMPSO also had good results,
obtaining the best Spacing value for all numbers of objectives,
however with different Si values. As presented in [8],
CDAS-SMPSO often obtains good Spacing values due to the
small size of its approximation Pareto set. Again, SMPSO
and NSGA-II had a high deterioration when the number of
objectives grows. Figure 3 shows the good
Fig. 5: Tchebycheff distance distribution, 15 objective func-
tions
Fig. 6: Tchebycheff distance distribution, 20 objective func-
tions
results of I-MOPSO, especially for high numbers of objectives,
the good results of some CDAS-SMPSO configurations, and
the deterioration of NSGA-II and SMPSO.
Finally, the distributions of the Tchebycheff distance are
analyzed. In these charts, curves that have peaks near small
values of the Tchebycheff distance concentrate their solutions
near the knee (ideal point). Here, only the charts for the
higher numbers of objectives, 10, 15 and 20, are presented;
however, this analysis holds for all numbers of objectives.
For small numbers of objectives (2, 3 and 5), almost all
algorithms have a similar distribution, often near the knee.
However, when the number of objectives grows, both
NSGA-II and SMPSO tend to spread
their distributions over different regions of the Pareto front.
Since these algorithms cannot reach the Pareto front, a
distribution with values far from the ideal point is expected.
I-MOPSO had distributions near the knee of the Pareto front,
often having peaks near the origin of the chart. For
CDAS-SMPSO, as discussed in [8], one of the characteristics
introduced by CDAS is to guide the search to a region near
the knee. Therefore, the distributions for the different
configurations of CDAS-SMPSO were often located near the
ideal point.
In summary, the I-MOPSO algorithm obtained good results
for many-objective problems, in terms of convergence and
diversity. The algorithm has good convergence towards the
Pareto front and generates the approximation Pareto front near
the knee, due to the archiving method. However, the algorithm
loses some diversity, since it tries to guide the search to the
ideal point. To prevent the final solutions from concentrating
in a small region of the Pareto front, the leader selection
method introduces more diversity into the search.
CDAS-SMPSO had very good results, as presented in [8].
I-MOPSO had very similar results, but with the advantage of
using the original Pareto dominance and not needing any
additional configuration parameters. Through the results, it
can be observed that different CDAS-SMPSO configurations
obtained good values, so it is not possible to obtain the best
result with only one execution of CDAS-SMPSO. Finally, as
observed in other works in the literature [8] [9], the SMPSO
and NSGA-II algorithms had a high deterioration of the
search when the number of objectives grows.
VI. CONCLUSION
Multi-Objective Particle Swarm Optimization has demon-
strated to be very powerful, dealing with MOPs in a suitable
way and providing a set of good solutions for a problem,
considering Pareto non-dominance concepts [8]. However,
just as other MOEAs, MOPSO algorithms suffer a great
deterioration on Many-Objective Problems.
This work presented a new MOPSO algorithm, called
I-MOPSO, with the goal of being suitable for problems with
more than 3 objective functions. The main idea was to explore
convergence and diversity through specific features of MOPSO.
Therefore, to introduce more convergence towards the Pareto
front, I-MOPSO uses an archiving method that guides the
solutions to a region near the ideal point of the Pareto front.
Besides, to prevent the solutions from concentrating at a
single point and to introduce more diversity into the search,
the leader selection method chosen for I-MOPSO was the
NWSum method.
I-MOPSO was evaluated through an empirical analysis,
which used the DTLZ2 many-objective problem, and was
compared to CDAS-SMPSO, SMPSO and NSGA-II. It was
concluded that the proposed algorithm can obtain good
results in a many-objective scenario. The algorithm presented
good convergence and covered a region of the Pareto front
near the ideal point. The results of I-MOPSO were very
similar to those of the CDAS-SMPSO algorithm; however,
the proposed algorithm does not have any additional
parameters and uses the original Pareto dominance relation.
Future work includes exploring other characteristics of
MOPSO algorithms, seeking to obtain a more diversified
approximation of the Pareto front without losing convergence.
Also, I-MOPSO will be analyzed on other benchmark
problems, such as problems with discontinuous Pareto fronts.
REFERENCES
[1] M. Reyes-Sierra and C. A. C. Coello, "Multi-objective particle swarm optimizers: A survey of the state-of-the-art," International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006.
[2] A. Nebro, J. Durillo, J. Garcia-Nieto, C. A. C. Coello, F. Luna, and E. Alba, "SMPSO: A new PSO-based metaheuristic for multi-objective optimization," in IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), 2009, pp. 66–73.
[3] H. Ishibuchi, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization: A short review," in IEEE Congress on Evolutionary Computation (CEC 2008), 2008, pp. 2419–2426.
[4] O. Schutze, A. Lara, and C. A. C. Coello, "On the influence of the number of objectives on the hardness of a multiobjective optimization problem," IEEE Transactions on Evolutionary Computation, vol. 15, no. 4, pp. 444–455, 2011.
[5] S. Adra and P. Fleming, "Diversity management in evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 2, pp. 183–195, April 2011.
[6] A. Britto and A. Pozo, "Using archiving methods to control convergence and diversity for many-objective problems in particle swarm optimization," in IEEE Congress on Evolutionary Computation (CEC 2012), June 2012, pp. 605–612.
[7] N. Padhye, J. Branke, and S. Mostaghim, "Empirical comparison of MOPSO methods: Guide selection and diversity preservation," in IEEE Congress on Evolutionary Computation, 2009, pp. 2516–2523.
[8] A. B. de Carvalho and A. Pozo, "Measuring the convergence and diversity of CDAS multi-objective particle swarm optimization algorithms: A study of many-objective problems," Neurocomputing, vol. 75, pp. 43–51, Jan. 2012.
[9] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, August 2002.
[10] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," in Congress on Evolutionary Computation (CEC 2002), 2002, pp. 825–830.
[11] A. L. Jaimes and C. A. C. Coello, "Study of preference relations in many-objective optimization," in Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), 2009, pp. 611–618.
[12] C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation). Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
[13] R. Purshouse, C. Jalba, and P. Fleming, "Preference-driven co-evolutionary algorithms show promise for many-objective optimisation," in Evolutionary Multi-Criterion Optimization, ser. Lecture Notes in Computer Science, vol. 6576. Springer Berlin/Heidelberg, 2011, pp. 136–150.
[14] H. Sato, H. E. Aguirre, and K. Tanaka, "Controlling dominance area of solutions and its impact on the performance of MOEAs," in Evolutionary Multi-Criterion Optimization, ser. Lecture Notes in Computer Science, vol. 4403. Berlin: Springer, 2007, pp. 5–20.
[15] M. Koppen and K. Yoshida, "Many-objective particle swarm optimization by gradual leader selection," in Proceedings of the 8th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA '07), Part I. Berlin, Heidelberg: Springer-Verlag, 2007, pp. 323–331.
[16] S. Mostaghim and H. Schmeck, "Distance based ranking in many-objective particle swarm optimization," in Parallel Problem Solving from Nature (PPSN X), ser. Lecture Notes in Computer Science, vol. 5199. Springer Berlin/Heidelberg, 2008, pp. 753–762.