MO-TRIBES, an adaptive multiobjective particle swarm optimization algorithm


Comput Optim Appl (2011) 49: 379–400. DOI 10.1007/s10589-009-9284-z

MO-TRIBES, an adaptive multiobjective particle swarm optimization algorithm

Yann Cooren · Maurice Clerc · Patrick Siarry

Received: 29 July 2008 / Published online: 8 August 2009
© Springer Science+Business Media, LLC 2009

Abstract This paper presents MO-TRIBES, an adaptive multiobjective Particle Swarm Optimization (PSO) algorithm. Metaheuristics have the drawback of being very dependent on their parameter values: performance is strongly related to the fitting of the parameters, and such tuning is usually a lengthy, time-consuming and delicate process. The aim of this paper is to present and evaluate MO-TRIBES, an adaptive algorithm designed for multiobjective optimization that avoids the parameter-fitting step. A global description of TRIBES and a comparison with other algorithms are provided. Using an adaptive algorithm means that adaptation rules must be defined: the swarm's structure and the displacement strategies of the particles are modified during the process according to the behavior of the tribes. The final solutions are chosen using the Pareto dominance criterion, and rules based on the crowding distance are incorporated in order to maintain diversity along the Pareto front. Preliminary simulations are provided and compared with the best known algorithms. These results show that MO-TRIBES is a promising alternative for tackling multiobjective problems without the constraint of parameter fitting.

Keywords Particle swarm optimization · Parameter-free · Pareto dominance · Crowding distance · Adaptive

Y. Cooren · M. Clerc · P. Siarry (✉)
Laboratoire Images, Signaux et Systèmes Intelligents, LiSSi, E.A. 3956, Université de Paris 12,
61 avenue du Général de Gaulle, 94010 Créteil, France
e-mail: siarry@univ-paris12.fr

Y. Cooren
e-mail: cooren@univ-paris12.fr

M. Clerc
e-mail: maurice.clerc@writeme.com


1 Introduction

Problems with multiple objectives arise in a great variety of real-life optimization problems. In these problems, several conflicting objectives must be optimized, which means that there is no single solution. Instead, the aim is to find good "trade-off" solutions that represent the best possible compromises among the objective functions.

The complexity of such problems has led researchers to call for new approaches. Over the last years, metaheuristics [10, 16] have attracted great interest. Metaheuristics are a family of stochastic optimization algorithms which find approximate solutions for single-objective or multiobjective problems. Among these metaheuristics, MultiObjective Evolutionary Algorithms (MOEA) have been found successful in solving multiobjective optimization problems [11, 13, 28, 52]. Another technique that has been adopted in recent years for dealing with multiobjective optimization problems is Particle Swarm Optimization (PSO) [22], which is precisely the approach adopted in the work reported in this paper. PSO is basically designed for continuous single-objective optimization, but several attempts have been made to design PSO algorithms for continuous multiobjective optimization [8, 19, 39]. A good survey on PSO for multiobjective optimization is available in [40].

PSO and, more generally, metaheuristics are algorithms which require the tuning of a set of parameters. For example, in the case of genetic algorithms, the size of the population and the mutation rate are two of the parameters of the algorithm. The performance of a given algorithm is strongly related to the values given to its parameter set [43]. Hence, the parameters of the algorithm must be carefully tuned in order to obtain optimal performance. Evidently, the time spent finding the "optimal" parameter set increases with the number of parameters. However, there are applications where the user cannot afford to spend a lot of time determining an optimal set of free parameters on a trial-and-error basis. If the objective function values are obtained through a slow experimental process, conducting repeated tests with different parameter sets becomes impractical. Hence, determining the optimal set of parameter values in a reasonably short time requires a good knowledge of the algorithm. Dynamic problems [20] may also introduce difficulties: in such cases, the hypersurface of the objective function may change too quickly, and the user may not have enough time to modify the parameters of the algorithm optimally. Considering these facts, reducing the number of "free" parameters is an interesting option. In many cases, the loss of efficiency in the process is more than balanced by the time gained during the experiments. Nevertheless, it must be pointed out that, for a given problem, an algorithm using carefully and manually tuned parameters will generally be more efficient than an algorithm with automatically defined parameters.

An adaptive algorithm is one whose parameter values are automatically changed according to previously found results. Several adaptive metaheuristics already exist for single-objective optimization: PSO [44, 45, 48–50], genetic algorithms [29, 41, 42], ant colony algorithms [4, 15, 18], tabu search [2] and simulated annealing [21]. Adaptive methods also exist for multiobjective problems: PSO [3, 27] and evolutionary algorithms [1, 14, 23, 35]. However, none of these algorithms is entirely adaptive, that is, there is still scope to define and adapt


more parameters. For example, in [45], Suganthan proposed to modify the size of each particle's neighborhood dynamically over time, but no rule was defined to adapt the swarm's size, the inertia weight or the acceleration coefficients. The ideal would hence be to design algorithms that do not need any human intervention. An adaptive algorithm with no parameter to tune can be seen as a "black box", where the user only needs to define the problem and run the process. Such systems already exist; an example is the genetic algorithm proposed by Sawai and Adachi [41]. Clerc [7] has proposed a similar adaptive PSO variant called TRIBES, in which the user can only define and/or add new adaptation rules. The present paper defines the characteristics of a multiobjective version of TRIBES, called MO-TRIBES.

The remainder of this paper is organized as follows: Sect. 2 provides some basic concepts on multiobjective optimization required to make the paper self-contained, Sect. 3 is an introduction to the PSO strategy and to the TRIBES algorithm, and Sect. 4 presents the MO-TRIBES algorithm. Numerical results and comparisons are provided in Sect. 5. Conclusions are given in Sect. 6.

2 Basic concepts on multiobjective optimization

We are interested in solving problems of the type:

Minimize f⃗(x⃗) = (f1(x⃗), f2(x⃗), ..., fk(x⃗))  (1)

where x⃗ = (x1, x2, ..., xD) is the vector of decision variables and fj : R^D → R, j = 1, ..., k, are the objective functions. To describe the concept of optimality in which we are interested, we introduce a few definitions.

Definition 1 For two vectors a⃗, b⃗ ∈ χ ⊂ R^D, we say that a⃗ ≤ b⃗ if ai ≤ bi for i = 1, ..., D, and that a⃗ dominates b⃗ in χ (denoted by a⃗ ≺ b⃗) if a⃗ ≤ b⃗ and a⃗ ≠ b⃗. The ≺ operator defines a partial order relation on χ called Pareto dominance.

Definition 2 We say that a vector x⃗ ∈ χ ⊂ R^D is nondominated with respect to the search space χ if there does not exist another x⃗′ ∈ χ such that f⃗(x⃗′) ≺ f⃗(x⃗). Conversely, x⃗ is said to be dominated with respect to the search space χ if there exists x⃗′ ∈ χ such that f⃗(x⃗′) ≺ f⃗(x⃗).

Figure 1 shows a particular case of the dominance relation in a 2D objective space. The objective space is R^2_+; the gray dots represent the dominated solutions and the black dot is an example of a nondominated solution.

Definition 3 We say that a vector of decision variables x⃗* ∈ χ ⊂ R^D is Pareto optimal if it is nondominated with respect to χ.

Definition 4 The Pareto optimal set P* is defined by:

P* = {x⃗ ∈ χ | x⃗ is Pareto optimal}  (2)


Fig. 1 Dominance relation in a bi-objective space. The gray dots represent the dominated solutions and the black dot is an example of a nondominated solution

Fig. 2 Example of a Pareto front. The gray dots represent the dominated solutions and the black ones form the Pareto front

Definition 5 The Pareto front FP* is defined by:

FP* = {f⃗(x⃗) ∈ R^k | x⃗ ∈ P*}  (3)

Figure 2 shows a particular case of a Pareto front in a 2D objective space. The objective space is R^2_+; the gray dots represent the dominated solutions and the black ones form the Pareto front.

We thus wish to determine the Pareto optimal set of all the decision variable vectors.


3 Particle swarm optimization and TRIBES algorithm

3.1 Basic particle swarm optimization

PSO is a simple algorithm, easy to implement. The simplicity of PSO implies that the algorithm is inexpensive in terms of memory requirements [7]. In recent years, PSO has become very popular in the domain of optimization because of these favorable characteristics.

PSO is a population-based algorithm. It starts with a random initialization of a swarm of particles. Each particle is modeled by its position in the search space and its corresponding velocity. In the global version of the algorithm, at each time step, each particle adjusts its position and velocity as functions of its previous velocity, its best location and the best location found by the entire swarm during all past time steps. In the local version, instead of the location of the best particle in the swarm, the best particle among its neighbors is considered. The neighborhood of a given particle can be chosen using either a fixed topology, a time-varying topology or a random topology. Although it can be slower, the local version of PSO gives better results than the global one [7]. Indeed, each individual is influenced not only by its own experience, but also by the experience of other particles.

In a D-dimensional search space, the position and the velocity of the ith particle can be represented as X⃗i = (xi,1, xi,2, ..., xi,D) and V⃗i = (vi,1, vi,2, ..., vi,D), respectively. Each particle has its own best location p⃗i = (pi,1, pi,2, ..., pi,D), which corresponds to the best location reached by the ith particle until time t. The global best location is denoted by g⃗ = (g1, g2, ..., gD), which represents the best location ever reached by the entire swarm during all past time steps.

From time t to time t + 1, each velocity is updated using the following equation:

vi,j(t + 1) = w · vi,j(t) + c1 · r1 · (pi,j − xi,j(t)) + c2 · r2 · (gj − xi,j(t)), j ∈ {1, ..., D}  (4)

where w is a constant called the inertia weight, c1 and c2 are constants called acceleration coefficients, and r1 and r2 are two independent random numbers uniformly distributed in [0, 1], drawn for each dimension at each time step. w controls the influence of the previous direction of displacement, c1 controls the cognitive behavior of the particle and c2 controls the influence of the swarm on the particle's behavior. In the original version of PSO, the value of each component of V⃗i was clamped within the range [−Vmax, Vmax]. Velocity clamping aims at controlling excessive moves of the particles outside the search space.

The computation of the position at time t + 1 is derived from (4) using:

xi,j(t + 1) = xi,j(t) + vi,j(t + 1), j ∈ {1, ..., D}  (5)
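Update rules (4) and (5) can be sketched as follows; the default values of w, c1 and c2, and the function name itself, are illustrative choices of ours, not values prescribed by the paper:

```python
import random

def update_particle(x, v, p, g, w=0.7, c1=1.5, c2=1.5, vmax=None):
    """One inertia-weight PSO step for a single particle, cf. (4)-(5).
    x, v: current position and velocity; p: personal best; g: global best."""
    new_x, new_v = [], []
    for j in range(len(x)):
        # r1, r2 drawn independently for each dimension, as in (4)
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (p[j] - x[j]) + c2 * r2 * (g[j] - x[j])
        if vmax is not None:            # optional velocity clamping
            vj = max(-vmax, min(vmax, vj))
        new_v.append(vj)
        new_x.append(x[j] + vj)         # position update (5)
    return new_x, new_v
```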

In [5], Clerc and Kennedy showed that the convergence of PSO may be ensured by the use of a constriction factor. Using the constriction factor frees us from defining Vmax, and also ensures a good balance between intensification and diversification. In


this case, (4) becomes:

vi,j(t + 1) = K · (vi,j(t) + φ1 · r1 · (pi,j − xi,j(t)) + φ2 · r2 · (gj − xi,j(t))), j ∈ {1, ..., D}  (6)

with:

K = 2 / (φ − 2 + √(φ² − 4φ)), φ = φ1 + φ2, φ > 4  (7)

The convergence characteristics of the system can be controlled by φ. In fact, Clerc and Kennedy [5] demonstrated that the system behavior can be controlled so that it exhibits the following features:

– the system does not diverge in a real value region and can finally converge to an equilibrium state,

– the system can search different regions efficiently by avoiding premature convergence.

It has been shown mathematically that constriction is a sufficient condition for convergence [5]. Although this methodology ensures convergence, it is not possible to guarantee that the algorithm will converge to the global optimum. Other PSO-inspired methods that ensure convergence of the algorithm to an equilibrium state exist [37, 46, 51].
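The constriction coefficient of (7) is easy to compute; the sketch below uses the widely cited setting φ1 = φ2 = 2.05 as a default (that particular value is not taken from this paper):

```python
import math

def constriction(phi1=2.05, phi2=2.05):
    """Constriction coefficient K of Clerc and Kennedy, cf. (7).
    Requires phi = phi1 + phi2 > 4."""
    phi = phi1 + phi2
    if phi <= 4:
        raise ValueError("phi must exceed 4")
    return 2.0 / (phi - 2.0 + math.sqrt(phi * phi - 4.0 * phi))
```

With φ1 = φ2 = 2.05 this gives K ≈ 0.7298, the value most often seen in the constricted-PSO literature.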

Standard PSO procedure can be summarized through Algorithm 1.

Algorithm 1 A standard algorithm employed for PSO
Initialize a population of particles with random positions and velocities
For each individual i, p⃗i is initialized to X⃗i
Evaluate the objective function for each particle and compute g⃗
Repeat
    Update the velocities and the positions of the particles
    Evaluate the objective function for each individual
    Compute the new p⃗i and g⃗
Until the stopping criterion is met

If the optimum value is known a priori, a preset "acceptable" error can be defined as a stopping criterion. If not, it is common to stop after a maximum "reasonable" number of evaluations of the objective function that finds a "good enough" solution. However, depending on the problem under consideration or the tests performed, many other criteria may be used (maximal number of iterations, maximal execution time, etc.).
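As an illustration, Algorithm 1 can be sketched as a minimal global-best PSO; the parameter values, bounds and the sphere test function below are our own choices for the sketch, not settings from the paper:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal global-best PSO following the structure of Algorithm 1,
    minimizing f over [lo, hi]^dim."""
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal bests
    g = min(P, key=f)[:]                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j] + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (g[j] - X[i][j]))
                X[i][j] += V[i][j]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        g = min(P, key=f)[:]
    return g
```

For instance, `pso(lambda x: sum(xi * xi for xi in x), dim=2)` drives the swarm towards the origin, the minimum of the sphere function.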

3.2 TRIBES

As said in the Introduction, this study deals with algorithms comprising a reduced number of "free" parameters, i.e. parameters to be fitted by the user. In such a framework, the word "parameter" may have two meanings:


– "parameter": every component of the algorithm; generally numerical values, but it can also be a probability distribution, a strategy, a topology of information links, etc.

– "user-parameter": any "parameter" of the algorithm that the user may be led to modify according to the treated problem.

Throughout this paper, the word "parameter" is used in the sense of "user-parameter". This section briefly presents TRIBES; for more details, TRIBES is completely described in Clerc's book [7]. TRIBES has been successfully applied to several real-life problems, e.g. the flow shop scheduling problem [34], the modeling of UMTS radio networks [31], and image segmentation [30].

3.2.1 Swarm’s structure

In TRIBES, the swarm is structured in different "tribes" of variable sizes. The aim is to simultaneously explore several promising areas, generally local optima, and to exchange results among all the tribes in order to find the global optimum. Such a network can be compared to those used in other multi-swarm algorithms [32, 36]. This implies two different types of communication: intra-tribe communication and inter-tribe communication.

Each tribe is composed of a variable number of particles. Relationships between particles in a tribe are similar to those defined in basic PSO: each particle of a tribe stores the best location it has found so far and knows the best (and the worst) particle of the tribe, that is, the particle which has found the best (and the worst) location in the search space. This is termed intra-tribe communication.

Even if each tribe is able to find a local optimum, a global decision must be made to determine which of these optima is the best one. Each tribe communicates with all the other tribes; the communication between any two tribes is made through the best particles of the tribes. This is termed inter-tribe communication.

The most time-consuming part of a PSO algorithm is the evaluation of the objective function. It is therefore important to carry out as few evaluations of the objective function as possible. Consequently, particles are removed from the swarm as soon as possible, in the hope that the removals will not affect the final result. Thus, if a tribe behaves well, its worst particle is considered useless and is removed from the swarm. Conversely, if some tribes perform badly, new particles are generated, forming a new tribe, and the "bad" tribes will try to use the information provided by these new particles to improve their performance.

Practically, we need to set up rules that modify the swarm's structure. For this purpose, quality qualifiers are defined for each particle and each tribe. A particle is termed good if it has just improved its best performance, and neutral if it has not. In addition to this qualitative assessment (qualitative because it is based not on values but on an order relation), the best and the worst particles are defined within the tribe framework. We also define good and bad tribes as follows: a tribe is declared bad if none of its particles has improved its best location during the last iteration. If at least one of the particles of the tribe has improved its


Fig. 3 Intra-tribe and inter-tribecommunication

best location during the last iteration, the tribe is declared either good or bad with a probability of 0.5.

Removal of a particle must occur in a good tribe, and the removed particle should obviously be the worst one. The process of adding particles is quite similar to that of removing particles: each bad tribe generates particles, and all the generated particles form a new tribe. The number of particles generated by a single bad tribe is defined by (8), which was empirically determined by Clerc [6] and validated on numerous problems.

NBparticles = Max(2, ⌊(9.5 + 0.124 · (D − 1)) / tribeNb⌋)  (8)
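Equation (8) can be sketched as a one-line function; the grouping of the fraction follows our reading of (8), and the function name is ours:

```python
import math

def particles_to_generate(dim, tribe_nb):
    """Number of particles a single bad tribe generates, cf. (8):
    Max(2, floor((9.5 + 0.124 * (dim - 1)) / tribe_nb))."""
    return max(2, math.floor((9.5 + 0.124 * (dim - 1)) / tribe_nb))
```

Note that the floor and the outer Max(2, ·) together guarantee that a bad tribe always contributes at least two new particles, however many tribes already exist.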

Details about the removal and generation processes are available in [7]. To summarize, each particle is informed by itself (its best position so far, p⃗), by all the particles of its tribe (called internal informers) and, if the particle is a "shaman" (i.e., the best particle of a tribe), by the "shamans" of the other tribes (called external informers). All these positions are called the "informers". The best informer of a particle is the informer for which the value of the objective function is the lowest (resp. highest) in case of minimization (resp. maximization). So, the swarm is composed of a related network of tribes, and each tribe is a dense network of particles. Figure 3 illustrates this idea: arrows symbolize inter-tribe communications and lines symbolize intra-tribe communications. Black particles symbolize the shamans of the different tribes. This structure must be generated and modified automatically, by means of creation, evolution, and removal of the particles.

In the beginning, the swarm is composed of only one particle that represents a single tribe. If, during the first iteration, this particle does not improve its location, new ones are created, forming a second tribe. During the second iteration, the same process is repeated and continues in subsequent iterations. The swarm's size will continue to grow until promising areas are found. As the swarm grows, the time elapsed between two consecutive adaptations grows too, and thus the swarm's exploratory capability grows. Adaptations will be more and more spaced in time.


Then, the swarm has more and more chances to find a good solution between two consecutive adaptations. On the other hand, once a promising area is found, each tribe gradually removes its worst particle. Ideally, when convergence is confirmed, each tribe is reduced to a single particle.

3.2.2 Swarm’s behavior

The second way of adapting the swarm is to choose the strategy of displacement of each particle according to its recent past. This enables a particle with a good behavior to have a greater scope of exploration, while a special strategy, comparable to a local search, is defined for very good particles. The algorithm thus chooses the most appropriate displacement strategy for each particle.

There are three possible ways to describe the latest change in a particle's status: deterioration, status quo and improvement, that is, the current location of the particle is worse than, equal to or better than its last position, respectively. These three statuses are denoted by the following symbols: − for deterioration, = for status quo and + for improvement. The history of a particle includes the two last variations of its performance. For example, an improvement followed by a deterioration is denoted by (+ −). In the multiobjective case, + means that the current position dominates the last position, − signifies that the current position is dominated by the last position and = means that the current position neither dominates nor is dominated by the last position.

There are nine possible histories, but they are simply gathered into three groups. The three strategies used are defined in Table 1. Let us denote by p⃗ the best location of the particle and by g⃗ the best position of the informers of the particle. In the case of a single-objective problem, λ is the objective function of the problem. In the case of a multiobjective problem, λ is defined as:

λ(x⃗) = (1/k) · Σ_{i=1}^{k} fi(x⃗) / max_{j∈[1:k]}(fj(x⃗))  (9)
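Assuming strictly positive objective values, so that the normalization by the largest objective is well defined, the aggregation (9) can be sketched as follows (the function name is ours):

```python
def aggregated_objective(fx):
    """Aggregation lambda of an objective vector fx = (f1(x), ..., fk(x)),
    cf. (9): the mean of the objectives, each normalized by the largest
    objective value at the same point."""
    m = max(fx)                       # assumed > 0
    return sum(fi / m for fi in fx) / len(fx)
```

By construction the result lies in (0, 1], and equals 1 exactly when all objectives take the same value.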

aleasphere(Hp) is a point uniformly chosen in the hypersphere with center p⃗ and radius ‖p⃗ − g⃗‖, and aleasphere(Hg) is a point uniformly chosen in the hypersphere with center g⃗ and radius ‖p⃗ − g⃗‖.

aleanormal(gj − xj, ‖gj − xj‖) is a point randomly chosen with a Gaussian distribution with center gj − xj and standard deviation ‖gj − xj‖.
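The two sampling operators can be sketched with standard techniques; the paper does not specify how they are implemented, so the method below (direction from a Gaussian vector, radius drawn as U^(1/d), which gives a uniform point in a d-dimensional ball) is one common choice:

```python
import math
import random

def aleasphere(center, radius):
    """Uniform random point in the hypersphere of given center and radius."""
    d = len(center)
    u = [random.gauss(0.0, 1.0) for _ in range(d)]   # isotropic direction
    norm = math.sqrt(sum(ui * ui for ui in u))
    r = radius * random.random() ** (1.0 / d)        # uniform in volume
    return [ci + r * ui / norm for ci, ui in zip(center, u)]

def aleanormal(center, std):
    """Scalar Gaussian sample with the given center and standard deviation."""
    return random.gauss(center, std)
```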

3.2.3 TRIBES algorithm

Algorithm 2 shows a pseudo-code that summarizes TRIBES. g⃗i is the best informer of the ith particle and the p⃗'s are the best locations of the particles. NL is the number of information links at the last swarm adaptation and n is the number of iterations since the last swarm adaptation.


Table 1 Strategies of displacement

Gathered statuses: (= +) (+ +)
Strategy: local by independent Gaussians
Equation: xj = gj + aleanormal(gj − xj, ‖gj − xj‖), j ∈ {1, ..., D}  (10)

Gathered statuses: (+ =) (− +)
Strategy: disturbed pivot
Equation: X⃗ = c1 · aleasphere(Hp) + c2 · aleasphere(Hg),
with c1 = λ(p⃗)/(λ(p⃗) + λ(g⃗)), c2 = λ(g⃗)/(λ(p⃗) + λ(g⃗)),
then b = N(0, (λ(p⃗) − λ(g⃗))/(λ(p⃗) + λ(g⃗))) and X⃗ = (1 + b) · X⃗  (11)

Gathered statuses: (− −) (= −) (+ −) (− =) (= =)
Strategy: pivot
Equation: X⃗ = c1 · aleasphere(Hp) + c2 · aleasphere(Hg),
with c1 = λ(p⃗)/(λ(p⃗) + λ(g⃗)), c2 = λ(g⃗)/(λ(p⃗) + λ(g⃗))  (12)

Algorithm 2 TRIBES procedure
Initialize a population of particles with random positions
For each individual i, p⃗i is initialized to X⃗i
Evaluate the objective function for each particle and compute g⃗i
Repeat
    Determine the statuses of all particles
    Choose the displacement strategies
    Update the positions of the particles
    Evaluate the objective function for each particle
    Compute the new p⃗i and g⃗i
    If n < NL
        Determine the qualities of the tribes
        Perform the swarm's adaptations (adding/removing particles, reorganizing the information network)
        Compute NL
    End If
Until the stopping criterion is met

4 MO-TRIBES

4.1 From single-objective to multiobjective optimization

PSO was originally designed to solve single-objective problems; applying it to multiobjective problems means that the original scheme must be modified. Instead of finding a single optimal solution, there are three main goals to achieve [52]:

– Find as many nondominated solutions as possible,

– Minimize the distance between the approximated Pareto front found by the algorithm and the true Pareto front,


– Maximize the spread of the solutions found, i.e. the nondominated solutions must be distributed as uniformly as possible along the Pareto front.

To achieve these three goals, the original scheme of PSO, and by extension that of TRIBES, must be modified, considering the following issues:

– How must the informers of a given particle (which give p⃗ and g⃗) be chosen in order to give preference to nondominated solutions?

– How can nondominated solutions be retained such that they are well spread along the Pareto front?

– How can diversity be maintained in the swarm in order to avoid convergence towards a single solution?

The nondominated solutions found during the process are stored in an external archive, as in numerous other methods [40]. Once a nondominated solution is found, its position is stored in the external archive; at the end of the optimization process, the external archive contains the approximation of the Pareto front found by the algorithm. Moreover, in order to favor nondominated solutions, the set of informers of a given particle is chosen using the external archive. Considering that, if k ≥ 2, order relations always give preference to some objective, the concept of informer in multiobjective optimization is different from that used in the single-objective case. The new definition is given in Sect. 4.4.

4.2 Maintaining the diversity in the swarm

As said in Sect. 4.1, the algorithm must not converge towards a single solution. Maintaining diversity in the swarm is thus a crucial point, since it permits approximating the Pareto front as precisely as possible.

In the case of single-objective problems, PSO is known to converge very quickly, even if it does not converge towards the global optimum [47]. In other words, few iterations are necessary to reach the final solution. In MO-TRIBES, this drawback of PSO is used to maintain the diversity in the swarm: if, after a certain number of iterations, the particles do not make significant displacements, the whole swarm is randomly reinitialized, so that new areas of the search space are explored. In practice, if no solution has been added to the archive between two adaptations of the algorithm, the swarm is reinitialized.

4.3 Archiving techniques

As seen in Sect. 4.1, the external archive contains the nondominated solutions found during the optimization process. This implies that the archive may be large at the end of the process. However, if the archive grows too large, its update may become too complex and, hence, too long to execute. Thus, mainly for practical reasons, the size of the archive is bounded. Bounding the archive size means that two new rules must be added to MO-TRIBES: a rule to decide which nondominated solutions to retain once the archive is full, and a rule to adaptively set the size of the archive.


Algorithm 3 Archive's size processing
If run = 0
    archiveSize = ⌈ek⌉
Else
    archiveSize = archiveSize + ⌈10 · ln(1 + nDomPrev)⌉
End If

Fig. 4 Crowding distance in a two-dimensional objective space

The size of the archive is determined using Algorithm 3. The parameter k represents the number of objective functions, nDomPrev is the number of nondominated solutions found since the last modification of the archive's size, and run is the number of reinitializations of the swarm since the beginning of the process. The two equations given in this algorithm have been empirically determined.

Algorithm 3 shows that the size of the archive is initialized as a function of the number of objective functions. Moreover, a learning mechanism is added if the swarm is initialized more than once: the size of the archive is adapted according to the number of nondominated solutions found during the previous run.

The diversity of the nondominated solutions stored in the archive is maintained using a criterion based on the crowding distance [13]. The crowding distance of the ith element of the archive estimates the size of the largest cuboid enclosing the point i without including any other point of the archive. The idea is to maximize the crowding distance of the particles in order to obtain a Pareto front spread as uniformly as possible. In Fig. 4, the crowding distance of the ith solution of the archive (black dots) is the average side length of the cuboid shown by the dashed box. The crowding distances of the boundary points of the Pareto front ("0" and "1" in Fig. 4) are set to ∞, so that they are always selected.
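The crowding distance can be sketched as follows, for a front given as a list of objective vectors; boundary points receive an infinite distance, as described above (this is the NSGA-II-style computation; the function name is ours):

```python
def crowding_distances(front):
    """Crowding distance of each objective vector in `front`,
    summed over the normalized per-objective gaps between neighbors."""
    n = len(front)
    if n <= 2:
        return [float("inf")] * n          # every point is a boundary point
    k = len(front[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        lo, hi = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundaries
        if hi == lo:
            continue                        # degenerate objective, skip
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][m]
                        - front[order[pos - 1]][m]) / (hi - lo)
    return dist
```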

It is to be noted that the crowding distance only works for two- or three-objective spaces [24]; consequently, another criterion must be used if k > 3.

The archive is updated as explained in Algorithm 4. tribeNb is the number of tribes in the swarm, tribe[i].explorerNb is the number of particles in the ith tribe,


tribe[i].particle[j] is the jth particle of the ith tribe, nonDomCtr is the number of nondominated solutions stored in the archive and archiveSize is the size of the archive.

Algorithm 4 Archive's update
For i = 1 to tribeNb
    For j = 1 to tribe[i].explorerNb
        If tribe[i].particle[j] dominates some elements of the archive
            Delete the dominated elements
        End If
        If tribe[i].particle[j] is nondominated
            If archiveSize ≠ nonDomCtr
                Add tribe[i].particle[j] to the archive
            Else
                Compute the crowding distances
                Sort the elements of the archive by crowding distance
                Replace the particle with the lowest crowding distance by tribe[i].particle[j]
            End If
        End If
    End For
End For
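A hedged sketch of one archive update in the spirit of Algorithm 4: the `dominates` and `crowding` helpers are assumed supplied by the caller, and replacing the member with the lowest crowding distance follows the last branch above (all names are ours):

```python
def update_archive(archive, candidate, max_size, crowding, dominates):
    """Insert `candidate` into `archive` (a list of objective vectors),
    keeping only nondominated members and at most `max_size` of them."""
    # Delete archive members dominated by the candidate
    archive[:] = [a for a in archive if not dominates(candidate, a)]
    # Keep the candidate only if it is nondominated and new
    if any(dominates(a, candidate) for a in archive) or candidate in archive:
        return archive
    if len(archive) < max_size:
        archive.append(candidate)
    else:
        d = crowding(archive)
        worst = min(range(len(archive)), key=lambda i: d[i])
        archive[worst] = candidate     # replace the most crowded member
    return archive
```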

4.4 Choosing the informers

The solution of a multiobjective problem consists of a set of equally good solutions. The concept of informer in MO-TRIBES is therefore different from the one in TRIBES.

For a given particle X⃗, the first informer is p⃗, the cognitive informer, which represents the influence of the particle's own experience on its behavior. At time t, p⃗ is updated if X⃗(t), i.e. the position of the particle at time t, dominates p⃗. Thus, for the cognitive behavior of the particle, the dominance criterion is used to evaluate the quality of the particle.

The second informer of the particle is g⃗, the social informer, which represents the influence of the social experience on the particle's behavior. In the single-objective case, the social informer g⃗ is the shaman of the particle's tribe. In the multiobjective case, the notion of "shaman" is different, because the notion of "best position ever found by the tribe" cannot be defined; the shaman of a given tribe is then a particle randomly chosen in the tribe at each iteration. The aim of the shaman is to act as a leader of the tribe through a communication process with the external archive. In practice, the choice of g⃗ depends on whether X⃗ is a shaman or not. If X⃗ is not the shaman of its tribe, g⃗ is set to the best position reached by the shaman. If X⃗ is the shaman of its tribe, g⃗ is randomly chosen in the archive. In this way, a preference is given to nondominated particles, and choosing the social informer of the shaman randomly in the archive helps maintain diversity in the swarm.

Algorithm 5 summarizes the informing process at time t. tribe[i].particle[j] is the jth particle of the ith tribe, tribe[i].particle[j].p is the cognitive informer of


tribe[i].particle[j], tribe[i].particle[j].g is the social informer of tribe[i].particle[j], tribe[i].particle[shaman] is the shaman of the ith tribe and U(archive) is a position randomly chosen in the archive.

Algorithm 5 Informing process
For i = 1 to tribeNb
    For j = 1 to tribe[i].explorerNb
        If tribe[i].particle[j] dominates tribe[i].particle[j].p
            tribe[i].particle[j].p = tribe[i].particle[j]
        End If
        If tribe[i].particle[j] is the shaman of tribe i
            tribe[i].particle[j].g = U(archive)
        Else
            tribe[i].particle[j].g = tribe[i].particle[shaman].p
        End If
    End For
End For
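For illustration, the informing process of Algorithm 5 can be sketched in Python. The tribe/particle containers, field names and the strict-dominance test below are assumptions made for this sketch, not the authors' implementation; minimization of all objectives is assumed.

```python
import random

def dominates(a, b):
    """Strict Pareto dominance on minimization objectives: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def inform(tribes, archive):
    """One pass of the informing process (sketch of Algorithm 5).

    Each particle is a dict with keys 'x' (objective vector of the current
    position), 'p' (cognitive informer) and 'g' (social informer); each tribe
    is a dict with a 'particles' list. A shaman is redrawn at each iteration.
    """
    for tribe in tribes:
        # the shaman is a particle randomly chosen in the tribe at each iteration
        shaman = random.randrange(len(tribe['particles']))
        tribe['shaman'] = shaman
        for j, particle in enumerate(tribe['particles']):
            # cognitive update: p is replaced only if the current position dominates it
            if dominates(particle['x'], particle['p']):
                particle['p'] = particle['x']
            # social update: the shaman listens to the archive, the others to the shaman
            if j == shaman:
                particle['g'] = random.choice(archive)
            else:
                particle['g'] = tribe['particles'][shaman]['p']
```

Drawing the shaman's g⃗ from the archive is what couples every tribe to the set of nondominated solutions, while the other particles only see their shaman's best position.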

4.5 MO-TRIBES algorithm

Algorithm 6 shows a pseudo-code which summarizes the MO-TRIBES process. NL is the number of information links at the last swarm adaptation and n is the number of iterations since the last swarm adaptation.

5 Numerical results and discussion

5.1 Testing procedure

Many test functions have been proposed to evaluate multiobjective optimization algorithms [17, 25, 26]. Most of them were designed around problem features that make it difficult to detect the optimal Pareto front and to maintain diversity in the current nondominated front. The functions used in this paper are detailed in Table 2.

Results of MO-TRIBES are compared with those of two well-known algorithms: NSGA-II [13] and a multiobjective PSO algorithm called MOPSO [38]. NSGA-II is used with a population of size 100, a crossover probability of 0.8 and a mutation probability of 0.1. MOPSO is used with a population of size 100, an inertia weight w equal to 0.4 and acceleration coefficients c1 and c2 equal to 1. For both algorithms, the size of the external archive is bounded by 100. MO-TRIBES has no parameter to set. In its code, for practical reasons, the maximal size of the external archive is also bounded by 100, but this bound does not influence performance, since the size of the archive is regulated by the rules defined in Algorithm 3 and always remains smaller than 100. In MO-TRIBES, NSGA-II and MOPSO, the populations are randomly initialized using a uniform distribution.


Algorithm 6 MO-TRIBES procedure
Compute the size of the archive
Initialize a population of particles with random positions and velocities
For each individual i, p⃗i is initialized at X⃗i
Evaluate the objective functions for each particle and compute g⃗i
Insert nondominated positions in the archive (cf. Sect. 2)
While the stopping criterion is not met
    Choose the displacement strategies (cf. Sect. 3.2.2)
    Update the velocities and the positions of the particles (cf. (10) to (12))
    Evaluate the objective functions for each particle
    Compute the new p⃗i and g⃗i (cf. Sect. 4.4)
    Update the archive (cf. Sect. 4.3)
    If n < NL
        Swarm adaptations (adding/removing particles, reorganizing the information network, reinitializing the swarm, etc.) (cf. Sects. 3 and 4.2)
        If nDomPrev = 0
            Restart the algorithm (cf. Sect. 4.2)
            Update the archive's size (cf. Sect. 4.3)
        End If
        Computation of NL
    End If
End While

The stopping criterion of the algorithms is reaching 50 000 evaluations of the objective functions.

5.2 Performance metrics

In order to compare the algorithms in a quantitative way, several performance metrics are used [33].

Coverage metric  The coverage metric C(U, V) measures the ratio of solutions stored in the archive V that are dominated by solutions stored in the archive U. C(U, V) is defined by (13):

C(U, V) = |{b ∈ V | ∃a ∈ U, a dominates b}| / |V|    (13)

where the |.| operator gives the cardinality of the given set. The value C(U, V) = 1 means that all decision vectors in V are weakly dominated by U. Conversely, C(U, V) = 0 represents the situation where none of the points in V is weakly dominated by U. Table 3 gives the mean couples (C(A_MO-TRIBES, A_X), C(A_X, A_MO-TRIBES)), where X symbolizes either NSGA-II or MOPSO, over 25 executions of the algorithms. This means that, if (C(A_MO-TRIBES, A_X), C(A_X, A_MO-TRIBES)) is close to (1, 0), MO-TRIBES provides a better approximation of the Pareto front than algorithm X. Inversely, if (C(A_MO-TRIBES, A_X),

394 Y. Cooren et al.

Table 2 Test functions

Deb [12] (D = 2, k = 2, search space [0.1, 1]^2):
    f1(x⃗) = x1
    f2(x⃗) = [2 − e^(−((x2 − 0.2)/0.004)^2) − 0.8·e^(−((x2 − 0.6)/0.4)^2)] / x1

ZDT1 [52] (D = 30, k = 2, search space [0, 1]^30):
    f1(x⃗) = x1
    g(x⃗) = 1 + 9 Σ_{i=2}^{D} x_i / (D − 1)
    h(x⃗, f1, g) = 1 − sqrt(f1(x⃗)/g(x⃗))
    f2(x⃗) = g(x⃗) · h(x⃗, f1, g)

ZDT2 [52] (D = 30, k = 2, search space [0, 1]^30):
    f1(x⃗) = x1
    g(x⃗) = 1 + 9 Σ_{i=2}^{D} x_i / (D − 1)
    h(x⃗, f1, g) = 1 − (f1(x⃗)/g(x⃗))^2
    f2(x⃗) = g(x⃗) · h(x⃗, f1, g)

ZDT3 [52] (D = 30, k = 2, search space [0, 1]^30):
    f1(x⃗) = x1
    g(x⃗) = 1 + 9 Σ_{i=2}^{D} x_i / (D − 1)
    h(x⃗, f1, g) = 1 − sqrt(f1(x⃗)/g(x⃗)) − (f1(x⃗)/g(x⃗)) · sin(10π f1(x⃗))
    f2(x⃗) = g(x⃗) · h(x⃗, f1, g)

ZDT6 [52] (D = 10, k = 2, search space [0, 1]^10):
    f1(x⃗) = 1 − e^(−4x1) sin^6(6π x1)
    g(x⃗) = 1 + 9 (Σ_{i=2}^{D} x_i / (D − 1))^0.25
    h(x⃗, f1, g) = 1 − (f1(x⃗)/g(x⃗))^2
    f2(x⃗) = g(x⃗) · h(x⃗, f1, g)

MOP5 [9] (D = 2, k = 3, search space [−30, 30]^2):
    f1(x⃗) = 0.5(x1^2 + x2^2) + sin(x1^2 + x2^2)
    f2(x⃗) = (3x1 − 2x2 + 4)^2 / 8 + (x1 − x2 + 1)^2 / 27 + 15
    f3(x⃗) = 1/(x1^2 + x2^2 + 1) − 1.1·e^(−x1^2 − x2^2)

MOP6 [9] (D = 2, k = 2, search space [0, 1]^2):
    f1(x⃗) = x1
    f2(x⃗) = (1 + 10x2)·[1 − (x1/(1 + 10x2))^2 − (x1/(1 + 10x2)) sin(2π·4·x1)]
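As an example of how these benchmarks are evaluated, the ZDT1 function from Table 2 can be written directly in Python (a minimal sketch; the function name `zdt1` is ours):

```python
import math

def zdt1(x):
    """ZDT1 test function: D = 30 decision variables in [0, 1], 2 objectives.

    f1 = x1, g = 1 + 9 * sum(x2..xD) / (D - 1), f2 = g * (1 - sqrt(f1 / g)).
    """
    D = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (D - 1)
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h
```

On the Pareto-optimal front of ZDT1, g(x⃗) = 1 (i.e. x2 = … = xD = 0), so the front is f2 = 1 − sqrt(f1); for example, zdt1([0.25] + [0.0] * 29) returns (0.25, 0.5).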

Table 3 Comparison of algorithms using coverage metric

Deb ZDT1 ZDT2 ZDT3 ZDT6 MOP5 MOP6

NSGA-II (0.14, 0.18) (0.00, 0.85) (0.00, 0.35) (0.00, 0.76) (1.00, 0.00) (0.52, 0.07) (0.03, 0.03)

MOPSO (0.01, 0.27) (1.00, 0.00) (0.71, 0.05) (1.00, 0.00) (1.00, 0.00) (0.40, 0.03) (0.06, 0.01)

C(A_X, A_MO-TRIBES)) is close to (0, 1), MO-TRIBES provides a worse approximation of the Pareto front than algorithm X.
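The coverage metric of (13) is straightforward to compute. A minimal Python sketch, assuming minimization and using weak dominance (consistent with the interpretation of C(U, V) = 1 given above):

```python
def weakly_dominates(a, b):
    """Weak Pareto dominance for minimization: a is at least as good as b
    in every objective."""
    return all(x <= y for x, y in zip(a, b))

def coverage(U, V):
    """C(U, V) from (13): fraction of the points of V that are weakly
    dominated by at least one point of U."""
    return sum(1 for b in V if any(weakly_dominates(a, b) for a in U)) / len(V)
```

Note that C is not symmetric, so both C(U, V) and C(V, U) must be reported, as in Table 3.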


Table 4 Comparison of algorithms using spacing metric

Deb ZDT1 ZDT2 ZDT3 ZDT6 MOP5 MOP6

NSGA-II 0.0313 0.0096 0.0089 0.0098 0.0067 0.3515 0.0076

MOPSO 0.0236 0.0072 0.0070 0.0084 0.1567 0.2839 0.0059

MO-TRIBES 0.0188 0.0047 0.0130 0.0336 0.0870 0.2237 0.0040

Table 5 Comparison of algorithms using maximum spread metric

Deb ZDT1 ZDT2 ZDT3 ZDT6 MOP5 MOP6

NSGA-II 6.41 1.41 1.41 1.96 1.12 83.35 1.69

MOPSO 6.41 1.43 1.40 2.02 8.34 62.78 1.69

MO-TRIBES 6.41 1.43 1.41 2.05 1.22 102.33 1.69

Spacing metric  The distance-based metric S measures the distribution of the solutions stored in the archive along the estimated front. S(A) is defined by (14) and (15):

S(A) = sqrt( (1/(|A| − 1)) · Σ_{i=1}^{|A|} (d_i − d_mean)^2 )    (14)

d_i = min_{s_j ∈ A, s_j ≠ s_i} Σ_{u=1}^{k} |f_u(s_i) − f_u(s_j)|    (15)

where A is the archive, |A| the number of solutions stored in the archive, k the dimension of the objective space and d_mean the average of the d_i. The lower the spacing metric, the more evenly the solutions are spread along the Pareto front.
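Equations (14) and (15) translate directly into Python. The sketch below takes the front as a list of objective vectors; since S is the standard deviation of the nearest-neighbour Manhattan distances, a value of 0 means perfectly uniform spacing:

```python
import math

def spacing(front):
    """Spacing metric S(A) from (14)-(15).

    For each solution, d_i is the Manhattan distance in objective space to
    its nearest neighbour in the front; S is the standard deviation of the d_i.
    """
    d = []
    for i, si in enumerate(front):
        d.append(min(sum(abs(u - v) for u, v in zip(si, sj))
                     for j, sj in enumerate(front) if j != i))
    d_mean = sum(d) / len(d)
    return math.sqrt(sum((di - d_mean) ** 2 for di in d) / (len(front) - 1))
```

For instance, the equally spaced front [(0, 2), (1, 1), (2, 0)] gives S = 0, while any unevenly spaced front gives S > 0.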

Table 4 gives the mean spacing metrics for the MO-TRIBES, NSGA-II and MOPSO algorithms over 25 executions of the algorithms.

Maximum Spread metric  The Maximum Spread metric MS measures how widely the solutions stored in the archive are spread in the objective space. MS is defined by (16):

MS = sqrt( Σ_{u=1}^{k} ( max_{i=1..|A|} f_u(s_i) − min_{i=1..|A|} f_u(s_i) )^2 )    (16)

Table 5 gives the maximum spread metrics for the MO-TRIBES, NSGA-II and MOPSO algorithms over 25 executions of the algorithms. The higher the maximum spread metric, the wider the approximation of the Pareto front.
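Geometrically, (16) is the diagonal of the axis-aligned bounding box of the front in objective space, which a short Python sketch makes explicit:

```python
import math

def maximum_spread(front):
    """Maximum Spread MS from (16): Euclidean length of the diagonal of the
    bounding box of the front in objective space; higher means a wider front."""
    k = len(front[0])
    return math.sqrt(sum(
        (max(s[u] for s in front) - min(s[u] for s in front)) ** 2
        for u in range(k)))
```

For example, a bi-objective front spanning the unit square corners (0, 1) and (1, 0) has MS = sqrt(2).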

5.3 Discussion

The aim of a multiobjective algorithm is to approximate as precisely as possible the Pareto front of a given problem, while maintaining a good diversity in the solution set. Figure 5 shows the approximations of the Pareto fronts given by NSGA-II and MO-TRIBES for the problems defined in Table 2.

In Table 3, it can be seen that MO-TRIBES gives competitive results in comparison with NSGA-II, which is known as one of the best multiobjective optimization algorithms. For the ZDT6 problem, the whole set of nondominated solutions provided by MO-TRIBES dominates the one given by NSGA-II. MO-TRIBES is also far better than NSGA-II for the MOP5 problem. Conversely, for the ZDT1, ZDT2 and ZDT3 problems, MO-TRIBES seems less efficient than NSGA-II at approximating the Pareto front. This may be due to the high dimension of the search space: the lack of efficiency on high-dimensional problems may be explained by the fact that MO-TRIBES starts with only one particle, while NSGA-II starts here with 100 particles. For the Deb and MOP6 problems, the results are quite similar.

Table 3 shows that MO-TRIBES is a better multiobjective PSO algorithm than MOPSO. The MOPSO algorithm provides better results than MO-TRIBES only for the Deb problem, and similar ones for the MOP6 problem. For the other problems, MO-TRIBES gives approximations of the Pareto fronts which dominate the ones given by MOPSO.

It can be deduced from Table 4 that the nondominated solutions given by MO-TRIBES are well spread along the Pareto front. Table 5 shows that MO-TRIBES provides approximations of the Pareto fronts as satisfactory as the ones given by MOPSO and NSGA-II. Results differ only for the MOP5 problem, which can be explained by the higher dimension of the objective space.

Results given in Table 3 are confirmed by the visual results of Fig. 5. It can be seen that the only problems for which MO-TRIBES appears visually worse are ZDT1 and ZDT2. For the ZDT3 problem, the difference is less obvious. The Pareto front given by MO-TRIBES for the ZDT6 problem is better than the one given by NSGA-II. For the MOP5 problem, it appears that the nondominated solutions given by MO-TRIBES are better spread in the f2–f3 plane than the ones given by NSGA-II.

6 Conclusion and future work

This paper presented MO-TRIBES, an adaptive multiobjective Particle Swarm Optimization algorithm. Adaptive algorithms may be useful for engineers who have multiobjective optimization problems to tackle but who are not experienced users of the solving methods, since they save the time otherwise needed to search for the best set of parameter values.

MO-TRIBES is a multiobjective PSO algorithm which uses an external archive to store the nondominated solutions found all along the process. The size of this external archive is determined adaptively and the diversity in the archive is maintained using a crowding-distance-based criterion. Diversity is maintained in the swarm using multiple restarts of the algorithm. Comparisons with NSGA-II and MOPSO show that, even though its behavior is not driven by user-determined parameters, MO-TRIBES gives good approximations of the Pareto front, with a good distribution of the nondominated solutions along the approximated front.

In conclusion, MO-TRIBES is an efficient algorithm for solving multiobjective problems, which allows non-specialist users to avoid the parameter tuning step without loss of efficiency.


Fig. 5 Estimated Pareto fronts for the MO-TRIBES and NSGA-II algorithms. NSGA-II fronts are symbolized by circles and MO-TRIBES fronts by dots


As a perspective of this work, we propose a study of population growth and structural distribution. Such a study will make it possible to understand precisely how MO-TRIBES behaves and, in turn, to correct its drawbacks.

References

1. Adra, S.F., Griffin, I.A., Fleming, P.J.: An adaptive memetic algorithm for enhanced diversity. In: Parmee, I.C. (ed.) Proceedings of the Seventh International Adaptive Computing in Design and Manufacture Conference, 2006, pp. 251–254. Springer, Berlin (2006)

2. Battiti, R.: Reactive search: toward self tuning heuristics. In: Modern Heuristic Search Methods, pp. 61–83. Wiley, Hoboken (1996)

3. Bird, S., Li, X.: Adaptively choosing niching parameters in a PSO. In: Keizer, M. (ed.) Genetic and Evolutionary Computation Conference (GECCO'2006), vol. 1, pp. 3–9. ACM Press, New York (2006)

4. Chen, L., Xu, X.H., Chen, Y.X.: An adaptive ant colony clustering algorithm. In: Proceedings of the 3rd Conference on Machine Learning and Cybernetics, pp. 1387–1392. IEEE Press, Piscataway (2004)

5. Clerc, M., Kennedy, J.: The particle swarm: explosion, stability, and convergence in multidimensional complex space. IEEE Trans. Evol. Comput. 6, 58–73 (2002)

6. Clerc, M.: Binary particle swarm optimisers: toolbox, derivations, and mathematical insights (2005). https://hal.archives-ouvertes.fr/hal-00122809

7. Clerc, M.: Particle Swarm Optimization. International Scientific and Technical Encyclopaedia. Wiley, Hoboken (2006)

8. Coello Coello, C.A., Salazar Lechuga, M.: MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of 2002 Congress on Evolutionary Computation (CEC'2002), pp. 1666–1670. IEEE Press, Piscataway (2002)

9. Coello Coello, C.A., Van Veldhuisen, D., Lamont, G.: Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic, New York (2002)

10. Collette, Y., Siarry, P.: Multiobjective Optimization: Principles and Case Studies. Springer, Berlin (2003)

11. Corne, D.W., Knowles, J.D., Oates, M.J.: The Pareto envelope-based selection algorithm for multiobjective optimization. In: Proceedings of the Parallel Problem Solving from Nature VI Conference. LNCS, pp. 839–848. Springer, Berlin (2000)

12. Deb, K.: Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol. Comput. 7(3), 205–230 (1999)

13. Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A fast elitist non-dominated sorting genetic algorithm for multiobjective optimization: NSGA II. In: Proceedings of the Parallel Problem Solving from Nature Conference, PPSN VI. LNCS, pp. 849–858. Springer, Berlin (2000). http://www.lania.mx/~ccoello/NSGAII.tar.gz

14. Devireddy, V., Reed, P.: Efficient and reliable evolutionary multiobjective optimization using epsilon-dominance archiving and adaptive population sizing. In: Deb, K. et al. (eds.) Genetic and Evolutionary Computation-GECCO 2004, Proceedings of the Genetic and Evolutionary Computation Conference, Part II. Lecture Notes in Computer Science, vol. 3103, pp. 390–391. Springer, Berlin (2004)

15. Di Caro, G.: Ant colony optimization and its application to adaptive routing in telecommunications networks. PhD thesis, Université Libre de Bruxelles (2004)

16. Dréo, J., Pétrowski, A., Siarry, P., Taillard, E.: Metaheuristics for Hard Optimization: Methods and Case Studies. Springer, Berlin (2006)

17. Fonseca, C.M., Flemming, P.J.: On the performance assessment and comparison of stochastic multiobjective optimizers. In: Proceedings of the Parallel Problem Solving from Nature IV Conference. Lecture Notes in Computer Science, pp. 584–593. Springer, Berlin (1996)

18. Förster, M., Bickel, B., Hardung, B., Kókai, G.: Self-adaptive ant colony optimisation applied to function allocation in vehicle networks. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, pp. 1991–1998. ACM Press, New York (2007)

19. Hu, X., Eberhart, R.C.: Multiobjective optimization using dynamic neighborhood particle swarm optimization. In: Proceedings of 2002 Congress on Evolutionary Computation (CEC'2002), pp. 1677–1681. IEEE Press, Piscataway (2002)

20. Hu, X., Eberhart, R.C.: Adaptive particle swarm optimization: detection and response to dynamic systems. In: Proceedings of 2002 Congress on Evolutionary Computation (CEC'2002), pp. 1666–1670. IEEE Press, Piscataway (2002)

21. Ingber, L.: Adaptive simulated annealing (ASA): lessons learned. Control Cybern. 25(1), 33–54 (1996)

22. Kennedy, J., Eberhart, R.C.: Particle swarm optimisation. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948. IEEE Press, Piscataway (1995)

23. Knowles, J., Corne, D.: Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Trans. Evol. Comput. 7(2), 100–116 (2003)

24. Kukkonen, S., Deb, K.: Improved pruning of non-dominated solutions based on crowding distance for bi-objective optimization problems. In: Proceedings of the IEEE 2006 Congress on Evolutionary Computation (CEC'2006), pp. 1179–1186. IEEE Press, Piscataway (2006)

25. Kursawe, F.: A variant of evolution strategies for vector optimization. In: Proceedings of Parallel Problem Solving for Nature Conference. Lecture Notes in Computer Science, vol. 496, pp. 193–197. Springer, Berlin (1990)

26. Laumanns, M., Deb, K., Thiele, L., Zitzler, E.: Scalable test problems for evolutionary multi-objective optimization. Technical Report 112, Institut für Technische Informatik und Kommunikationsnetze, ETH Zürich, 8092 Zürich, July 2001

27. Mahfoud, M., Chen, M., Linkens, D.: Adaptive weighted particle swarm optimisation for multi-objective optimal design of alloy steels. In: Yao, X. et al. (eds.) Parallel Problem Solving from Nature, PPSN VIII. Lecture Notes in Computer Science, vol. 3242, pp. 762–771. Springer, Berlin (2004)

28. Murata, T., Ishibuchi, H.: MOGA: multi-objective genetic algorithms. In: Proceedings of the 2nd IEEE International Conference on Evolutionary Computation, pp. 289–294. IEEE Press, Piscataway (1995)

29. Murata, Y. et al.: Agent oriented self adaptive genetic algorithm. In: Proceedings of the IASTED Communications and Computer Networks, pp. 348–353. Acta Press, Calgary (2002)

30. Nakib, A., Cooren, Y., Oulhadj, H., Siarry, P.: Magnetic resonance image segmentation based on two-dimensional exponential entropy and a parameter free PSO. In: Proceedings of the 8th International Conference on Artificial Evolution. LNCS, pp. 50–61. Springer, Berlin (2007)

31. Nawrocki, M., Dohler, M., Aghvami, A.H.: Understanding UMTS Radio Network Modelling: Theory and Practice. Wiley, Hoboken (2006)

32. Niu, B., Zhu, Y., He, X., Henry, W.: MCPSO: a multi-swarm cooperative particle swarm optimizer. Appl. Math. Comput. 185(2), 1050–1062 (2005)

33. Okabe, T., Jin, Y., Senhoff, B.: A critical survey of performance indices for multi-objective optimization. In: Proceedings of the 2003 IEEE Congress on Evolutionary Computation, pp. 878–885. IEEE Press, Piscataway (2003)

34. Onwubolu, G.C., Babu, B.V.: TRIBES application to the flow shop scheduling problem. In: New Optimization Techniques in Engineering, pp. 517–536. Springer, Berlin (2004), Chap. 21

35. Parmee, I.C.: Evolutionary and Adaptive Computing in Engineering Design. Springer, Berlin (2001)

36. Parsopoulos, K.E., Tasoulis, D.K., Vrahatis, M.N.: Multiobjective optimization using parallel vector evaluated particle swarm optimization. In: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, pp. 823–828. Acta Press, Calgary (2004)

37. Peer, E.S., Van den Bergh, F., Engelbrecht, A.P.: Using neighborhoods with the guaranteed convergence PSO. In: Proceedings of the IEEE Swarm Intelligence Symposium 2003 (SIS 2003), pp. 235–242. IEEE Press, Piscataway (2003)

38. Raquel, C.R., Naval, P.C.: An effective use of crowding distance in multiobjective particle swarm optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 257–264. ACM Press, New York (2005). www.engg.upd.edu.ph/~cvmig/mopsocd.html

39. Ray, T., Liew, K.M.: A swarm metaphor for multiobjective design optimization. Eng. Optim. 34(2), 141–153 (2002)

40. Reyes-Sierra, M., Coello Coello, A.: Multiobjective particle swarm optimizers: a survey of the state-of-the-art. Int. J. Comput. Intell. Res. 2(3), 287–308 (2006)

41. Sawai, H., Adachi, S.: Genetic algorithm inspired by gene duplication. In: Proceedings of the 1999 Congress on Evolutionary Computing, pp. 480–487. IEEE Press, Piscataway (1999)

42. Schnecke, V., Vornberger, O.: An adaptive parallel genetic algorithm for VLSI-layout optimization. In: Proceedings of the 4th International Conference on Parallel Problem Solving from Nature, pp. 859–868. Springer, Berlin (1996)

43. Shi, Y., Eberhart, R.: Parameter selection in particle swarm optimization. In: Proceedings of the Seventh Annual Conference on Evolutionary Programming. LNCS, vol. 1447, pp. 591–600. Springer, Berlin (1998)

44. Shi, Y., Eberhart, R.C.: Fuzzy adaptive particle swarm optimization. In: Proceedings of 2001 Congress on Evolutionary Computation, pp. 101–106. IEEE Press, Piscataway (2001)

45. Suganthan, P.N.: Particle swarm optimisation with a neighbourhood operator. In: Proceedings of 1999 Congress on Evolutionary Computation, pp. 1958–1962. IEEE Press, Piscataway (1999)

46. Trelea, I.C.: The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf. Process. Lett. 85, 317–325 (2003)

47. Van den Bergh, F.: An analysis of particle swarm optimizers. PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa (2002)

48. Yasuda, K., Iwasaki, N.: Adaptive particle swarm optimization using velocity information of swarm. In: Proceedings of the IEEE Conference on Systems, Man and Cybernetics, pp. 3475–3481. IEEE Press, Piscataway (2004)

49. Ye, X.F., Zhang, W.J., Yang, Z.L.: Adaptive particle swarm optimization on individual level. In: Proceedings of the International Conference on Signal Processing (ICSP), pp. 1215–1218. IEEE Press, Piscataway (2002)

50. Zhang, W., Liu, Y., Clerc, M.: An adaptive PSO algorithm for real power optimization. In: Proceedings of the APSCOM (Advances in Power System Control Operation and Management) Conference, S6: Application of Artificial Intelligence Technique (Part I), pp. 302–307. IEEE Press, Piscataway (2003)

51. Zheng, Y., Ma, L., Zhang, L., Qian, J.: On the convergence analysis and parameter selection in particle swarm optimization. In: Proceedings of International Conference on Machine Learning and Cybernetics, 2003, pp. 1802–1807. IEEE Press, Piscataway (2003)

52. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8(2), 173–195 (2000)