
An Empirical Study on the Settings of Control Coefficients in Particle Swarm Optimization

N. M. Kwok, D. K. Liu, K. C. Tan and Q. P. Ha

Abstract— The effects of randomness of control coefficients in Particle Swarm Optimization (PSO) are investigated through empirical studies. The PSO is viewed as a method to solve a coverage problem in the solution space when the global-best particle is reported as the solution. Randomness of the control coefficients therefore plays a crucial role in providing an efficient and effective algorithm. Performance comparisons are made between uniform and Gaussian distributed random coefficients in adjusting particle velocities. Alternative strategies are also tested; they include: i) pre-assigned randomness through the iterations, and ii) selective hybrid random adjustment based on the fitness of the particles. Furthermore, the effect of the velocity momentum factor is compared between a constant and a random momentum. Numerical results show that the performances of the proposed variations are comparable to the conventional implementation for simple test functions. However, enhanced performance using the selective and hybrid strategy is observed for complicated functions.

I. INTRODUCTION

The particle swarm optimization (PSO) [1] is an agent

based search algorithm for difficult optimization problems.

The algorithm emulates the social behavior of living species

which is noticeable in bird flocks and fish schools [2]. Each

agent is treated as a particle that moves across the solution

space of the problem to be solved and the movements of 

the particles are governed by a set of control coefficients. The PSO, classified in the family of meta-heuristic

algorithms, has been shown to be an attractive alternative

to gradient-based optimization routines [3] where the need

for differentiable system models can be relaxed. On the

other hand, benefits are brought about by hybridizing these

approaches and are realized in the context of the differential

evolution algorithm [4] and memetic algorithms [5].

Applications of PSO can be found in a wide domain in

science and engineering practices. For example, in [6], the

PSO was applied to the design of a

magnetic device called Loney's solenoid. A cantilevered

beam was designed by using the PSO [7] in the structural

application domain. Settings for PID controller parameters

were determined for robustness against adverse operating

conditions, see [8]. The PSO had also found its application

in optimizing the economy in electricity distributions [9]. In

radio communications, a phased array was designed using

N. M. Kwok ∗, D. K. Liu and Q. P. Ha are with the ARC Centre of Excellence in Autonomous Systems (CAS), Faculty of Engineering, Uni-versity of Technology, Sydney, Broadway, NSW 2007, Australia, (∗email:[email protected]). K. C. Tan is with Dept. of Electrical and Com-puter Engineering, National University of Singapore, Singapore 119260,Republic of Singapore.

the PSO [10] for a specified radiation pattern. The project

scheduling problem with resource constraints was tackled

by employing the PSO and promising results were reported

in [11]. Furthermore, the PSO is well applicable in multi-

objective optimization problems, e.g., [12]. The successes

reported in the real-world applications are largely attributed

to the implementation simplicity and flexibility. However,

further improvements in PSO performances may be antic-

ipated by critically setting the control coefficients.

Early developments in PSO were focused on the con-

vergence of particles to the optimal solution. Control co-

efficients are incorporated to adjust the particle positions

in the context of a flying velocity. Appropriate choice of 

coefficients will drive the particles into oscillations around

the optimum [13]. The treatment of PSO in complex spaces

was discussed in [14] with attention paid to the explosion,

stability and convergence characteristics. Fur-

ther studies were conducted in [15] relating the coefficient

settings to convergence. Apart from the research work in the

PSO convergence behavior, the implementation architecture

was recently considered in [16] with investigations into the

exchange topology of information among the particles. The

evaluation of the objective function was manipulated in [17]

to avoid trapping in local optima. In general, these works have indicated that control coefficient settings are inter-related

and problem dependent.

A variety of proposals aiming at improving the PSO

performance had also been presented in the literature. Ideas

borrowed from genetic algorithms were integrated with PSO

in the context of selection [18], breeding and sub-populations

[19]. In [20], boundary conditions were imposed when parti-

cles had traveled to the limits of the solution space. Velocity

adjustments according to the expired number of iterations

were proposed in [21] which implicitly boosts convergence.

The velocity control coefficient was purposefully selected

to guarantee stability, see [22]. An inertia weight was in-

troduced in [23] to adjust the particle movements through a velocity control coefficient. Neighborhood information

was used in [24] to guide the particle movements in addition

to the global- and self-information gained by a particle. The

division of labor concept was implemented in [25], where

the movements of particles were adapted to environmental

changes.

Although the cited variations to the PSO were rationally

 justifiable, the philosophy adopted was mostly to enhance

the convergence capability. It is widely noted that the PSO

maintains a global-best candidate solution during its iter-



ations; however, the convergence to the vicinity of this

temporary candidate may hamper the discovery of a better

solution in other sectors of the solution space. In other

words, exploration of the solution space is as important as

convergence. With regard to the PSO implementation, control

coefficients followed the early developments by injecting

randomness from a uniform distribution, and this has become a

convention.

In this work, the use of alternative random distribu-

tions, e.g. the Gaussian distribution, will be investigated

through empirical studies. This is motivated by the need to

compromise between exploring and exploiting the solution

space, such that an efficient PSO can be designed. The

Gaussian distribution will be applied in randomizing the

particle velocity, the global- and self-movements of the

particles. Further implementation variations in deterministic

and selective schemes will also be investigated.

The rest of the paper is organized as follows. In Section II,

the PSO background is briefly reviewed and the motivation

of the present work is revealed. Alternative implementation

strategies are proposed in Section III with the objective of enhancing the PSO performance by a study on the coverage

capability. Experimental settings are described in Section IV

and results are presented. A conclusion is drawn in Section

V together with directions for further research.

II. PARTICLE SWARM OPTIMIZATION

 A. Mathematical Description

The particle swarm optimization algorithm can be viewed

as a stochastic search method for solving non-deterministic

optimization problems and can be described by the following

expression,

\[
v^{i}_{k+1} = w\, v^{i}_{k} + c_1\,(g_{best,k} - x^{i}_{k}) + c_2\,(p^{i}_{best,k} - x^{i}_{k}),
\qquad
x^{i}_{k+1} = x^{i}_{k} + v^{i}_{k+1},
\tag{1}
\]

where   x   is the particle position in the solution space,   v   is

the velocity of the particle movement assuming a unity time

step, w   is the velocity control coefficient,  c1, c2  are the gain

control coefficients,  gbest   is the global-best position,  pbest   is

the position of a particular particle where the best fitness

is obtained so far, subscript   k   is the iteration index and

superscript  i  is the particle index.

Since the pioneering work in PSO [1], the gain control

coefficients  c1, c2  had been conventionally taken as random

numbers sampled from a uniform distribution, i.e.,

\[
c_1 \sim U[0, c_{1,\max}], \qquad c_2 \sim U[0, c_{2,\max}], \tag{2}
\]

where   U[·]   stands for a uniform distribution and   c1,max   is

the maximum value for  c1   and  c2,max   for  c2. Guidelines in

determining the gain control coefficients were suggested in

[15].

There have been proposals in determining the velocity

control coefficient w, see [23] [21], on the basis of governing

the movements of particles towards the global-best position.

This coefficient can be kept constant or made adaptive to

some criteria, e.g., decreases in proportion to the expired

iterations.

 B. Procedural Description

At the start of the algorithm, the particle positions are

generated to cover the solution space. Their positions may

be deterministically or randomly distributed and the number

of particles is pre-defined. In general, a small number reduces

the computational load but at the expense of extended itera-

tions required to obtain the optimum (but the optimal solution

is not known   a priori). The velocities   vi0   can also be set

randomly or simply assigned as zeros. A problem dependent

fitness function is evaluated and a fitness is assigned to each

particle. Among the set of fitness values, the one with the highest

value is taken as the global-best g_{best,0} (for a maximization

problem), and the corresponding initial positions are recorded as the

particle-best p^{i}_{best,0}. The velocity is then calculated using the

random gain coefficients. The particle positions are updated

and the procedure repeats. Finally, at the satisfaction of some

termination criteria, the global-best particle is reported as the

solution to the problem.
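To make this procedure concrete, the following is a minimal Python sketch of the conventional loop described above (maximization, Eqs. (1)-(2)). The swarm size, the iteration limit, the value w = 0.7 and the helper name pso are illustrative assumptions rather than settings taken from this paper.

import numpy as np

def pso(fitness, bounds, n_particles=50, n_iters=50,
        w=0.7, c1_max=2.0, c2_max=2.0, seed=0):
    """Minimal conventional PSO sketch (maximisation) following Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))  # initial positions
    v = np.zeros_like(x)                                      # initial velocities
    f = np.array([fitness(p) for p in x])
    p_best, p_best_f = x.copy(), f.copy()                     # particle-best memory
    g_idx = int(np.argmax(f))
    g_best, g_best_f = x[g_idx].copy(), f[g_idx]              # global-best memory

    for _ in range(n_iters):
        c1 = rng.uniform(0.0, c1_max, size=(n_particles, 1))  # Eq. (2), re-sampled each iteration
        c2 = rng.uniform(0.0, c2_max, size=(n_particles, 1))
        v = w * v + c1 * (g_best - x) + c2 * (p_best - x)     # Eq. (1)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f > p_best_f
        p_best[better], p_best_f[better] = x[better], f[better]
        if p_best_f.max() > g_best_f:                         # replace the global-best if improved
            g_idx = int(np.argmax(p_best_f))
            g_best, g_best_f = p_best[g_idx].copy(), p_best_f[g_idx]
    return g_best, g_best_f

# Example call on the single-peak landscape of Eq. (13):
# pso(lambda p: np.exp(-0.5 * (p[0]**2 + p[1]**2) / 4.0), [(-10, 10), (-10, 10)])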

C. Analysis

An analysis of the behavior of the PSO algorithm can be conducted by adopting developments from control engineer-

ing principles [26]. Denote the position error of a particular

particle (the particle index (i) is dropped) as

\[
\varepsilon_k = g_{best,k} - x_k, \tag{3}
\]

the velocity expression becomes

\[
v_{k+1} = w v_k + c_1 \varepsilon_k + c_2 p_{best,k} - c_2 g_{best,k} + c_2 \varepsilon_k
        = w v_k + (c_1 + c_2)\,\varepsilon_k - c_2\,(g_{best,k} - p_{best,k}). \tag{4}
\]

The particle update becomes

\[
x_{k+1} = x_k + w v_k + (c_1 + c_2)\,\varepsilon_k - c_2\,(g_{best,k} - p_{best,k}). \tag{5}
\]

Substituting the position error and assuming that the global optimum remains constant, i.e., g_{best,k+1} = g_{best,k}, we have
\[
\varepsilon_{k+1} = \varepsilon_k - w v_k - (c_1 + c_2)\,\varepsilon_k + c_2\,(g_{best,k} - p_{best,k}). \tag{6}
\]

The particle position error and velocity can be written in

the state-space form as
\[
\begin{bmatrix} \varepsilon_{k+1} \\ v_{k+1} \end{bmatrix}
=
\begin{bmatrix} 1 - c_1 - c_2 & -w \\ c_1 + c_2 & w \end{bmatrix}
\begin{bmatrix} \varepsilon_{k} \\ v_{k} \end{bmatrix}
+
\begin{bmatrix} c_2 \\ -c_2 \end{bmatrix}
(g_{best,k} - p_{best,k}),
\tag{7}
\]

or

\[
z_{k+1} = A z_k + B u_k, \tag{8}
\]

where z_k = [\varepsilon_k,\ v_k]^{T}, u_k = g_{best,k} - p_{best,k}, and

matrices  A,  B  are self-explanatory.

It becomes clear that the requirement for convergence

implies   εk   →   0   and   vk   →   0   as time   k   → ∞. When the

best solution is found, g_{best,k} becomes a constant and p_{best,k}

will tend to   gbest,k   if the system is stable. The stability of 

a discrete control system can be ascertained by constraining

the magnitude of the eigenvalues  λ1,2  of the system matrix

A \in \mathbb{R}^{2 \times 2} to be less than unity, that is
\[
\lambda_{1,2} \in \{\lambda \mid \lambda^2 - (1 + w - c_1 - c_2)\lambda + w = 0\}, \qquad |\lambda_{1,2}| < 1. \tag{9}
\]


By choosing the maximal random variables  c1   and  c2   to be

c_{1,\max} = 2 and c_{2,\max} = 2 and taking the expected values

under the uniform distribution, the coefficients become

c1  = 1  and  c2 = 1. This case corresponds to a total feedback 

of the discrepancy of the particle positions from the desired

solution at  gbest,k. Writing out the eigenvalues, we have

\[
\lambda_{1,2} = \frac{1}{2}\left( w - 1 \pm \sqrt{1 - 6w + w^2} \right). \tag{10}
\]

After some manipulations, it can be shown that w < 1 with

c1,max  =  c2,max = 2  will guarantee stability for the system,

hence, the particles will converge to  gbest,k.
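The stability condition can also be checked numerically by forming the system matrix A of Eq. (7) and inspecting its eigenvalue magnitudes. The small sketch below, with a few arbitrarily chosen values of w for illustration, reproduces the conclusion that w < 1 keeps the expected-value system stable when c_{1,max} = c_{2,max} = 2.

import numpy as np

def spectral_radius(w, c1, c2):
    """Largest eigenvalue magnitude of the error/velocity matrix A in Eq. (7)."""
    A = np.array([[1.0 - c1 - c2, -w],
                  [c1 + c2,        w]])
    return np.abs(np.linalg.eigvals(A)).max()

# Expected values c1 = c2 = 1 (uniform on [0, 2]); stable only for w < 1, as in Eq. (10)
for w in (0.5, 0.9, 1.1):
    print(w, spectral_radius(w, 1.0, 1.0) < 1.0)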

 D. Motivation

The position of the global-best particle is derived from

the start of the algorithm; it is replaced only when a better

candidate solution is found. However, the availability of a

better solution depends on its position being discovered by a

particle, which in turn is governed by the trajectory followed by

the particle. The control coefficients, namely,   w, c1   and   c2

therefore play crucial roles in the behavior of the algorithm. Each particle, under the influence of the control coefficients,

is steered towards or away from the optimal solution in the

context of a stable or unstable control system.

It should be noted here that, since the PSO algorithm

implicitly maintains the global-best solution during its itera-

tion, it requires the algorithm to explore the solution space.

Hence, exploiting the vicinity of the global solution becomes

a secondary objective. Alternative strategies will be proposed

in the following which are aimed at achieving the exploration

goal.

III. ALTERNATIVE STRATEGIES

Variations to the conventional PSO implementation are

proposed with the following strategies. A combination of 

uniform and Gaussian distributed randomness is adopted

together with a selection process that results in a test for

exploring and exploiting the solution space.

 A. Strategy 1: Uniform Distributed Gain Control

This is the conventional PSO implementation and is used

as a reference for performance comparison. The control

gains   c1   and   c2   are given by Eq. 2. It should be noted

that the controls are renewed (re-sampled from the uniform

distribution) during each iteration.

B. Strategy 2: Pre-assigned Uniform Distributed Gain Control

This strategy aims at testing the randomness over the

iteration horizon. The uniform distribution is adopted as in

Strategy 1, however, the random control coefficients gener-

ated at the start of the algorithm are kept unchanged during

the iterations, i.e., pre-assigned. This approach determines

whether a particular particle is initially made stable (for refining the solution) or

unstable (for searching).

C. Strategy 3: Uniform Distributed Gain Control with Selection

The principle of division of labor is employed in this strat-

egy. Promising particles with a higher than average fitness

use the pre-assigned gain control coefficients (see Strategy

2) and are directed towards refining the solution. The below

average particles are re-assigned with a uniform random

coefficient generated in each iteration and are devoted to searching the remaining solution space. The selection

is implemented as

\[
\text{if } f^{i} > \bar{f}: \quad c^{i}_{1,k} = c^{i}_{1,0},\ c^{i}_{2,k} = c^{i}_{2,0};
\qquad
\text{otherwise: } c^{i}_{1,k} \sim U[0, c_{1,\max}],\ c^{i}_{2,k} \sim U[0, c_{2,\max}], \tag{11}
\]

where f^{i} is the fitness of the i-th particle and \bar{f} is the average

fitness.
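One possible per-iteration realisation of the selection rule in Eq. (11) is sketched below; the array names (f for the current fitness values, c1_0 and c2_0 for the pre-assigned gains) are illustrative assumptions.

import numpy as np

def select_gains(f, c1_0, c2_0, c1_max=2.0, c2_max=2.0, rng=None):
    """Eq. (11): particles with above-average fitness keep their pre-assigned
    gains; the rest are re-sampled from the uniform distribution each iteration."""
    if rng is None:
        rng = np.random.default_rng()
    above = f > f.mean()
    c1 = np.where(above, c1_0, rng.uniform(0.0, c1_max, size=f.shape))
    c2 = np.where(above, c2_0, rng.uniform(0.0, c2_max, size=f.shape))
    return c1, c2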

 D. Strategy 4: Gaussian Distributed Gain Control

This is similar to the conventional approach (Strategy 1)

with the exception that the coefficients are sampled from a

Gaussian distribution. That is

\[
c_1 \sim N(0.5, 1), \qquad c_2 \sim N(0.5, 1), \tag{12}
\]

where the Gaussian distributions are characterized by a mean

of   0.5   and variance of   1. By this setting of the Gaussian,

approximately   30%   of the coefficients are negative and

particles will be  bounced  from the optimum and be devoted

to the search task. Furthermore, approximately another 30% of particles are intentionally made unstable and deployed in

exploring the solution space.
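The proportions quoted above can be verified empirically by sampling from N(0.5, 1), as in the short sketch below; the sample size is arbitrary and the interpretation comments simply restate the text.

import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(loc=0.5, scale=1.0, size=1_000_000)       # Eq. (12): N(0.5, 1)
print("negative (bounced from optimum):", (c < 0).mean())  # roughly 0.31
print("above 1 (treated as destabilising):", (c > 1).mean())  # roughly 0.31
print("within [0, 1]:", ((c >= 0) & (c <= 1)).mean())      # roughly 0.38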

E. Strategy 5: Pre-assigned Gaussian Distributed Gain Control

A similar approach to Strategy 2, in assigning the gain

coefficients, is adopted here, with a Gaussian distribution (see Strategy 4) used instead of a uniform distribution.

F. Strategy 6: Gaussian Distributed Gain Control with Selection

This is a replica of Strategy 3 with the use of the Gaussian

distribution.

G. General Variations

The velocity control (coefficient  w   in Eq. 1) is kept con-

stant for all the strategies in one set of test runs. In another set of

tests, it is drawn from a uniform distribution at the start of the iterations

and pre-assigned to the particles. The effect of the size of the swarm

is also varied. The numbers of particles used in the tests are

50, 100 and 200 respectively.
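A sketch of the two velocity-coefficient settings is given below; the constant value 0.7 and the U[0, 1) sampling range are assumptions for illustration, since the text does not specify the numerical values used.

import numpy as np

def velocity_coeff(n_particles, pre_assigned_random=False, w_const=0.7, rng=None):
    """Constant momentum vs. a per-particle value drawn once at the start
    and then kept fixed (the constant and range are illustrative only)."""
    if rng is None:
        rng = np.random.default_rng()
    if pre_assigned_random:
        return rng.uniform(0.0, 1.0, size=n_particles)  # pre-assigned, unchanged thereafter
    return np.full(n_particles, w_const)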

IV. EXPERIMENTS AND RESULTS

Numerical simulations are conducted to study the behavior

of the PSO on various test functions and randomization strategies

on control coefficients. Several test scenarios are formulated

and each case is run through a number of simulations in

order to collect statistics of the results. Performances are

then assessed on the basis of the expected values of the

optimization error and the probabilities of achieving specific

performance thresholds are reported.


 A. Test Functions

Let the optimization problem at hand be the maximization

of functions. Solution surfaces in the form of 2-dimensional

landscapes are constructed that range from a simple single

peak to complicated surfaces with multiple peaks and deeps. The values

obtained from the landscapes are taken directly as the fitness

functions in the PSO algorithm. They are described and

illustrated in the following.

1) Single Peak: This test function is constructed from an

exponential function (see Fig. 1) and contains a single peak over

the solution space. That is

\[
f = \exp\!\left( \frac{-0.5\,(x^2 + y^2)}{\sigma^2} \right), \quad \sigma = 2,
\qquad x \in [-10, 10],\ y \in [-10, 10],\ f_{\max} \text{ at } x = 0,\ y = 0. \tag{13}
\]

The support of the peak covers a relatively large portion

of the overall landscape. A large support ensures a higher

probability for the existence of an initial particle falling in

the vicinity of the peak. The optimization for a single peak is

considered as a simple case for the PSO and will serve as a benchmark for performance comparison.

Fig. 1. Landscape 1: single peak.

2) Single Peak with Multiple Ridges and Trenches:   A

single peak is derived from a sine function on the radius

r   from the origin and a small positive number     is added.

The function is

\[
r = x^2 + y^2 + \epsilon, \qquad f = \frac{\sin(2r)}{r},
\qquad x \in [-10, 10],\ y \in [-10, 10],\ f_{\max} \text{ at } x = 0,\ y = 0. \tag{14}
\]

The incorporation of the sine function produces a multitude

of ridges and trenches as depicted in Fig. 2. The peak 

maintains a relatively higher magnitude as compared to the

ridges but the support covers only a small portion of the

solution space. The small support hinders the finding of 

the optimum and the particles may be trapped in the local

ridges. The resulting degree of difficulty is thus increased as

compared to the previous test function.

Fig. 2. Landscape 2: single peak, multiple ridges and trenches.

3) Multiple Peaks with Multiple Deeps: This is a compli-

cated landscape consisting of multiple peaks and deeps con-

structed from different scales, polarities and shifts of the

centers of exponential functions. That is

\[
\begin{aligned}
f ={}& 3(1 - \tilde{x}^2)\exp\!\big(-(\tilde{x}^2 + (\tilde{y} + 1)^2)\big)
 - \tfrac{1}{3}\exp\!\big(-((\tilde{x} + 1)^2 + \tilde{y}^2)\big) \\
 &- 10\Big(\tfrac{\tilde{x}}{5} - \tilde{x}^3 - \tilde{y}^3\Big)\exp\!\big(-(\tilde{x}^2 + \tilde{y}^2)\big)
 + 8\exp\!\left(\frac{-0.5\,(\tilde{x}^2 + \tilde{y}^2)}{0.33^2}\right), \\
&\tilde{x} = 0.33x,\ \tilde{y} = 0.33y,
\qquad x \in [-10, 10],\ y \in [-10, 10],\ f_{\max} \text{ at } x = 0,\ y = 0. \tag{15}
\end{aligned}
\]

The landscape is shown in Fig. 3. Wider supports with

lower peaks are located around the highest but narrow peak. Deeps are also found around the highest peak. Due to the

existence of the other peaks, particles may be attracted to a

local optimum (the lower peaks). Furthermore, the narrow

peak may also impose difficulties for the initial particles

to cover the optimal solution. Therefore, this is a difficult

optimization problem.

Fig. 3. Landscape 3: multiple peaks, multiple deeps.
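For reference, the three landscapes can be coded as follows. Landscapes 1 and 2 follow Eqs. (13)-(14) directly; Landscape 3 follows the reconstruction of Eq. (15) above and, like the value chosen for ε, should be treated as indicative rather than authoritative.

import numpy as np

def landscape1(x, y, sigma=2.0):
    """Eq. (13): a single Gaussian peak at the origin."""
    return np.exp(-0.5 * (x**2 + y**2) / sigma**2)

def landscape2(x, y, eps=1e-6):
    """Eq. (14): narrow central peak surrounded by ridges and trenches
    (eps is an assumed small positive constant)."""
    r = x**2 + y**2 + eps
    return np.sin(2.0 * r) / r

def landscape3(x, y):
    """Eq. (15) as reconstructed above: a peaks-like surface plus a narrow
    Gaussian at the origin; details may differ from the original paper."""
    xs, ys = 0.33 * x, 0.33 * y
    return (3.0 * (1.0 - xs**2) * np.exp(-(xs**2 + (ys + 1.0)**2))
            - (1.0 / 3.0) * np.exp(-((xs + 1.0)**2 + ys**2))
            - 10.0 * (xs / 5.0 - xs**3 - ys**3) * np.exp(-(xs**2 + ys**2))
            + 8.0 * np.exp(-0.5 * (xs**2 + ys**2) / 0.33**2))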


4) Remarks:   For the comparison of performances in the

sequel, the ranges of the solution spaces are made the same for

all the test functions. The fitness values are also normalized

such that the maximum fitness is unity. That is

\[
f^{i}_{k} \leftarrow \frac{f^{i}_{k}}{\max_i (f^{i}_{k})}, \tag{16}
\]

where i = 1, ..., N and N is the number of particles.

 B. Performance Assessment 

Since the PSO is an agent-based, stochastic algorithm,

statistical tests are employed to assess the performance.

They include the calculation of the expected value and the

determination of a performance index in terms of cumulative

probabilities against a confidence threshold.

1) Expected Values: Each test (T_r = 50 test runs in total)

goes through a specified number of iterations (N_g = 50) and

the terminating global-best particle positions x^{j} (superscript

j = 1, ..., T_r is the test index) are stored. They are used

to construct a probability distribution function (pdf) or a

histogram containing H  = 50 bins spanning across the range

of the particle positional coordinates, see Fig. 4.

Fig. 4. Typical error probability distribution (histograms of the x- and y-coordinate errors).

The statistical expected value is calculated from

\[
\bar{x} = E\{x\} = \sum_{k=1}^{H} h^{xy}_{k}\, x_k, \tag{17}
\]

where h^{xy}_{k} is the probability of the particle falling inside the

k-th bin centered at x_k in the corresponding histogram for

the x- and y-coordinate of the landscape. A typical result of 

the evolution of the global-best particle is shown in Fig. 5

where the convergence to the solution is indicated.

2) Confidence Threshold: During each test run, the nor-

malized fitness of the global-best particle against the overall

fitness,
\[
{}^{j}f_{k} = \max_i (f^{i}_{k}), \tag{18}
\]

evaluated at the k-th iteration in the j-th test run, is stored.

These traces of fitness are averaged, that is

\[
\bar{f}_{k} = T_r^{-1} \sum_{j=1}^{T_r} {}^{j}f_{k}. \tag{19}
\]

Fig. 5. Typical global-best particle position against iterations.

Fig. 6. Typical global-best fitness cumulative distribution.

Thresholds used in this work are the 95% and 99% con-

fidence levels. The corresponding iteration indices satisfying

these thresholds are determined such that
\[
k_{95} \leftarrow \arg\min_k \{\, k : \bar{f}_{k} > 0.95 \,\}, \qquad
k_{99} \leftarrow \arg\min_k \{\, k : \bar{f}_{k} > 0.99 \,\}. \tag{20}
\]
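A rough sketch of how these two indicators can be computed from the stored runs is given below; the argument names (one coordinate's terminal errors for Eq. (17), the averaged normalised fitness trace fbar for Eq. (20)) and the bin count are illustrative assumptions.

import numpy as np

def expected_position(samples, n_bins=50):
    """Eq. (17): histogram-based expected value of one coordinate."""
    h, edges = np.histogram(samples, bins=n_bins)
    p = h / h.sum()                           # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])  # bin centers x_k
    return np.sum(p * centers)

def threshold_iterations(fbar):
    """Eq. (20): first iterations at which the averaged normalised fitness
    trace exceeds the 95% and 99% thresholds (None plays the role of NA)."""
    k95 = int(np.argmax(fbar > 0.95)) if (fbar > 0.95).any() else None
    k99 = int(np.argmax(fbar > 0.99)) if (fbar > 0.99).any() else None
    return k95, k99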

A typical confidence result is sketched in Fig. 6. It is

shown that at the expiry of the  3rd iteration, the fitness of 

the best particle obtained so far approaches 95% of the final

fitness (see markers at the bottom-left of the plot). Moreover,

at the 5th iteration, the fitness of the best particle reaches 99%

of the final value.

C. Test Cases

Test cases are combinations of the following conditions.

They include: i) landscape, ii) randomness for velocity ad-

 justment, iii) distribution of control coefficients, iv) control

strategy and v) number of particles. The combinations are

listed in the Table I.

The landscape and velocity adjustment are combined to

form   6   conditions while the distribution and control form

another   6   conditions. For each setting on the number of 


TABLE I
CONDITIONS FOR TEST CASES

Condition         Description
landscape         single peak / ridge & trench / multi-peak & deep
vel. adjustment   constant / pre-assigned uniform
distribution      uniform / Gaussian
control           adaptive / pre-assigned / selective
particles         50 / 100 / 200

particles, these conditions give a total of 6 × 6 = 36 test

cases. The number of particles is also varied between 50, 100

and 200. Implementation strategies are defined according to

the presentation in Section III.
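The combinations per swarm size can be enumerated directly, for example with itertools.product; the labels below simply mirror Table I.

from itertools import product

landscapes   = ["single peak", "ridge & trench", "multi-peak & deep"]
vel_adjust   = ["constant", "pre-assigned uniform"]
distribution = ["uniform", "Gaussian"]
control      = ["adaptive", "pre-assigned", "selective"]

cases = list(product(landscapes, vel_adjust, distribution, control))
assert len(cases) == 36   # 6 x 6 combinations per swarm size (50, 100, 200 particles)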

 D. Results

Test results are consolidated into the error between the

optimal and the reported solution (in x-/y-coordinates) from

different PSO strategies, see Table II. Moreover, the number

of iterations needed to reach the vicinity of the highest

fitness obtained is given in Table III (where NA indicates

termination beyond test iteration limits of 50). The best

performing strategies are marked with a  (∗)  in the tables.

1) Effectiveness:   The effectiveness of the PSO algorithm

is reflected by the closeness of the reported solution to the

analytical optimum (which is known by the design of the

test functions but is not used in the algorithm). As shown

in Table II, the conventional implementation (Strategy 1) is

not always the best performing strategy. In the majority of

cases the best performer is the selective strategy (Strategy 3), where

the allocation of randomness depends on the fitness of the

particles. This is expected from the principle of division of 

labor: fitter particles are used in refining the solution

while the others are deployed in searching unexplored

solution spaces. This results in a more effective use of computational resources.

For the simple test function (Landscape 1), performances

are relatively comparable over all strategies. This finding is

reasonable, since the focus of attraction (the single peak) is

unique and there is no local optimum over the solution space.

With regard to the test function of moderate complexity

(Landscape 2), strategies employing uniformly distributed

random control coefficients generally perform better than

cases with a Gaussian distribution. By the characteristics

of the Gaussian distribution, only a small portion (within

the range [0, 1], accounting for approximately 40% of

the particles) makes the algorithm stable while the others

are devoted to searching. The results from the complicated test function (Landscape 3) indicate that the strategy with

selection and Gaussian distribution performs better. This is

because the landscape contains several local peaks in the

vicinity of the global optimum; the consequence is that

particles are trapped at these local peaks. Fig. 7 shows

a typical case when the global-best particle is trapped in

a local optimum (solutions in the y-coordinate are away from

zero).

The effects attributed to the velocity control coefficient

(constant vs. uniform random) are not noticeable. This is

Fig. 7. Solution error due to local optima trap.

due to the fact that this coefficient is responsible for the

time rate of convergence and is not directly involved in

the search operation. A larger number of particles naturally

leads to a better coverage of the solution space at the initial

iterations. Thus, a temporary global-best is available with a higher probability of being the actual optimum.

2) Efficiency:   The PSO efficiency is derived from the

number of iterations required to reach a specified fraction of the

highest fitness obtained in the test runs. In this work, the 95% and 99% thresholds are used as the efficiency indicators. The

selective and uniformly distributed control strategy (Strategy

3) is the most promising across the test settings. This result

coincides with that observed from the effectiveness results.

For the simple test function (Landscape 1), all strategies

perform similarly with small numbers of needed iterations to

reach the fitness threshold. This is because the test function

is so simple that the benefits derived from a better strategy

are not noticeable. The results from the test function with

moderate difficulty (Landscape 2) indicate that strategies

with the uniform distribution slightly outperform their coun-

terparts using a Gaussian distribution. However, as the size

of the swarm increases (e.g., 200 particles), the difference in

efficiency is marginal. This can be explained by the fact that

once the global-best particle is found, subsequent efforts will

not further improve the solution. Hence, a complete coverage

of the solution space is always desirable. For the complicated

test function (Landscape 3), strategies using the uniform

distribution generally outperform their Gaussian counterparts, especially

when the number of particles used is also increased. A typical

result from the conventional PSO implementation (Strategy 1) is shown in Fig. 8, where a protracted approach towards

the highest fitness is indicated (thick line).

As observed from the effectiveness results, the effect

of velocity control is also not noticeable in the context

of efficiency. Moreover, a larger number of particles used

obviously aids in improving the efficiency.

 E. Discussion

The overall performance combining effectiveness and effi-

ciency of the proposed strategies is difficult to assess. This is


Fig. 8. Degradation in fitness due to local optima trap.

due to the stochastic nature of the PSO algorithm and because

the random numbers generated do not accurately follow the

specified distribution, in particular for a small swarm size.

Another ambiguity encountered is that a particular strategy

is applicable to a specific class of functions. This is a

consequence of the well-known no-free-lunch principle.

For the test cases considered in this work, the strategy with

selection and uniformly distributed random control coefficients

performs better than the others, including the conventional

PSO implementation, in a majority of cases.

V. CONCLUSION

This paper has presented an empirical study on the ef-

fect of randomness on the control coefficients in the PSO

algorithm. The need for alternative randomization strategies

was revealed by analyzing the algorithm from a control

engineering perspective, which led to the proposal for

solution space coverage. Alternative strategies on parameter

randomness were tested on functions with different degrees

of landscape complexity. The strategies included the use of 

uniform and Gaussian distributed control coefficients and

the incorporation of selection schemes. Effectiveness and

efficiency were assessed on the basis of the accuracy of

the reported solution and the iterations required to obtain

a statistically ascertained level of confidence. Experimental

results have shown that for simple test functions, all pro-

posed strategies perform satisfactorily. For complicated

functions, however, the selective strategy with uniformly distributed random

coefficients, by invoking the principle of division-of-labor,

performs better. Further work is directed towards real-world

applications, e.g., model parameter identification, system

estimation and controller design.

REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proc. IEEE Intl. Conf. on Neural Networks, Perth, Australia, Nov. 1995, pp. 1942-1948.
[2] T. Ray and K. M. Liew, "Society and civilization: an optimization algorithm based on the simulation of social behavior," IEEE Trans. on Evolutionary Computation, vol. 7, no. 4, Aug. 2003, pp. 386-396.
[3] R. Salomon, "Evolutionary algorithms and gradient search: similarities and differences," IEEE Trans. on Evolutionary Computation, vol. 2, no. 2, Jul. 1998, pp. 45-55.
[4] S. L. Cheng and C. Hwang, "Optimal approximation of linear systems by a differential evolution algorithm," IEEE Trans. on Systems, Man, and Cybernetics - Pt. A: Systems and Humans, vol. 31, no. 6, Nov. 2001, pp. 698-707.
[5] N. Krasnogor and J. Smith, "A tutorial for competent memetic algorithms: model, taxonomy and design issues," IEEE Trans. on Evolutionary Computation, vol. 9, no. 3, Oct. 2005, pp. 474-488.
[6] G. Ciuprina, D. Ioan and I. Munteanu, "Use of intelligent-particle swarm optimization in electromagnetics," IEEE Trans. on Magnetics, vol. 38, no. 2, Mar. 2002, pp. 1037-1040.
[7] G. Venter and J. S. Sobieski, "Particle swarm optimization," Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Material Conf., Denver, Colorado, Apr. 2002, AIAA 2002-1235.
[8] Y. Zheng, L. Ma, L. Zhang and J. Qian, "Robust PID controller design using particle swarm optimizer," Proc. 2003 IEEE Intl. Symposium on Intelligent Control, Houston, Texas, Oct. 2003, pp. 974-979.
[9] S. Naka, T. Genji, T. Yura and Y. Fukuyama, "A hybrid particle swarm optimization for distributed state estimation," IEEE Trans. on Power Systems, vol. 18, no. 1, Feb. 2003, pp. 60-68.
[10] D. W. Boeringer and D. H. Werner, "Particle swarm optimization versus genetic algorithms for phased array synthesis," IEEE Trans. on Antennas and Propagation, vol. 52, no. 3, Mar. 2004, pp. 771-779.
[11] H. Zhang, X. Li, H. Li and F. Huang, "Particle swarm optimization-based schemes for resource-constrained project scheduling," Automation in Construction, vol. 14, 2005, pp. 393-404.
[12] S. L. Ho, S. Yang, G. Ni, E. W. C. Lo and H. C. Wong, "A particle swarm optimization-based method for multiobjective design optimizations," IEEE Trans. on Magnetics, vol. 41, no. 5, May 2005, pp. 1756-1759.
[13] E. Ozcan and C. K. Mohan, "Particle swarm optimization: surfing the waves," Proc. 1999 Congress on Evolutionary Computation, Washington, DC, Jul. 1999, pp. 1939-1944.
[14] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. on Evolutionary Computation, vol. 6, no. 1, Feb. 2002, pp. 58-73.
[15] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, 2003, pp. 317-325.
[16] R. Mendes, J. Kennedy and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Trans. on Evolutionary Computation, vol. 8, no. 3, Jun. 2004, pp. 204-210.
[17] K. E. Parsopoulos, V. P. Plagianakos, G. D. Magoulas and M. N. Vrahatis, "Objective function stretching to alleviate convergence to local minima," Nonlinear Analysis, vol. 47, 2001, pp. 3419-3424.
[18] P. J. Angeline, "Using selection to improve particle swarm optimization," Proc. 1998 IEEE Intl. Conf. on Evolutionary Computation, Anchorage, Alaska, May 1998, pp. 84-89.
[19] M. Lovbjerg, T. K. Rasmussen and T. Krink, "Hybrid particle swarm optimiser with breeding and subpopulations," Proc. Genetic and Evolutionary Computation Conf. 2001, San Francisco, California, Jul. 2001, pp. 469-476.
[20] T. Huang and A. Sanagavarapu, "A hybrid boundary condition for robust particle swarm optimization," IEEE Antennas and Wireless Propagation Letters, vol. 4, 2005, pp. 112-117.
[21] H. Fan, "A modification to particle swarm optimization algorithm," Engineering Computations, vol. 19, no. 8, 2002, pp. 970-989.
[22] K. Yasuda, A. Ide and N. Iwasaki, "Adaptive particle swarm optimization," Proc. 2003 IEEE Intl. Conf. on Systems, Man and Cybernetics, Washington, DC, Oct. 2003, pp. 1554-1559.
[23] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," Proc. 1998 IEEE Intl. Conf. on Evolutionary Computation, Anchorage, Alaska, May 1998, pp. 69-73.
[24] J. Xu and Z. Xin, "An extended particle swarm optimizer," Proc. 19th IEEE Intl. Parallel and Distributed Processing Symposium, Apr. 2005, pp. 193-197.
[25] A. Rodriguez and J. A. Reggia, "Extending self-organizing particle systems to problem solving," Artificial Life, vol. 10, 2004, pp. 379-395.
[26] Z. Gajic and M. Lelic, Modern Control Systems Engineering, London: Prentice Hall, 1996.


TABLE II
TEST RESULTS FOR EFFECTIVENESS (EXPECTED ERRORS IN X-/Y-COORDINATES)

Constant velocity coefficient, 50 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1    +0.000/-0.000    -0.000/+0.000    -0.028/+4.792
Strategy 2    +0.000/+0.000    +0.000/-0.000    -0.022/+4.792
Strategy 3*   -0.000/+0.000    +0.000/-0.000    -0.023/+3.160
Strategy 4    -0.015/+0.009    +0.010/+0.001    -0.037/+3.453
Strategy 5    +0.000/+0.000    -0.000/-0.000    -0.019/+4.792
Strategy 6    -0.000/-0.000    +0.002/+0.002    -0.043/+4.506

Random velocity coefficient, 50 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1*   -0.000/+0.000    -0.000/+0.000    -0.019/-0.013
Strategy 2    -0.000/+0.000    -0.000/+0.000    -0.026/+2.679
Strategy 3    -0.000/-0.000    -0.000/+0.000    -0.022/+0.760
Strategy 4    +0.006/+0.000    -0.003/+0.001    -0.010/-0.028
Strategy 5    +0.000/+0.000    -0.004/-0.000    -0.022/+1.826
Strategy 6    +0.002/+0.003    +0.005/-0.012    +0.047/+1.911

Constant velocity coefficient, 100 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1    -0.000/-0.000    +0.000/-0.000    -0.019/+0.758
Strategy 2    +0.000/-0.000    +0.000/-0.000    -0.023/+2.583
Strategy 3*   +0.000/+0.000    +0.000/-0.000    -0.018/+0.085
Strategy 4    -0.005/+0.001    +0.001/+0.002    -0.031/+2.200
Strategy 5    +0.000/-0.000    +0.000/-0.000    -0.028/+4.792
Strategy 6    -0.000/+0.000    +0.000/+0.000    -0.020/+2.298

Random velocity coefficient, 100 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1    +0.000/-0.000    +0.000/-0.000    -0.020/-0.011
Strategy 2    +0.000/+0.000    -0.000/-0.000    -0.017/+4.120
Strategy 3*   -0.000/+0.000    +0.000/+0.000    -0.019/-0.010
Strategy 4    -0.000/-0.001    -0.001/+0.001    +0.005/+2.202
Strategy 5    +0.000/+0.000    +0.000/+0.000    -0.015/+0.087
Strategy 6    -0.000/+0.000    -0.003/-0.008    +0.003/+1.737

Constant velocity coefficient, 200 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1*   -0.000/+0.000    +0.000/-0.000    -0.018/-0.010
Strategy 2*   -0.000/+0.000    +0.000/+0.000    -0.018/-0.010
Strategy 3    -0.000/-0.000    +0.000/-0.000    -0.020/+4.600
Strategy 4    -0.002/+0.001    -0.000/-0.000    -0.027/+0.047
Strategy 5    -0.000/+0.000    +0.000/+0.000    -0.025/-0.010
Strategy 6    +0.000/-0.000    +0.000/+0.000    -0.017/-0.018

Random velocity coefficient, 200 particles
              Landscape 1      Landscape 2      Landscape 3
Strategy 1*   +0.000/-0.000    +0.000/-0.000    -0.018/-0.010
Strategy 2*   -0.000/-0.000    +0.000/+0.000    -0.018/-0.010
Strategy 3*   +0.000/-0.000    +0.000/+0.000    -0.018/-0.010
Strategy 4    +0.000/-0.000    -0.000/+0.000    -0.008/-0.019
Strategy 5*   +0.000/+0.000    -0.000/-0.000    -0.018/-0.010
Strategy 6    +0.000/+0.000    +0.000/+0.000    -0.021/+0.001

TABLE III
TEST RESULTS FOR EFFICIENCY (ITERATIONS TO REACH 95% AND 99% OF HIGHEST FITNESS)

Constant velocity coefficient, 50 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1    3 / 5         6 / 10        NA / NA
Strategy 2    2 / 5         5 / 8         NA / NA
Strategy 3*   2 / 5         5 / 8         33 / NA
Strategy 4    3 / 6         11 / 30       NA / NA
Strategy 5    3 / 5         8 / 11        NA / NA
Strategy 6    3 / 7         8 / 17        NA / NA

Random velocity coefficient, 50 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1    2 / 4         6 / 9         30 / NA
Strategy 2    2 / 4         5 / 7         38 / NA
Strategy 3*   2 / 4         5 / 9         26 / NA
Strategy 4    3 / 6         9 / 16        NA / NA
Strategy 5    3 / 6         7 / 13        NA / NA
Strategy 6    3 / 6         9 / 21        NA / NA

Constant velocity coefficient, 100 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1    2 / 3         4 / 6         20 / NA
Strategy 2    2 / 3         3 / 6         24 / NA
Strategy 3*   2 / 3         3 / 6         17 / 50
Strategy 4    2 / 4         5 / 11        34 / NA
Strategy 5    2 / 4         4 / 8         NA / NA
Strategy 6    2 / 4         5 / 10        NA / NA

Random velocity coefficient, 100 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1    2 / 3         4 / 6         16 / 48
Strategy 2    2 / 3         4 / 5         13 / NA
Strategy 3*   2 / 3         3 / 6         18 / 46
Strategy 4    2 / 4         5 / 10        48 / NA
Strategy 5    2 / 4         5 / 8         35 / NA
Strategy 6    2 / 4         5 / 11        43 / NA

Constant velocity coefficient, 200 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1    2 / 2         3 / 5         10 / 19
Strategy 2*   2 / 2         2 / 5         6 / 19
Strategy 3    2 / 2         2 / 4         9 / 24
Strategy 4    2 / 3         3 / 7         21 / NA
Strategy 5    2 / 3         3 / 6         15 / NA
Strategy 6    2 / 3         3 / 6         28 / NA

Random velocity coefficient, 200 particles
              Landscape 1   Landscape 2   Landscape 3
Strategy 1*   2 / 2         3 / 4         7 / 23
Strategy 2    2 / 2         2 / 4         9 / 23
Strategy 3    2 / 2         2 / 4         9 / 24
Strategy 4    2 / 3         3 / 6         23 / NA
Strategy 5    2 / 3         3 / 5         12 / 40
Strategy 6    2 / 3         3 / 5         14 / NA
