3 Particle Swarm Optimization (PSO)

3.1 Introduction

Swarm Intelligence is a branch of Artificial Intelligence based on the behavior of animals that live in groups and have some ability to interact with one another and with the environment in which they live [1].

Particle swarm optimization, abbreviated as PSO, is based on the behavior of a colony or swarm of insects (such as ants, termites, bees, and wasps), a flock of birds, or a school of fish. The PSO algorithm mimics the behavior of these social organisms [2]. The word particle denotes, for example, a bee in a colony or a bird in a flock. Each individual or particle in a swarm behaves in a distributed way, using both its own intelligence and the collective intelligence of the swarm. Thus, if one particle discovers a good path to food, the rest of the swarm is also able to follow the good path instantly, even if the particles are far away in the swarm [2][3][4]. The PSO algorithm was originally proposed by James Kennedy and Russell Eberhart in 1995 [3]. Each particle is assumed to have two characteristics: a position and a velocity. Each particle wanders around in the design space and remembers the best position (in terms of the food source or objective function value) it has discovered. The particles communicate information about good positions to each other and adjust their individual positions and velocities based on the information received [2][4].

As an example, consider the behavior of birds in a flock. Although each bird has limited intelligence by itself, it follows these simple rules:

1. It tries not to come too close to other birds.

2. It steers toward the average direction of other birds.

3. It tries to fit the “average position” between other birds, with no wide gaps in the flock.

Thus the behavior of the flock or swarm is based on a combination of three simple factors:

1. Cohesion—stick together.

2. Separation—don’t come too close.

3. Alignment—follow the general heading of the flock.

Cohesion, separation, and alignment are illustrated in Figure (3.1).

The PSO is developed based on the following model:

1. When one bird locates a target or food (or a minimum of the objective function), it instantaneously transmits the information to all other birds.

2. All other birds gravitate to the target or food (or the minimum of the objective function), but not directly.


3. There is a component of each bird’s own independent thinking as well as its past memory.

Thus the model simulates a random search in the design space for the minimum value of the objective function; gradually, over many iterations, the birds converge on the target (or the minimum of the objective function) [2][3][4].

Figure (3.1) (a) Cohesion, (b) separation, (c) alignment.

3.2 Basic elements of the PSO technique

The basic elements of the PSO technique are briefly stated and defined as follows:

· Particle, $X_j^i$: a candidate solution represented by an m-dimensional vector, where m is the number of optimized parameters. At time i, the jth particle can be described as $X_j^i = [x_{j,1}^i, \dots, x_{j,m}^i]$, where the x's are the optimized parameters and $x_{j,k}^i$ is the position of the jth particle with respect to the kth dimension, i.e., the value of the kth optimized parameter in the jth candidate solution.

· Population, pop(i): the set of n particles at time i, i.e., $pop(i) = [X_1^i, \dots, X_n^i]^T$. The number of particles in the population is usually between 20 and 30.

· Swarm: an apparently disorganized population of moving particles that tend to cluster together, even though each particle seems to be moving in a random direction [5].

· Particle velocity, $V_j^i$: the velocity of a moving particle, represented by an m-dimensional vector. At time i, the jth particle velocity can be described as $V_j^i = [v_{j,1}^i, \dots, v_{j,m}^i]$, where $v_{j,k}^i$ is the velocity component of the jth particle with respect to the kth dimension.

· Inertia weight, w(i): a control parameter used to control the impact of the previous velocities on the current velocity. It therefore influences the trade-off between the global and local exploration abilities of the particles. For the initial stages of the search process, a large inertia weight is recommended to enhance global exploration, while for the final stages the inertia weight is reduced for better local exploration.

· Individual best, $p_j^i$: as a particle moves through the search space, it compares its fitness value at the current position to the best fitness value it has attained at any time up to the current time. The best position, the one associated with the best fitness encountered so far, is called the individual best, $p_j^i$. For each particle in the swarm, $p_j^i$ can be determined and updated during the search. In a minimization problem with objective function f, the individual best of the jth particle is determined such that $f(p_j^i) \le f(X_j^e)$ for all $e \le i$. For simplicity, assume that $f_j^* = f(p_j^i)$. For the jth particle, the individual best can be expressed as $p_j^i = [p_{j,1}^i, \dots, p_{j,m}^i]$.

· Global best, $p_g^i$: the best position among all individual best positions achieved so far; hence, the global best can be determined such that $f(p_g^i) \le f(p_j^i)$, $j = 1, \dots, n$. For simplicity, assume that $f^{**} = f(p_g^i)$.

· Stopping criteria: the conditions under which the search process terminates. The search terminates if one of the following criteria is satisfied: (a) the number of iterations since the last change of the best solution is greater than a prespecified number, or (b) the number of iterations reaches the maximum allowable number [2][5].
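To make these elements concrete, the following sketch shows one common way to store them for a run. NumPy, the array shapes, the bounds, and all variable names are illustrative assumptions, not part of the text.

```python
import numpy as np

n, m = 25, 2                    # swarm size (20-30 is typical) and number of optimized parameters
rng = np.random.default_rng(0)

pop = rng.uniform(-5.0, 5.0, size=(n, m))  # population pop(i): n particles, each an m-vector X_j^i
vel = np.zeros((n, m))                     # particle velocities V_j^i
pbest_pos = pop.copy()                     # individual bests p_j^i
pbest_val = np.full(n, np.inf)             # f_j* = f(p_j^i), one value per particle
gbest_pos = pop[0].copy()                  # global best p_g^i
gbest_val = np.inf                         # f** = f(p_g^i)

i_max = 200       # stopping criterion (b): maximum allowable number of iterations
stall_limit = 30  # stopping criterion (a): iterations allowed since the last change of the best solution
```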

3.3 PSO Methods

There are several PSO methods, which differ in the form of the particle velocity-update equation:

3.3.1 Original PSO

The basic algorithm, as proposed by Kennedy and Eberhart (1995), uses the following notation:

$x_i^k$: particle position
$v_i^k$: particle velocity
$p_i^k$: best position found by the ith particle (personal best)
$p_g^k$: best position found by the swarm (global best, the best of the personal bests)
$c_1, c_2$: cognitive and social parameters
$r_1, r_2$: random numbers between 0 and 1

The position of each particle is updated as

$x_i^{k+1} = x_i^k + v_i^{k+1}$,  $i = 1, \dots, n$  (3-1)

with the velocity calculated as

$v_i^{k+1} = v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k)$,  $i = 1, \dots, n$  (3-2)

where $c_1$ and $c_2$ are the cognitive (individual) and social (group) learning rates, respectively, and $r_1$ and $r_2$ are uniformly distributed random numbers in the range [0, 1]. The parameters $c_1$ and $c_2$ denote the relative importance of the memory (position) of the particle itself versus the memory (position) of the swarm. The values of $c_1$ and $c_2$ are usually assumed to be 2 [2][3][4][5][9].
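As a concrete illustration of Eqs. (3-1) and (3-2), here is a minimal sketch of one iteration of the original update, written against the NumPy arrays introduced above. The function name and the per-particle draw of $r_1$, $r_2$ are illustrative choices, not prescribed by the text.

```python
import numpy as np

def original_pso_step(pop, vel, pbest_pos, gbest_pos, c1=2.0, c2=2.0, rng=None):
    """One sweep of Eq. (3-2) followed by Eq. (3-1) over all n particles."""
    if rng is None:
        rng = np.random.default_rng()
    n = pop.shape[0]
    r1 = rng.random((n, 1))  # r1, r2 ~ U(0, 1), one draw per particle
    r2 = rng.random((n, 1))
    vel = vel + c1 * r1 * (pbest_pos - pop) + c2 * r2 * (gbest_pos - pop)  # Eq. (3-2)
    pop = pop + vel                                                        # Eq. (3-1)
    return pop, vel
```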


3.3.2 Inertia Weighted PSO

It is found that the particle velocities usually build up too fast, so that the optimum of the objective function is skipped. Hence an inertia term, w, is added to moderate the velocity. Usually the value of w is varied linearly from 0.9 to 0.4 as the iterative process progresses. The velocity of the ith particle, with the inertia term, becomes

$v_i^{k+1} = w \, v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k)$,  $i = 1, \dots, n$  (3-3)

The inertia weight w was originally introduced by Shi and Eberhart in 1998 [2][5] to dampen the velocities over time (or iterations), enabling the swarm to converge more accurately and efficiently than the original PSO algorithm with Eq. (3-2).

Equation (3-3) is an adaptive velocity formulation, which improves the algorithm’s fine-tuning ability in the solution search. Equation (3-3) shows that a larger value of w promotes global exploration, while a smaller value promotes a local search; thus a consistently large value of w makes the algorithm explore new areas without much local search, and it may therefore fail to find the true optimum.

To achieve a balance between global and local exploration, and to speed up convergence to the true optimum, an inertia weight whose value decreases linearly with the iteration number is used:

$w(i) = w_{max} - \dfrac{w_{max} - w_{min}}{i_{max}} \, i$  (3-4)

Figure (3.2) Concept of the modification of a search point by PSO ($v_{gbest}$: velocity based on gbest; $v_{pbest}$: velocity based on pbest).

where $w_{max}$ and $w_{min}$ are the initial and final values of the inertia weight, respectively, and $i_{max}$ is the maximum number of iterations used in PSO. The values $w_{max} = 0.9$ and $w_{min} = 0.4$ are commonly used [2][5][9].
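Equations (3-3) and (3-4) translate directly into code. A short sketch with the commonly used values $w_{max} = 0.9$ and $w_{min} = 0.4$ follows; the function names are illustrative.

```python
def inertia_weight(i, i_max, w_max=0.9, w_min=0.4):
    """Eq. (3-4): inertia weight decreasing linearly from w_max to w_min over i_max iterations."""
    return w_max - (w_max - w_min) / i_max * i

def inertia_velocity(w, vel, pop, pbest_pos, gbest_pos, r1, r2, c1=2.0, c2=2.0):
    """Eq. (3-3): the previous velocity is damped by w before adding the cognitive and social pulls."""
    return w * vel + c1 * r1 * (pbest_pos - pop) + c2 * r2 * (gbest_pos - pop)
```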

Figure (3.3) Searching concept with agents in a solution space by PSO.


Figure (3.4) Linear decrease of the inertia weight w with the iteration number.

3.3.3 Constriction factor approach

Though the earliest researchers recognized that some form of damping of the particle dynamics (e.g., $V_{max}$) was necessary, the reason for this was not understood. But when the particle swarm algorithm is run without restraining the velocities in some way, they rapidly increase to unacceptable levels within a few iterations. Kennedy (1998) noted that the trajectories of non-stochastic one-dimensional particles contained interesting regularities when $c_1 + c_2$ was between 0.0 and 4.0. Clerc’s analysis of the iterative system led him to propose a strategy for the placement of “constriction coefficients” on the terms of the formulas; these coefficients controlled the convergence of the particle and provided an elegant and well-explained method for preventing explosion, ensuring convergence, and eliminating the arbitrary $V_{max}$ parameter. The analysis also takes the guesswork out of setting the values of $c_1$ and $c_2$.

Clerc and Kennedy (2002) noted that there can be many ways to implement the constriction coefficient. One of the simplest methods of incorporating it is the following:

$k = \dfrac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}$  (3-5)

where $\varphi = c_1 + c_2$, $\varphi > 4$.

The velocity of the ith particle is then computed as

$v_i^{k+1} = k \left[ v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k) \right]$  (3-6)

and the position of the ith particle as

$x_i^{k+1} = x_i^k + v_i^{k+1}$,  $i = 1, \dots, n$  (3-7)


When Clerc’s constriction method is used, $\varphi$ is commonly set to 4.1 with $c_1 = c_2$, and the constant multiplier k is then approximately 0.7298. This results in the previous velocity being multiplied by 0.7298 and each of the two $(p^k - x^k)$ terms being multiplied by a random number limited by $0.7298 \times 2.05 \approx 1.49618$ [9].

Note that a PSO with constriction is algebraically equivalent to a PSO with inertia: Eqs. (3-3) and (3-6) can be transformed into one another via the mapping $w \leftrightarrow k$ and $c_i \leftrightarrow k \, c_i$. The optimal settings suggested by Clerc therefore correspond to $w = 0.7298$ and $c_1 = c_2 = 1.49618$ for a PSO with inertia weight [6][7][8].
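The constants quoted above can be verified in a few lines. This sketch simply evaluates Eq. (3-5) and the mapping to the equivalent inertia-weight settings; the function name is illustrative.

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Eq. (3-5): Clerc's constriction coefficient; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

k = constriction()                      # ~0.7298 for phi = 4.1
print(round(k, 4), round(k * 2.05, 5))  # 0.7298 1.49618 -- the equivalent w and c1 = c2
```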

3.3.4 Fully informed particle swarm (FIPS)

In the standard version of PSO there are in fact only two effective sources of influence: the particle itself and its best neighbor; information from the remaining neighbors is unused. Mendes has revised the way particles interact with their neighbors [10]. Whereas in the traditional algorithm each particle is affected by its own previous performance and by the single best success found in its neighborhood, in Mendes’ fully informed particle swarm (FIPS) the particle is affected by all its neighbors, sometimes with no influence from its own previous success. FIPS can be written as follows:

$v_i^{k+1} = k \left\{ v_i^k + \dfrac{1}{K_i} \sum_{n=1}^{K_i} c_1 r_1 \left( p_{nbr_n} - x_i^k \right) \right\}$  (3-8)

$x_i^{k+1} = x_i^k + v_i^{k+1}$  (3-9)

where $K_i$ is the number of neighbors of particle i, and $nbr_n$ is i’s nth neighbor. This reduces to the traditional particle swarm if only the particle itself and the neighborhood best are considered (a $K_i = 2$ model). With good parameters, FIPS appears to find better solutions in fewer iterations than the canonical algorithm, but it is much more dependent on the population topology [9][10].
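A sketch of the FIPS update of Eqs. (3-8) and (3-9) for a single particle, assuming a precomputed neighbor list; all names are illustrative.

```python
import numpy as np

def fips_step(i, pop, vel, pbest_pos, neighbors, k=0.7298, c1=2.05, rng=None):
    """Eqs. (3-8)/(3-9): particle i is pulled toward the personal bests of ALL K_i neighbors."""
    if rng is None:
        rng = np.random.default_rng()
    K_i = len(neighbors[i])
    pull = np.zeros_like(pop[i])
    for nbr in neighbors[i]:                    # sum over the K_i neighbors nbr_1, ..., nbr_{K_i}
        r1 = rng.random(pop.shape[1])
        pull += c1 * r1 * (pbest_pos[nbr] - pop[i])
    vel[i] = k * (vel[i] + pull / K_i)          # Eq. (3-8)
    pop[i] = pop[i] + vel[i]                    # Eq. (3-9)
    return pop, vel
```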


3.4 Computational implementation of PSO

The computational flow of the PSO technique can be described in the following steps:

Step 1 (Number of particles):

Assume the size of the swarm (number of particles) is N. To reduce the total number of function evaluations needed to find a solution, a small swarm is preferable; but too small a swarm is likely to take longer to find a solution or, in some cases, may not find a solution at all. A size of 20 to 30 particles is usually assumed for the swarm as a compromise.

Step 2 (Initial position):

Generate the initial population of X in the range X (l) and X (u) randomly asX1,X2, . . . ,X N. Hereafter, for convenience, the particle (position of) j and

its velocity in iteration i are denoted as X ji and V j

i , respectively. Thus the

particles generated initially are denoted X1(0),X2

(0) , . . . ,X N(0). The vectors

X j(0))j = 1, 2, . . . ,N (are called particles or vectors of coordinates of

particles(similar to chromosomes in genetic algorithms) .

Step 3 (Initial velocity):

Set the velocities of the particles. All particles move toward the optimal point with some velocity; initially, all particle velocities are assumed to be zero.

Step 4 (Initial evaluation):

Evaluate each particle in the initial population using the objective function f. For each particle, set $p_j^{(0)} = X_j^{(0)}$ and $f_j^* = f_j$, j = 1, …, N. Search for the best value of the objective function, $f_{best}$, and set the particle associated with $f_{best}$ as the global best, $p_g^{(0)}$, with an objective function value of $f^{**}$. Set the initial value of the inertia weight, w(0).

Step 5

Update the iteration number as i = i + 1.

Step 6 (Weight updating):

Update the inertia weight as

$w(i) = w_{max} - \dfrac{w_{max} - w_{min}}{i_{max}} \, i$

Step 7 (Velocity updating):

Using the global best and the individual best of each particle, the velocity of the jth particle at iteration i is updated according to

$V_j^{(i)} = w(i) \, V_j^{(i-1)} + c_1 r_1 \left( p_j^{(i-1)} - X_j^{(i-1)} \right) + c_2 r_2 \left( p_g^{(i-1)} - X_j^{(i-1)} \right)$

where $c_1$ and $c_2$ are positive constants and $r_1$ and $r_2$ are uniformly distributed random numbers in [0, 1]. It is worth mentioning that the second term represents the cognitive part of PSO, where the particle changes its velocity based on its own thinking and memory, while the third term represents the social part, where the particle changes its velocity based on the social-psychological adaptation of knowledge. If a particle violates the velocity limits, set its velocity equal to the limit.

Step 8 (Position updating):

Based on the updated velocity, each particle changes its position according to

$X_j^{(i)} = X_j^{(i-1)} + V_j^{(i)}$

If a particle violates its position limits in any dimension, set its position at the proper limit.
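The clamping rules that close steps 7 and 8 are one call each with NumPy; a sketch, with array names as in the earlier snippets.

```python
import numpy as np

def clamp(vel, pop, v_limit, x_lower, x_upper):
    """Steps 7-8: velocities and positions that violate their limits are set to the proper limit."""
    return np.clip(vel, -v_limit, v_limit), np.clip(pop, x_lower, x_upper)
```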

Step 9 (individual best updating):


Each particle is evaluated according to its updated position. If $f_j < f_j^*$ for j = 1, …, N, then update the individual best as $p_j^{(i)} = X_j^{(i)}$ and $f_j^* = f_j$. In either case, go to step 10.

Step 10 (Global best updating):

Search for the minimum value $f_{min}$ among the $f_j^*$, where min denotes the index of the particle with the minimum objective function value. If $f_{min} < f_{best}$, then update the global best as $p_g = X_{min}^{(i)}$ and $f_{best} = f_{min}$. In either case, go to step 11.

Step 11 (Stopping criteria):

If one of the stopping criteria is satisfied, stop; otherwise, go to step 5.
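Putting steps 1 through 11 together, here is a compact, runnable sketch of the complete loop. The sphere test function, the box bounds, and every parameter value are illustrative assumptions; only the structure follows the eleven steps above.

```python
import numpy as np

def pso(f, x_lower, x_upper, n=25, i_max=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, stall_limit=30, seed=0):
    """Inertia-weighted PSO for minimizing f over the box [x_lower, x_upper]^m (Steps 1-11)."""
    rng = np.random.default_rng(seed)
    x_lower, x_upper = np.asarray(x_lower, float), np.asarray(x_upper, float)
    m = x_lower.size
    v_limit = x_upper - x_lower

    # Steps 1-3: swarm size n, random initial positions in [X(l), X(u)], zero initial velocities.
    pop = rng.uniform(x_lower, x_upper, size=(n, m))
    vel = np.zeros((n, m))

    # Step 4: evaluate the initial population; initialize individual and global bests.
    pbest_val = np.apply_along_axis(f, 1, pop)
    pbest_pos = pop.copy()
    g = np.argmin(pbest_val)
    gbest_pos, gbest_val = pop[g].copy(), pbest_val[g]

    stall = 0
    for i in range(1, i_max + 1):                    # Step 5: i = i + 1
        w = w_max - (w_max - w_min) / i_max * i      # Step 6: Eq. (3-4)

        # Step 7: velocity update (cognitive + social terms), then clamp to the limit.
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel = w * vel + c1 * r1 * (pbest_pos - pop) + c2 * r2 * (gbest_pos - pop)
        vel = np.clip(vel, -v_limit, v_limit)

        # Step 8: position update, then clamp to the search box.
        pop = np.clip(pop + vel, x_lower, x_upper)

        # Step 9: individual best updating.
        vals = np.apply_along_axis(f, 1, pop)
        improved = vals < pbest_val
        pbest_pos[improved], pbest_val[improved] = pop[improved], vals[improved]

        # Step 10: global best updating.
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]
            stall = 0
        else:
            stall += 1

        # Step 11: stop on (a) no improvement for stall_limit iterations or (b) i_max reached.
        if stall >= stall_limit:
            break
    return gbest_pos, gbest_val

# Usage: minimize the sphere function f(x) = sum(x^2) over [-5, 5]^2.
best_x, best_f = pso(lambda x: float(np.dot(x, x)), [-5.0, -5.0], [5.0, 5.0])
print(best_x, best_f)
```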


3.5 Flowchart of the PSO algorithm

Figure (3.5) Flowchart of PSO: start; create an initial swarm; evaluate the fitness of each particle; check and update the personal bests and the global best; update each particle’s velocity and position; check the stopping criteria; if unsatisfied, set a new iteration (i = i + 1) and repeat from the evaluation step; if satisfied, end with success.


3.6 The Categories of Optimization Techniques

Techniques used to solve optimization problems can be placed into two categories: local and global optimization algorithms.

3.6.1 Local Optimization

A local minimizer, $X_B^*$, of the region B is defined so that

$f(X_B^*) \le f(X), \quad \forall X \in B$  (3-10)

where $B \subseteq S$ and S denotes the search space. More importantly, note that B is a proper subset of S. A given search space S can contain multiple regions $B_i$ such that $B_i \cap B_j = \emptyset$ when $i \ne j$. It then follows that $X_{B_i}^* \ne X_{B_j}^*$, so that the minimizer of each region $B_i$ is unique. Any of the $X_{B_i}^*$ can be considered a minimizer, although they are merely local minimizers. There is no restriction on the value the function can assume at a minimizer, so that $f(X_{B_i}^*) = f(X_{B_j}^*)$ is allowed. The value $f(X_{B_i}^*)$ is called a local minimum.

Most optimization algorithms require a starting point $z_0 \in S$. A local optimization algorithm should guarantee that it is able to find the minimizer $X_B^*$ of the set B if $z_0 \in B$. Some algorithms satisfy a slightly weaker constraint, namely that they guarantee to find a minimizer $X_{B_i}^*$ of some set $B_i$, not necessarily the one closest to $z_0$ [11].

3.6.2 Global Optimization

The global minimizer, $X^*$, is defined so that

$f(X^*) \le f(X), \quad \forall X \in S$  (3-11)

where S is the search space. The term global minimum refers to the value $f(X^*)$, and $X^*$ is called the global minimizer.


A global optimization algorithm, like the local optimization algorithm described above, starts by choosing an initial position $z_0 \in S$. Contrary to the definition above in Eq. (3-10), some texts define a global optimization algorithm differently, namely as an algorithm that is able to find a (local) minimizer of $B \subseteq S$ regardless of the actual position of $z_0$. Such algorithms consist of two processes: "global" steps and "local" steps. The local steps are usually the application of a local minimization algorithm, and the "global" steps are designed to ensure that the algorithm moves into a region $B_i$, from which the "local" step will be able to find the minimizer of $B_i$.

These methods will be referred to as globally convergent algorithms, meaning that they are able to converge to a local minimizer regardless of their starting position $z_0$. They are also capable of finding the global minimizer, provided that the starting position $z_0$ is chosen correctly; however, there is no known reliable, general way of doing this.

Figure (3.6) illustrates the difference between the local minimizer $X_B^*$ and the global minimizer $X^*$. A true global optimization algorithm will find $X^*$ regardless of the choice of starting position $z_0$ [11][12].


Figure (3.6) The local minimizer $X_B^*$ versus the global minimizer $X^*$.

3.7 PSO Updates

There are two types of particle swarm optimization updates:

1. Synchronous update
2. Asynchronous update

3.7.1 Synchronous update

A synchronous method is obtained if all birds stop after flying for the same time interval and communicate with each other to get the current globally best position. Once this communication is completed, the birds can update their directions and speeds based on their own history and the globally best information available, and fly again. This scheme is called synchronous because all of the birds update their information at the same time; it is better suited to global optimization with PSO.

The synchronous PSO algorithm updates all particle velocities and positions at the end of every optimization iteration. Figure (3.8) below shows the synchronous scheme [13][14][15][16].

3.7.2 Asynchronous update

As mentioned above, PSO was inspired by flocks of birds. Suppose that a group of birds is randomly searching for food in an area where only one piece of bread exists. Although none of the birds knows the exact location of the bread, if each bird has some idea of its proximity to the bread, the flock can search the area cooperatively by communicating positions and proximities to one another.

The birds may fly in different ways to search the area. In the asynchronous scheme, the first bird flies a certain distance in a certain direction based on its own experience and the known location of the best position found so far. When it has finished flying, if its location is closer to the bread than the previous globally optimal position, it communicates this information to the second bird (otherwise, the globally optimal location remains the same). The second bird then uses this information to update its speed and position, and so on. Since with this method of searching the area the birds all update their speeds and directions at different times, the scheme is called asynchronous. Figure (3.9) below shows the asynchronous scheme [13][14][15][16].
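The difference between the two schemes is only where the best-value bookkeeping sits relative to the evaluation loop. The sketch below shows the asynchronous inner loop; in the synchronous scheme, the two best-updates and the move would instead be done in separate passes after all particles have been evaluated. Function and variable names are illustrative.

```python
import numpy as np

def move(x, v, pb, gb, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Inertia-weighted update, Eq. (3-3), for a single particle."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.size), rng.random(x.size)
    v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
    return x + v, v

def asynchronous_iteration(f, pop, vel, pbest, pbest_val, gbest, gbest_val):
    """Each particle is moved immediately after its evaluation, so it already
    sees any global best found by the particles processed before it."""
    for j in range(len(pop)):
        v = f(pop[j])                                  # evaluate particle j
        if v < pbest_val[j]:
            pbest[j], pbest_val[j] = pop[j].copy(), v  # update individual best
        if v < gbest_val:
            gbest, gbest_val = pop[j].copy(), v        # gbest refreshed mid-iteration
        pop[j], vel[j] = move(pop[j], vel[j], pbest[j], gbest)
    return pop, vel, pbest, pbest_val, gbest, gbest_val
```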

Figure (3.7) Pseudo-code listings for the (a) asynchronous and (b) synchronous PSO.

Figure (3.8) Flowchart of the synchronous PSO: initialize the algorithm constants $k_{max}$, w, $c_1$, $c_2$; set k = 1; randomly initialize all particle positions $x_i^k$ and velocities $v_i^k$; evaluate $f(x_1), f(x_2), \dots, f(x_n)$ for all particles; update the particle and swarm best values $p_i^k$ and $p_g^k$; update the velocity $v_i^k$ and position $x_i^k$ of all particles; if the stopping criterion is not satisfied, increment k and repeat; otherwise stop and output the results.

Figure (3.9) Flowchart of the asynchronous PSO: initialize the algorithm constants $k_{max}$, w, $c_1$, $c_2$; set k = 1 and randomly initialize all particle positions and velocities; set i = 1; evaluate the objective function f(x) for particle i and update the particle and swarm best values, velocity, and position of that particle; while i is less than the total number of particles, move to the next particle; once all particles have been processed, check the stopping criterion, incrementing k and repeating if it is not satisfied; otherwise stop and output the results.
