An Adaptive Differential Evolution Algorithm for Global Optimization in Dynamic Environments

Page 1: An Adaptive Differential Evolution Algorithm for Global Optimization in Dynamic Environments

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON CYBERNETICS 1

An Adaptive Differential Evolution Algorithm for Global Optimization in Dynamic Environments

Swagatam Das, Senior Member, IEEE, Ankush Mandal, and Rohan Mukherjee

Abstract—This article proposes a multipopulation-based adaptive differential evolution (DE) algorithm to solve dynamic optimization problems (DOPs) in an efficient way. The algorithm uses Brownian and adaptive quantum individuals in conjunction with the DE individuals to maintain the diversity and exploration ability of the population. This algorithm, denoted as dynamic DE with Brownian and quantum individuals (DDEBQ), uses a neighborhood-driven double mutation strategy to control the perturbation and thereby prevents the algorithm from converging too quickly. In addition, an exclusion rule is used to spread the subpopulations over a larger portion of the search space as this enhances the optima tracking ability of the algorithm. Furthermore, an aging mechanism is incorporated to prevent the algorithm from stagnating at any local optimum. The performance of DDEBQ is compared with several state-of-the-art evolutionary algorithms using a suite of benchmarks from the generalized dynamic benchmark generator (GDBG) system used in the competition on evolutionary computation in dynamic and uncertain environments, held under the 2009 IEEE Congress on Evolutionary Computation (CEC). The simulation results indicate that DDEBQ outperforms other algorithms for most of the tested DOP instances in a statistically meaningful way.

Index Terms—Differential evolution, diversity, double mutation strategy, dynamic optimization problems.

I. Introduction

DIFFERENTIAL evolution (DE) [1], [2] has emerged as one of the most powerful real-parameter optimizers currently in use. DE implements computational steps similar to those of standard evolutionary algorithms (EAs). However, unlike traditional EAs, DE variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution (such as the Gaussian distribution used in evolutionary programming (EP) and evolution strategies (ES), or the Cauchy distribution used in the fast EPs) is needed to generate offspring.

Manuscript received July 30, 2012; revised July 11, 2013; accepted July 27, 2013. This paper was recommended by Associate Editor Y. Tan.

This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org, provided by the authors. The supplementary document contains additional experimental and empirical evidence validating the various algorithmic components of the proposed DDEBQ algorithm. It also provides empirical justification for the graded diversity-preserving mechanisms induced by the quantum, Brownian, and differential evolution individuals, a complete pseudo-code of the algorithm, and justifications for the choices of the various control parameters of DDEBQ based on extensive experimental results. The material is a PDF file of 207 kB in size.

S. Das is with the Electronics and Communication Sciences Unit (ECSU), Indian Statistical Institute (ISI), Kolkata 700108, India (e-mail: [email protected]).

A. Mandal and R. Mukherjee are with the Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700108, India (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TCYB.2013.2278188

Several optimization problems in the real world are dynamic in nature. For these dynamic optimization problems (DOPs), the function landscape changes with time, i.e., the optima of the problem change their locations over time, and thus the optimizer should be able to track the optima continually by responding to the dynamic environment [3], [4]. Practical examples of such situations are price fluctuations, financial variations, stochastic arrival of new tasks in a scheduling problem, and machine breakdown or maintenance. Under dynamic environments, the converging tendency of a conventional EA (the tendency of the population members to concentrate within a small basin of the search space as the iterations progress) imposes severe limitations on its performance. If the population members converge rapidly, they will be unable to respond effectively to environmental changes. Therefore, in the case of DOPs the main challenge is to maintain a diverse population while at the same time producing high-quality solutions by tracking the moving optima. We note that there are also DOP instances where the optimal solution does not need to be tracked. For example, Allmendinger and Knowles [5] investigate DOPs where the constraints [ephemeral resource constraints (ERCs)] change over time but not the landscape and, thus, not the optimal solutions either. In this paper, we focus on real-parameter bound-constrained DOPs where the objective function landscape explicitly changes with time, and not on problems with ERCs.

Classical DE faces difficulties when applied to DOPs due to two main factors. First, DE individuals have a tendency to converge prematurely into small basins of attraction surrounding the local and global optima as the search progresses [6]. Thereafter, if any change occurs in the position of the optima, DE lacks sufficient explorative power to track down the new optima because the individuals are similar and the resulting perturbations are consequently small. Second, DE may occasionally stop proceeding toward the global optimum even though the population has not converged to a local optimum or any other point [2], [6]. Researchers have made some attempts to introduce suitable algorithmic modifications in DE to enable it to continually track changing optima under dynamic conditions. A brief account of such approaches is presented in Section II-B.

2168-2267 © 2013 IEEE


This article proposes a multipopulation-based adaptive DE with Brownian and quantum individuals (DDEBQ) to address DOPs. In each subpopulation, one individual is an adaptive quantum individual, analogous to a particle following the laws of quantum mechanics, and one individual is a Brownian individual, whose trajectory resembles that of a particle in Brownian motion. The other individuals in the subpopulation evolve in the DE framework with a neighborhood-based double mutation strategy. As the quantum and Brownian individuals do not follow the same DE rules as the others, they help to control the diversity of the population and thereby to enhance the search efficiency of the algorithm. A double mutation strategy is developed under the DE framework to prevent the algorithm from converging too quickly. In order to circumvent the problem of stagnation, an aging mechanism is integrated within DE. An exclusion scheme is also used so that subpopulations may distribute themselves evenly over the entire search space; this increases the explorative power of the algorithm and its ability to track the global optimum. The performance of the proposed algorithm is primarily tested on a suite of DOPs generated by the generalized dynamic benchmark generator (GDBG) that was proposed for the special session and competition on "Evolutionary Computation in Dynamic and Uncertain Environments," held under the IEEE Congress on Evolutionary Computation (CEC) 2009 [7]. A comparison of DDEBQ with several state-of-the-art dynamic evolutionary optimizers reflects the statistical superiority of the algorithm over a wide variety of real-parameter DOPs. A list of terminologies used to describe DDEBQ can be found in Table IV of the appendix.

The rest of the paper is organized as follows. Section II provides a brief description of classical DE and also presents a compact survey of the different modified DE schemes previously used for solving DOPs. Section III describes the proposed DDEBQ algorithm with all its salient features in sufficient detail. Section IV describes the benchmarks used and explains the simulation strategies employed for the experiments reported in the subsequent sections. Results of comparing DDEBQ with several state-of-the-art EAs are presented and discussed in Section V. Finally, conclusions are drawn in Section VI.

II. Background

A. Classical DE

A generation of the classical DE algorithm consists of four basic steps: initialization, mutation, crossover, and selection, of which only the last three are repeated in subsequent DE generations. The generations continue until some termination criterion (such as exhausting the maximum number of function evaluations) is satisfied.

1) Initialization of Population: DE searches for a global optimum within a continuous search space of dimensionality $D$. It begins with an initial population of target vectors $\vec{X}_i = [x_i^1, x_i^2, \ldots, x_i^D]$, where $i = 1, 2, \ldots, Np$ ($Np$ is the population size). The individuals of the initial population are randomly generated from a uniform distribution within the search space. The search space has maximum and minimum bounds in each dimension, which can be expressed as $\vec{X}_{\max} = [x_{\max}^1, x_{\max}^2, \ldots, x_{\max}^D]$ and $\vec{X}_{\min} = [x_{\min}^1, x_{\min}^2, \ldots, x_{\min}^D]$.

The $j$th component of the $i$th individual is initialized in the following way:

$$x_{i,0}^j = x_{\min}^j + \mathrm{rand}_i^j(0, 1) \cdot (x_{\max}^j - x_{\min}^j), \quad j \in \{1, 2, \ldots, D\} \tag{1}$$

where $\mathrm{rand}_i^j(0, 1)$ is a uniformly distributed random number in $(0, 1)$, instantiated independently for each $j$th component of the $i$th individual.
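The initialization step (1) can be sketched in pure Python; the helper name below is ours, not from the paper:

```python
import random

def initialize_population(np_size, x_min, x_max, seed=None):
    """Initialize Np target vectors uniformly at random within the
    per-dimension bounds [x_min[j], x_max[j]], following (1)."""
    rng = random.Random(seed)
    dim = len(x_min)
    return [[x_min[j] + rng.random() * (x_max[j] - x_min[j]) for j in range(dim)]
            for _ in range(np_size)]

pop = initialize_population(5, x_min=[-10.0, -10.0], x_max=[10.0, 10.0], seed=1)
# Every component lies inside its per-dimension bounds.
assert all(-10.0 <= x <= 10.0 for vec in pop for x in vec)
```

Note that `rng.random()` is drawn anew for every component of every individual, matching the per-component instantiation of $\mathrm{rand}_i^j(0,1)$ in (1).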

2) Mutation: After initialization, DE creates a donor vector $\vec{V}_{i,G}$ corresponding to each population member or target vector $\vec{X}_{i,G}$ in the current generation through mutation. The three most frequently used mutation strategies for DE are listed below:

$$\text{DE/rand/1:} \quad \vec{V}_{i,G} = \vec{X}_{r_1^i,G} + F \cdot (\vec{X}_{r_2^i,G} - \vec{X}_{r_3^i,G}) \tag{2}$$

$$\text{DE/best/1:} \quad \vec{V}_{i,G} = \vec{X}_{best,G} + F \cdot (\vec{X}_{r_1^i,G} - \vec{X}_{r_2^i,G}) \tag{3}$$

$$\text{DE/current-to-best/1:} \quad \vec{V}_{i,G} = \vec{X}_{i,G} + F \cdot (\vec{X}_{best,G} - \vec{X}_{i,G}) + F \cdot (\vec{X}_{r_1^i,G} - \vec{X}_{r_2^i,G}) \tag{4}$$

The indices $r_1^i$, $r_2^i$, and $r_3^i$ are mutually exclusive integers randomly chosen from the range $\{1, 2, \ldots, Np\}$, all different from the base index $i$, and they are generated anew for each donor vector. The scaling factor $F$ is a positive control parameter for scaling the difference vectors. $\vec{X}_{best,G}$ is the individual with the best fitness (i.e., the highest objective function value for a maximization problem) in the population at generation $G$. The general convention for naming the various offspring-generation strategies of DE is DE/x/y/z, where x is a string denoting the vector to be perturbed, y is the number of difference vectors used to perturb x, and z stands for the type of crossover being used (exp: exponential; bin: binomial).

3) Crossover: The donor vector mixes its components with the target vector $\vec{X}_{i,G}$ under the crossover operation to form a trial vector of the same index, denoted as $\vec{U}_{i,G} = [u_{i,G}^1, u_{i,G}^2, \ldots, u_{i,G}^D]$. The DE family of algorithms primarily uses two kinds of crossover schemes: exponential (or two-point modulo) and binomial (or uniform) [2]. The binomial crossover scheme is briefly explained below since it is used in the proposed algorithm. Under this scheme the trial vector is created as follows:

$$u_{i,G}^j = \begin{cases} v_{i,G}^j & \text{if } \mathrm{rand}_i^j(0, 1) \le Cr \text{ or } j = j_{rand} \\ x_{i,G}^j & \text{otherwise} \end{cases} \tag{5}$$

where $Cr$ is a user-specified parameter (the crossover rate) in the range $[0, 1)$ and $j_{rand} \in \{1, 2, \ldots, D\}$ is a randomly chosen index that ensures the trial vector $\vec{U}_{i,G}$ differs from its corresponding target vector $\vec{X}_{i,G}$ by at least one component.
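Binomial crossover (5) is a few lines of Python (a sketch; the function name is ours):

```python
import random

def binomial_crossover(target, donor, cr, rng=random):
    """Binomial (uniform) crossover (5): each trial component comes from the
    donor with probability Cr; the j_rand component always does, so the trial
    vector differs from the target in at least one position."""
    dim = len(target)
    j_rand = rng.randrange(dim)
    return [donor[j] if (rng.random() <= cr or j == j_rand) else target[j]
            for j in range(dim)]
```

With $Cr = 0$ only the $j_{rand}$ component is inherited from the donor; with $Cr$ close to 1 almost the entire donor survives into the trial vector.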


4) Selection: The next step of the algorithm calls for selection to determine which of the target and trial vectors survives to the next generation, i.e., at G = G + 1. For a maximization problem, if the objective function value of the trial vector is not less than that of the corresponding target vector, the trial vector is selected for the next generation; otherwise the target vector is retained. Obviously, for a minimization problem the condition for selection is just the opposite.
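Putting the four steps together, a minimal classical DE/rand/1/bin loop for a minimization problem can be sketched as follows (an illustrative sketch on the sphere function, not the proposed DDEBQ; all names and parameter values are ours):

```python
import random

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def de_minimize(f, bounds, np_size=20, F=0.5, cr=0.9, generations=300, seed=0):
    """One-population classical DE/rand/1/bin for minimization: greedy
    selection keeps the trial vector only if it is not worse than the target."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    # initialization (1)
    pop = [[lo[j] + rng.random() * (hi[j] - lo[j]) for j in range(dim)]
           for _ in range(np_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(np_size):
            # mutation (2): DE/rand/1
            r1, r2, r3 = rng.sample([k for k in range(np_size) if k != i], 3)
            donor = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # binomial crossover (5)
            j_rand = rng.randrange(dim)
            trial = [donor[j] if (rng.random() <= cr or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            # greedy selection (minimization: trial survives if not worse)
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(np_size), key=lambda i: fit[i])
    return pop[best], fit[best]

x, fx = de_minimize(sphere, ([-5.0] * 3, [5.0] * 3))
assert fx < 1e-2   # converges close to the global minimum at the origin
```

On a static unimodal landscape like this, the population contracts quickly; the paper's point is precisely that this contraction becomes a liability once the optimum starts moving.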

B. Dynamic Optimization With DE—Brief Overview

Since the late 1990s, DE has received attention from DOP researchers. Mendes and Mohais presented DynDE [8], a multipopulation DE algorithm developed specifically to optimize slowly time-varying objective functions. In DynDE, the diversity of the population is maintained in two ways: first, by reinitializing a population if its best individual moves too close to the best individual of another population and, second, by randomizing one or more population vectors through the addition of a random deviation to their components. The authors showed that DynDE is capable of solving the Moving Peaks Benchmark (MPB) problems efficiently. Brest et al. [9] investigated a self-adaptive DE algorithm (jDE), where the control parameters F and Cr are self-adapted and a multipopulation method with an aging mechanism is used to improve performance on DOPs. This algorithm ranked first in the competition on "Evolutionary Computation in Dynamic and Uncertain Environments" under IEEE CEC 2009. Some other interesting research efforts on modifying DE for optimization in dynamic environments can be found in [10]-[13]. Recently, Halder et al. [14] proposed a multipopulation DE for solving DOPs. In this proposal, the entire population is partitioned into several clusters according to the spatial locations of the trial solutions. The clusters are evolved separately using a standard DE algorithm. The number of clusters is an adaptive parameter whose value is updated after a certain number of iterations.

Various niching strategies [15] have been proposed by EA researchers to adapt an EA for detecting and maintaining multiple optima over a multimodal functional landscape. Niching also helps to preserve population diversity during the course of an EA and to track moving peaks in dynamic optimization. DE has been modified to induce efficient niching behavior on multimodal landscapes in some prominent works, such as bi-objective DE with mean distance-based selection [16], crowding-based DE [17], and DE-based multimodal optimization using the principle of locality [18]. Parrott and Li [19] used the speciation technique to track multiple peaks in a dynamic environment. Subsequently, in 2006, Li et al. [20] used speciation-based particle swarm optimization (SPSO) to tackle DOPs through detection and response. The method is designed for problems where the number of peaks is largely unknown. Lung and Dumitrescu [21] used crowding DE to maintain diversity and combined it with PSO in collaborative evolutionary-swarm optimization (CESO) to solve dynamic optimization problems. In 2009, Lung and Dumitrescu [22] further improved and extended their work by introducing one more crowding DE population that acted as a memory for the main population. However, most of the dynamic niching techniques necessitate the use of niching parameters, such as the niching radius or the crowding factor, which in turn require prior knowledge about the functional landscape for proper tuning [15]. This may lead to poor performance on complicated dynamic functions like those designed with the GDBG system.

III. DDEBQ Algorithm

A. Dynamic DE Scheme

In order to maintain the diversity of the population to a larger extent, DDEBQ introduces adaptive quantum and Brownian individuals alongside the DE individuals in the population. These quantum and Brownian individuals do not follow the same rules as the DE individuals. Within a subpopulation, two individuals are randomly chosen at each generation: the quantum individual generation rules are applied to one of them and the Brownian individual generation rules to the other. If one of the chosen individuals happens to be the best individual of that subpopulation, the choice is discarded and another individual is randomly picked for the Brownian or quantum individual generation process.

1) Quantum Individuals: In quantum mechanics, due to the uncertainty in position measurement, the position of a particle is defined probabilistically. This idea is used here to generate individuals within a specific region around the local best position. The steps for stochastically generating an individual whose position lies inside a hypersphere of radius $R$ centered at the local best position $\vec{L}_b$ can be outlined as follows.

1) Generate a radial distance randomly from a uniform distribution within the range $(0, R)$: $r \sim U(0, R)$, which implies $0 \le r \le R$.

2) Generate a vector with each component sampled at random from a normal distribution with zero mean and unit variance: $\vec{X} = [x^1, x^2, \ldots, x^D]$, $x^d = N(0, 1)$, where $1 \le d \le D$ and $N(\mu, \sigma)$ denotes the normal distribution with mean $\mu$ and standard deviation $\sigma$.

3) Compute the distance of the vector from the origin: $\|\vec{X}\| = \sqrt{\sum_{d=1}^{D} (x^d)^2}$.

4) The new quantum individual's position is

$$\vec{X}_q = \vec{L}_b + \left( \frac{r}{\|\vec{X}\|} \right) \vec{X}. \tag{6}$$

In DDEBQ, the radius $R$ within which the quantum individuals are generated is adaptive in nature, i.e., it is automatically updated according to certain conditions as the search progresses. The adaptation of $R$ is explained in Section III-E as it depends on the control parameter $C$.
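The four steps above can be sketched directly in Python (a minimal illustration; the function name is ours):

```python
import math
import random

def quantum_individual(local_best, radius, rng=random):
    """Generate a quantum individual inside a hypersphere of radius R
    centered at the local best position, following steps 1)-4) and (6)."""
    dim = len(local_best)
    r = rng.uniform(0.0, radius)                    # step 1: radial distance
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # step 2: random direction
    norm = math.sqrt(sum(v * v for v in x))         # step 3: distance from origin
    return [local_best[d] + (r / norm) * x[d] for d in range(dim)]  # step 4, (6)
```

Normalizing the Gaussian vector by its own length and rescaling by $r$ places the new individual at distance exactly $r \le R$ from $\vec{L}_b$, in a uniformly random direction.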

2) Brownian Individuals: Brownian motion describes the random movement of particles suspended in a fluid. In mathematics, Brownian motion is modeled by the Wiener process (a continuous-time stochastic process named in honor of Norbert Wiener). The Wiener process $W_t$ is characterized by the following three facts: 1) $W_0 = 0$; 2) $W_t$ is almost surely continuous; and 3) $W_t$ has independent increments with distribution $W_t - W_s \sim N(0, t - s)$ for $0 \le s \le t$. DDEBQ employs a very simple method to simulate Brownian motion: new individuals are generated within a Gaussian hyper-ellipsoid centered at the local best position. If the local best position is $\vec{L}_b$, then the new Brownian individual's position is

$$\vec{X}_B = \vec{L}_b + \vec{\Delta} \tag{7}$$

where the Gaussian perturbation vector $\vec{\Delta} = [\Delta^1, \Delta^2, \ldots, \Delta^D]$, $\Delta^d = N(0, \sigma)$ with $1 \le d \le D$, and $\sigma$ is the standard deviation of the multivariate normal distribution from which each component of the perturbation is randomly sampled. Here, the value $\sigma = 0.2$ is used, following [8] and considering that this value gave the best results for most of the tested benchmark instances.
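Equation (7) amounts to adding an independent Gaussian perturbation per dimension (a sketch; the function name is ours):

```python
import random

def brownian_individual(local_best, sigma=0.2, rng=random):
    """Generate a Brownian individual (7): the local best position plus a
    component-wise Gaussian perturbation with standard deviation sigma
    (0.2 as used in the paper, following [8])."""
    return [lb + rng.gauss(0.0, sigma) for lb in local_best]
```

Unlike the quantum individuals, which are bounded by the radius $R$, the Gaussian perturbation is unbounded but concentrated: most samples land within a few multiples of $\sigma$ of the local best.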

We would like to mention here that Wong et al. [23] proposed a niching algorithm where, in the species-specific exploration stage, random individuals are generated to maintain the diversity of the population for detecting multiple peaks on a static landscape. However, the proposed Brownian and quantum individual generation schemes are considerably different from what was done in [23].

3) DE Individuals: These individuals evolve following the standard DE algorithm. The donor vectors are generated through a new mutation scheme that is detailed below; however, these individuals follow the same binomial crossover and selection process as the standard DE algorithm.

B. Double Mutation Strategy

In a dynamic environment, if the population is concentrated around the global optimum, the individuals lose their ability to detect the global optimum again when its position changes. Thus, the idea here is to control the perturbation so as to slow down the search process and keep the subpopulations evenly distributed over the entire search space. An exclusion rule, discussed in Section III-C, is employed to meet the second objective. For the first objective, DDEBQ follows a double mutation scheme, conceptually motivated by the work of Das et al. [24] in a different context. Under this scheme, a mutant vector is first generated according to a neighborhood-based mutation scheme, and the final donor vector is then produced as a linear combination of this mutant vector with the local best vector of the corresponding subpopulation, formed through a constant weight factor.

1) Neighborhood-Based Mutation Strategy: In order to overcome the limitations of the fast but less reliable convergence characteristics of DE/current-to-best/1/bin, some changes are introduced in the process of generating the difference vectors. For the first difference vector, the original scheme uses the difference between the global best individual and the current individual; in the modified scheme, the difference between the nearest memory individual and the current individual is used instead. The memory archive contains a collection of the best individuals from the previous subpopulations. This modification is made to control the convergence of the population toward global optima and to encourage the subpopulations to explore the vicinity of their corresponding local best positions. For the second difference vector, instead of taking the difference between two randomly chosen individuals, DDEBQ uses the difference between the best individual and the worst individual in the neighborhood with respect to the current individual. This modification is likely to guide the mutant vector to explore the neighborhood of the current individual within the subpopulation.

Note that the concept of neighborhood in [24] is based solely on the index graph of the DE vectors: two vectors are neighbors if they have adjacent indices, even though they may not be adjacent geographically or according to fitness values. In this paper, the neighborhoods bear a completely different meaning, as will be evident from the following discussion. The first mutation can be expressed as

$$v_{mut,G}^j = x_{i,G}^j + F_{mem}^j \cdot (x_{mem,G}^j - x_{i,G}^j) + F_{bw}^j \cdot (x_{n\_best,G}^j - x_{n\_worst,G}^j) \tag{8}$$

where $j \in \{1, 2, \ldots, D\}$ and $x_{i,G}^j$ is the $j$th component of the current vector $\vec{X}_{i,G}$. Similarly, $x_{n\_best,G}^j$ is the $j$th component of $\vec{X}_{n\_best,G}$, the best vector in the neighborhood with respect to the current vector: it is the vector within the corresponding subpopulation for which

$$\frac{1}{r_{ik}} \left( \frac{f(\vec{X}_{k,G})}{f(\vec{X}_{i,G})} - 1 \right), \quad k = 1, 2, \ldots, m, \; k \ne i$$

(where $m$ is the number of individuals in the subpopulation) is maximum. Here, $r_{ik}$ is the Euclidean distance between the vectors $\vec{X}_{i,G}$ and $\vec{X}_{k,G}$. $x_{n\_worst,G}^j$ denotes the $j$th component of $\vec{X}_{n\_worst,G}$, the worst vector in the neighborhood with respect to the current vector; for this vector,

$$\frac{1}{r_{ik}} \left( 1 - \frac{f(\vec{X}_{k,G})}{f(\vec{X}_{i,G})} \right), \quad k = 1, 2, \ldots, m, \; k \ne i$$

is maximum among all individuals within the subpopulation. $x_{mem,G}^j$ denotes the $j$th component of the memory individual nearest to $\vec{X}_{i,G}$ in terms of Euclidean distance (memory individuals are the best individuals from the previous subpopulations).

During the process of generating the mutant vector, for each dimension of each difference vector, the respective scaling factors are randomly generated from a uniform distribution within a range, and this range varies inversely with the magnitude of the difference vector along the corresponding dimension. DDEBQ generates the scaling factors for each $j$th component in the following way:

$$F_{mem}^j = 0.3 + 0.7 \cdot \mathrm{rand}_i^j[0, 1] \cdot \left( 1 - \frac{|x_{mem,G}^j - x_{i,G}^j|}{|SR^j|} \right) \tag{9a}$$

$$F_{bw}^j = 0.3 + 0.7 \cdot \mathrm{rand}_i^j[0, 1] \cdot \left( 1 - \frac{|x_{n\_best,G}^j - x_{n\_worst,G}^j|}{|SR^j|} \right) \tag{9b}$$

where $|SR^j|$ is the search range corresponding to the $j$th dimension. Clearly, as the difference increases, i.e., approaches $|SR^j|$, the value of the scaling factor reduces toward 0.3. Zaharie [25] suggested that values of $F$ satisfying the equation $2F^2 - 2/m + Cr/m = 0$ can be considered critical. Here, $m$ is the number of individuals in a subpopulation; in the case of a single-population algorithm, $m$ should be replaced by $Np$. In DDEBQ, $Cr$ is kept constant at 0.9 and $m$ is six. Putting these values into the equation, the critical value for the scaling factor $F$ becomes 0.285. Therefore, the lowest value of the scaling factor is set to 0.3 for convenience. However, the above equation applies only to the DE/rand/1/bin scheme (2); for the DE variants involving best individuals, the expression describing the influence of $F$ and $Cr$ becomes more complicated. The above equation is used here only to provide an indication of the actual critical value of $F$; it is not meant to give a precise estimate.

2) Second-Stage Mutation: A linear combination of the mutant vector from the first stage of mutation and the local best vector is formed using a weight factor. This way, the local best vector is perturbed in a controlled manner. The second mutation can be expressed as

V_final,G = (1 − ω) · L_b,G + ω · V_mut,G    (10)

where L_b,G is the local best vector, i.e., the best vector of the corresponding subpopulation, V_mut,G is the mutant vector generated from the first-stage mutation, and ω is the weight factor.
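The second-stage mutation of (10) is a simple convex blend; a minimal sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def second_stage_mutation(local_best, v_mut, omega):
    """Eq. (10): perturb the subpopulation's best vector toward the
    first-stage mutant by the weight factor omega in [0, 1]."""
    return (1.0 - omega) * np.asarray(local_best) + omega * np.asarray(v_mut)
```

With omega = 0 the local best is left untouched; with omega = 1 the first-stage mutant is used as-is.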

C. Exclusion Rule

In DDEBQ, an exclusion rule is employed to ensure that different subpopulations are located around different basins of attraction. However, this rule is slightly different from the existing one [8], as it uses a new empirical formula to calculate the marginal distance between two subpopulations. The strategy is to calculate the Euclidean distance between the best individuals of every pair of subpopulations at each generation. If the distance between the best individuals of any two subpopulations falls below a marginal value, then the subpopulation whose best individual has the lower objective function value (i.e., the worse fitness for a maximization problem) is marked for reinitialization. The marginal value of the distance is calculated according to the following rule:

If there are D dimensions with search range SR and there are Nsub subpopulations, then the marginal value for the distance is

Dis_marginal = SR / (Nsub · D).    (11)

Here, the idea is to partition the search space almost equally among the Nsub subpopulations. Note that the DynDE [8] algorithm uses the linear diameter of the basin of attraction as an indicator for this exclusion radius. Unlike DynDE's exclusion scheme [(1) of [8]], the formula given in (11) does not make the implicit assumption that the peaks are evenly distributed in the search space. It also eliminates the need to know the number of peaks of the objective function beforehand.
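The exclusion test described above can be sketched as follows (a minimal illustration; all names are ours):

```python
import numpy as np

def exclusion_reinit_indices(best_vectors, best_fitness, search_range, dim,
                             maximize=True):
    """Mark subpopulations for reinitialization when the best individuals
    of two subpopulations come closer than the marginal distance of
    Eq. (11). Returns the set of subpopulation indices to reinitialize."""
    n_sub = len(best_vectors)
    marginal = search_range / (n_sub * dim)  # Eq. (11)
    to_reset = set()
    for i in range(n_sub):
        for j in range(i + 1, n_sub):
            if np.linalg.norm(best_vectors[i] - best_vectors[j]) < marginal:
                # Reinitialize the subpopulation with the worse best individual.
                if maximize:
                    worse = i if best_fitness[i] < best_fitness[j] else j
                else:
                    worse = i if best_fitness[i] > best_fitness[j] else j
                to_reset.add(worse)
    return to_reset
```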

D. Aging Mechanism

DDEBQ employs a simple aging mechanism to get rid of individuals stagnating at some local optimum. Algorithm 1

Algorithm 1 Algorithm for Aging Mechanism (considering the jth individual of the ith subpopulation)

1. if the ith subpopulation contains the global best individual, then do not perform the aging mechanism on the subpopulation.
2. else if the jth individual is the best individual in the ith subpopulation, then Age_best(i, j) = Age_best(i, j) + 1.
   if Age_best(i, j) ≥ 30, then reinitialize the ith subpopulation and reset the Age_best(i, :) and Age_worst(i, :) entries to 0.
3. else if the jth individual is the worst individual in the ith subpopulation, then Age_worst(i, j) = Age_worst(i, j) + 1.
   if Age_worst(i, j) ≥ 20, then reinitialize the individual and reset the Age_worst(i, j) entry to 0, leaving the other members of the subpopulation intact.
4. else reset Age_worst(i, j) and Age_best(i, j) of the jth individual to 0.

shows a schematic procedure to implement the aging mechanism. Age_best and Age_worst are two matrices with dimensions (Nsub, m), m being the number of individuals per subpopulation. The (i, j)th entry of the Age_best matrix represents how many times consecutively the jth individual of the ith subpopulation has been the best individual of that subpopulation. In the same way, the (i, j)th entry of the Age_worst matrix represents how many times consecutively the jth individual of the ith subpopulation has been the worst individual of that subpopulation. If an individual is reinitialized owing to its consistently bad performance, then the corresponding entry of the Age_worst matrix is reset to 0 but the Age_best matrix remains unaltered. If a subpopulation is reinitialized due to stagnation at some local optimum, then the corresponding row entries of the Age_best and Age_worst matrices are all reset to 0. The reinitialization is done randomly, covering the entire search space.

Aging is a heuristic method whose objective is to reinitialize individuals that may be trapped at some local optimum. Beyond the experimental results, it is difficult to justify the choice of the thresholds. They should be set in such a fashion that reinitialization occurs only when stagnation is heuristically sensed. In DDEBQ, the aging thresholds are set to 30 and 20 for Age_best and Age_worst, respectively, through a series of experiments carried out on the available benchmarks. A lower aging threshold will mean more reinitializations that might be unnecessary, whereas a high aging threshold will mean more wastage of FEs to achieve the same level of accuracy.
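Algorithm 1 can be sketched in code as follows (a minimal illustration with hypothetical bookkeeping; the actual reinitialization of vectors is left to the caller, which acts on the returned flag):

```python
import numpy as np

def aging_step(age_best, age_worst, i, j, holds_global_best, is_best, is_worst):
    """One aging update for the j-th individual of the i-th subpopulation,
    following Algorithm 1 with thresholds 30 and 20. Returns 'none',
    'reinit_subpop', or 'reinit_individual'."""
    if holds_global_best:
        return 'none'            # never age the subpopulation with the global best
    if is_best:
        age_best[i, j] += 1
        if age_best[i, j] >= 30:
            age_best[i, :] = 0   # the whole subpopulation restarts
            age_worst[i, :] = 0
            return 'reinit_subpop'
    elif is_worst:
        age_worst[i, j] += 1
        if age_worst[i, j] >= 20:
            age_worst[i, j] = 0  # only this individual restarts
            return 'reinit_individual'
    else:
        age_best[i, j] = 0       # neither best nor worst: counters reset
        age_worst[i, j] = 0
    return 'none'
```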

E. Adaptation of Control Parameter and Radius of Generating Quantum Individuals

In order to actuate the diversity within the subpopulations, a control parameter is introduced in DDEBQ. Depending on the conditions, this parameter can take any value among 0, 1, and 2. This parameter, denoted by C, helps the search process achieve better convergence characteristics. If C becomes one, then the quantum individuals are not generated; if C becomes two, then the Brownian individuals are not


generated; and if C becomes 0, then the search process progresses in the normal way, i.e., with quantum, Brownian, and DE individuals.

As mentioned previously, when C is 0, the algorithm generates all the individuals (DE, Brownian, and quantum) to maintain the diversity at a higher level. When the population concentrates around a global best position before the occurrence of a dynamic change, as determined by the control parameter C, the diversity should be reduced separately within each of the subpopulations to ensure high precision in locating the global optima. If the diversity of the individuals in each of the subpopulations is reduced separately, then, irrespective of the whole population's diversity, the subpopulation containing the global best individual converges to the global best position and the algorithm is likely to achieve a high degree of accuracy. In this way, while preserving the population diversity as a whole, DDEBQ can also obtain high-quality solutions. This is possible because the subpopulations are located at different regions of the search space due to the exclusion rule described earlier. In DDEBQ, the diversity is reduced in two steps: first by stopping the generation of quantum individuals, and then by stopping the generation of Brownian individuals and starting the generation of quantum individuals. As quantum individuals are likely to possess less diversity than Brownian individuals [26], after the second step the diversity is expected to decrease further.

The value of C is chosen in the following way. First, the difference of the global best objective function values before and after the first update interval (UI) generations is defined as PR. From this point onward, if the global best objective function values over UI generations have a difference greater than PR, then the current value of PR is replaced by this new value. If the difference becomes less than (PR/10) but greater than (PR/50), then C is set to one. If the difference is less than (PR/50), then C is set to two. A value as low as indicated by (PR/50) suggests that the algorithm has not experienced severe exploration in the last UI generations and can be concluded to be incisively searching around a possible optimum. A higher value, greater even than (PR/10), indicates that the algorithm is in its explorative phase. A moderate value within these extremes can indicate an algorithm in its balanced explorative and exploitative phase. With respect to these values, the control parameter C can be determined, which steers the dynamics of the search process in DDEBQ by controlling the generation of the Brownian and quantum individuals. The strategy for adapting the control parameter C is presented as Algorithm 2.

Note that the adaptation of C depends on monitoring the progress of the search (in terms of the frequency of variation of the globally best individual) at regular intervals, and this bears some conceptual resemblance to the cooling schedules used in adaptive simulated annealing (SA) algorithms [27]. For example, the cooling schedule in hide-and-seek SA [28] depends on the best objective function values obtained up to a certain number of generations and an estimation of the unknown global optimum after the same number of generations. The performance is monitored after a specific UI (defined in

Algorithm 2 Algorithm for Control Parameter (C) Adaptation

1. Initialize generation counter G = 0 and calculate initial Gbest_fit0 = f(X_best,0); set C = 0.
   // X_best,G is the globally best solution at generation G and f(.) is the function under test.
2. Initialize counter k = 1.
3. Start Loop.
4. Carry out the optimization steps of DDEBQ.
5. if mod(G, UI) == 0
6.   Calculate new Gbest_fitk = f(X_best,G).
7.   Calculate PRk as: PRk = |Gbest_fitk − Gbest_fitk−1|.
8.   if PRk > PRk−1
9.     Update PR.
10.  if (PRk − PRk−1) < PRk−1/50, C = 2.
11.  else if PRk−1/50 < (PRk − PRk−1) < PRk−1/10, C = 1.
12.  else C = 0.
13.  k = k + 1.
14. G = G + 1.
15. if the termination condition is satisfied, break Loop; else return to Step 4.

the terminology list of the appendix). Based on the rate of change of the globally best solution during the intervals of UI generations, the diversity is controlled by generating either Brownian or adaptive quantum individuals, or both. Hence, selecting a proper value for the frequency of update is decisive to the performance of the algorithm. If the test is conducted too frequently, i.e., UI is very low, the search agents may not get enough scope to thoroughly explore the space. On the other hand, if UI is relatively high with respect to the frequency of occurrence of the dynamic changes, the detection of the proper stages of optimization may be hampered. For GDBG problems, where E is 100000 FEs, UI = 20 gives optimal performance. In fact, the performance of the algorithm is not sensitive to UI values lying in the range of 15 to 35 and remains more or less consistent on different benchmarks from the GDBG suite. Our simulation experiments (not reported in the paper for space economy) indicate that a lower value of UI is suitable for lower change intervals, while a higher value of UI is suited to higher change intervals. From our detailed empirical study, a value of UI = 20 can provide optimized performance over a wide range of functions. Note that a UI of 20 is in the same range as the aging thresholds for Age_best and Age_worst. In the experimentation part, UI is fixed to 20 for all benchmark instances and no problem-specific tunings were allowed.
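The C-selection rule, as described in the prose above (comparing the latest improvement against PR/10 and PR/50 and updating PR when a larger change is seen), can be sketched as follows; function and variable names are ours:

```python
def adapt_control_parameter(prev_gbest_fit, new_gbest_fit, pr):
    """Choose C from the change in the global best objective value over
    the last UI generations, per the prose rule: below PR/50 -> C = 2,
    between PR/50 and PR/10 -> C = 1, otherwise C = 0. PR (the largest
    change seen so far) is updated as well. Returns (C, PR)."""
    diff = abs(new_gbest_fit - prev_gbest_fit)
    pr = max(pr, diff)
    if diff < pr / 50:
        c = 2   # stop Brownian individuals, generate quantum individuals
    elif diff < pr / 10:
        c = 1   # stop quantum individuals
    else:
        c = 0   # full diversity: DE + Brownian + quantum individuals
    return c, pr
```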

Adaptation of the radius R for generating the quantum individuals depends on C. If C is 0, then R is set to one. If C is two, then R changes according to the following rule, depending upon the difference (Diff) of the global best objective function values before and after UI generations:

R = Diff · log10(10 + PR / (50 · Diff)).    (12)
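Equation (12) is straightforward to compute; a minimal sketch (names are ours):

```python
import math

def quantum_radius(diff, pr):
    """Eq. (12): radius for generating quantum individuals when C == 2.
    diff is the change in the global best objective value over the last
    UI generations; pr is the largest such change recorded so far."""
    return diff * math.log10(10.0 + pr / (50.0 * diff))
```

The log term keeps the radius close to diff when pr is small relative to diff and inflates it only gradually as pr grows.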


F. Dynamic Dimensional Change Addressing

In addition to changing the search landscape and thereby changing the functional values of the individuals, some challenging benchmark problems, such as GDBG with change type T7 [7], are accompanied by altering height, width, and position of the optima, varying dynamics in orientation, scalability of the problem, as well as a change of dimensionality after a specific number of FEs. In that case, the algorithm needs to detect whether a dimensional change has occurred. The objective function of GDBG changes the dimension by some rules within a limit. It modifies the current solution vector by adding or deleting dimensions and returns the changed dimension of the problem along with the modified solution vector. Hence, the occurrence of a dimensional change can be detected by examining the test solution vector returned by the cost function in every generation. Whenever the dimension of the new test solution returned by the cost function does not match the dimension of the previous one, it can be inferred that a dimensional change has occurred in the environment. If the dimension is increased by one, then an extra dimension is added to the other individuals within the population. The values of the extra dimensions of the individuals are randomly sampled from a uniform distribution within the corresponding bounds of the search space. If the dimension is decreased by one, then the additional dimension of the other individuals within the population is eliminated.
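The dimension-matching step can be sketched as follows (a minimal illustration under our own naming):

```python
import numpy as np

def match_dimension(population, new_dim, lower, upper,
                    rng=np.random.default_rng()):
    """If the dimension reported by the cost function grew, pad every
    individual with components drawn uniformly within the bounds; if it
    shrank, truncate the extra dimensions."""
    cur_dim = population.shape[1]
    if new_dim > cur_dim:
        extra = rng.uniform(lower, upper,
                            size=(population.shape[0], new_dim - cur_dim))
        population = np.hstack([population, extra])
    elif new_dim < cur_dim:
        population = population[:, :new_dim]
    return population
```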

G. Complexity Issues—Empirical Discussion

Apart from the computational burden of evaluating the objective function (measured in terms of the number of FEs), another aspect of the complexity of the algorithm arises from the calculation of Euclidean distances between the current individual and the memory individuals during construction of the mutant vector for the current individual. This is because computing the Euclidean distances can demand a considerable amount of processor time. If each subpopulation contains m individuals and the total population size is denoted by Np, then the number of subpopulations is Nsub = Np/m. As the memory archive contains the best individual from each subpopulation, the number of memory individuals is also Np/m. Hence, the total number of Euclidean distance evaluations in one generation is the total population size times the number of distance evaluations per individual, i.e., Np^2/m. As can be observed, if the number of individuals in each subpopulation is increased, i.e., as the multipopulation scheme approaches a single-population scheme, this part of the complexity decreases, but the effectiveness of having multiple subpopulations is lost. On the other hand, if the number of subpopulations is increased, the number of individuals in each subpopulation decreases and the complexity of the algorithm increases (gradually approaching O(Np^2)), but the effectiveness of the multipopulation scheme increases. Note that, according to Yang and Li [29], the timing complexity of the clustering operation in the clustering PSO algorithm, which also relies heavily on Euclidean distance calculations, is O(Np^2), Np being the initial population size.

H. Repairing Rule

For every newly generated individual (whether it is a DE, Brownian, or adaptive quantum individual), the algorithm checks whether any component of the new individual is outside the bounds. If any component is outside the bounds, then it is randomly reinitialized by sampling from a uniform distribution within the bounds as per (1).
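The repairing rule amounts to component-wise resampling; a minimal sketch (names are ours):

```python
import numpy as np

def repair(individual, lower, upper, rng=np.random.default_rng()):
    """Repairing rule: any component outside [lower, upper] is replaced
    by a fresh sample drawn uniformly within the bounds; in-bound
    components are left untouched."""
    x = np.asarray(individual, dtype=float).copy()
    out = (x < lower) | (x > upper)
    x[out] = rng.uniform(lower, upper, size=int(out.sum()))
    return x
```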

IV. Experimental Settings

A. Benchmark Problems

The CEC 2009 benchmark problems for dynamic optimization were generated using the GDBG system proposed in [7], which constructs dynamic environments for the location, height, and width of peaks. Li et al. [7] introduced a rotation method instead of shifting the positions of peaks as done in the MPB [30] problems. The GDBG system poses greater challenges for optimization than the MPB problems due to the rotation method, larger number of local optima, and higher dimensionalities. There are seven change types for each test function in the GDBG system: small step change, large step change, random change, chaotic change, recurrent change, recurrent change with noise, and dimensional change.

The test functions in the real-space instance are as follows:

F1: rotation peak function;
F2: composition of sphere functions;
F3: composition of Rastrigin's functions;
F4: composition of Griewank's functions;
F5: composition of Ackley's functions; and
F6: hybrid composition function.

Only F1 is a maximization problem; the others are minimization problems. For F1, there are two tests, one using 10 peaks and another using 50 peaks.

B. Simulation Strategies

The simulation environment (hardware and software) used for carrying out the experiments described in the subsequent sections can be summarized as CPU: 3.2 GHz Intel Core i5, RAM: 2 GB DDR3, and MATLAB 2009b edition. The performance of DDEBQ is measured in terms of the mean error [7] and the adaptability metric [31] obtained in 20 independent runs. The mean error is calculated according to the following expression [7]:

E_mean = (1 / (runs · num_change)) · Σ_{i=1}^{runs} Σ_{j=1}^{num_change} E^last_{i,j}.    (13)

Here, runs is the total number of runs, num_change is the number of dynamic changes that occur during each independent run, and E^last_{i,j} is the error recorded before the jth dynamic change of the ith independent run. Note that the error E^last corresponds to the absolute fitness difference between the best solution found by an EA (before a landscape change) and the known best solution (for that landscape), i.e., E^last(t) = |f(X_best(t)) − f(X*(t))|. In all result tables, the best results are marked in boldface.

TABLE I: Experimentally Determined Best Parametric Settings for DDEBQ

The adaptability metric measures the difference between the value of the current best individual of each generation and the optimum value, averaged over the entire run, and can be expressed as

Ada = (1 / num_change) · Σ_{i=1}^{num_change} [ (1/τ) · Σ_{j=0}^{τ−1} err_{i,j} ]    (14)

where τ is the number of generations between changes, during which the environment remains static, and err_{i,j} denotes the absolute difference between the fitness value of the best individual in the population at the jth generation after the ith change and the optimum value for the fitness landscape after the ith change. Evidently, for both the mean error and the adaptability, the smaller the measured values, the better the result.
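Both metrics reduce to nested averages over the recorded errors; a minimal sketch, assuming the errors are stored in rectangular arrays (names are ours):

```python
import numpy as np

def mean_error(e_last):
    """Eq. (13): e_last is a (runs x num_change) array of errors recorded
    just before each dynamic change; the metric is their overall mean."""
    return float(np.mean(e_last))

def adaptability(err):
    """Eq. (14): err is a (num_change x tau) array where err[i, j] is the
    error of the generation-j best individual after the i-th change; the
    metric averages per-change generation means."""
    return float(np.mean(np.mean(err, axis=1)))
```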

For the comparative studies, a nonparametric statistical test, the Wilcoxon rank sum test for independent samples [32], is conducted at the 5% significance level in order to judge the statistical significance of the best results obtained in each experimental scenario. The statistical test results are indicated within parentheses throughout all the result tables as "+", "-", or "≈" when the result of DDEBQ is statistically significantly better than, worse than, or statistically equivalent to the corresponding result, respectively. The rank sum test is conducted between the results of DDEBQ and the other dynamic EAs considered.

C. Parameter Settings

Table I lists the parametric values that keep the performance of DDEBQ considerably good over a wide range of benchmarks. Please refer to the supplementary document for a detailed account of the simulation experiments that empirically validate these values. Once set, the same parameter values are used for DDEBQ on all the benchmark instances of GDBG in Section V, where the performance of the algorithm is compared with some of the best-known evolutionary DOP solvers. No function-dependent tuning of the parameters is allowed anywhere for DDEBQ.

V. Results and Discussions

This section presents a comparative study of the performance of DDEBQ with several other state-of-the-art evolutionary dynamic optimizers on the GDBG benchmarks. The performance of DDEBQ is compared with the following seven algorithms using the benchmark suite of the GDBG system: the differential ant-stigmergy algorithm (DASA) [33], jDE [9], DynDE [8], dopt-aiNet [34], CPSO [29], CESO [21], and PSO with composite particles (PSO-CP) [35]. DASA is based on the classical ant colony optimization methods. CPSO uses a hierarchical clustering method to locate and track multiple peaks. In addition, CPSO incorporates a fast local search method to search for optimal solutions in a promising subregion found by the clustering method. PSO-CP uses the idea of composite particles from physics to maintain the diversity of the population through a scattering operator. dopt-aiNet introduces a set of complementary mutation operators and a better mechanism to maintain the diversity of solutions in the original opt-aiNet [36] algorithm, which was meant for solving static multimodal function optimization problems.

For the competitor algorithms, the best parametric setup is employed in accordance with their respective literature. An identical experimental condition, guided by the technical report [7], is maintained for all the algorithms compared. Tables II and III provide the simulation results obtained over all the test cases mentioned in [7] by DDEBQ and the seven other algorithms, in terms of the mean best-of-the-run error values and the adaptability metric values achieved over 20 independent runs. The tables also show the average runtime (in seconds) consumed by all the algorithms compared. Sample convergence graphs are provided for functions F1 (number of peaks = 10), F2, F3, F4, F5, and F6 with change type T7 over 300,000 FEs in Fig. 1. In this case, the dimension of the search space changes when the dynamic change occurs. This change type is similar to T3 (random change) except for the dimensional change. The y-axis of these plots shows the relative value r(t), calculated as f(X_best(t))/f(X*(t)) for function F1 (as it is a maximization problem) and as f(X*(t))/f(X_best(t)) for the other functions. The highest possible value of r(t) is one. As can be observed from the convergence graphs, the relative value is lowest for function F3 and highest for function F1. The convergence characteristics also indicate that, as each dynamic change occurs, the relative value r(t) undergoes a sharp drop.

A close scrutiny of Tables II and III reveals that DDEBQ outperforms all seven evolutionary dynamic optimizers in a statistically significant fashion over 36 of the 49 test instances. It yielded statistically inferior results compared to any one of the competitor algorithms in four test instances and statistically equivalent results with one or two competitors over the remaining nine instances. For function F3, the jDE algorithm could attain lower best error values than DDEBQ over change types T2, T4, and T6. However, the results of the Wilcoxon rank sum test reveal that for change types T1, T3, T5, and T7, the differences between the results of jDE and DDEBQ are not statistically significant. DDEBQ also exhibited a statistically better performance than the other six EAs for all the change


TABLE II: Mean Error Values, Ada Metric Values, and Average Runtime (in Seconds) Achieved by the Algorithms Compared for Test Functions F1-F3 of the GDBG System. Wilcoxon Rank Sum Test Results of Comparing DDEBQ With the Contender Algorithms Are Indicated in Parentheses.


TABLE III: Mean Error Values and Standard Deviations Achieved by DDEBQ and Other Algorithms for Test Functions F4-F6 of the GDBG System. Wilcoxon Rank Sum Test Results of Comparing DDEBQ With the Contender Algorithms Are Indicated in Parentheses.


Fig. 1. Sample convergence graphs for the DDEBQ algorithm. (a) F1 with T7. (b) F2 with T7. (c) F3 with T7. (d) F4 with T7. (e) F5 with T7. (f) F6 with T7.

types of F3. For all the change types of functions F1, F2, F4, and F5, and for change types T1, T4, T5, and T7 of F6, DDEBQ yielded statistically superior performance to jDE, which was the winner of the 2009 IEEE CEC Competition on Evolutionary Dynamic Optimizers.

DDEBQ performed statistically better than CPSO in all test cases. There are two test instances where DDEBQ performed statistically similarly to CESO (F4 with T6, F6 with T1). DDEBQ performed worse than PSO-CP and comparably to DynDE in only one instance: F4 with change type T7. In 43 out of the 49 tested instances, DDEBQ achieved the lowest values of the adaptability metric. This indicates that, for the majority of the tested instances, the best individual in the DDEBQ population remained closer to the optimum over all generations, i.e., the optimum was better tracked by the proposed algorithm. For function F3 with change types T2, T4, and T6, jDE yielded the best adaptability metric values while DDEBQ attained the second-best values. For function F6 with change types T3 and T4, despite yielding the lowest mean errors, DDEBQ was marginally surpassed by jDE in terms of the adaptability values. Note that for the instances where DDEBQ was statistically outperformed by any one of the seven contender algorithms, it ranked second best, outperforming the other six algorithms. No other evolutionary DOP solver considered in this article could maintain such a consistent performance over the wide variety of tested DOP instances. As the double mutation strategy prevents the population from converging too quickly and the aging mechanism helps the population escape local optima, DDEBQ is able to perform very well over such highly complex and multimodal functions. The extremely good performances over the sphere function (F2), the Ackley's function (F5), and the composition function (F6) result from the incorporation of the dynamic DE scheme and the exclusion principle. As the dynamic DE scheme maintains a good diversity level of the population, DDEBQ is able to locate the global optimum after any dynamic change more efficiently than the other algorithms. The exclusion rule also helps the algorithm explore a much greater portion of the search space, a feature that leads to a

high success rate in locating the global optimum. Also, the high degree of precision in locating the global optimum observed on the rotation peak function (F1 with number of peaks = 10, 50) is a consequence of introducing the control parameter C, which has an important role in controlling the diversity of the population and adaptively changing the radius within which the quantum individuals are generated. From the average runtimes listed in Tables II and III, it is evident that the runtime of DDEBQ is in several cases comparable to DynDE, dopt-aiNet, and CESO. However, CPSO takes a higher average runtime on most of the functions due to its many Euclidean distance calculations. PSO-CP also involves various computational overheads and in general is slower than or comparable to DDEBQ in the majority of the cases. DASA and jDE appear to be marginally faster than DDEBQ. However, when accuracy is the major bottleneck, DDEBQ has several advantages to offer.

VI. Conclusion

In this paper, a variant of the DE algorithm, referred to as DDEBQ, is proposed to solve DOPs efficiently. The proposed algorithm uses a dynamic DE scheme that shares the traditional DE framework. In addition to DE individuals, it uses adaptive quantum and Brownian individuals to increase the diversity and exploration ability of the search process. A control parameter is introduced to control the diversity as necessary. The algorithm also employs an aging mechanism to escape stagnation. The DE individuals produce the donor vectors according to a neighborhood-based double mutation strategy to control the perturbation. An exclusion scheme is used so that the subpopulations become evenly distributed over the entire search space.

The statistical summary of the simulation results indicates that DDEBQ can provide consistently superior performance compared to the other state-of-the-art evolutionary dynamic


optimizers in terms of the average level of accuracy. Future work may focus on introducing more cooperation and information exchange among the subpopulations in DDEBQ. It can also be fruitful to make the crossover probability adaptive to the condition of the fitness landscape. Algorithmic components of DDEBQ can be integrated with some of the adaptive DE variants ([37], [38]) to improve their performance on dynamic landscapes as well.

Appendix

TABLE IV: List of Terminology Used in DDEBQ

References

[1] R. Storn and K. Price, “Differential evolution: A simple and efficientheuristic for global optimization over continuous spaces,” J. GlobalOptimization, vol. 11, no. 4, pp. 341–359, 1997.

[2] S. Das and P. N. Suganthan, “Differential evolution: A survey of thestate-of-the-art,” IEEE Trans. Evol. Comput., vol. 15, no. 1, pp. 4–31,Feb. 2011.

[3] Y. Jin and J. Branke, “Evolutionary optimization in uncertain environ-ments: A survey,” IEEE Trans. Evol. Comput., vol. 9, no. 3, pp. 303–317,Jun. 2005.

[4] K. Trojanowski and Z. Michalewicz, “Evolutionary optimization innonstationary environments,” J. Comput. Sci. Technol., vol. 1, no. 2,pp. 93–124, 2000.

[5] R. Allmendinger and J. Knowles “On-line purchasing strategies for anevolutionary algorithm performing resource-constrained optimization,”in Proc. PPSN XI , vol. II, LNCS 6239. 2010, pp. 161–170.

[6] J. Lampinen and I. Zelinka, “On stagnation of the differential evolutionalgorithm,” in Proc. 6th Int. Mendel Conf. Soft Comput., Jun. 2000, pp.76–83.

[7] C. Li, S. Yang, T. T. Nguyen, E. L. Yu, X. Yao, Y. Jin, H.-G. Beyer,and P. N. Suganthan, “Benchmark generator for CEC’2009 competitionon dynamic optimization,” Univ. Leicester, Univ. Birmingham, NanyangTechnol. Univ., Tech. Rep., Sep. 2008.

[8] R. Mendes and A. S. Mohais, “DynDE: A differential evolution fordynamic optimization problems,” in Proc. IEEE Congr. Evol. Comput.,vol. 2. Sep. 2005, pp. 2808–2815.

[9] J. Brest, A. Zamuda, B. Boskovic, M. S. Maucec, and V. Zumer,“Dynamic optimization using self-adaptive differential evolution,” inProc. IEEE Congr. Evol. Comput., May 2009, pp. 415–422.

[10] R. Angira and A. Santosh, “Optimization of dynamic systems: Atrigonometric differential evolution approach,” Comput. Chem. Eng., vol.31, no. 9, pp. 1055–1063, Sep. 2007.

[11] H.-Y. Fan and J. Lampinen, “A trigonometric mutation operation todifferential evolution,” J. Global Optimization, vol. 27, no. 1, pp.105–129, 2003.

[12] M. C. du Plessis and A. P. Engelbrecht, “Using competitive populationevaluation in a differential evolution algorithm for dynamic environ-ments,” Eur. J. Oper. Res., vol. 218, no. 1, pp. 7–20, Apr. 2012.

[13] V. Noroozi, A. B. Hashemi, and M. R. Meybodi, ”CellularDE: A cellularbased differential evolution for dynamic optimization problems,” in Proc.ICANNGA, part I, LNCS 6593. 2011, pp. 340–349.

[14] U. Halder, S. Das, and D. Maity, “A cluster-based differential evolutionalgorithm with external archive for optimization in dynamic environ-ments,” IEEE Trans. Cybern., vol. 43, no. 3, pp. 881–897, Jun. 2013.

[15] S. Das, S. Maity, B.-Y. Qu, and P. N. Suganthan, “Real-parameter evolutionary multimodal optimization: A survey of the state-of-the-art,” Swarm Evol. Comput., vol. 1, no. 2, pp. 71–88, Jun. 2011.

[16] A. Basak, S. Das, and K. C. Tan, “Multimodal optimization using a bi-objective differential evolution algorithm enhanced with mean distance based selection,” IEEE Trans. Evol. Comput., vol. PP, no. 99, p. 1, Dec. 2012.

[17] R. Thomsen, “Multimodal optimization using crowding-based differential evolution,” in Proc. IEEE Congr. Evol. Comput., Jun. 2004, pp. 1382–1389.

[18] K.-C. Wong, C.-H. Wu, R. K. P. Mok, C. Peng, and Z. Zhang, “Evolutionary multimodal optimization using the principle of locality,” Inf. Sci., vol. 194, pp. 138–170, Jul. 2012.

[19] D. Parrott and X. Li, “Locating and tracking multiple dynamic optima by a particle swarm model using speciation,” IEEE Trans. Evol. Comput., vol. 10, no. 4, pp. 440–458, Aug. 2006.

[20] X. Li, J. Branke, and T. Blackwell, “Particle swarm with speciation and adaptation in a dynamic environment,” in Proc. GECCO, 2006, pp. 51–58.

[21] R. Lung and D. Dumitrescu, “A collaborative model for tracking optima in dynamic environments,” in Proc. IEEE Congr. Evol. Comput., Sep. 2007, pp. 564–567.

[22] R. Lung and D. Dumitrescu, “Evolutionary swarm cooperative optimization in dynamic environments,” Natural Comput., vol. 9, no. 1, pp. 83–94, Mar. 2010.

[23] K.-C. Wong, K.-S. Leung, and M.-H. Wong, “An evolutionary algorithm with species-specific explosion for multimodal optimization,” in Proc. Genetic Evol. Comput. Conf., Jul. 2009, pp. 923–930.

[24] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential evolution using a neighborhood based mutation operator,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 526–553, Jun. 2009.

[25] D. Zaharie, “Critical values for the control parameters of differential evolution algorithms,” in Proc. 8th Int. Mendel Conf. Soft Comput., 2002, pp. 62–67.

[26] T. M. Blackwell, “Particle swarm optimization in dynamic environments,” in Evolutionary Computation in Dynamic and Uncertain Environments, S. Yang, Y. S. Ong, and Y. Jin, Eds. Berlin, Germany: Springer-Verlag, 2007, ch. 1, pp. 29–49.

[27] L. Ingber, “Adaptive simulated annealing (ASA): Lessons learned,” Control Cybern., vol. 25, no. 1, pp. 33–54, 1996.

[28] H. E. Romeijn and R. L. Smith, “Simulated annealing for constrained global optimization,” J. Global Optimization, vol. 5, no. 2, pp. 101–126, Sep. 1994.

[29] S. Yang and C. Li, “A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments,” IEEE Trans. Evol. Comput., vol. 14, no. 6, pp. 959–974, Dec. 2010.

[30] J. Branke, “Memory enhanced evolutionary algorithms for changing optimization problems,” in Proc. Congr. Evol. Comput., vol. 3, 1999, pp. 1875–1882.


[31] K. Trojanowski and Z. Michalewicz, “Searching for optima in nonstationary environments,” in Proc. IEEE Congr. Evol. Comput., 1999, pp. 1843–1850.

[32] J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm Evol. Comput., vol. 1, no. 1, pp. 3–18, Mar. 2011.

[33] J. Brest, P. Korosec, J. Silc, A. Zamuda, B. Boskovic, and M. Sepesy Maucec, “Differential evolution and differential ant-stigmergy on dynamic optimisation problems,” Int. J. Syst. Sci., vol. 44, no. 4, pp. 663–679, 2013.

[34] F. O. de Franca and F. J. Von Zuben, “A dynamic artificial immune algorithm applied to challenging benchmarking problems,” in Proc. Congr. Evol. Comput., 2009, pp. 423–430.

[35] L. Liu, D. Wang, and S. Yang, “Particle swarm optimization with composite particles in dynamic environments,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 6, pp. 1634–1648, Dec. 2010.

[36] L. N. de Castro and J. Timmis, “An artificial immune network for multimodal optimization,” in Proc. Congr. Evol. Comput., Part IEEE World Congr. Comput. Intell., May 2002, pp. 699–704.

[37] S. M. Islam, S. Das, S. Ghosh, S. Roy, and P. N. Suganthan, “An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization,” IEEE Trans. Syst., Man, Cybern. B, vol. 42, no. 2, pp. 482–500, Apr. 2012.

[38] W. Gong, Z. Cai, C. X. Ling, and H. Li, “Enhanced differential evolution with adaptive strategies for numerical optimization,” IEEE Trans. Syst., Man, Cybern. B, vol. 41, no. 2, pp. 397–413, Apr. 2011.

Swagatam Das (M’10–SM’12) is currently an Assistant Professor with the Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata, India. He has published one research monograph, one edited volume, and over 150 research articles in peer-reviewed journals and international conferences. His current research interests include evolutionary computing and pattern recognition.

Mr. Das is the Founding Co-Editor-in-Chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He also serves as an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics: Systems, Neurocomputing, Information Sciences, and Engineering Applications of Artificial Intelligence. He is an Editorial Board member of Progress in Artificial Intelligence (Springer), Mathematical Problems in Engineering, International Journal of Artificial Intelligence and Soft Computing, and International Journal of Adaptive and Autonomous Communication Systems. He has been associated with international program committees and organizing committees of several regular international conferences, including IEEE WCCI, IEEE SSCI, SEAL, GECCO, and SEMCCO. He has acted as a Guest Editor for special issues in journals such as the IEEE Transactions on Evolutionary Computation, the ACM Transactions on Autonomous and Adaptive Systems, and the IEEE Transactions on Systems, Man, and Cybernetics, Part C. He was a recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering.

Ankush Mandal received the B.E. degree in electronics and telecommunication engineering from Jadavpur University, Kolkata, India, in 2012.

He is currently working as a Control and Instrumentation Engineer at the Engineering and Planning Department of the Damodar Valley Corporation, West Bengal, India. His current research interests include evolutionary optimization in nonstationary environments and evolutionary design of antennas.

Rohan Mukherjee was born in West Bengal, India, in 1992. He is currently pursuing the B.E. degree in electronics and telecommunication engineering at Jadavpur University, Kolkata, India.

He has published research articles in peer-reviewed journals and international conference proceedings under the guidance of his teacher Dr. Swagatam Das. He has acted as a reviewer for international conferences. His current research interests include smart grids, game theoretic applications, power systems, wireless communications, and evolutionary computing.
