Compact Differential Evolution

Ernesto Mininno, Member, IEEE, Ferrante Neri, Member, IEEE, Francesco Cupertino, Member, IEEE, and David Naso, Member, IEEE

Abstract—This paper proposes the compact differential evolution (cDE) algorithm. cDE, like other compact evolutionary algorithms, does not process a population of solutions but a statistical description of it, which evolves similarly to all evolutionary algorithms. In addition, cDE employs the mutation and crossover typical of differential evolution (DE), thus reproducing its search logic. Unlike other compact evolutionary algorithms, in cDE the survivor selection scheme of DE can be straightforwardly encoded. One important feature of the proposed cDE algorithm is the capability of efficiently performing an optimization process despite a limited memory requirement. This makes the cDE algorithm suitable for hardware contexts characterized by small computational power, such as micro-controllers and commercial robots. In addition, due to its nature, cDE uses an implicit randomization of the offspring generation which corrects and improves the DE search logic. An extensive numerical setup has been implemented in order to prove the viability of cDE and test its performance with respect to other modern compact evolutionary algorithms and state-of-the-art population-based DE algorithms. Test results show that cDE outperforms on a regular basis its corresponding population-based DE variant. Experiments have been repeated for four different mutation schemes. In addition, cDE outperforms other modern compact algorithms and displays a competitive performance with respect to state-of-the-art population-based algorithms employing a DE logic. Finally, cDE is applied to a challenging experimental case study regarding the on-line training of a nonlinear neural-network-based controller for a precise positioning system subject to changes of payload. The main peculiarity of this control application is that the control software is not implemented on a computer connected to the control system but directly on the micro-controller. Both the numerical results on the test functions and the experimental results on the real-world problem are very promising and suggest that cDE and its future developments can be an efficient option for optimization in hardware environments characterized by limited memory.

Index Terms—Adaptive systems, compact genetic algorithms, differential evolution (DE), estimation distribution algorithms.

Manuscript received November 27, 2009; revised March 8, 2010, May 18, 2010, June 6, 2010, and June 16, 2010. Date of publication December 23, 2010; date of current version February 25, 2011. This work was supported by the Academy of Finland, Akatemiatutkija 130600, Algorithmic Design Issues in Memetic Computing, and by Tekes (the Finnish Funding Agency for Technology and Innovation), under Grant 40214/08 (Dynergia).

E. Mininno is with the Department of Mathematical Information Technology, University of Jyväskylä, Jyväskylä 40700, Finland (e-mail: [email protected]).

F. Neri is with the Department of Mathematical Information Technology, University of Jyväskylä, Jyväskylä 40700, Finland, and also with the Academy of Finland, Helsinki FI-00501, Finland (e-mail: [email protected]).

F. Cupertino is with the Department of Electrical and Electronic Engineering, Technical University of Bari, Bari 70100, Italy (e-mail: [email protected]).

D. Naso is with the Department of Electrical and Electronic Engineering, Polytechnic Institute of Bari, Bari 70126, Italy (e-mail: [email protected]).

Digital Object Identifier 10.1109/TEVC.2010.2058120

I. Introduction

IN MANY real-world applications, an optimization problem must be solved despite the fact that a full-power computing device may be unavailable due to cost and/or space limitations. This situation is typical of robotics and control problems. For example, a commercial vacuum cleaner robot is supposed to undergo, over time, a learning process in order to locate where obstacles are placed in a room (e.g., a sofa, a table, and so on) and then perform an efficient cleaning of the accessible areas. Regardless of the specific learning process, e.g., a neural network training, the robot must contain a computational core but clearly cannot contain all the full-power components of a modern computer, since they would increase the volume, complexity, and cost of the entire device. Thus, a traditional optimization meta-heuristic can be inadequate under these circumstances. In order to overcome this class of problems, compact evolutionary algorithms (cEAs) have been designed. A cEA is an evolutionary algorithm (EA) belonging to the class of estimation of distribution algorithms (EDAs) (see [1]). The algorithms belonging to this class do not store and process an entire population and all its individuals therein but, on the contrary, make use of a statistical representation of the population in order to perform the optimization process. In this way, a much smaller number of parameters must be stored in memory. Thus, a run of these algorithms requires much less capacious memory devices compared to their corresponding standard EAs.

The first cEA was the compact genetic algorithm (cGA) introduced in [2]. The cGA simulates the behavior of a standard binary-encoded genetic algorithm (GA). In [2] it is shown that cGA has a performance almost as good as that of GA. As expected, the main advantage of a cGA with respect to a standard GA is the memory saving. An analysis of the convergence properties of cGA by means of Markov chains is given in [3]. In [4] (see also [5]) the extended compact genetic algorithm (ecGA) has been proposed. The ecGA is based on the idea that the choice of a good probability distribution is equivalent to linkage learning. The measure of a good distribution is based on minimum description length models: simpler distributions are better than complex ones. The probability distribution used in ecGA belongs to a class of probability models known as marginal product models. A theoretical analysis of the ecGA behavior is presented in [6]. A hybrid version of ecGA integrating the Nelder-Mead algorithm is proposed in [7]. A study on the scalability of ecGA is given in [8]. The cGA and its variants have been intensively used to perform hardware implementations (see [9]–[11]). A cGA application to neural network training is given in [12].

In [13], a memetic variant of cGA is proposed in order to enhance the convergence performance of the algorithm in the presence of a high number of dimensions. Paper [14] analyzes analogies and differences between cGAs and the (1 + 1)-ES and extends a mathematical model of ES [15] to cGA, obtaining useful information on the performance. Moreover, [14] introduces the concept of elitism and proposes two new variants, with strong and weak elitism respectively, that significantly outperform both the original cGA and the (1 + 1)-ES. A real-encoded cGA (rcGA) has been introduced in [16]. Some examples of rcGA applications to control engineering are given in [17] and [18]. A simple real-encoded version of ecGA has been proposed in [19] and [20].

This paper proposes a compact differential evolution (cDE) algorithm. Although the general motivation behind the algorithmic design is similar to that of cGA and its variants, there are two important issues, especially related to differential evolution (DE), which are addressed in this paper. The first one is the survivor selection scheme, which employs the so-called one-to-one spawning logic, i.e., in DE the survivor selection is performed through a pair-wise comparison between the performance of a parent solution and that of its corresponding offspring. In our opinion this logic can be naturally encoded into a compact algorithm, unlike the selection mechanisms typical of genetic algorithms (GAs), e.g., tournament selection. In other words, we believe that a DE can be straightforwardly encoded into a compact algorithm without losing its basic working principles (in terms of survivor selection). The second issue is related to the DE search logic. A DE algorithm contains a limited amount of search moves, which might jeopardize the generation of high quality solutions that improve upon the current best performance (see [21]–[23]). In order to overcome these algorithmic limitations, a popular modification of the basic DE scheme is to introduce some randomness into the search logic, for example by means of jitter and dither or the jDE scheme (see [24]). A cDE algorithm, due to its nature, does not hold a full population of individuals but encodes its information in distribution functions and samples individuals from them when necessary. Thus, unavoidably, some extra randomness with respect to the original DE is introduced. This is, in our opinion, beneficial to the algorithmic functioning and performance. These two issues will be explained in greater detail in this paper.

The suitability of cDE to solve challenging problems in environments with limited computational resources is assessed by an experimental application to a challenging on-line optimization problem. The considered case study regards a complex control scheme for a specific class of direct-drive linear motors with high positioning accuracy. These motors are often directly coupled with their load, and the absence of reduction/transmission gears makes the positioning performance strongly influenced by various uncertainties related to electro-mechanical phenomena (stiction, cogging forces), which therefore must be compensated with appropriate strategies. In this paper, the control scheme is based on the widely adopted sliding mode design approach, which includes a nonlinear module used to estimate the equivalent effect of the disturbances acting on the motor. The module is obtained with a recurrent neural network whose parameters are trained on-line by means of a cDE. The training algorithm is implemented on the same micro-controller running the control algorithms. The experimental results show that, with a negligible increase of the computational cost caused by the cDE algorithm, the control system is able to reach much better tracking performance under perturbed operating conditions.

The remainder of this paper is organized in the following way. General descriptions of cGA and rcGA are given in Section II, as well as a short description of DE. Section II also introduces the notation used throughout this paper. Section III describes the cDE algorithm proposed in this paper and discusses its working principles and algorithmic details. Section IV displays the numerical results and subdivides them into three parts: a comparative analysis of population-based DE and its corresponding compact versions highlights the role of elitism and proves that cDE outperforms, in most of the cases, its corresponding population-based variant; a comparison of cDE against other modern compact algorithms and EDAs shows that the cDE algorithm outperforms the other algorithms and thus can be considered a very promising compact algorithm; and the comparison with state-of-the-art DE-based algorithms shows that, notwithstanding the low memory requirements, cDE has a comparable performance for several problems and therefore can be an efficient solution when the hardware limitations forbid the use of a modern population-based algorithm. Section V shows the applicability of cDE in a real-world case and summarizes the results obtained on the experimental case study, and finally Section VI gives the concluding remarks of this paper.

II. Background

In order to clarify the notation used throughout this paper, we refer to the minimization problem of an objective function f(x), where x is a vector of n design variables in a decision space D. Without loss of generality, let us assume that the parameters are normalized so that each search interval is [−1, 1]. In the following sections, vectors and matrices are indicated in bold font, while scalar values are in italic.

A. Compact Genetic Algorithm

With the term cGA we will refer to the original algorithm proposed in [2]. The cGA consists of the following. A binary solution of length n is sampled by assigning to each gene a 0.5 probability of taking either the value 0 or the value 1. The vector describing these probabilities, initialized with n values all equal to 0.5, is named the probability vector (PV). By means of the PV, two individuals are sampled and their fitness values are calculated. The winner solution, i.e., the solution characterized by the better performance, biases the PV on the basis of a parameter Np called the virtual population size. More specifically, if the winner solution displays a 1 in its ith gene while the loser solution displays a 0, the probability value in the ith position of the PV is increased by a quantity 1/Np.

Fig. 1. cGA pseudo-code.

On the contrary, if the winner solution displays a 0 in its ith gene while the loser solution displays a 1, the probability value in the ith position of the PV is reduced by a quantity 1/Np. If the genes in position i display the same value for both the winner and loser solutions, the ith probability of the PV is not modified. This scheme is equivalent to (steady-state) pair-wise tournament selection, as shown in [2]. For the sake of clarity, the pseudo-code describing the working principles of cGA is displayed in Fig. 1. With the function compete we simply mean the fitness-based comparison.
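To make the update rule concrete, here is a minimal Python sketch of the cGA sampling and PV update described above; the function names and the onemax fitness are our own illustrative choices, not taken from [2].

```python
import random

def cga_sample(pv):
    """Sample a binary individual: gene i is 1 with probability pv[i]."""
    return [1 if random.random() < p else 0 for p in pv]

def cga_update(pv, winner, loser, np_virtual):
    """cGA rule: shift each probability toward the winner by 1/Np where the genes differ."""
    for i in range(len(pv)):
        if winner[i] == 1 and loser[i] == 0:
            pv[i] = min(1.0, pv[i] + 1.0 / np_virtual)
        elif winner[i] == 0 and loser[i] == 1:
            pv[i] = max(0.0, pv[i] - 1.0 / np_virtual)
        # equal genes leave pv[i] unchanged

def onemax(x):  # placeholder fitness (maximization)
    return sum(x)

n, np_virtual = 20, 50
pv = [0.5] * n                      # PV initialized to 0.5 for every gene
for _ in range(2000):
    a, b = cga_sample(pv), cga_sample(pv)
    winner, loser = (a, b) if onemax(a) >= onemax(b) else (b, a)
    cga_update(pv, winner, loser, np_virtual)
print([round(p, 2) for p in pv])
```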

B. Elitism in Compact Genetic Algorithms

Two novel versions of cGA have been proposed in [14]. Both of these algorithms still share the ideas proposed in [2] but proved to have a significantly better performance compared to their corresponding earlier versions. These two algorithms, namely the persistent elitist compact genetic algorithm (pe-cGA) and the nonpersistent elitist compact genetic algorithm (ne-cGA), modify the original cGA in the following way. During the initialization, one candidate solution, namely the elite, is randomly generated besides the PV. Subsequently, only one (and not two as in cGA) new candidate solution is generated. This solution is compared with the elite. If the elite is the winner solution, the elite biases the PV as shown for the cGA and is confirmed for the following solution generation and consequent comparison. On the contrary, if the newly generated candidate solution outperforms the elite, the PV is updated as shown for the cGA, where the new solution is the winner and the elite is the loser. Under these conditions, the elite is replaced by the new solution, which becomes the new elite. In the pe-cGA scheme this replacement occurs only under the condition that the elite is outperformed. In the ne-cGA scheme, if an elite is still not replaced after η comparisons, it is replaced by a newly generated solution regardless of its fitness value. It must be remarked that whether the persistent or nonpersistent scheme is preferable seems to be a problem-dependent issue (see [14]). The pseudo-codes highlighting the working principles of pe-cGA and ne-cGA are given in Figs. 2 and 3, respectively.

Fig. 2. pe-cGA pseudo-code.
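The sketch below, which reuses the cga_sample and cga_update helpers from the previous snippet, illustrates how the persistent and nonpersistent elitist loops differ; treating the last sampled trial as the replacement solution in the ne-cGA branch is our simplification.

```python
def elitist_cga(n, np_virtual, fitness, budget, persistent=True, eta=10):
    """Compact GA loop with persistent (pe-cGA) or nonpersistent (ne-cGA) elitism."""
    pv = [0.5] * n
    elite = cga_sample(pv)
    f_elite, age = fitness(elite), 0
    for _ in range(budget):
        trial = cga_sample(pv)
        f_trial = fitness(trial)
        if f_trial > f_elite:                  # the new solution wins and becomes the elite
            cga_update(pv, trial, elite, np_virtual)
            elite, f_elite, age = trial, f_trial, 0
        else:                                  # the elite wins and biases the PV
            cga_update(pv, elite, trial, np_virtual)
            age += 1
            if not persistent and age >= eta:  # ne-cGA: drop an elite surviving eta comparisons
                elite, f_elite, age = trial, f_trial, 0
    return elite, f_elite
```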

C. Real-Valued Compact Genetic Algorithm

The real-valued compact genetic algorithm (rcGA) has been introduced in [16]. The rcGA is a compact algorithm inspired by the cGA which exports the compact logic to a real-valued domain, thus obtaining an optimization algorithm with a high performance despite the limited amount of employed memory resources.

In rcGA the PV is not a vector but an n × 2 matrix

PV^t = [μ^t, σ^t]    (1)

where μ and σ are, respectively, vectors containing, for each design variable, the mean and standard deviation values of a Gaussian probability distribution function (PDF) truncated within the interval [−1, 1]. The height of the PDF has been normalized in order to keep its area equal to 1. The apex t indicates the generation (i.e., the number of performed comparisons).

Fig. 3. ne-cGA pseudo-code.

At the beginning of the optimization process, for each design variable i, μ^1[i] = 0 and σ^1[i] = λ, where λ is a large positive constant (λ = 10). This initialization of the σ[i] values is done in order to simulate a uniform distribution. Subsequently, one individual is sampled as the elite, exactly as in the case of pe-cGA or ne-cGA. A new individual is generated and compared with the elite. As for the cGA, in rcGA the winner solution biases the PV. The update rule for each element of μ is given by

μ^(t+1)[i] = μ^t[i] + (1/Np) (winner[i] − loser[i])    (2)

where Np is the virtual population size. The update rule for the σ values is given by

(σ^(t+1)[i])^2 = (σ^t[i])^2 + (μ^t[i])^2 − (μ^(t+1)[i])^2 + (1/Np) (winner[i]^2 − loser[i]^2).    (3)

Details on the construction of (2) and (3) are given in [16]. It must be remarked that, in [16], both the persistent and nonpersistent structures of rcGA have been tested, and it is shown that also in this case the best choice of elitism seems to be problem dependent.
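A minimal sketch of the rcGA initialization and of update rules (2) and (3); the names and the small floor that keeps the variance positive are our own assumptions.

```python
import math

def rcga_update(mu, sigma, winner, loser, np_virtual, var_floor=1e-10):
    """Apply update rules (2) and (3) element-wise to PV = [mu, sigma]."""
    for i in range(len(mu)):
        mu_new = mu[i] + (winner[i] - loser[i]) / np_virtual                 # rule (2)
        var_new = (sigma[i] ** 2 + mu[i] ** 2 - mu_new ** 2
                   + (winner[i] ** 2 - loser[i] ** 2) / np_virtual)          # rule (3)
        mu[i] = mu_new
        sigma[i] = math.sqrt(max(var_new, var_floor))                        # keep sigma well defined

n = 5
mu = [0.0] * n          # mu initialized to 0
sigma = [10.0] * n      # sigma = lambda = 10 approximates a uniform distribution on [-1, 1]
winner = [0.2, -0.5, 0.9, 0.0, -0.1]
loser = [-0.3, 0.4, 0.1, 0.7, -0.2]
rcga_update(mu, sigma, winner, loser, np_virtual=2 * n)
print(mu, sigma)
```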

D. Differential Evolution

According to its original definition (see [21], [25]), the DE algorithm consists of the following steps. An initial sampling of Np individuals is performed pseudo-randomly with a uniform distribution function within the decision space D. At each generation, for each individual xk of the Np individuals, three individuals xr, xs, and xt are pseudo-randomly extracted from the population. According to the DE logic, a provisional offspring x'off is generated by mutation as

x'off = xt + F (xr − xs)    (4)

where F ∈ [0, 2] is a scale factor which controls the length of the exploration vector (xr − xs) and thus determines how far from point xk the offspring should be generated. The mutation scheme shown in (4) is also known as DE/rand/1. Other variants of the mutation rule have been subsequently proposed in the literature (see [25]).

1) DE/best/1: x'off = xbest + F (xr − xs).

2) DE/cur-to-best/1: x'off = xk + F (xbest − xk) + F (xr − xs).

3) DE/best/2: x'off = xbest + F (xr − xs) + F (xu − xv).

4) DE/rand/2: x'off = xt + F (xr − xs) + F (xu − xv).

5) DE/rand-to-best/1: x'off = xt + F (xbest − xt) + F (xr − xs).

6) DE/rand-to-best/2: x'off = xt + F (xbest − xt) + F (xr − xs) + F (xu − xv), where xbest is the solution with the best performance among the individuals of the population and xu and xv are two additional pseudo-randomly selected individuals. It is worthwhile to mention the rotation-invariant mutation (see [21], [25]).

7) DE/current-to-rand/1: xoff = xk + K (xt − xk) + F' (xr − xs), where K is the combination coefficient, which should be chosen with a uniform random distribution from [0, 1], and F' = K · F. Since this mutation scheme already contains the crossover, the mutated solution does not undergo the crossover operation described below.

Recently, in [26], a new mutation strategy has been defined. This strategy, namely DE/rand/1/either-or, consists of the following:

x'off = xt + F (xr − xs)           if rand(0, 1) < pF
x'off = xt + K (xr + xs − 2 xt)    otherwise    (5)

where, for a given value of F, the parameter K is set equal to 0.5 (F + 1).
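The mutation rules above reduce to a few lines of NumPy; the sketch below covers DE/rand/1, DE/rand-to-best/1, and the either-or strategy of (5) (function names are ours).

```python
import numpy as np

def mutate_rand_1(x_t, x_r, x_s, F):
    """DE/rand/1, see (4)."""
    return x_t + F * (x_r - x_s)

def mutate_rand_to_best_1(x_t, x_best, x_r, x_s, F):
    """DE/rand-to-best/1."""
    return x_t + F * (x_best - x_t) + F * (x_r - x_s)

def mutate_either_or(x_t, x_r, x_s, F, p_F=0.5):
    """DE/rand/1/either-or, see (5), with K = 0.5 (F + 1)."""
    if np.random.random() < p_F:
        return x_t + F * (x_r - x_s)
    K = 0.5 * (F + 1.0)
    return x_t + K * (x_r + x_s - 2.0 * x_t)

rng = np.random.default_rng(0)
x_t, x_r, x_s = rng.uniform(-1, 1, (3, 10))
print(mutate_rand_1(x_t, x_r, x_s, F=0.5))
```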

When the provisional offspring has been generated by mutation, each gene of the individual x'off is exchanged with the corresponding gene of xk with a uniform probability, and the final offspring xoff is generated as

xoff[i] = x'off[i]    if rand(0, 1) ≤ Cr
xoff[i] = xk[i]       otherwise    (6)

where rand(0, 1) is a random number between 0 and 1, i is the index of the gene under examination, and Cr is a constant value, namely the crossover rate. This crossover strategy is well known as binomial crossover; the resulting scheme is indicated as DE/rand/1/bin.
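A sketch of the binomial crossover of (6); it implements the rule exactly as written above, without the common extra safeguard of forcing at least one gene from the mutant.

```python
import numpy as np

def binomial_crossover(x_parent, x_mutant, cr, rng=None):
    """Binomial crossover, see (6): each gene comes from the mutant with probability Cr."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x_parent.shape[0]) <= cr
    return np.where(mask, x_mutant, x_parent)

rng = np.random.default_rng(1)
parent, mutant = rng.uniform(-1, 1, (2, 10))
offspring = binomial_crossover(parent, mutant, cr=0.9, rng=rng)
print(offspring)
```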

Fig. 4. DE/rand/1/bin pseudo-code.

For the sake of completeness, we mention that a few other crossover strategies also exist, for example the exponential strategy (see [26]). However, in this paper we focus on the binomial strategy since it is the most commonly used and often the most promising.

The resulting offspring xoff is evaluated and, according to the one-to-one spawning strategy, it replaces xk if and only if f(xoff) ≤ f(xk); otherwise no replacement occurs. It must be remarked that, although the replacement indexes are saved one by one during the generation, the actual replacements occur all at once at the end of the generation. For the sake of clarity, the pseudo-code highlighting the working principles of DE/rand/1/bin is shown in Fig. 4.
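Putting mutation, crossover, and the one-to-one spawning together, here is a compressed sketch of one DE/rand/1/bin generation with replacements deferred to the end of the generation; the sphere fitness and all names are illustrative.

```python
import numpy as np

def de_generation(pop, fits, F, Cr, fitness, rng):
    """One DE/rand/1/bin generation with deferred one-to-one replacement (minimization)."""
    Np, n = pop.shape
    new_pop, new_fits = pop.copy(), fits.copy()
    for k in range(Np):
        r, s, t = rng.choice([i for i in range(Np) if i != k], size=3, replace=False)
        mutant = pop[t] + F * (pop[r] - pop[s])          # mutation (4)
        mask = rng.random(n) <= Cr                       # binomial crossover (6)
        offspring = np.where(mask, mutant, pop[k])
        f_off = fitness(offspring)
        if f_off <= fits[k]:                             # replacement recorded, applied at the end
            new_pop[k], new_fits[k] = offspring, f_off
    return new_pop, new_fits

sphere = lambda x: float(np.sum(x * x))
rng = np.random.default_rng(2)
n, Np = 10, 20
pop = rng.uniform(-1, 1, (Np, n))
fits = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fits = de_generation(pop, fits, F=0.9, Cr=0.9, fitness=sphere, rng=rng)
print(fits.min())
```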

III. Compact Differential Evolution

The cDE algorithm herein proposed takes the update logic of rcGA and integrates it within a DE framework. cDE is a simple algorithm which, despite its simplicity, can be a very efficient option for optimization in limited-memory environments.

The proposed algorithm consists of the following. An n × 2 PV is generated as shown for the rcGA: the μ values are set equal to 0 while the σ values are set equal to a large number λ = 10. As explained in Section II-C, the value of λ is empirically set in order to simulate a uniform distribution at the beginning of the optimization process. A solution is sampled from the PV and plays the role of the elite. Subsequently, at each step t, some solutions are sampled on the basis of the selected mutation scheme. For example, if a DE/rand/1 mutation is selected, three individuals xr, xs, and xt are sampled from the PV.

Fig. 5. Sampling mechanism.

More specifically, the sampling of a design variable xr[i], associated to a generic candidate solution xr, from the PV consists of the following steps. As mentioned above, to each design variable indexed by i a truncated Gaussian PDF, characterized by a mean value μ[i] and a standard deviation σ[i], is associated. The formula of the PDF is

PDF(μ[i], σ[i]) = ( e^(−(x − μ[i])^2 / (2σ[i]^2)) · √(2/π) ) / ( σ[i] ( erf((μ[i] + 1) / (√2 σ[i])) − erf((μ[i] − 1) / (√2 σ[i])) ) )    (7)

where erf is the error function (see [27]). From the PDF, the corresponding cumulative distribution function (CDF) is constructed by means of Chebyshev polynomials according to the procedure described in [28]. It must be observed that the codomain of the CDF is [0, 1]. In order to sample the design variable xr[i] from the PV, a random number rand(0, 1) is drawn from a uniform distribution. The inverse of the CDF, evaluated at rand(0, 1), is then calculated. This latter value is xr[i]. A graphical representation of the sampling mechanism is given in Fig. 5.

As mentioned in Section II-C, the sampling is performed on normalized values within [−1, 1]. In order to obtain the value in the original interval [a, b], the following operation must be performed: (xr[i] + 1)(b − a)/2 + a.
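A sketch of the sampling step. We substitute SciPy's truncated normal distribution for the Chebyshev-polynomial CDF construction of [28]; the inverse-CDF idea is the same, but this particular implementation is our choice, not the paper's.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_gene(mu, sigma, rng):
    """Sample a normalized design variable in [-1, 1] by inverting the truncated-Gaussian CDF."""
    a, b = (-1.0 - mu) / sigma, (1.0 - mu) / sigma   # truncation bounds in standard units
    return truncnorm.ppf(rng.random(), a, b, loc=mu, scale=sigma)

def denormalize(x_norm, a, b):
    """Map a normalized value from [-1, 1] back to the original interval [a, b]."""
    return (x_norm + 1.0) * (b - a) / 2.0 + a

rng = np.random.default_rng(3)
x = sample_gene(mu=0.0, sigma=10.0, rng=rng)     # a large sigma approximates a uniform draw
print(x, denormalize(x, a=-5.12, b=5.12))
```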

The mutation is then performed, e.g., according to (4), and the provisional offspring is generated. A crossover, according to (6), between the elite and the provisional offspring is then performed in order to generate the offspring. The fitness value of the offspring is computed and compared with that of the elite individual. The comparison allows the definition of the winner and loser solutions. Formulas (2) and (3) are then applied to update the PV for the subsequent solution generations. If the offspring outperforms the elite individual, the offspring replaces the elite.
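Combining the sampling, the DE operators, and the PV update, here is a condensed sketch of one pe-cDE/rand/1/bin step under the same assumptions as above (SciPy-based sampling; all names are ours).

```python
import numpy as np
from scipy.stats import truncnorm

def sample_individual(mu, sigma, rng):
    """Sample a normalized candidate solution from PV = [mu, sigma]."""
    a, b = (-1.0 - mu) / sigma, (1.0 - mu) / sigma
    return truncnorm.ppf(rng.random(mu.size), a, b, loc=mu, scale=sigma)

def cde_step(mu, sigma, elite, f_elite, fitness, F, Cr, np_virtual, rng):
    """One pe-cDE/rand/1/bin iteration: sample, mutate, cross over, compare, update the PV."""
    xr, xs, xt = (sample_individual(mu, sigma, rng) for _ in range(3))
    mutant = xt + F * (xr - xs)                           # DE/rand/1 mutation (4)
    mask = rng.random(mu.size) <= Cr                      # binomial crossover (6) with the elite
    offspring = np.where(mask, mutant, elite)
    f_off = fitness(offspring)
    if f_off <= f_elite:                                  # offspring wins and replaces the elite
        winner, loser, elite, f_elite = offspring, elite, offspring, f_off
    else:                                                 # elite wins (persistent elitism)
        winner, loser = elite, offspring
    mu_new = mu + (winner - loser) / np_virtual           # PV update (2)
    var = sigma**2 + mu**2 - mu_new**2 + (winner**2 - loser**2) / np_virtual   # PV update (3)
    sigma = np.sqrt(np.maximum(var, 1e-10))
    return mu_new, sigma, elite, f_elite

sphere = lambda x: float(np.sum(x * x))
rng = np.random.default_rng(4)
n = 10
mu, sigma = np.zeros(n), np.full(n, 10.0)
elite = sample_individual(mu, sigma, rng)
f_elite = sphere(elite)
for _ in range(500):
    mu, sigma, elite, f_elite = cde_step(mu, sigma, elite, f_elite, sphere,
                                         F=0.9, Cr=0.9, np_virtual=2 * n, rng=rng)
print(f_elite)
```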

Fig. 6. pe-cDE/rand/1/bin pseudo-code.

Clearly, within a cDE framework all the above-mentioned mutation schemes (as well as others) can easily be implemented. Similarly, the exponential crossover, instead of the binomial one, can be integrated within the algorithm, if desired. In addition, both persistent and nonpersistent elitism can be adopted as elite strategies. Fig. 6 shows the pseudo-code of a cDE employing rand/1 mutation, binomial crossover, and persistent elitism. This algorithm is indicated as pe-cDE/rand/1/bin.

A. Compact Differential Evolution: Algorithmic Philosophy

As shown above, cDE is an algorithm which combines the search logic of DE with the evolution structure of an EDA. More specifically, cDE, as well as rcGA, shares some important aspects with continuous population-based incremental learning (PBILc) and the continuous univariate marginal distribution algorithm (UMDAc). The PBILc algorithms (see [29], [30]) are extensions of a population-based algorithm originally devised for binary search spaces. One specific version of PBILc uses a set of Gaussian distributions to model the vector of the problem's variables (see [31]). During the iterations, this PBIL algorithm only changes the mean values, while the standard deviations are constant parameters that must be fixed a priori. In this sense cDE can be seen as a member of the PBILc family which integrates an adaptation of the standard deviation values. It must be remarked that some examples of simple standard deviation adaptation for PBILc algorithms have been proposed in the literature (see [32]). Regarding UMDAc, as shown in [33], the update of the PDFs occurs after a certain number of pairwise comparisons has been performed. Due to its algorithmic structure, UMDAc requires many comparisons before a single PDF update. In this sense cDE, as well as rcGA, can be seen as a UMDAc restricted to a single comparison. Finally, the similarity between cDE and the (1 + 1)-ES must be remarked. A proof related to this topic is given in [14] and some considerations are reported in [16]. Similarly to the (1 + 1)-ES, cDE processes pairs of candidate solutions and subsequently modifies a PDF. The main difference between the two algorithms is that, while the (1 + 1)-ES encodes the search strategy for generating new candidate solutions, cDE directly encodes the new solution to be generated.

Regarding the structure of cDE with respect to the other cEAs, the following considerations can be made. With respect to the cGAs, cDE generates the offspring solutions by means of the DE logic instead of simply generating the offspring directly from the PV. This guarantees higher exploration properties, since the combination of solutions (by means of DE mutation and crossover) allows the generation of candidate solutions outside the PDF characterizing the PV. Thus, cDE enhances the probability of detecting unexplored promising areas of the decision space and consequently reduces the risk of premature convergence. In addition, cDE, unlike cGAs, allows a rather straightforward encoding of the survivor selection scheme of the population-based DE. More specifically, DE employs a one-to-one spawning already based on simple pairwise comparisons and replacements. On the contrary, in efficient GAs the proper selection of a new population is based on the performance of the entire population, for example by means of a ranking procedure. The encoding of GAs within a PV clearly limits and simplifies the survivor selection scheme, often reducing the performance of the compact algorithms with respect to their original population-based variants. In other words, the encoding of a GA into a compact algorithm imposes the employment of a pair-wise tournament selection (see [2]), which might not be the optimal choice for a GA structure. Within a DE scheme, the situation appears to be different. The encoding into a compact algorithm does not in this case jeopardize the replacement logic, except for the fact that in DE the population replacement is usually performed at the end of all the comparisons [we are referring to the most common discrete survivor selection scheme (see [26])]. Apart from this fact, since the one-to-one spawning structure characterizing the DE survivor selection scheme can be seen as a pair-wise tournament selection, cDE maintains the same features of the survivor selection logic of its population-based variant.

In addition, in order to understand the properties and working principles of cDE, an analysis of the DE functioning must be carried out. From an algorithmic viewpoint, the reasons for the success of DE have been highlighted in [34]: the success of DE is due to an implicit self-adaptation contained within the algorithmic structure. More specifically, since, for each candidate solution, the search rule depends on other solutions belonging to the population (e.g., xt, xr, and xs), the capability of detecting new promising offspring solutions depends on the current distribution of solutions within the decision space. During the early stages of the optimization process, the solutions tend to be spread out within the decision space. For a given scale factor value, this implies that the mutation generates new solutions by exploring the space with a large step size (if xr and xs are distant solutions, F (xr − xs) is a vector characterized by a large modulus). During the optimization process, the solutions of the population tend to concentrate in specific parts of the decision space. Therefore, the step size in the mutation is progressively reduced and the search is performed in the neighborhood of the solutions. In other words, due to its structure, a DE scheme is highly explorative at the beginning of the evolution and subsequently becomes more exploitative during the optimization.

Although this mechanism appears, at first glance, very efficient, it hides a limitation. If for some reason the algorithm does not succeed at generating offspring solutions which outperform the corresponding parent, the search is repeated again with similar step size values and will likely fail, falling into an undesired stagnation condition (see [35]). Stagnation is the undesired effect which occurs when a population-based algorithm does not converge to a solution (even a suboptimal one) while the population diversity is still high. In the case of DE, stagnation occurs when the algorithm does not manage to improve upon any solution of its population for a prolonged number of generations. In other words, the main drawback of DE is that the scheme has, at each stage of the optimization process, a limited amount of exploratory moves and, if these moves are not enough to generate new promising solutions, the search can be heavily compromised.

In order to overcome these limitations, computer scientists have intensively proposed modifications of the original DE structure. For example, it is worthwhile mentioning the employment of extra or multiple mutation schemes, such as the trigonometric mutation, the self-adaptation with multiple mutation schemes, and the generation of candidate solutions by means of an alternative (opposition-based) rule (see [36]). The offspring generation by means of the composition of two contributions, the first one resulting from the entire population and the second from a subset of it, is proposed in [37]. In other works, local search algorithms support the DE search (see [38]–[40]). In modern DE-based algorithms, randomization seems to play a very important role in the algorithmic performance. It is a well-known fact that a scale factor randomization tends to improve upon the algorithmic performance, as in the case of jitter and dither (see [21], [25], and references therein). In a similar way, in [24] a controlled randomization of the scale factor and crossover rate is proposed. In [41], the concept of parameter randomization is encoded within a sophisticated adaptive rule which is based on truncated Gaussian distributions. In [42], a randomized adaptation of the parameters is combined with multiple mutation schemes.

cDE clearly has something in common with these approaches. The main difference is that, instead of imposing a randomization on the control parameters, cDE imposes a randomization on the solutions which contribute to the offspring generation. However, cDE can be seen as a DE which introduces a randomization within the solution generation and therefore introduces extra search moves which assist the DE structure and attempt to improve upon its performance. A numerical validation of this intuition is given in Section IV where the results are shown. Thus, with respect to other cEAs, cDE has the advantage/novelty of being a straightforward implementation of its population-based equivalent. With respect to DE, cDE employs a novel structure for selecting the individuals composing the provisional offspring, by means of sampling from a probability distribution.

As a final remark, it should be observed that cDE is not an improved version of DE but, on the contrary, a light version of DE.

TABLE I

Average Final Fitness ± Standard Deviation for DE/rand/1/bin Schemes

Problem pe-cDE/rand/1/bin ne-cDE/rand/1/bin DE/rand/1/bin

n = 10
f1 2.683e-11 ± 1.54e-11 5.389e-10 ± 5.48e-10 3.083e-08 ± 2.43e-08
f2 3.935e-03 ± 8.78e-03 1.898e+02 ± 3.53e+02 3.284e+02 ± 1.52e+02
f3 2.608e+02 ± 1.05e+03 7.711e+02 ± 2.11e+03 1.312e+03 ± 1.20e+03
f4 1.891e-06 ± 5.08e-07 7.758e-06 ± 3.28e-06 2.987e-04 ± 9.74e-05
f5 1.976e+00 ± 1.98e+00 3.516e-01 ± 3.52e-01 4.336e+00 ± 1.16e+00
f6 8.419e-03 ± 1.02e-02 1.131e-03 ± 3.85e-03 9.546e-03 ± 2.37e-03
f7 2.400e-01 ± 2.40e-01 6.309e-02 ± 6.31e-02 8.079e-01 ± 3.04e-01
f8 1.658e-01 ± 8.42e-03 6.039e+00 ± 1.13e-03 3.079e-14 ± 3.04e-14
f9 3.093e+01 ± 1.24e+01 3.712e+01 ± 5.73e+00 5.261e+01 ± 6.51e+00
f10 5.125e+00 ± 2.60e+00 1.725e+01 ± 1.33e+01 5.646e+00 ± 1.21e+00
f11 1.365e-02 ± 1.68e-02 1.648e-01 ± 3.02e-01 1.273e-01 ± 6.81e-01
f12 1.167e+02 ± 1.27e+02 4.295e+01 ± 5.26e+01 1.604e+02 ± 1.77e+01
f13 7.813e+02 ± 1.74e+02 5.896e+02 ± 1.32e+02 5.800e+02 ± 5.41e+01
f14 1.292e-06 ± 4.59e-07 4.215e-06 ± 1.90e-06 4.782e-05 ± 1.96e-06
f15 −1.000e+02 ± 1.79e-08 −1.000e+02 ± 4.67e-04 −4.905e+01 ± 5.01e+00
f16 5.532e-13 ± 4.48e-13 7.603e-12 ± 6.54e-12 9.119e-06 ± 5.43e-06
f17 −1.150e+00 ± 1.02e-11 −1.150e+00 ± 8.20e-11 −1.150e+00 ± 2.65e-10
f18 −5.887e+01 ± 4.91e+02 −2.523e+02 ± 1.51e+02 1.043e+03 ± 8.22e+02
f19 9.673e+01 ± 1.10e+00 9.924e+01 ± 7.05e-01 9.932e+01 ± 6.07e-01
f20 5.192e+01 ± 5.91e+02 9.240e+02 ± 1.03e+03 1.746e+03 ± 1.46e+03

n = 30
f1 1.996e+02 ± 7.36e+02 4.828e-28 ± 9.03e-28 6.187e-07 ± 2.33e-08
f2 1.282e+04 ± 3.13e+03 2.892e+04 ± 4.83e+03 3.640e+04 ± 2.84e+03
f3 6.535e+06 ± 3.20e+07 5.294e+02 ± 1.19e+03 5.242e+04 ± 3.29e+04
f4 1.140e+01 ± 1.09e+00 1.642e+01 ± 3.68e-01 1.863e+01 ± 2.07e-01
f5 1.264e+01 ± 1.01e+00 1.650e+01 ± 4.27e-01 1.863e+01 ± 2.54e-01
f6 1.634e-02 ± 2.99e-02 6.867e-02 ± 6.75e-02 2.319e+02 ± 2.07e+01
f7 2.278e-01 ± 2.74e-01 1.867e-01 ± 2.20e-01 2.319e+02 ± 2.81e+01
f8 7.384e+01 ± 1.23e+01 1.531e+02 ± 2.09e+01 3.211e-14 ± 6.25e-14
f9 1.317e+02 ± 2.66e+01 2.655e+02 ± 2.28e+01 2.866e+02 ± 1.84e+01
f10 8.067e+03 ± 4.90e+03 3.665e+04 ± 5.28e+03 1.632e+05 ± 1.77e+04
f11 1.490e+03 ± 4.14e+02 1.939e+03 ± 6.69e+02 6.480e+03 ± 6.37e+02
f12 8.962e+01 ± 6.44e+01 1.299e+02 ± 2.03e+01 3.423e+02 ± 3.60e+01
f13 9.443e+02 ± 2.85e+01 9.707e+02 ± 2.96e+01 1.087e+03 ± 1.47e+01
f14 5.220e+00 ± 2.62e+00 9.917e-01 ± 2.69e+00 1.065e+01 ± 1.02e+00
f15 −9.954e+01 ± 1.10e+00 −9.930e+01 ± 6.93e-01 −2.181e+01 ± 3.14e+00
f16 1.216e+00 ± 2.00e+00 4.803e-01 ± 6.72e-01 9.483e+01 ± 1.30e+01
f17 1.511e-01 ± 1.35e+00 −3.378e-01 ± 1.25e+00 −1.574e-01 ± 3.00e-01
f18 8.939e+03 ± 1.57e+03 1.003e+04 ± 1.22e+03 1.021e+04 ± 1.42e+03
f19 1.248e+02 ± 2.65e+00 1.279e+02 ± 1.59e+00 1.300e+02 ± 1.54e+00
f20 1.013e+05 ± 4.49e+04 1.204e+05 ± 4.09e+04 3.804e+05 ± 3.01e+04

n various
f21 5.296e-02 ± 7.79e-18 5.296e-02 ± 2.75e-12 5.296e-02 ± 4.31e-09
f22 −1.067e+00 ± 4.22e-16 −1.067e+00 ± 1.71e-13 −1.067e+00 ± 1.36e-04
f23 3.980e-01 ± 3.56e-04 3.983e-01 ± 5.19e-04 3.984e-01 ± 6.57e-04
f24 −3.863e+00 ± 4.29e-11 −3.863e+00 ± 1.10e-08 −3.863e+00 ± 8.37e-05
f25 −3.288e+00 ± 5.53e-02 −3.322e+00 ± 8.74e-04 −3.248e+00 ± 2.91e-02
f26 −5.040e+00 ± 3.16e+00 −9.267e+00 ± 1.37e+00 −5.740e+00 ± 1.69e+00
f27 −4.822e+00 ± 3.07e+00 −9.764e+00 ± 1.16e+00 −6.150e+00 ± 1.29e+00
f28 −6.048e+00 ± 3.64e+00 −1.003e+01 ± 5.77e-01 −6.216e+00 ± 1.86e+00

By "light version" we mean that cDE mimics the DE behavior and performance but imposes much lower memory requirements. More specifically, while, according to a typical setting, DE requires the storage of Np = 2 × n or even more individuals, cDE requires the storage of only four individuals, independently of the dimensionality of the problem (see details in Section IV). This allows the implementation of cDE in hardware environments characterized by memory limitations and, dually, a high performance notwithstanding modest hardware investments.
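A back-of-the-envelope illustration of this memory argument; the 4-byte float size is our assumption for a typical micro-controller, not a figure taken from the paper.

```python
def de_bytes(n, bytes_per_float=4):
    """Population-based DE with Np = 2n: the population alone occupies Np * n floats."""
    return (2 * n) * n * bytes_per_float

def cde_bytes(n, bytes_per_float=4):
    """cDE: PV (mu and sigma), elite, and one offspring, i.e., four n-dimensional vectors."""
    return 4 * n * bytes_per_float

for n in (10, 30):
    print(f"n = {n}: DE population ~{de_bytes(n)} bytes, cDE ~{cde_bytes(n)} bytes")
```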

IV. Numerical Results

The following test problems have been considered in this paper.

f1 Shifted sphere function: F1 from [43].
f2 Shifted Schwefel's problem 1.2: F2 from [43].
f3 Rosenbrock's function: f3 from [42].
f4 Shifted Ackley's function: f5 from [42].
f5 Shifted rotated Ackley's function: f6 from [42].
f6 Shifted Griewank's function: f7 from [42].
f7 Shifted rotated Griewank's function: f8 from [42].
f8 Shifted Rastrigin's function: F9 from [43].
f9 Shifted rotated Rastrigin's function: F10 from [43].
f10 Shifted noncontinuous Rastrigin's function: f11 from [42].
f11 Schwefel's function: f12 from [42].
f12 Composition function 1: CF1 from [44]. The function f12 (CF1) is composed using ten sphere functions.
f13 Composition function 6: CF6 from [44]. The function f13 (CF6) is composed by using ten different benchmark functions, i.e., two rotated Rastrigin's functions, two rotated Weierstrass functions, two rotated Griewank's functions, two rotated Ackley's functions, and two rotated sphere functions.
f14 Schwefel's problem 2.22: f2 from [45].
f15 Schwefel's problem 2.21: f4 from [45].
f16 Generalized penalized function 1: f12 from [45].
f17 Generalized penalized function 2: f13 from [45].
f18 Schwefel's problem 2.6 with global optimum on bounds: F5 from [43].
f19 Shifted rotated Weierstrass function: F11 from [43].
f20 Schwefel's problem 2.13: F12 from [43].
f21 Kowalik's function: f15 from [46].
f22 Six-hump camel-back function: f20 from [42].
f23 Branin function: f17 from [45].
f24 Hartman's function 1: f19 from [46].
f25 Hartman's function 2: f20 from [46].
f26–f28 Shekel's family: f21–f24 from [46].

The test problems have been selected by employing entirely the benchmark used in [42] (f1–f17 and f21–f28). In addition, our benchmark has been expanded by adding a few extra problems from [43] (f18–f20). Some of the problems appearing in [42] were originally chosen from [43] and [46]. In the list above, we indicate the original papers where the problems have been defined and proposed for the first time.

All the algorithms in this paper have been run on test problems f1–f20 with n = 10 and n = 30. Test problems f21–f28 are characterized by a unique dimensionality value. These problems have been run with the original dimensionality as shown in the papers mentioned above. Thus, in total 48 test problems are contained in this paper. For each algorithm, 30 independent runs have been performed. The budget of each single run has been fixed equal to 5000 × n fitness evaluations. Actual and virtual population sizes have been set equal to Np = 2 × n.

A. Validation of Compact Differential Evolution

This section presents the results of cDE with respect to its population-based variant. The aim of this section is to prove that the proposed compact encoding does not deteriorate the performance of the corresponding population-based algorithm. In other words, this section shows that the proposed light version of DE, despite its minimal memory requirement, is not less performing than a heavy standard population-based DE. In order to pursue this aim, four DE schemes have been considered.

1) DE/rand/1/bin: F = 0.9 and Cr = 0.9.
2) DE/rand-to-best/1/bin: F = 0.5 and Cr = 0.7.
3) DE/rand-to-best/2/bin: F = 0.5 and Cr = 0.7.
4) The dithered version of DE (DE-dither) presented in [47], with F = 0.5 (1 + rand(0, 1)) updated at each generation and Cr = 0.9.

For each scheme, the corresponding cDE algorithms, with persistent and nonpersistent elitist strategies, have been tested. Each cDE algorithm employs the same parameter setting as the corresponding population-based algorithm. Regarding the nonpersistent elitist schemes, η has been set equal to 0.5 × n throughout all the experiments presented in this paper (including ne-cGA and ne-rcGA). Table I shows the average of the final results detected by each DE/rand/1/bin-like algorithm ± the corresponding standard deviation values. The best results are highlighted in bold face.

In order to strengthen the statistical significance of the results, the Wilcoxon Rank-Sum test has also been applied according to the description given in [48], where the confidence level has been fixed to 0.95. Table II summarizes the results of the Wilcoxon test for each version of cDE against its corresponding population-based algorithm. A "+" indicates the case in which cDE statistically outperforms, for the corresponding test problem, its corresponding population-based algorithm; a "=" indicates that, according to the Wilcoxon Rank-Sum test, the two algorithms have the same performance; a "−" indicates that cDE is outperformed.
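For reference, the kind of pairwise comparison summarized in Table II can be reproduced with SciPy's rank-sum test as sketched below; the two arrays of final fitness values are dummy data, and the 0.05 threshold corresponds to the 0.95 confidence level used here.

```python
import numpy as np
from scipy.stats import ranksums

def compare(final_a, final_b, alpha=0.05):
    """Return '+', '-', or '=' for algorithm A vs. B (minimization) at confidence 1 - alpha."""
    _, p_value = ranksums(final_a, final_b)
    if p_value >= alpha:
        return "="                                        # no statistically significant difference
    return "+" if np.median(final_a) < np.median(final_b) else "-"

rng = np.random.default_rng(5)
runs_cde = rng.normal(1.0e-3, 2.0e-4, 30)                 # dummy final fitness values over 30 runs
runs_de = rng.normal(3.0e-3, 5.0e-4, 30)
print(compare(runs_cde, runs_de))
```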

Results of the Wilcoxon test show that both the persistent and nonpersistent elitist cDE versions significantly outperform their corresponding population-based DE. Regarding the schemes DE/rand/1/bin, DE/rand-to-best/1/bin, and DE/rand-to-best/2/bin, for almost all the problems analyzed we can conclude that cDE has a performance at least as good as DE. The only exception over the 144 cases analyzed (48 test problems × 3 mutation schemes considered) is test problem f13. Only in this case does population-based DE seem to be more promising than the compact variants (regardless of the mutation scheme). The reason for the overall success of the cDE schemes with respect to population-based DE is, in our opinion, the implicit randomization of the search logic within the DE structure.

TABLE II

Wilcoxon Test for the Validation Study

Problem | rand/1/bin | rand-to-best/1/bin | rand-to-best/2/bin | Dither (for each scheme: pe-cDE vs. DE, ne-cDE vs. DE)

n = 10
f1 + + + + + + + +
f2 + + + + + + + =
f3 + + + + + + = −
f4 + + + = + + − −
f5 + + + + + + − −
f6 + + + + + + − −
f7 + + + + + + − −
f8 − − + + + + − −
f9 + + + + + + = −
f10 + + + + + + − −
f11 + = + + = + + +
f12 = + = + + + − −
f13 − = − − − = − =
f14 + + + + + + = =
f15 + + + + + + − −
f16 + + = = + + + +
f17 = = + + + + = +
f18 + + + + + + − =
f19 + = + = + = + =
f20 + + + + + + + =

n = 30
f1 − + + + + + − +
f2 + + + + − + − −
f3 − + + + + + = −
f4 + + + + + + = −
f5 + + + + + + − −
f6 + + + + + + = −
f7 + + + + + + − −
f8 − − + + + + − −
f9 + + + + + + + −
f10 + + + + + + − −
f11 + + + + + + = −
f12 + + + + − + − −
f13 + + + + + + − −
f14 + + + + + + + +
f15 + + + + + + − −
f16 + + + + + + − −
f17 + + + + + + − −
f18 + + + + + + − −
f19 + + + + + + + +
f20 + + + + + + = =

n various
f21 = = = = = = = =
f22 = = + + = = + +
f23 + = + + + + = =
f24 = = + = + + = =
f25 + + + + + + = +
f26 = + = + = + − −
f27 = + = + = + − −
f28 = + = + = + − −

"+" means that cDE outperforms DE, "−" means that cDE is outperformed, and "=" means that the algorithms have the same performance.

Fig. 7. Performance trends cDE/rand/1/bin vs. DE/rand/1/bin. (a) f2 with n = 10. (b) f15 with n = 10. (c) f18 with n = 10.

As explained above, a certain degree of randomization seems to be beneficial to the DE performance, and the reported results seem to confirm this intuition. Finally, the memory saving of cDE with respect to DE should be remarked. A DE scheme requires Np permanent memory slots in order to store the population and additional volatile memory slots, reserved for the offspring generation and for keeping track of the replacements to be performed at the end of each generation. A cDE scheme (regardless of the elitist strategy) requires three permanent memory slots, two reserved for the PV and one for the elite, and one slot of volatile memory for the offspring generation (four memory slots in total). Since Np must be set depending on the number of variables n of the problem, it is clear that if the problem is multi-variate there is a very relevant memory saving. For example, if n = 10 [which can be the number of variables in a simple micro-controller application (see [22], [39])], DE would require, e.g., Np = 2 × n = 20 permanent memory slots, while cDE requires only four memory slots. Thus, with these small memory requirements, the optimization can be carried out on modest and relatively cheap hardware (a micro-controller), thus leading to a competitive product for the industrial market. Since the employment of compact algorithms does not cause a worsening of the DE performance (unlike what often happens with GAs), the proposed cDE algorithms are, in our opinion, an appealing alternative for optimization in many industrial contexts.

Regarding the dithered schemes, the population-based DE has a better performance than the cDE variants, but the compact algorithms still obtain a respectable performance. This fact can be interpreted by considering that DE-dither already employs a certain degree of randomization, which causes a good performance. On the contrary, the randomization introduced by the compact algorithms within the sampling of the individuals does not lead to further advantages. However, taking into account that the memory requirement of the cDE-dither algorithms is far smaller than that of DE-dither, we can conclude that the results are overall satisfactory.

Fig. 7 shows the average performance trends of cDE/rand/1/bin and its population-based DE algorithm over a selection of the test problems listed above.

B. Comparison with State-of-the-Art Compact Evolutionary Algorithms

In order to analyze the performance of the proposed cDE algorithms with respect to other cEAs, pe-cDE/rand-to-best/1/bin and ne-cDE/rand-to-best/1/bin have been compared with the pe-cGA and ne-cGA proposed in [14] and the pe-rcGA and ne-rcGA proposed in [16]. The choice of the rand-to-best mutation scheme is due to the fact that it displayed the best performance over the other mutation schemes for the test problems under consideration.

TABLE III

Average Final Fitness ± Standard Deviation for cDE Algorithms Against the State-of-the-Art cEAs

Problem pe-cGA [14] ne-cGA [14] pe-rcGA [16] ne-rcGA [16] pe-cDE/rand-to-best/1/bin ne-cDE/rand-to-best/1/binn = 10

f1 6.963e+02 ± 5.93e+02 6.996e+02 ± 5.56e+02 9.783e-26 ± 4.40e-25 1.898e-28 ± 2.42e-28 3.096e-03 ± 2.40e-03 2.021e-02 ± 1.22e-02f2 8.935e+03 ± 6.07e+03 1.049e+04 ± 7.25e+03 1.188e+03 ± 1.15e+03 3.350e+01 ± 5.46e+01 1.460e+00 ± 1.29e+00 7.179e+01 ± 9.77e+01f3 3.609e+07 ± 4.94e+07 3.143e+07 ± 3.00e+07 2.467e+02 ± 3.95e+02 8.098e+01 ± 1.67e+02 3.559e+01 ± 3.19e+01 6.370e+02 ± 1.27e+03f4 9.726e+00 ± 1.83e+00 9.667e+00 ± 2.65e+00 1.013e+01 ± 5.59e+00 1.925e-01 ± 4.40e-01 2.219e-02 ± 9.17e-03 7.846e-02 ± 2.87e-02f5 1.038e+01 ± 2.07e+00 1.112e+01 ± 2.64e+00 7.388e+00 ± 5.23e+00 3.453e-01 ± 7.20e-01 1.750e+00 ± 1.10e+00 2.218e-01 ± 3.11e-01f6 2.496e+02 ± 8.17e+00 2.496e+02 ± 1.00e+01 7.413e-02 ± 6.94e-02 1.119e-02 ± 1.58e-02 5.150e-02 ± 3.26e-02 2.922e-02 ± 1.39e-02f7 2.509e+02 ± 1.12e+01 2.514e+02 ± 8.67e+00 1.681e-01 ± 1.01e-01 7.789e-02 ± 7.81e-02 2.679e-01 ± 1.61e-01 3.852e-02 ± 3.38e-02f8 4.106e+01 ± 1.23e+01 4.419e+01 ± 1.43e+01 3.366e+01 ± 1.20e+01 8.291e+00 ± 3.47e+00 5.503e-03 ± 6.92e-03 2.916e+00 ± 1.31e+00f9 5.815e+01 ± 1.36e+01 5.365e+01 ± 1.30e+01 3.640e+01 ± 1.11e+01 1.244e+01 ± 3.76e+00 2.380e+01 ± 1.11e+01 3.104e+01 ± 5.21e+00f10 8.543e+03 ± 6.89e+03 8.078e+03 ± 3.73e+03 2.351e+01 ± 1.01e+01 8.996e+00 ± 3.79e+00 7.154e+00 ± 2.62e+00 5.726e+01 ± 1.61e+01f11 8.306e+02 ± 2.90e+02 8.164e+02 ± 3.07e+02 5.528e+02 ± 2.85e+02 2.702e+01 ± 5.39e+01 2.502e-02 ± 2.19e-02 2.344e+00 ± 4.07e+00f12 3.225e+02 ± 1.27e+01 3.238e+02 ± 1.47e+01 1.070e+01 ± 2.91e+01 6.016e-04 ± 2.94e-03 1.261e+02 ± 1.25e+02 2.744e+01 ± 5.50e+01f13 9.481e+02 ± 3.29e+01 9.449e+02 ± 3.15e+01 5.294e+02 ± 8.23e+01 5.637e+02 ± 9.79e+01 7.842e+02 ± 1.71e+02 7.239e+02 ± 1.78e+02f14 5.614e+00 ± 2.91e+00 5.239e+00 ± 2.21e+00 4.127e+00 ± 4.90e+00 1.091e-03 ± 5.34e-03 1.065e-02 ± 4.11e-03 3.094e-02 ± 8.61e-03f15 3.309e+01 ± 1.13e+01 3.428e+01 ± 1.35e+01 −1.000e+02 ± 5.06e-09 −1.000e+02 ± 5.25e-08 −1.000e+02 ± 7.17e-04 −9.999e+01 ± 6.29e-03f16 4.472e+04 ± 1.37e+05 1.053e+05 ± 3.37e+05 1.401e+00 ± 1.91e+00 2.724e-01 ± 9.73e-01 2.316e-04 ± 2.54e-04 1.002e-03 ± 1.05e-03f17 1.121e+06 ± 3.66e+06 3.765e+05 ± 4.11e+05 −7.869e-01 ± 8.94e-01 −1.095e+00 ± 2.55e-01 −1.148e+00 ± 2.18e-03 −1.146e+00 ± 3.59e-03f18 2.176e+03 ± 1.22e+03 1.919e+03 ± 1.05e+03 1.438e+02 ± 5.36e+02 −2.614e+02 ± 7.23e+01 −2.826e+02 ± 1.95e+01 −2.842e+02 ± 1.78e+01f19 9.961e+01 ± 1.06e+00 1.001e+02 ± 1.24e+00 9.650e+01 ± 1.64e+00 9.827e+01 ± 1.29e+00 9.650e+01 ± 1.08e+00 9.940e+01 ± 6.26e-01f20 5.786e+03 ± 4.96e+03 4.905e+03 ± 3.76e+03 3.256e+03 ± 5.16e+03 7.859e+03 ± 6.16e+03 6.142e+02 ± 1.24e+03 2.171e+03 ± 9.27e+02

n = 30f1 1.446e+04 ± 4.63e+03 1.253e+04 ± 3.52e+03 1.906e+04 ± 9.62e+03 8.075e+02 ± 1.00e+03 5.334e-17 ± 4.99e-17 1.816e-15 ± 9.57e-16f2 1.628e+06 ± 7.00e+05 1.582e+06 ± 5.49e+05 2.677e+04 ± 4.78e+03 3.082e+04 ± 6.51e+03 4.628e+03 ± 2.54e+03 1.183e+04 ± 4.15e+03f3 2.432e+09 ± 1.62e+09 2.784e+09 ± 1.43e+09 1.803e+09 ± 2.02e+09 2.861e+07 ± 5.83e+07 4.141e+02 ± 1.72e+03 5.218e+01 ± 4.75e+01f4 1.681e+01 ± 9.45e-01 1.609e+01 ± 1.61e+00 1.859e+01 ± 4.15e-01 1.196e+01 ± 2.24e+00 1.930e+00 ± 2.17e+00 1.351e-01 ± 3.67e-01f5 1.721e+01 ± 1.41e+00 1.708e+01 ± 1.25e+00 1.880e+01 ± 4.54e-01 1.214e+01 ± 2.18e+00 3.935e+00 ± 1.60e+00 6.886e-01 ± 7.34e-01f6 8.840e+02 ± 3.08e+01 8.951e+02 ± 4.65e+01 2.259e-03 ± 4.11e-03 1.026e-03 ± 3.77e-03 1.240e-02 ± 1.08e-02 8.663e-02 ± 3.95e-02f7 8.778e+02 ± 3.34e+01 8.791e+02 ± 4.26e+01 3.403e-02 ± 9.71e-02 4.380e-02 ± 9.03e-02 1.955e-01 ± 2.16e-01 5.561e-02 ± 6.78e-02f8 2.265e+02 ± 4.06e+01 2.107e+02 ± 2.91e+01 2.037e+02 ± 2.74e+01 9.556e+01 ± 2.06e+01 5.605e+01 ± 1.24e+01 1.203e+02 ± 2.53e+01f9 3.013e+02 ± 4.72e+01 2.992e+02 ± 4.29e+01 1.985e+02 ± 3.06e+01 1.642e+02 ± 3.85e+01 1.046e+02 ± 2.70e+01 2.202e+02 ± 3.02e+01f10 1.307e+05 ± 4.96e+04 1.295e+05 ± 3.77e+04 2.900e+03 ± 3.07e+03 3.108e+02 ± 1.70e+02 3.664e+02 ± 1.55e+02 3.799e+03 ± 1.43e+03f11 4.947e+03 ± 7.03e+02 4.844e+03 ± 7.40e+02 3.156e+03 ± 7.54e+02 1.231e+03 ± 4.78e+02 6.308e+02 ± 2.56e+02 5.684e+02 ± 2.89e+02f12 8.947e+02 ± 6.00e+01 9.024e+02 ± 7.53e+01 9.373e+01 ± 3.59e+01 6.181e+01 ± 1.82e+01 8.347e+01 ± 1.27e+02 4.179e+01 ± 7.78e+01f13 1.012e+03 ± 2.04e+01 1.020e+03 ± 2.33e+01 1.095e+03 ± 6.62e+01 9.379e+02 ± 1.61e+01 9.000e+02 ± 2.16e-01 8.958e+02 ± 2.04e+01f14 5.830e+01 ± 1.50e+01 5.168e+01 ± 1.05e+01 9.348e+01 ± 1.59e+01 2.259e+01 ± 1.18e+01 6.754e-02 ± 2.22e-01 6.180e-08 ± 1.21e-07f15 6.992e+01 ± 6.25e+00 7.024e+01 ± 6.14e+00 −6.334e+01 ± 3.10e+01 −9.773e+01 ± 3.58e+00 −1.000e+02 ± 2.40e-07 −9.996e+01 ± 1.88e-02f16 3.350e+07 ± 1.84e+07 2.734e+07 ± 1.93e+07 8.438e+05 ± 2.24e+06 1.716e+04 ± 8.39e+04 7.346e-02 ± 1.63e-01 4.323e-02 ± 1.10e-01f17 8.933e+07 ± 5.24e+07 9.072e+07 ± 6.46e+07 2.080e+07 ± 2.89e+07 3.755e+03 ± 1.19e+04 −1.052e+00 ± 3.08e-01 −1.148e+00 ± 4.56e-03f18 1.172e+04 ± 2.57e+03 1.184e+04 ± 2.15e+03 8.975e+03 ± 2.38e+03 6.679e+03 ± 1.32e+03 5.662e+03 ± 1.58e+03 4.817e+03 ± 1.98e+03f19 1.307e+02 ± 2.84e+00 1.321e+02 ± 2.17e+00 1.242e+02 ± 2.76e+00 1.282e+02 ± 2.89e+00 1.214e+02 ± 2.41e+00 1.303e+02 ± 1.14e+00f20 2.958e+05 ± 1.05e+05 2.241e+05 ± 6.04e+04 3.089e+05 ± 1.38e+05 3.359e+05 ± 1.08e+05 3.535e+04 ± 2.37e+04 1.045e+05 ± 3.35e+04

n various
f21 5.296e-02 ± 9.80e-09 5.296e-02 ± 3.52e-08 5.296e-02 ± 2.32e-17 5.296e-02 ± 4.80e-18 5.296e-02 ± 9.65e-13 5.296e-02 ± 1.39e-11
f22 −9.632e-01 ± 4.22e-02 −9.733e-01 ± 2.60e-02 −1.067e+00 ± 4.09e-16 −1.067e+00 ± 4.54e-16 −1.067e+00 ± 1.17e-07 −1.067e+00 ± 1.36e-06
f23 2.337e+01 ± 1.14e+00 2.307e+01 ± 9.49e-01 3.979e-01 ± 9.70e-13 3.980e-01 ± 3.61e-04 3.979e-01 ± 1.03e-07 3.979e-01 ± 2.67e-07
f24 −3.760e+00 ± 1.95e-02 −3.760e+00 ± 2.07e-02 −3.863e+00 ± 1.94e-15 −3.863e+00 ± 2.25e-15 −3.863e+00 ± 5.40e-09 −3.863e+00 ± 1.59e-09
f25 −4.819e-01 ± 4.27e-02 −4.795e-01 ± 5.60e-02 −3.238e+00 ± 5.53e-02 −3.268e+00 ± 6.07e-02 −3.286e+00 ± 5.61e-02 −3.322e+00 ± 1.63e-03
f26 −2.773e+00 ± 1.13e+00 −2.458e+00 ± 1.00e+00 −6.458e+00 ± 2.47e+00 −5.281e+00 ± 1.21e+00 −5.366e+00 ± 3.36e+00 −9.948e+00 ± 3.41e-01
f27 −2.628e+00 ± 9.63e-01 −3.001e+00 ± 1.09e+00 −7.258e+00 ± 3.01e+00 −6.598e+00 ± 3.14e+00 −6.639e+00 ± 3.18e+00 −1.012e+01 ± 8.63e-01
f28 −2.764e+00 ± 9.00e-01 −2.485e+00 ± 8.24e-01 −6.940e+00 ± 3.19e+00 −6.262e+00 ± 2.61e+00 −5.991e+00 ± 3.53e+00 −1.040e+01 ± 1.35e-01

performance over the other mutation schemes for the test problems under consideration. Results of the original cGA are not reported here because it is well known that the elitist variants proposed in [14] outperform the original cGA. The 48 test problems described above have been considered for comparison. The same previously shown setup in terms of computational budget and virtual population size has been employed for this comparison.

Numerical results are reported in Table III while results of the related Wilcoxon tests are listed in Tables IV and V. Fig. 8 shows some performance trends for this set of numerical experiments.
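The +/−/= entries in Tables IV and V are obtained from the Wilcoxon test [48] applied to the final fitness values collected over the independent runs. As a minimal, non-authoritative sketch of how such a labelling can be reproduced, the snippet below assumes a two-sided rank-sum test and a 0.05 significance level (the threshold is an assumption, not restated from the paper) and treats lower fitness as better.

```python
from scipy.stats import ranksums


def compare(runs_cde, runs_opponent, alpha=0.05):
    """Return '+', '-' or '=' in the spirit of Tables IV and V (minimization assumed).

    runs_* : final fitness values over the independent runs of each algorithm.
    alpha  : significance level (an assumption made here for illustration).
    """
    stat, p_value = ranksums(runs_cde, runs_opponent)
    if p_value >= alpha:
        return "="                      # no statistically significant difference
    return "+" if stat < 0 else "-"     # lower fitness values mean cDE wins
```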

Numerical results show that the cDE schemes significantly outperform cGAs for all the considered test problems. The comparison between cDE and rcGA shows that cDE algorithms perform in most cases better than rcGAs and in some cases they have comparable performance. Two important considerations must be made. The first is that rcGAs tend to be competitive with cDE algorithms for low dimensional problems (in this case n = 10) while cDE schemes appear to be more promising when n = 30. This fact can be, in our opinion, explained as a consequence of the DE search logic. More specifically, the rcGAs generate candidate solutions directly by means of the PV and compare them one by one with the current elite. In the late stage of the optimization this mechanism can lead to the generation of offspring solutions very similar to the elite and thus can result in convergence. In relatively high dimensional cases (already for n = 30) the convergence is likely to happen prematurely on solutions which are not necessarily characterized by a high performance. On the contrary, cDE algorithms generate the offspring after having manipulated several solutions sampled from the PV. This fact allows a more explorative behavior and mitigates the risk of premature convergence.


Fig. 8. Performance trends cDE/rand-to-best/1/bin vs. the state-of-the-art cEAs. (a) f1 with n = 30. (b) f16 with n = 30. (c) f3 with n = 30.

In [16] it was shown that rcGAs behaved promisingly, with respect to cGAs, in the presence of (relatively) highly multi-variate fitness problems. On the basis of the results with n = 30, cDE schemes seem to be definitely more promising for relatively large scale problems (i.e., n = 30) compared to all the other compact algorithms present in the literature. The second important consideration is that rcGAs display their most promising behavior when high exploitation is required, as for example for sphere problems. In these cases, the algorithm needs to exploit the basin of attraction in order to quickly reach the global optimum. For these problems, cDE schemes (as well as population-based DE) can be too explorative and thus too slow at reaching high quality solutions. In order to correct this drawback it will be important, in the future, to integrate local search algorithms within cDE structures by coordinating the various algorithmic components in the fashion of Meta-Lamarckian learning [49].
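To make the difference in offspring generation concrete, the following is a minimal sketch of one cDE/rand/1/bin sampling step. It assumes that the PV stores one mean and one standard deviation per design variable and is sampled with a Gaussian model (as in the real-valued compact GA of [16]), and that the current elite plays the role of the DE target vector in the crossover; the PV update and the survivor selection are omitted, so this is an illustration of the search logic rather than the full algorithm.

```python
import numpy as np


def cde_offspring(mu, sigma, elite, F=0.5, Cr=0.9, rng=np.random.default_rng()):
    """One cDE/rand/1/bin offspring (sketch; PV update and selection omitted).

    mu, sigma : arrays describing the probability vector (PV), one mean/std per variable
    elite     : current elite solution, assumed here to act as the DE target vector
    """
    n = mu.size
    # sample three virtual parents from the PV (Gaussian sampling assumed)
    xr, xs, xt = (rng.normal(mu, sigma) for _ in range(3))
    mutant = xt + F * (xr - xs)            # DE/rand/1 mutation
    cross = rng.random(n) < Cr             # binomial crossover mask
    cross[rng.integers(n)] = True          # force at least one gene from the mutant
    return np.where(cross, mutant, elite)
```

Because the offspring is built from several independent PV samples, it is not tied to the neighborhood of the elite, which is the explorative behavior discussed above.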

C. Comparison with the State-of-the-Art Estimation of Distribution Algorithms

The persistent and nonpersistent versions of cDE/rand/1/bin have been tested against two modern EDAs.

1) Estimation of distribution algorithm with multivariate Gaussian model (EDAmvg) proposed in [50]. EDAmvg has been run with learning rate α = 0.2, population size Np = 4, selection ratio τ = 1, and maximum amplification value Q = 1.5.

2) Histogram population-based incremental learning for continuous problems (HPBILc) proposed in [51]. HPBILc has been run with a learning rate α = 0.2, 10 divisions of the domain, number of promising individuals S = 5, and population size Np = 4.

It must be remarked that in this set of experiments, the considered EDAs have been run with memory requirements comparable to those of cDE. Numerical results are given in Table VI and in Fig. 9. With pe-E and pe-H we denote the Wilcoxon test between pe-cDE/rand/1/bin and EDAmvg and HPBILc, respectively. Analogously, with ne-E and ne-H we denote the Wilcoxon test between ne-cDE/rand/1/bin and EDAmvg and HPBILc, respectively.

Numerical results show that cDE algorithms display a good performance with respect to EDAs and, in the case of both the elitist schemes, outperform on average the EDAs considered in this paper. In conclusion, cDE algorithms appear to be very promising if compared with other algorithms having similar memory employment.

D. Comparison with the State-of-the-Art Population-Based Algorithms

The cDE has been compared with some modern algorithms.


Fig. 9. Performance trends cDE/rand/1/bin vs. the state-of-the-art EDAs. (a) f7 with n = 10. (b) f8 with n = 30. (c) f20 with n = 30.

More specifically, ne-cDE/rand-to-best/1/bin has been compared with the following DE-based modern algorithms (see [25] and references therein):

1) j-differential evolution (jDE) with Fl = 0.1, Fu = 0.9, and τ1 = τ2 = 0.1 (see [24]);

2) J-adaptive differential evolution (JADE) with c = 0.1 (see [41]);

3) differential evolution with global and local neighborhoods (DEGL) with α = β = F = 0.7 and CR = 0.3 (see [37]);

4) self-adaptive differential evolution (SADE) (see [42]).

In addition, pe-cDE/rand-to-best/1/bin has been compared with the covariance matrix adaptation evolution strategy (CMA-ES) proposed in [52]. For all the considered algorithms, the population size Np (actual or virtual) has been set equal to 2 × n. Table VII displays the numerical results of cDE against the modern algorithms mentioned above.

Numerical results displayed in Tables VII and VIII show that ne-cDE/rand-to-best/1/bin can be competitive, at least for some of the considered problems, with the state-of-the-art population-based algorithms taken into account. Clearly, cDE algorithms are not expected to outperform modern complex algorithms, for the following two reasons: first, cDE employs a memory structure which is far smaller than that of the algorithms considered in this section (especially JADE and SADE, which make use of an archive); second, the ne-cDE/rand-to-best/1/bin algorithm is supposed to be a light version of DE/rand-to-best/1/bin and does not employ any extra search component. Since it is well known that modern DE-based algorithms outperform a standard DE/rand-to-best/1/bin, it is obvious that the "cheap and light" version of a standard DE cannot outperform complex and modern algorithms which have been designed to be high-performing rather than light. In this sense, the numerical results in this section should be read in the following way: notwithstanding its significant disadvantage, ne-cDE/rand-to-best/1/bin is not only capable of outperforming, on a regular basis, its population-based version but also displays a respectable performance when compared with complex, modern, and memory-consuming algorithms. Future studies will consider the integration of more advanced search techniques into the presented compact framework.

V. Case Study: Optimal Control of a Tubular Linear Synchronous Motor

To illustrate the usefulness of cDE in environments with limited computational resources, this section considers its application to a challenging online optimization problem.


TABLE IV
Wilcoxon Test pe-cDE/rand-to-best/1/bin vs. the State-of-the-Art cEAs

Problem   pe-cGA [14]   ne-cGA [14]   pe-rcGA [16]   ne-rcGA [16]

n = 10
f1 + + − −
f2 + + + +
f3 + + + +
f4 + + + +
f5 + + + −
f6 + + = =
f7 + + = −
f8 + + + +
f9 + + + −
f10 + + + +
f11 + + + +
f12 + + − −
f13 + + − =
f14 + + + −
f15 + + = =
f16 + + + +
f17 = + = +
f18 + + + +
f19 + + = +
f20 + + + +

n = 30
f1 + + + +
f2 + + + +
f3 + + + +
f4 + + + +
f5 + + + +
f6 + + = −
f7 + + − −
f8 + + + +
f9 + + + +
f10 + + + =
f11 + + + +
f12 + + = =
f13 + + + +
f14 + + + +
f15 + + + +
f16 + + = +
f17 + + + +
f18 + + + +
f19 + + + +
f20 + + + +

n various
f21 + = = =
f22 + + = =
f23 + + = =
f24 + + = =
f25 + + + =
f26 + + = =
f27 + + − =
f28 + + = =

“+” means that cDE outperforms the compact opponent, “−” means that cDE is outperformed, and “=” means that the algorithms have the same performance.

TABLE V
Wilcoxon Test ne-cDE/rand-to-best/1/bin vs. the State-of-the-Art cEAs

Problem   pe-cGA [14]   ne-cGA [14]   pe-rcGA [16]   ne-rcGA [16]

n = 10
f1 + + − −
f2 + + + =
f3 + + = −
f4 + + + +
f5 + + + +
f6 + + + =
f7 + + + +
f8 + + + +
f9 + + + =
f10 + + − −
f11 + + + +
f12 + + − −
f13 + + = =
f14 + + + −
f15 + + = =
f16 + + + +
f17 + + + +
f18 + + + +
f19 = + = =
f20 + + = +

n = 30
f1 + + + +
f2 + + + +
f3 + + + +
f4 + + + +
f5 + + + +
f6 + + − −
f7 + + = =
f8 + + + =
f9 + + = =
f10 + + = −
f11 + + + +
f12 + + + =
f13 + + + +
f14 + + + +
f15 + + + +
f16 + + = =
f17 + + + =
f18 + + + +
f19 + + = =
f20 + + + +

n various
f21 + = = =
f22 + + = =
f23 + + − =
f24 + + = =
f25 + + + +
f26 + + + +
f27 + + + +
f28 + + + +

“+” means that cDE outperforms the compact opponent, “−” means that cDE is outperformed, and “=” means that the algorithms have the same performance.


Fig. 10. Scheme of a TLSM.

More specifically, the optimization algorithm is used to automatically design a component of a closed-loop position control system for a permanent-magnet tubular linear synchronous motor (TLSM). In short, a TLSM is a three-phase linear motor, which includes a mover containing the three-phase windings and a tubular rod containing the permanent magnets (see Fig. 10). The permanent magnets are cylindrically shaped, axially magnetized, and uniformly distributed so as to form an alternating sequence of magnets and spacers. The three-phase windings are wrapped around the rod and the mover does not contain magnetic material. This makes it possible to exploit the magnetic flux with good efficiency and to avoid cogging forces. The motor is driven by a current-controlled pulsewidth modulation voltage source inverter. This type of motor is often adopted for high-precision applications, as it can guarantee position resolutions of the order of micrometers and below.

TLSMs are often directly coupled with their load (reduction or rotary-to-linear conversion gears are unnecessary), but the absence of reduction gears makes their performance strongly influenced by uncertainties in the electro-mechanical parameters [53]. Furthermore, TLSMs exhibit mechanical resonances, especially at high acceleration and deceleration regimes, which vary with different operating conditions and from machine to machine [54]. Therefore, as confirmed by recent literature on the subject [53]–[55], advanced feedback control strategies capable of coping with the effect of uncertainties and with the complexity of tuning procedures are a subject of particular interest.

The control scheme presented in this paper is based on a combination of sliding mode control with recurrent neural networks. The sliding mode design theory is used to obtain a general scheme in which stability of the closed loop can be proven independently of the particular value of the payload mass. The design of this controller is based on the criteria developed in [56]. In the original scheme, a linear system is used to estimate an equivalent disturbance when the controller operates in a predefined region of the error plane. Since the disturbance is caused by inherently nonlinear or time-varying phenomena such as stiction or payload changes, this section describes a potential enhancement of the scheme in which the linear system is replaced by a recurrent neural network, whose weights are tuned with the cDE according to the procedure summarized in the following.

A. Problem Statement

Fig. 11. Block diagram of the position controller.

In the typical d-q reference frame used to control synchronous motors, the mover equation of a TLSM synchronously moving at a speed of \dot{x} m/s is as follows:

v_{dq} = L \frac{d i_{dq}}{dt} + R\, i_{dq} + j\dot{x}\,\frac{\pi}{\tau_p} L\, i_{dq} + j\dot{x}\,\frac{\pi}{\tau_p} \lambda_f \qquad (8)

where v is the mover voltage vector, i is the mover current vector, R is the mover resistance, L is the mover inductance, \dot{x} is the speed of the mover, λ_f is the mover magnet flux linkage, and τ_p is the pole pitch (corresponding to π electrical degrees), while subscripts d and q indicate the two axes of vector control.

The electromagnetic force is proportional to the q-axis current and does not depend on the d-axis current:

F_e = \frac{3\pi}{2\tau_p}\,\lambda_f\, i_q = K_f\, i_q. \qquad (9)
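As a quick numerical cross-check of (9), the rated values reported in Section V-C (K_f = 31.2 N/A and τ_p = 25.6 mm) imply a magnet flux linkage of roughly 0.17 Wb; the short sketch below only rearranges (9) and is not part of the original derivation.

```python
import math

tau_p = 25.6e-3   # pole pitch [m], rated value from Section V-C
K_f = 31.2        # force constant [N/A], rated value from Section V-C

# From (9): K_f = 3*pi/(2*tau_p) * lambda_f  =>  lambda_f = 2*tau_p*K_f/(3*pi)
lambda_f = 2 * tau_p * K_f / (3 * math.pi)
print(f"magnet flux linkage ≈ {lambda_f:.3f} Wb")   # ≈ 0.169 Wb
```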

According to standard practice in TLSMs, the i_d and i_q control loops are regulated by two identical PI controllers that make the current transients negligible with respect to the mechanical dynamics. Therefore, it will be assumed hereafter that the references are equal to the actual values during speed and position transients (i.e., i_d^* = i_d = 0 and i_q^* = i_q). Thus, the current i_q will be regarded as the actual control signal. Since the d-axis current does not contribute to the force production, it is controlled to zero. The q-axis current reference is the output of the position controller whose block diagram is reported in Fig. 11.

The mathematical model of the TLSM is completed by the mechanical equation

M\ddot{x} = F_e - F \qquad (10)

where M is the mover mass and F is the unknown force caused by friction, load forces, and other uncertain phenomena. According to standard sliding mode design arguments, the controller must be designed so as to make the trajectory of the system in error coordinates reach the line (the sliding manifold) defined by the following equation:

s_x(x, t) = \dot{e}_x + \lambda_x e_x = 0 \qquad (11)

where e_x = x^* − x is the tracking error, x^* is the position reference, and λ_x > 0 is a design parameter.


TABLE VI
Average Final Fitness ± Standard Deviation and Wilcoxon Tests for cDE Algorithms Against the State-of-the-Art EDAs

Problem   pe-cDE/rand/1/bin   ne-cDE/rand/1/bin   EDAmvg [50]   HPBILc [51]   pe-E [50]   pe-H [51]   ne-E [50]   ne-H [51]

n = 10
f1 2.683e-11 ± 1.54e-11 5.389e-10 ± 5.48e-10 0.000e+00 ± 0.00e+00 6.705e+02 ± 1.69e+02 − + − +
f2 3.935e-03 ± 8.78e-03 1.898e+02 ± 3.53e+02 1.284e-15 ± 6.27e-15 1.538e+03 ± 4.22e+02 − + − +
f3 2.608e+02 ± 1.05e+03 7.711e+02 ± 2.11e+03 4.499e+05 ± 8.43e+05 2.846e+06 ± 1.88e+06 + + + +
f4 1.891e-06 ± 5.08e-07 7.758e-06 ± 3.28e-06 8.882e-16 ± 0.00e+00 9.577e+00 ± 4.37e-01 − + − +
f5 1.976e+00 ± 9.75e-01 3.516e-01 ± 6.81e-01 2.120e+00 ± 8.05e-01 9.333e+00 ± 6.26e-01 = + + +
f6 8.419e-03 ± 1.02e-02 1.131e-03 ± 3.85e-03 6.807e-01 ± 1.90e-01 6.330e+00 ± 1.33e+00 + + + +
f7 2.400e-01 ± 1.50e-01 6.309e-02 ± 8.05e-02 1.609e+01 ± 6.43e+00 5.889e+00 ± 1.63e+00 + + + +
f8 1.658e-01 ± 3.79e-01 6.039e+00 ± 2.92e+00 3.711e+01 ± 6.56e+00 3.913e+01 ± 3.52e+00 + + + +
f9 3.093e+01 ± 1.24e+01 3.712e+01 ± 5.73e+00 2.484e+01 ± 8.00e+00 4.325e+01 ± 5.44e+00 − + − +
f10 5.125e+00 ± 2.60e+00 1.725e+01 ± 1.33e+01 3.616e+01 ± 7.61e+00 1.194e+04 ± 4.09e+03 + + + +
f11 1.365e-02 ± 1.68e-02 1.648e-01 ± 3.02e-01 1.608e+03 ± 1.78e+02 1.393e+03 ± 1.62e+02 + + + +
f12 1.167e+02 ± 1.27e+02 4.295e+01 ± 5.26e+01 1.794e+02 ± 1.72e+01 1.320e+02 ± 2.94e+01 + = + +
f13 7.813e+02 ± 1.74e+02 5.896e+02 ± 1.32e+02 8.290e+02 ± 1.43e+02 7.443e+02 ± 1.35e+02 = = + +
f14 1.292e-06 ± 4.59e-07 4.215e-06 ± 1.90e-06 0.000e+00 ± 0.00e+00 7.145e+00 ± 7.73e-01 − + − +
f15 −1.000e+02 ± 1.79e-08 −1.000e+02 ± 4.67e-04 −1.000e+02 ± 0.00e+00 −8.228e+01 ± 2.14e+00 − + − +
f16 5.532e-13 ± 4.48e-13 7.603e-12 ± 6.54e-12 4.712e-32 ± 5.59e-48 8.960e+00 ± 2.34e+00 − + − +
f17 −1.150e+00 ± 1.02e-11 −1.150e+00 ± 8.20e-11 −6.565e-01 ± 9.90e-02 7.947e+02 ± 2.31e+03 + = + =
f18 −5.887e+01 ± 4.91e+02 −2.523e+02 ± 1.51e+02 5.033e+03 ± 1.70e+03 4.322e+03 ± 7.04e+02 + + + +
f19 9.673e+01 ± 1.10e+00 9.924e+01 ± 7.05e-01 9.306e+00 ± 5.64e-01 9.593e+00 ± 7.89e-01 − − − −
f20 5.192e+01 ± 5.91e+02 9.240e+02 ± 1.03e+03 3.668e+04 ± 4.14e+04 2.707e+04 ± 9.92e+03 + + + +

n = 30
f1 1.996e+02 ± 7.36e+02 4.828e-28 ± 9.03e-28 3.549e+03 ± 7.10e+03 1.168e+04 ± 1.31e+03 + + + +
f2 1.282e+04 ± 3.13e+03 2.892e+04 ± 4.83e+03 1.424e+04 ± 2.78e+04 3.283e+04 ± 4.99e+03 = + − +
f3 6.535e+06 ± 3.20e+07 5.294e+02 ± 1.19e+03 7.196e+08 ± 2.14e+09 8.977e+08 ± 2.06e+08 = + = +
f4 1.140e+01 ± 1.09e+00 1.642e+01 ± 3.68e-01 7.858e+00 ± 5.80e+00 1.616e+01 ± 4.60e-01 − + − =
f5 1.264e+01 ± 1.01e+00 1.650e+01 ± 4.27e-01 3.366e+00 ± 6.74e-01 1.604e+01 ± 4.45e-01 − + − −
f6 1.634e-02 ± 2.99e-02 6.867e-02 ± 6.75e-02 5.543e+02 ± 6.48e+01 1.016e+02 ± 1.34e+01 + + + +
f7 2.278e-01 ± 2.74e-01 1.867e-01 ± 2.20e-01 2.630e+02 ± 4.48e+01 9.805e+01 ± 1.36e+01 + + + +
f8 7.384e+01 ± 1.23e+01 1.531e+02 ± 2.09e+01 2.109e+02 ± 1.10e+01 2.506e+02 ± 1.49e+01 + + + +
f9 1.317e+02 ± 2.66e+01 2.655e+02 ± 2.28e+01 1.809e+02 ± 1.19e+01 2.584e+02 ± 1.53e+01 + + − =
f10 8.067e+03 ± 4.90e+03 3.665e+04 ± 5.28e+03 1.148e+05 ± 8.44e+04 5.473e+05 ± 7.16e+04 + + + +
f11 1.490e+03 ± 4.14e+02 1.939e+03 ± 6.69e+02 1.087e+04 ± 1.12e+03 7.595e+03 ± 3.37e+02 + + + +
f12 8.962e+01 ± 6.44e+01 1.299e+02 ± 2.03e+01 1.328e+02 ± 1.02e+02 1.908e+02 ± 2.69e+01 = + = +
f13 9.443e+02 ± 2.85e+01 9.707e+02 ± 2.96e+01 9.510e+02 ± 1.28e+01 9.986e+02 ± 7.57e+00 = + − +
f14 5.220e+00 ± 2.62e+00 9.917e-01 ± 2.69e+00 1.329e+01 ± 1.47e+00 6.052e+01 ± 7.36e+00 + + + +
f15 −9.954e+01 ± 1.10e+00 −9.930e+01 ± 6.93e-01 −9.774e+01 ± 1.36e-01 −3.826e+01 ± 6.52e+00 + + + +
f16 1.216e+00 ± 2.00e+00 4.803e-01 ± 6.72e-01 8.782e+06 ± 2.80e+07 1.209e+06 ± 3.52e+05 = + = +
f17 1.511e-01 ± 1.35e+00 −3.378e-01 ± 1.25e+00 4.388e+05 ± 2.14e+06 1.404e+07 ± 3.80e+06 = + = +
f18 8.939e+03 ± 1.57e+03 1.003e+04 ± 1.22e+03 1.478e+04 ± 4.75e+03 2.077e+04 ± 2.92e+03 + + + +
f19 1.248e+02 ± 2.65e+00 1.279e+02 ± 1.59e+00 4.024e+01 ± 8.99e-01 4.007e+01 ± 9.83e-01 − − − −
f20 1.013e+05 ± 4.49e+04 1.204e+05 ± 4.09e+04 6.825e+05 ± 5.70e+05 1.227e+06 ± 2.01e+05 + + + +

n various
f21 5.296e-02 ± 7.79e-18 5.296e-02 ± 2.75e-12 5.296e-02 ± 6.20e-09 5.296e-02 ± 5.77e-09 + + + +
f22 −1.067e+00 ± 4.22e-16 −1.067e+00 ± 1.71e-13 −1.067e+00 ± 3.26e-05 −1.067e+00 ± 4.62e-05 + + + +
f23 3.980e-01 ± 3.56e-04 3.983e-01 ± 5.19e-04 3.979e-01 ± 4.92e-05 3.980e-01 ± 7.27e-05 = = − −
f24 −3.863e+00 ± 4.29e-11 −3.863e+00 ± 1.10e-08 −3.861e+00 ± 1.08e-03 −3.863e+00 ± 1.45e-04 + + + +
f25 −3.288e+00 ± 5.53e-02 −3.322e+00 ± 8.74e-04 −3.242e+00 ± 3.70e-02 −3.234e+00 ± 4.85e-02 + + + +
f26 −5.040e+00 ± 3.16e+00 −9.267e+00 ± 1.37e+00 −5.786e+00 ± 3.54e+00 −7.257e+00 ± 2.70e+00 = − + +
f27 −4.822e+00 ± 3.07e+00 −9.764e+00 ± 1.16e+00 −6.653e+00 ± 3.62e+00 −8.303e+00 ± 1.47e+00 = − + +
f28 −6.048e+00 ± 3.64e+00 −1.003e+01 ± 5.77e-01 −7.758e+00 ± 3.69e+00 −8.813e+00 ± 1.21e+00 = − + +

“+” means that cDE outperforms the EDA, “−” means that cDE is outperformed, and “=” means that the algorithms have the same performance.


TABLE VII
Average Final Fitness ± Standard Deviation for cDE Against the State-of-the-Art Population-Based Algorithms

Problem   jDE [24]   JADE [41]   DEGL [37]   SADE [42]   CMA-ES [52]   ne-cDE/rand-to-best/1/bin

n = 10

f1 3.154e-24 ± 4.99e-24 7.465e-49 ± 2.41e-48 6.498e-60 ± 3.12e-59 1.487e-27 ± 2.83e-27 4.346e-252 ± 0.00e+00 2.021e-02 ± 1.22e-02
f2 8.131e-05 ± 7.75e-05 1.631e-25 ± 7.82e-25 2.744e-20 ± 6.69e-20 6.390e-27 ± 3.08e-26 2.993e-73 ± 1.32e-72 7.179e+01 ± 9.77e+01
f3 1.864e+00 ± 1.09e+00 8.667e-01 ± 1.68e+00 4.983e-01 ± 1.35e+00 1.661e-01 ± 8.14e-01 6.012e-02 ± 8.14e-03 6.370e+02 ± 1.27e+03
f4 9.099e-13 ± 5.11e-13 4.595e-15 ± 7.41e-16 4.441e-15 ± 0.00e+00 1.584e-14 ± 2.11e-14 1.004e-01 ± 3.33e-01 7.846e-02 ± 2.87e-02
f5 3.136e-12 ± 4.11e-12 4.441e-15 ± 0.00e+00 4.441e-15 ± 0.00e+00 1.984e-14 ± 2.50e-14 4.813e-02 ± 2.36e-01 2.218e-01 ± 3.11e-01
f6 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 2.922e-02 ± 1.39e-02
f7 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 0.000e+00 ± 0.00e+00 3.852e-02 ± 3.38e-02
f8 0.000e+00 ± 0.00e+00 8.927e-01 ± 8.85e-01 1.112e+01 ± 3.34e+00 1.741e+00 ± 2.16e+00 1.168e+01 ± 5.68e+00 2.916e+00 ± 1.31e+00
f9 1.417e+01 ± 4.44e+00 6.831e+00 ± 3.16e+00 2.568e+01 ± 4.42e+00 8.913e+00 ± 4.28e+00 1.401e+01 ± 1.01e+01 3.104e+01 ± 5.21e+00
f10 3.482e-12 ± 7.17e-12 1.091e+00 ± 1.09e+00 1.035e+01 ± 2.47e+00 1.368e+00 ± 1.57e+00 1.843e+01 ± 8.29e+00 5.726e+01 ± 1.61e+01
f11 1.273e-04 ± 1.90e-13 1.476e+02 ± 7.63e+01 3.138e+02 ± 1.76e+02 −3.251e+45 ± 1.45e+46 4.150e+03 ± 1.20e-12 2.344e+00 ± 4.07e+00
f12 1.304e+01 ± 3.44e+01 3.913e+01 ± 6.56e+01 7.500e+01 ± 9.89e+01 1.250e+01 ± 3.38e+01 4.783e+01 ± 5.11e+01 2.744e+01 ± 5.50e+01
f13 6.304e+02 ± 1.74e+02 6.740e+02 ± 1.93e+02 7.458e+02 ± 1.91e+02 5.667e+02 ± 1.40e+02 9.000e+02 ± 0.00e+00 7.239e+02 ± 1.78e+02
f14 1.615e-14 ± 1.31e-14 9.552e-25 ± 1.29e-24 1.606e-32 ± 3.93e-32 1.784e-15 ± 2.98e-15 5.015e-117 ± 1.99e-116 3.094e-02 ± 8.61e-03
f15 −1.000e+02 ± 0.00e+00 −1.000e+02 ± 0.00e+00 −1.000e+02 ± 0.00e+00 −1.000e+02 ± 0.00e+00 −1.000e+02 ± 0.00e+00 −9.999e+01 ± 6.29e-03
f16 2.610e-25 ± 3.71e-25 4.712e-32 ± 5.60e-48 4.712e-32 ± 5.59e-48 1.849e-28 ± 5.56e-28 1.352e-02 ± 6.48e-02 1.002e-03 ± 1.05e-03
f17 −1.150e+00 ± 6.81e-16 −1.150e+00 ± 6.81e-16 −1.150e+00 ± 6.73e-16 −1.150e+00 ± 6.80e-16 −1.052e+00 ± 1.50e-01 −1.146e+00 ± 3.59e-03
f18 −3.100e+02 ± 8.92e-12 −3.100e+02 ± 1.05e-12 3.790e-14 ± 1.86e-13 −3.100e+02 ± 2.83e-12 2.027e+02 ± 3.94e+02 −2.842e+02 ± 1.78e+01
f19 9.953e+01 ± 5.16e-01 9.911e+01 ± 1.21e+00 9.141e+00 ± 9.20e-01 9.940e+01 ± 5.52e-01 9.454e+01 ± 5.55e+00 9.940e+01 ± 6.26e-01
f20 1.921e+03 ± 1.35e+03 4.735e+03 ± 1.65e+03 1.517e+04 ± 1.02e+04 2.121e+03 ± 3.97e+03 −1.101e+02 ± 5.71e+02 2.171e+03 ± 9.27e+02

n = 30
f1 3.725e-31 ± 9.22e-31 1.544e-13 ± 5.16e-13 9.489e-75 ± 4.02e-74 2.313e-34 ± 1.13e-33 8.751e-302 ± 0.00e+00 1.816e-15 ± 9.57e-16
f2 1.189e+01 ± 8.96e+00 1.496e+02 ± 1.46e+02 3.027e-03 ± 4.39e-03 1.527e-01 ± 1.86e-01 1.719e-28 ± 2.91e-28 1.183e+04 ± 4.15e+03
f3 2.696e+01 ± 2.28e+01 3.877e+01 ± 3.05e+01 9.967e+00 ± 1.76e+00 2.334e+01 ± 2.36e+01 7.118e+00 ± 1.07e+00 5.218e+01 ± 4.75e+01
f4 7.253e-15 ± 1.47e-15 1.861e+00 ± 6.73e-01 3.151e-01 ± 5.11e-01 4.813e-02 ± 2.36e-01 8.290e-15 ± 1.45e-15 1.351e-01 ± 3.67e-01
f5 7.401e-15 ± 1.35e-15 2.475e+00 ± 7.25e-01 3.880e-02 ± 1.90e-01 1.021e-14 ± 4.30e-15 7.994e-15 ± 0.00e+00 6.886e-01 ± 7.34e-01
f6 0.000e+00 ± 0.00e+00 2.179e-02 ± 2.97e-02 8.502e-03 ± 1.40e-02 4.107e-04 ± 2.01e-03 0.000e+00 ± 0.00e+00 8.663e-02 ± 3.95e-02
f7 1.068e-02 ± 3.65e-02 1.686e-01 ± 1.79e-01 0.000e+00 ± 0.00e+00 4.897e-02 ± 1.08e-01 0.000e+00 ± 0.00e+00 5.561e-02 ± 6.78e-02
f8 7.462e-01 ± 7.90e-01 2.027e+01 ± 4.38e+00 6.892e+01 ± 5.10e+01 2.176e+01 ± 1.33e+01 3.462e+01 ± 1.42e+01 1.203e+02 ± 2.53e+01
f9 4.024e+01 ± 9.71e+00 2.654e+01 ± 7.28e+00 1.817e+02 ± 7.98e+00 3.254e+01 ± 8.62e+00 3.802e+01 ± 1.32e+01 2.202e+02 ± 3.02e+01
f10 7.877e-01 ± 8.79e-01 2.781e+01 ± 4.43e+00 1.105e+02 ± 4.54e+01 1.662e+01 ± 1.32e+01 5.688e+01 ± 1.44e+01 3.799e+03 ± 1.43e+03
f11 1.190e+02 ± 9.88e+01 2.830e+03 ± 2.25e+02 1.842e+03 ± 5.94e+02 2.142e+02 ± 7.98e+02 1.245e+04 ± 0.00e+00 5.684e+02 ± 2.89e+02
f12 0.000e+00 ± 0.00e+00 1.250e+01 ± 3.38e+01 5.833e+01 ± 1.02e+02 1.250e+01 ± 4.48e+01 3.594e-26 ± 2.45e-26 4.179e+01 ± 7.78e+01
f13 9.000e+02 ± 0.00e+00 9.000e+02 ± 2.84e-13 9.000e+02 ± 1.01e-13 9.000e+02 ± 1.42e-13 9.000e+02 ± 0.00e+00 8.958e+02 ± 2.04e+01
f14 1.891e-18 ± 1.24e-18 1.301e-08 ± 3.20e-08 5.422e-38 ± 1.54e-37 2.159e-20 ± 4.47e-20 7.905e-38 ± 3.87e-37 6.180e-08 ± 1.21e-07
f15 −1.000e+02 ± 4.47e-08 −9.086e+01 ± 1.92e+01 −1.000e+02 ± 9.47e-14 −1.000e+02 ± 1.25e-02 −3.069e+01 ± 5.39e+01 −9.996e+01 ± 1.88e-02
f16 2.700e-32 ± 1.60e-32 1.728e-02 ± 3.95e-02 8.255e-02 ± 2.65e-01 1.571e-32 ± 5.59e-48 8.639e-03 ± 2.93e-02 4.323e-02 ± 1.10e-01
f17 −1.150e+00 ± 4.54e-16 −1.084e+00 ± 2.98e-01 −1.149e+00 ± 5.13e-03 −1.150e+00 ± 4.77e-16 −8.262e-01 ± 2.35e-01 −1.148e+00 ± 4.56e-03
f18 1.055e+03 ± 5.83e+02 4.106e+03 ± 8.23e+02 3.259e+02 ± 5.69e+02 2.011e+03 ± 6.58e+02 1.893e+03 ± 8.73e+02 4.817e+03 ± 1.98e+03
f19 1.307e+02 ± 1.02e+00 1.303e+02 ± 1.11e+00 1.309e+02 ± 1.28e+00 1.289e+02 ± 4.34e+00 1.118e+02 ± 2.20e+01 1.303e+02 ± 1.14e+00
f20 1.052e+05 ± 3.28e+04 1.933e+05 ± 2.91e+04 7.909e+05 ± 1.36e+05 3.949e+04 ± 3.46e+04 1.097e+03 ± 2.10e+03 1.045e+05 ± 3.35e+04

n various
f21 5.296e-02 ± 1.38e-10 5.296e-02 ± 1.20e-10 5.296e-02 ± 7.23e-11 5.296e-02 ± 2.23e-14 5.296e-02 ± 5.54e-18 5.296e-02 ± 1.39e-11
f22 −1.067e+00 ± 4.54e-16 −1.067e+00 ± 4.54e-16 −1.067e+00 ± 4.32e-16 −1.067e+00 ± 4.54e-16 −1.067e+00 ± 4.64e-16 −1.067e+00 ± 1.36e-06
f23 3.979e-01 ± 0.00e+00 3.979e-01 ± 0.00e+00 3.979e-01 ± 0.00e+00 3.979e-01 ± 0.00e+00 3.979e-01 ± 0.00e+00 3.979e-01 ± 2.67e-07
f24 −3.863e+00 ± 2.27e-15 −3.863e+00 ± 2.27e-15 −3.863e+00 ± 2.27e-15 −3.863e+00 ± 2.27e-15 −3.863e+00 ± 1.56e-15 −3.863e+00 ± 1.59e-09
f25 −3.296e+00 ± 5.03e-02 −3.276e+00 ± 5.95e-02 −3.277e+00 ± 5.88e-02 −3.317e+00 ± 2.43e-02 −3.281e+00 ± 5.81e-02 −3.322e+00 ± 1.63e-03
f26 −1.015e+01 ± 5.25e-15 −9.934e+00 ± 1.80e+00 −1.015e+01 ± 5.44e-15 −5.055e+00 ± 3.28e-16 −9.148e+00 ± 3.71e-01 −9.948e+00 ± 3.41e-01
f27 −1.040e+01 ± 4.97e-15 −1.017e+01 ± 1.10e+00 −1.040e+01 ± 4.61e-06 −1.040e+01 ± 4.63e-15 −5.088e+00 ± 2.50e-15 −1.012e+01 ± 8.63e-01
f28 −1.054e+01 ± 3.57e-15 −1.054e+01 ± 3.63e-15 −1.026e+01 ± 1.37e+00 −1.054e+01 ± 3.72e-15 −5.128e+00 ± 1.37e-15 −1.040e+01 ± 1.35e-01

Once the sliding manifold has been reached, the trajectory of the system can be driven to the origin of the error plane by applying a discontinuous control action. In order to avoid the undesirable chattering effects associated with such a discontinuous action, the control law is modified with a variant that generates smoother control actions at the expense of losing the guarantee of ideal convergence to zero of the pure sliding mode dynamics s_x = 0. This variant is referred to as the boundary layer [57], and is described by the following differential equation:

\dot{s}_x = -i_q^{max}\,\mathrm{sat}(s_x)\,\frac{K_f}{\bar{M}} + d_x^{eq} \qquad (12)

where i_q^{max} is the maximum allowable motor current, \bar{M} is the rated mass value, and

d_x^{eq} = -\left(\frac{K_f}{\bar{M}} - \frac{K_f}{M}\right) i_q + \frac{F}{M}

is the equivalent disturbance; the saturation function is defined as follows:

\mathrm{sat}(s_x) = \begin{cases} \mathrm{sign}(s_x), & |s_x| > \Phi \\ s_x/\Phi, & |s_x| \le \Phi \end{cases} \qquad (13)

where \Phi is the width of the boundary layer. A steady-state error occurs due to the equivalent disturbance, and therefore an observer is used to estimate its amplitude for compensation and to guarantee zero steady-state error. The variable s_x is used as the input of the proposed disturbance observer. This produces an additional control action i_q^{OBS} to ensure that the state reaches the sliding surface. The disturbance observer is obtained as the parallel connection of a standard discrete-time PI and a recurrent neural network (RNN) whose structure is summarized in Fig. 12.


Fig. 12. Neural network block diagram in Simulink.

More specifically, the RNN receives the current and past samples of the variable s_x, and produces an additional control action that is summed to the output of the PI controller. The parallel connection of the PI and the RNN defines a nonlinear hybrid estimator in which the linear action is used to preserve acceptable closed-loop performance, especially during the initial training stages in which the RNN has a virtually random behavior, and therefore acts as an unpredictable disturbance. This circumstance is particularly critical for those RNN schemes in which training is based on a stochastic search algorithm (such as a GA or SPSA [58]). In these algorithms, initialization is generally obtained with randomly generated solutions that produce very poor (or even unstable) results. Therefore, it is generally preferred to partition the controller into two modules: a linear law designed to hold the control loop in stable conditions, and a nonlinear compensator that is trained online (see [59]) for a specific control goal.
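A minimal sketch of the boundary-layer saturation (13) and of the parallel PI + RNN observer of Fig. 12 is given below. The PI gains kp and ki, the sampling period Ts, and the rnn callable are placeholders introduced only for illustration; the sketch reproduces the parallel structure described in the text, not the actual implementation used in the experiments.

```python
def sat(s_x, phi):
    """Boundary-layer saturation of (13); phi is the layer width (symbol assumed)."""
    if abs(s_x) > phi:
        return 1.0 if s_x > 0 else -1.0
    return s_x / phi


class HybridObserver:
    """Sketch of the parallel PI + RNN disturbance observer of Fig. 12.

    kp, ki, Ts and the rnn callable are illustrative placeholders.
    """

    def __init__(self, kp, ki, Ts, rnn):
        self.kp, self.ki, self.Ts, self.rnn = kp, ki, Ts, rnn
        self.integral = 0.0

    def step(self, s_x):
        self.integral += self.ki * s_x * self.Ts   # discrete-time integral action
        i_pi = self.kp * s_x + self.integral       # PI contribution
        i_rnn = self.rnn(s_x)                      # RNN fed with s_x (its past samples are handled internally)
        return i_pi + i_rnn                        # additional current reference i_q^OBS
```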

B. Neural Network Training with cDE

There are several ways to train an NN in a control loop. A large part of the literature [60] focuses on NNs that are linear in the unknown parameters, which can be easily trained with a variety of algorithms derived from Lyapunov stability theory. Several extensions to nonlinear-in-the-parameters NNs have also been proposed, which include stochastic gradient-free optimization algorithms, such as SPSA [58] or genetic algorithms [61], to address specific problems such as training with noisy measurements, training recurrent networks, or avoiding local minima of the objective function. This paper considers the case in which the RNN is trained using the proposed cDE algorithm. More specifically, the NN training has been performed by means of ne-cDE/rand-to-best/1/bin with F = 0.9 and Cr = 0.9. The results obtained by ne-cDE/rand/1/bin have been compared with those obtained by ne-rcGA [16]. For both algorithms, η = 0.5 × Np and Np = 2 × n, where n = 24.

For NN training purposes, the TLSM is requested to track a periodic position trajectory as shown in Fig. 14. The trajectory is obtained by filtering a square wave of the same frequency with a nonlinear filter that shapes its output so as to keep the maximum speed and acceleration within the selected limits [62]. Each period of the reference is viewed as a separate fitness evaluation. At time t = 1.5 s a load force is applied using a second motor connected to the controlled plant.

Fig. 13. Experimental test bench.

The load force has a profile proportional to the (measured) plant acceleration so as to emulate a payload mass variation in the second half of each fitness evaluation. In particular, a 25 kg mass increase was emulated in all the presented experiments. The fitness function is the integral of the absolute value of the error between the position reference and the actual position of the TLSM over the observation interval (one period). Thus, at the end of each period, new weights for the NN are generated by the cDE and passed to the actual controller.
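The resulting on-line training loop can be summarized as in the sketch below. It assumes an ask/tell-style interface for the optimizer (cde.ask(), cde.tell(), cde.done()) and a controller object exposing set_rnn_weights(); none of these names are prescribed by the paper, they are only illustrative.

```python
def online_training(cde, controller, run_one_period):
    """Sketch of the on-line NN training loop described above.

    cde            : compact DE instance with an assumed ask()/tell()/done() interface
    controller     : position controller whose RNN weights are overwritten each period
    run_one_period : executes one period of the reference of Fig. 14 and returns the
                     integral of the absolute position error (the fitness)
    """
    while not cde.done():
        weights = cde.ask()                  # candidate RNN weights sampled by the optimizer
        controller.set_rnn_weights(weights)  # apply them to the running controller
        fitness = run_one_period()           # one reference period = one fitness evaluation
        cde.tell(weights, fitness)           # update elite / PV with the measured fitness
```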

C. Summary of Experimental Results

The test bench utilizes two identical TLSMs (the first one used as a motor, the second one as a load) having the following rated specifications: rated i_{sq} current 2.0 A, coil resistance R = 12.03 Ω, coil inductance L = 7.8 mH, τ_p = 25.6 mm, K_f = 31.2 N/A, mass of the mover 2.75 kg (see Fig. 13). All the experimental investigations presented in this section are performed by using a dSPACE 1103 micro-controller board based on a Motorola Power PC microprocessor.

The performance of the proposed control scheme has been compared with that obtained using the same position controller represented in Fig. 11 but without the contribution of the neural network. In order to obtain a fair comparison, the parameters of the PI controller inside the disturbance observer were accurately tuned via trial and error during a test in which the trajectory shown in Fig. 14 was followed, but without the mass change during the experiment. In particular, the PI gains were increased as much as possible so as to reduce the fitness value.


TABLE VIII
Wilcoxon Test for cDE Against the State-of-the-Art Population-Based Algorithms

Problem   jDE [24]   JADE [41]   DEGL [37]   SADE [42]   CMA-ES [52]

n = 10
f1 − − − − −
f2 − − − − −
f3 − − = − −
f4 − − − − =
f5 − − − − −
f6 − − − − −
f7 − − − − −
f8 − − + = +
f9 = − = − =
f10 − − + − =
f11 − + + = +
f12 = = = = =
f13 = = = − +
f14 − − − − −
f15 = = − = =
f16 − − − − +
f17 = = − = +
f18 = = = = +
f19 = = − = −
f20 = + + = −

n = 30
f1 − = = − −
f2 = = − − −
f3 = = = = −
f4 = + − = =
f5 − + − − −
f6 − = = − −
f7 − + = = −
f8 − − = − −
f9 − − + − −
f10 − − − − −
f11 − + + = +
f12 − = = = −
f13 = = = = =
f14 − = − − −
f15 − + = − +
f16 = = − = =
f17 − = = − +
f18 − = − − −
f19 = = = = =
f20 = + + − −

n various
f21 = = = = =
f22 = = = = =
f23 = = = = =
f24 = = = = =
f25 + + + = +
f26 = = = = +
f27 = = = = +
f28 = = = = +

“+” means that cDE outperforms the population-based opponent, “−” means that cDE is outperformed, and “=” means that the algorithms have the same performance.

Fig. 14. Trajectory used during training.

This is a standard procedure that could be carried out by a skilled operator in industrial practice. At the end of the tuning, the performance of the obtained control scheme was tested including the emulation of the mass variation. The obtained position error is reported in Fig. 16. The position error is below 100 µm during the first movement but increases up to 450 µm when the mass is increased. At the end of training, the neural network clearly improves the performance, reducing the effects of the mass change. The obtained position error is reported in Fig. 18, evidencing how the performance is only slightly improved during the first movement but the system becomes much more robust to the mass variations. The peak error is below 80 µm and 200 µm during the two consecutive movements. The similarity of the position error responses in the first half of the experiment confirms that the position controller was tuned so as to obtain optimal performance when the payload is absent. The performance of both control schemes was also evaluated using a position trajectory (shown in Fig. 15) different from the one adopted during training. The trajectory used for validation includes movements of different amplitudes so as to stress the effect of static friction on motor performance. Also in this case the load emulated a 25 kg additional mass during the second half of the experiment. The parameters of the position controller, including the neural network, were kept constant and equal to the values used in the first test. Figs. 17 and 19 report the position errors obtained without and with the neural network, respectively. Also in this case the neural network reduces the effects of the mass change during the transients and guarantees that zero error is reached faster when the set-point is kept constant. It should be remarked that the training requires between 20 and 30 min to reach satisfactory results. Most of the time is devoted to fitness function evaluation, while the increase of computational cost due to the real-time implementation of the cDE is negligible. Considering that the sampling time is 200 µs and that about 3/4 of this time is devoted to position, speed, and current control, in order to avoid slowing down the training process, the handling of the optimization algorithm should not exceed 50 µs.


Fig. 15. Trajectory used during the validation.

Fig. 16. Position error during the training test without the neural network.

Fig. 17. Position error during the validation test without the neural network.

While cDE (as well as other compact algorithms) requires approximately 20 µs per sampling step regardless of the dimensionality of the problem and the virtual population size, a population-based algorithm attempting to optimize this 24-variable case study would require at least 20 individuals, which would result in a slowing down of the real-time process.
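A back-of-the-envelope comparison of the memory footprint illustrates the same point. The sketch below assumes that the compact algorithm stores the PV as a per-variable mean and standard deviation plus the elite and one trial solution, and that a population-based DE stores Np complete candidate solutions plus one trial vector; both storage models are assumptions made only for illustration.

```python
def compact_floats(n):
    # PV (mean and std per variable) + elite + one trial solution (assumed storage)
    return 2 * n + n + n


def population_floats(n, Np):
    # Np complete candidate solutions + one trial solution (assumed storage)
    return Np * n + n


n = 24                               # RNN weights optimized in this case study
print(compact_floats(n))             # 96 floats, independent of the virtual population size
print(population_floats(n, Np=20))   # 504 floats for the minimal 20-individual population
```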

Finally, Fig. 20 shows the average value of the fitness function and the standard deviation calculated over ten runs of the neural network training using the cDE algorithm and rcGA. It can be observed that the results obtained by cDE are significantly more satisfactory than those obtained by rcGA. While the rcGA prematurely converges to a suboptimal solution, the proposed cDE algorithm continues the optimization and detects high quality solutions.

Fig. 18. Position error during the training test with the neural network.

Fig. 19. Position error during the validation test with the neural network.

Fig. 20. Mean and standard deviation of the fitness value of the best solution over ten runs.

VI. Conclusion

This paper introduced the concept of compact differential evolution and proposed two algorithmic variants based on this novel idea. The first variant employs persistent elitism while the second employs nonpersistent elitism. Both of these variants do not require powerful hardware in order to display a high performance. On the contrary, the proposed algorithms make use of a limited amount of memory in order to perform the optimization.


As confirmed by our implementation on a challenging on-line optimization problem in the field of precision motion control, this feature makes the proposed approach suitable for commercial devices and industrial applications which have cost and space limitations. Despite its small hardware demand, the proposed approach seems to outperform classical population-based differential evolution algorithms. This finding appears to be due to the fact that the randomization introduced by compact schemes is beneficial for the differential evolution search logic. The comparison with other compact evolutionary algorithms and other estimation of distribution algorithms recently presented in the literature shows that compact differential evolution is a competitive approach which often leads to a significantly better performance, in particular when the search space dimension is large. Finally, the comparison with state-of-the-art complex population-based algorithms shows that the proposed approach, despite its simplicity and low memory requirements, is competitive for several problems.

Future work will focus on memetic extensions of the work carried out, the definition of parallel compact differential evolution systems, as well as the implementation of adaptive schemes aiming at the reduction of the parameters to set. A simple memetic approach, employing cDE as an evolutionary framework and a low-memory local search algorithm, has been introduced in order to solve a specific control problem with reference to robotics (see [63]). Although this memetic extension of the cDE algorithm has already been published, its design is subsequent to the cDE proposed in this paper. In addition, it must be remarked that the algorithm and the optimization problem addressed in [63] are significantly different from those of the present paper. More specifically, while the present paper proposes cDE as a new general-purpose algorithm and tests its potential against other compact and population-based algorithms, in [63] cDE is used only as a component of a memetic algorithm which is specifically tailored to the optimization of the control system for an industrial Cartesian robot.

References

[1] P. Larranaga and J. A. Lozano, Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Boston, MA: Kluwer, 2001.

[2] G. R. Harik, F. G. Lobo, and D. E. Goldberg, “The compact genetic algorithm,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 287–297, Nov. 1999.

[3] R. Rastegar and A. Hariri, “A step forward in studying the compact genetic algorithm,” Evol. Comput., vol. 14, no. 3, pp. 277–289, 2006.

[4] G. Harik, “Linkage learning via probabilistic modeling in the ECGA,” Univ. Illinois at Urbana-Champaign, Urbana, Tech. Rep. 99010, 1999.

[5] G. R. Harik, F. G. Lobo, and K. Sastry, “Linkage learning via probabilistic modeling in the extended compact genetic algorithm (ECGA),” in Scalable Optimization via Probabilistic Modeling (Studies in Computational Intelligence, vol. 33), M. Pelikan, K. Sastry, and E. Cantu-Paz, Eds. Berlin, Germany: Springer, 2006, pp. 39–61.

[6] K. Sastry and D. E. Goldberg, “On extended compact genetic algorithm,” Univ. Illinois at Urbana-Champaign, Urbana, Tech. Rep. 2000026, 2000.

[7] K. Sastry and G. Xiao, “Cluster optimization using extended compact genetic algorithm,” Univ. Illinois at Urbana-Champaign, Urbana, Tech. Rep. 2001016, 2001.

[8] K. Sastry, D. E. Goldberg, and D. D. Johnson, “Scalability of a hybrid extended compact genetic algorithm for ground state optimization of clusters,” Mater. Manuf. Processes, vol. 22, no. 5, pp. 570–576, 2007.

[9] C. Aporntewan and P. Chongstitvatana, “A hardware implementation of the compact genetic algorithm,” in Proc. IEEE Congr. Evol. Comput., vol. 1. 2001, pp. 624–629.

[10] J. C. Gallagher, S. Vigraham, and G. Kramer, “A family of compact genetic algorithms for intrinsic evolvable hardware,” IEEE Trans. Evol. Comput., vol. 8, no. 2, pp. 111–126, Apr. 2004.

[11] Y. Jewajinda and P. Chongstitvatana, “Cellular compact genetic algorithm for evolvable hardware,” in Proc. Int. Conf. Electr. Eng./Electron. Comput. Telecommun. Inform. Technol., vol. 1. 2008, pp. 1–4.

[12] J. C. Gallagher and S. Vigraham, “A modified compact genetic algorithm for the intrinsic evolution of continuous time recurrent neural networks,” in Proc. Genet. Evol. Comput. Conf., 2002, pp. 163–170.

[13] R. Baraglia, J. I. Hidalgo, and R. Perego, “A hybrid heuristic for the traveling salesman problem,” IEEE Trans. Evol. Comput., vol. 5, no. 6, pp. 613–622, Dec. 2001.

[14] C. W. Ahn and R. S. Ramakrishna, “Elitism based compact genetic algorithms,” IEEE Trans. Evol. Comput., vol. 7, no. 4, pp. 367–385, Aug. 2003.

[15] G. Rudolph, “Self-adaptive mutations may lead to premature convergence,” IEEE Trans. Evol. Comput., vol. 5, no. 4, pp. 410–414, Aug. 2001.

[16] E. Mininno, F. Cupertino, and D. Naso, “Real-valued compact genetic algorithms for embedded microcontroller optimization,” IEEE Trans. Evol. Comput., vol. 12, no. 2, pp. 203–219, Apr. 2008.

[17] F. Cupertino, E. Mininno, and D. Naso, “Elitist compact genetic algorithms for induction motor self-tuning control,” in Proc. IEEE Congr. Evol. Comput., 2006, pp. 3057–3063.

[18] F. Cupertino, E. Mininno, and D. Naso, “Compact genetic algorithms for the optimization of induction motor cascaded control,” in Proc. IEEE Int. Conf. Electr. Mach. Drives, vol. 1. 2007, pp. 82–87.

[19] L. Fossati, P. L. Lanzi, K. Sastry, and D. E. Goldberg, “A simple real-coded extended compact genetic algorithm,” in Proc. IEEE Congr. Evol. Comput., Sep. 2007, pp. 342–348.

[20] P. Lanzi, L. Nichetti, K. Sastry, and D. E. Goldberg, “Real-coded extended compact genetic algorithm based on mixtures of models,” in Linkage in Evolutionary Computation (Studies in Computational Intelligence, vol. 157). Berlin, Germany: Springer, 2008, pp. 335–358.

[21] F. Neri and V. Tirronen, “Recent advances in differential evolution: A review and experimental analysis,” Artif. Intell. Rev., vol. 33, nos. 1–2, pp. 61–106, 2010.

[22] A. Caponio, A. Kononova, and F. Neri, “Differential evolution with scale factor local search for large scale problems,” in Computational Intelligence in Expensive Optimization Problems (Studies in Evolutionary Learning and Optimization, vol. 2), Y. Tenne and C.-K. Goh, Eds. Berlin, Germany: Springer, 2010, ch. 12, pp. 297–323.

[23] M. Weber, F. Neri, and V. Tirronen, “Distributed differential evolution with explorative-exploitative population families,” Genet. Programming Evolvable Mach., vol. 10, no. 4, pp. 343–371, 2009.

[24] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems,” IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 646–657, Dec. 2006.

[25] S. Das and P. N. Suganthan, “Differential evolution: A survey of the state-of-the-art,” IEEE Trans. Evol. Comput., 2011, to be published.

[26] K. V. Price, R. Storn, and J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization. Berlin, Germany: Springer, 2005.

[27] W. Gautschi, “Error function and Fresnel integrals,” in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, M. Abramowitz and I. A. Stegun, Eds. New York: Dover Publications, Inc., 1972, ch. 7, pp. 297–309.

[28] W. J. Cody, “Rational Chebyshev approximations for the error function,” Math. Comput., vol. 23, no. 107, pp. 631–637, Jul. 1969.

[29] M. Gallagher, “An empirical investigation of the user-parameters and performance of continuous PBIL algorithms,” in Proc. IEEE Signal Process. Soc. Workshop Neural Netw., Dec. 2000, pp. 702–710.

[30] B. Yuan and M. Gallagher, “Playing in continuous spaces: Some analysis and extension of population-based incremental learning,” in Proc. IEEE Congr. Evol. Comput., vol. 1. Dec. 2003, pp. 443–450.

[31] M. Sebag and A. Ducoulombier, “Extending population-based incremental learning to continuous search spaces,” in Proc. Parallel Problem Solving Nature, LNCS 1498, A. E. Eiben, T. Back, M. Schoenauer, and H.-P. Schwefel, Eds. Berlin, Germany: Springer, 1998, pp. 418–427.

[32] M. Schmidt, K. Kristensen, and T. Randers Jensen, “Adding genetics to the standard PBIL algorithm,” in Proc. IEEE Congr. Evol. Comput., vol. 2. Jul. 1999, pp. 1527–1534.

[33] C. González, J. A. Lozano, and P. Larranaga, “Mathematical modeling of the UMDAc algorithm with tournament selection: Behavior on linear and quadratic functions,” Int. J. Approximate Reasoning, vol. 31, no. 3, pp. 313–340, 2002.

[34] V. Feoktistov, Differential Evolution in Search of Solutions. Berlin, Germany: Springer, 2006.

[35] J. Lampinen and I. Zelinka, “On stagnation of the differential evolution algorithm,” in Proc. 6th Int. Mendel Conf. Soft Computing, 2000, pp. 76–83.


[36] S. Rahnamayan, H. R. Tizhoosh, and M. M. Salama, “Opposition-based differential evolution,” IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 64–79, Feb. 2008.

[37] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential evolution with a neighborhood-based mutation operator,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 526–553, Jun. 2009.

[38] V. Tirronen, F. Neri, T. Karkkainen, K. Majava, and T. Rossi, “An enhanced memetic differential evolution in filter design for defect detection in paper production,” Evol. Comput., vol. 16, no. 4, pp. 529–555, 2008.

[39] A. Caponio, F. Neri, and V. Tirronen, “Super-fit control adaptation in memetic differential evolution frameworks,” Soft Comput.-A Fusion Found., Methodol. Applicat., vol. 13, no. 8, pp. 811–831, 2009.

[40] N. Noman and H. Iba, “Accelerating differential evolution using an adaptive local search,” IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 107–125, Feb. 2008.

[41] J. Zhang and A. C. Sanderson, “JADE: Adaptive differential evolution with optional external archive,” IEEE Trans. Evol. Comput., vol. 13, no. 5, pp. 945–958, Oct. 2009.

[42] A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 398–417, Apr. 2009.

[43] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari, “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” Nanyang Technol. Univ., Singapore, and KanGAL, IIT Kanpur, Kanpur, India, Tech. Rep. 2005005, 2005.

[44] J. Liang, P. Suganthan, and K. Deb, “Novel composition test functions for numerical global optimization,” in Proc. IEEE Symp. Swarm Intell., 2005, pp. 68–75.

[46] X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,”IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 82–102, Jul. 1999.

[47] S. Das, A. Konar, and U. K. Chakraborty, “Two improved differentialevolution schemes for faster global search,” in Proc. Conf. Genet. Evol.Comput., 2005, pp. 991–998.

[48] F. Wilcoxon, “Individual comparisons by ranking methods,” BiometricsBull., vol. 1, no. 6, pp. 80–83, 1945.

[49] Y. S. Ong and A. J. Keane, “Meta-Lamarkian learning in memeticalgorithms,” IEEE Trans. Evol. Comput., vol. 8, no. 2, pp. 99–110, Apr.2004.

[50] B. Yuan and M. Gallagher, “Experimental results for the special sessionon real-parameter optimization at CEC 2005: A simple, continuousEDA,” in Proc. IEEE Conf. Evol. Comput., Sep. 2005, pp. 1792–1799.

[51] J. Xiao, Y. Yan, and J. Zhang, “HPBILc: A histogram-based EDA forcontinuous optimization,” Appl. Math. Comput., vol. 215, no. 3, pp.973–982, Oct. 2009.

[52] N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evol. Comput., vol. 9, no. 2, pp.159–195, 2001.

[53] F. J. Lin, P. H. Shen, S. L. Yang, and P. H. Chou, “Recurrent radial basis function network-based fuzzy neural network control for permanent-magnet linear synchronous motor servo drive,” IEEE Trans. Mag., vol. 42, no. 11, pp. 3694–3705, Nov. 2009.

[54] Z. Z. Liu, F. L. Luo, and M. A. Rahman, “Robust and precision motion control system of linear-motor direct drive for high-speed x-y table positioning mechanism,” IEEE Trans. Ind. Electron., vol. 52, no. 5, pp. 1357–1363, Oct. 2005.

[55] K. Low and M. Keck, “Advanced precision linear stage for industrial automation applications,” IEEE Trans. Instrum. Meas., vol. 52, no. 3, pp. 785–789, Jun. 2003.

[56] F. Cupertino, D. Naso, E. Mininno, and B. Turchiano, “Sliding-mode control with double boundary layer for robust compensation of payload mass and friction in linear motors,” IEEE Trans. Ind. Applicat., vol. 45, no. 5, pp. 1688–1696, Sep.–Oct. 2009.

[57] J. E. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall, 1991.

[58] J. C. Spall, Introduction to Stochastic Search and Optimization. New York: Wiley, 2003.

[59] X. D. Ji and B. O. Familoni, “A diagonal recurrent neural network-based hybrid direct adaptive SPSA control system,” IEEE Trans. Autom. Control, vol. 44, no. 9, pp. 1469–1473, Jul. 1999.

[60] F. L. Lewis, R. Selmic, and J. Campos, Neuro-Fuzzy Control of Industrial Systems with Actuator Nonlinearities. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2002.

[61] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.

[62] R. Zanasi, A. Tonielli, and G. Lo Bianco, “Nonlinear filters for the generation of smooth trajectories,” Automatica, vol. 36, no. 3, pp. 439–448, 2000.

[63] F. Neri and E. Mininno, “Memetic compact differential evolution for Cartesian robot control,” IEEE Comput. Intell. Mag., vol. 5, no. 2, pp. 54–65, May 2010.

Ernesto Mininno (M’04) received the Masters and Ph.D. degrees in electrical engineering from the Technical University of Bari, Bari, Italy, in 2002 and 2007, respectively, and the MBA degree from the National Research Center, Milan, Italy, in 2003.

He was a Project Manager with the National Research Center from 2003 to 2009. Currently, he is a Post-Doctoral Researcher with the Department of Mathematical Information Technology, University of Jyväskylä, Jyväskylä, Finland. His current research interests include robotics, intelligent motion control, evolutionary optimization, compact algorithms, and optimization in noisy environments.

Ferrante Neri (S’04–M’08) received the Masters and Ph.D. degrees in electrical engineering from the Technical University of Bari, Bari, Italy, in 2002 and 2007, respectively, and the Ph.D. degree in computer science from the University of Jyväskylä, Jyväskylä, Finland, in 2007.

Currently, he is an Assistant Professor with the Department of Mathematical Information Technology, University of Jyväskylä, and is a Research Fellow with the Academy of Finland, Helsinki, Finland. His current research interests include computational intelligence optimization and more specifically memetic computing, differential evolution, noisy and large scale optimization, and compact and parallel algorithms.

Francesco Cupertino (M’08) was born in December 1972. He received the Laurea and Ph.D. degrees in electrical engineering from the Technical University of Bari, Bari, Italy, in 1997 and 2001, respectively.

From 1999 to 2000, he was with the PEMC Research Group, University of Nottingham, Nottingham, U.K. Since July 2002, he has been an Assistant Professor with the Department of Electrical and Electronic Engineering, Technical University of Bari. He teaches two courses in electrical drives at the Technical University of Bari. His current research interests include intelligent motion control of electrical machines, applications of computational intelligence to control, sliding-mode control, sensorless control of ac electric drives, signal processing techniques for three-phase signal analysis, and fault diagnosis of ac motors. He is the author or co-author of more than 70 scientific papers on these topics.

Dr. Cupertino is a Registered Professional Engineer in Italy.

David Naso (M’98) received the Laurea (Honors) degree in electronic engineering and the Ph.D. degree in electrical engineering from the Polytechnic Institute of Bari, Bari, Italy, in 1994 and 1998, respectively.

He was a Guest Researcher with the Operation Research Institute, Technical University of Aachen, Aachen, Germany, in 1997. Since 1999, he has been an Assistant Professor of Automatic Control and the Technical Head of the Robotics Laboratory, Department of Electric and Electronic Engineering, Polytechnic Institute of Bari. His current research interests include computational intelligence and its application to control and robotics, distributed multiagent systems, modeling and control of smart materials, and unconventional actuators for precise positioning and vibration damping.

Dr. Naso is currently an Area Editor of the journal Fuzzy Sets and Systems for the topic of intelligent control, and a member of the International Federation of Automatic Control Technical Committee on Computational Intelligence in Control.
