
A Survey of Parallel Distributed Genetic Algorithms

Enrique Alba
Dpto. de Lenguajes y CC.CC.
Universidad de Málaga
Campus de Teatinos (2.2.A.6)
29071-Málaga (ESPAÑA)
[email protected]

José M. Troya
Dpto. de Lenguajes y CC.CC.
Universidad de Málaga
Campus de Teatinos (2.2.A.13)
29071-Málaga (ESPAÑA)
[email protected]

ABSTRACT

In this work we review the most important existing developments and future trends in the class of Parallel Genetic Algorithms (PGAs). PGAs are mainly subdivided into coarse and fine grain PGAs, the coarse grain models being the most popular ones. An exceptional characteristic of PGAs is that they are not just the parallel version of a sequential algorithm intended to provide speed gains. Instead, they represent a new kind of meta-heuristic of higher efficiency and efficacy thanks to their structured population and parallel execution. The good robustness of these algorithms on problems of high complexity has led to an increasing number of applications in the fields of artificial intelligence, numeric and combinatorial optimization, business, engineering, etc. We formalize these algorithms and present a timely and topical survey of their most important traditional and recent technical issues. Besides that, useful summaries of their main applications plus Internet pointers to important web sites are included in order to help new researchers access this growing area.

Keywords: Parallel genetic algorithms (PGAs), Evolution topics, Migration, PGA classification, Overview of technical issues, Future trends

[Contact author: Enrique Alba ]


1. INTRODUCTION

A genetic algorithm (GA) [26], [36] is a heuristic used to find a vector $\vec{x}^*$ (a string) of free parameters with values in an admissible region for which an arbitrary quality criterion is optimized:

$f(\vec{x}) \rightarrow \max$: find an $\vec{x}^*$ such that $\forall \vec{x} \in M: f(\vec{x}) \le f(\vec{x}^*) = f^*$   (1)

A sequential GA (Figure 1) proceeds in an iterative manner by generating new populations of strings from the old ones. Every string is the encoded (binary, real, ...) version of a tentative solution. An evaluation function associates a fitness measure to every string indicating its suitability to the problem. The algorithm applies stochastic operators such as selection, crossover, and mutation on an initially random population in order to compute a whole generation of new strings.

Initialize population                      // With randomly generated solutions
Repeat t = 1, 2, ...                       // Reproductive loop
    Evaluate solutions in the population
    Perform competitive selection
    Apply variation operators
Until convergence criterion satisfied

Figure 1. Pseudo-code of a sequential genetic algorithm.
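As a concrete, purely illustrative rendering of this loop (the binary tournament selection, operator rates, and onemax fitness below are our own choices, not prescriptions from the works surveyed here):

```python
import random

def sequential_ga(fitness, length=20, pop_size=50, pc=0.8, pm=0.01, generations=100):
    """Minimal generational GA over binary strings (illustrative sketch)."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):                      # reproductive loop
        scores = [fitness(ind) for ind in pop]        # evaluate solutions
        def select():                                 # competitive (tournament) selection
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if scores[a] >= scores[b] else pop[b]
        new_pop = []
        while len(new_pop) < pop_size:                # apply variation operators
            p1, p2 = select(), select()
            if random.random() < pc:                  # single point crossover
                cut = random.randint(1, length - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            new_pop += [[1 - g if random.random() < pm else g for g in p]
                        for p in (p1, p2)]            # bit-flip mutation
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Example: maximize the number of ones in the string
best = sequential_ga(fitness=sum)
```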

Unlike most other optimization techniques, GAs maintain a population of tentative solutions that are competitively manipulated by applying some variation operators to find a global optimum. For non-trivial problems this process might require high computational resources (large memory and long search times, for example), and thus a variety of algorithmic issues are being studied to design efficient GAs. With this goal, numerous advances are continuously being achieved by designing new operators, hybrid algorithms, termination criteria, and more [10]. We revisit and survey one such improvement, consisting in the use of parallel models of GAs (PGAs) [16], [17].


PGAs are not just parallel versions of sequential genetic algorithms. In fact, they actually reach the ideal goal of having a parallel algorithm whose behavior is better than the sum of the separate behaviors of its component sub-algorithms, and this is why we directly focus on them.

Several arguments justify our work. First of all, PGAs are naturally prone to parallelism since the operations on the strings are relatively independent from each other. Besides that, the whole population (panmixia) can be geographically structured [32], [55], [65] to localize competitive selection between string subsets, often leading to better algorithms. The evidence of higher efficiency [31], [66], larger diversity maintenance [44], [50], additional availability of memory and CPU, and multi-solution capabilities [14] reinforces the importance of the research advances in the field of PGAs.

Using parallel GAs often leads to superior numerical performance (not only to faster algorithms), even when the algorithms run on a single processor [31], [34]. However, the truly interesting observation is that the use of a structured population, either in the form of a set of islands [68] or a diffusion grid [65], is responsible for such numerical benefits. As a consequence, many authors do not use a parallel machine at all to run structured-population models, and still get better results than with serial GAs [4], [31], [45]. Hardware parallelization is an additional way of speeding up the execution of the algorithm, and it can be attained in many ways on a given structured-population GA.

Hence, once a structured-population model is defined, it could be implemented on any uniprocessor or parallel machine. There exist many examples of this modern vision of parallel GAs, namely, a ring of panmictic GAs on a MIMD computer, a grid of individuals on uniprocessor/MIMD/SIMD computers, and many hybrids (see Section 3 for more details). We mainly focus on distributed implementations since clusters of workstations and similar hardware are very popular and accessible platforms (the impact of the research is potentially very large).

In this work we present some background for the research with sequential and parallel GAs. Also we identify their current and future implications, as well as some unresolved issues. Section 2 contains introductory material providing the nomenclature and some general working principles of sequential GAs. Section 3 offers an introduction to parallel GAs and formalizes the work of a parallel GA, while Section 4 summarizes some existing theories about why and how they work. In Section 5 we give some hints to help classification in such a diverse field, and explicitly present a detailed survey. Section 6 contains the formalization of the main technical issues concerning parallel GAs, and it outlines future trends. Finally, in Section 7 we discuss some implementation considerations, and Section 8 gives some concluding remarks. We add an extensive bibliography to help the reader in finding useful starting pointers.

2. SEQUENTIAL GENETIC ALGORITHMS

Before discussing sequential GAs we will put these algorithms in context. Afterwards, in this section we will offer a review of the nomenclature, the representation issues, and the basic internal operations of a canonical GA. We develop our study in the field of GAs since they are quite popular and widely applied, and also because many results on GAs are extensible to other EAs (evolutionary algorithms) in general. See [11], [24], [58] for learning about the origin of and similarities among the different EA families. Figure 2 details the well-accepted sub-classes of EAs, namely genetic algorithms (GA), evolutionary programming (EP), evolution strategies (ES), genetic programming (GP), and classifier systems (CS). In [10] the reader can find a great compendium of the state of the art in evolutionary computing.


Following some recent works, the fields of evolutionary computing, neural networks, and fuzzy logic are listed together as techniques that aim to solve problems by using numeric knowledge representation, in opposition to traditional artificial intelligence, where symbolic knowledge representation is used. This broader research field is known as Computational Intelligence or Soft Computing [15], and it is gaining acceptance (recent international journals and conferences).

(Figure 2: the diagram places GA, EP, ES, GP, and CS inside Evolutionary Computing, which together with Artificial Neural Networks and Fuzzy Logic forms Computational Intelligence.)

Figure 2. Location of the different families of evolutionary algorithms.

Since GAs apply operations drawn from nature, the nomenclature used in this field is closely related to the terms we can find in biology. Table I summarizes the meaning of these special terms with the aim of helping new researchers.

TABLE I. NOMENCLATURE

Genotype: The code, devised to represent the parameters of the problem in the form of a string
Chromosome: One encoded string of parameters (binary, Gray, floating point numbers, etc.)
Individual: One or more chromosomes with an associated fitness value
Gene: The encoded version of a parameter of the problem being solved
Allele: Value which a gene can assume (binary, integer, real, or even complex data structures)
Locus: The position that the gene occupies in the chromosome
Phenotype: Problem version of the genotype (algorithm version), suited for being evaluated
Fitness: Real value indicating the quality of an individual as a solution to the problem
Environment: The problem. This is represented as a function indicating the suitability of phenotypes
Population: A set of individuals with their associated statistics (fitness average, Hamming distance, ...)
Selection: Policy for selecting one individual from the population (selection of the fittest, ...)
Crossover: Operation that merges the genotypes of two selected parents to yield two new children
Mutation: Operation that spontaneously changes one or more alleles of the genotype

The algorithm operates on a population of strings or, in general, structures of arbitrary complexity representing tentative solutions. In the textbook versions of GAs, every string is called an individual and it is composed of one or more chromosomes and a fitness value. Normally, an individual contains just one chromosome (i.e., haploid individuals, although some diploid and n-ploid concepts and applications exist, see [26]) that represents the set of parameters called the genes. Every gene is in turn encoded (usually) in binary (or Gray) by using a given number of alleles (0, 1) (see Figure 3).

The fitness function behaves as the environment within which the quality of the decoded version of the genotype (called the phenotype) is evaluated. The fitness function was used in the early applications of EAs to encapsulate the problem knowledge, while the rest of the algorithm was a general-purpose algorithm with standard operators performing a blind search, guided only by the values returned by this function. However, this vision is somewhat old, and we now know that problem knowledge is in general a necessity to provide efficient GAs (see the No Free Lunch Theorem [74]). In fact, GAs are considered as a family of parameterizable meta-heuristics that have to be tailored for every new application in order to have a chance of outperforming other search techniques.
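As a minimal sketch of this genotype-to-phenotype decoding (the three-bit genes, as in Figure 3, and the linear scaling into [lo, hi] are illustrative assumptions of ours):

```python
def decode(chromosome, bits_per_gene=3, lo=0.0, hi=1.0):
    """Map a binary genotype to its phenotype: a list of real parameters.

    Illustrative sketch: each gene of bits_per_gene alleles is read as an
    unsigned integer and scaled linearly into the interval [lo, hi].
    """
    genes = [chromosome[i:i + bits_per_gene]
             for i in range(0, len(chromosome), bits_per_gene)]
    max_val = 2 ** bits_per_gene - 1
    return [lo + (hi - lo) * int("".join(map(str, g)), 2) / max_val
            for g in genes]

# A 9-bit chromosome decodes into 3 parameters in [0, 1]
params = decode([1, 0, 0, 0, 1, 1, 1, 1, 1])
```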


(Figure 3: an individual is shown as a chromosome plus a fitness value; the chromosome is a binary string whose alleles group into genes (Gene 8 down to Gene 0), each gene spanning loci 2, 1, 0.)

Figure 3. Some details on the genotype.

To illustrate how a canonical GA works we must remember that the population of individuals can be ranked according to the fitness values. Then, probabilistic selection of the fittest can be applied to select two parents and cross them to yield two new children. These children have the mixed and also mutated contents inherited from their parent chromosomes. These operations are repeated until a new population has been built up. The new population needs to be evaluated, and then the process starts again. Crossover and mutation are applied according to a given probability, and not deterministically.

(Figure 4 works through one step of a GA on a population of four 5-bit strings maximizing f(x)=x²:

String# | String | Fitness | % of the Total
1 | 01101 | 169 | 14.4
2 | 11000 | 576 | 49.2
3 | 01000 | 64 | 5.5
4 | 10011 | 361 | 30.9
Total | | 1170 | 100.0

1. Roulette Wheel Selection (RW): $P_{S_i} = f_i / \sum_{j=1}^{n} f_j$, giving wheel slices of about 14%, 50%, 5%, and 31%.
2. Single Point Crossover (SPX), $p_c \in [0.6 .. 1.0]$: parents 01|101 (169) and 11|000 (576) yield offspring 01000 (64) and 11101 (841).
3. Mutation, $p_m \in [0.001 .. 0.1]$: mutating the 1st allele of 01000 gives 11000, raising the fitness from 64 to 576.)

Figure 4. An example of the computations that a GA performs to maximize f(x)=x². The parameter x is encoded as a binary string of five bits. The population size is four strings.

See in Figure 4 some basic implementations of these operators (roulette wheel selection, single point crossover, and bit-flip mutation [26]). In roulette wheel selection, pairs of strings are selected according to their fitness in relation to the mean fitness of the population. Even very bad strings have a non-zero probability of being selected for reproduction (although this rarely occurs). The crossover performs hyper-plane recombination in the search space by merging string slices of the two selected parents. Finally, the mutation randomly flips (with a low probability) every bit value in order to insert fresh values into the population, allowing jumps in the problem space.
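As a minimal sketch of these three operators (our own illustrative Python; the helper names and the default pm are assumptions, not anything prescribed by the surveyed works), the code below reproduces the f(x)=x² setting of Figure 4:

```python
import random

def roulette_select(pop, fitness):
    """Roulette wheel: pick a string with probability f_i / sum_j f_j."""
    total = sum(fitness(s) for s in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for s in pop:
        acc += fitness(s)
        if acc >= r:
            return s
    return pop[-1]                      # guard against floating point round-off

def single_point_crossover(p1, p2):
    """SPX: exchange the string slices of two parents around one cut point."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(s, pm=0.01):
    """Flip every bit with a low probability pm, inserting fresh alleles."""
    return "".join("10"[int(b)] if random.random() < pm else b for b in s)

# The 5-bit f(x) = x^2 example of Figure 4
f = lambda s: int(s, 2) ** 2
pop = ["01101", "11000", "01000", "10011"]
child1, child2 = single_point_crossover(roulette_select(pop, f),
                                        roulette_select(pop, f))
child1 = bit_flip_mutation(child1)
```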


3. PARALLEL GENETIC ALGORITHMS

If we mimic natural evolution we would not operate on a single population in which a given individual has the potential to mate with any other partner in the entire population (panmixia). Instead, species would tend to reproduce within subgroups or within neighborhoods. A large population distributed among a number of semi-isolated breeding groups is known as polytypic. A PGA introduces the concept of interconnected demes. The local selection and reproduction rules allow the species to evolve locally, and diversity is enhanced by migrations of strings among demes.

In Subsection 3.1 we present some generalities on the EA branch dedicated to PGAs. Next, in Subsection 3.2, we detail the mechanics of a parallel genetic algorithm more formally.

3.1. Introduction to Parallel Genetic Algorithms

Let us begin by explaining why one should bother using a parallel GA. We can summarize the advantages of using a PGA in Table II. A PGA has the same advantages as a serial GA, consisting in using representations of the problem parameters (and not the parameters themselves), robustness, easy customization for a new problem, and multiple-solution capabilities. These are the characteristics that led GAs and other EAs to be worthy of study and use. In addition, a PGA is usually faster, less prone to finding only sub-optimal solutions, and capable of cooperating with other search techniques in parallel.

TABLE II. ADVANTAGES OF USING A PGA

Advantages of using a Parallel Genetic Algorithm:
1. Works on a coding of the problem (least restrictive)
2. Basically independent of the problem (robustness)
3. Can yield alternative solutions to the problem
4. Parallel search from multiple points in the space
5. Easy parallelization as islands or neighborhoods
6. Better search, even if no parallel hardware is used
7. Higher efficiency and efficacy than sequential GAs
8. Easy cooperation with other search procedures

For an overview of the applications of parallel genetic algorithms (PGAs) see [3], [4], [7], [66]. Also, there is a lot of evidence of the higher efficacy and efficiency of PGAs over traditional sequential GAs (for example [1], [7], [16], [55], [59]).

PGAs are a class of guided random evolutionary algorithms (see a taxonomy of search methods in Figure 5). If we take a deeper look at them we can distinguish among four different types of parallelization [16]. Types 1 and 2 proceed in the same way as a sequential GA, but in a faster manner. This is achieved either by profiting from a code parallelizer embedded in the compiler, or else by the explicit parallelization (master-slave global parallelization) of the genetic operators and/or evaluations. However, only for problems with a time-consuming function evaluation do they represent a viable choice; otherwise the communication overhead is higher than the benefits of their parallel execution.

Many different models have been proposed for the rest of PGAs (types 3 and 4). However, all these existing models can fit into two more general classes depending on their grain of parallelism, called coarse [68], [73] or fine grain [21], [32], [46] parallel GAs (cgpGAs and fgpGAs). This classification relies on the computation/communication ratio. If this ratio is high we call the PGA a coarse grain algorithm, and if low then we call it a fine grain parallel GA. However, as we will point out in the next section and have suggested in the introduction, some other criteria exist to distinguish them. What we really have is a continuum of model types between coarse and fine grain models of parallel GAs, all of them evolving separate populations. The implementation of these separate evolution models differs from each other, giving birth to numerous variants, some (many) of them difficult to classify.


Coarse grain PGAs are also known as distributed or island GAs (dGAs), and fine grain PGAs are known as cellular (cGAs), diffusion, or massively-parallel GAs. The early models of cgpGAs were conceived of as running on MIMD platforms, while fgpGAs were developed to exploit SIMD parallelism.

(Figure 5: search techniques divide into calculus-based (direct and indirect: Fibonacci, Newton, Greedy), random (guided: Simulated Annealing, Tabu Search, Evolutionary Algorithms, and Neural Networks such as Hopfield, Kohonen Maps, and Multilayer Perceptrons; non-guided: Las Vegas), and enumerative (guided: Dynamic Programming, Branch & Bound; non-guided: Backtracking) methods. Evolutionary Algorithms comprise Evolution Strategies, Evolutionary Programming, Genetic Programming, and Genetic Algorithms; GAs split into sequential GAs (generational, steady-state, messy) and parallel GAs of four types: (1) automatic parallelism (compiler), (2) one population with parallel evaluations, crossover, and mutation, (3) coarse grain PGAs, and (4) fine grain PGAs, the last two being homogeneous or heterogeneous at both the hardware platform and the software (search) levels.)

Figure 5. Taxonomy of search techniques.

A typical coarse grain parallel GA is a ring (hypercube or other disposition) of sub-populations running in parallel with sparse migrations of copies or of individuals among them [68]. On the other hand, the most popular fine grain parallel GA is a toroidal grid of individuals, each one interacting only with its neighborhood [46].

A PGA can run on a network of computers or on massively parallel computers, independently of its granularity. This motivates a further classification depending on the homogeneity of the hardware: do the sub-algorithms run on similar computers? On the other hand, the sub-populations belonging to a cgpGA (local strings in a fgpGA) can evolve by applying the same techniques or by using different kinds of coding, operators, parameters, etc. The latter is the reason for our distinction about homogeneity at the software level (see the bottom part of Figure 5).

3.2. Algorithmic Description of a PGA

In this section we formalize and visualize the different types of PGAs from a unifying point of view. This is aimed at helping future research in the field: comparisons, knowledge exchange, proposal of new models, etc. The outline of a general PGA is shown in Algorithm 1. As a stochastic technique, we can distinguish three major steps, namely initial sampling, optimization, and checking the stopping criterion. Therefore, it begins (t=0) by randomly creating a population P(t=0) of µ structures, each one encoding the p problem variables on some alphabet.

Each structure is usually a vector over $\mathbb{B}=\{0,1\}$ ($I = \mathbb{B}^{p \cdot l_x}$) or over $\mathbb{R}$ ($I = \mathbb{R}^p$). Consequently, each problem variable is encoded in $l_x$ bits or in a floating-point number. However, as mentioned before, other non-string representations are possible.


ALGORITHM 1: PARALLEL GENETIC ALGORITHM

t := 0;
initialize: $P(0) := \{\vec{a}_1(0), \ldots, \vec{a}_\mu(0)\} \in I^\mu$;
evaluate: $P(0): \{\Phi(\vec{a}_1(0)), \ldots, \Phi(\vec{a}_\mu(0))\}$;
while $\iota(P(t)) \neq$ true do   // Reproductive plan
    select: $P'(t) := s_{\Theta_s}(P(t))$;
    recombine: $P''(t) := \otimes_{\Theta_c}(P'(t))$;
    mutate: $P'''(t) := m_{\Theta_m}(P''(t))$;
    evaluate: $P'''(t): \{\Phi(\vec{a}'''_1(t)), \ldots, \Phi(\vec{a}'''_\lambda(t))\}$;
    replace: $P(t+1) := r_{\Theta_r}(P'''(t) \cup Q)$;
    <communication>
    t := t + 1;
end while
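A hedged Python sketch of how Algorithm 1 maps onto a distributed implementation follows; the ring topology, migration gap, truncation-style parent selection, and all parameter values are illustrative assumptions of ours, not part of the formal model:

```python
import random

def parallel_ga(fitness, num_islands=4, island_size=20, length=16,
                migration_gap=5, generations=50, pm=0.05):
    """Island-model sketch of Algorithm 1: each island runs the reproductive
    plan; the <communication> step sends a copy of its best string to the
    next island on a ring (illustrative choices throughout)."""
    islands = [[[random.randint(0, 1) for _ in range(length)]
                for _ in range(island_size)] for _ in range(num_islands)]
    for t in range(generations):
        for pop in islands:                    # select / recombine / mutate / replace
            pop.sort(key=fitness, reverse=True)
            parents = pop[:island_size // 2]
            children = []
            while len(children) < island_size - len(parents):
                p1, p2 = random.sample(parents, 2)
                cut = random.randint(1, length - 1)
                child = p1[:cut] + p2[cut:]
                children.append([1 - g if random.random() < pm else g
                                 for g in child])
            pop[len(parents):] = children
        if t % migration_gap == 0:             # <communication>: ring migration
            for i, pop in enumerate(islands):
                best = max(pop, key=fitness)
                neighbor = islands[(i + 1) % num_islands]
                neighbor[random.randrange(island_size)] = list(best)
    return max((max(p, key=fitness) for p in islands), key=fitness)

best = parallel_ga(fitness=sum)
```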

To illustrate the very different genotypes that a GA can evolve, we show in Figure 6 four examples of complex data structures used in diverse applications. In Figure 6a, a binary string represents a mapping from processes (genes) to processors (gene values) [4]. In Figure 6b, every individual encodes a tree of symbols representing the rules of a fuzzy logic controller [5]. Figure 6c describes how a binary string can be interpreted as an integer sequence; each integer value represents a fired transition in the finite state machine of a communication protocol [6]. Finally, in Figure 6d we plot an individual containing real-valued genes encoding the weights of a neural network in order to train it with a GA [3].

(Figure 6: panel (a) shows a binary string whose positions P0, P1, P2, ..., Pn map processes to processors; panel (b) a tree of symbols for the rule IF pos IS NL AND vel IS NL THEN for IS PL; panel (c) the integer sequence [10, 35, 204, 78, 27, 106] interpreted as fired transitions of two communicating finite state machines; panel (d) a genetic string of real-valued weights connecting the input, hidden, and output layers of a neural network.)

Figure 6. Complex data structures -I- evolved by a PGA. In this figure every individual encodes: (a) an assignment of processes to processors, (b) a rule base for a fuzzy logic controller, (c) a firing sequence for a finite state machine, and (d) a real-valued vector containing the weights of a neural network.


An evaluation function Φ is needed each time a new structure is generated in the algorithm. This evaluation is used to associate a real value to the (decoded) structure indicating its quality as a solution to the problem. All the mentioned structures encode tentative solutions to complex systems in a single genetic individual. This individual is used to simulate this complex system every time an evaluation is requested by the algorithm. Consequently, it can be inferred that considerable time is spent when complex and real-life applications are being tackled with GAs, thus supporting our claims about the need of using parallel GAs as more efficient search methods.

Afterwards, the GA iteratively applies some set of variation operators on some selected structures from the current population. The goal is to create a new pool of λ tentative solutions and evaluate them to yield P(t+1) from P(t). This generates a sequence of populations P(0), P(1), P(2), ... with increasingly fitter structures. The stopping criterion ι is to fulfill some condition like reaching a given number of function evaluations, finding an optimum (if known), or detecting stagnation in the algorithm after a given number of generations. See [26], [36] and [49] for the introductory background.

The selection $s_{\Theta_s}$ uses the relationship among the fitness of the structures to create a mating pool. Some parameters $\Theta_s$ might be required depending on the kind of selection [13]. Typical variation operators are crossover ($\otimes$, a binary operator) and mutation (m, a unary operator). Crossover recombines two parents by exchanging string slices to yield two new offspring, while mutation randomly alters the contents of these new structures. They both are stochastic operators whose behavior is governed by a set of parameters like a probability of application: $\Theta_c = \{p_c\}$ (high) and $\Theta_m = \{p_m\}$ (low).

Finally, each iteration ends by selecting the µ new individuals that will comprise the new population. For this purpose, the temporary pool P'''(t) plus a set Q are considered. Q might be empty (Q=∅) or contain part or all of the old population, Q=P(t). This step applies a replacement policy r that uses the temporary pool (and optionally the old pool) to compute the new population of µ individuals. The best structure in one population usually deterministically survives in the next generation, giving rise to the so-called elitist evolution (or R-elitism if R>1, i.e., more than one string survives). The best structure ever found is used as the PGA solution.

Many sequential variants exist, but only two of them are especially popular. The first one is the generational GA (genGA), where a whole new population is created from the old one (λ=µ) to replace it (Q=∅). The second variant is the steady-state GA (ssGA), in which only a few structures are generated in each iteration (λ=1 or 2) and they are inserted in the current population (Q=P(t)) [67] (see Figure 7). At present, many skilled practitioners use genGAs in which λ<µ and Q=P(t), thus providing other intermediate models of panmictic evolution.

(Figure 7: a one-dimensional scale of λ from λ=1 (steady-state) through 1<λ<µ to λ=µ (generational).)

Figure 7. Difference between steady-state and generational GAs. The intermediate region is occupied by many algorithms generating and replacing only a given percentage of the population.

Many works, such as [39], [67] and [72], show that the one-at-a-time reproduction of a ssGA is often better than a pure generational GA. However, many works still deal with genGAs since some of their drawbacks can be solved by using improved operators, which, incidentally, could also improve the canonical ssGA as well. In any case, ssGAs deserve more comparison studies to highlight their advantages and drawbacks.
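The contrast reduces to the replacement step; a minimal sketch, where displacing the worst strings is just one common ssGA policy chosen here for illustration:

```python
def generational_replace(pop, offspring):
    """genGA: a whole new population (lambda = mu) replaces the old one (Q = empty)."""
    return offspring

def steady_state_replace(pop, offspring, fitness):
    """ssGA: the one or two new structures (lambda = 1 or 2) are inserted into
    the current population (Q = P(t)); here they displace the worst strings,
    one common policy among several possible ones."""
    pool = pop + offspring
    return sorted(pool, key=fitness, reverse=True)[:len(pop)]
```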


In a parallel GA there exist many elementary GAs working on separate sub-populations Pi(t). Each sub-algorithm includes an additional phase of periodic communication with a set of neighboring sub-algorithms located on some topology. This communication usually consists in exchanging a set of individuals, although nothing prevents the sub-algorithms from exchanging other kinds of information such as population statistics. All the sub-algorithms are usually assumed to perform the same reproductive plan; otherwise the PGA is heterogeneous [2], [34] (even the representation could differ among the islands, posing new challenges to the exchange of individuals [44]).

In a distributed GA, demes are loosely-coupled islands of strings (Figure 8b). A cellular GA (Figure 8c) defines a NEWS neighborhood (North-East-West-South in a toroidal grid) in which overlapping demes of 5 strings (4+1) execute the same reproductive plan. In every neighborhood (deme) the new string computed after selection, crossover, and mutation replaces the current one only if it is better (binary tournament), although many other variants are possible. This process is repeated for all the neighborhoods in the grid of the cellular GA (there are as many neighborhoods as strings).
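A minimal sketch of one such sweep (the binary tournament inside the deme and the operator rates are illustrative choices of ours):

```python
import random

def cga_step(grid, fitness, pm=0.05):
    """One sweep of a cellular GA over a toroidal grid: every string is
    recombined within its NEWS neighborhood and replaced only if the new
    string is better (illustrative sketch)."""
    rows, cols = len(grid), len(grid[0])

    def tournament(deme):
        a, b = random.sample(deme, 2)        # binary tournament inside the deme
        return a if fitness(a) >= fitness(b) else b

    new_grid = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            deme = [grid[i][j],              # the 4 + 1 strings of the neighborhood
                    grid[(i - 1) % rows][j], grid[(i + 1) % rows][j],
                    grid[i][(j - 1) % cols], grid[i][(j + 1) % cols]]
            p1, p2 = tournament(deme), tournament(deme)
            cut = random.randint(1, len(p1) - 1)
            child = [1 - g if random.random() < pm else g    # SPX + bit-flip mutation
                     for g in p1[:cut] + p2[cut:]]
            if fitness(child) > fitness(grid[i][j]):
                new_grid[i][j] = child       # replace the current string only if better
    return new_grid

# Usage: an 8x8 toroidal grid of 16-bit strings with onemax fitness
# grid = [[[random.randint(0, 1) for _ in range(16)] for _ in range(8)] for _ in range(8)]
# grid = cga_step(grid, fitness=sum)
```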

With regard to the classes of Figure 5, we suggest in Figure 8 several implementations of PGAs. Global parallelization consists in evaluating, and maybe crossing and mutating, all the structures in parallel, while selection uses the whole population. Interesting considerations on global parallelization can be found in [16]. This model provides lower runtime only for slow objective functions; an additional limitation is that the search mechanism uses a single population. Automatic parallelization is rarely found, since the compiler must provide the parallelization of the algorithm automatically.

The other hybrid models in Figure 8 combine different parallel GAs at two levels in order to enhance the search in some way. Interesting surveys on these and other parallel models can be found in [1], [16], [17]. Hierarchies of GAs are the most recurrent models found in the literature. In Figure 8d we can appreciate a distributed algorithm in which every island runs a cellular GA. In Figure 8e several algorithms using global parallelization are combined to create a ring of islands. Finally, Figure 8f shows two levels of coarse grain PGAs, the inner level having a fully connected topology, and the outer level having a simple ring topology.

(Figure 8: panel (a) shows a master with slave workers; (b) a ring of islands; (c) a toroidal grid of individuals; (d)-(f) two-level combinations of these models.)

Figure 8. Different models of PGA: (a) global parallelization, (b) coarse grain, and (c) fine grain. Many hybrids have been defined by combining PGAs at two levels: (d) coarse and fine grain, (e) coarse grain and global parallelization, and (f) coarse grain plus coarse grain.

We want to point out that coarse (cgPGA) and fine grain (fgPGA) PGAs are subclasses of the same kind of parallel GA, consisting in a set of communicating sub-algorithms. We propose a change in the nomenclature to call them distributed and cellular GAs (dGA and cGA), since the grain is usually intended to refer to their computation/communication ratio, while actual differences can also be found in the way in which they both structure their population (see Figure 9).

While a distributed GA has large sub-populations (>>1), a cGA typically has only one string in every sub-algorithm. For a dGA the sub-algorithms are loosely connected, while for a cGA they are tightly connected. In addition, in a dGA there exist only a few sub-algorithms, while in a cGA there is a large number of them.


(Figure 9: a cube whose three axes are the number of sub-algorithms, their coupling, and the sub-population size; dGAs and cGAs occupy opposite corners.)

Figure 9. The Structured-Population Genetic Algorithm Cube.

4. THEORETICAL EXPLANATION OF WHY dGAs WORK

In this section we present the most important results that formalize the work of a dGA. We analyze the schema theorem [26], [36] as the most popular explanation of the kind of search that a sequential GA performs. In addition, there exist new and interesting complementary theories, such as the response to selection [50], that we do not present here, although they offer improved insights into the convergence and dynamics of evolutionary algorithms. We will address some theoretical extensions for a dGA algorithm later in this section. Finally, we review a figure of merit to characterize the search in terms of the number of processed schemata and the convergence time.

Our goal is to show that parallel dGAs and serial GAs have a common explanation of their effectiveness, i.e., the exponential allocation of trials to better schemata in the population is also the main underlying force guiding the search in the distributed evolution.

4.1. The Schema Theorem

The basic work of any GA resides in the implicit management of schemata. Any string is considered to be an instance of several schemata. If we are working with strings on a binary alphabet V={0,1}, a schema is a hyper-plane of the search space that can be represented as a string of the extended alphabet V+={0,1,*}. The '*' symbol indicates a don't care (undefined) position in the schema.

Any given string in the GA is an instance of a large number of schemata. For example, the strings s1=1110 and s2=1100 are possible instances of the schema S=11*0. Good schemata have high average fitness, a small number o(S) (order) of defined (non '*') positions, and a short distance δ(S) (defining length) between the first and last defined positions. This kind of schema is good since it has a lower probability of being corrupted, and therefore a higher probability of being mixed with another good schema.

Example: S1=011*1**, S2=0****** → o(S1)=4, o(S2)=1 and δ(S1)=5−1=4, δ(S2)=1−1=0.
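These measures and the instance relation are straightforward to compute; a small sketch (with illustrative helper names of our own) that reproduces the example above:

```python
def order(schema):
    """o(S): number of defined (non '*') positions."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """delta(S): distance between the first and last defined positions."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if fixed else 0

def matches(string, schema):
    """A string instantiates a schema if it agrees on every defined position."""
    return all(c == '*' or c == b for b, c in zip(string, schema))

assert order("011*1**") == 4 and defining_length("011*1**") == 4
assert order("0******") == 1 and defining_length("0******") == 0
assert matches("1110", "11*0") and matches("1100", "11*0")
```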

Holland's [36] initial estimates work out a result known as the implicit parallelism of a GA. This effect explains why, although the GA actually works on n strings, it is really working with O(n³) schemata in the schema space. Every binary string is an instance of up to $2^l$ schemata. A population of n individuals contains a maximum of $n \cdot 2^l$ schemata.

Although this basic description still holds for binary alphabets, there exists a new interpretation of schemata that overcomes the drawbacks of low expressiveness that this initial interpretation assigned to alphabets of higher cardinality (for example to integers $V=\mathbb{Z}$ or even to reals $V=\mathbb{R}$) [9].

In Figure 10 we summarize the additive effects that every operation (selection, crossover, and mutation) causes on the number of instances (strings) belonging to a given schema.

Page 12: A Survey of Parallel Distributed Genetic Algorithmsneo.lcc.uma.es/Articles/albatroyaxx_2.pdfA Survey of Parallel Distributed Genetic Algorithms ABSTRACT In this work we review the

-11-

This theorem explains why a good solution schema will receive an exponentially increasing number of strings (m(S,t)) as the evolution proceeds. $P_c$ and $P_m$ are the probabilities of applying crossover and mutation, f(S) is the schema average fitness, and $\bar{f}$ is the population average fitness.

Under proportionate selection (Figure 10A), every above-average-fitness schema obtains an exponential number of instances throughout the generations (t). Under crossover (Figure 10B), only short schemata are appropriately sampled in the next generation (depending on the defining length of the schema and $P_c$). Construction of new and existing (re-sampling) schemata also takes place through crossover. Finally, low order schemata have a higher probability of surviving mutation (Figure 10C).

Hence, the selection defines the fittest strings, crossover samples new points in the problem space, and finally the mutation brings new alleles into the population (and corrupts some of the existing ones) in order to get out of local optima, also allowing smooth changes for final tuning.

Therefore, if a problem can be expressed in terms of basic building blocks on some kind of alphabet (floating point -FP- or integer numbers are very popular in GAs and EAs in general), and a fitness function for strings can be defined, then we are very likely able to solve this problem with a GA.

The schema theorem has some limitations in predicting string losses, and thus it has to be considerably improved before being used in a computational framework. Also, some generalizations of the concept of a schema (such as defining formae) have met with great success. In fact, some other concepts, like the variance of the fitness landscape, the representation used, and the operators, are needed to give a full picture of the behavior of genetic algorithms.

THE EFFECTS OF SELECTION (A)

1. $m(S,t+1) = m(S,t) \cdot \frac{f(S)}{\bar{f}}$   (If f(S) is above average...)
2. $m(S,t+1) = m(S,t) \cdot \frac{\bar{f} + c \cdot \bar{f}}{\bar{f}} = (1+c) \cdot m(S,t)$
3. $m(S,t) = m(S,0) \cdot (1+c)^t$   (The fittest schemata grow)

THE EFFECTS OF CROSSOVER (B)

1. $P_s(S) \ge 1 - P_c \cdot \frac{\delta(S)}{l-1}$   (Survival probability)
2. $m(S,t+1) \ge m(S,t) \cdot \frac{f(S)}{\bar{f}} \cdot \left[ 1 - P_c \cdot \frac{\delta(S)}{l-1} \right]$

THE EFFECTS OF MUTATION (C)

1. $P_s(S) = (1 - P_m)^{o(S)}$
2. $P_m \ll 1 \Rightarrow (1 - P_m)^{o(S)} \approx 1 - o(S) \cdot P_m$   (Survival probability)

THE SCHEMA THEOREM (D)

$m(S,t+1) \ge m(S,t) \cdot \frac{f(S)}{\bar{f}} \cdot \left[ 1 - P_c \cdot \frac{\delta(S)}{l-1} - o(S) \cdot P_m \right]$

Figure 10. The schema theorem predicts the number of instances m(S,t+1) of a schema S at time t+1.
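The final bound of Figure 10 (D) can be evaluated directly; the following sketch uses purely illustrative numbers, not values taken from any experiment reported here:

```python
def schema_bound(m_t, f_S, f_avg, delta_S, o_S, l, pc, pm):
    """Lower bound of the schema theorem (Figure 10, D):
    m(S,t+1) >= m(S,t) * (f(S)/f_avg) * [1 - pc*delta(S)/(l-1) - o(S)*pm]."""
    return m_t * (f_S / f_avg) * (1 - pc * delta_S / (l - 1) - o_S * pm)

# An above-average, short, low-order schema keeps growing (illustrative numbers)
print(schema_bound(m_t=10, f_S=1.2, f_avg=1.0, delta_S=2, o_S=3,
                   l=32, pc=0.8, pm=0.01))   # about 11.0 expected instances
```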

4.2. Extensions of the Schema Theorem for the dGA

We can consider dGAs to be as efficient and robust as their sequential counterparts if they are shown to allocate better and better strings to good schemata by following an exponential law. Some results [29] can be summarized by saying that, if $P_m$ is the probability of mutation in all the distributed nodes of the dGA, then a schema S of order o(S) survives in the whole population with probability $(1-P_m)^{o(S)}$, just as in the sequential GA. In addition, it is shown in [29] that the probability of being successfully recombined, and also the selection growth rate for good schemata, are basically the same as for the sequential algorithm. To reach this conclusion it is necessary to suppose that the average fitness is approximately the same in all the sub-algorithms, which is difficult to achieve in practice.


We must also note that this last assumption is only correct for synchronous dGAs, or else for asynchronous ones running on computers of similar characteristics. For a software/hardware heterogeneous parallel dGA this assumption might not hold. This represents an open research issue.

Of course, these results only mean that the dGA is an efficient and robust search procedure. The actual performance of this algorithm is often better than that of the sequential GA on many problems; see for example [42], [45], [51] and many other references at the end of this paper. Not only do we have sub-algorithms with smaller populations that run every step in parallel (reducing the whole processing time), but this kind of search also maintains samples of very different promising zones of the search space in every population, thus showing a higher efficacy and a higher probability of obtaining a solution.

However, much work is still needed, since the schema characteristics of a distributed GA are theoretically proven to hold only under some conditions [55]: a fully connected topology, uniform migration every step, and other similar parameters that many successful dGAs do not directly follow.

4.3. Optimum Schema Processing Rates in a Parallel dGA

There exists an interesting measure of the relative merit of any parallel GA, developed by Goldberg [27], which relates the schema processing rates to the time of convergence of the algorithm and the parallel disposition of the individuals. The results presented in this section are additionally appealing because a parallel dGA is loosely coupled, and communications are sparsely undertaken.

If we had a perfectly parallel machine with one processor for every individual, then the convergence time of the algorithm would be constant, $t_c \in \Theta(1)$, with regard to the length of strings. But communication delays and other time-consuming operations will slow down this ideal value.

If $\varphi$ is the ratio between the parallel and sequential speeds, and if we suppose it can be expressed as a power of the number of processors, $\varphi = n^\beta$, then $\beta$ is the parallelization degree. The time of convergence can be expressed as $t_c = n^{1-\beta}$, where $\beta=0$ is the sequential GA and $\beta=1$ represents the perfectly parallel GA. Therefore, we can define a figure of merit by relating the number of schemata processed (any order and length are considered, since the authors in [27] used asymptotic approximations) in the whole evolution ($n_c$ is the number of steps and $t_c$ is the time for one step):

$M(n,l,\beta) = \frac{S(n,l)}{t} = \frac{S(n,l)}{K \cdot n \cdot \log n \cdot n^{1-\beta}}$   (2)

$M(n,l,\beta) = \frac{n \cdot 2^l}{K \cdot n \cdot \log n \cdot n^{1-\beta}}$   (3)

where K is a constant for the actual number of generations and $n \cdot \log n$ is the number of generations of the algorithm to converge to a solution (worst case). For medium-size populations, the schema function S(n,l) can be estimated as $S(n,l) \approx n \cdot 2^l$, and the right-hand equation gives us the merit of a parallel GA depending on the population size (n), string length (l), and degree of parallelism (β) (see Figure 11).

The relative merit of a sequential GA (β=0) versus a perfectly parallel GA (β=1) is $M(n,l,0)/M(n,l,1) \approx 1/n$. This means that the parallel version is n times better than the sequential one. On the other hand, the degree of parallelism needed to get twice the merit of a sequential GA is:

$\beta_{twice} = \frac{\log 2 + \log n}{\log n} - 1$   (4)

This means that this degree of parallelism depends on the number of strings in the population, and that values as small as β=0.2 can easily allow doubling the performance of a sequential algorithm. Therefore, we do not need ideally perfect parallel hardware to obtain an algorithm of high performance.
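A small sketch evaluating equations (3) and (4) as reconstructed above, with K=1 as in the simulation of Figure 11 (the helper names are our own):

```python
import math

def merit(n, l, beta, K=1.0):
    """Equation (3): schemata processed in one run,
    M(n,l,beta) = n*2**l / (K * n * log(n) * n**(1-beta))."""
    return (n * 2 ** l) / (K * n * math.log(n) * n ** (1 - beta))

def beta_twice(n):
    """Equation (4): degree of parallelism doubling the merit of a sequential GA."""
    return (math.log(2) + math.log(n)) / math.log(n) - 1

n, l = 100, 200
assert abs(merit(n, l, beta_twice(n)) / merit(n, l, 0) - 2) < 1e-9
print(merit(n, l, 1) / merit(n, l, 0))   # the perfectly parallel GA is n times better
```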


(Figure 11: the plot is titled "merit of a dGA versus different degrees of parallelism (K=1, n=100, l=200)"; the number of processed schemata in one run, rising from 0 to about 3.5E+59, is plotted against the degree of parallelism in [0..1].)

Figure 11. Relationship between the merit and the degree of parallelism. Computer simulation with 100 strings of 200 bit length.

5. CLASSIFICATION OF PARALLEL AND SEQUENTIAL GAs

In this section we make a survey of the algorithms and software most readily accessible in this area. We first present parallel dGAs from a historical and application-oriented perspective. Then we encompass a classification of models and software, this time also taking into account some sequential versions.

Let us begin with the temporal review of parallel distributed GAs. The most well-known implementations and their main characteristics, sorted by date of published material, are:

TABLE III. OVERVIEW OF PARALLEL DISTRIBUTED GAs BY YEAR

Par. dGA | Ref. | Year | Main Characteristics
PGA | [55] | 1987 | Generational islands on an Intel iPSC hypercube (8 CPUs). Migrate the best. Dynamic topology
dGA | [68] | 1989 | Distributed populations. Good results with 20% of population migrants every 20 generations
GENITOR II | [73] | 1990 | Steady-state islands with ranked selection and reduced surrogate crossover
PGA | [51] | 1991 | Sub-populations in a circular ladder-like 2-D topology. Migrate the best, local hill-climbing
SGA-cube | [23] | 1991 | Made for nCUBE 2. This is the parallel extension of the well-known simple GA of Goldberg
PARAGENESIS | --- | 1992 | Made for the CM-200. This places one individual in every CPU
PeGAsuS | [59] | 1993 | Targeted for MIMD machines and written in a very high-level and flexible description language
GAMAS | [56] | 1994 | Uses 4 very heterogeneous species (islands) and quite specialized migrations and genotypes
iiGA | [44] | 1994 | Injection island GA with hierarchical heterogeneous nodes and asynchronous migrations
SP1-GA | [42] | 1994 | 128 steady-state islands on an IBM SP1 machine of 128 nodes. 2-D toroidal mesh. mr=1
DGENESIS | [47] | 1994 | Free topology, flexible migration, and policies for selection. Implemented with sockets (UDP)
GALOPPS | [30] | 1996 | Very flexible. Implemented with PVM and comprising a large number of operators
GDGA | [34] | 1996 | Synchronous. Simulated on one processor. Generational. Uses fuzzy crossover and FP genes
CoPDEB | [2] | 1996 | Every island uses its own probabilities for mutation, crossover, and specialized operators

Of course this list is not complete, but it gives us a feeling of the evolution of parallel dGAs over recent years. We will offer more details about these and other algorithms in the following tables. There has been a tendency towards making parallel dGAs that run on clusters of machines, paying much attention to their portability. Old systems like transputer machines do not support recent implementations. Also, new developments in JAVA are appearing as a confirmation of these tendencies.

From a different point of view, we present in Table IV a set of representative applications in order to show the wide spectrum (i.e., robustness) of successful studies. It can be seen that the set of applications is quite diverse, thus attesting to the relative importance of this kind of heuristics.


TABLE IV. SOME APPLICATIONS OF PARALLEL DISTRIBUTED GAs

Reference | Application Domain
[7] | Parallel training of artificial neural networks, fuzzy logic controllers, and communication protocols
[19] | Synthesis of VLSI circuits
[31] | Function optimization
[42] | Set partitioning problem
[44] | Graph partitioning problem
[49] | Constraint optimization, reordering problems, ...
[51] | Traveling salesperson problem (TSP), function optimization
[53] | Distributing the computing load onto a set of processing nodes
[56] | The file allocation problem, XOR neural network, sine envelope sine wave function
[66] | Systems modeling, protein tertiary structure prediction, and two-dimensional bin packing problems
[68] | Walsh polynomials
[72] | Optimization of the connection weights of neural networks (XOR, bin-adder, ...), and function optimization

There exist many other applications in very interesting domains such as frequency assignment problems, market predictions, game theory, filters, graphic transformations, etc. Some of these applications motivated the need for new classes of genotypes that can be thought of as consisting of variable-length strings of arbitrary symbols. A further extension led to the mentioned genetic programming paradigm [40], in which the individuals are trees of symbols representing computer programs, fuzzy controllers, or many other kinds of high-level data structures. This is a new and important research area in which parallel dGAs are developing quickly. Also, new genotypes and operators are being designed for dealing with constraint problems and combinatorial optimization.

Besides that, the importance of cellular GAs is also growing due to recent studies in which no parallel hardware is used, but in which the search is still enhanced due to the existence of neighborhood-like spatial dispositions [32], [46] or [51]. For these reasons we give in Table V a summary of the kind of parallelism, topology, and some known applications of several models of parallel GAs (not only of the distributed ones). It can be noted that many hybrids, static/dynamic topologies, and applications have been addressed. All the implementations represent models of coarse/fine grain PGAs with techniques to improve local search, execution time, or other issues related to efficiency.

TABLE V. DETAILS OF SEVERAL PARALLEL GAs

Parallel GA | Kind of Parallelism | Topology | Present Applications
ASPARAGOS | Fine grain. Applies hill-climbing if no improvement | Ladder | TSP
CoPDEB | Coarse grain. Every sub-pop. applies different operators | Full Connected | Func. opt. and ANN's
DGENESIS 1.0 | Coarse grain with migrations among sub-populations | Any Desired | Function optimization
ECO-GA | Fine grain. One of the first of its class | Grid | Function optimization
EnGENEer | Global parallelization (parallel evaluations) | Master / Slave | Various
GALOPPS 3.1 | Coarse grain. A very portable software | Any Desired | Func. opt. and transport
GAMAS | Coarse grain. Uses 4 species of strings (nodes) | Fixed Hierarchy | ANN, func. opt., ...
GAME | Parallel version not available yet. Object oriented | Any Desired | TSP, func. opt., ...
GAucsd 1.2 / 1.4 | Distributes the experiments over the network (not parallel) | <sequential> | <same as GENESIS>
GDGA | Coarse grain. Admits explicit exploration/exploitation | Hierarchy | Func. opt. (FP genes)
GENITOR II | Coarse grain. Interesting crossover operator | Ring | Func. opt. and ANN's
HSDGA | Hierarchical coarse and fine grain GA. Uses E.S. | Ring, Tree, Star, ... | Function optimization
PARAGENESIS | Global par. & coarse grain. Made for the CM-200 (1 ind.-1 CPU) | Local sel. (seq.) | Function optimization
PeGAsuS | Coarse or fine grain. High-level programming. MIMD | Multiple | Teaching and func. opt.
PGA 2.5 | Spatially structured selection. Allows migrations | Multiple | Knapsack and func. opt.
PGAPack | Global parallelization (parallel evaluations) | Master / Slave | Function optimization
RPL2 | Coarse and fine grain. Very flexible and open to new models | Any Desired | Research and business
SGA-Cube | Global parallelization. Made for the nCUBE 2 | Hypercube | Function optimization


In order to complete the preceding historical and application overviews, we now give an extensive classification of sequential and parallel GAs into three major categories [59] according to their specific objectives:

• Application Oriented: These are black-box systems designed to hide the details of GAs and help the user in developing applications for specific domains. Some of these are useful for different purposes such as scheduling or telecommunications (e.g. PC/BEAGLE). Some others are much more application oriented (like OMEGA for finance). Usually they are menu-driven and easily parameterizable.

• Algorithm Oriented: Based on specific algorithms. The source code is available in order to provide their easy incorporation into new applications. This class may be further sub-divided into:

- Algorithm Specific: They contain one single GA (e.g. GENESIS). Their users are system developers (for making applications) or GA researchers (interested in extensions).

- Algorithm Libraries: They support a group of algorithms in a library format (e.g. OOGA). They are highly parameterized and contain many different operators to help future applications.

• Tool Kits: These are flexible environments for programming a range of different GAs and applications. They can be sub-divided into:

- Educational: Used for introducing GA concepts to novice users (e.g. GA Workbench). The basic techniques to track executions and results during the evolution are easily managed.

- General Purpose: Useful for modifying, developing, and supervising a wide range of operators, algorithms and applications (e.g. Splicer).

We present some detailed tables with a classification of algorithms, and mark this classification with the above labels. Also, we use the FREE label for distinguishing between commercial and public license systems. We have also opened the contents of this table to non-PGAs to offer a better view of the global achievements of this class of evolutionary algorithms.

In Table VI, many software packages are classified according to the kind of algorithm they represent, the evolution mode, the language used, and some other information about the kind of machines or operating systems supporting the implementation. Web (URL) pointers are provided for quick access to the referred software wherever possible; otherwise, mail, FAX, or phone numbers have been included.

The classifications consider many different approaches to evolutionary algorithms. Some of them are representative of a given kind of application or theoretical thought. Depending on the context, the classification can vary. For example, authors may think they have constructed a library of GAs when they have only constructed a single reproductive plan in which selection or crossover operators can be changed by the user; in the context of this paper such an implementation is too restrictive to be considered a library of GAs.

In Table VII we explain the meaning of some symbols used in Table VI, and finally, in Table VIII, we extend the brief classification found in [66] with all the mentioned PGA software.


TABLE VI. OVERVIEW OF SOME UP-TO-DATE EXISTING SEQUENTIAL AND PARALLEL GENETIC ALGORITHMS

Genetic Algorithm | Ref. | Lang. | OS/Machine | Kind of GA | Algorithm | Seq/Par | Evolution | HOW TO GET IT!
ASPARAGOS | [32] | C | 64 T800 transp. | Algorithm Specific | Single / Free | PAR | Local | ---
CoPDEB | [2] | C | UNIX | Algorithm Specific | Single / Free | PAR | Gen. | mail to: adamidis@it.teithe.gr
DGENESIS 1.0 | [47] | C | UNIX | Algorithm Specific | Single / Free | PAR | Percentage | ftp to: ftp.aic.nrl.navy.mil
ECO-GA | [21] | C? | UNIX? | Algorithm Specific | Single / Free | PAR | Local | mail to: yuval@wisdom.weizmann.ac.il
EM 2.1 | [69] | C | DOS | Algorithm Library | Algrthm. Library | SEQ | Gen./SS/ES | ftp to: [130.149.192.50]/>
EnGENEer | [60] | C | UNIX | General Purpose | Algrthm. Library | SEQ/par | Gen. / SS | Tel: +44 171 637 9111, George Robbins
ESCAPADE 1.2 | [35] | C | --- | Algorithm Library | Algrthm. Library | --- | ES, others... | ftp to: ls11.informatik.uni-dortmund.de
EVOLVER 2.0 | --- | C++ | DOS & MAC | Application Orient. | Single / Not Free | SEQ | --- | http://www.palisade.com
GA Workbench | [37] | C | DOS & WIN | Educational | Algrthm. Library | SEQ | Percentage | ftp to: camcon.co.uk
GAGA | [8] | C | UNIX, DOS, MAC | Algorithm Specific | Single / Free | SEQ | GA-like | ftp to: cs.ucl.ac.uk/darpa/gaga.shar
GAGS | [48] | C++ | UNIX & DOS | General Purpose | Algrthm. Library | SEQ | Gen. | http://kal-el.ugr.es/GAGS
GAlib | [71] | C++ | UNIX | Algorithm Library | Algrthm. Library | SEQ | Gen. | ftp to: lancet.mit.edu/pub/ga/galib242.tar.gz
GALOPPS 3.1 | [30] | C | UNIX, DOS, MAC | Algorithm Specific | Single / Free | PAR | Gen. | ftp to: GARAGe.cps.msu.edu
GAMAS | [56] | C? | UNIX | Algorithm Specific | Single / Free | PAR | Gen. | ---
GAME | [66] | C++ | UNIX & DOS | General Purpose | Algrthm. Library | SEQ/par | Gen. / SS | ftp to: bells.cs.ucl.ac.uk/papagena/game
GAucsd 1.2 / 1.4 | [61] | C | UNIX, DOS ... | Algorithm Specific | Single / Free | SEQ/par | Gen. | http://www.aic.nrl.navy.mil/galist/src/GAucsd14.sh.Z
GDGA | [34] | C | UNIX | Algorithm Specific | Single / Free | SEQ/par | Gen. | mail to: lozano@decsai.ugr.es
GENESIS 5.0 | [33] | C | UNIX, DOS ... | Algorithm Specific | Single / Free | SEQ | Percentage | http://www.aic.nrl.navy.mil/galist/src/genesis.tar.Z
GENEsYs 1.0 | --- | C | UNIX | Algorithm Specific | Single / Free | SEQ | Percentage | [129.217.36.140]/pub/GA/src/GENEsYs-1.0.tar.Z
GENITOR I - II | [73] | C | UNIX | Algorithm Library | Algrthm. Library | SEQ-PAR | SS / Rank | */genitor
HSDGA | [70] | C | HELIOS | Algorithm Specific | Single / Free | PAR | Local | ---
LibGA | [18] | C | UNIX, DOS, NeXT | Algorithm Specific | Algrthm. Library | SEQ | Gen. / SS | */libga
OMEGA | [66] | --- | --- | Application Orient. | Not Free | SEQ | --- | Tel: +44 1371 870254, KiQ Limited (business)
OOGA | [22] | CLOS | any LISP | Algorithm Library | Algrthm. Library | SEQ | Gen. / SS | TSP, P.O. Box 991, Melrose, MA 02176 - USA
PARAGENESIS | --- | C* | CM-200 | Algorithm Specific | Single / Free | PAR | Percentage | [192.26.18.74]/pub/galist/src/ga/paragenesis.tar.Z
PC/BEAGLE | [59] | --- | DOS | Application Orient. | Not Free | SEQ | SS | Tel: +44 117 942 8692, Pathway Research Ltd.
PeGAsuS | [59] | C | UNIX, ... | General Purpose | Not Free | PAR | Gen. / SS | mail to: dirk.schlierkamp-voosen@gmd.de
PGA 2.5 | --- | C | UNIX & DOS | Algorithm Library | Algrthm. Library | PAR | Gen. / Local | http://www.aic.nrl.navy.mil/galist/src/pga-2.5.tar.Z
PGAPack | [43] | Fort., C | UNIX, ... | Algorithm Library | Algrthm. Library | PAR | Gen. / SS | http://www.mcs.anl.gov/pgapack.html
RPL2 | [57] | C | UNIX, ... | General Purpose | Library / Not Free | PAR | Gen. / SS | http://www.quadstone.co.uk/~rpl2
SGA | [26] | Pascal | UNIX | Algorithm Specific | Single / Free | SEQ | Gen. | ftp to: [192.26.18.56] ... sgac_94m.tgz (in C)
SGA-Cube | [23] | C | nCUBE 2 | Algorithm Specific | Single / Free | PAR | Gen. | ftp to: [192.26.18.56] ... sgacub94.tgz
Splicer 1.0 | [54] | C | UNIX & MAC | General Purpose | Algrthm. Library | SEQ | Gen. | ftp to: galileo.jsc.nasa.gov
SUGAL 2.0 | [38] | C | UNIX, DOS, ... | Algorithm Library | Algrthm. Library | SEQ | Gen. / SS | http://osiris.sund.ac.uk/ahu/sugal/home.html
TOLKIEN | --- | C++ | UNIX, DOS, WIN | General Purpose | Algrthm. Library | SEQ | Gen.? | */tolkien
XpertRule GenAsys | --- | --- | --- | Application Orient. | Not Free | SEQ | --- | http://www.attar.com, Attar Software (scheduling)


TABLE VII. MEANING OF THE SYMBOLS

Symbol | Meaning of the Symbol
Local | The algorithm performs operations on every string by interactions with only its neighboring strings
Gen. | Generational: the basic algorithmic step is a full generation of individuals
SS | Steady-State: the basic algorithmic step is the computation of a very low number of individuals (usually one)
Percentage | GAP: the algorithm works with and replaces only a given percentage of the whole population of individuals
ES | Evolution Strategy
Rank | The individuals are sorted according to increasing fitness and the selection uses the 'position' in the rank (and not the fitness itself)
par | A parallel version is not available yet, or parallelism is a minor characteristic of the execution of the algorithm
SEQ/PAR | The algorithm is able to work either sequentially or in parallel

* http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/genetic/ga/systems
> pub/software/Evolution_Machine/em_tc.exe

TABLE VIII. CLASSIFICATION OF THE GENETIC SYSTEMS

Application Oriented:
EVOLVER 2.0, OMEGA, PC/BEAGLE, XpertRule GenAsys

Algorithm Oriented / Algorithm Specific:
ASPARAGOS, CoPDEB, DGENESIS 1.0, ECO-GA, GAGA, GALOPPS 3.1, GAMAS, GAucsd 1.2 / 1.4, GDGA, GENESIS 5.0 - GENEsYs 1.0, HSDGA, PARAGENESIS, SGA, SGA-Cube

Algorithm Oriented / Algorithm Libraries:
EM 2.1, ESCAPADE 1.2, GAlib, GENITOR I - II, LibGA, OOGA, PGA 2.5, PGAPack, SUGAL 2.0

Tool Kits / Educational:
GA Workbench

Tool Kits / General Purpose:
EnGENEer, GAGS, GAME, PeGAsuS, RPL2, Splicer 1.0, TOLKIEN


6. TECHNICAL ISSUES IN PARALLEL DISTRIBUTED GAs

In this section we present the most important advances in the procedures governing the evolution of a parallel distributed GA. Most of these are related to the operators being applied, and this is the subject of the first Subsection. The rest are related to the migration policy and the heterogeneity of the algorithm (second and third Subsections).

6.1. Basic Algorithmic Components

In this section we explain the basic techniques leading the evolution in the sub-algorithms of a parallel GA (particularly in the islands of a distributed GA). These techniques include the mode of evolution, the selection and replacement techniques, and the variation operators, namely crossover and mutation.

6.1.1. The Evolution Mode

As regards the kind of evolution the dGA performs, there are basically two important models: the generational [26] and the steady-state [67] models. In fact, they are the two extreme ends of the granularity of the evolution step (as shown in Figure 7). The class of messy-GAs [28] is an example of a very different but interesting kind of search that has so far received less international attention.

Traditionally, each island in the dGA creates a new population of individuals from the old one by applying genetic operators such as selection, crossover, or mutation. However, this basic generational GA is slow and loses diversity quickly, since the offspring are not available for use until the new generation. Skilled practitioners do not usually utilize this kind of model; instead, they replace a percentage of the population. This has led to several extensions of this model, and to many intermediate selection methods.

Steady-state evolution creates in every step just a single new string by applying genetic operations, and inserts it back according to a given replacement policy (for example, only if it is better than the worst existing string, or randomly). This kind of GA behaves very well in terms of efficacy, and shows very fast evolution (up to 10 times faster than a generational GA) in many real applications [20], [39], [41], [72]. Parents and children coexist in the same population, and the algorithm can use the new offspring for further operations immediately after their insertion.
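To make the contrast concrete, the following minimal sketch shows one step of each evolution mode (Python; the operators select, crossover, and mutate and the fitness function are hypothetical placeholders of ours, not taken from any system surveyed here):

```python
def generational_step(pop, fitness, select, crossover, mutate):
    # Generational mode: a whole new population replaces the old one at once;
    # no offspring can be used until the full generation is complete.
    new_pop = []
    while len(new_pop) < len(pop):
        child = mutate(crossover(select(pop, fitness), select(pop, fitness)))
        new_pop.append(child)
    return new_pop

def steady_state_step(pop, fitness, select, crossover, mutate):
    # Steady-state mode: create a single offspring and insert it back
    # immediately, here replacing the worst string only if the child is better.
    child = mutate(crossover(select(pop, fitness), select(pop, fitness)))
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child
    return pop
```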

Messy GAs have very different genotypes, which also strongly influence the kind of genetic operations applied. They have also proven to be a fast class of algorithms (see Figure 5 again).

In general, the evolution models differ in the number of offspring λ they create from the µ individuals present in the population, and in how the former replace the latter [11]. Nothing prevents using nodal GAs in the islands of a dGA that perform arbitrary evolution modes, such as a structured-population GA, as suggested in [57].

6.1.2. The Selection Operator

The selection mechanism of a dGA plays an important role. It drives the search towards better individuals, promoting convergence. This operation implements a trade-off between high convergence velocity and a high probability of finding a global optimum in complex problems. In this section we offer a brief summary of the most important selection schemes used in evolutionary algorithms. In the following table, $\mu$ is the number of strings in one population and $\vec{a}_i(t)$ is a string at evolution step $t$.


The proportional selection method directly uses the fitness to select individuals. The ranked mechanisms use the position of the string in a sorted population (from 1 to µ) with increasing values of fitness (we suppose a maximization problem without any loss of generality). This is thought to help the search when fitness values are very similar (for example at the end of the evolution). We classify a selection scheme in terms of several criteria (Table IX). All these criteria are discussed by showing the probability of selection of any individual. In any case, the sum of probabilities is always 1.

TABLE IX. SELECTION SCHEMES

Selection Scheme | Formalization | Comments
Proportional | $p_s(\vec{a}_i(t)) = \Phi(\vec{a}_i(t)) \Big/ \sum_{j=1}^{\mu} \Phi(\vec{a}_j(t))$ | [36] Called Roulette Wheel
Linear Ranking | $p_s(\vec{a}_i(t)) = \frac{1}{\mu}\left(\eta_{max} - (\eta_{max} - \eta_{min}) \cdot \frac{i-1}{\mu-1}\right)$ | [12] $\eta_{min} = 2 - \eta_{max}$ and $1 \le \eta_{max} \le 2$. Individuals are sorted by fitness from 1 to $\mu$
Whitley's Linear Ranking | $Index(b,\chi) = \frac{\mu}{2(b-1)}\left(b - \sqrt{b^2 - 4(b-1)\cdot\chi}\right)$ | [72] This returns the index of the selected individual. $b \in [1,2]$ is the selection pressure (bias) and $\chi \in [0,1]$ is a random uniform variable
(µ,λ)-Uniform Ranking | $p_s(\vec{a}_i(t)) = \begin{cases} 1/\lambda & 1 \le i \le \lambda \\ 0 & \lambda < i \le \mu \end{cases}$ | [62] Typical in Evolution Strategies. Individuals are sorted by fitness from 1 to $\mu$

The selection pressure of the algorithms can be controlled by choosing appropriate values for these parameters. For example, for linear ranking we can choose different constants. Also, most selection procedures admit versions and combinations. We review some of them in Table X. Different variants can be distinguished in terms of (1) whether the probabilities of selection are static across generations, (2) whether zero probabilities are allowed to appear, (3) whether there exists a part of the population that does not reproduce, (4) whether the best individuals always survive, and (5) the grain of the reproductive step.

TABLE X. POSSIBLE VERSIONS OF THE SELECTION SCHEMES

Attending to... | Selection Scheme Version 1 (YES) | Selection Scheme Version 2 (NO)
Do we change the probabilities across gens.? | Dynamic: $\nexists\, c_i : \forall t \ge 0, \forall i \in \{1,\ldots,\mu\} : p_s(\vec{a}_i(t)) = c_i$ | Static: $\forall i \in \{1,\ldots,\mu\}, \forall t \ge 0 : p_s(\vec{a}_i(t)) = c_i$
Are zero probabilities allowed to appear? | Extinctive: $\forall t \ge 0, \exists i \in \{1,\ldots,\mu\} : p_s(\vec{a}_i(t)) = 0$ | Preservative: $\forall i \in \{1,\ldots,\mu\}, \forall t \ge 0 : p_s(\vec{a}_i(t)) > 0$
Do the worst individuals reproduce? | Left Extinctive: $\forall t \ge 0, \exists l \in \{1,\ldots,\mu-1\} : i \le l \Rightarrow p_s(\vec{a}_i(t)) = 0$ | Right Extinctive: $\forall t \ge 0, \exists l \in \{2,\ldots,\mu\} : i \ge l \Rightarrow p_s(\vec{a}_i(t)) = 0$
Shall the best k parents undergo selection with their offspring? | k-Elitist: $\exists k \in \{1,\ldots,\mu\}, \forall t \ge 0, \forall i \in \{1,\ldots,k\} : f(\vec{a}_i(t)) \ge f(\vec{a}_i(t-1))$ | Pure: <no individual satisfies the k-Elitist property>
Shall a whole population of µ individuals be generated? | Generational: <fixed set of parents until µ offspring are created> | Steady-State: <special case of (µ-1)-elitism>

Any node in a dGA or in a cGA is thought to apply one of these selection algorithms on its own sub-population. Other interesting operators like tournament selection, or even improved implementations of fitness proportional selection like stochastic universal sampling, can be found elsewhere [13].
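As a hedged illustration of two of the schemes in Table IX, the following sketch implements proportional (roulette wheel) selection and Whitley's rank-based index formula (Python; the argument names are ours):

```python
import math
import random

def roulette_wheel(fitnesses):
    # Proportional selection: p_s(i) = f_i / sum_j f_j
    # (maximization assumed, with strictly positive fitness values).
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point rounding

def whitley_index(mu, b):
    # Whitley's linear ranking: returns an index into a population sorted
    # best-first (index 0 = best); the bias b in (1, 2] sets the pressure.
    chi = random.random()
    return int(mu * (b - math.sqrt(b * b - 4.0 * (b - 1.0) * chi))
               / (2.0 * (b - 1.0)))
```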


6.1.3. The Replacement Policy

The replacement policy is used in each sub-algorithm of a PGA to replace some individuals in the present population with a new pool of individual(s). These new individuals may have been created in the local or in a remote sub-algorithm. Table XI shows the probabilities of replacement of several policies.

TABLE XI. DIFFERENT POLICIES FOR THE REPLACEMENT OF STRINGS

Replacement Scheme | Formalization | Comments
Inverse Proportional | $p_r(\vec{a}_i(t)) = 1 - \Phi(\vec{a}_i(t)) \Big/ \sum_{j=1}^{\mu} \Phi(\vec{a}_j(t))$ | The inverse of the roulette wheel selection
Uniform Random | $p_r(\vec{a}_i(t)) = c = \frac{1}{\mu}$ | Any individual can be replaced according to the same probability (typical in steady-state)
Worst | $p_r(\vec{a}_i(t)) = \begin{cases} 1 & i = worst\_position \\ 0 & i \ne worst\_position \end{cases}$ | Always replace the worst string in a sorted population. A popular version replaces the worst string only if the new one is better
Generational | $p_r(\vec{a}_i(t)) = 1$ | Canonical generational GAs replace the whole old population with a new one

Just like the selection policy, the replacement admits different versions and hybridizations. Hence, we can define an elitist generational replacement, an extinctive replacement, etc.
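For instance, the popular conditional version of the 'Worst' policy in Table XI reduces to a few lines (a sketch under our own naming, assuming a maximization problem):

```python
def replace_worst_if_better(pop, fitnesses, child, child_fitness):
    # Replace the worst string only if the incoming one improves on it
    # (the popular variant of the 'Worst' policy in Table XI).
    worst = min(range(len(pop)), key=fitnesses.__getitem__)
    if child_fitness > fitnesses[worst]:
        pop[worst] = child
        fitnesses[worst] = child_fitness
```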

6.1.4. The Crossover Operator

The crossover is one of the most studied operators in parallel and sequential GAs. Many variants have been defined. All of them can be included in one of the following classes:

• Pure crossover operators: robust procedures that can be applied to many problems.

• Hybrid crossover operators: mixing of a pure crossover operator and some other algorithm.

• Problem dependent operators: these carry out very specialized operations on the strings, usually by including specific problem knowledge.

The traditional crossover operates on two parents to yield two new offspring. However, there exist many versions which compute only one new string from these two parents. Also, it is normal in combinatorial optimization or in reordering problems (in which the strings are permutations) to use a safe crossover: the offspring of two correct strings are also correct. This is needed in some problems where not all the possible strings are allowable solutions, e.g. when managing constraints [49]. Often the constraints are dealt with in the fitness function (if a string does not match a given problem constraint, its fitness will be penalized severely). However, some applications find it more efficient to force the crossover to work out only correct strings [5]. A third solution is to repair the incorrect strings.

The most traditional crossover randomly defines a position in every parent string as the crossing point. Afterwards, the two parent strings exchange their slices located before and after this point.

Multiple extensions of this single point crossover (SPX) are possible. They are known as N-point crossover operators [63]. A very interesting operator in this family is the two-point crossover (DPX).

In DPX, two points are stochastically defined and the two parents exchange the slices bounded by these two limits. A version we call DPX1 has been successfully used [6], [7] to yield only one child that contains the largest possible slice from the best (higher fitness) parent; the other offspring is never generated, thus representing a basic and inexpensive evolution step. See the following formalizations of the mentioned operators (G is I in Algorithm 1, shown previously).


• Single Point Crossover (SPX)

$c_{p_c} : G \times G \to G \times G$, with $\tau \in [0,1]$ a random variable and $\alpha \in \{1,\ldots,l\}$ the cut point.

if $\tau \le p_c$ then
$c_{p_c}(\vec{a},\vec{b}) = c_{p_c}\big((a_1,\ldots,a_l),(b_1,\ldots,b_l)\big) = \big((a_1,\ldots,a_\alpha,b_{\alpha+1},\ldots,b_l),\,(b_1,\ldots,b_\alpha,a_{\alpha+1},\ldots,a_l)\big)$
else
$c_{p_c}(\vec{a},\vec{b}) = \big((a_1,\ldots,a_l),(b_1,\ldots,b_l)\big)$

• Double Point Crossover, 1 child (DPX1)

$c_{p_c} : G \times G \to G$, with $\tau \in [0,1]$, $\alpha,\sigma \in \{1,\ldots,l\}$, $\sigma \ge \alpha$, $\Phi(\vec{a}) \ge \Phi(\vec{b})$ and $\sigma - \alpha < l/2$.

if $\tau \le p_c$ then
$c_{p_c}(\vec{a},\vec{b}) = (a_1,\ldots,a_\alpha,b_{\alpha+1},\ldots,b_\sigma,a_{\sigma+1},\ldots,a_l)$
else
$c_{p_c}(\vec{a},\vec{b}) = (a_1,\ldots,a_l)$

• Biased Uniform Crossover (UX(Pb))

$c_{p_c} : G \times G \to G \times G$, with $\tau \in [0,1]$, $\alpha_i \in [0,1]$ sampled anew for every allele, $\Phi(\vec{a}) \ge \Phi(\vec{b})$, and usually $P_b \ge 0.5$.

if $\tau \le p_c$ then $c_{p_c}(\vec{a},\vec{b}) = (\vec{a}',\vec{b}')$ with, $\forall i \in \{1,\ldots,l\}$:
$a'_i = \begin{cases} a_i & \alpha_i \le P_b \\ b_i & \alpha_i > P_b \end{cases}$ and $b'_i = \begin{cases} b_i & \alpha_i \le P_b \\ a_i & \alpha_i > P_b \end{cases}$
else
$c_{p_c}(\vec{a},\vec{b}) = \big((a_1,\ldots,a_l),(b_1,\ldots,b_l)\big)$

• Biased Arithmetic Crossover (AX(Pb))

$c_{p_c} : G \times G \to G \times G$, with $\tau \in [0,1]$, $\Phi(\vec{a}) \ge \Phi(\vec{b})$, and usually $P_b \ge 0.5$.

if $\tau \le p_c$ then $c_{p_c}(\vec{a},\vec{b}) = (\vec{a}',\vec{b}')$ with $a'_i = P_b \cdot a_i + (1-P_b) \cdot b_i$ and $b'_i = P_b \cdot b_i + (1-P_b) \cdot a_i$, $\forall i \in \{1,\ldots,l\}$
else
$c_{p_c}(\vec{a},\vec{b}) = \big((a_1,\ldots,a_l),(b_1,\ldots,b_l)\big)$

The uniform crossover (UX) is usually implemented by creating two offspring, with every allele in the new offspring taken randomly from one parent (Pb=0.5). Here we present a generalized biased version in which the alleles of the best parent have a higher probability of appearing in the child. The arithmetic crossover (AX) has the same aspect, but every child allele is a weighted new value of the two parental alleles. This is intended to be useful for floating point codes. These two operators (UX and AX) are independent of the length of the string. This characteristic is very useful in many real complex applications. See in Figure 12 a graphical interpretation of these crossover operators.
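As an illustration, a minimal sketch of SPX and of the biased UX just described (Python; representing parents as plain lists and assuming the first parent is the fitter one are our choices):

```python
import random

def spx(a, b):
    # Single point crossover: exchange the slices after a random cut point.
    alpha = random.randint(1, len(a) - 1)
    return a[:alpha] + b[alpha:], b[:alpha] + a[alpha:]

def biased_ux(best, other, pb=0.6):
    # Biased uniform crossover: each child allele comes from the fitter
    # parent with probability pb (pb >= 0.5); alpha_i is drawn per allele.
    child1, child2 = [], []
    for x, y in zip(best, other):
        if random.random() <= pb:
            child1.append(x)
            child2.append(y)
        else:
            child1.append(y)
            child2.append(x)
    return child1, child2
```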

In general, there exist typical operators of high efficacy depending on the field of application. Thus, for training neural networks we can use a traditional DPX or else a reduced surrogate operator [72]. For the traveling salesperson problem (TSP), better results can be obtained by using circuit crossovers such as Partially Matched Crossover (PMX), Cyclic Crossover (CX), Order Crossover (OX), etc. [49]. Also, the interesting and recent incorporation of fuzzy connectives to develop new classes of crossover operators is very promising, in particular for real representations (see [34] for more details).


Figure 12. The work of several crossover operators (SPX, DPX, UX, and AX) on two parents; only one child is shown.

6.1.5. The Mutation Operator

The mutation operator is intended to introduce new genotypes into the genetic pool of strings. The goal is to maintain a good diversity that allows a continuous search towards a solution. This is needed to escape from local optima when the algorithm has got stuck in bad regions of exploration [64]. It can also help in refining a sub-optimal solution. In a dGA, the migration of individuals from a neighbor island also extends the exploration in the distributed populations (an effect similar to that of mutation).

• Bit-Flip Mutation

$m_{p_m} : G \to G$, with $\alpha_j \in [0,1]$ sampled anew for every allele.

$m_{p_m}(s_1,\ldots,s_l) = (s'_1,\ldots,s'_l)$, where $\forall j \in \{1,\ldots,l\}$: $s'_j = \begin{cases} s_j & \alpha_j > p_m \\ 1 - s_j & \alpha_j \le p_m \end{cases}$

The traditional mutation works on binary strings by probabilistically changing every position (allele) to its complementary value. In addition, many possible versions exist. For example, for floating-point real representations the mutation could add/subtract some random value to the allele. Frequently, the new gene value is determined after a fixed probability density function (PDF) drawn from the evolution strategies field, in which a Gaussian formula gives the additive new value of every real-valued gene. The standard deviation σj might be different for every gene, and could also be evolved.

• Mutation for Real-Coded Strings

$m_{p_m} : G \to G$, with $\alpha_j \in [0,1]$ sampled anew for every allele.

$m_{p_m}(s_1,\ldots,s_l) = (s'_1,\ldots,s'_l)$, where $\forall j \in \{1,\ldots,l\}$: $s'_j = \begin{cases} s_j & \alpha_j > p_m \\ s_j + \sigma_j \cdot N(0,1) & \alpha_j \le p_m \end{cases}$

When the mean is zero, the standard deviation offers the control on the mutation process. The use of zero-mean Gaussian mutations generates offspring that are on average no different from their parents, and increasingly less likely to be increasingly different from their parents [10].
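A minimal sketch of both mutation operators (Python; the per-gene deviations sigma and the representation of strings as lists are our assumptions):

```python
import random

def bit_flip(s, pm):
    # Bit-flip mutation: each binary allele is complemented with probability pm.
    return [1 - bit if random.random() <= pm else bit for bit in s]

def gaussian_mutation(s, pm, sigma):
    # Real-coded mutation: with probability pm per allele, add zero-mean
    # Gaussian noise with a (possibly different) deviation sigma[j] per gene.
    return [x + random.gauss(0.0, sigma[j]) if random.random() <= pm else x
            for j, x in enumerate(s)]
```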


An interesting modification of any type of mutation consists in varying its application probability throughout the evolution. Hence, we could increase the mutation probability very slightly in each iteration. This is appropriate since, at the beginning, the genotypes are different enough to sustain exploration, but, by the end, they become very similar. This dynamic probability can follow an a priori given linear or exponential growth, or be computed (for example) as the inverse of the Hamming distance of the strings in the population. See [39] for more details. This dynamic variant is also possible for crossover or for other hybrid operators we may want to apply (if guided by a probability of application).
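For example, the linear-growth variant can be sketched as follows (the bounds pm0 and pm_max and the horizon T are hypothetical values of ours):

```python
def dynamic_pm(t, T, pm0=0.001, pm_max=0.05):
    # Linearly grow the mutation probability from pm0 (step 0) to pm_max (step T).
    return pm0 + (pm_max - pm0) * min(t, T) / T
```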

6.2. Migration

Migration is the operator that guides the exchange of individuals among demes in a dGA. In relation to migration, many aspects have to be fine-tuned to define the kind of search we are dealing with. In this section we want to outline the most accepted techniques and highlight some open questions. We discuss the frequency of migrations, how many migrants have to migrate, how to select them and replace individuals in the target island, some considerations about the topology of islands, and the heterogeneity of the system.

6.2.1. Migration Gap

Since a dGA usually makes sparse exchanges of individuals among the sub-populations, we must define the migration gap, i.e., the number of steps in every sub-population between two successive exchanges (steps of isolated evolution). This can be made in every sub-population periodically, or else by using a given probability of migration PM to decide in every step whether migration will take place or not. A different approach can be found, for example, in [52], in which the authors explore the sigma-exchange algorithm, consisting in only making migrations when the standard fitness deviation is small enough to make the exchange of strings worthwhile. Although this criterion can reduce the number of exchanges, it relies on the fitness and not on the genotypes of the parallel islands (this could be a disadvantage).

It is very usual for a parallel dGA to be synchronous, in that the phases of migration and reception of external migrants are embedded in the same portion of the algorithm. However, synchronous migrations are known to be slow for some problems [52], [70]. In the asynchronous versions, the receptions of individuals are scheduled to occur at any time during a run of every island. This means sending migrants whenever it is needed, and also accepting migrants whenever they arrive.

This last asynchronous parallel dGA can perform very efficiently [32], [52]. Besides this, its behavior and theoretical study are similar to its synchronous counterpart. However, if the sub-algorithms run on very different processors, the exchanged individuals can be in very different stages of evolution. Thus, the well-known non-effect problem (the incoming migrant is unsuitable in its new population) or the super-individual problem (the incoming migrant is much better than the strings in the population receiving it) are very likely to appear.

6.2.2. Migration Rate

The migration rate is the parameter determining the number of individuals that undergo migration in every exchange. This value can be expressed as a percentage of the population or else as an absolute value. In any case, it is not clear which is the best value for the migration rate. It seems clear that a low percentage should be used (from 1% to 10% of the population size) but, although many studies have been conducted, no clear conclusions can be drawn. In fact, it seems that, for some problems, sending 1 individual is better than sending 5 individuals, and vice-versa for other problems [47].


We think this is due to the close correlation existing between this value and the migration gap and topology. However, every topology has its own characteristics in terms of the velocity in spreading one individual into the whole set of sub-algorithms, thus interacting with the migration rate in a problem-dependent (or PGA model-dependent) manner. See [14], [47], [68] for results in this sense.

6.2.3. Selection and Replacement in the Migration Procedure

It is very common in parallel dGAs that the same selection/replacement operators are used for dealing with migrants. Hence, all the models presented in Sections 6.1.2 and 6.1.3 are useful here.

Two alternative selection procedures for the migrants are (1) sending the best individual or (2) sending a random one. If you have no a priori knowledge of the problem under consideration, you might decide to migrate a random individual. This is useful since it is the safest criterion for a large range of migration gaps (which you may also be trying to set correctly). If every sub-algorithm is sending the best individual, then the migration gap needs to be considerably larger in order to avoid the premature convergence of the whole population (genotypic diversity disappears).
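The following sketch puts the migration gap, the migration rate, and the two migrant-selection criteria together for one island (Python; the function names and the send channel are ours, not those of any cited system):

```python
import random

def maybe_migrate(step, pop, fitnesses, send, gap=20, rate=2, best=False):
    # Every `gap` steps, emit `rate` migrants: either the best strings
    # or randomly chosen ones (the safer default, as argued above).
    if step % gap != 0:
        return
    if best:
        order = sorted(range(len(pop)), key=fitnesses.__getitem__, reverse=True)
        migrants = [pop[i] for i in order[:rate]]
    else:
        migrants = random.sample(pop, rate)
    send(migrants)  # deliver to the neighbor island(s) in the topology
```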

6.2.4. Topology

There exist many works trying to decide the best topology for a parallel dGA [1], [31], [44], [47]. It seems that, for most problems, the ring and the hyper-cube are two of the best topologies. Fully-connected and centralized topologies have problems in their parallelization and scalability due to their tight connectivity. The ring topology is easy to implement on very different hardware platforms (clusters of workstations, multi-computers, ...), and its use is very extended.
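In a unidirectional ring, for instance, the destination of the migrants follows from simple modular arithmetic (a sketch with d islands numbered 0 to d-1):

```python
def ring_neighbor(island, d):
    # Unidirectional ring: island i sends its migrants to island (i+1) mod d.
    return (island + 1) % d
```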

It is very usual for the topologies to be unidirectional. Hierarchical topologies, such as trees of sub-algorithms, have also been successfully applied [70]. A hierarchical topology is a natural way of carrying out the search at different levels. For example, in every level you can use a GA with a different kind of genotype, and then get a dGA that first roughly determines the promising zones (leaves) and afterwards exploits more accurate solutions inside these zones (e.g. [56]).

The traditional nomenclature divides parallel dGAs into island and stepping-stone models, depending on whether individuals can freely migrate to any sub-population or are restricted to migrating to geographically nearby islands, respectively. Most recent papers assume work with a given predefined topology.

6.3. Heterogeneity

The homo/heterogeneity of a distributed system is usually understood as a term referring to the hardware. This is also correct for a parallel dGA, since it runs on several machines. However, we have an additional level of possible heterogeneity as regards the kind of search the islands are making (Figure 5). We discuss this concept in this paper because of its potential benefits. It represents an open research line: in which situations does a heterogeneous dGA outperform a homogeneous one?

The d sub-algorithms can be performing the same kind of search (same genotypes, operators, ...) on different sets of randomly generated individuals, or else the sub-algorithms can be using different sets of operators, parameters, or genotypes. These last heterogeneous systems can have some advantages over homogeneous algorithms, since they can be explicitly tuned to carry out exploration and exploitation (a well-known trade-off decision in evolutionary algorithms) depending on the problem [26].


Figure 13. Two recent dGAs: GDGA (left), with an exploitation plane and an exploration plane, and CoPDEB (right), with islands GA1-GA4 using different Pc and Pm values.

The Gradually Distributed GA (GDGA) [34] creates two planes of search, with exploitation and exploration capabilities embedded in several fuzzy crossover operators (on real codes). The backward and forward migrations exchange individuals between these two planes of the hypercube-3. This is a synchronous model based on generational evolution.

The Co-operating Populations with Different Evolution Behavior (CoPDEB) [2], on the other hand, applies traditional operators, but it changes the probabilities of crossover and mutation in every sub-population in order to obtain different GAs, also with different exploration/exploitation properties. Besides this, local hill-climbing is included in the islands.

These algorithms represent two different ways of achieving heterogeneity. A third way is using a set of algorithms, some of them evolutionary and some others non-evolutionary methods, that cooperate/compete in solving the same problem and exchange information in a distributed fashion.

7. IMPLEMENTATION ISSUES

Among all the languages for developing PGAs, C is the most popular. However, implementations in C++ are now growing in number due to their advantages in software reuse, security, and extensibility. Also, because of its large capability of structured parameterization, an object oriented (OO) language has advantages in composing new prototypes. On the other hand, there exist many PGAs developed for concrete machine architectures that use embedded languages (usually versions of C).

Using object orientation is directly useful in developing classes for the data structures and operations contained in a parallel or sequential GA. Hence, implementing classes for the genotype, population, operator pool, fitness manipulations, etc. is a natural way of coding evolutionary algorithms in general, with the added advantages of any OO implementation.
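A minimal sketch of such a class layout (Python; the class and attribute names are ours, chosen only for illustration):

```python
import random

class Individual:
    """A genotype plus its cached fitness value."""
    def __init__(self, genes):
        self.genes = genes
        self.fitness = None

class Population:
    """A pool of individuals plus the operations a (P)GA applies to it."""
    def __init__(self, size, length):
        self.members = [Individual([random.randint(0, 1) for _ in range(length)])
                        for _ in range(size)]

    def evaluate(self, fitness_fn):
        # Fitness manipulations are kept inside the population class.
        for ind in self.members:
            ind.fitness = fitness_fn(ind.genes)
```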

Communication among processes is normally achieved by using the BSD socket interface on UNIX systems, either directly or through the services of the well-known Parallel Virtual Machine (PVM) [25]. Some MPI [43] and Java implementations are also becoming familiar. Finally, many systems simulate parallelism in a single process. This latter approach is useful only when the basic behavior is to be studied. However, for real-life and complex applications, a truly parallel dGA is needed in order to have lower computational times and to use larger populations than with sequential GAs.
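As a hedged illustration of the direct socket alternative, the following sketch ships a list of migrants to a neighbor island over TCP (standard library calls only; the port number and the pickle serialization are our choices, not those of any cited system):

```python
import pickle
import socket

def send_migrants(host, migrants, port=5000):
    # Serialize a list of migrants and deliver it to a neighbor island over TCP.
    with socket.create_connection((host, port)) as s:
        s.sendall(pickle.dumps(migrants))
```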


8. CONCLUDING REMARKS

In this work we have presented the fundamental syntax and semantics of sequential genetic algorithms (GAs). The paper deals with very popular extensions of these algorithms, known as parallel distributed GAs, in which many sub-algorithms run in parallel with sparse migrations of strings. The robustness and advantages of sequential GAs are enhanced when a PGA is used. The drawback is the more complex analysis and design, and also the need for some kind of parallel machine to run it.

A structured and extensive overview of the more important and up-to-date PGA systems is discussed. In it, much of the existing software and the criteria for their classification are used. In addition, we present in the paper useful technical information about PGAs relating to operators, structured-population paradigms, and parameters guiding the parallel search. We have included a brief theoretical foundation of a distributed GA to make the paper relatively self-contained. Throughout the presentation, not only is a survey of existing problems outlined, but possible variants apart from the basic operations and future trends are also discussed.

In particular, we have offered a location of PGAs in the taxonomy of search techniques, a nomenclature revision, algorithmic descriptions of techniques, future trends, a classification of a large portion of the existing software, open questions relating to generational versus steady-state evolution modes and heterogeneous versus homogeneous parallel algorithms, and many other minor details and major concepts relating to parallel GAs in general. Our main interest has been in parallel distributed GAs, since the impact of the research in this kind of algorithm is a priori larger than for other kinds of parallel genetic algorithms.

We are especially concerned with offering useful and rigorous material that could help new and expert practitioners. Although our overview is obviously not complete, it represents a good starting point to conduct future research in this domain or to make new applications by using parallel distributed GAs.

ACKNOWLEDGEMENTS

The authors want to thank the anonymous reviewers whose comments and discussions helped to improve the contents of this article considerably.


References

1. P. Adamidis. "Review of Parallel Genetic Algorithms Bibliography". Internal T.R., Aristotle University of Thessaloniki, November (http://www.control.ee.auth.gr/~panos/papers/pga_review.ps.gz). 1994.

2. P. Adamidis, V. Petridis. "Co-operating Populations with Different Evolution Behavior". Proceedings of the Second IEEE Conference on Evolutionary Computation, pp. 188-191. 1996.

3. E. Alba, J. F. Aldana, J. M. Troya. "Full Automatic ANN Design: A Genetic Approach". In J. Mira, J. Cabestany, A. Prieto (eds.), IWANN'93. New Trends in Neural Computation. Lecture Notes in Computer Science 686, Springer-Verlag, pp. 399-404. 1993.

4. E. Alba, J. F. Aldana, J. M. Troya. "A Genetic Algorithm for Load Balancing in Parallel Query Evaluation for Deductive Relational Databases". Procs. of the I. C. on ANNs and GAs, D. W. Pearson, N. C. Steele, R. F. Albrecht (eds.), Springer-Verlag, pp. 479-482. 1995.

5. E. Alba, C. Cotta, J. M. Troya. "Type-Constrained Genetic Programming for Rule-Base Definition in Fuzzy Logic Controllers". Proceedings of the First Annual Conference on Genetic Programming, J. R. Koza, D. E. Goldberg, D. B. Fogel & R. L. Riolo (eds.), Stanford Univ., Cambridge, MA. The MIT Press, pp. 255-260. 1996.

6. E. Alba, J. M. Troya. "Genetic Algorithms for Protocol Validation". Proceedings of the PPSN IV I.C., H. M. Voigt, W. Ebeling, I. Rechenberg & H. P. Schwefel (eds.), Berlin. Springer-Verlag, pp. 870-879. 1996.

7. E. Alba, C. Cotta. "Evolution of Complex Data Structures". Informática y Automática, 30(3), pp. 42-60. September 1997.

8. C. Alippi, P. Treleaven. "GAME: A Genetic Algorithms Manipulation Environment". Internal Report, Department of Computer Science, UCL. 1991.

9. J. Antonisse. "A New Interpretation of the Schema Notion that Overturns the Binary Encoding Constraint". Proceedings of the 3rd ICGA, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 86-91. 1989.

10. T. Bäck, D. Fogel, Z. Michalewicz (eds.). Handbook of Evolutionary Computation. Oxford University Press. 1997.

11. T. Bäck, H. P. Schwefel. "An Overview of Evolutionary Algorithms for Parameter Optimization". Evolutionary Computation, 1(1), pp. 1-23, The MIT Press. 1993.

12. J. E. Baker. "Adaptive Selection Methods for Genetic Algorithms". Proceedings of the First International Conference on Genetic Algorithms and Their Applications, J. J. Grefenstette (ed.), Lawrence Erlbaum Associates, Publishers, pp. 101-111. 1985.

13. J. E. Baker. "Reducing Bias and Inefficiency in the Selection Algorithm". Proceedings of the Second International Conference on Genetic Algorithms, J. J. Grefenstette (ed.), Lawrence Erlbaum Associates, Publishers, pp. 14-21. 1987.

14. T. C. Belding. "The Distributed Genetic Algorithm Revisited". Proceedings of the 6th International Conference on GAs, L. J. Eshelman (ed.), Morgan Kaufmann, pp. 122-129. 1995.

15. P. P. Bonissone. "Soft Computing: the Convergence of Emerging Reasoning Technologies". Journal of Research in Soft Computing, 1(1), pp. 6-18. 1997.


16. E. Cantú-Paz. "A Summary of Research on Parallel Genetic Algorithms". R. 95007, July 1995. Also revised version, IlliGAL R. 97003. May 1997.

17. A. Chipperfield, P. Fleming. "Parallel Genetic Algorithms". Parallel and Distributed Computing Handbook, A. Y. H. Zomaya (ed.), McGraw-Hill, pp. 1118-1143. 1996.

18. A. L. Corcoran, R. L. Wainwright. "LibGA: A User-Friendly Workbench for Order-Based Genetic Algorithm Research". Proceedings of the 1993 ACM/SIGAPP Symposium on Applied Computing, ACM Press. 1993.

19. F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza-Reorda. "Exploiting Competing Subpopulations for Automatic Generation of Test Sequences for Digital Circuits". Procs. of the PPSN IV I.C., H. M. Voigt, W. Ebeling, I. Rechenberg, H. P. Schwefel (eds.), Springer-Verlag, pp. 792-800. 1996.

20. C. Cotta, E. Alba, J. M. Troya. "Evolutionary Design of Fuzzy Logic Controllers". IEEE Catalog N. 96CH35855, Proceedings of the ISISC'96 Conference, Dearborn, MI, pp. 127-132. 1996.

21. Y. Davidor. "A Naturally Occurring Niche & Species Phenomenon: The Model and First Results". Procs. of the 4th ICGA, R. K. Belew, L. B. Booker (eds.), pp. 257-263. 1991.

22. L. Davis, J. J. Grefenstette. "Concerning GENESIS and OOGA". Handbook of Genetic Algorithms, L. Davis (ed.), New York: Van Nostrand Reinhold, pp. 374-376. 1991.

23. J. A. Erickson, R. E. Smith, D. E. Goldberg. "SGA-Cube, A Simple Genetic Algorithm for nCUBE 2 Hypercube Parallel Computers". TCGA Report No. 91005, The Univ. of Alabama. 1991.

24. D. B. Fogel (ed.). Evolutionary Computation. The Fossil Record (Selected Readings on the History of Evolutionary Algorithms). IEEE Press. 1998.

25. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, V. Sunderam. PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. The MIT Press. 1994.

26. D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley. 1989.

27. D. E. Goldberg. "Sizing Populations for Serial and Parallel Genetic Algorithms". Proceedings of the 3rd ICGA, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 70-79. 1989.

28. D. E. Goldberg, K. Deb, B. Korb. "Don't Worry, Be Messy". Proceedings of the Fourth International Conference on Genetic Algorithms, R. K. Belew and L. B. Booker (eds.), Morgan Kaufmann, San Mateo, CA, pp. 24-30. 1991.

29. D. E. Goldberg, et al. "Critical Deme Size for Serial and Parallel Genetic Algorithms". IlliGAL Report No. 95002. January 1995.

30. E. D. Goodman. An Introduction to GALOPPS v3.2. TR#96-07-01, GARAGe, I. S. Lab., Dpt. of C. S. and C. C. C. A. E. M., Michigan State University, East Lansing. 1996.

31. V. S. Gordon, D. Whitley. "Serial and Parallel Genetic Algorithms as Function Optimizers". Procs. of the 5th ICGA, S. Forrest (ed.), Morgan Kaufmann, pp. 177-183. 1993.

32. M. Gorges-Schleuter. "ASPARAGOS: An Asynchronous Parallel Genetic Optimisation Strategy". Procs. of the 3rd ICGA, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 422-427. 1989.

33. J. J. Grefenstette. "GENESIS: A System for Using Genetic Search Procedures". Proceedings of the 1984 Conference on Intelligent Systems and Machines, pp. 161-165. 1984.


34. F. Herrera, M. Lozano. Gradual Distributed Real-Coded Genetic Algorithms. Technical Report #DECSAI-97-01-03. February 1997 (revised version 1998).

35. F. Hoffmeister. "The User's Guide to ESCAPADE 1.2: A Runtime Environment for Evolution Strategies". Department of Computer Science, University of Dortmund, Germany. 1991.

36. J. H. Holland. Adaptation in Natural and Artificial Systems. Univ. of Michigan Press, Ann Arbor, MI. 1975.

37. M. Hughes. "Genetic Algorithm Workbench Documentation". Cambridge Consultants Ltd. 1989.

38. A. Hunter. "Sugal Programming Manual v2.0". T. R., Univ. of Sunderland, U.K. July 1995.

39. B. A. Julstrom. "What Have You Done for Me Lately? Adapting Operator Probabilities in a Steady-State Genetic Algorithm". Proceedings of the 6th International Conference on Genetic Algorithms, L. J. Eshelman (ed.), Morgan Kaufmann, pp. 81-87. 1995.

40. J. R. Koza. Genetic Programming. The MIT Press. 1992.

41. T. Kuo, S. Y. Hwang. "A Genetic Algorithm with Disruptive Selection". Proceedings of the 5th International Conference on GAs, S. Forrest (ed.), Morgan Kaufmann, pp. 65-69. 1993.

42. D. Levine. "A Parallel Genetic Algorithm for the Set Partitioning Problem". T. R. No. ANL-94/23, Argonne National Laboratory, Mathematics and Computer Science Division. 1994.

43. D. Levine. "Users Guide to the PGAPack Parallel Genetic Algorithm Library". T. R. ANL-95/18, January 31. 1996.

44. S. Lin, W. F. Punch, E. D. Goodman. "Coarse-Grain Parallel Genetic Algorithms: Categorization and New Approach". Parallel & Distributed Processing. October 1994.

45. M. Lozano. "Application of Fuzzy Logic Based Techniques for Improving the Behavior of GAs with Floating Point Encoding". Ph.D. Thesis, Dpt. of C. S. and A.I., Univ. of Granada. July 1996.

46. B. Manderick, P. Spiessens. "Fine-Grained Parallel Genetic Algorithms". Procs. of the 3rd I. C. on Genetic Algorithms, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 428-433. 1989.

47. M. Mejía-Olvera, E. Cantú-Paz. "DGENESIS - Software for the Execution of Distributed Genetic Algorithms". Proceedings of the XX Conferencia Latinoamericana de Informática, pp. 935-946, Monterrey, México. 1994.

48. J. J. Merelo, A. Prieto. "GAGS, A Flexible Object-Oriented Library for Evolutionary Computation". Procs. of MALFO, D. Borrajo, P. Isasi (eds.), pp. 99-105. 1996.

49. Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag. 1992.

50. H. Mühlenbein. "Evolution in Time and Space - The Parallel Genetic Algorithm". Foundations of Genetic Algorithms, G. J. E. Rawlins (ed.), Morgan Kaufmann, pp. 316-337. 1991.

51. H. Mühlenbein, M. Schomisch, J. Born. "The Parallel Genetic Algorithm as Function Optimizer". Parallel Computing, 17, pp. 619-632. September 1991.

52. M. Munetomo, Y. Takai, Y. Sato. "An Efficient Migration Scheme for Subpopulation-Based Asynchronously Parallel GAs". HIER-IS-9301, Hokkaido University. July 1993.


53. T. Muntean, E. G. Talbi. "A Parallel Genetic Algorithm for Process-Processors Mapping". Proceedings of the Second Symposium II. High Performance Computing, M. Durán, E. Dabaghi (eds.), pp. 71-82. Montpellier, France: F. Amsterdam, Amsterdam. 1991.

54. NASA - Johnson Space Center. "Splicer - A Genetic Tool for Search and Optimization". Genetic Algorithm Digest, Vol. 5, Issue 17. 1991.

55. C. C. Pettey, M. R. Leuze, J. Grefenstette. "A Parallel Genetic Algorithm". Proceedings of the 2nd ICGA, J. Grefenstette (ed.), Lawrence Erlbaum Associates, pp. 155-161. 1987.

56. J. C. Potts, T. D. Giddens, S. B. Yadav. "The Development and Evaluation of an Improved Genetic Algorithm Based on Migration and Artificial Selection". IEEE Transactions on Systems, Man, and Cybernetics, 24(1), pp. 73-86. January 1994.

57. N. J. Radcliffe, P. D. Surry. "The Reproductive Plan Language RPL2: Motivation, Architecture and Applications". Genetic Algorithms in Optimisation, Simulation and Modelling, J. Stender, E. Hillebrand, J. Kingdon (eds.), IOS Press. 1994.

58. I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart. 1973.

59. J. L. Ribeiro Filho, C. Alippi, P. Treleaven. "Genetic Algorithm Programming Environments". Parallel Genetic Algorithms: Theory & Applications, J. Stender (ed.), IOS Press. 1993.

60. G. Robbins. "EnGENEer - The Evolution of Solutions". Proceedings of the 5th Annual Seminar on Neural Networks and Genetic Algorithms. 1992.

61. N. N. Schraudolph, J. J. Grefenstette. "A User's Guide to GAUCSD 1.2". T. R., Computer Science & Engineering Department, University of California, San Diego. 1991.

62. H. P. Schwefel. Numerical Optimization of Computer Models. Wiley, Chichester. 1981.

63. W. M. Spears, K. A. De Jong. "An Analysis of Multi-Point Crossover". Foundations of Genetic Algorithms, G. J. E. Rawlins (ed.), Morgan Kaufmann, pp. 301-315. 1991.

64. W. M. Spears. "Crossover or Mutation?". Proceedings of the Foundations of Genetic Algorithms Workshop, D. Whitley (ed.), Morgan Kaufmann, pp. 221-237. 1992.

65. P. Spiessens, B. Manderick. "A Massively Parallel Genetic Algorithm". Proceedings of the 4th International Conference on Genetic Algorithms, R. K. Belew, L. B. Booker (eds.), Morgan Kaufmann, pp. 279-286. 1991.

66. J. Stender (ed.). Parallel Genetic Algorithms: Theory and Applications. IOS Press. 1993.

67. G. Syswerda. "A Study of Reproduction in Generational and Steady-State Genetic Algorithms". Foundations of GAs, G. J. E. Rawlins (ed.), Morgan Kaufmann, pp. 94-101. 1991.

68. R. Tanese. "Distributed Genetic Algorithms". Proceedings of the 3rd International Conference on Genetic Algorithms, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 434-439. 1989.

69. H. M. Voigt, J. Born, J. Treptow. "The Evolution Machine Manual - V 2.1". T. R., Institute for Informatics and Computing Techniques, Berlin. 1991.

70. H. M. Voigt, I. Santibáñez-Koref, J. Born. "Hierarchically Structured Distributed Genetic Algorithms". Proceedings of the International Conference Parallel Problem Solving from Nature, 2, R. Männer, B. Manderick (eds.), North-Holland, Amsterdam, pp. 155-164. 1992.


71. M. Wall. "Overview of Matthew's Genetic Algorithm Library". http://lancet.mit.edu/ga. 1995.

72. D. Whitley. "The GENITOR Algorithm and Selection Pressure: Why Rank-Based Allocation of Reproductive Trials is Best". Proceedings of the 3rd International Conference on Genetic Algorithms, J. D. Schaffer (ed.), Morgan Kaufmann, pp. 116-121. 1989.

73. D. Whitley, T. Starkweather. "GENITOR II: a Distributed Genetic Algorithm". J. Expt. Theor. Artif. Intelligence, 2, pp. 189-214. 1990.

74. D. H. Wolpert, W. G. Macready. "No Free Lunch Theorems for Optimization". IEEE Transactions on Evolutionary Computation, 1(1), pp. 67-82. 1997.

