Nature-Inspired Metaheuristic Algorithms for Optimization and Computational Intelligence

This is the tutorial given at FedCSIS 2011 in Poland.
Transcript

Nature-Inspired Metaheuristic Algorithms for Optimization and Computational Intelligence

Xin-She Yang

National Physical Laboratory, UK

@ FedCSIS2011



Intro

Computational science is now the third paradigm of science, complementing theory and experiment.

- Ken Wilson (Cornell University), Nobel Laureate

All models are wrong, but some are useful.

- George Box, Statistician

All algorithms perform equally well on average over all possible functions. Not quite! (more later)

- No-free-lunch theorems (Wolpert & Macready)



Overview

Part I

Introduction

Metaheuristic Algorithms

Monte Carlo and Markov Chains

Algorithm Analysis

Part II

Exploration & Exploitation

Dealing with Constraints

Applications

Discussions & Bibliography



A Perfect Algorithm

What is the best relationship among E, m and c?

Initial state: m, E, c =⇒ ... =⇒ E = mc²

Steepest Descent

=⇒ the brachistochrone problem:

$$\min t = \int_0^d \frac{1}{v}\,ds = \int_0^d \sqrt{\frac{1+y'^2}{2g\,[h-y(x)]}}\,dx$$

=⇒ the solution is a cycloid:

$$x = \frac{A}{2}(\theta - \sin\theta), \qquad y = h - \frac{A}{2}(1 - \cos\theta)$$



Computing in Reality

A Problem & Problem Solvers
⇓
Mathematical/Numerical Models
⇓
Computer & Algorithms & Programming
⇓
Validation
⇓
Results



What is an Algorithm?

Essence of an Optimization Algorithm

To move to a new, better point x_{i+1} from an existing known location x_i.

[Figure: search moves x_1, x_2, ..., x_i =⇒ x_{i+1}?]

Population-based algorithms use multiple, interacting paths.

Different algorithms use different strategies/approaches in generating these moves!



Optimization is Like Treasure Hunting

How to find a treasure, a hidden 1 million dollars? What is your best strategy?



Optimization Algorithms

Deterministic

Newton's method (1669, published in 1711), Newton-Raphson (1690), hill-climbing/steepest descent (Cauchy 1847), least-squares (Gauss 1795), linear programming (Dantzig 1947), conjugate gradient (Lanczos et al. 1952), interior-point method (Karmarkar 1984), etc.



Stochastic/Metaheuristic

Genetic algorithms (1960s/1970s), evolutionary strategy (Rechenberg & Schwefel 1960s), evolutionary programming (Fogel et al. 1960s).

Simulated annealing (Kirkpatrick et al. 1983), Tabu search (Glover 1980s), ant colony optimization (Dorigo 1992), genetic programming (Koza 1992), particle swarm optimization (Kennedy & Eberhart 1995), differential evolution (Storn & Price 1996/1997),

harmony search (Geem et al. 2001), honeybee algorithm (Nakrani & Tovey 2004), ..., firefly algorithm (Yang 2008), cuckoo search (Yang & Deb 2009), ...



Steepest Descent/Hill Climbing

Gradient-Based Methods

Use gradient/derivative information – very efficient for local search.



Newton’s Method

$$x_{n+1} = x_n - H^{-1}\nabla f, \qquad H = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}.$$

Quasi-Newton

If H is replaced by I, we have

$$x_{n+1} = x_n - \alpha I \nabla f(x_n).$$

Here α controls the step length.

Generation of new moves by gradient.
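As a minimal illustration (not from the slides), the Newton iteration can be sketched in a few lines; the quadratic test function and all values below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the Newton iteration x_{n+1} = x_n - H^{-1} grad f(x_n).
def newton_step(grad, hess, x):
    # Solve H d = grad f rather than forming H^{-1} explicitly
    return x - np.linalg.solve(hess(x), grad(x))

# For f(x) = (1/2) x^T A x - b^T x: grad f = A x - b and H = A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(5):          # a quadratic converges in a single Newton step
    x = newton_step(lambda y: A @ y - b, lambda y: A, x)
print(x)                    # the minimizer satisfies A x = b, i.e. [0.2, 0.4]
```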



Steepest Descent Method (Cauchy 1847, Riemann 1863)

From the Taylor expansion of f(x) about x^{(n)}, we have

$$f(x^{(n+1)}) = f(x^{(n)} + \Delta s) \approx f(x^{(n)}) + (\nabla f(x^{(n)}))^T \Delta s,$$

where $\Delta s = x^{(n+1)} - x^{(n)}$ is the increment vector. So we require

$$f(x^{(n)} + \Delta s) - f(x^{(n)}) = (\nabla f)^T \Delta s < 0.$$

Therefore, we have

$$\Delta s = -\alpha \nabla f(x^{(n)}),$$

where α > 0 is the step size. In the case of finding maxima, this method is often referred to as hill-climbing.
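A minimal steepest-descent sketch following Δs = −α∇f; the objective, its gradient and the fixed step size α below are illustrative assumptions.

```python
# Steepest descent with a fixed step size alpha: x <- x - alpha * grad f(x).
def steepest_descent(grad, x, alpha=0.1, steps=200):
    for _ in range(steps):
        x = [xi - alpha * gi for xi, gi in zip(x, grad(x))]
    return x

# Example objective f(x, y) = (x - 1)^2 + 2 (y + 2)^2,
# with gradient (2 (x - 1), 4 (y + 2)); the minimum is at (1, -2).
grad = lambda v: [2 * (v[0] - 1), 4 * (v[1] + 2)]
print(steepest_descent(grad, [0.0, 0.0]))   # approaches [1.0, -2.0]
```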



Conjugate Gradient (CG) Method

It belongs to the Krylov subspace iteration methods. The conjugate gradient method was pioneered by Magnus Hestenes, Eduard Stiefel and Cornelius Lanczos in the 1950s. It was named one of the top 10 algorithms of the 20th century.

A linear system with a symmetric positive definite matrix A,

$$Au = b,$$

is equivalent to minimizing the following function f(u):

$$f(u) = \frac{1}{2}u^T A u - b^T u + v,$$

where v is a constant and can be taken to be zero. We can easily see that ∇f(u) = 0 leads to Au = b.



CG

The theory behind these iterative methods is closely related to the Krylov subspace K_n spanned by A and b, as defined by

$$K_n(A, b) = \{Ib, Ab, A^2b, \ldots, A^{n-1}b\},$$

where $A^0 = I$. If we use an iterative procedure to obtain the approximate solution $u_n$ to $Au = b$ at the nth iteration, the residual is given by

$$r_n = b - Au_n,$$

which is essentially the negative gradient $-\nabla f(u_n)$.



The search direction vector in the conjugate gradient method is subsequently determined by

$$d_{n+1} = r_n - \frac{d_n^T A r_n}{d_n^T A d_n} d_n.$$

The solution often starts with an initial guess $u_0$ at n = 0, and proceeds iteratively. The above steps can compactly be written as

$$u_{n+1} = u_n + \alpha_n d_n, \qquad r_{n+1} = r_n - \alpha_n A d_n,$$

and

$$d_{n+1} = r_{n+1} + \beta_n d_n,$$

where

$$\alpha_n = \frac{r_n^T r_n}{d_n^T A d_n}, \qquad \beta_n = \frac{r_{n+1}^T r_{n+1}}{r_n^T r_n}.$$

Iterations stop when a prescribed accuracy is reached.
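The iteration above translates almost line by line into code. Below is a minimal sketch; the small test system is an illustrative assumption.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal CG sketch for Au = b with symmetric positive definite A."""
    u = np.zeros_like(b)              # initial guess u_0 = 0
    r = b - A @ u                     # residual r_0 = b - A u_0
    d = r.copy()                      # first search direction d_0 = r_0
    for _ in range(max_iter):
        rr = r @ r
        alpha = rr / (d @ (A @ d))    # alpha_n = r_n^T r_n / d_n^T A d_n
        u = u + alpha * d             # u_{n+1} = u_n + alpha_n d_n
        r = r - alpha * (A @ d)       # r_{n+1} = r_n - alpha_n A d_n
        if np.linalg.norm(r) < tol:   # stop at the prescribed accuracy
            break
        beta = (r @ r) / rr           # beta_n = r_{n+1}^T r_{n+1} / r_n^T r_n
        d = r + beta * d              # d_{n+1} = r_{n+1} + beta_n d_n
    return u

# Usage: a small SPD system (illustrative)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))       # approx [0.0909, 0.6364]
```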



Gradient-free Methods

Gradient-based methods

Require derivative information; not suitable for problems with discontinuities.

Gradient-free or derivative-free methods

BFGS, Downhill simplex, Trust-region, SQP ...



Nelder-Mead Downhill Simplex Method

The Nelder-Mead method is a downhill simplex algorithm, first developed by J. A. Nelder and R. Mead in 1965.

A Simplex

In n-dimensional space, a simplex, which is a generalization of a triangle on a plane, is the convex hull of n + 1 distinct points. For simplicity, a simplex in n-dimensional space is referred to as an n-simplex.

[Figures (a)-(c): example simplexes]


Downhill Simplex Method

[Figures: simplex moves, with reflection x_r, expansion x_e, contraction x_c, and shrinkage]

The first step is to rank and re-order the vertices by their objective values,

$$f(x_1) \le f(x_2) \le \cdots \le f(x_{n+1}),$$

at $x_1, x_2, \ldots, x_{n+1}$, respectively.
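In practice one rarely codes the simplex moves by hand; SciPy, for example, exposes the method directly. A small usage sketch (the Rosenbrock test function and tolerances are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function as a standard test objective (illustrative choice)
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# SciPy's 'Nelder-Mead' method implements the downhill simplex algorithm
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x)   # approaches the optimum at [1, 1]
```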



Metaheuristic

Most are nature-inspired, mimicking certain successful features in nature.

Simulated annealing

Genetic algorithms

Ant and bee algorithms

Particle Swarm Optimization

Firefly algorithm and cuckoo search

Harmony search ...



Simulated Annealing

Metal annealing to increase strength =⇒ simulated annealing.

Probabilistic move: $p \propto \exp[-E/(k_B T)]$.

$k_B$ = Boltzmann constant (e.g., $k_B = 1$), T = temperature, E = energy.

$E \propto f(x)$, $T = T_0 \alpha^t$ (cooling schedule), $0 < \alpha < 1$.

$T \to 0 \Rightarrow p \to 0 \Rightarrow$ hill climbing.

This is essentially a Markov chain. Generation of new moves by a Markov chain.
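A minimal sketch of this chain with the geometric cooling schedule $T = T_0\alpha^t$; the 1-D objective, the Gaussian proposal and all tuning constants are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, T0=1.0, alpha=0.95, steps=10000, kB=1.0):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        x_new = x + random.gauss(0.0, 1.0)     # random trial move
        f_new = f(x_new)
        delta = f_new - fx                     # energy change, E ~ f(x)
        # accept improvements always, worse moves with p = exp(-dE/(kB T))
        if delta < 0 or random.random() < math.exp(-delta / (kB * T)):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                             # cooling schedule T = T0 * alpha^t
    return best, fbest

print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0))  # near (3.0, 0.0)
```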




Genetic Algorithms

[Figures: crossover and mutation operators]


Generation of new solutions by crossover, mutation and elitism.
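A minimal binary GA sketch showing the three operators; the truncation selection scheme, the OneMax objective and the parameter values are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, L=20, n=30, mu=0.05, generations=100):
    # random initial population of n binary strings of length L
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(n)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        new_pop = [row[:] for row in pop[:2]]        # elitism: keep the 2 best
        while len(new_pop) < n:
            p1, p2 = random.sample(pop[:n // 2], 2)  # select parents from better half
            cut = random.randrange(1, L)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mu else b
                     for b in child]                 # bitwise mutation, rate mu
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Usage: maximize the number of ones (the classic OneMax test problem)
print(genetic_algorithm(sum))
```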



Swarm Intelligence

Ants, bees, birds, fish ...

Simple rules lead to complex behaviour.



Cuckoo Search

Local random walk:

$$x_i^{t+1} = x_i^t + s \otimes H(p_a - \epsilon) \otimes (x_j^t - x_k^t),$$

where $x_i, x_j, x_k$ are three different solutions, $H(u)$ is a Heaviside function, $\epsilon$ is a random number drawn from a uniform distribution, and s is the step size.

Global random walk via Lévy flights:

$$x_i^{t+1} = x_i^t + \alpha L(s, \lambda), \qquad L(s, \lambda) = \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \frac{1}{s^{1+\lambda}}, \quad (s \gg s_0).$$

Generation of new moves by Lévy flights, random walk and elitism.
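A sketch of these two moves; the Lévy steps are drawn with Mantegna's algorithm, a common choice in cuckoo-search implementations, and the parameter values (λ = 1.5, α = 0.01, p_a = 0.25) are illustrative assumptions.

```python
import math
import random

def levy_step(lam=1.5):
    # Mantegna's algorithm for a Levy-stable step of index lambda
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / lam)

def local_walk(x_i, x_j, x_k, s=0.1, pa=0.25):
    # x_i^{t+1} = x_i^t + s * H(pa - eps) * (x_j^t - x_k^t), componentwise
    H = 1.0 if pa - random.random() > 0 else 0.0   # Heaviside function
    return [a + s * H * (b - c) for a, b, c in zip(x_i, x_j, x_k)]

def global_walk(x_i, alpha=0.01):
    # x_i^{t+1} = x_i^t + alpha * Levy(lambda)
    return [a + alpha * levy_step() for a in x_i]

print(global_walk(local_walk([0.0, 0.0], [1.0, 1.0], [0.5, -0.5])))
```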



Monte Carlo Methods

Almost everyone has used Monte Carlo methods in some way ...

Measure temperatures, choose a product, ... taste soup, wine ...



Markov Chains

Random walk – A drunkard’s walk:

$$u_{t+1} = \mu + u_t + w_t,$$

where wt is a random variable, and µ is the drift.

For example, wt ∼ N(0, σ2) (Gaussian).

[Figures: a sample 1-D random walk over 500 steps, and a 2-D random walk path]
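A tiny sketch simulating this drunkard's walk; the values of μ, σ and the step count are illustrative assumptions.

```python
import random

def random_walk(steps=500, mu=0.0, sigma=1.0, u0=0.0):
    u, path = u0, [u0]
    for _ in range(steps):
        u = mu + u + random.gauss(0.0, sigma)   # u_{t+1} = mu + u_t + w_t
        path.append(u)
    return path

print(random_walk()[-1])   # final position after 500 steps
```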



Markov Chains

Markov chain: the next state only depends on the current state and the transition probability.

$$P(i, j) \equiv P(V_{t+1} = S_j \mid V_0 = S_p, \ldots, V_t = S_i) = P(V_{t+1} = S_j \mid V_t = S_i),$$

$$\Rightarrow \; P_{ij}\pi_i^* = P_{ji}\pi_j^*, \qquad \pi^* = \text{stationary probability distribution}.$$

Example: Brownian motion

$$u_{i+1} = \mu + u_i + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2).$$
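For a finite chain, the stationary distribution π* can be found by simply iterating the transition matrix; the 2-state chain below is an illustrative assumption.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])       # row-stochastic transition matrix (illustrative)

pi = np.array([1.0, 0.0])        # any initial distribution
for _ in range(1000):
    pi = pi @ P                  # one step of the chain: pi <- pi P
print(pi)                        # converges to the stationary [5/6, 1/6]
```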



Another example: Monopoly (board games).



Markov Chain Monte Carlo

Landmarks: Monte Carlo methods (1930s, 1945, from the 1950s), e.g., the Metropolis algorithm (1953) and Metropolis-Hastings (1970).

Markov chain Monte Carlo (MCMC) methods – a class of methods.

MCMC really took off in the 1990s, and is now applied to a wide range of areas: physics, Bayesian statistics, climate change, machine learning, finance, economics, medicine, biology, materials and engineering ...
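A minimal Metropolis sketch; the target density (a standard normal, up to a constant) and the proposal width are illustrative assumptions.

```python
import math
import random

def metropolis(log_p, x0=0.0, scale=1.0, n=10000):
    x, samples = x0, []
    for _ in range(n):
        x_new = x + random.gauss(0.0, scale)        # symmetric random-walk proposal
        dlog = log_p(x_new) - log_p(x)
        # accept with probability min(1, p(x_new)/p(x))
        if random.random() < math.exp(min(0.0, dlog)):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis(lambda x: -0.5 * x * x)        # log N(0,1) up to a constant
print(sum(samples) / len(samples))                  # sample mean, near 0
```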



Convergence Behaviour

As the MCMC runs, convergence may be reached.

When does a chain converge? When to stop the chain ...?

Are multiple chains better than a single chain?

[Figure: a sample MCMC trace]



[Figure: distributions at t = −n, ..., −2, 0, 2 converging as t increases]

Multiple, interacting chains

Multiple agents trace multiple, interacting Markov chains during the Monte Carlo process.



Analysis

Classifications of Algorithms

Trajectory-based: hill-climbing, simulated annealing, pattern search ...

Population-based: genetic algorithms, ant & bee algorithms, artificial immune systems, differential evolution, PSO, HS, FA, CS, ...

Ways of Generating New Moves/Solutions

Markov chains with different transition probabilities.

Trajectory-based =⇒ a single Markov chain; population-based =⇒ multiple, interacting chains.

Tabu search (with memory) =⇒ self-avoiding Markov chains.



Ergodicity

Markov Chains & Markov Processes

Most theoretical studies use Markov chains/processes as a framework for convergence analysis.

A Markov chain is said to be regular if some positive power k of the transition matrix P has only positive elements.

A chain is called time-homogeneous if its transition matrix P is the same after each step; the transition probability after k steps then becomes P^k.

A chain is ergodic or irreducible if it is aperiodic and positive recurrent – it is possible to reach every state from any state.



Convergence Behaviour

As k → ∞, we have the stationary probability distribution π:

$$\pi = \pi P, \quad \Rightarrow \text{thus the first eigenvalue is always 1.}$$

Asymptotic convergence to optimality:

$$\lim_{k \to \infty} \theta_k \to \theta^*, \quad \text{(with probability one)}.$$

The rate of convergence is usually determined by the second eigenvalue $0 < \lambda_2 < 1$.

An algorithm can converge, but may not necessarily be efficient, as the rate of convergence is typically low.
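Both eigenvalues can be checked numerically; the 2-state chain below is an illustrative assumption.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])                    # illustrative 2-state chain

eigvals = np.linalg.eigvals(P)                # eigenvalues of the transition matrix
print(sorted(abs(eigvals), reverse=True))     # [1.0, 0.4]: lambda_1 = 1, lambda_2 = 0.4
```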



Convergence of GA

Important studies by Aytug et al. (1996) [1], Aytug and Koehler (2000) [2], Greenhalgh and Marshall (2000) [3], Gutjahr (2010) [4], etc.

The number of iterations t(ζ) in GA with a convergence probability of ζ can be estimated by

$$t(\zeta) \le \frac{\ln(1-\zeta)}{\ln\left\{1 - \min[(1-\mu)^{Ln},\; \mu^{Ln}]\right\}},$$

where μ = mutation rate, L = string length, and n = population size.
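Evaluating this bound for even modest parameter values shows how large the guaranteed iteration count can be; the numbers below are illustrative assumptions.

```python
import math

def ga_iteration_bound(zeta, mu, L, n):
    worst = min((1 - mu) ** (L * n), mu ** (L * n))
    # log1p keeps precision since 'worst' is typically tiny
    return math.log(1 - zeta) / math.log1p(-worst)

# e.g. 99% convergence probability, mutation rate 0.05, L = 10, n = 10:
print(ga_iteration_bound(zeta=0.99, mu=0.05, L=10, n=10))  # astronomically large
```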

[1] H. Aytug, S. Bhattacharrya and G. J. Koehler, A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets, Euro. J. Operational Research, 96, 195-201 (1996).
[2] H. Aytug and G. J. Koehler, New stopping criterion for genetic algorithms, Euro. J. Operational Research, 126, 662-674 (2000).
[3] D. Greenhalgh and S. Marshall, Convergence criteria for genetic algorithms, SIAM J. Computing, 30, 269-282 (2000).
[4] W. J. Gutjahr, Convergence analysis of metaheuristics, Annals of Information Systems, 10, 159-187 (2010).



Multiobjective Metaheuristics

Asymptotic convergence of metaheuristics for multiobjective optimization (Villalobos-Arias et al. 2005) [6]:

The transition matrix P of a metaheuristic algorithm has a stationary distribution π such that

$$|P^k_{ij} - \pi_j| \le (1-\zeta)^{k-1}, \quad \forall i, j, \; (k = 1, 2, \ldots),$$

where ζ is a function of the mutation probability μ, string length L and population size n. For example, $\zeta = 2^{nL}\mu^{nL}$, so μ < 0.5.

Note: an algorithm satisfying this condition may not converge (for multiobjective optimization). However, an algorithm with elitism, obeying the above condition, does converge!

[6] M. Villalobos-Arias, C. A. Coello Coello and O. Hernandez-Lerma, Asymptotic convergence of metaheuristics for multiobjective optimization problems, Soft Computing, 10, 1001-1005 (2005).



Other results

Limited results on convergence analysis exist, concerning (finite states/domains):

ant colony optimization

generalized hill-climbers and simulated annealing,

best-so-far convergence of cross-entropy optimization,

nested partition method, Tabu search, and

of course, combinatorial optimization.

However, there are more challenging tasks for infinite states/domains and continuous problems.

Many, many open problems need satisfactory answers.



Converged?

Converged, often in the ‘best-so-far’ sense, not necessarily at the global optimality.

In theory, a Markov chain can converge, but the number of iterations tends to be large.

In practice, with a finite (hopefully small) number of generations, even if the algorithm converges, it may not reach the global optimum.

How to avoid premature convergence

Equip an algorithm with the ability to escape a local optimum

Increase diversity of the solutions

Enough randomization at the right stage

....(unknown, new) ....



Coffee Break (15 Minutes)



All and NFL

So many algorithms – what are the common characteristics?

What are the key components?

How to use and balance different components?

What controls the overall behaviour of an algorithm?



Exploration and Exploitation

Characteristics of Metaheuristics

Exploration and Exploitation, or Diversification and Intensification.

Exploitation/Intensification

Intensive local search, exploiting local information. E.g., hill-climbing.

Exploration/Diversification

Exploratory global search, using randomization/stochastic components. E.g., hill-climbing with random restart.



Summary

[Figure: algorithms placed on exploration-exploitation axes, from uniform search (pure exploration) and steepest descent / Newton-Raphson (pure exploitation) to Tabu, Nelder-Mead, CS, PSO/FA, EP/ES, SA, Ant/Bee and genetic algorithms in between]

Best? Free lunch?



No-Free-Lunch (NFL) Theorems

Algorithm Performance

Any algorithm is as good/bad as random search, when averaged over all possible problems/functions.

Finite domains

No universally efficient algorithm!

Any free taster or dessert?

Yes and no. (more later)



NFL Theorems (Wolpert and Macready 1997)

The search space is finite (though quite large), thus the space of possible "cost" values is also finite. The objective function is $f : \mathcal{X} \mapsto \mathcal{Y}$, with $\mathcal{F} = \mathcal{Y}^{\mathcal{X}}$ (the space of all possible problems). Assumptions: finite domain, closed under permutation (c.u.p.).

For m iterations, the m distinct visited points form a time-ordered set

$$d_m = \left\{\left(d_m^x(1), d_m^y(1)\right), \ldots, \left(d_m^x(m), d_m^y(m)\right)\right\}.$$

The performance of an algorithm a iterated m times on a cost function f is denoted by $P(d_m^y \mid f, m, a)$.

For any pair of algorithms a and b, the NFL theorem states

$$\sum_f P(d_m^y \mid f, m, a) = \sum_f P(d_m^y \mid f, m, b).$$

Any algorithm is as good (bad) as a random search!


Page 104: Nature-inspired metaheuristic algorithms for optimization and computional intelligence

Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks

Open Problems

Open Problems

Framework: Need to develop a unified framework foralgorithmic analysis (e.g.,convergence).

Exploration and exploitation: What is the optimal balancebetween these two components? (50-50 or what?)

Performance measure: What are the best performancemeasures ? Statistically? Why ?

Convergence: Convergence analysis of algorithms for infinite,continuous domains require systematic approaches?


More Open Problems

Free lunches: The NFL theorems remain unproved for infinite or continuous domains and for multiobjective optimization (possible free lunches!). What are the implications of the NFL theorems in practice? If free lunches exist, how do we find the best algorithm(s)?

Knowledge: Does problem-specific knowledge always help to find appropriate solutions? How can such knowledge be quantified?

Intelligent algorithms: Is there any practical way to design truly intelligent, self-evolving algorithms?


Constraints

So far, in describing optimization algorithms, we have not been concerned with constraints. In practice, algorithms must solve both unconstrained and, more often, constrained problems.

The handling of constraints is an implementation issue, though incorrect or inefficient handling can reduce an algorithm's efficiency, or even lead to wrong solutions.

Methods of handling constraints

Direct methods

Lagrange multipliers

Barrier functions

Penalty methods


Aims

Either converting a constrained problem into an unconstrained one,

or changing the search space into a regular domain.


The ease of programming and implementation

Improve (or at least do not hinder) the efficiency of the chosen algorithm in implementation.


Scalability

The chosen approach should be able to deal with small, large, and very large scale problems.


Common Approaches

Direct method

Simple, but not versatile, and difficult to program.


Lagrange multipliers

Mainly for equality constraints.


Barrier functions

Very powerful and widely used in convex optimization.


Penalty methods

Simple and versatile, widely used.


Others


Direct Methods

Minimize f(x, y) = (x − 2)² + 4(y − 3)²

subject to −x + y ≤ 2, x + 2y ≤ 3.

[Figure: the feasible region bounded by −x + y ≤ 2 and x + 2y ≤ 3, with the optimal point marked.]

Direct methods generate solutions/points only inside the feasible region (easy for rectangular regions). A minimal sketch of this idea follows.
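As a rough illustration (the sampler below is a sketch, not part of the tutorial), rejection sampling draws points from a bounding box and keeps only those satisfying both constraints of the example above:

import random

def sample_feasible(n, lo=-5.0, hi=5.0):
    # Rejection sampling: draw from a bounding box, keep feasible points.
    pts = []
    while len(pts) < n:
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        if -x + y <= 2 and x + 2 * y <= 3:   # both constraints hold
            pts.append((x, y))
    return pts

f = lambda x, y: (x - 2) ** 2 + 4 * (y - 3) ** 2
best = min(sample_feasible(10000), key=lambda p: f(*p))
print(best, f(*best))   # approaches the constrained optimum

This is trivial for box-like regions but becomes wasteful when the feasible region occupies only a small fraction of the bounding box.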


Method of Lagrange Multipliers

Maximize f(x, y) = 10 − x² − (y − 2)² subject to x + 2y = 5.

Defining a combined function Φ using a multiplier λ, we have

Φ = 10 − x² − (y − 2)² + λ(x + 2y − 5).

The optimality conditions are

∂Φ/∂x = −2x + λ = 0,   ∂Φ/∂y = −2(y − 2) + 2λ = 0,   ∂Φ/∂λ = x + 2y − 5 = 0,

whose solution is x = 1/5, y = 12/5, λ = 2/5, giving f_max = 49/5.
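Since the optimality conditions are linear in (x, y, λ), the worked example is easy to verify with a direct linear solve (an illustrative check, assuming NumPy is available):

import numpy as np

# Stationarity of Phi = 10 - x^2 - (y - 2)^2 + lam*(x + 2y - 5):
#   -2x      + lam   = 0
#   -2y      + 2*lam = -4     (rearranged from -2(y - 2) + 2*lam = 0)
#    x + 2y          = 5
A = np.array([[-2.0,  0.0, 1.0],
              [ 0.0, -2.0, 2.0],
              [ 1.0,  2.0, 0.0]])
b = np.array([0.0, -4.0, 5.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam, 10 - x**2 - (y - 2)**2)   # 0.2, 2.4, 0.4, 9.8 = 49/5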


Barrier Functions

As an equality h(x) = 0 can be written as two inequalities h(x) ≤ 0 and −h(x) ≤ 0, we need only consider inequalities.

For a general optimization problem,

minimize f(x), subject to g_i(x) ≤ 0 (i = 1, 2, ..., N),

we can define an indicator (barrier) function

I_−[u] = { 0 if u ≤ 0;  ∞ if u > 0 }.

Not so easy to deal with numerically. Also discontinuous!


Logarithmic Barrier Functions

A log barrier function is

I_−(u) = −(1/t) log(−u),  u < 0,

where t > 0 is an accuracy parameter (it can be very large).


Then, the above minimization problem becomes

minimize f(x) + Σ_{i=1}^{N} I_−(g_i(x)) = f(x) − (1/t) Σ_{i=1}^{N} log[−g_i(x)].

This is an unconstrained problem and easy to implement!
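As a minimal sketch (illustrative only; it assumes SciPy and a strictly feasible starting point), the log-barrier reformulation can be applied to the earlier example of minimizing (x − 2)² + 4(y − 3)² subject to −x + y ≤ 2 and x + 2y ≤ 3:

import numpy as np
from scipy.optimize import minimize

f = lambda z: (z[0] - 2) ** 2 + 4 * (z[1] - 3) ** 2
gs = [lambda z: -z[0] + z[1] - 2,        # g1(x, y) <= 0
      lambda z: z[0] + 2 * z[1] - 3]     # g2(x, y) <= 0

def barrier_objective(z, t=1e3):
    g = np.array([gi(z) for gi in gs])
    if np.any(g >= 0):                   # outside the strict interior: reject
        return np.inf
    return f(z) - (1.0 / t) * np.sum(np.log(-g))

res = minimize(barrier_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x, f(res.x))                   # close to the constrained optimum

Interior-point methods solve a sequence of such problems with increasing t, so the barrier term fades as the iterates approach the true boundary solution.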


Penalty Methods

For a nonlinear optimization problem with equality and inequality constraints,

minimize_{x ∈ ℜⁿ} f(x),  x = (x_1, ..., x_n)^T ∈ ℜⁿ,

subject to φ_i(x) = 0 (i = 1, ..., M),  ψ_j(x) ≤ 0 (j = 1, ..., N),

the idea is to define a penalty function so that the constrained problem is transformed into an unconstrained one. Now we define

Π(x, μ_i, ν_j) = f(x) + Σ_{i=1}^{M} μ_i φ_i²(x) + Σ_{j=1}^{N} ν_j ψ_j²(x),

where μ_i ≫ 1 and ν_j ≥ 0 should be large enough, depending on the solution quality needed.


In addition, for simplicity of implementation, we can use μ = μ_i for all i and ν = ν_j for all j. That is, we can use the simplified form

Π(x, μ, ν) = f(x) + μ Σ_{i=1}^{M} Q_i[φ_i(x)] φ_i²(x) + ν Σ_{j=1}^{N} H_j[ψ_j(x)] ψ_j²(x).

Here the barrier/indicator-like functions are

H_j = { 0 if ψ_j(x) ≤ 0;  1 if ψ_j(x) > 0 },   Q_i = { 0 if φ_i(x) = 0;  1 if φ_i(x) ≠ 0 }.

In general, for most applications, μ and ν can be taken as 10^10 to 10^15. We will use such values in most implementations. A minimal sketch of this scheme follows.
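The sketch below (illustrative names, not from the tutorial) implements the simplified penalty Π directly; the switches Q_i and H_j appear as the conditional terms:

def penalized(f, x, phis=(), psis=(), mu=1e10, nu=1e10):
    # Pi(x) = f(x) + mu*sum(Q_i * phi_i^2) + nu*sum(H_j * psi_j^2)
    pi = f(x)
    for phi in phis:          # equality constraints phi_i(x) = 0
        v = phi(x)
        pi += mu * v ** 2     # Q_i = 1 whenever v != 0; the term is 0 otherwise
    for psi in psis:          # inequality constraints psi_j(x) <= 0
        v = psi(x)
        if v > 0:             # H_j = 1 only when the constraint is violated
            pi += nu * v ** 2
    return pi

# The earlier test problem, handled by a penalty instead of a barrier:
f = lambda z: (z[0] - 2) ** 2 + 4 * (z[1] - 3) ** 2
psis = (lambda z: -z[0] + z[1] - 2, lambda z: z[0] + 2 * z[1] - 3)
print(penalized(f, (0.0, 0.0), psis=psis))   # feasible: just f(x) = 40
print(penalized(f, (3.0, 3.0), psis=psis))   # infeasible: huge penalized value

Unlike a barrier, the penalized objective is defined everywhere, so a stochastic search may pass through infeasible regions; the large μ and ν simply make such points uncompetitive.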


Pressure Vessel Design Optimization

[Figure: cylindrical pressure vessel with inner radius r, cylindrical length L, shell thickness d1 and head thickness d2.]


Formulation

minimize f(x) = 0.6224 d1 r L + 1.7781 d2 r² + 3.1661 d1² L + 19.84 d1² r,

subject to

g1(x) = −d1 + 0.0193 r ≤ 0,
g2(x) = −d2 + 0.00954 r ≤ 0,
g3(x) = −π r² L − (4π/3) r³ + 1296000 ≤ 0,
g4(x) = L − 240 ≤ 0,
h1(x) = [d1/0.0625] − n = 0,
h2(x) = [d2/0.0625] − k = 0.

The simple bounds are 0.0625 ≤ d1, d2 ≤ 99 × 0.0625 and 10.0 ≤ r, L ≤ 200.0, where 1 ≤ n, k ≤ 99 are integers.


Minimize Π(x, λ) = f(x) + λ Σ_{i=1}^{2} Q_i[h_i(x)] h_i²(x) + λ Σ_{j=1}^{4} H_j[g_j(x)] g_j²(x),

where λ = 10^15.

This becomes an unconstrained optimization problem in a regular domain.

The best solution found so far in the literature is f* = $6059.714 at (d1, d2, r, L) = (0.8125, 0.4375, 42.0984, 176.6366). A sketch evaluating this design follows.
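A rough sketch of the penalized pressure vessel objective (illustrative; h1 and h2 are satisfied automatically here because 0.8125 and 0.4375 are integer multiples of 0.0625, so only the g_j terms are penalized):

import numpy as np

def pv_cost(x):
    # Pressure vessel cost f(x), with x = (d1, d2, r, L)
    d1, d2, r, L = x
    return (0.6224 * d1 * r * L + 1.7781 * d2 * r ** 2
            + 3.1661 * d1 ** 2 * L + 19.84 * d1 ** 2 * r)

def pv_g(x):
    # Inequality constraints g_j(x) <= 0
    d1, d2, r, L = x
    return np.array([-d1 + 0.0193 * r,
                     -d2 + 0.00954 * r,
                     -np.pi * r ** 2 * L - (4 * np.pi / 3) * r ** 3 + 1296000.0,
                     L - 240.0])

def pv_penalized(x, lam=1e15):
    g = pv_g(x)
    return pv_cost(x) + lam * np.sum(np.where(g > 0, g ** 2, 0.0))

x_best = np.array([0.8125, 0.4375, 42.0984, 176.6366])
print(pv_cost(x_best))   # about 6059.7
print(pv_g(x_best))      # g1 and g3 are (near-)active; the published digits are
                         # rounded, so g3 can come out marginally positive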


Applications

Design optimization: structural engineering, product design ...

Scheduling, routing and planning: often discrete, combinatorial problems ...

Applications in almost all areas (e.g., finance, economics, engineering, industry, ...)


Dome Design


120-bar dome: divided into 7 groups, 120 design elements, about 200 constraints (Kaveh and Talatahari 2010; Gandomi and Yang 2011).


Tower Design

26-storey tower: 942 design elements, 244 nodal links, 59 groups/types, > 4000 nonlinear constraints (Kaveh & Talatahari 2010; Gandomi & Yang 2011).


Topology Optimization of Nanoscale Device

The topology optimization of a nanoscale heat-conducting system is shape optimization [7], which can be considered an inverse problem for the shape or distribution of materials.

[Figure: benchmark design, a 150 nm × 150 nm domain carrying a heat flux, with boundary temperatures T = 1 and T = 0 on opposite sides and T(x) = 1 − x along the lateral edges.]

Two materials with heat diffusivities K1 and K2, respectively; for example, Si and Mg2Si, with K1/K2 ≈ 10. The aim is to distribute the two materials such that the difference |Ta − Tb| is as large as possible.

[7] A. Evgrafov, K. Maute, R. G. Yang and M. L. Dunn, Topology optimization for nano-scale heat transfer, Int. J. Num. Methods in Engrg., 77(2), 285-300 (2009).


Initial Configuration

Unit square with two different materials (initial configuration).

[Figure: initial configuration, a unit square partitioned into regions of materials K1 and K2, with measurement points Ta and Tb.]

Then, the firefly algorithm (FA) is used to redistribute these two materials so as to maximize the temperature difference.


Optimal shape and distribution of materials: Si (blue) and Mg2Si (red).


Optimal topology (left) and temperature distribution (right).


References

Sambridge, M. and Mosegaard, K. (2002). Monte Carlo methods in geophysical inverse problems, Reviews of Geophysics, 40(3), pp. 3-1 to 3-29.

Scales, J. A., Smith, M. L. and Treitel, S. (2001). Introductory Geophysical Inverse Theory, Samizdat Press.

Yang, X. S. (2008). Nature-Inspired Metaheuristic Algorithms, Luniver Press, UK.

Yang, X. S. (2009). Firefly algorithms for multimodal optimization, 5th Symposium on Stochastic Algorithms, Foundations and Applications (SAGA 2009) (Eds Watanabe O. and Zeugmann T.), LNCS, 5792, pp. 169-178.

Yang, X.-S. and Deb, S. (2009). Cuckoo search via Lévy flights, World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE Publications, pp. 210-214. arXiv:1003.1594v1.


Thanks

International Journal of Mathematical Modelling and Numerical Optimization (IJMMNO): http://www.inderscience.com/ijmmno

Books:

Computational Optimization, Methods and Algorithms (Slawomir Koziel and Xin-She Yang), Springer (2011). http://www.springerlink.com/content/978-3-642-20858-4

Engineering Optimization: An Introduction with Metaheuristic Applications (Xin-She Yang), John Wiley & Sons (2010). http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470582464.html


Notes

https://sites.google.com/site/tutorialmetaheuristic/tutorials

Thank you!
