Benchmarking the SMS-EMOA with Self-adaptation on the bbob-biobj Test Suite
Simon Wessing
Chair of Algorithm Engineering, Computer Science Department
Technische Universität Dortmund
16 July 2017
Introduction
- Evolutionary multiobjective optimization
- Continuous decision variables
- The (1 + 1)-SMS-EMOA is algorithmically equivalent to a single-objective (1 + 1)-EA
  ⇒ Theory about optimal step size from single-objective optimization applies
- Situation for (µ + 1), (µ + λ) unknown
- How to define step size optimality?
- How to adapt the step size, if not with the very sophisticated MO-CMA-ES?
Benchmarking the SMS-EMOA with Self-adaptation 2 / 18
Development of Control Mechanism
- Idea: use self-adaptation from single-objective optimization
- Mutation of genome: y = x + σ N(0, I)
- Mutation of step size: σ = σ̃ · exp(τ N(0, 1))
- Learning parameter τ ∝ 1/√n
- Not state of the art any more
- Behavior is emergent
- Theoretical analysis is difficult
- Application to multiobjective optimization is scarce
⇒ Experiment to find good parameter configurations
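The mechanism above can be sketched in a few lines of Python (a minimal illustration of the two mutation equations, not the benchmarked implementation; function and variable names are chosen for clarity):

```python
import numpy as np

def self_adaptive_mutation(x, sigma, rng, c=1.0):
    """One self-adaptive mutation step: the step size mutates first
    (log-normally), then the genome is perturbed with the new step size."""
    n = len(x)
    tau = c / np.sqrt(n)                                      # learning parameter tau = c / sqrt(n)
    sigma_new = sigma * np.exp(tau * rng.standard_normal())   # sigma = sigma~ * exp(tau * N(0, 1))
    y = x + sigma_new * rng.standard_normal(n)                # y = x + sigma * N(0, I)
    return y, sigma_new

rng = np.random.default_rng(42)
y, sigma = self_adaptive_mutation(np.zeros(10), 0.025, rng)
```

Because the step size is inherited together with the genome, selection implicitly favors individuals whose step size produced good offspring — this is why the behavior is emergent rather than explicitly controlled.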
Experimental Setup

Factor                    Type        Symbol  Levels
Number of variables       observable  n       {2, 3, 5, 10, 20}
Learning param. constant  control     c       {2^-2, 2^-1, 2^0, 2^1, 2^2, 2^3}
Population size           control     µ       {10, 50}
Number of offspring       control     λ       {1, µ, 5µ}
Recombination             control             {discrete, intermediate, arithmetic, none}

- Full factorial design
- 15 unimodal problems of bbob-biobj 2016 (only first instance)
- Budget: 10^4 · n function evaluations
- Assessment: rank-transformed HV values of whole EA runs
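The rank transformation of hypervolume values could, for instance, look like this (a minimal sketch with hypothetical data; higher HV is better, so rank 1 goes to the largest value, with tied values receiving averaged ranks):

```python
def average_ranks(values):
    """Rank values descending (rank 1 = largest), averaging tied ranks."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Hypothetical final HV values of four configurations on one problem:
print(average_ranks([0.92, 0.85, 0.99, 0.85]))  # -> [2.0, 3.5, 1.0, 3.5]
```

Ranking per problem makes results comparable across problems whose raw hypervolume values live on different scales.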
Other Factors Held Constant
- Initial mutation strength σ_init = 0.025
- Repair method for bound violations: Lamarckian reflection (search space [−100, 100]^n, scaled to the unit hypercube)
- Selection: iteratively removes the worst individual until µ is reached (backward elimination)
⇒ Might have to reconsider in the future
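Reflection repair could be sketched as follows (a minimal illustration for the unit hypercube; the periodic folding formula, which also handles arbitrarily large violations, is an assumption about the details, not necessarily the benchmarked code):

```python
import numpy as np

def reflect(x, lower=0.0, upper=1.0):
    """Repair bound violations by reflecting at the boundaries.
    Lamarckian: the repaired point replaces the original genome."""
    x = np.asarray(x, dtype=float)
    width = upper - lower
    # Map into one period of 2*width, then fold the second half back.
    y = (x - lower) % (2.0 * width)
    y = np.where(y > width, 2.0 * width - y, y)
    return lower + y

print(reflect(np.array([1.2, -0.1, 0.5])))  # approximately [0.8 0.1 0.5]
```

Unlike clipping, reflection does not pile repaired points up on the boundary, which matters when optima lie close to it.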
Pseudocode

Input: population size µ, initial population P0, number of offspring λ
 1: t ← 0
 2: while stopping criterion not fulfilled do
 3:   Ot ← createOffspring(Pt)                 // create λ offspring
 4:   evaluate(Ot)                             // calculate objective values
 5:   Qt ← Pt ∪ Ot
 6:   r ← createReferencePoint(Qt)
 7:   while |Qt| > µ do
 8:     {F1, ..., Fw} ← nondominatedSort(Qt)   // sort into fronts
 9:     x* ← argmin_{x ∈ Fw} ∆s(x, Fw, r)      // x* with smallest contribution
10:     Qt ← Qt \ {x*}                         // remove worst individual
11:   end while
12:   Pt+1 ← Qt
13:   t ← t + 1
14: end while
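For two objectives, the hypervolume contribution ∆s(x, Fw, r) in line 9 reduces to a rectangle area per point; a minimal Python sketch of this selection step (assuming minimization and a strictly nondominated front; names are illustrative):

```python
def hv_contributions(front, ref):
    """Exclusive hypervolume contribution of each point of a
    nondominated 2-D front (minimization), w.r.t. reference point ref."""
    pts = sorted(front)  # ascending in f1 => descending in f2 on a nondominated front
    contribs = {}
    for i, p in enumerate(pts):
        right_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        upper_f2 = pts[i - 1][1] if i > 0 else ref[1]
        contribs[p] = (right_f1 - p[0]) * (upper_f2 - p[1])
    return contribs

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (5.0, 5.0)
contribs = hv_contributions(front, ref)
worst = min(front, key=contribs.get)  # candidate for removal in line 10
```

Sorting makes this O(w log w) per removal in the bi-objective case; with more objectives the contribution computation becomes substantially more expensive.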
Main Effect: Learning Parameters τ = c/√n

[Figure: average ranks for c ∈ {2^-2, 2^-1, 2^0, 2^1, 2^2, 2^3}]

- c = 2^-2 is always the worst choice
⇒ Exclude c = 2^-2 from further analysis
Mutation Strength vs. Generation
[Figure: average step size σ̄ vs. generation (log scales), for (a) τ = 2^-2/√n, (b) τ = 2^0/√n, (c) τ = 2^2/√n, (d) τ = 2^3/√n]
Main Effect: Selection Variants
[Figure: average ranks for the selection variants (10 + 1), (10 + 10), (10 + 50), (50 + 1), (50 + 50), (50 + 250)]
Main and Interaction Effects: Recombination & Selection
[Figure: average ranks for the recombination variants arithmetic, discrete, intermediate, none]

Average ranks by selection and recombination:

            arithmetic  discrete  intermediate  none
(10 + 1)         46.97     85.43         82.53  78.95
(10 + 10)        51.29     72.55         83.48  68.34
(10 + 50)        47.69     62.90         82.25  42.50
(50 + 1)         61.93     63.21         84.93  40.95
(50 + 50)        58.23     55.88         84.06  30.43
(50 + 250)       53.77     51.34         78.82  27.14
Interaction Effect: Learning Parameter vs. Recombination
Average ranks by learning parameter and recombination:

           arithmetic  discrete  intermediate  none
2^-1/√n         49.96     66.60         79.90  40.82
2^0/√n          57.01     53.97         83.87  44.49
2^1/√n          55.65     65.43         82.33  52.42
2^2/√n          48.70     66.57         80.38  50.98
2^3/√n          55.25     73.53         86.90  51.54
Comparison with (50 + 250) SBX on bbob-biobj 2016

[Figure: ECDF plots (proportion of function+target pairs vs. log10 of #f-evals/dimension) of SBX and ES on f1–f55, 5 instances, in 2-D, 5-D, 10-D, and 20-D]
Comparison with (50 + 250) SBX on bbob-biobj 2016

[Figure: ECDF plots of ES and SBX, 5 instances, on f11 (sep. Ellipsoid/sep. Ellipsoid, 5-D) and f18 (sep. Ellipsoid/Schwefel, 3-D)]

- SBX is better/competitive on separable problems
Discussion
- Self-adaptation of the step size works in both directions (increasing/decreasing)
- Best configuration for a budget of 10^4 · n:
  - No recombination
  - τ = 2^0/√n
  - (50 + 250)-selection
- Surprisingly similar to the single-objective case
- Only arithmetic and no recombination seem to be worth investigating further
Application to bbob-biobj 2017
Modifications to previous experiments:
- Initialization in [0.475, 0.525]^n (normalized), corresponding to [−5, 5]^n in the original problem space
- Budget of 10^5 · n function evaluations
- Comparison to the (µ + 1)-SMS-EMOA from bbob-biobj 2016:
  - DE variation
  - SBX/PM variation
Some Results 5-D

[Figure: ECDF plots of SMS-DE, SMS-PM, SMS-ES, and best 2016, 5-D, 10 instances, 58 targets in 1..-1.0e-4, for the function groups separable-separable (f1, f2, f11), separable-moderate (f3, f4, f12, f13), multimodal-multimodal (f46, f47, f50), and multimodal-weakstructure (f48, f49, f51, f52)]
All 55 Functions

[Figure: ECDF plots of SMS-PM, SMS-DE, SMS-ES, and best 2016 on f1–f55, 10 instances, 58 targets in 1..-1.0e-4, in 2-D, 5-D, 10-D, and 20-D]
Conclusions and Outlook

Conclusions:
- Self-adaptive variation better than SBX in all tested dimensions, also on multimodal problems
- But not better than DE on multimodal problems
- Not a good anytime algorithm
- Restarts?

Outlook:
- Separate step size for each decision variable?
- Exploit the knowledge that dominated solutions need higher mutation strength?
- More sophisticated recombination variants?
- Does variation interact with backward/forward greedy selection?