AALTO UNIVERSITY

School of Science and Technology

Faculty of Information and Natural Sciences

Department of Mathematics and Systems Analysis

Mat-2.4108 Independent research projects in applied mathematics

Comparison of Multiobjective Simulated Annealing and

Multiobjective Evolutionary Algorithms in Presence of Noise

Ossi Koivistoinen

May 24, 2010


Contents

1 Introduction
  1.1 Multiobjective optimization
  1.2 Simulated annealing
  1.3 This study

2 Multiobjective optimization algorithms for noisy objective functions
  2.1 MOEA-RF
    2.1.1 Basic MOEA
    2.1.2 Optimization
    2.1.3 Noise handling
  2.2 PMOSA
    2.2.1 Evaluating candidate solutions
    2.2.2 Maintaining archive
    2.2.3 Generating new candidate solutions
    2.2.4 Parameters

3 Test setup
  3.1 Noise
  3.2 Benchmark problems
  3.3 Performance metrics
  3.4 Experiments

4 Results
  4.1 Parameters for PMOSA
  4.2 Speed of convergence
  4.3 Quality of solutions

5 Summary

Appendices

A Results of all experiments

B Implementation of the benchmark problems
  B.1 FON.m
  B.2 KUR.m
  B.3 DTLZ2B.m

C Implementation of MOEA-RF algorithm
  C.1 moearf.m
  C.2 crossover.m
  C.3 mutate.m
  C.4 tournament_selection.m
  C.5 diversity_operator.m
  C.6 find_pareto_front_possibilistic.m
  C.7 random_bit_matrix.m

D Implementation of PMOSA algorithm
  D.1 sa2ba.m
  D.2 gen.m
  D.3 pdom.m
  D.4 laprnd.m

E Implementation of metrics
  E.1 stats.m


1 Introduction

Multiobjective optimization in the presence of noise is a topic of interest in real-world optimization problems and simulations, and multiobjective evolutionary algorithms (MOEAs) are an established approach to solving multiobjective optimization problems [1]. Multiobjective simulated annealing (MOSA) algorithms [2, 3] are a less established family of algorithms for multiobjective optimization that show promising results compared to MOEAs [4, 5]. In this study the performance of a new probabilistic MOSA (PMOSA) algorithm [6] is compared to a state-of-the-art MOEA [4, 7] in a noisy environment.

1.1 Multiobjective optimization

In multiobjective optimization, instead of a single objective function, the target is to minimize m objective functions f_1(x), …, f_m(x) simultaneously for the same set of decision variables x. A solution x is dominated if there is some other solution x′ such that f_i(x′) ≤ f_i(x) for all i ∈ {1, …, m} and f_i(x′) < f_i(x) for some i. In this case x′ dominates x, denoted x′ ≺ x. The optimal solution to the optimization problem consists of the set of non-dominated solutions, known as the Pareto set. [1]
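Written out for minimization, the dominance test follows directly from this definition. The sketch below is illustrative only (fx and fy are assumed to be 1×m vectors of objective function values) and is not part of the implementations in the appendices:

function d = dominates(fx, fy)
% True if fx Pareto-dominates fy: no component of fx is worse than the
% corresponding component of fy and at least one is strictly better.
d = all(fx <= fy) && any(fx < fy);
end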

1.2 Simulated annealing

Simulated annealing (SA) is a compact and robust metaheuristic inspired by the physical concept of annealing in metallurgy. An SA algorithm operates on a solution candidate that is repeatedly modified in order to find the optimal solution. At first the variations in the solution candidate are large, but the movement gradually decreases during the process. This is analogous to the movement of an atom in a cooling metal alloy. The magnitude of the changes in the solution candidate is controlled by the temperature T [3].

With a correct annealing schedule the simulated annealing algorithm has the ability to escape local optima and to converge to the global optimum [8].
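This property stems from the Metropolis acceptance rule: a worse candidate is accepted with a probability that decreases with the size of the deterioration and with the temperature. A minimal single-objective sketch of one SA step, for illustration only (f is assumed to be a function handle to the objective):

function x = sa_step(x, x_new, f, T)
% Always accept an improvement; accept a worse candidate with
% probability exp(-(increase in objective)/T).
if f(x_new) < f(x) || rand < exp(-(f(x_new) - f(x))/T)
    x = x_new;
end
end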

Recent successful applications of simulated annealing include, for example, clustering gene expression data in the field of bioinformatics [9], performing atmospheric correction on hyperspectral images [10], and optimization tasks in time-critical online applications [11].

1.3 This study

Simulated annealing can match the performance of evolutionary algorithms in deterministic multiobjective optimization [5, 6]. In this study a version of the established multiobjective evolutionary algorithm MOEA-RF [7] is implemented and its performance compared to a proposed probabilistic multiobjective simulated annealing (PMOSA) algorithm. In previous studies MOEA-RF has been identified as superior to other MOEAs, and it can efficiently solve multiobjective optimization problems [7]; therefore it is a good candidate for comparison. The comparison involves three benchmark problems with normally distributed noise. Four performance metrics are used to compare the algorithms.


2 Multiobjective optimization algorithms for noisy objective functions

2.1 MOEA-RF

MOEA-RF is a basic multiobjective evolutionary algorithm (MOEA) augmented with improvements and optimizations that improve its performance in a noisy environment. In this study a slightly simplified MOEA-RF algorithm is implemented with MATLAB based on the description in the article by C. K. Goh and K. C. Tan [7].

The MOEA-RF algorithm is presented in table 1 and its parameters in table 2. In some instances this algorithm is referred to simply as "EA".

2.1.1 Basic MOEA

The basic MOEA consists of the steps normally found in a genetic algorithm: initialization, evaluation, selection, crossover, and mutation. A genetic algorithm maintains an evolving population of solution candidates [12]. The difference between a MOEA and a single-objective evolutionary algorithm is that there is more than one objective function, and the mutual performance of solution candidates is described by their Pareto ranking. The Pareto ranking is defined for each solution candidate s* ∈ S as the number of solutions s ∈ S \ {s*} that are dominated by it.

The population consists of solution candidates that have a genotype, which is their internal representation for the evolutionary algorithm. In this implementation, the genotype of a solution candidate consists of n positive integers g = [g_1, …, g_n], each represented as a binary integer of B+1 bits

g_i = g_B g_{B-1} … g_1 g_0,  g_b ∈ {0, 1}   (1)

where

g_i = ∑_{b=0}^{B} g_b 2^b   (2)

The phenotype of a solution candidate is a vector of n decision variables in the range specific to the optimization problem.

x = [x_1, …, x_n],  x_i ∈ [x_i^min, x_i^max],  ∀i ∈ {1, …, n}   (3)

There is a function f : N_+^n → R^n and its inverse f^{-1} : R^n → N_+^n to convert the genotype to the phenotype and vice versa. A linear translation between genotypes and phenotypes is used:

x_i(g_i) = g_i / (2^{B+1} − 1) · (x_i^max − x_i^min) + x_i^min   (4)

g_i(x_i) = (x_i − x_i^min) / (x_i^max − x_i^min) · (2^{B+1} − 1)   (5)
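With B + 1 = 16 bits, as in the implementation of appendix C.1, the translation reduces to two lines mirroring the helper functions p2g and g2p of appendix C.1 (g, x_min, and x_max are illustrative variable names):

B = 15;                                             % bit indices 0..B
g_max = 2^(B+1) - 1;                                % largest genotype value
x = double(g) / g_max * (x_max - x_min) + x_min;    % eq. 4, genotype to phenotype
g = uint32((x - x_min) / (x_max - x_min) * g_max);  % eq. 5, phenotype to genotype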

The genotypes of the population are collected into a population matrix

P_g = [ g_{1,1} … g_{1,n}
           ⋮    ⋱    ⋮
        g_{m,1} … g_{m,n} ]   (6)


Initialization is performed by selecting the population P from a uniform probability distribution. An empty archive A is created.

Evaluation is done by applying the objective functions of the benchmark problem, f_obj : R^n → R^m, to produce the objective function values from the phenotypes of the decision variables.

Selection is done using tournament selection [12]. The population is distributed into random pairs such that each solution candidate participates in exactly two tournaments. The solution candidate that has the better objective function value in the tournament pair is preserved in the population.

The uniform bitwise crossover operator (x, y) ↦ (x′, y′) takes the binary representations (eq. 1) of two integers

x = x_B … x_0   (7)

y = y_B … y_0   (8)

For each bit b ∈ [0, B] a continuous random p_b ∈ [0, 1] is chosen:

p_b < 0.5 :  x′_b = x_b  and  y′_b = y_b
p_b ≥ 0.5 :  x′_b = y_b  and  y′_b = x_b   (9)

This gives the two integers after the crossover operation:

x′ = x′_B … x′_0   (10)

y′ = y′_B … y′_0   (11)

To perform the crossover, random pairs are selected from the population so that each solution candidate participates in one crossover. The crossover is performed on each pair with crossover probability p_c.
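The implementation used in this study is crossover.m (appendix C.2); the following single-pair sketch of eqs. 7-11 on uint32 genotype elements is illustrative:

function [xc, yc] = uniform_crossover(x, y, B)
% Swap each of the B+1 bits of x and y independently with
% probability 0.5 (eq. 9).
mask = uint32(0);
for b = 1:B+1
    if rand >= 0.5
        mask = bitset(mask, b);         % mark this bit for swapping
    end
end
swapped = bitand(bitxor(x, y), mask);   % differing bits selected for swap
xc = bitxor(x, swapped);
yc = bitxor(y, swapped);
end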

Mutation is used to introduce new genes into the population. The basic mutation operation independently flips each bit of each genotype element of each solution candidate with mutation probability p_m.
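Correspondingly, an illustrative single-genotype sketch of the bitwise mutation (the implementation used in this study is mutate.m, appendix C.3):

function g = bitflip_mutate(g, B, pm)
% Flip each of the B+1 bits of genotype g independently with
% probability pm.
for b = 1:B+1
    if rand < pm
        g = bitxor(g, bitset(uint32(0), b));  % flip bit b
    end
end
end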

2.1.2 Optimization

Two improvements used in contemporary MOEAs, elitism and a diversity operator, are first introduced to the basic MOEA. Elitism is achieved by maintaining an archive A of the best found solutions and combining members of the archive with the population on every generation. The archive has a maximum size, and when it is reached the most crowded archive members are removed using a recurrent truncation process based on niche count. The radius proposed by [7] for the niche count (1/size_population in the normalized objective space) was found to be the best alternative. The purpose of elitism is to keep the population from drifting too far from the Pareto front, and the purpose of the recurrent truncation process is to maintain maximum diversity in the archive.


2.1.3 Noise handling

In addition to the basic improvements, three noise handling techniques are included in the algorithm. These are Experiential Learning Directed Perturbation (ELDP), the Gene Adaptation Selection Strategy (GASS), and the necessity (N-) archiving model. They are well explained by Goh and Tan in [7]. In this study the same parameters are used for the noise handling techniques as were used by Goh and Tan. The necessity possibilistic (NP-) archiving model, which was also described in the article, was not implemented in this study.

Experiential Learning Directed Perturbation (ELDP) monitors the variations in the components of the decision variables in subsequent generations. If the variation has remained below a certain threshold level in the last two generations, the same change in the decision variable phenotype space is applied to the element instead of a random mutation. This should accelerate the convergence of the algorithm.

Gene Adaptation Selection Strategy (GASS) is an alternative mutation algorithm that manipulates the population in the phenotype space in order to make the population show desired divergence or convergence characteristics.

The minimum s_i^min, maximum s_i^max, and mean s_i^mean values for each dimension i ∈ {1, …, n} in the phenotype space are calculated for the solution candidates in the archive, and from these, intervals are constructed for the convergence model

A_i = s_i^min − w · s_i^mean   (12)

B_i = s_i^min + w · s_i^mean   (13)

and for the divergence model

a_i = s_i^mean − w · s_i^mean   (14)

b_i = s_i^mean + w · s_i^mean   (15)

After the mutation and crossover operations, the GASS divergence or convergence model is applied to the population. Each phenotype component of each solution candidate is re-selected with probability 1/n from the uniform probability distribution U(A_i, B_i) for the convergence model and from U(a_i, b_i) for the divergence model. The convergence model is triggered after 150 generations if 60% of the archive capacity is reached, and the divergence model if less than 60% of the archive is filled.
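A minimal sketch of this re-selection step, assuming the interval bounds lo and hi (A and B of eqs. 12-13, or a and b of eqs. 14-15) have already been computed from the archive as 1×n vectors (unifrnd is from the MATLAB Statistics Toolbox):

function P = gass_reselect(P, lo, hi)
% Re-select each phenotype component with probability 1/n from the
% uniform distribution on [lo_i, hi_i].
[m, n] = size(P);
LO = repmat(lo, m, 1);
HI = repmat(hi, m, 1);
pick = rand(m, n) < 1/n;                % components to re-select
P(pick) = unifrnd(LO(pick), HI(pick));
end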

The necessity (N-) archiving model adds a fuzzy threshold for determining the members of the Pareto set to compensate for the noise in the objective function values. Instead of Pareto dominance, N-dominance is used to identify the dominated solutions. The definition of N-dominance is given in [7]. In essence, a fuzzy margin depending on the noise level is added between the found Pareto front and the objective function values, which must be exceeded for a new solution to N-dominate the old solutions.


Table 1 — MOEA-RF algorithm used in this study.

Step                   Description
1. Initialize          Select population P from a uniform probability
                       distribution and create an empty archive A.
2. Evaluate            Calculate the objective function values f_i(P).
3. Selection           Perform tournament selection on the population.
4. Use archive         Replace n random members of P by random members
                       of A, if there are enough members in A.
5. Crossover           Apply the uniform crossover operator on the
                       population: P_new = crossover(P_old).
6. Update archive      1. Combine the current archive with the
                       population: A_tmp = A_old ∪ P.
                       2. Apply the N-archiving model to A_tmp to find
                       the possibly non-dominated solutions.
7. Diversity operator  Limit the size of the archive to A_max by
                       applying the recursive truncation operator on it.
8. Mutate              1. Apply the ELDP mutation algorithm to P.
                       2. Apply the GASS algorithm on P.
9. Repeat              Repeat from step 2 until g = g_max.
10. End                Return archive A.

Table 2 — Parameters for the MOEA-RF algorithm

Parameter              Value
Number of samples      L = 1
Number of generations  g_max = 500
Population size        S = 100
Archive size           A = 100
Mutation               bitwise mutation with probability p_m = 1/15,
                       plus ELDP and GASS
Selection              tournament selection + random swap of 10
                       solutions from archive to population
Crossover              uniform crossover with probability p_c = 0.5


2.2 PMOSA

The probabilistic multiobjective simulated annealing (PMOSA) algorithm that is evaluated in this study is described and implemented by Mattila et al. in [6]. It is based on simulated annealing [2] and the archiving MOSA (AMOSA) algorithm [4]. Unlike previous MOSAs, this algorithm uses probabilistic ranking to select the non-dominated solutions. Stochastic ranking is also used by MOEA-RF in the form of the N- and NP-archiving models, but unlike the computationally light ranking of MOEA-RF, PMOSA uses the probability of dominance for ranking.

The PMOSA algorithm is presented in table 3 and its parameters in table 4. In some instances this algorithm is referred to simply as "SA".

2.2.1 Evaluating candidate solutions

The PMOSA algorithm processes one solution candidate at a time, called the current solution x. On each iteration round a new solution x′ is generated in the neighborhood of x (see section 2.2.3). The objective functions f_i are then evaluated r times for both solutions. A single scalar performance measure F(x) is constructed with respect to the current archive of accepted solutions S, based on the probability of dominance P(x ≺ y), where x and y are two solutions.

The estimates for the mean f̄_i(x) and the variance s_i²(x) are obtained from the r evaluations of f_i(x). It is assumed that the noise in f_i(x) is normally distributed; thus the probability P(f_i(x) < f_i(y)) is given by evaluating the cumulative distribution function of the normal distribution N(f̄_i(x) − f̄_i(y), s_i²(x) + s_i²(y)) at the origin. The probability of dominance is then given by

P(x ≺ y) = ∏_{i=1}^{m} P(f_i(x) < f_i(y))   (16)

Using eq. 16, the performance measures for the current solution x and the new solution x′ are

F(x) = ∑_{y ∈ (S \ {x}) ∪ {x′}} P(y ≺ x)   (17)

F(x′) = ∑_{y ∈ S ∪ {x}} P(y ≺ x′)   (18)

The probability of accepting x′ as the current solution depends on the temperature T and is calculated as

p = exp(−(F(x′) − F(x))/T)   (19)

On each iteration, if F(x′) < F(x), the new solution is accepted as the current solution, x = x′. Otherwise the new solution is accepted with probability p.
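A minimal sketch of eq. 16, assuming mu_x, var_x and mu_y, var_y are the 1×m sample means and variances obtained from the r evaluations (normcdf is the normal cumulative distribution function of the MATLAB Statistics Toolbox):

function p = prob_dominance(mu_x, var_x, mu_y, var_y)
% P(x dominates y): product over the objectives of the CDF of
% N(mu_x - mu_y, var_x + var_y) evaluated at the origin (eq. 16).
p = prod(normcdf(0, mu_x - mu_y, sqrt(var_x + var_y)));
end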

2.2.2 Maintaining archive

A fixed-size archive S of the N solutions with the lowest probability of dominance is maintained during the iteration, and it is returned as the result in the end.

In the beginning of the algorithm the archive is initialized to contain the first randomly generated solution candidate. As long as the maximum size N is not reached, each newly accepted current solution is added to the archive. The performance measure for the solutions in the archive is

F_S(x) = ∑_{y ∈ S} P(y ≺ x)   (20)

After the maximum archive size has been reached, the archive performance measures are calculated on each iteration round for the set S ∪ {x}, and the element with the highest value of eq. 20 is dropped from the archive. This operation is optimized by storing the mutual probabilities of dominance between the members of the archive.
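A sketch of the truncation step, under the assumption that the pairwise probabilities are cached in a matrix D with D(k,j) = P(s_k dominates s_j) for the members of S ∪ {x}, stored row-wise in the matrix S:

FS = sum(D, 1);          % eq. 20 for every member of the extended archive
[~, worst] = max(FS);    % the member most likely dominated by the others
S(worst, :) = [];        % drop it from the archive
D(worst, :) = [];        % keep the cached matrix consistent
D(:, worst) = [];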

Updating the archive is the computationally most expensive part of the algorithm, and it leads to a noticeable slowdown when the archive size is large. This is justified by PMOSA's background in multiobjective simulation optimization (MOSO), in which the simulation run is computationally much more expensive than the optimization algorithm itself.

2.2.3 Generating new candidate solutions

In a MOSA algorithm one solution candidate is processed at a time, and it is perturbed in the decision variable space before every evaluation step by

x_new = x + Δx   (21)

The selection of the procedure used to generate the new candidate solutions can have a significant effect on the performance of the algorithm [6].

Three different perturbation methods were tested. In the first method, all components of the decision variable are modified by drawing the perturbations from a uniform probability distribution

Δx_i ∼ Uniform(−δ, δ),  ∀i   (22)

The second method also draws the perturbation from a uniform probability distribution, but only one randomly selected decision variable j ∈ {1, …, n} is modified at a time:

Δx_i ∼ Uniform(−δ, δ),  i = j
Δx_i = 0,  i ≠ j   (23)

The third method is otherwise identical to the second, but the perturbation is drawn from the Laplace distribution

Δx_j ∼ Laplace(δ)   (24)

This creates perturbations with mean 0 and variance 2δ².
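The Laplace draw of eq. 24 can be generated with inverse-transform sampling (cf. laprnd.m in appendix D.4); an illustrative sketch:

function dx = perturb_laplace(n, delta)
% Perturb one randomly chosen component of an n-dimensional decision
% variable by a draw from the Laplace distribution with scale delta.
dx = zeros(1, n);
j = randi(n);
u = rand - 0.5;                               % u ~ Uniform(-0.5, 0.5)
dx(j) = -delta * sign(u) * log(1 - 2*abs(u)); % inverse-transform sample
end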

2.2.4 Parameters

The parameters used in the PMOSA algorithm can have a significant effect on its performance. Therefore a set of experiments to find the optimal parameters for PMOSA was conducted before running the actual experiment. The main parameters to select are the temperature T and the method for selecting the neighboring solution of the current solution x. Decreasing temperatures are commonly used with SA algorithms, but according to [6] a decreasing temperature has not been shown to improve the PMOSA algorithm much; therefore the experiments were limited to constant temperatures.

Table 3 — PMOSA algorithm used in this study.

Step                        Description
1. Initialize               Generate a random initial current solution
                            candidate.
2. Update current solution  Generate a new solution candidate from the
                            neighborhood of the current solution using
                            eq. 24.
3. Evaluate                 Evaluate the objective function on the new
                            solution candidate L times and determine the
                            performances of the new and current solution
                            candidates.
4. Selection                Select either the current solution candidate
                            or the new solution candidate as the current
                            solution as described in sec. 2.2.1. If the
                            current solution was not updated, continue
                            from step 2.
5. Update archive           If the archive is not full, add the current
                            solution to the archive. Otherwise compute
                            the pairwise probabilities of dominance for
                            the current solution candidate and each
                            solution in the archive. Select the member
                            of the archive that has the largest sum of
                            probabilities of being dominated by the
                            other solutions. If that sum is larger than
                            the corresponding sum for the current
                            solution, replace that member of the archive
                            with the current solution. (See section
                            2.2.2.)
6. Repeat                   Repeat from step 2 until K = K_max.
7. End                      Return the archive.

Table 4 — Parameters for the PMOSA algorithm

Parameter               Value
Number of samples       L = 10
Number of iterations    K = 5000
Archive size            N = 100
Perturbation algorithm  eq. 24 with δ = (x_i^max − x_i^min)/10
Temperature             T = 4


3 Test setup

3.1 Noise

To evaluate the algorithms in the presence of noise, normally distributed noise with zero mean is added to the objective function values after the noiseless evaluation. The standard deviation of the noise is given as a percentage of |f_i^max|, where |f_i^max| is the maximum of the i-th objective function value on the true Pareto front.

The algorithms see only the objective function values with noise. The noiseless values are used to calculate the performance metrics that indicate the efficiency of the algorithms.
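As an illustrative sketch, with F_true an n×m matrix of noiseless objective function values, f_max the 1×m vector of the |f_i^max| values, and noise_level the noise percentage expressed as a fraction (e.g. 0.05 for 5%):

sigma = noise_level .* abs(f_max);          % per-objective standard deviation
F_noisy = F_true + normrnd(0, repmat(sigma, size(F_true, 1), 1));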

3.2 Benchmark problems

The performance of multiobjective optimization algorithms is measured by applying them to different benchmark problems. The literature offers a wide array of problems with varying properties. In this study three benchmark problems are used. FON and KUR are commonly used problems in the field of multiobjective optimization, and DTLZ2B is a problem used by the authors of the PMOSA algorithm that is examined in this study.

DTLZ2B is a version of the DTLZ family of problems, whose Pareto front is the unit sphere; DTLZ2B is the version with a two-dimensional objective space. The non-convex Pareto front is the arc of the unit circle in the region f1, f2 ∈ [0, 1]. The main points of interest are the even distribution of the found solutions along the Pareto front and the algorithm's ability to find the extremum points (f1, f2) = (0, 1) and (f1, f2) = (1, 0) in order to avoid false Pareto points along the axes.

DTLZ2B: Minimize (f1, f2)   (25)

g(x_2, …, x_5) = ∑_{i=2}^{5} (x_i − 1/2)²   (26)

f1(x_1, g) = cos(x_1 π/2)(1 + g)   (27)

f2(x_1, g) = sin(x_1 π/2)(1 + g)   (28)

0 ≤ x_i < 1,  ∀i = 1, …, 5   (29)

FON is a problem by C. M. Fonseca and P. J. Fleming from 1995. FON is characterized by a non-convex Pareto front and non-linear objective functions whose values concentrate around the worst possible objective function value (f1, f2) = (1, 1). Small changes in any of the components of the eight-dimensional decision variable easily lead the solution far from the Pareto front, which makes it challenging for MOEAs to maintain a stable evolving population. FON is generally difficult to solve, especially in noisy environments. FON is a versatile test problem whose Pareto front shape could be continuously modified by exponentiating the objective functions [1].

FON: Minimize (f1, f2)   (30)

f1(x_1, …, x_8) = 1 − exp[−∑_{i=1}^{8} (x_i − 1/√8)²]   (31)

f2(x_1, …, x_8) = 1 − exp[−∑_{i=1}^{8} (x_i + 1/√8)²]   (32)

−2 ≤ x_i < 2,  ∀i = 1, …, 8   (33)

KUR is a problem used e.g. in [7]. It has a non-convex Pareto front that is disconnected both in the objective space and in the decision variable space.

KUR: Minimize (f1, f2)   (34)

f1(x_1, x_2, x_3) = ∑_{i=1}^{2} [−10 exp(−0.2 √(x_i² + x_{i+1}²))]   (35)

f2(x_1, x_2, x_3) = ∑_{i=1}^{3} [|x_i|^0.8 + 5 sin(x_i³)]   (36)

−5 ≤ x_i < 5,  ∀i = 1, 2, 3   (37)

In this study the KUR problem is normalized by scaling the objective functions so that the Pareto front fits approximately within the unit square. This way the same parameters for the optimization algorithms can be used as with the other problems.

f1 ← f1 / 10   (38)

f2 ← f2 / 10   (39)

3.3 Performance metrics

The performance of the multiobjective optimization algorithms is measured by four different statistics in this study. The same metrics that were used by Goh and Tan are used; they are presented in [7]. All statistics are calculated in the objective space, where the true Pareto front of the test function, PFtrue, is compared to the Pareto front found by the optimization algorithm, PFknown. The true Pareto fronts of the benchmark problems were solved using MOEA-RF without noise, with a large population, and with a large number of generations.

Generational distance (GD) is a proximity indicator. It is the average Euclidean distance between the members of PFtrue and PFknown.
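One common formulation matching this description is sketched below (PF_found and PF_true hold one objective vector per row; the metrics actually used in this study are implemented in stats.m, appendix E.1):

function gd = generational_distance(PF_found, PF_true)
% Mean Euclidean distance from each found solution to the nearest
% member of the true Pareto front.
n = size(PF_found, 1);
d = zeros(n, 1);
for k = 1:n
    diffs = PF_true - repmat(PF_found(k, :), size(PF_true, 1), 1);
    d(k) = min(sqrt(sum(diffs.^2, 2)));
end
gd = mean(d);
end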

Maximum spread (MS) is a diversity indicator. It describes how far away the extremum members of PFknown are from each other.

Spacing (S) is a distribution indicator. It describes how evenly the members of PFknown are distributed in the objective space in the sense of Euclidean distance.


Hypervolume ratio (HVR) is a general quality indicator. It describes how well the members of PFknown cover PFtrue. Hypervolume is a common performance metric in comparative studies of MOEAs [13], and its improvement is also used as an optimization heuristic in some MOEAs.

3.4 Experiments

50000 evaluations of the objective functions were used in all experiments. With the evolutionary algorithm this translates to 500 generations with a population size of 100 and one sample of the objective function value per decision variable vector. With the simulated annealing algorithm each evaluation is repeated 10 times; thus 5000 candidate solutions are examined. This is because PMOSA has to evaluate each decision variable more than once to calculate the sample variance of the objective function value. MOEA-RF, on the other hand, considers repetition with the same decision variable unnecessary and uses the available number of evaluations to process more generations. [7]

The three benchmark problems (DTLZ2B, FON, and KUR) are solved by the MOEA-RF and PMOSA algorithms with noise levels of 0.1%, 1%, 2%, 5%, 10%, and 20%. The level 0.1% was used instead of 0% because the PMOSA algorithm requires some variance in the data. For each combination the algorithm is repeated 30 times.

The algorithms are implemented and the tests are conducted with MATLAB. The multiobjective optimization functions are presented in appendices B–D. The functions were instrumented to collect the performance metrics presented in appendix E.


4 Results

4.1 Parameters for PMOSA

To find a good value for the temperature, several different constant temperatures were tested. The experiments were conducted on the FON benchmark problem, which was found to be the most challenging of the problems for both MOEA-RF and PMOSA. The results are presented in figure 1, and based on them the temperature T = 4 was chosen for the actual experiment. The selection of the temperature was a compromise between the best performance at low levels of noise and acceptable performance at high levels of noise. Lower temperatures provided better convergence with low levels of noise, but higher temperatures were able to find a decent solution with higher probability when the noise level was high. This suggests that a decreasing temperature function could provide the best overall performance in solving FON, but this was not tested in this study.

Different methods for generating new solutions with different values of δ were also tested. The results are presented in figure 2. Some of the tested combinations were worse than others, but there were many good combinations and the selection of the best alternative was not clear. The perturbation method that draws the perturbation for one decision variable at a time from the Laplace distribution with δ = (x_i^max − x_i^min)/10 was selected because it produced the highest mean hypervolume ratio, but the difference was not statistically significant compared to the other methods.

A third parameter that could be modified is the number of samples used to evaluate the objective function. In real-world simulation problems the evaluation of the objective function is the computationally most expensive part of the optimization; therefore the number of evaluations is the limiting factor in the running time of the algorithm. We assume that the samples are IID and the noise is normally distributed, so when the number of samples grows large, the sample mean approaches the noiseless objective function value and the variance of the sample mean approaches zero. This means that with enough samples it is possible to solve a problem with any level of noise. This leads to a trade-off between how many times the objective function is sampled and how many different solution candidates can be processed during the algorithm.

The same consideration is valid also for MOEA-RF, and the number of samples, L = 10 in PMOSA vs. L = 1 in MOEA-RF, is actually one of the main differences between the algorithms. It can be observed from the examples in appendix A that the objective function values are much closer to PFtrue in the PMOSA executions. The ratio between the magnitudes of the visible noise is given by the standard error of the mean and is √10. This is probably the main reason why PMOSA is able to find better solutions than MOEA-RF to problems with high levels of noise.

Experiments with other sample counts in both MOEA-RF and PMOSA would be interesting to conduct, but they were not performed in this study.

4.2 Speed of convergence

50000 evaluations were enough for the algorithms to demonstrate their performance with the given parameters and benchmark problems. The fastest convergence in the metrics was seen with the DTLZ2B problem and the slowest with FON, for both MOEA-RF and PMOSA (figures 13–18 in appendix A).

MOEA-RF was able to find approximate solutions to the problems in fewer evaluations than PMOSA, but the gap in the metrics narrows with more evaluations. It should also be noted that unlike MOEA-RF, the PMOSA algorithm always returns the requested number of solutions, which may be more or less dominated by each other. Therefore the number of found solutions is not as valid a metric in comparisons between MOEA-RF and PMOSA as it is in comparing different MOEAs.

From the figures it can be concluded that with low levels of noise MOEA-RF converges to the correct solution slightly faster than PMOSA. In FON with high levels of noise, the metrics of PMOSA still seem to be growing at 50000 evaluations, so increasing the number of evaluations would probably result in better solutions. Put another way, if the number of evaluations is limited, MOEA-RF is able to produce results of similar quality faster.


Figure 1 — HVR metric of the solutions found by PMOSA to FON with different temperatures and different levels of noise. Unlike the other experiments, here the archive size is N = 50.

Figure 2 — HVR metric of the solutions found by PMOSA to FON with different algorithms for generating the new solution from the current solution. "All" = all components of the decision variable were modified at the same time; "One" = one component of the decision variable was modified at a time; "Uniform" = the values were drawn from the uniform distribution; "Laplace" = the values were drawn from the Laplace distribution; the fraction is the size of δ in eq. 23 relative to x_i^max − x_i^min.

Figure 3 — Mean values of hypervolume ratio (HVR), generational distance (GD), and the number of solutions found by the MOEA-RF (EA) and PMOSA (SA) optimization algorithms on the test problems FON, DTLZ2B, and KUR with different levels of noise, with 30 repetitions for each combination.

4.3 Quality of solutions

The performance metrics show some differences in the response to noise between MOEA-RF and PMOSA (figure 3). The hypervolume ratio (figure 4) examined together with the spacing metric (figure 5) proved to be a good way to describe the overall quality of the final solution: HVR describes how accurately the Pareto front was found, and S describes how evenly the solutions are distributed along it.

As noted also by Goh and Tan, the results of MOEA-RF were slightly better with low levels of noise than in a test setup with no noise [7]. In the PMOSA algorithm this phenomenon was even stronger. With noise levels below 2% both algorithms were able to find the Pareto fronts of the benchmark problems equally well, with a small exception in the PMOSA and FON combination. The spacing of the solutions along the Pareto front was consistently better with PMOSA than with MOEA-RF. Especially in DTLZ2B the solutions found by MOEA-RF were concentrated in the middle of the Pareto front, as can be seen in figures 8 and 11 in appendix A.

KUR and DTLZ2B were both easier to solve than FON. With higher levels of noise both algorithms usually found some solutions but failed to find the whole Pareto front, as demonstrated by the examples of the PMOSA algorithm solving KUR with different levels of noise in figure 12. On the KUR problem the PMOSA algorithm outperformed MOEA-RF at all noise levels. Especially with 10% and 20% noise PMOSA was still able to find solutions to KUR and DTLZ2B with a hypervolume ratio near 1, while MOEA-RF was not. It can be concluded that with this selection of parameters PMOSA can solve problems with higher levels of noise than MOEA-RF.

FON proved to be the most difficult of the benchmark problems for both algorithms. As can be seen in figure 10, the PMOSA algorithm sometimes had difficulties in escaping the initial solution altogether. However, on average the results obtained with PMOSA on FON were better than the results obtained with MOEA-RF at high levels of noise. With low levels of noise MOEA-RF was more accurate. With a lower temperature PMOSA would have shown performance more similar to MOEA-RF, but a slightly higher temperature was chosen to show PMOSA's ability to handle high levels of noise.

Overall, PMOSA produced better-quality solutions than MOEA-RF at all noise levels in the KUR and DTLZ2B benchmark problems and at high noise levels in FON. MOEA-RF was better on the FON problem in less noisy environments.


Figure 4 — Hypervolume ratio metric of the solutions found by the MOEA-RF (EA) and PMOSA (SA) optimization algorithms on the test problems FON, DTLZ2B, and KUR with different levels of noise, with 30 repetitions for each combination. In this box plot the box and the line in its middle mark the upper and lower quartiles and the median of the HVR values. The lines extending from the box cover the rest of the data range except for distant outliers, which are drawn as crosses.

Figure 5 — Spacing metric of the solutions found by the MOEA-RF (EA) and PMOSA (SA) optimization algorithms on the test problems FON, DTLZ2B, and KUR with different levels of noise, with 30 repetitions for each combination. Boxes are drawn as in figure 4.

Figure 6 — Six examples of PMOSA solutions to FON with 5% noise. Found solutions are drawn as circles, the corresponding noiseless objective function values as crosses, and PFtrue as a continuous line.

5 Summary

A reference algorithm, MOEA-RF, was implemented in this study in order to compare it with the proposed PMOSA algorithm in the presence of noise. The performance of the implemented MOEA-RF was similar but slightly inferior to the results provided by Goh and Tan in [7]. However, the MOEA-RF algorithm implemented here produced better results than the other MOEAs presented by Goh and Tan. Therefore we can conclude that the implemented MOEA-RF is a representative multiobjective evolutionary algorithm that can be used as a reference point for PMOSA.

It was observed that selecting the optimal parameters for both MOEA and MOSA requires some work and experimentation, and even small changes in algorithm parameters may have a clearly observable effect on the results. The algorithms were not tuned separately for the different benchmark problems; the same parameters were used for solving all the problems.

In most experiments conducted in this study the PMOSA algorithm performed as well as or better than the MOEA-RF algorithm. The only exception was the FON benchmark problem with low levels of noise. This was caused by the choice of the temperature parameter, which was tuned towards noisier problems.

PMOSA by Mattila et al. [6] proved to be a very promising algorithm for solving different kinds of multiobjective optimization problems, and further work on the algorithm is well justified. Introducing an adaptive sample count and a non-constant annealing schedule would be likely to improve the PMOSA algorithm.


References

[1] C. M. Fonseca and P. J. Fleming, "An overview of evolutionary algorithms in multiobjective optimization," Evolutionary Computation, vol. 3, pp. 1-16, 1995.

[2] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.

[3] B. Suman and P. Kumar, "A survey of simulated annealing as a tool for single and multiobjective optimization," Journal of the Operational Research Society, vol. 57, pp. 1143-1160, October 2005.

[4] S. Bandyopadhyay, S. Saha, U. Maulik, and K. Deb, "A simulated annealing-based multiobjective optimization algorithm: AMOSA," IEEE Transactions on Evolutionary Computation, vol. 12, no. 3, pp. 269-283, 2008.

[5] K. I. Smith, R. M. Everson, J. E. Fieldsend, C. Murphy, and R. Misra, "Dominance-based multiobjective simulated annealing," IEEE Transactions on Evolutionary Computation, vol. 12, no. 3, pp. 323-342, 2008.

[6] V. Mattila, K. Virtanen, and R. P. Hamalainen, "Scheduling periodic maintenance of a fighter aircraft fleet using a multi-objective simulation-optimization approach," tech. rep., Helsinki University of Technology, Systems Analysis Laboratory, Jun 2009.

[7] C. K. Goh and K. C. Tan, "An investigation on noisy environments in evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 11, no. 3, pp. 354-381, 2007.

[8] M. H. Alrefaei and S. Andradottir, "A simulated annealing algorithm with constant temperature for discrete stochastic optimization," Management Science, vol. 45, no. 5, pp. 748-764, 1999.

[9] K. Bryan, P. Cunningham, and N. Bolshakova, "Application of simulated annealing to the biclustering of gene expression data," IEEE Transactions on Information Technology in Biomedicine, vol. 10, pp. 519-525, July 2006.

[10] R. Marion, R. Michel, and C. Faye, "Atmospheric correction of hyperspectral data over dark surfaces via simulated annealing," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, pp. 1566-1574, June 2006.

[11] E. Aggelogiannaki and H. Sarimveis, "A simulated annealing algorithm for prioritized multiobjective optimization - implementation in an adaptive model predictive control configuration," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, pp. 902-915, Aug. 2007.

[12] K. Deb, "An introduction to genetic algorithms," tech. rep., Indian Institute of Technology Kanpur, Kanpur, India, 1997.

[13] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.


A Results of all experiments

Figure 7 — Examples of MOEA-RF solutions to FON with different levels of noise. Found solutions are drawn as circles, the corresponding noiseless objective function values as crosses, and PFtrue as a continuous line.

Figure 8 — Examples of MOEA-RF solutions to DTLZ2B with different levels of noise. Symbols as in fig. 7.

Figure 9 — Examples of MOEA-RF solutions to KUR with different levels of noise. Symbols as in fig. 7. In reality PFtrue is disconnected, with gaps in the places where a straight horizontal line is drawn in the pictures at f2 = 0 and f2 = −0.45.

Figure 10 — Examples of PMOSA solutions to FON with different levels of noise. Symbols as in fig. 7.

Figure 11 — Examples of PMOSA solutions to DTLZ2B with different levels of noise. Symbols as in fig. 7.

Figure 12 — Examples of PMOSA solutions to KUR with different levels of noise. Symbols as in fig. 7.

Figure 13 — Development of the performance metrics over time in the MOEA-RF solution to FON with different levels of noise.

Figure 14 — Development of the performance metrics over time in the PMOSA solution to FON with different levels of noise.

Figure 15 — Development of the performance metrics over time in the MOEA-RF solution to DTLZ2B with different levels of noise.

Figure 16 — Development of the performance metrics over time in the PMOSA solution to DTLZ2B with different levels of noise.

Figure 17 — Development of the performance metrics over time in the MOEA-RF solution to KUR with different levels of noise.

Figure 18 — Development of the performance metrics over time in the PMOSA solution to KUR with different levels of noise.

B Implementation of the benchmark problems

B.1 FON.m

function f = FON(X)

% f = FON(X)

%

% The FON test problem for multiobjective optimization.

% Row X(n,:) contains one set of input parameters;

% the objective values (f1,f2) for inputs X(n,1:8) are in f(n,1:2).

% Input variable range [-2, 2]

f(:,1) = 1 - exp(-1 * sum((X-(1/sqrt(8))).^2,2));

f(:,2) = 1 - exp(-1 * sum((X+(1/sqrt(8))).^2,2));

B.2 KUR.m

function f = KUR(X)

% f = KUR(X)

%

% The KUR test problem for multiobjective optimization.

% Row X(n,:) contains one set of input parameters;

% the objective values (f1,f2) for inputs X(n,1:3) are in f(n,1:2).

% Input variable range [-5, 5]

f(:,1) = (-10 * exp(-0.2 * sqrt(X(:,1).^2 + X(:,2).^2)) - ...

10 * exp(-0.2 * sqrt(X(:,2).^2 + X(:,3).^2))) / 10;

f(:,2) = sum(abs(X).^0.8 + 5 * sin(X.^3),2) / 10;

B.3 DTLZ2B.m

function f = DTLZ2B(X)

% f = DTLZ2B(X)

%

% The DTLZ2B test problem for multiobjective optimization.

% Row X(n,:) contains one set of input parameters;

% the objective values (f1,f2) for inputs X(n,1:5) are in f(n,1:2).

% Input variable range [0, 1]

g = sum((X(:,2:end)-0.5).^2,2);

f = zeros(size(X,1),2);

f(:,1) = cos(X(:,1) .* pi/2) .* (1+g);

f(:,2) = sin(X(:,1) .* pi/2) .* (1+g);


C Implementation of MOEA-RF algorithm

Enhanced Multiobjective Evolutionary Algorithm (MOEA-RF) as defined by Goh and Tan [7].

C.1 moearf.m

function [PF_KNOWN, SOLUTIONS] = moearf(TEST_FUNC, LOWER, UPPER, NOISE)

% [PF_KNOWN, SOLUTIONS] = moearf(TEST_FUNC, LOWER, UPPER, NOISE)

%

% Inputs:

% =======

% TEST_FUNC - function reference to test function like FON

% LOWER - Lower limits of decision variables. Length of LOWER sets the

% number of dimensions in decision variables

% UPPER - Upper limits of decision variables. Size must agree with LOWER

% NOISE - standard deviation of normally distributed noise

%

% Outputs:

% ========

% PF_KNOWN - objective function values of the found Pareto front

% SOLUTIONS - solutions corresponding to the rows of PF_KNOWN

%----------------------------------

% Helper functions

dv_range = max(UPPER) - min(LOWER);

dv_below_zero = -1 * min([0 LOWER]);

gt_bits = 15;

% Convert population matrix from phenotype to genotype

function G = p2g(P)

G = uint32((P + dv_below_zero) ./ dv_range * (2^(gt_bits+1)-1));

end

% Convert population matrix from genotype to phenotype

function P = g2p(G)

P = double(G) ./ (2^(gt_bits+1)-1) .* dv_range - dv_below_zero;

end

% Fix limits. Some operations may move phenotypes outside the genotype range

function P = enforce_limits(P)

too_low = find(P < repmat(LOWER,size(P,1),1));

too_high = find(P > repmat(UPPER,size(P,1),1));

low = repmat(LOWER,size(P,1),1);

high = repmat(UPPER,size(P,1),1);

P(too_low) = low(too_low);

P(too_high) = high(too_high);

end

%--------------------------------

%---------------------------------

% Configuration

generations = 500;

p_len = 100;

elite_size = 100;

elite_swap = 10;

mutation_prob = 1/15;

% Noise optimizations on/off

ELDP = 1;

GASS = 1;


POSS = 1;

%---------------------------------

%---------------------------------

% 1. Initialize

population = unifrnd(repmat(LOWER, p_len, 1), repmat(UPPER,p_len,1));

population_prev = population;

population_prev_prev = population;

pf=[];

pf_solutions=[];

%---------------------------------

%---------------------------------

% Main Loop

for N = 1:generations

% 2. Evaluate: compute noiseless objective function values for the population

% and add noise. The noiseless objective function values are only used to

% calculate statistics.

perf_without_noise = TEST_FUNC(population);

perf = perf_without_noise + normrnd(0, NOISE, size(population,1), 2);

% 2.1 Update Archive. Either use possibilistic archiving model

% with distance L or pareto ranking

pf = [pf;perf];

pf_solutions = [pf_solutions; population];

if POSS

L = NOISE;

pareto = find_pareto_front_possibilistic(pf,L);

else

pareto = find_pareto_front(pf);

end

pf = pf(pareto,:);

pf_solutions = pf_solutions(pareto,:);

% 2.2 Limit the size of archive by using diversity operator

[pf, pf_solutions] = diversity_operator(pf, pf_solutions, ...

elite_size, 1/p_len);

% 3. Select: Do tournament selection with randomized pairs.

% Don't mess up the order of the population.

ts_order = randperm(size(perf,1));

winners = tournament_selection(perf,ts_order);

perf = perf(winners,:);

population = population(winners,:);

% 3.1 Replace random members of population with best solutions from

% the archive

swap = min([elite_swap size(pf_solutions,1) size(population,1)]);

r_pop = randperm(size(population,1));

r_pop = r_pop(1:swap);

r_best = randperm(size(pf_solutions,1));

r_best = r_best(1:swap);

population(r_pop,:) = pf_solutions(r_best,:);

% 4. Crossover

c_order = randperm(size(population,1));

population = crossover(population, c_order, 0.5, @p2g, @g2p);

% 5. Mutate: Either ELDP or plain bitwise

if ELDP


% Carry some momentum element-wise from previous generation

d_min = 0.0;

d_max = 0.1;

alpha = 1;

population_prev_prev = population_prev;

population_prev = population;

delta_prev = population_prev - population_prev_prev;

special_mutate = alpha * abs(delta_prev) > d_min & ...

alpha * abs(delta_prev) < d_max;

population = mutate(population, mutation_prob, @p2g, @g2p);

population(special_mutate) = population_prev(special_mutate) + ...

alpha * delta_prev(special_mutate);

population = enforce_limits(population);

else

population = mutate(population, mutation_prob, @p2g, @g2p);

end

if GASS && N > 150

% Gene adaptation to either diverge or converge solutions if necessary

c_limit = 0.6;

d_limit = 0.6;

prob = 1/8;

w = 0.1;

lowbd = min(pf_solutions);

uppbd = max(pf_solutions);

meanbd = mean(pf_solutions);

values_to_change = find(unifrnd(0,1,size(population)) < prob);

for K = 1: size(population,2)

c_values(:,K) = unifrnd(lowbd(K) - w * abs(meanbd(K)), ...

uppbd(K) + w * abs(meanbd(K)), ...

size(population,1), 1);

d_values(:,K) = unifrnd(meanbd(K) - w * abs(meanbd(K)), ...

meanbd(K) + w * abs(meanbd(K)), ...

size(population,1), 1);

end

if size(pf,1) > c_limit * elite_size

population(values_to_change) = c_values(values_to_change);

elseif size(pf,1) < d_limit * elite_size

population(values_to_change) = d_values(values_to_change);

end

population = enforce_limits(population);

end

end

%----------------------------

PF_KNOWN = pf;

SOLUTIONS = pf_solutions;

end
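A typical invocation, assuming FON from Appendix B is on the MATLAB path (the noise standard deviation 0.01 is illustrative only):

[pf, sols] = moearf(@FON, -2*ones(1,8), 2*ones(1,8), 0.01);
% pf(n,:)   - archived (noisy) objective values of solution n
% sols(n,:) - the corresponding decision variables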

C.2 crossover.m

function new_population = crossover(old_population, order, prob, p2g, g2p)

%

% Uniform crossover for MOEA-RF

M = size(old_population,1);

cross_prob = rand(M,1) > prob;

crossover_mask = random_bit_matrix([M, size(old_population,2)], 0.5);

crossover_mask(cross_prob,:) = 0;

bit_population = p2g(old_population);

new_bit_population = zeros(size(bit_population),'uint32');


for I = 1:2:M-1

x = order([I I+1]);

new_bit_population(x(1),:) = bitand(bit_population(x(1),:), ...

crossover_mask(x(1),:)) + ...

bitand(bit_population(x(2),:), ...

bitcmp(crossover_mask(x(1),:)));

new_bit_population(x(2),:) = bitand(bit_population(x(2),:), ...

crossover_mask(x(1),:)) + ...

bitand(bit_population(x(1),:), ...

bitcmp(crossover_mask(x(1),:)));

end

new_population = g2p(new_bit_population);

end

C.3 mutate.m

function new_population = mutate(old_population, prob, p2g, g2p)

%

% Bitwise mutation for MOEA-RF

% Inputs:

% prob - Probability of mutation for each bit

% p2g - function to convert population phenotype to genotype

% g2p - function to convert population genotype to phenotype

% old_population - Population before mutation in phenotype space

% Output:

% new_population - Population after mutation in phenotype space

bit_population = p2g(old_population);

new_bit_population = bitxor(bit_population, ...

random_bit_matrix(size(bit_population),prob));

new_population = g2p(new_bit_population);

C.4 tournament selection.m

function winners = tournament_selection(perf, pairing)

%

% Tournament selection based on Pareto ranking of a given matrix of

% performance measures and a pairing. Row indices of winners are returned.

function nd = x_is_non_dominated(perf_x, perf_y)

nd = all(all(perf_x <= perf_y));

end

M = size(perf,1);

winners = [];

for I = 1:2:M-1

o = pairing([I I+1]);

if x_is_non_dominated(perf(o(1),:), perf(o(2),:))

winners([end+1 end+2]) = [o(1) o(1)];

else

winners([end+1 end+2]) = [o(2) o(2)];

end

end

end

C.5 diversity operator.m

function [PF, solutions] = diversity_operator(PF, solutions, max_count, ...

niche_radius)

% [PF, solutions] = diversity_operator(PF, solutions, max_count, niche_radius)


%

% Diversity operator for MOEA-RF. This function performs a recurrent

% truncation operation on PF so that at most "max_count" members are left

% in PF and in the corresponding solutions

while size(PF,1) > max_count

niche_count = sum((squareform(pdist(PF)) + eye(size(PF,1)) * 10 ) < ...

niche_radius);

most_crowded = find(niche_count == max(niche_count));

to_remove = most_crowded(randperm(length(most_crowded)));

to_remove = to_remove(1);

PF(to_remove,:) = [];

solutions(to_remove,:) = [];

end

C.6 find pareto front possibilistic.m

function winners = find_pareto_front_possibilistic(perf, L)

%

% Necessity-possibilistic Pareto front finding algorithm for MOEA-RF.

% Returns the row indices of the given objective function

% values that belong to the Pareto set.

%

% NOTE: This function works only for two-dimensional objective functions.
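% A candidate is discarded only if some other point dominates it even
% after the candidate's objective values are reduced by independent
% Uniform(0, L) random amounts; with L set to the noise level (see
% moearf.m), points within the noise band of the front are retained.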

len = size(perf,1);

dominated = zeros(len,1);

mask = ones(len,1);

I = 1;

while I < len

mask(I) = 0;

dominated = dominated | ((perf(:,1) - unifrnd(0,L,len,1) >= perf(I,1)) & ...

(perf(:,2) - unifrnd(0,L,len,1) >= perf(I,2))) & mask;

index_not_dominated = (find(not(dominated)));

mask(I) = 1;

I=min([index_not_dominated(index_not_dominated > I) ;len]);

end

winners = find(not(dominated));

C.7 random bit matrix.m

function result = random_bit_matrix(s,prob)
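% Generates an s(1)-by-s(2) uint32 matrix in which each of the 15
% least significant bits of every element is set independently with
% probability prob. Used as a crossover mask and as a bit-flip
% pattern in mutation.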

bits = uint32(rand(s(1), s(2), 15) < prob);

result = zeros(s,'uint32');

for I = 1:15

result = result + bits(:,:,I).*2^(I-1);

end

end


D Implementation of PMOSA algorithm

Probabilistic Multiobjective Simulated Annealing Algorithm (PMOSA) defined and originally implemented by Mattila et al. [6]. The algorithm has been modified here to accept different benchmark problems.

D.1 sa2ba.m

function [S,P,fo] = sa2ba(noise, rstate,r,M,T,gamma, TEST_FUNC, LOWER, UPPER)

%Simulated annealing for a two-objective test problem with uncertainty

%

% PMOSA algorithm - additive version

% a maximum of gamma solutions are included in the non-dominated set

%Initialization-------------------------

d=size(LOWER,2); % Dimension of the vector of decision

% variables

l=LOWER; % Lower bounds of decision variables

u=UPPER; % Upper bounds

V=[noise^2 noise^2]; % Variances of objective function values

delta=(max(UPPER) - min(LOWER))/10; % Amount of change in generating

% new solutions

%r=10; % Number of samples to evaluate objectives

%T=2; % Temperature

%gamma=3; % Insert into non-dominated set, if

% probability of being dominated is

% less than gamma

fo=[];

fout=0;

cout=0;

%Initialization-end---------------------

%Initial solution-----------------------

k=1;

%rand('state',rstate); % Random state for initial solution

x=unifrnd(l,u); % Initial solution

y=x; % Store

fi=TEST_FUNC(x); % Actual objective function values

e1=normrnd(0,sqrt(V(1)),1,r); % Uncertainties

e2=normrnd(0,sqrt(V(2)),1,r);

f=[mean(fi(1)+e1) mean(fi(2)+e2)]; % estimated ofvs

v=[var(fi(1)+e1)/r var(fi(2)+e2)/r]; % variance estimates of means

fx=f; % Store

vx=v;

px=0; % Probability that x is dominated by

% the current S

py=px; % Store

p=1; % Acceptance probability

P=0; % Mutual probabilities of domination

acc=1; % Accept as current solution

ins=1; % Insert into non-dominated set

S=[x fx vx]; % Currently selected non-dominated set

Sn=1; % Size of S


%Initial solution-end-------------------

%Iteration------------------------------

while k<M

k=k+1;

%Generate new candidate solution

y=gen(x,delta,l,u);

%Evaluate

fi=TEST_FUNC(y); % Actual objective function values

e1=normrnd(0,sqrt(V(1)),1,r); % Uncertainties

e2=normrnd(0,sqrt(V(2)),1,r);

f=[mean(fi(1)+e1) mean(fi(2)+e2)]; % estimated ofvs

v=[var(fi(1)+e1)/r var(fi(2)+e2)/r]; % variance estimates of means

I=find(ismember(S(:,1:d),x,'rows')); % Position of x in S

%Determine energy of x as probability of being dominated by S and y

if isempty(I) % x is not in S

px=0;

for i=1:size(S,1)

px=px+pdom(S(i,d+1:d+2),S(i,d+3:d+4),fx,vx);

end

else % x is in S

px=sum(P(I,:));

end

px=px+pdom(f,v,fx,vx); % Account for y

%Determine energy of y as probability of being dominated by S and x

py=0;

for i=1:size(S,1)

py=py+pdom(S(i,d+1:d+2),S(i,d+3:d+4),f,v);

end

%Insertion into non-dominated set

if Sn<gamma || py<max(sum(P,2))

ins=1;

else

ins=0;

end

%Remove the most dominated element if the size of S exceeds gamma

if ins==1 && Sn==gamma

[z,I2]=max(sum(P,2));

I3=[1:I2-1 I2+1:Sn];

S=S(I3,:);

P=P(I3,I3);

Sn=Sn-1;

end

%Update probabilities of being dominated within S

if ins==1

S(end+1,:)=[y f v];

Sn=Sn+1;

P(Sn,Sn)=0; %Probability that y dominates y

for i=1:Sn-1

%Probability that y dominates ith member of S

P(i,Sn)=pdom(S(Sn,d+1:d+2),S(Sn,d+3:d+4),S(i,d+1:d+2),S(i,d+3:d+4));

%Probability that ith member of S dominates y

P(Sn,i)=pdom(S(i,d+1:d+2),S(i,d+3:d+4),S(Sn,d+1:d+2),S(Sn,d+3:d+4));


end

end

%Acceptance of y

if isempty(I) %x is not in S

py=py+pdom(fx,vx,f,v); %Account for x

end

acc=0;

if py<=px

acc=1;

p=1;

else

p=exp(-(py-px)/T);

if rand<=p

acc=1;

end

end

%If accepted, set current solution,

if acc==1

x=y;

fx=f;

vx=v;

end

%Command line output

if cout

fprintf(['k: %2d f1: %4.3f f2: %4.3f px: %4.3f py: %4.3f ' ...

'p: %4.3f acc: %1d ins: %1d #S: %2d \n'], ...

k,f(1),f(2),px,py,p,acc,ins,size(S,1));

end

%Save values

if fout

fo(end+1,:)=[px py];

end

end

%Iteration end--------------------------
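A typical invocation, assuming KUR from Appendix B is on the MATLAB path (the parameter values are illustrative, not those used in the experiments):

noise = 0.05;
[S, P, fo] = sa2ba(noise, 0, 10, 5000, 2, 50, @KUR, -5*ones(1,3), 5*ones(1,3));
% S(:,1:3) - decision variables of the non-dominated set
% S(:,4:5) - estimated objective values, S(:,6:7) - their variance estimates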

D.2 gen.m

function f=gen(x,d,l,u)

% Generate new solution from x

% l lower bounds

% u upper bounds
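% d scale parameter of the Laplace perturbation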

% Laplace distribution - one element perturbed

k=unidrnd(length(x));

x(k)=x(k)+laprnd(d);

f=x;

%Restore feasibility

if f(k)<l(k)

f(k)=2*l(k)-f(k);

elseif f(k)>u(k)

f(k)=2*u(k)-f(k);

end


D.3 pdom.m

function f=pdom(x1,s1,x2,s2)

%Probability that design 1 dominates design 2 in context of minimization

%x1,x2 vector of estimates for expected objective function values

%s1,s2 vector of variance estimates

%Observations of ofvs assumed normally distributed

f=prod(normcdf(0,x1-x2,sqrt(s1+s2)));
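As a quick sanity check (illustrative numbers): a design whose estimated objectives are clearly lower in both dimensions dominates with probability near one, while two identical designs dominate each other with probability 0.25:

pdom([0 0], [0.01 0.01], [1 1], [0.01 0.01])   % ~1
pdom([0 0], [0.01 0.01], [0 0], [0.01 0.01])   % 0.5 * 0.5 = 0.25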

D.4 laprnd.m

function x=laprnd(b)

% Generates Laplace distributed random variates with

% location parameter equal to 0

% scale parameter equal to b

[m,n]=size(b);

u=unifrnd(-1/2,1/2,m,n);

x=-b.*sign(u).*log(1-2*abs(u));


E Implementation of metrics

Performance metrics defined by Goh and Tan [7].

E.1 stats.m

function [GD, MS, S, HVR] = stats(pf_known, pf_true)

%

% Scalar performance metrics GD, MS, S, and HVR between two

% sets of two-dimensional objective function values.

% GD - Generational Distance

klen=size(pf_known,1);

tlen=size(pf_true,1);

fcount=size(pf_true,2);

gdsum = 0;

for I = 1:klen

gdsum = gdsum + min(sum( (pf_true - repmat(pf_known(I,:),tlen,1)).^2,2));

end

GD = sqrt(gdsum / klen);

% MS - Maximum Spread

mssum = 0;

for I = 1:fcount

mssum = mssum + ((min(max(pf_known(:,I)), max(pf_true(:,I))) - ...

max(min(pf_known(:,I)), min(pf_true(:,I)))) / ...

(max(pf_true(:,I)) - min(pf_true(:,I))))^2;

end

MS = sqrt(mssum / fcount);

% S - Spacing

distances = squareform(pdist(pf_known));

distances = distances + eye(size(distances)) * 10;

min_distances = min(distances);

d_avg = mean(min_distances);

ssum = sum((min_distances-d_avg).^2);

S = sqrt(ssum / klen) / klen;

% HVR - Hypervolume Ratio

% Volume of the objective space that is dominated by the Pareto front.

% Calculated by creating a grid of zeros that fills the objective space

% and then setting to 1 every grid point that is dominated by some

% point of the front.

if size(pf_known,2) == 2

grid_points = combvec(min(pf_true(:,1)):0.01:max(pf_true(:,1)), ...

min(pf_true(:,2)):0.01:max(pf_true(:,2)))’;

grid_known = zeros(size(grid_points));

grid_true = zeros(size(grid_points));

for I = 1 : size(pf_known,1);

grid_known(find(all([grid_points(:,1) >= pf_known(I,1), ...

grid_points(:,2) >= pf_known(I,2)],2)),:) = 1;

end

for I = 1 : size(pf_true,1);

grid_true(find(all([grid_points(:,1) >= pf_true(I,1), ...

grid_points(:,2) >= pf_true(I,2)],2)),:) = 1;

end

HV_known = sum(sum(grid_known));

HV_true = sum(sum(grid_true));

HVR = HV_known / HV_true;

else

HVR = 0;

end

end
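A usage sketch against the analytical front of DTLZ2B (on the true front g = 0, so f1 = cos(t*pi/2) and f2 = sin(t*pi/2) for t in [0, 1]); moearf and DTLZ2B from Appendices B and C are assumed to be on the path, and the noise level 0.01 is illustrative:

t = (0:0.01:1)';
pf_true = [cos(t*pi/2) sin(t*pi/2)];   % sampled true front of DTLZ2B
[pf_known, sols] = moearf(@DTLZ2B, zeros(1,5), ones(1,5), 0.01);
[GD, MS, S, HVR] = stats(pf_known, pf_true);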
