
Parameter Sensitivity Analysis and Optimization of an Agent Based Psychological Model

JOHANNES CHRISTOF LEDERER

DEPARTMENT OF PHYSICS

ETH ZÜRICH

CHRISTIAN LORENZ MÜLLER (Advisor)

PROF. IVO F. SBALZARINI (Supervisor)

COMPUTATIONAL BIOPHYSICS LAB

ETH ZÜRICH

July, 2007

Abstract: Model simulation is a recent and powerful tool in the field of psychology. Researchers design models to prepare field studies and to predict human behavior even when no real-life surveys can be made. However, an analytical examination of a psychological model that reveals its features and stability has not been carried out so far. In this work we present the implementation, use, and improvement of methods known from biology and economics. We obtained various interesting parameter settings using CMA-ES (Covariance Matrix Adaptation Evolution Strategy). A local sensitivity analysis then provided insight into their stability. Moreover, we could screen the most important input factors by applying an improved version of the Morris Method. We show with this work that an analytical, computer-based examination of such models can help psychologists as well as other researchers to compare their simulation outputs with the gathered data and to interpret their results.


Contents

I. Prerequisites

1. Introduction
   1.1. General Introduction and Goal of the Thesis
   1.2. Overview and Structure

2. The Model
   2.1. Origin of the Data and the Model
   2.2. Implementation

II. Analysis

3. Optimization with CMA
   3.1. Introduction
   3.2. Why CMA?
   3.3. The Objective Function
   3.4. Setting and Notation
   3.5. Results
   3.6. Conclusion

4. Sensitivity Analysis
   4.1. Local Sensitivity Analysis
        4.1.1. Introduction
        4.1.2. Standard Method
        4.1.3. Difference Quotient Approach
        4.1.4. Comment on the Results
        4.1.5. Results
        4.1.6. Conclusion
   4.2. Screening method
        4.2.1. Standard Morris' Method
        4.2.2. The Continuous Morris Method
        4.2.3. Choosing the Best Trajectories*
        4.2.4. Probability Distribution and Analytical Test of the Method
        4.2.5. Results
        4.2.6. Conclusion

5. Discussion


Part I. Prerequisites

1. Introduction

1.1. General Introduction and Goal of the Thesis

To understand and predict the complex dynamics of human behavior, psychologists create simulation models that can be calibrated with data gathered during behavior change campaigns. In this work we used a model that simulates different ways of convincing people to adopt a specific behavior. Psychologists had already determined one set of parameters by hand. One goal was to analyze the robustness of this set and to identify the parameters that have the biggest influence on the model output. Furthermore, we aimed at finding and analyzing other parameter sets that produce appropriate model outputs.

Model calibration and analysis by hand is cumbersome and inaccurate. We therefore applied known computer algorithms such as CMA-ES (Covariance Matrix Adaptation Evolution Strategy) and the Morris Method, enhanced them where necessary, and implemented them on the computer.

1.2. Overview and Structure

This work is divided into four main parts. First, we described the model and its technical implementation in chapter 2. We then introduced and applied CMA-ES, a method based on the ideas of evolution, in order to find new parameter sets that reproduce the data; this algorithm and its results are presented in chapter 3. In a next step, we carried out a sensitivity analysis on the model. Methods of two different types were used, one dealing with deviations within a small range in chapter 4.1 ("local sensitivity") and one dealing with deviations over the whole parameter space in chapter 4.2 ("global sensitivity"). The thesis ends with a discussion of all the results obtained.


2. The Model

2.1. Origin of the Data and the Model

A group at EAWAG (Swiss Federal Institute of Aquatic Science and Technology) made it possible for us to work with a model that has been applied in many investigations, although we could only include the data of one field study: a behavior change campaign in Santiago de Cuba at the beginning of 2005. Psychologists tried to convince the inhabitants of a village to separate solid waste for recycling. Different types of reminders were used to influence the inhabitants in the desired way. For example, small signs were placed inside or outside their houses to remind them of specific activities. Furthermore, the inhabitants had to answer questionnaires each day so that the psychologists could monitor the success of their interventions. The psychologists used these questionnaires to determine the "behavior intensity", i.e. the fraction of separated solid waste relative to the entire solid waste. Unfortunately, they also conducted interviews from time to time. This influenced the behavior of the people, so we excluded the data of those specific days. Furthermore, we had no measure to estimate the uncertainty of the data we obtained from the study.

The model was built by the group of Dr. Robert Tobias using ATASIS (Application oriented Theory based Architecture for Simulation of Interventions in Social systems). ATASIS is a simple event-based simulation framework that allows one to simulate multi-agent systems (although only one agent that can have various characteristics, which we call "types", is needed in this particular model). Further information can be found in [1]. Christian Wuerzebesser used JSNS (Java Social Network Simulator) to provide the executable Java file we used. JSNS, a very simple but flexible event-based simulation environment focusing on interconnectivity, is described in [2].

2.2. Implementation

The model input consisted of 25 parameters, 21 of them "global" (meaning that they do not depend on the type of the agent) and 4 "individual" (meaning that they differ from type to type). All of them, except "nSodII01Value", were restricted to an interval for our analysis; "nSodII01Value" took values from a discrete set. We mapped all parameters onto [0, 1] for the calculations in order to be able to use the individual parts of the work independently. According to [4] we could separate the data into three different types, produced by three different sets of individual parameters. So we added the individual parameters of each type to the 21 global parameters to obtain one parameter set to work with; that means we did our calculations on [0, 1]^N, where N denotes the number of dimensions of the parameter space, N = 21 (global parameters) + 3 · 4 (individual parameters) = 33. The names of the parameters and their boundaries as well as their unnormalized nominal values can be found in table 1.
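As an illustration, a minimal Python sketch of this normalization (the actual implementation was done in MATLAB); the bound vectors shown here are truncated to the first five parameters of table 1 and would have N = 33 entries in the real setting:

```python
import numpy as np

# Lower and upper bounds of the first five parameters from table 1
# (abbreviated for illustration; the full vectors have N = 33 entries).
lower = np.array([0.0, 0.0, 0.0, 0.0, 0.0])
upper = np.array([1.0, 1.0, 0.1, 1.0, 0.5])

def normalize(x):
    """Map unnormalized parameter values onto the unit cube [0, 1]^N."""
    return (x - lower) / (upper - lower)

def denormalize(u):
    """Map points of [0, 1]^N back to the original parameter intervals."""
    return lower + u * (upper - lower)

# Nominal values of parameters 1-5 (cicWH, anicCt, meccRPD, anicWHR, meccHPD)
nominal = np.array([0.0, 1.0, 0.0, 0.5, 0.1])
print(normalize(nominal))   # e.g. meccHPD: 0.1 on [0, 0.5] maps to 0.2
```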

The implementation is sketched in figure 1, where one can also see the three parts implemented in MATLAB. The "wrapper" provided an interface between the model and MATLAB and supplied the necessary data for the "calculations". Here, the parameter variation and all other calculation steps were performed, which allowed the "result evaluation" to determine the next steps.

Figure 1: Flowchart of the implementation.


Number  Parameter                        Nominal Value  Lower Bound  Upper Bound

Global parameters
1       cicWH                            0              0            1
2       anicCt                           1              0            1
3       meccRPD                          0              0            0.1
4       anicWHR                          0.5            0            1
5       meccHPD                          0.1            0            0.5
6       anicCIwt                         1              0            1
7       meccCPVDD                        0.25           0            1
8       bescIDRPR                        0.65           0            1
9       meacRPSNR                        0.65           0            1
10      smOwnSign                        0.1            0            1
11      bescCRPRPUT                      1              0            3
12      habitRaiseCom                    0              0            1
13      memoryEvents-s                   20             1            30
14      memoryEvents-w                   0.5            0.1          10
15      habitRaise0IntSlow               1              0            1
16      target-behaviour-s               30             0            30
17      target-behaviour-w               0.5            0            10
18      target-behaviour-dsp             0.3            0            1
19      memory-events-idarbe             0.45           0            3
20      memory-reading-anicPIR           0.5            0            1
21      memory-reading-anicWIR           0.35           0            1

Individual parameters
22      startAccConst type 1             0.3            0            1
23      nSodII01Value type 1             0.2            0            1
24      cognitionIntensity type 1        0.4            0            1
25      commitment-1-intensity type 1    0.6            0            1
26      startAccConst type 2             0.1            0            1
27      nSodII01Value type 2             0              0            1
28      cognitionIntensity type 2        0.4            0            1
29      commitment-1-intensity type 2    1              0            1
30      startAccConst type 3             0.1            0            1
31      nSodII01Value type 3             0              0            1
32      cognitionIntensity type 3        0.4            0            1
33      commitment-1-intensity type 3    0.8            0            1

Table 1: Names of the parameters with their corresponding number, unnormalized nominal value and boundaries.


Part II. Analysis

3. Optimization with CMA

3.1. Introduction

When we speak of "optimization" we mean the problem of finding an extremum of a given objective function. CMA, or CMA-ES (Covariance Matrix Adaptation Evolution Strategy), is an evolution strategy designed to find minima on continuous domains (but it could easily be generalized). First, an estimate of the inverse Hessian matrix and a starting point for the second generation are calculated from the points of an initial sampling of the parameter space that lead to the best values of the fitness function. Then, in every generation, the matrix and the starting point are used to determine a new sampling. The best points of this sampling lead to a new estimate of the matrix and a new starting point for the subsequent generation; i.e. the best sample points of every generation are used to improve the sampling and therefore to determine a preferred direction in the parameter space, ideally the direction that leads to the global minimum. CMA-ES does not presume that a derivative of the objective function exists; it does not even presume that the objective function is continuous. This makes it applicable to many more classes of functions than traditional methods like Quasi-Newton or related procedures. For more details see the tutorial [5].

3.2. Why CMA?

As stated in section 1, one set of parameters had been found by hand calibration. A first analysis showed that this set is a reasonable choice and fits the data to some degree. But there are two major reasons to automate the search for new parameter sets. First of all, one needs a high level of experience and various assumptions to find good parameter sets by hand. This bears the risk of losing appropriate parameter sets that were thought to be inappropriate or simply not taken into account. Moreover, hand calibration is very tedious and inaccurate.

At first, we did not have any information about the model except for the list of inputs and the possibility to calculate a finite - and, because of the high computational costs, small - number of outputs. The decisive argument for CMA is, besides its efficiency, its applicability to a large class of functions. It does not demand any properties of the objective function apart from being well defined on the given parameter space. In particular, it does not require a smooth objective function. That makes it a good choice for "black-box" models.


3.3. The Objective Function

In order to say whether a set of parameters is "good" or not, we need a measure. We call a parameter set x "good" if it leads to a model output resembling the data. Now we want to formulate this in a mathematically rigorous way, that is, to find a measure for the suitability of the parameters.

The model outputs can be divided into three groups that are characterized by the individual parameters, as mentioned in the first part of this work. So we want to find a parameter set that fits all three of them. Let

d_i : [a, b] \cap \mathbb{N} \to \mathbb{R}   (1)

be the function that yields the data for the i-th type at a particular day (here, the field study began at day a and ended at day b). Similarly, we have the model output

m : [0, 1]^N \times ([a, b] \cap \mathbb{N}) \to \mathbb{R}   (2)

where we work on the normalized parameter space obtained from the linear transformation that maps the lower bound of a parameter to 0 and the upper bound to 1, as addressed briefly in chapter 2. Then we define the objective function f as

f : [0, 1]^N \to \mathbb{R}_0^+, \qquad f(x) := \sum_{i \in \text{types}} \Big( \sum_{j \in \text{days}} \big( d_i(j) - m(x, j) \big)^2 \Big)^2.   (3)

So the "better" the parameter set, the lower the objective function. We have a perfect fit if the objective function is equal to zero.
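To make the definition concrete, here is a minimal Python sketch of this objective function; `run_model` is a hypothetical black-box wrapper around the model evaluation (in the thesis this role is played by the MATLAB wrapper of chapter 2.2) and is assumed to return one behavior-intensity series per type:

```python
import numpy as np

def objective(x, run_model, data):
    """Objective function f of eq. (3): for every type, sum the squared
    day-wise deviations between model output and data, then square and
    add up these per-type sums.

    x         -- parameter vector on the normalized space [0, 1]^N
    run_model -- hypothetical black-box wrapper returning an array of shape
                 (n_types, n_days) with the simulated behavior intensities
    data      -- array of shape (n_types, n_days) with the measured
                 behavior intensities d_i(j)
    """
    m = np.asarray(run_model(x), dtype=float)               # model output m(x, j)
    per_type = np.sum((np.asarray(data) - m) ** 2, axis=1)  # inner sum over days
    return float(np.sum(per_type ** 2))                     # outer sum over types
```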

3.4. Setting and Notation

We use the CMA-ES implementation described in [5]¹. The inputs comprise the objective function (in our case the one described in the preceding chapter), a starting point and some default settings. We briefly introduce the most important outputs and defaults here:

TotalMin is the lowest value of the objective function achieved in this run.

MinFEvals is the number of evaluations of the objective function needed to reach the total minimum.

FEvals is the total number of evaluations of the objective function in this run.

Sigma is a scalar that determines the initial coordinate-wise standard deviations for the search.

PopSize is the size of the population. It defines how many sample points are evaluated in each generation. The standard value for PopSize is ⌊4 + 3 log(N)⌋, where N is the number of dimensions of the parameter space, i.e. 33 in our case.

Stopflag is the stopping criterion. "warnequalfunvals" means that at some point the difference between the function evaluations fell below a minimum value.

¹ It can be found on the web at http://www.bionik.tu-berlin.de/user/niko/cmaesintro.html.
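For reference, a hedged sketch of how such a run could look with the Python port of the same algorithm (the `cma` package by N. Hansen); the thesis itself used the MATLAB implementation from [5], and the stand-in objective below is only there to make the snippet self-contained:

```python
import cma   # pip install cma; Python port of the CMA-ES described in [5]

N = 33                       # dimension of the normalized parameter space
x0 = [0.5] * N               # starting point, e.g. the nominal parameter set
sigma0 = 0.05                # initial coordinate-wise standard deviation (Sigma)

def objective(x):
    # Stand-in for the objective of chapter 3.3; in the real setting this
    # would wrap the model evaluation (a simple quadratic is used here only
    # so that the snippet runs on its own).
    return sum((xi - 0.3) ** 2 for xi in x)

es = cma.CMAEvolutionStrategy(x0, sigma0, {
    'popsize': N,            # PopSize; the package default is 4 + floor(3 log N)
    'bounds': [0, 1],        # search on the normalized space [0, 1]^N
    'maxfevals': 20000,      # cap on objective evaluations (FEvals)
})
es.optimize(objective)
print(es.result.fbest, es.result.evals_best)   # TotalMin and MinFEvals
```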


3.5. Results

We began with 20 CMA runs using random starting points and default settings. CMA was set to calculate at most 20000 model outputs in one run, but it always terminated earlier because the model outputs became too similar. The results were poor, even when we tried to adjust some of the settings later on. So we decided to take the nominal parameter set as starting point and to set PopSize = N and Sigma = 0.05. The value of the objective function with the nominal (hand calibrated) parameter set was 0.86. 10 of our 20 runs beat this result; the two best sets are described in table 3. In table 4 we plotted the behavior of the output with one of these new parameter sets, the nominal parameter set and the data. Table 2 gives an overview of the normalized values of the nominal and the new parameters. One can see significant differences for some particular parameters; for example, parameter 26 is much larger in the CMA sets than in the nominal set.

[Three plots of normalized parameter values (parameter number on the x-axis, normalized value on the y-axis).]

Table 2: On the top: normalized nominal, i.e. hand calibrated, values of the parameters. On the left: normalized parameter set P50313. On the right: normalized parameter set P50314. The names of the parameter sets correspond to table 3.

Name     TotalMin  MinFEvals  FEvals  Sigma  PopSize  StopFlag

P50313   0.5585    6141       10529   0.05   33       warnequalfunvals
P50314   0.5590    11423      15083   0.05   33       warnequalfunvals

Table 3: Best results of the CMA-ES. For a description of the columns see chapter 3.4.


[Three plots of behavior intensity over the days of the campaign, one per type.]

Table 4: Comparison of the model output with the best parameter set of CMA-ES (continuous line), with the nominal parameter set (dashed line) and the data (dotted line). The first plot contains the data of type 1, the second one the data of type 2 and the third one the data of type 3. On the x-axes the days of the interviews are left out (compare chapter 2).

3.6. Conclusion

On the one hand, we had relatively poor results using CMA-ES in the standard way, even though we invested a considerable amount of computational effort. The parameter space had 33 dimensions, which is probably too many for an efficient CMA calculation. The crucial point is possibly the population size, which can easily be raised, but at the cost of much higher computational effort. A combination with a screening method to reduce the dimensionality of the parameter space could eventually avoid this problem.

On the other hand, using the hand calibrated parameter set as initial point and doing a "local" search with a small Sigma, we had satisfactory results. CMA-ES is defined in such a way that considerable changes of the parameter values can occur even with a small variance, see table 2.

So in our case the method was not useful as a stand-alone tool, but it was an efficient one to improve existing parameter sets.


4. Sensitivity Analysis

4.1. Local Sensitivity Analysis

4.1.1. Introduction

We have obtained various sets of parameters, so we want to know how small variations of these parameters affect the model output. Varying the parameters within a small range usually means changing them (only one at a time!) by at most ±5 percent of their value. Assuming the model to be differentiable, we can interpret this method as an estimate of the partial derivatives with respect to the parameters. In general, we call a parameter set "robust" if its model output is insensitive when we vary the parameter values slightly. Such solutions are important in real life, since they remain stable when small perturbations affect the model.

4.1.2. Standard Method

Usually, we normalize the parameters and the function output in order to be able to compare them. So the quantity we calculate looks like

D_\delta(x_i) := \frac{\Delta f / f}{\Delta x_i / x_i}.   (4)

For the deviation we use

\Delta x_i = x_i \delta   (5)

where δ is, as mentioned, at most ±5 percent. So, in a more precise form,

D_\delta(x_i) = \frac{f(x + x_i \delta e_i) - f(x)}{\Delta x_i} \cdot \frac{x_i}{f(x)}   (6)

where e_i is the canonical unit vector in the i-th direction. D_δ(x_i) is a measure for the robustness of the model with respect to a variation in the direction defined by δ e_i. To get a more accurate measure one introduces the quantity

D'(x_i) := \sum_{\delta} D_\delta(x_i) / \kappa   (7)

where

\delta \in \{\pm 0.05, \pm 0.02, \pm 0.01\}.

In equation (7) the summation only includes summands for which x + x_i δ e_i is still within the boundaries; κ is then defined as the number of these summands.

This is probably not the best way of measuring the sensitivity in our case. One big disadvantage is the failure for x_i = 0: the measure is zero for this value of x_i, no matter how the function looks in its neighborhood. Moreover, a perturbation of the form Δx_i = x_i · δ depends on the region of the parameter space. Since we do not want to overemphasize particular regions of the parameter space, we should try a different approach.
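A minimal Python sketch of this Standard Method, assuming the objective f of chapter 3.3 and a parameter vector x on the normalized space [0, 1]^N:

```python
import numpy as np

DELTAS = [0.05, -0.05, 0.02, -0.02, 0.01, -0.01]

def standard_local_sensitivity(f, x):
    """D'(x_i) of eq. (7): relative finite differences D_delta(x_i) of
    eq. (6), averaged over all step sizes delta that keep the perturbed
    point inside the box [0, 1]^N."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    sens = np.zeros(x.size)
    for i in range(x.size):
        effects = []
        for delta in DELTAS:
            step = x[i] * delta                    # Delta x_i = x_i * delta, eq. (5)
            xi_new = x[i] + step
            if step == 0.0 or not (0.0 <= xi_new <= 1.0):
                continue                           # skip summands outside the bounds
            xp = x.copy()
            xp[i] = xi_new
            effects.append((f(xp) - f0) / step * x[i] / f0)   # D_delta(x_i), eq. (6)
        sens[i] = np.mean(effects) if effects else 0.0        # kappa = len(effects)
    return sens
```

Dividing the resulting vector by its largest entry gives the normalized values that are plotted later; the sketch also makes the x_i = 0 failure visible, since such a parameter always receives sensitivity zero.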


4.1.3. Difference Quotient Approach

In order to get rid of the disadvantages stated above we go back to the basic definition of the derivative. Assume for convenience that f is in C^1. Then

\partial_i f := \lim_{\delta \to 0} \frac{f(x + \delta e_i) - f(x)}{\delta}.   (8)

We now estimate this by

l(x_i) := \frac{f(x + \delta e_i) - f(x)}{\delta}.   (9)

As in the Standard Method we normalize the values and obtain

L_\delta(x_i) := \frac{f(x + \delta e_i) - f(x)}{\Delta x_i} \cdot \frac{x_i}{f}   (10)

and

L'(x_i) := \sum_{\delta} L_\delta(x_i) / \kappa   (11)

where

\delta \in \{\pm 0.05, \pm 0.02, \pm 0.01\}.

As above, the summation only includes summands for which x + δ e_i is still within the boundaries; κ is then defined as the number of these summands.

4.1.4. Comment on the Results

The results of both methods show similarities, as one can see later on. However, we have some parameters with nominal value zero, and we assume the distribution in our parameter space to be uniform. The Difference Quotient Approach should therefore be preferred, but we plotted the results of both methods for convenience. They are normalized in order to get comparable results. For the Standard Method we plotted D'(x_i) / \max_j D'(x_j), where x is the nominal parameter set found by hand calibration. Similarly, we plotted L'(x_i) / \max_j L'(x_j) for the Difference Quotient Approach.

4.1.5. Results

A sensitivity analysis has been carried out for three different parameter sets: first for the hand calibrated parameters, then for the two best CMA-ES outputs (which can be found in table 3). The results are collected in table 5. In the left column one can find the results for the Standard Method, in the right one the results for the Difference Quotient Approach. The names of the parameters are listed in table 1. We collected the most sensitive parameters of the nominal set according to the Difference Quotient Approach in table 6. We notice furthermore that only about half of the parameters are sensitive and that the CMA optima are more robust than the nominal one.


[Six plots of local sensitivity per parameter (parameter number on the x-axis, normalized sensitivity on the y-axis), arranged in two columns and three rows.]

Table 5: Local sensitivity analysis as in chapters 4.1.2 and 4.1.3. The left column contains the results of the standard local sensitivity analysis, the right one the results of the Difference Quotient Approach. On the top one can find the results for the nominal parameter set, in the middle for P50313, on the bottom for P50314. The names of the parameter sets are according to table 3. Note that the values are normalized with respect to the most sensitive parameter of the nominal set for each method.

Sensitivity (unnormalized)  Parameter Number  Parameter Name

6.31                        10                smOwnSign
4.43                        7                 meccCPVDD
3.47                        14                memoryEvents-w
2.69                        2                 anicCt

Table 6: The most locally sensitive parameters of the psychological model according to the Difference Quotient Approach as described in chapter 4.1.3.


4.1.6. Conclusion

The results of the two methods differ, which is why one should take care to apply the appropriate one. In our case we preferred the Difference Quotient Approach described in chapter 4.1.3.

The parameter sets found with the CMA algorithm are more robust than the hand calibrated one. This makes them, besides the fact that they fit the data more adequately, the better choice.

4.2. Screening method

4.2.1. Standard Morris’ Method

In this part we want to examine the importance of the individual input parameters for the model output. Very often, only a small subset of all parameters has considerable effects on the system. Morris proposed an economical analysis to "screen" them by changing only one parameter at a time ("OAT") and using each model output twice: as starting point and as endpoint of two different "elementary effects", as explained shortly.

Morris thought of the parameter space [0, 1]^N as being covered by an imaginary rectangular, equidistant grid containing the boundaries. Let 1/p be the smallest spacing between two parallel lines of the grid; p is called the "level" of the grid. We choose one arbitrary point x on the intersections of the grid and define the "elementary effect" of the i-th factor as

d_i := \frac{f(x + \Delta e_i) - f(x)}{\Delta}   (12)

where e_i is the canonical unit vector in the i-th direction and

\Delta = \frac{n}{p}, \quad n \in \mathbb{N}.

In contrast to the local sensitivity methods, Morris tries to measure the effects of changing the parameters over the whole parameter space, so ∆ is usually larger than 0.5. The left picture in table 7 illustrates the calculation of an elementary effect in two dimensions.

Furthermore, we call an (N + 1)-tuple of vectors {x^j}, x^j ∈ [0, 1]^N, a "trajectory" if

\forall j \; \exists! \, i_j : \; x^j_m - x^{j-1}_m = \pm \Delta \cdot \delta_{m i_j} \;\wedge\; i_k \neq i_l \;\forall k \neq l.   (13)

A trajectory in three dimensions is shown in table 7. Morris also demands that x^j_m be a multiple of 1/p for all m, j. Now we are able to calculate one elementary effect for each parameter by calculating all model outputs f(x^j) along the trajectory. Knowing f(x^j) for all j we can deduce the elementary effects with the equation

d_i = \frac{f(x^j) - f(x^{j-1})}{\Delta}   (14)

where i and j fulfill (13). To get one full set of elementary effects we need N + 1 model runs. The idea is now to average over many of these sets.


Table 7: On the left: Single elementary effect in two dimensions. On the right: Atrajectory in three dimensions.

4.2.2. The Continuous Morris Method

First of all, we drop the requirement that x^j_m be a multiple of 1/p for all m, j. This has one major reason: the risk of getting deceptive results when the grid is too coarse (think of a grid covering a fast oscillating sine wave, for example), called "aliasing". Another intuitive alteration of the original method is to define the elementary effects as

d^*_i := \left| \frac{f(x^j) - f(x^{j-1})}{\Delta} \right|   (15)

following [6]. Now we can find a mean for d^*_i by evaluating the model for several trajectories and taking the canonical mean. This makes (15) reasonable, since we want to avoid cancellation of the single elementary effects.

For the calculations one has to choose an appropriate ∆. In [7] the authors propose ∆ = p/(2(p − 1)). We can interpret the continuous parameter space as the limit p → ∞; then ∆ → 0.5. This seems to be a reasonable choice, since we want a global analysis tool.

A more technical problem is the creation of random trajectories. In [7, chapter 4.4] one can find a straightforward way to create them: random permutation matrices are used to determine which entries of the vectors x^{j-1} and x^j differ from each other.
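A minimal Python sketch of this Continuous Morris Method (random trajectories with ∆ = 0.5 and absolute elementary effects as in eq. (15)); the trajectory construction shown here simply draws a random base point and a random order of the factors rather than using the permutation-matrix scheme of [7]:

```python
import numpy as np

def random_trajectory(N, delta=0.5, rng=np.random):
    """One random OAT trajectory: N+1 points in [0, 1]^N where consecutive
    points differ in exactly one (never repeated) coordinate by +/- delta."""
    x = np.empty((N + 1, N))
    step = rng.choice([delta, -delta], size=N)
    # Base point chosen so that the step of +/- delta stays inside [0, 1].
    x[0] = np.where(step > 0, rng.uniform(0, 1 - delta, N),
                              rng.uniform(delta, 1, N))
    order = rng.permutation(N)                 # which factor changes at which step
    for k, i in enumerate(order):
        x[k + 1] = x[k]
        x[k + 1, i] += step[i]
    return x, order, step

def continuous_morris(f, N, n_traj, delta=0.5, seed=0):
    """Mean absolute elementary effects d*_i of eq. (15), averaged over
    n_traj random trajectories, plus their standard deviations."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, N))
    for t in range(n_traj):
        x, order, step = random_trajectory(N, delta, rng)
        fx = np.array([f(p) for p in x])       # N + 1 model evaluations
        for k, i in enumerate(order):
            effects[t, i] = abs((fx[k + 1] - fx[k]) / step[i])
    return effects.mean(axis=0), effects.std(axis=0)
```

The per-factor standard deviation of the absolute effects is what is later plotted against the mean impact in figure 2.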

4.2.3. Choosing the Best Trajectories*

In most cases, the main performance cost is due to the model evaluation. Then we can neglect the additional calculations and describe the efficiency by the number of model evaluations needed. It is therefore reasonable to reduce this number by choosing a small number of representative trajectories. Following [8], a trajectory is suitable if it is separated widely enough from the other trajectories. But different from [8], we define the separation s(x, y) of two trajectories x, y as

s(x, y) := |x - y|^2.   (16)

A set {x_i} of n trajectories is now constructed iteratively:

1. Create a set {y_k} of n · r random trajectories, r ∈ N.

2. x_1 := y_1, {y_k} := {y_k} \ {y_1}.

3. x_{i+1} := y_j, where j is chosen such that \sum_m s(y_j, x_m) \geq \sum_m s(y_l, x_m) for all y_l ∈ {y_k}.

4. {y_k} := {y_k} \ {y_j}.

5. Repeat steps 3 and 4 until |{x_i}| = n.

That means that in each step we take the trajectory out of the randomly created set that best complements the already chosen ones. (We note that we will run into serious performance problems with increasing n; then the computational effort excluding the model evaluations is no longer negligible.) The idea is that, because of the efficient distribution of trajectories, we will be able to decrease the number of model evaluations by a large factor. Additionally, we implemented a norm for the separation that needs much less computing time than the one proposed in [8]. We did not test the efficiency and reliability of the algorithm, so in this thesis it serves only as a proposal.
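Since the procedure was only proposed and not tested, the following is just a sketch of the greedy selection in Python; the candidate trajectories are assumed to be stacked into one array, e.g. as produced by the trajectory sketch in chapter 4.2.2:

```python
import numpy as np

def select_trajectories(candidates, n):
    """Greedy selection of n well separated trajectories (chapter 4.2.3).
    candidates -- array of shape (n * r, N + 1, N) of random trajectories.
    The separation s(x, y) = |x - y|^2 is the squared Euclidean distance
    between the flattened trajectories."""
    flat = candidates.reshape(len(candidates), -1)
    chosen = [0]                                     # step 2: x_1 := y_1
    remaining = list(range(1, len(candidates)))
    while len(chosen) < n:                           # steps 3-5
        # total separation of every remaining candidate from the chosen set
        totals = [sum(np.sum((flat[j] - flat[m]) ** 2) for m in chosen)
                  for j in remaining]
        best = remaining.pop(int(np.argmax(totals)))
        chosen.append(best)
    return candidates[chosen]
```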

4.2.4. Probability Distribution and Analytical Test of the Method

We do not assume the probability distribution of the parameters to have a specific shape, and for simplicity we use a uniform distribution. In many cases the distribution is supposed to be Gaussian, or at least to be centered around a point in the parameter space, and many test cases have been developed for such prerequisites ([6]). In order to test our implementation we changed the distribution of the trajectories to a 15-dimensional Gaussian distribution centered around 0, leaving all other parts untouched. We took a test function that can be found in [9] and in [6]:

\eta(x) = a_1^T x + a_2^T \cos(x) + a_3^T \sin(x) + x^T M x   (17)

where x is a 15-dimensional input vector, a_1, a_2 and a_3 are three fixed 15-dimensional vectors, and M is a fixed square matrix with 15 rows and columns. The values of these coefficients can be found in [6]. The test function is designed for a 15-dimensional parameter space and has three groups of input factors: five parameters are relatively important, five are unimportant and five have medium importance. The major goal was to allocate the parameters to the right groups.
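For reference, a short sketch of this test function in Python; the coefficient vectors and the matrix below are random placeholders, since the actual values used for the test are tabulated in [6]:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder coefficients -- the actual 15-dimensional vectors a1, a2, a3
# and the 15 x 15 matrix M used for the test are tabulated in [6].
a1, a2, a3 = rng.normal(size=(3, 15))
M = rng.normal(size=(15, 15))

def eta(x):
    """Test function of eq. (17) from [9]/[6] on a 15-dimensional input."""
    x = np.asarray(x, dtype=float)
    return a1 @ x + a2 @ np.cos(x) + a3 @ np.sin(x) + x @ M @ x
```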


Parameter  Analytics Rank  1. Run  2. Run  3. Run

1          9               10      10      10
2          8               8       9       9
3          13              14      14      14
4          11              11      11      11
5          15              15      15      15
6          12              12      12      12
7          10              9       8       8
8          7               7       7       7
9          6               6       6       6
10         14              13      13      13
11         2               1       1       1
12         3               2       2       2
13         4               3       3       4
14         5               5       5       5
15         1               4       4       3

Table 8: Sensitivity analysis results of the test case for the traditional method given in chapter 4.2.1, i.e. using a grid to select the points in the parameter space. We took p = 10 for the grid level and computed 10 000 runs. The results are given in terms of a ranking, i.e. the parameter with rank 1 is the most sensitive one.

Parameter  Analytics Rank  1. Run  2. Run  3. Run

1          9               10      10      10
2          8               9       9       9
3          13              14      14      14
4          11              11      11      11
5          15              15      15      15
6          12              12      12      12
7          10              8       8       8
8          7               7       7       7
9          6               6       6       6
10         14              13      13      13
11         2               1       1       1
12         3               2       2       2
13         4               4       4       3
14         5               5       5       5
15         1               3       3       4

Table 9: Sensitivity analysis results of the test case for the Continuous Morris Method given in chapter 4.2.2, where we used a fully random selection of the points in the parameter space. We took ∆ = 0.5 and calculated 10 000 runs. The results are given in terms of a ranking, i.e. the parameter with rank 1 is the most sensitive one.


Traditional Morris Method At first, we implemented and tested the standard Morris Method as described in chapter 4.2.1. We took ∆ = p/(2(p − 1)) with p = 10 and k = 10 000, where k := |{x_i}| is the number of runs. The trajectories were randomly chosen without any further selection. k was large enough to reproduce stable results and small enough to keep the computational effort relatively low. One can see in table 8 that we could allocate all parameters to the right group as mentioned in the section above.

Continuous Morris Method Next, we omitted the grid and implemented the method as described in chapter 4.2.2. We could again allocate all parameters to the right group, as one can see in table 9.

Conclusion Both the standard Morris Method and the Continuous Morris Method allocated all parameters to the right group. They were stable, i.e. the number of runs was large enough. Moreover, the Continuous Morris Method showed the same performance as the standard method, so it is a reasonable choice. We also notice that the results of both methods had permutations within the groups; these were due to the very small differences in the importance of the parameters within a group.

4.2.5. Results

We then applied the Continuous Morris Method to our psychological model. The results given in table 10 were stable and therefore indicated that the number of runs was large enough. Based on figure 2, where we plotted the standard deviation versus the averaged impact of the parameters (the mean elementary effects), one can identify the most important and the least important parameters of the first run as in table 11 and table 12.

4.2.6. Conclusion

The standard Morris Method and the adapted algorithm yielded stable results that were very similar for the test case, so it is reasonable to omit the grid Morris introduced. It is important to make clear that the local sensitivity analysis in the previous chapter and the global sensitivity analysis tools answer different questions: one can see from the results that a high global sensitivity does not imply a high local sensitivity and vice versa.


Parameter  1. Run  2. Run  3. Run

1          4       5       5
2          1       1       1
3          29      29      29
4          2       2       2
5          24      23      23
6          5       4       4
7          12      12      12
8          17      16      16
9          19      19      19
10         14      13      13
11         28      26      26
12         30      30      30
13         33      33      33
14         31      31      31
15         32      32      32
16         26      27      27
17         16      15      15
18         25      25      25
19         20      20      20
20         7       7       7
21         8       8       8
22         3       3       3
23         15      17      17
24         6       6       6
25         11      11      11
26         10      10      10
27         27      28      28
28         13      14      14
29         22      22      22
30         21      21      21
31         23      24      24
32         9       9       9
33         18      18      18

Table 10: Sensitivity analysis results of the psychological model with the Continuous Morris Method given in chapter 4.2.2. The number of runs is 10 000. The results are given in terms of a ranking, i.e. the parameter with rank 1 is the most sensitive one.


Rank  Parameter Number  Parameter Name

1     2                 anicCt
2     4                 anicWHR
3     22                startAccConst type 1
4     1                 cicWH
5     6                 anicCIwt
6     24                cognitionIntensity type 1
7     20                memory-reading-anicPIR
8     21                memory-reading-anicWIR

Table 11: The eight most sensitive parameters of the psychological model corresponding to the first run.

Rank  Parameter Number  Parameter Name

30    12                habitRaiseCom
31    14                memoryEvents-w
32    15                habitRaise0IntSlow
33    13                memoryEvents-s

Table 12: The four least sensitive parameters of the psychological model corresponding to the first run.

Figure 2: The standard deviation versus the impact (mean elementary effect) of the parameters for the first run.


5. Discussion

In this work we analyzed a psychological model with various techniques.

First, we used CMA-ES to find new parameter sets that reproduce the data we had from a campaign in Cuba. This method was not successful as a stand-alone tool; the reason for this is perhaps the small population size in the high-dimensional parameter space. In contrast, CMA-ES provided good results when started from the parameter set that had been determined by hand calibration.

We could then answer the question of global sensitivity with an improved version of the Morris Method. The most sensitive and least sensitive parameters are collected in table 11 and table 12, respectively. We did not test the proposed search for suitable trajectories, so it has not been used here, but it could be applied to reduce the computational costs in future projects.

For the determination of the local sensitivity, two different methods have been used. The adapted method, which we call the "Difference Quotient Approach", should be preferred to the Standard Method for this project. The parameters that are locally most sensitive are given in table 6. It is important to mention that the local sensitivity methods should not be mistaken for the global methods; for example, the parameter "memoryEvents-w" is locally sensitive but globally very insensitive.

We want to give a short insight into the model by stating the meaning of the most sensitive and influential parameters (more information can be found in [3]):

The most influential (i.e. globally sensitive) parameters:

anicCt describes the accessibility threshold, i.e. whether certain accessibilities are always remembered or not.

anicWHR is the weight of habit in remembering. It determines the influence of habits on remembering the specific behavior.

startAccConst type 1 sets the start conditions of type one, especially the start behavior, by defining the accessibility of the behavior at the simulation start.

cicWH produces a delay in the change of behavior preference due to an intervention; a high value means that an action is easy to perform.

anicCIwt represents how much a high mobilization of cognitive resources affects the behavior.

cognitionIntensity type 1 describes the capability and will to think about which behavior has to be executed. It is low if a person is tired, distracted or not motivated to try remembering.


memory-reading-anicPIR defines the range of behavior intensities in which the decay of the behavior intensity is slower. For values ≥ 0 and ≤ 1 the behavior intensities decay more slowly if they are smaller. For values > 1 the slower decay occurs for higher behavior intensities.

memory-reading-anicWIR specifies how the difficulty of remembering higher behavior intensities rises in spite of random effects. The higher the value, the more and the stronger random effects are modeled.

The least influential parameters:

habitRaiseCom describes how much faster habits develop if stronger commitments are formed.

memoryEvents-w influences the impact of random effects on the behavior performance.

habitRaise0IntSlow describes the weight of the effect of not showing the behavior at all, i.e. of no desired actions being performed.

memoryEvents-s is another parameter that influences the impact of random interferences on the behavior performance.

The parameters with the highest local sensitivity:

smOwnSign models the effectiveness of the used reminders.

meccCPVDD defines the speed of forgetting, i.e. the number of days after which (if ever) a certain behavior is forgotten.

memoryEvents-w influences the impact of random interferences on the behavior performance.

anicCt has been described above.

Finally, we point out the problem of the reliability of the data. In order to make more precise statements about the model and the methods introduced, it would be desirable to have access to the data of more studies of this kind.


Acknowledgements

Thanks to Christian Lorenz Müller for his excellent mentoring. Also thanks to Dr. Robert Tobias and Christian Wuerzebesser for the friendly cooperation. Finally, thanks to Prof. Ivo Sbalzarini for offering me an insight into the everyday life of his group, and to Thomas Lanz for proofreading the draft of this work.

References

[1] R. Tobias and H.-J. Mosler. How do commitments work? An agent-based simulation using data from a recycling campaign in Santiago de Cuba. Proceedings of ICAI'07 - The 2007 International Conference on Artificial Intelligence, Las Vegas, USA.

[2] www.eawag.ch/research/siam/sozialesysteme/group websites/introduction/jsns.html

[3] Personal communication with Robert Tobias.

[4] R. Tobias. Situational Cognitive Effects on Behavior-Selection - Empirically Founded Computer-Simulation of the Effects of Habits, Memory-Aids, Implementation Intentions, Self-Commitment and Situational Norms. Dissertation, University of Zurich, Switzerland, 2006.

[5] N. Hansen. The CMA Evolution Strategy: A Tutorial, Nov 2005.

[6] F. Campolongo, J. Cariboni, A. Saltelli, and W. Schoutens. Enhancing the Morris method. In K. Hanson and F. Hemez, editors, Proceedings of the 4th International Conference on Sensitivity Analysis of Model Output (SAMO 2004), pages 369-379. Los Alamos National Laboratory, Los Alamos, U.S.A., 2005.

[7] A. Saltelli, K. Chan, and E. M. Scott. Sensitivity Analysis. John Wiley and Sons, 2000.

[8] F. Campolongo. Screening methods in sensitivity analysis: The Morris method and its applications. JRC-Ispra, 22-23 November, 2005.

[9] J. Oakley and A. O'Hagan. Probabilistic sensitivity analysis of complex models: a Bayesian approach. J. Roy. Stat. Soc. B, 66:751-769, 2004.
