Evolutionary Game Theory Notes - University of Warwick · 2016-07-28

Evolutionary Game Theory Notes

James Massey

These notes are intended to be a largely self-contained guide to everything you need to know for the evolutionary game theory part of the EC341 module. During lectures we will work through these notes, starting with a quick skim through Chapter 2 (which should be a revision of already known material for most of you) before moving on to Chapters 3 and 4, where we will spend most of our time. One notable thing missing from these notes is diagrams. You will find these very useful for making sense of these notes and should make sure to copy them down, together with anything else covered in lectures that may help your understanding. These notes also contain many exercises, which I recommend you attempt. Separately I will also give you some further exercises and a sample exam question to give an idea of the kind of thing I might ask in an exam. Good luck!


Chapter 1

Overview

While non-cooperative game theory assumes hyper-rational agents predicting and best responding to each other's play to arrive at equilibria, evolutionary game theory takes a somewhat different approach. Here we dispense with such assumptions in favour of a milder assumption of evolutionary pressure towards the increasing representation of better strategies, and study the results.¹ Take for example the question of which side of the road to drive a car (absent any laws which solve the problem). If two drivers are heading in opposite directions towards each other, we can identify two mutually advantageous outcomes: where they pass each other with both driving on the left; and where they pass each other with both driving on the right. While non-cooperative game theory identifies both as Nash Equilibria, it is not obvious how two players should be able to coordinate in this way. The evolutionary game theory way is to look at the environment, see how the various strategies fare and how the environment is therefore likely to change when we take evolutionary pressure into account. If we start at a point in time where more people drive on the left than the right, then those agents who adopt this strategy are doing better than those who stick to the right. So evolutionary pressures would increase the representation of agents sticking to the left until everybody favoured this strategy.

¹Here "better" is defined with respect to the environment, in much the same way as it depends on the strategies of one's opponents in a classical non-cooperative game.

In this and many other examples the predictions of evolutionary game theory coincide with those of non-cooperative game theory, while often helping to explain them. In fact, as will be seen, the central evolutionary game theoretic concepts of Evolutionarily Stable Strategies and stability in the replicator dynamics give slight refinements of Nash Equilibrium. In an economic context there are three main ways that I will mention in which one can interpret evolutionary pressure: i) the biological interpretation; ii) imitating other more successful agents; iii) observing and myopically best responding to one's environment.

The biological interpretation would be based on natural selection, where we interpret payoffs to strategies as reproductive fitness. Agents with strategies giving higher than average reproductive fitness expand quicker and so the proportion of agents following this strategy grows. To give this credence, consider a setting consisting of several firms who do business by interacting with one another in a market place. A firm adopting a successful strategy will expand and so form a larger part of the market, whereas firms adopting unsuccessful strategies go out of business. The imitative interpretation would allow for slightly more sophisticated firms, who can switch strategies. They can see what strategies others adopt, how well they fare, and can switch strategy to mimic firms more successful than themselves. The third approach is to allow for agents who are more sophisticated still. Under this form of evolutionary pressure a firm would be able to look at what all other firms are doing and strategically choose a best response to the current distribution of other firms' strategies.

Ever since 1859, with the work of Charles Darwin, biologists have recognised the fact that the genetic make-up of animals and plants evolves over time, with genes more favourable to their environment being selected by nature. It is unsurprising, therefore, that the field of evolutionary game theory originated in the biological literature with the seminal work of Maynard Smith and Price in 1973. They developed a mathematical framework leading to the central definition of Evolutionarily Stable Strategy (ESS) in order to explain the level of aggression in animal populations.

The concept of Evolutionarily Stable Strategy attempts to capture the idea of resistance to mutations in a static environment. It asks the following question: if we introduce a small number of mutants into our population, how will they fare? If this is a beneficial mutation, they will have higher reproductive fitness than the overall population and hence proportionately grow. On the other hand, if the mutants have lower reproductive fitness, then with time the proportion of agents with this mutation will decline and tend to zero, and we say this mutation dies out. An ESS is a population state which is resistant to all such small mutations in this way.

Since then, attention has been focused on more dynamic models. In 1978 Taylor and Jonker introduced what is now known as the replicator dynamic to explicitly model the process of natural selection. The idea is that agents reproduce asexually according to their reproductive fitness, giving offspring with the same genetically determined behaviour as themselves. The consequence of this is that the proportion of agents with characteristic i, say xi, changes in proportion with xi, and is either increasing or decreasing depending on whether agents with characteristic i do better or worse than the average payoff in the whole population. As well as this biological interpretation of the replicator dynamic, there are several other interpretations, including agents imitating other more successful agents. So, while there are other selection dynamics, we will focus on the replicator dynamic due to its widespread appeal.
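The verbal description above can be sketched in a few lines of code. This is a minimal discrete-time illustration only, not the formal definition given later in the module; the payoff matrix and step size are assumptions chosen for illustration (a symmetric 2x2 game in which the first strategy strictly dominates).

```python
# Minimal sketch of a discrete-time replicator step for a symmetric 2x2 game.
# The matrix A and step size dt are illustrative assumptions.

A = [[3.0, 7.0],
     [1.0, 5.0]]   # first strategy strictly dominates the second

def payoff(i, x):
    """Expected payoff u(e_i, x) of pure strategy i against population state x."""
    return sum(A[i][j] * x[j] for j in range(len(x)))

def avg_payoff(x):
    """Average payoff u(x, x) in the population."""
    return sum(x[i] * payoff(i, x) for i in range(len(x)))

def replicator_step(x, dt=0.01):
    """x_i changes in proportion to x_i times its payoff advantage over the average."""
    ubar = avg_payoff(x)
    return [xi + dt * xi * (payoff(i, x) - ubar) for i, xi in enumerate(x)]

x = [0.1, 0.9]              # start with mostly strategy-2 agents
for _ in range(5000):
    x = replicator_step(x)
# the dominant first strategy takes over: x approaches (1, 0)
```

Note that each step preserves the simplex constraint, since the increments sum to zero: the total change is dt times (average payoff minus average payoff).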

In the 1990s economists built on this work to create dynamic models allowing for both mutation and selection. The introduction of mutation necessarily means that these are stochastic models, as opposed to the deterministic replicator dynamic. The solution concept used is stochastic stability, which looks at what state we would expect to observe in the long run as the probability of mutations tends to zero. While ESS refines the set of Nash Equilibria, stochastic stability refines the set of ESS. It is somewhat akin to risk-dominance and so will often select a unique outcome where the former do not. Unfortunately, due to time constraints, none of this material will be covered, although students interested in it are welcome to ask me for references on this work.

Chapter 2 will introduce the general evolutionary game theory framework and go over some useful results from non-cooperative game theory. Chapter 3 looks at the static notion of ESS and its compatriot, NSS. We show how they refine the set of Nash Equilibria. In Chapter 4 we turn our focus to the replicator dynamic and discuss basins of attraction and stability. We show how stability under the replicator dynamic compares with Nash Equilibrium and ESS.


Chapter 2

Framework and Notation

There are three parts to this chapter: the first lays out the basic notation which will be followed throughout these notes and gives a brief coverage of some non-cooperative game theory that will be useful later. The second and third parts explain the evolutionary game theory model in one and two population settings.

2.1 Notation and basics of non-cooperative game theory

Throughout these notes, the focus will be on games between two players, each with a finite number of strategies. This allows the following representation of a game G as a bimatrix game:

Definition 2.1.1. A bimatrix game is G = (A,B) where both A and B are m × n matrices (where both m and n are integers greater than or equal to 1).

G = (A,B) =
[ a11, b11   a12, b12   ...   a1n, b1n
  a21, b21     .                 .
     .             .             .
  am1, bm1   ...        ...   amn, bmn ]

Player 1 has the pure strategy set S1 = {1, . . . , m} and his strategies correspond to choosing between the m rows. Player 2 has the pure strategy set S2 = {1, . . . , n} and his strategies correspond to choosing between the n columns. If they choose the ith row and jth column, then aij and bij are the payoffs to Players 1 and 2 respectively.

These payoffs are interpreted as von Neumann-Morgenstern payoffs to avoid any need for considerations of risk aversion or the like. This is necessary to allow us to define the payoffs of mixed strategies as the weighted averages of payoffs for pure strategies, as we will do below.

Definition 2.1.2. The set of mixed strategies for player 1 is

∆1 = { x = (x1, . . . , xm)^T ∈ R^m_≥0 : ∑_{i=1}^m xi = 1 }

where R^m_≥0 is the set of m-dimensional vectors with all arguments non-negative and the "T" superscript denotes "transpose". The interpretation of xi is the probability with which strategy i is played. Similarly, the set of mixed strategies for player 2 is

∆2 = { y = (y1, . . . , yn)^T ∈ R^n_≥0 : ∑_{j=1}^n yj = 1 }

A strategy x ∈ ∆i is called pure if xj = 1 for some j, and we denote this strategy ej. Let ∆ = ∆1 × ∆2 denote the joint mixed strategy space.

Given a strategy x, define the support (or carrier) of x as C(x) = {j | xj > 0}.


We can define the interior of ∆1 as all the strategies not on the boundary¹:

int(∆1) = { x = (x1, . . . , xm)^T ∈ R^m_>0 : ∑_{i=1}^m xi = 1 }

We can define int(∆2) similarly, and int(∆) = int(∆1) × int(∆2).

I will use the notation i for a player in {1, 2} and −i for the opposing player. Let ui(σi, σ−i) denote the utility of i playing strategy σi when his opponent plays strategy σ−i. With this notation, when player 1 plays x and player 2 plays y, the utility player 1 receives is the expected payoff

u1(x, y) = x · Ay = ∑_{i=1}^m ∑_{j=1}^n xi yj aij

and the utility player 2 receives is the expected payoff

u2(y, x) = y · B^T x = x · By = ∑_{i=1}^m ∑_{j=1}^n xi yj bij
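These expected-payoff formulas can be computed directly. The sketch below is a minimal illustration; the particular matrices are an illustrative choice (game i) of Exercise 2.1.3 below).

```python
# Sketch: expected payoffs u1(x, y) = x·Ay and u2(y, x) = x·By for a 2x2
# bimatrix game. The matrices are an illustrative choice.

A = [[4.0, 0.0],
     [5.0, 2.0]]
B = [[4.0, 5.0],
     [0.0, 2.0]]

def u1(x, y):
    """sum over i, j of x_i * a_ij * y_j"""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def u2(x, y):
    """sum over i, j of x_i * b_ij * y_j"""
    return sum(x[i] * B[i][j] * y[j] for i in range(2) for j in range(2))

# Pure strategies are the degenerate mixtures e1 = (1, 0) and e2 = (0, 1):
e1, e2 = (1.0, 0.0), (0.0, 1.0)
half = (0.5, 0.5)
```

With these matrices, u1(e1, e1) = a11 = 4 and u1(half, half) is the simple average of the four aij entries.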

Exercise 2.1.1. Consider the general m = n = 2 game,

G = [ a11, b11   a12, b12
      a21, b21   a22, b22 ]

and let player 2's strategy be y = (y1, y2). Find the payoffs to player 1 from employing strategies e1, e2 and x = (x1, x2).

With this in place, we can now define the notions of weak and strict dominance.

Definition 2.1.3. Strategy x ∈ ∆i strictly dominates w ∈ ∆i if ui(x, y) > ui(w, y) ∀y ∈ ∆−i.

Strategy x ∈ ∆i weakly dominates w ∈ ∆i if ui(x, y) ≥ ui(w, y) ∀y ∈ ∆−i and ∃y′ ∈ ∆−i such that ui(x, y′) > ui(w, y′).

A strategy x ∈ ∆i is strictly (weakly) dominant if it strictly (weakly) dominates all other strategies in ∆i.

¹That is, with xi ≠ 0 for all i. In other words, all completely mixed strategies.


Note that strict dominance implies weak dominance.

Example 2.1.1. Let

G = [ 1, 1   4, 2
      4, 0   1, 1
      2, 0   2, 0 ]

then
i) for player 2: e2 weakly dominates e1.
ii) for player 1: (1/2, 1/2, 0) strictly dominates e3.

Given one player's strategy, we can describe the other player's best replies as follows:

Definition 2.1.4. Given a strategy y ∈ ∆−i, denote the best response correspondence of player i by BRi(y) ⊆ ∆i:

BRi(y) = {x ∈ ∆i | ui(x, y) ≥ ui(w, y) ∀w ∈ ∆i}

A Nash Equilibrium is a pair of strategies (x, y) ∈ ∆1 × ∆2 = ∆ such that x is a best response to y, and y is a best response to x.

To be a strict Nash Equilibrium, we require x and y to be the uniquebest responses to each other.
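For pure strategies this definition can be checked mechanically: (i, j) is a pure Nash Equilibrium exactly when row i maximises column j of A and column j maximises row i of B. A hedged sketch of this enumeration (the matrices are again an illustrative choice):

```python
# Sketch: enumerate pure-strategy Nash Equilibria of a bimatrix game by
# checking mutual best replies cell by cell. Matrices are illustrative.

def pure_nash(A, B):
    m, n = len(A), len(A[0])
    eqs = []
    for i in range(m):
        for j in range(n):
            best_for_1 = A[i][j] == max(A[k][j] for k in range(m))  # i maximises column j of A
            best_for_2 = B[i][j] == max(B[i][l] for l in range(n))  # j maximises row i of B
            if best_for_1 and best_for_2:
                eqs.append((i, j))
    return eqs

A = [[4, 0],
     [5, 2]]
B = [[4, 5],
     [0, 2]]
```

For this prisoner's-dilemma-style game the only pure equilibrium is the second strategy pair; a matching-pennies matrix returns an empty list, consistent with its equilibrium being in mixed strategies.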

Exercise 2.1.2. Show the following:
i) Only pure strategies can be strictly dominant.
ii) If both players follow strictly dominant strategies then this constitutes a strict Nash Equilibrium, and there are no other Nash Equilibria.
iii) If both players follow weakly dominant strategies then this must constitute a Nash Equilibrium. Must this Nash Equilibrium necessarily be unique?

I state the following well-known result without proof²:

Theorem 2.1.1. Every bimatrix game has at least one Nash Equilibrium.

²Students interested in a proof can find one in Weibull or any standard game theory textbook.


Exercise 2.1.3. Find all Nash Equilibria of the following games and calculate players' payoffs at these equilibria:

i) [ 4, 4   0, 5
     5, 0   2, 2 ]

ii) [ 4, 4   0, 5
      5, 0   −1, −1 ]

iii) [ 2, 3   0, 0
       0, 0   1, 1 ]

iv) [ 1, 0   0, 1
      0, 1   1, 0 ]

Symmetric games

In evolutionary game theory, we are often particularly interested in symmetric games and symmetric equilibria.

Definition 2.1.5. An m × n bimatrix game G = (A,B) is symmetric if m = n and B = A^T, where A^T is the transpose of A.

A Nash Equilibrium (x, y) is symmetric if x = y.

If the game is symmetric, then both players have the same strategy space, which I denote by ∆, and the payoff function is symmetric. So I drop the player subscript and denote by u(x, y) = x · Ay the utility from playing strategy x ∈ ∆ against y ∈ ∆.

In a symmetric game, we denote the set of symmetric Nash Equilibria by ∆NE ⊆ ∆, and the set of strict symmetric Nash Equilibria by ∆NE> ⊂ ∆. Note that any strict Nash Equilibrium must be in pure strategies. Generally in a symmetric game there can be equilibria which are not symmetric, but the relevant equilibria for our purposes are the symmetric ones. One useful result (proof not given - see Weibull) is that every symmetric bimatrix game has at least one symmetric Nash Equilibrium:

Proposition 2.1.1. For every symmetric bimatrix game, ∆NE ≠ ∅.
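Both the symmetry condition B = A^T and symmetric pure equilibria are easy to check mechanically. A sketch, using the coordination game iv) of the exercise below as an illustrative choice:

```python
# Sketch: test whether a bimatrix game is symmetric (B = A transposed) and
# list symmetric pure Nash Equilibria (i, i). The matrices are illustrative.

def is_symmetric_game(A, B):
    m, n = len(A), len(A[0])
    return m == n and all(B[i][j] == A[j][i] for i in range(m) for j in range(n))

def symmetric_pure_nash(A):
    # (i, i) is a symmetric pure NE iff pure strategy i is a best reply to itself
    n = len(A)
    return [i for i in range(n) if A[i][i] == max(A[k][i] for k in range(n))]

A = [[1, 0],
     [0, 2]]
B = [[1, 0],
     [0, 2]]   # equals A transposed here, since A happens to be a symmetric matrix
```

Note that only the best-reply check against the same column index is needed for symmetric pure equilibria, because both players face the same payoff matrix A.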


Exercise 2.1.4. For each of the following symmetric games find the set of all Nash Equilibria, ∆NE and ∆NE>:

i) [ 4, 4   0, 5
     5, 0   2, 2 ]

ii) [ 4, 4   0, 5
      5, 0   −1, −1 ]

iii) [ 1, 1   0, 0
       0, 0   0, 0 ]

iv) [ 1, 1   0, 0
      0, 0   2, 2 ]

v) [ 0, 0    1, −1   −1, 1
     −1, 1   0, 0    1, −1
     1, −1   −1, 1   0, 0 ]

2.2 Evolutionary games of one population

We assume there is a large population of agents, each of whom is genetically hardwired to play a particular strategy, and discuss the biological interpretation. Agents meet randomly in pairs and the payoff each agent receives is the expected payoff from being matched against a randomly selected opponent. As we shall argue, in this setting it is natural to restrict attention to symmetric games and symmetric equilibria.

If we had an asymmetric game, when two players meet, one must be drawn in the player 1 role and the other in the player 2 role. But since they are both randomly drawn from the same population, there is no clear way to assign them. One could suppose that in every bilateral meeting each agent occupies each of the two player roles with equal probability, but this has the effect of turning the asymmetric game into a symmetric one.

In this setting we can think of the environment (population state) as describing the prevalence of each of the pure strategies in the population. Hence there is a one-to-one correspondence between the set of population states and the set of mixed strategies.

Definition 2.2.1. A population state x ∈ ∆ lists the prevalence of eachof the pure strategies in the population.

Remark 2.2.1. There are two possible interpretations for this:
1. Every agent is genetically hardwired to play the mixed strategy x.
2. Every agent is genetically hardwired to play a pure strategy: for each i ∈ {1, . . . , m}, xi is the proportion of agents following the ith pure strategy, ei.

Given that agents meet randomly in pairs, and the population is sufficiently large, the probability that a specific agent's randomly selected opponent plays the ith pure strategy can be taken to be xi.³ Thus an agent following the strategy y ∈ ∆ when the population state is x ∈ ∆ will receive an expected payoff of u(y, x), while the average payoff in the population is u(x, x). Logically an agent following a strategy that does better than average will grow quicker and proportionately expand, shifting the population state. Before we move on to the more formal analysis in Chapters 3 and 4, it may be useful to consider how we would expect the population state to change in some simple examples.

Exercise 2.2.1. Consider the interpretation where each agent follows a pure strategy. For each of the following examples, draw a graph with the proportion of agents following the first strategy on the horizontal axis and the utilities to each pure strategy on the vertical axis. How would you expect the population state to change over time?

i) [ 4, 4   0, 5
     5, 0   2, 2 ]

ii) [ 0, 0   1, 1
      1, 1   0, 0 ]

iii) [ 4, 4   0, 5
       5, 0   −1, −1 ]

iv) [ 1, 1   0, 0
      0, 0   2, 2 ]

What are the rest points of the dynamics?

³Note the role of the assumption that the population is large. Since in most applications it makes sense to assume an agent never plays against itself, the actual probability of meeting an agent playing strategy i may be slightly different from xi. But as the population size increases, this difference tends to zero.

In these examples one should be able to see similarities between the rest points and the symmetric Nash Equilibria of these games. As demonstrated in Chapters 3 and 4, this is not merely coincidental. Also notice that any asymmetric Nash Equilibria, such as those in the second example, do not enter our analysis. This is because all agents are randomly drawn to face each other, meaning that there is a chance two e1 agents are drawn together, and a chance that two e2 agents are drawn together. If it were the case that e1 agents were only drawn to play with e2 agents, then something resembling the asymmetric equilibrium could be achieved. However, this would be to go down the route of two distinct populations of agents, with only inter-population interaction. This is the topic of the next section.

2.3 Evolutionary games of two populations

We assume two large populations of agents, one of row-playing agents representing player 1, the other of column-playing agents representing player 2. This setting can be used to analyse asymmetric games, or symmetric games in which players take distinct roles in the game, so that we can distinguish whether an agent is in the player 1 or player 2 role. A population state defines the prevalence of each pure strategy in each population. Once again, in each population, there is a one-to-one correspondence between the population state and the set of mixed strategies.


Definition 2.3.1. A population state (x, y) ∈ ∆1 × ∆2 = ∆ lists the prevalence of each of the pure strategies in each population.

There is random inter-population matching, so that each population 1 agent is randomly matched with a population 2 agent and vice-versa. So, given population state (x, y) ∈ ∆, a population 1 agent following strategy x receives payoff u1(x, y) and a population 2 agent following strategy y receives payoff u2(y, x), while the average payoffs of population 1 and population 2 agents are u1(x, y) and u2(y, x) respectively. The evolutionary dynamics say that strategies with above average payoff should experience an increase in their proportion of the population. In Chapter 4, we will see similarities between the rest points of the population state under such a dynamic and the set of Nash Equilibria (this time including asymmetric equilibria).
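A two-population analogue of the one-population sketch is easy to write down: each population's strategy shares grow with their payoff advantage over that population's own average. The matrices, starting state, and step size below are illustrative assumptions, and this discrete-time step is an informal stand-in for the dynamic treated in Chapter 4.

```python
# Sketch: one discrete-time replicator step per population in an asymmetric
# (two-population) setting. A is the row player's matrix, B the column
# player's; both coordination-style matrices are illustrative.

A = [[2.0, 0.0],
     [0.0, 1.0]]
B = [[3.0, 0.0],
     [0.0, 1.0]]

def step(x, y, dt=0.01):
    u1 = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]  # row payoffs vs y
    u2 = [sum(x[i] * B[i][j] for i in range(2)) for j in range(2)]  # column payoffs vs x
    ubar1 = sum(x[i] * u1[i] for i in range(2))
    ubar2 = sum(y[j] * u2[j] for j in range(2))
    x = [x[i] + dt * x[i] * (u1[i] - ubar1) for i in range(2)]
    y = [y[j] + dt * y[j] * (u2[j] - ubar2) for j in range(2)]
    return x, y

x, y = [0.9, 0.1], [0.9, 0.1]   # both populations start near the first strategy
for _ in range(5000):
    x, y = step(x, y)
# starting in its basin of attraction, play converges to the pure profile (e1, e1)
```

Here each population updates against the other population's current state, which is exactly the inter-population matching described above.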


Chapter 3

ESS

3.1 What is an ESS?

In this chapter, we build on Section 2.2. All the analysis is with respect to agents being randomly matched in pairs from a single population. If we are at a Nash Equilibrium, then each pure strategy present gets the same payoff, and so applying an evolutionary dynamic would not change the population state. In this sense, absent mutations, such a state is a true equilibrium: once entered, we won't leave. However, as we will see, some equilibria are more stable than others.

An Evolutionarily Stable Strategy (ESS) is a mixed strategy, or population state, that is resistant to small mutations. It asks the following question: if everyone is playing this mixed strategy, say x ∈ ∆, is this resistant to the introduction of small mutations? To be more precise, if we introduce a small proportion of mutants playing y ∈ ∆, will the mutants obtain lower payoff than the rest of the population, and hence will this mutation be driven out? If the proportion of mutants is ε then the new population state is εy + (1 − ε)x and we are asking whether the following inequality holds:

u(x, εy + (1 − ε)x) > u(y, εy + (1 − ε)x)     (3.1.1)


If for every strategy other than x we can find an invasion barrier such that all mutations of a lower proportion are driven out, then we say that x is an ESS.

Definition 3.1.1. x ∈ ∆ is an ESS if for every strategy y ∈ ∆, y ≠ x, there exists some εy such that for all ε ∈ (0, εy) inequality (3.1.1) holds.

Intuitively, when ε is small, we can see that whether a mutation is driven out will primarily depend on how it fares against the incumbent population; how it fares against the mutants themselves is only of secondary importance. The following proposition shows this to be the case.

Proposition 3.1.1. A strategy x ∈ ∆ is an ESS if and only if it meets the following first-order and second-order best reply conditions:

u(y, x) ≤ u(x, x) ∀y     (3.1.2)

u(y, x) = u(x, x) ⇒ u(y, y) < u(x, y) ∀y ≠ x     (3.1.3)

Proof. The key formula for ESS is

u(x, εy + (1 − ε)x) > u(y, εy + (1 − ε)x)     (3.1.4)

Take y ∈ ∆, y ≠ x. There are four possible scenarios:

i) u(y, x) < u(x, x). If u(y, y) ≤ u(x, y) then clearly inequality (3.1.4) holds ∀ε ∈ (0, 1). If u(y, y) > u(x, y) then define

εy = ( (u(y, y) − u(x, y)) / (u(x, x) − u(y, x)) + 1 )^(−1)     (3.1.5)

Then εy ∈ (0, 1) and a few lines of routine algebra show that inequality (3.1.4) holds ∀ε ∈ (0, εy).

ii) u(y, x) = u(x, x) and u(y, y) < u(x, y). Clearly inequality (3.1.4) holds ∀ε ∈ (0, 1).

iii) u(y, x) = u(x, x) and u(y, y) ≥ u(x, y). Clearly inequality (3.1.4) is violated ∀ε ∈ (0, 1).

iv) u(y, x) > u(x, x). If u(y, y) ≥ u(x, y) then clearly inequality (3.1.4) is violated ∀ε ∈ (0, 1). If u(y, y) < u(x, y) then it can be shown that inequality (3.1.4) is violated for all sufficiently small ε.

Thus we have shown that for each y ∈ ∆, y ≠ x, we can find εy such that inequality (3.1.4) holds ∀ε ∈ (0, εy) if and only if x satisfies conditions (3.1.2) and (3.1.3).
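The algebra behind the invasion barrier (3.1.5) can be checked numerically. The sketch below sets up case i) with u(y, y) > u(x, y) in an illustrative 2x2 coordination game, with incumbent x = e1 and mutant y = e2; expanding both sides of (3.1.4) linearly in ε shows the inequality should hold exactly for ε below the barrier and fail above it.

```python
# Sketch: numerically check the invasion barrier of equation (3.1.5) in
# case i) of the proof. The coordination game is an illustrative choice:
# u(y,x) = 0 < 2 = u(x,x) and u(y,y) = 1 > 0 = u(x,y).

A = [[2.0, 0.0],
     [0.0, 1.0]]

def u(p, q):
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

x, y = (1.0, 0.0), (0.0, 1.0)   # incumbent and mutant

# equation (3.1.5); works out to 2/3 for this game
eps_y = 1.0 / ((u(y, y) - u(x, y)) / (u(x, x) - u(y, x)) + 1.0)

def resists(eps):
    """Does inequality (3.1.4) hold at mutant share eps?"""
    z = tuple(eps * yi + (1 - eps) * xi for xi, yi in zip(x, y))
    return u(x, z) > u(y, z)
```

Evaluating `resists` just below and just above `eps_y` confirms the barrier is tight for this example: small invasions of y are driven out, large ones are not.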

Exercise 3.1.1. Show that if x ∈ ∆ is weakly dominated then it can't be an ESS.

Proposition 3.1.1 gives us an alternative, and in many cases easier to apply, characterisation of ESS. We first compare how both the incumbent and mutant strategies fare against the incumbent, and only if they do equally well do we use the payoffs against mutants as a tie-breaker.

To be an ESS, we require that the mutants be driven out. A weaker, but linked, condition is that the proportion of mutants does not expand. This is the notion of Neutrally Stable Strategy (NSS).

Definition 3.1.2. x ∈ ∆ is an NSS if for every strategy y ∈ ∆, y ≠ x, there exists some εy such that for all ε ∈ (0, εy),

u(x, εy + (1 − ε)x) ≥ u(y, εy + (1 − ε)x)

Proposition 3.1.2. A strategy x ∈ ∆ is an NSS if and only if it meets the following first-order and second-order best reply conditions:

u(y, x) ≤ u(x, x) ∀y     (3.1.6)

u(y, x) = u(x, x) ⇒ u(y, y) ≤ u(x, y) ∀y ≠ x     (3.1.7)

Exercise 3.1.2. Prove Proposition 3.1.2


Define ∆ESS and ∆NSS to be the sets of Evolutionarily Stable and Neutrally Stable Strategies respectively.

Exercise 3.1.3. Show that for any symmetric bimatrix game G, the following is true:

∆NE ⊇ ∆NSS ⊇ ∆ESS ⊇ ∆NE>

Further, show why no "⊇" can be replaced by "=" in the above statement.

(Hint: use the game

G = [ α, α   1, 0
      0, 1   1, 1 ]

and consider 3 cases: i) α < 0, ii) α = 0, iii) α > 0.)

3.2 ESS in some popular symmetric games

First, a note on notation: since all the games considered here are symmetric, for a game G = (A,B) we know that B = A^T. So to describe a game, it's enough to describe the matrix A = (aij),

A = [ a11   a12   ...   a1m
      a21    .           .
       .        .        .
      am1   ...   ...   amm ]

although I will often still add the B matrix for clarity.

Prisoner’s dilemma

This game is given by m = 2, a11 > a21, a12 > a22 and a11 < a22. In words, this means the first strategy strictly dominates the second. However, when both players play the first strategy, it gives an outcome which is Pareto dominated by both playing the second. There are many applications of this game, for example two firms deciding whether to co-operate by restricting output and so raising prices, or rival nations stockpiling weapons. The one I give below is a biological application.

Example 3.2.1. Large (trait 1) and Small (trait 2) Beetles

Consider a large population of beetles wandering around discovering food sources. When a beetle is seen eating, it attracts a neighbouring beetle who looks to share the food (of size 10). There are two kinds of beetle: small and large. When two beetles of the same size meet, they share the food equally. However, when a large beetle meets a small one, it is able to use its greater size to intimidate the smaller beetle into giving it most of the food (say 9 units to 1). However, due to their larger bodies, the larger beetles also need more food to maintain their metabolism (a cost of 2 units). This gives the following payoff matrix:

G = [ 3, 3   7, 1
      1, 7   5, 5 ]

We see that trait 1 (large) strictly dominates trait 2 (small). So for any population state, large beetles have a higher reproductive fitness than small beetles. From this it is easy to see that the unique ESS will consist of the entire population being large, e1.
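The claim that e1 is an ESS here can be probed numerically using the characterisation in Proposition 3.1.1, testing both conditions against a grid of mutant strategies. This is a rough sketch only: the grid resolution and tolerance are arbitrary choices, and passing a finite grid check is suggestive rather than a proof.

```python
# Sketch: numerically test the first- and second-order conditions of
# Proposition 3.1.1 for a candidate x against a grid of mutants y, in the
# beetle game. Grid size and tolerance are arbitrary illustrative choices.

A = [[3.0, 7.0],
     [1.0, 5.0]]   # beetle game: Large (trait 1) vs Small (trait 2)

def u(p, q):
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

def looks_like_ess(x, steps=1000, tol=1e-9):
    for k in range(steps + 1):
        y = (k / steps, 1 - k / steps)
        if abs(y[0] - x[0]) < tol:
            continue                      # skip the candidate itself
        if u(y, x) > u(x, x) + tol:
            return False                  # first-order condition (3.1.2) fails
        if abs(u(y, x) - u(x, x)) <= tol and u(y, y) >= u(x, y):
            return False                  # second-order condition (3.1.3) fails
    return True
```

Running this with x = (1, 0) passes (all mutants do strictly worse against the incumbent, so the first-order condition is strict), while x = (0, 1) fails immediately, matching the strict-dominance argument in the text.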

Hawk-Dove

This is the classic environment Maynard-Smith and Price developed the toolof ESS to analyse. The scenario is as follows: We have a large population ofanimals contesting scarce resources (eg food, nesting sites, territory). Whentwo animals meet, some will be prepared to fight for these resources, whileothers will merely display aggression but if pushed, will back down and con-cede the resource. The aggressive trait (or strategy) is termed “Hawk”, whilethe more passive strategy is termed “Dove”. When two Doves meet they eachhave equal chance of getting the resource of value V > 0 (or alternatively wecan assume they share the resource); when a Hawk meets a Dove the former


intimidates the latter and so takes the resource. If two Hawks meet then they fight for the resource, each winning half the time, but sustaining an injury with cost C > 0 the other half of the time. Letting the first trait (strategy) be Hawk, this gives the payoff matrix:

G =
| (V−C)/2, (V−C)/2   V, 0 |
| 0, V               V/2, V/2 |        (3.2.1)

If V > C then Hawk is a strictly dominant strategy, and so the game becomes a prisoner's dilemma. The norm is to consider C > V, which turns it into a game of chicken¹. In this game there are two asymmetric Nash Equilibria, where one player is a Hawk and the other a Dove. However, as argued in Chapter 2.2, they are unachievable in our one population setup. The relevant Nash Equilibrium is the symmetric one in mixed strategies, and it turns out that this is an ESS.

Lemma 3.2.1. The Hawk-Dove Game (equation 3.2.1), with C > V, has one ESS. This is x = (V/C, 1 − V/C).

Proof. Let x = (V/C, 1 − V/C) and note that

i) u(y, x) = u(x, x) ∀y ∈ ∆
ii) u(y, y) < u(x, y) ∀y ∈ ∆, y ≠ x (one can argue this diagrammatically)

Then by Proposition 3.1.1, x = (V/C, 1 − V/C) is an ESS. Furthermore, condition ii) implies there are no other symmetric Nash Equilibria, and so this is the unique ESS.

This game has a nice interpretation: in a population consisting mainly of Doves, a Hawk can come and bully the Doves, most of the time getting the full resource value V without a fight. However, if the proportion of Hawks is too high, a Hawkish strategy runs a high chance of a fight and injury, and so the Doves do better. So from any starting position, the population state should return to the balance of Hawks and Doves predicted by the ESS.

¹A symmetric two-strategy game is a game of Chicken if a12 > a22 > a21 > a11.
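The two conditions in the proof of Lemma 3.2.1 can be spot-checked numerically. The Python sketch below uses the illustrative values V = 4, C = 8 (any C > V > 0 behaves the same way) and verifies that x = (V/C, 1 − V/C) earns the same payoff as every pure strategy against x, and strictly outperforms every other strategy y in the post-entry test u(x, y) > u(y, y).

```python
# Numerical spot-check of the ESS conditions for Hawk-Dove.
# Illustrative parameter values: V = 4, C = 8 (any C > V > 0 is similar).
V, C = 4.0, 8.0
A = [[(V - C) / 2, V],        # row 1: Hawk's payoff vs (Hawk, Dove)
     [0.0, V / 2]]            # row 2: Dove's payoff vs (Hawk, Dove)

def u(p, q):
    """Expected payoff u(p, q) to a p-strategist matched with a q-strategist."""
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

x = (V / C, 1 - V / C)        # candidate ESS: (1/2, 1/2) here

# i) every strategy earns the same against x, so u(y, x) = u(x, x) for all y
assert abs(u((1.0, 0.0), x) - u(x, x)) < 1e-12
assert abs(u((0.0, 1.0), x) - u(x, x)) < 1e-12

# ii) u(y, y) < u(x, y) for every y != x (checked on a grid of mixed strategies)
for k in range(101):
    y = (k / 100, 1 - k / 100)
    if y != x:
        assert u(y, y) < u(x, y)

print("ESS conditions i) and ii) hold for x =", x)
```

With these payoffs the gap u(x, y) − u(y, y) works out to (1 − 2y1)², which is zero exactly at the ESS and positive elsewhere, matching the diagrammatic argument.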


Co-ordination games

These are games where the two players want to co-ordinate on the samestrategy. In the two strategy case, these are given by a11 > a21 and a22 > a12.More generally, in games with any number of strategies, this requires thataii > aij for any j 6= i. These games have symmetric strict Nash Equilibriawhere all players play the same strategy. Since these are strict equilibria,they are also ESS. In addition, there are also mixed strategy Nash equilibria.These are not ESS but do have some relevance, as they determine an ESS’sresistance to mutations (that is the ε in equation 3.1.1).

Exercise 3.2.1. Consider the game

G =
| α, α   0, 0 |
| 0, 0   1, 1 |

where α > 0.

i) Find ∆NE.
ii) Verify that e1 and e2 are ESS and find the largest proportion of mutations that each can withstand.

The ESS notion is unable to distinguish between strict Nash Equilibria.²

Rock-Paper-Scissors

This is a symmetric game, often played by kids, where the first strategy (Rock) loses to the second strategy (Paper), which loses to the third strategy (Scissors), which in turn loses to the first strategy. The general form of this game is given by

G =
| α, α   0, 2   2, 0 |
| 2, 0   α, α   0, 2 |
| 0, 2   2, 0   α, α |        (3.2.2)

where α ∈ [0, 2] is the payoff from a draw, and is often set equal to 1.

²To do this, we would need to consider stochastic stability. When there are only two strategies, as in the above example, stochastic stability selects the ESS with the higher invasion barrier. In Exercise 3.2.1 this is e1 when α > 1 and e2 when α < 1.


Lemma 3.2.2. In the Rock-Paper-Scissors game, given by equation 3.2.2, we have the following:

Case       | ∆NE                           | ∆NSS               | ∆ESS
0 ≤ α < 1  | {(1/3, 1/3, 1/3)}             | {(1/3, 1/3, 1/3)}  | {(1/3, 1/3, 1/3)}
α = 1      | {(1/3, 1/3, 1/3)}             | {(1/3, 1/3, 1/3)}  | Ø
1 < α < 2  | {(1/3, 1/3, 1/3)}             | Ø                  | Ø
α = 2      | {e1, e2, e3, (1/3, 1/3, 1/3)} | Ø                  | Ø

Proof. One can show the ∆NE column using standard techniques from non-cooperative game theory.

For α = 2: e1 is not ESS since it can be invaded by e2; e2 is not ESS since it can be invaded by e3; e3 is not ESS since it can be invaded by e1. (1/3, 1/3, 1/3) is not ESS since it can be invaded by e1.

For 1 < α < 2: (1/3, 1/3, 1/3) is not ESS since it can be invaded by e1.

For α = 1: Define x = (1/3, 1/3, 1/3). Clearly, by the constant-sum nature of this game (the payoffs in every cell sum to 2), u(x, x) = 1, and for any y ∈ ∆: u(y, x), u(x, y) and u(y, y) are all also equal to 1. The result then follows directly from Propositions 3.1.1 and 3.1.2.

For 0 ≤ α < 1: Define x = (1/3, 1/3, 1/3). Clearly u(y, x) = u(x, x) for all y ∈ ∆. Let α = 1 − δ. Then clearly u(x, y) = (α + 2)/3 = 1 − δ/3 for all y ∈ ∆, while

u(y, y) = (y1² + y2² + y3²)(1 − δ) + 2y1y2 + 2y1y3 + 2y2y3
        = (y1 + y2 + y3)² − δ(y1² + y2² + y3²)
        = 1 − δ(y1² + y2² + y3²)

Now, since

min_{y∈∆} (y1² + y2² + y3²) = 1/3

and the minimum is achieved only at y = x = (1/3, 1/3, 1/3), this means that for any y ∈ ∆ \ {x}, u(y, y) < 1 − δ/3. So we have shown u(y, y) < u(x, y) for all y ∈ ∆ \ {x}. Hence by Proposition 3.1.1, x = (1/3, 1/3, 1/3) is ESS.
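The key computation in the 0 ≤ α < 1 case, namely that u(x, y) − u(y, y) = δ(y1² + y2² + y3²) − δ/3 is strictly positive away from x, can be spot-checked numerically. The Python sketch below compares the direct payoff gap with this closed form on a grid of population states; α = 0.4 is an arbitrary illustrative choice in [0, 1).

```python
# Spot-check: for alpha < 1, the mixed state x = (1/3, 1/3, 1/3) satisfies
# u(x, y) - u(y, y) = delta*(y1^2 + y2^2 + y3^2) - delta/3 > 0 for all y != x.
alpha = 0.4                   # illustrative value in [0, 1)
delta = 1 - alpha
A = [[alpha, 0.0, 2.0],
     [2.0, alpha, 0.0],
     [0.0, 2.0, alpha]]

def u(p, q):
    return sum(p[i] * A[i][j] * q[j] for i in range(3) for j in range(3))

x = (1/3, 1/3, 1/3)

gaps = []
for a in range(21):
    for b in range(21 - a):
        y = (a / 20, b / 20, (20 - a - b) / 20)
        # compare the direct payoff gap with the closed form from the proof
        direct = u(x, y) - u(y, y)
        closed = delta * (sum(yi ** 2 for yi in y) - 1/3)
        assert abs(direct - closed) < 1e-12
        if y != x:
            gaps.append(direct)

print("smallest payoff gap on the grid:", min(gaps))
assert min(gaps) > 0
```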


This Lemma shows that, unlike ∆NE, there are finite games where the set of ESS (and even NSS) is empty. On the other hand, in games with only two pure strategies, an NSS must always exist but an ESS might not:

Exercise 3.2.2. Let m = 2. Give an example of a symmetric game in which there are no ESS. What must all such games have in common? Show that every game has at least one NSS. (Hint: it is easiest to argue this diagrammatically, going through all possible cases.)

3.3 Structure of the ESS set

As we have already shown, unlike Nash Equilibria, the set of ESS may sometimes be empty. Another difference is that the set of ESS must be finite.

Example 3.3.1. Infinite ∆NE and ∆NSS

Consider the game

G =
| 1, 1   0, 1 |
| 1, 0   0, 0 |

In this game every strategy is both a Nash Equilibrium and an NSS. That is, ∆NE = ∆NSS = ∆.

Recall that for a strategy x ∈ ∆, we define its carrier (or support), C(x), to be the set of pure strategies used with positive probability. We show that the support of an ESS cannot contain the support of another ESS.

Proposition 3.3.1. If x ∈ ∆ESS and C(y) ⊆ C(x) for some strategy y ≠ x, then y ∉ ∆NE (and hence y ∉ ∆ESS).

Proof. Suppose x ∈ ∆ESS. Then u(x, x) = u(ei, x) for all i ∈ C(x), and if y ∈ ∆ is such that C(y) ⊆ C(x), then u(x, x) = u(ei, x) for all i ∈ C(y). Thus u(x, x) = u(y, x). By Proposition 3.1.1 and x ∈ ∆ESS, we must have u(y, y) < u(x, y) and hence y ∉ ∆NE.


Corollary 3.3.1. Finite ESS:
i) Any symmetric bimatrix game G has a finite number of ESS.
ii) If x ∈ int(∆) is ESS then ∆ESS = {x}.
iii) Further, if the number of pure strategies is m ≤ 3, then the game has at most m ESSs.

Proof. i) One conclusion we can draw from Proposition 3.3.1 is that each possible support has at most one ESS. Since the number of possible supports is the number of non-empty subsets of {1, 2, . . . , m}, which is 2^m − 1, the number of ESSs is bounded above by 2^m − 1. (In fact this isn't a particularly tight bound.)

ii) If x ∈ int(∆) then C(x) = {1, 2, . . . , m} and, by Proposition 3.3.1, there are no other symmetric Nash Equilibria.

iii) For m = 2: If there is an ESS x with C(x) = {1, 2}, then this is the only ESS. Hence the maximum number of ESSs is two, one with support {1} and the other with support {2}.

For m = 3: There are seven possible supports: {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}. The most that are simultaneously compatible with Proposition 3.3.1 is three.

Exercise 3.3.1. Consider the class of symmetric games with m = 4 strategies. For each n ∈ {0, 1, 2, 3, 4} find an example with n ESSs. Further question: is it possible to have more than four ESSs?


Chapter 4

Replicator Dynamics

In general, an evolutionary process combines two basic elements: a mutation mechanism and a selection mechanism. While the ESS criterion highlighted the role of mutations, the replicator dynamics focus on selection. Robustness against mutations is handled indirectly, by appealing to dynamic stability criteria.

Recall from Remark 2.2.1 that there are two interpretations of a population state. In Chapter 3, we could work with either, and indeed it was often easier to think in terms of the first interpretation. In this Chapter, we think exclusively in terms of the second: every agent is hardwired to play a pure strategy. The Replicator Dynamics then model how the proportion of agents playing each strategy changes, assuming that each agent asexually reproduces offspring who play the same strategy as their parent, where the number of such offspring depends on their reproductive fitness (payoff).

4.1 One Population: The Replicator Dynamic

As in Chapter 3, building on the discussion of Section 2.2, we consider all agents being drawn randomly, and matched in pairs, from a single large population. Thus we focus on symmetric games, G = (A, B), where B = A^T


and so are able to define everything in terms of the payoff matrix A.

At any point in time t, let p(t) be the size of the population and pi(t) be the size of the subpopulation following the ith pure strategy. Note that the proportion of the population following the ith pure strategy is xi(t) = pi(t)/p(t). When the population state is x ∈ ∆, the expected payoff to an agent following strategy i is u(ei, x), while the average payoff is u(x, x).¹

Payoffs represent the incremental effect from playing the game on an individual's fitness, measured as the number of offspring per time unit. Suppose also that each offspring inherits its single parent's strategy; that is, strategies breed true. We suppose reproduction takes place continuously over time, so that the birth rate of a strategy-i individual at time t is β + u(ei, x(t)), where β ≥ 0 is the background fitness of all individuals in the population (independent of the strategies individuals follow in the game in question). Similarly, let the death rate be δ ≥ 0 for all individuals. With dots for time derivatives² and suppressing the time argument, this gives the following population dynamics:

ṗi = pi [β + u(ei, x) − δ]        (4.1.1)

Now, taking the time derivative of both sides of the identity p(t) xi(t) = pi(t) gives ṗ xi + p ẋi = ṗi. This implies

p ẋi = ṗi − ṗ xi = pi [β + u(ei, x) − δ] − p [β + u(x, x) − δ] xi        (4.1.2)

Simplifying and dividing both sides by p gives

ẋi = xi [u(ei, x) − u(x, x)]        (4.1.3)

¹Note that, strictly speaking, in games where one does not interact with oneself, this is an approximation. However, as the population size increases, the probability of an agent drawing oneself tends to zero, and so these expected payoffs do indeed tend to u(ei, x) and u(x, x).

²Notation: ẋi = ∂xi/∂t, ṗi = ∂pi/∂t and ṗ = ∂p/∂t.


This equation (4.1.3) defines the replicator dynamic. It states that the growth rate ẋi/xi of the population share using strategy i equals the difference between the strategy's current payoff and the current average payoff in the population as a whole. Exploiting the linearity of the payoff function, we could write the replicator dynamic slightly more concisely as

ẋi = xi u(ei − x, x)        (4.1.4)
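Equation (4.1.3) is straightforward to integrate numerically. The Python sketch below uses forward Euler (step size and horizon are illustrative choices) on the beetle game of Example 3.2.1, where the strictly dominant large trait should take over the whole population.

```python
# Forward-Euler integration of the replicator dynamic (4.1.3) on the
# beetle game of Example 3.2.1 (strategy 1 = Large strictly dominates).
A = [[3.0, 7.0],
     [1.0, 5.0]]              # row player's payoffs

def replicator_step(x, dt=0.01):
    fit = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * fit[i] for i in range(2))
    return [xi + dt * xi * (fit[i] - avg) for i, xi in enumerate(x)]

x = [0.05, 0.95]              # start with only 5% large beetles
for _ in range(2000):         # integrate up to t = 20
    x = replicator_step(x)

print(x)
assert x[0] > 0.99 and abs(sum(x) - 1.0) < 1e-9
```

Note that the shares still sum to one after every step: the increments dt·xi(fit_i − avg) sum to zero by construction, mirroring the first property stated below.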

Properties

Note that the replicator dynamics have the following two properties:

Σ_{i=1}^{m} ẋi = 0   and   xi = 0 ⇒ ẋi = 0

This means that, starting from any initial state x0 ∈ ∆, the shares of the population following each pure strategy will continue to add up to one, and no population share can ever be negative. Thus the replicator dynamics define a dynamic system over the space ∆.

More formally, from any initial population state x0 ∈ ∆, the system of m differential equations given by equation 4.1.3 defines a trajectory ξ : R × ∆ → ∆, where ξ(t, x0) ∈ ∆ is the population state at time t. It will be useful to refer to ξi(t, x0) ∈ [0, 1] as the proportion of individuals following strategy i in the population state at time t.

Note the following two key properties of the replicator dynamic:³

1. If a strategy is absent from the population, then it must always have been absent and will always be absent at any time in the future.

2. If a strategy is present in the population, then it must always have been present and will always remain present at any time in the future.

³See Weibull for proofs of these claims.


While the second statement says that, for any strategy i which is present in the initial population state, ξi(t, x0) > 0 at any time t ∈ R, it is possible that some strategies may approach extinction as time tends to infinity. That is, we can have

lim_{t→∞} ξi(t, x0) = 0

Invariance under payoff transformations

Suppose that we replace the payoff function u by ũ = λu + κ (in other words, multiply each entry in the payoff matrix by λ and add κ), where λ > 0 and κ ∈ R. Under ũ, the replicator dynamic becomes

ẋi = xi ũ(ei − x, x) = xi λ u(ei − x, x)

Thus we conclude that the replicator dynamic is invariant under positive affine transformations of payoffs, modulo a change of time scale. In other words, all the solution orbits are exactly the same under both dynamics; the only change is the velocity, with progress under ũ being λ times faster.
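This time-rescaling claim can be checked directly: one Euler step of size dt under ũ = λu + κ coincides with one step of size λ·dt under u. A small Python sketch, where λ = 3, κ = 5 and the step size are arbitrary illustrative choices, using the Hawk-Dove payoffs with V = 4, C = 8:

```python
# One Euler step under lam*u + kap equals one step of size lam*dt under u.
lam, kap = 3.0, 5.0           # illustrative lambda > 0 and kappa
A = [[-2.0, 4.0],
     [0.0, 2.0]]              # Hawk-Dove payoffs (V = 4, C = 8)
A2 = [[lam * a + kap for a in row] for row in A]

def euler_step(M, x, dt):
    fit = [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * fit[i] for i in range(2))
    return [xi + dt * xi * (fit[i] - avg) for i, xi in enumerate(x)]

x0 = [0.7, 0.3]
step_transformed = euler_step(A2, x0, 0.01)     # dt under lam*u + kap
step_original = euler_step(A, x0, lam * 0.01)   # lam*dt under u

print(step_transformed, step_original)
assert all(abs(a - b) < 1e-9 for a, b in zip(step_transformed, step_original))
```

The additive constant κ drops out because it raises every strategy's fitness and the average fitness by the same amount, leaving the differences fit_i − avg untouched.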

Similarly, we can show that local shifts of payoff functions do not affect the replicator dynamics:

Exercise 4.1.1. Show that if we add some constant c ∈ R to all entries in some column j of the payoff matrix A, then this has no effect on the replicator dynamic. (In fact we can link this to the fact that the replicator dynamic is invariant to changes in the birth rate β or death rate δ.)

Stability concepts

Stationarity is a minimal requirement of stability. For x to be stationary, we require that once we reach x, we will never leave (without mutations).

Definition 4.1.1. The set of stationary states is given by

∆St = {x ∈ ∆ : ẋi = 0 ∀i}


Remark 4.1.1. {e1, . . . , em} ⊆ ∆St ⊆ ∆

Lyapunov stability asks the more demanding question of what happens to the trajectory once we perturb the state slightly away from our state: do we remain close to it, or drift away?

Definition 4.1.2. A state x ∈ ∆ is Lyapunov Stable if every neighbourhood⁴ B of x contains a neighbourhood B° of x such that

x0 ∈ B° ∩ ∆ ⇒ ξ(t, x0) ∈ B ∀t ∈ R

Let ∆LS ⊆ ∆ denote the set of Lyapunov Stable states.

For those not well-versed in the mathematics of metric spaces, this definition may seem hard to comprehend, so I give a more amenable one.

Consider the standard Euclidean metric (a function measuring distance) over R^n:

∀x, y ∈ R^n   d(x, y) = √((x1 − y1)² + (x2 − y2)² + . . . + (xn − yn)²)

Then Lyapunov stability requires that for any ε > 0, it is possible to find δ > 0 such that: if we start δ-close to x, then we will forever remain ε-close to x. Note that δ can depend on ε. Often, the smaller ε is, the smaller we need δ to be.

Definition. (Assuming the Euclidean metric) A state x ∈ ∆ is Lyapunov Stable if ∀ε > 0, ∃δ > 0 such that

d(x0, x) < δ ⇒ d(ξ(t, x0), x) < ε ∀t ∈ R

⁴Mathematically speaking, for a topological space X with element x ∈ X, B is a neighbourhood of x if there exists an open set U such that x ∈ U ⊆ B ⊆ X. I'm not going to go too deep into the theory of topology here; the necessary, examinable skill is just to make a coherent argument as to why some things are stable and others are not.


A stricter requirement is asymptotic stability. This requires that after a small perturbation from x, in addition to remaining close, we will eventually head back towards x.

Definition 4.1.3. A state x ∈ ∆ is Asymptotically Stable if it is Lyapunov Stable and there exists a neighbourhood B* of x such that

x0 ∈ B* ∩ ∆ ⇒ lim_{t→∞} ξ(t, x0) = x

Let ∆AS ⊂ ∆ denote the set of Asymptotically Stable states.

Similarly, we can translate this definition, assuming the Euclidean metric:

Definition. (Assuming the Euclidean metric) A state x ∈ ∆ is Asymptotically Stable if it is Lyapunov Stable and there exists some κ > 0 such that

d(x0, x) < κ ⇒ lim_{t→∞} ξ(t, x0) = x

Proposition 4.1.1. Lyapunov Stability implies stationarity.

Proof. I show the contrapositive: if x is not stationary, it is not Lyapunov Stable. Since x is not stationary, there exists some finite t such that ξ(t, x) = y, where y ∈ ∆ and y ≠ x. Since y and x are a positive distance apart, we can find a neighbourhood B of x not including y, and hence the system leaves B in finite time if started at x, contradicting Lyapunov Stability.

Some Examples

Generally, stability under the replicator dynamics gives similar results to the ESS concept of Chapter 3.

As we have already seen, in the prisoner's dilemma, everyone playing the strictly dominant strategy is the unique ESS.

Example 4.1.1. Prisoner’s Dilemma


Let G =
| 3, 3   7, 1 |
| 1, 7   5, 5 |

We can derive the replicator dynamics and show that
i) ∆St = {e1, e2},
ii) ∆LS = ∆AS = ∆ESS = {e1}.

In a Hawk-Dove (or Chicken) game, we have shown that the ESS is a mixture of Hawks and Doves.

Example 4.1.2. Hawk-Dove (V = 4, C = 8)

Let G =
| −2, −2   4, 0 |
| 0, 4     2, 2 |

We can derive the replicator dynamics and show that
i) ∆St = {e1, e2, (1/2, 1/2)},
ii) ∆LS = ∆AS = ∆ESS = {(1/2, 1/2)}.
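The asymptotic stability of (1/2, 1/2) can be illustrated numerically: trajectories started on either side of the mixed state head back to it. A Python sketch (Euler integration; step size and starting points are illustrative choices):

```python
# Replicator trajectories in the Hawk-Dove game of Example 4.1.2 converge
# to the mixed state (1/2, 1/2) from either side.
A = [[-2.0, 4.0],
     [0.0, 2.0]]

def run(x1, steps=4000, dt=0.01):
    x = [x1, 1.0 - x1]
    for _ in range(steps):
        fit = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        avg = sum(x[i] * fit[i] for i in range(2))
        x = [xi + dt * xi * (fit[i] - avg) for i, xi in enumerate(x)]
    return x

mostly_hawks = run(0.95)      # start at 95% Hawks
mostly_doves = run(0.05)      # start at 5% Hawks

print(mostly_hawks, mostly_doves)
assert abs(mostly_hawks[0] - 0.5) < 1e-3
assert abs(mostly_doves[0] - 0.5) < 1e-3
```

In this game the dynamic reduces to ẋ1 = 2x1(1 − x1)(1 − 2x1), so the Hawk share rises when below 1/2 and falls when above it, exactly the interpretation given after Lemma 3.2.1.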

In co-ordination games we found that all pure strategies are ESS.

Example 4.1.3. Co-ordination game

Let G =
| 1, 1   0, 0 |
| 0, 0   2, 2 |

We can derive the replicator dynamics and show that ∆St = {e1, e2, (2/3, 1/3)} and ∆LS = ∆AS = ∆ESS = {e1, e2}.

As we have shown in Chapter 3, weakly dominated strategies can't be part of an ESS.

Example 4.1.4. Game with NE in weakly dominated strategies

Let G =
| 1, 1   0, 0 |
| 0, 0   0, 0 |

We can derive the replicator dynamics and show that
i) ∆St = {e1, e2} = ∆NE,
ii) ∆LS = ∆AS = ∆ESS = {e1}.


In Rock-Paper-Scissors we have found that ESS can fail to exist.

Exercise 4.1.2. Let

G =
| α, α   0, 2   2, 0 |
| 2, 0   α, α   0, 2 |
| 0, 2   2, 0   α, α |

Compute the replicator dynamics and verify the results in the table:

Case       | ∆St                           | ∆LS                | ∆AS
0 ≤ α < 1  | {e1, e2, e3, (1/3, 1/3, 1/3)} | {(1/3, 1/3, 1/3)}  | {(1/3, 1/3, 1/3)}
α = 1      | {e1, e2, e3, (1/3, 1/3, 1/3)} | {(1/3, 1/3, 1/3)}  | Ø
1 < α < 2  | {e1, e2, e3, (1/3, 1/3, 1/3)} | Ø                  | Ø
α = 2      | {e1, e2, e3, (1/3, 1/3, 1/3)} | Ø                  | Ø
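The contrast between the α < 1 and 1 < α < 2 rows can be seen in simulation: the interior state attracts nearby trajectories in the first case and repels them in the second. A Python sketch, where the α values, the start point, the step size and the horizon are all illustrative choices:

```python
# Interior RPS state (1/3, 1/3, 1/3): attracting for alpha < 1,
# repelling for 1 < alpha < 2 (trajectories spiral out to the boundary).
def simulate(alpha, x, steps=20000, dt=0.005):
    A = [[alpha, 0.0, 2.0],
         [2.0, alpha, 0.0],
         [0.0, 2.0, alpha]]
    for _ in range(steps):
        fit = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        avg = sum(x[i] * fit[i] for i in range(3))
        x = [xi + dt * xi * (fit[i] - avg) for i, xi in enumerate(x)]
    return x

def dist(x):                  # sup-distance from the interior state
    return max(abs(xi - 1/3) for xi in x)

x0 = [0.5, 0.3, 0.2]
inward = simulate(0.5, list(x0))    # alpha < 1: spirals in
outward = simulate(1.5, list(x0))   # alpha > 1: spirals out

print(dist(inward), dist(outward))
assert dist(inward) < 0.05 < dist(outward)
```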

The following is the counterpart of Exercise 3.2.1.

Exercise 4.1.3. Let

G =
| α, α   1, 0 |
| 0, 1   1, 1 |

and consider 3 cases: i) α < 0, ii) α = 0, iii) α > 0.

In each case compute the replicator dynamics and find ∆St, ∆LS, ∆AS.

4.2 Stability in the Replicator Dynamic: some general results

Link to Nash Equilibrium

Our first result shows the similarities between stationarity and Nash Equilibrium. Recall that int(∆) is the interior of ∆, that is, population states in which every pure strategy is present.

Proposition 4.2.1. Stationarity and NE.
i) {e1, . . . , em} ∪ ∆NE ⊆ ∆St
ii) ∆NE ∩ int(∆) = ∆St ∩ int(∆), which is a convex set.


Proof. i) First, to argue {e1, . . . , em} ⊆ ∆St: let x = ei. Then u(ei, x) = u(x, x) and so ẋi = 0. Also, for all j ≠ i, xj = 0 and so ẋj = 0. To argue ∆NE ⊆ ∆St, note that x ∈ ∆NE is equivalent to the following two conditions:

u(ei, x) = u(x, x) ∀i ∈ C(x)        (4.2.1)
u(ei, x) ≤ u(x, x) ∀i ∉ C(x)        (4.2.2)

while (4.2.1) alone is sufficient for stationarity.

ii) To argue ∆NE ∩ int(∆) = ∆St ∩ int(∆): take x ∈ int(∆); then xi > 0 ∀i, thus ẋi = 0 ⇔ u(ei, x) = u(x, x). Also, with x ∈ int(∆), condition (4.2.2) becomes redundant, and so condition (4.2.1) is necessary and sufficient for both x ∈ ∆NE and x ∈ ∆St.

The convexity argument runs as follows: take x, y ∈ ∆St ∩ int(∆) and consider z = αx + (1 − α)y, where α ∈ [0, 1]. Clearly z ∈ int(∆), and so all that is left to show is stationarity:

u(ei, z) = αu(ei, x) + (1 − α)u(ei, y) = αu(x, x) + (1 − α)u(y, y)

Hence all pure strategies earn the same payoff against z. So for all i, u(ei, z) = u(z, z), and hence z is stationary.

Exercise 4.2.1. Find an example where there is a state x ∈ ∆St which is not in {e1, . . . , em} ∪ ∆NE. (Hint: using the above proposition, it cannot be in the interior.)

The next result says that Lyapunov Stability implies Nash Equilibrium.

Proposition 4.2.2. ∆LS ⊆ ∆NE

Proof. I show the contrapositive: suppose x ∉ ∆NE, and show that x ∉ ∆LS. If x ∉ ∆St then, by Proposition 4.1.1, x ∉ ∆LS and we are done. If x is stationary but not a Nash Equilibrium, then

u(ei, x) = u(x, x) ∀i ∈ C(x)
∃j ∉ C(x) with u(ej, x) > u(x, x)

So by continuity of u (since the payoff function is linear), there exist δ > 0 and a neighbourhood U of x bounded away from ej such that

u(ej, y) − u(y, y) ≥ δ ∀y ∈ U

Now for ε > 0, consider a move to x° := (1 − ε)x + εej. If this state is inside U then, following the replicator dynamics' trajectory from here, we must eventually leave U. This is because the proportion playing strategy j increases at an exponential rate, since

ẏj ≥ yj δ ∀y ∈ U   ⇒   ξj(t, x°) ≥ ε e^{δt}

for any t until we leave U. Applying the definition of Lyapunov Stability with B = U, and noting that for any neighbourhood B° of x there is ε small enough that B° contains x° = (1 − ε)x + εej, we see that x cannot be Lyapunov Stable.

Exercise 4.2.2. Find an example where there is a state x ∈ ∆NE which is not in ∆LS.

The next proposition relates to limit states.

Definition 4.2.1. A state x ∈ ∆ is a limit state if there exists x0 ∈ int(∆) such that

lim_{t→∞} ξ(t, x0) = x


Proposition 4.2.3. If x ∈ ∆ is a limit state then x ∈ ∆NE.

Proof. Suppose that x0 ∈ int(∆) and lim_{t→∞} ξ(t, x0) = x but x ∉ ∆NE, and aim to generate a contradiction. By our supposition:

∃i with u(ei, x) − u(x, x) ≥ ε > 0

Since lim_{t→∞} ξ(t, x0) = x and u is continuous, ∃T ∈ R such that

u(ei, ξ(t, x0)) − u(ξ(t, x0), ξ(t, x0)) ≥ ε/2 ∀t ≥ T

Hence

ẋi(t) > xi(t) ε/2 ∀t ≥ T
⇒ ξi(t, x0) > ξi(T, x0) e^{(t−T)ε/2}
⇒ ξi(t, x0) → ∞

and so we have generated a contradiction. Hence x ∈ ∆NE.

Exercise 4.2.3. Argue that any x ∈ ∆NE ∩ int(∆) must be a limit state. Find an example where there is a state x ∈ ∆NE which is not a limit state.

The next exercise and example show that Lyapunov Stability and being a limit state are two distinct concepts.

Exercise 4.2.4. Find an example where there is a state x ∈ ∆LS which is not a limit state.

Example 4.2.1. Lyapunov Stability and Limit states

Let G =
| 0, 0   1, 0   0, 0 |
| 0, 1   0, 0   2, 0 |
| 0, 0   0, 2   1, 1 |

e1 is a Nash Equilibrium and a limit state. In fact it is the unique limit state, since

lim_{t→∞} ξ(t, x0) = e1 ∀x0 ∈ int(∆)

However, e1 is not Lyapunov Stable.

Link to ESS

The following proposition says that being an NSS implies Lyapunov stability in the replicator dynamic. The proof is a bit too advanced for this course and so is omitted, but anyone interested can see the Weibull book.

Proposition 4.2.4. ∆NSS ⊆ ∆LS

An analogous result holds for ESS and asymptotic stability in the replicator dynamic. Again the proof is too advanced and so omitted, but it is in Weibull.

Proposition 4.2.5. ∆ESS ⊆ ∆AS

These results should make sense on an intuitive level: for x ∈ ∆ to be an NSS, we require that mutants not expand their share of the population, while Lyapunov Stability says that after a mutation causing a small perturbation away from x, the population state should not drift further away. For x ∈ ∆ to be an ESS, we further require that mutants do worse than the incumbent population and so are driven out, thus returning us to our original state, as is required by Asymptotic Stability.

Perhaps somewhat surprisingly, the reverse implications do not hold. In fact it is possible for a state to be Asymptotically Stable but not even NSS, as the following example shows:

Example 4.2.2. Asymptotically stable but not NSS

Let G =
| 1, 1   5, 0   0, 5 |
| 0, 5   1, 1   5, 0 |
| 5, 0   0, 5   4, 4 |

x = (3/18, 8/18, 7/18) is Asymptotically Stable but not NSS. It is not NSS due to invasion by e3 mutants. However, if we draw a sample trajectory from a point of the form (1 − ε)x + εe3, we can see that it never deviates too far from x and will head back towards x. Its initial movement is a loss to strategy 1 and a gain to strategies 2 and 3, especially strategy 3. Next, strategy 3 starts to decline (since there are fewer e1 agents). Then, after the growth of e2 and the decline of e3, the advantage switches to e1 agents. After the growth spurt of e1, the advantage then shifts to e3 agents, and the cycle repeats. It can be shown that these spirals take us back toward x, and therefore it is asymptotically stable.

Replicator Dynamics and dominated strategies

The first result tells us that any strictly dominated strategy goes extinct in the long run, provided all strategies are initially present in the population (crucially including the one that strictly dominates it).

Proposition 4.2.6. If a pure strategy i is strictly dominated then, for any x0 ∈ int(∆),

lim_{t→∞} ξi(t, x0) = 0

Proof. See Weibull.

In fact, we can go further than this and say that any strategy that is iteratively strictly dominated goes extinct in the long run.

Theorem 4.2.1. If a pure strategy i is iteratively strictly dominated then, for any x0 ∈ int(∆),

lim_{t→∞} ξi(t, x0) = 0

A formal proof of this Theorem is beyond the scope of this course, but we can see the intuition behind the result: let Kn be the set of pure strategies eliminated in the nth round of elimination. Then after a sufficiently long time the population state will be, and forever remain, arbitrarily close to the face of ∆ where xi = 0 for all i ∈ K1. Next, all the strategies in K2 will approach extinction, and so on. To make this argument clearer, think of the following example:


Exercise 4.2.5. Consider the following game. Let the initial state be (8/10, 1/10, 1/10). Draw a rough sample trajectory.

G =
| 5, 5   0, 6   0, 0 |
| 6, 0   1, 1   1, 2 |
| 0, 0   2, 1   2, 2 |
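The two-round intuition can also be checked numerically on a small illustrative game of my own (not the exercise's): with payoff rows (3, 3, 3), (2, 2, 4) and (1, 1, 1), strategy 3 is strictly dominated by strategy 2 in round 1, and once strategy 3 is gone, strategy 2 is strictly dominated by strategy 1 in round 2. A Python sketch with illustrative Euler parameters:

```python
# Iterated strict dominance under the replicator dynamic (illustrative game):
# round 1 eliminates strategy 3 (dominated by strategy 2),
# round 2 eliminates strategy 2 (dominated by strategy 1 once 3 is gone).
A = [[3.0, 3.0, 3.0],
     [2.0, 2.0, 4.0],
     [1.0, 1.0, 1.0]]

x = [0.1, 0.1, 0.8]
dt, steps = 0.005, 40000      # integrate up to t = 200
for _ in range(steps):
    fit = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * fit[i] for i in range(3))
    x = [xi + dt * xi * (fit[i] - avg) for i, xi in enumerate(x)]

print(x)
assert x[2] < 1e-6            # strategy 3: driven out in the first phase
assert x[1] < 1e-3            # strategy 2: driven out once 3 is nearly gone
assert x[0] > 0.999
```

Early in the run strategy 2 actually outgrows strategy 1 (it earns 4 against the large initial mass of strategy 3), which is exactly why a one-round domination argument is not enough and the elimination happens in phases.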

What happens with weakly dominated strategies is less clear cut: often they will be eliminated, but as the next example shows, there is no guarantee.

Example 4.2.3. Survival of a weakly dominated strategy

Let G =
| 1, 1   1, 1   1, 0 |
| 1, 1   1, 1   0, 0 |
| 0, 1   0, 0   0, 0 |

The second pure strategy is weakly dominated by the first. However, starting from any interior point of ∆, both will survive in the long run. The reason is that strategy 3 first becomes almost extinct, removing the advantage strategy 1 has over strategy 2.

4.3 Replicator Dynamics with two populations

This Section builds on Section 2.3. We have one population of player 1 agents (henceforth known as population 1), and another population of player 2 agents (population 2). At any point in time, the current fitness to a population i agent is the expected fitness from meeting a randomly selected population j ≠ i agent. Thus the population state in one's own population has no direct effect on payoff; however, it can still have an indirect effect on future payoffs by influencing how the other population evolves.

I denote a typical population state of population 1 by x ∈ ∆1; and for


population 2 by y ∈ ∆2. The derivation of the replicator dynamic follows the same principle as in the one population case and gives the pair

ẋi = xi [u1(ei, y) − u1(x, y)]
ẏi = yi [u2(ei, x) − u2(y, x)]        (4.3.1)

Example 4.3.1. We compute and illustrate the replicator dynamics for the game

G =
| 3, 3   0, 1 |
| 2, 0   2, 1 |

The definitions of stationarity, Lyapunov and asymptotic stability, and limit states all carry over, with the slight modification that the state space is now ∆ = ∆1 × ∆2, with typical element (x, y).

The two population model allows us to study asymmetric as well as symmetric games. Contrary to what one might expect, in symmetric games we can get very different results depending on whether we have one population or two. A classic example of this is the Hawk-Dove game.

Example 4.3.2. Two Population Hawk-Dove Game

Let G =
| −2, −2   4, 0 |
| 0, 4     2, 2 |

(from V = 4, C = 8).

We compute and illustrate the replicator dynamics. We find that the only stable states are (x, y) = (e1, e2) and (x, y) = (e2, e1), that is, the two pure-strategy asymmetric Nash Equilibria. The interpretation of this in the Hawk-Dove setup is that animals are able to identify themselves as playing one of two roles, and everyone in role 1 plays one strategy while everyone in role 2 plays the other. For example, consider battles over nesting sites. Each animal can identify itself as being in one of two roles: occupier or intruder. The analysis here shows that there are two stable outcomes: occupiers are Hawks while intruders are Doves; or occupiers are Doves while intruders are Hawks. The empirical evidence from the biological literature does support this model and shows that in most animal species the first equilibrium outcome prevails, where the occupier keeps the nesting site. There are, though, one or two animal species in which the second equilibrium outcome prevails and so the nesting site changes hands.
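The convergence to a role-asymmetric outcome can be simulated with the pair of equations (4.3.1). A Python sketch (Euler integration; the slightly hawkish start for population 1 is an illustrative choice): a small initial asymmetry between the populations is amplified until one population is all Hawk and the other all Dove.

```python
# Two-population replicator dynamic (4.3.1) for Hawk-Dove (V = 4, C = 8).
# A slight initial asymmetry is amplified until population 1 is all Hawk
# and population 2 is all Dove.
A = [[-2.0, 4.0],
     [0.0, 2.0]]              # both populations face the same payoffs

x, y = [0.55, 0.45], [0.45, 0.55]   # x[0], y[0] = Hawk shares
dt = 0.01
for _ in range(10000):        # integrate up to t = 100
    fx = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
    fy = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    ux = sum(x[i] * fx[i] for i in range(2))
    uy = sum(y[i] * fy[i] for i in range(2))
    x = [xi + dt * xi * (fx[i] - ux) for i, xi in enumerate(x)]
    y = [yi + dt * yi * (fy[i] - uy) for i, yi in enumerate(y)]

print(x, y)
assert x[0] > 0.99 and y[0] < 0.01   # (x, y) heads to (e1, e2)
```

Swapping the two starting points would instead select the other stable state (e2, e1), which is why the one-population mixed ESS of Example 4.1.2 is no longer the prediction here.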

Further Reading

• Primary reference is “Evolutionary Game Theory” by Jorgen Weibull

• Most general game theory texts will have chapters on evolutionary games. Two examples, available online, are

– http://www.cs.cornell.edu/home/kleinber/networks-book/networks-book-ch07.pdf

– "Game Theory a multi-leveled approach" by Hans Peters, which is available as an e-book from the library.

