Evolutionary Game Theory Slides (Warwick)

Evolutionary Game Theory Slides

James Massey

July 28, 2016

1

Class Game (coordination)

Players are you, the class.

You each have two actions: stand up or sit down. Everyone moves at the same time. No communication between players is allowed.

Each player's payoff is equal to the number of players who take the same action as them.

I.e. suppose x players sit down and y players stand up. Then those who sit down get x; those who stand up get y.

Each player tries to maximise their own payoff.

We play the game, observe the outcome and repeat multiple times.

2
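A stylised sketch of this repeated play, assuming every player simply best responds to the previous round's split (the class size, number of rounds and random starting split are illustrative assumptions, not part of the slides):

```python
import random

# Toy model of the class game: each round, every player best responds
# to the previous round's split by joining whichever action had the
# (weak) majority. Payoff = number taking the same action as you.
def play_rounds(n_players=30, n_rounds=5, seed=1):
    random.seed(seed)
    actions = [random.choice(["sit", "stand"]) for _ in range(n_players)]
    history = [actions.count("sit")]  # number sitting each round
    for _ in range(n_rounds):
        sit = actions.count("sit")
        best = "sit" if sit >= n_players - sit else "stand"
        actions = [best] * n_players  # everyone joins the majority
        history.append(actions.count("sit"))
    return history

print(play_rounds())
```

Under this dynamic the class coordinates after a single round; in a real classroom convergence is noisier, but the pull towards a coordinated outcome is the same.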

Class Game (coordination)

What do we observe? Can we illustrate this on a diagram?

How do we explain this?

What are the Nash Equilibria of this game?

When was a Nash Equilibrium played? Can we explain why?

3

Class Game (dis-coordination)

Change the game as follows:

Now each player's payoff is equal to the number of players who take the opposite action from them.

I.e. suppose x players sit down and y players stand up. Then those who sit down get y; those who stand up get x.

Again, we play the game, observe the outcome and repeat multiple times.

What happens? Draw a diagram.

What are the Nash Equilibria of this game?

4

What is Evolutionary Game Theory

We drop the assumption of hyper-rational players, able to perfectly predict each other's thought process. Instead, the distribution of players' actions changes towards those that are (currently) better. We could think of this as an evolutionary dynamic.

Three justifications of the evolutionary dynamic:

1 Biological interpretation: equate payoff with reproductive fitness. So agents following better strategies have more children, and so the proportion playing such strategies grows over time. In Economics, unsuccessful firms are driven out of business, while successful ones expand.

2 Imitation: players can look around, see whether a rival player is doing better than themselves and, if so, copy their strategy.

3 Best respond: observe how the game is currently being played, and play the current best response to it.

5

Symmetric two player games

Imagine two drivers heading in opposite directions down a road.

If both drive on their left, or both on their right, they pass safely. But if they choose opposite sides, they must stop or crash.

This gives the payoff table:

             Left    Right
    Left     1,1     0,0
    Right    0,0     1,1

Now suppose a population of N agents. Each agent meets every other agent (including oneself) exactly once.

Familiar?

What 2 player game gives the dis-coordination game?

6

Symmetric two player games

Any two player game with finitely many pure strategies can be depicted by a payoff matrix of the form

    G = (A, B) =
        ( a11, b11   a12, b12   ...   a1n, b1n )
        ( a21, b21      ...        ...          )
        (    ...                   ...          )
        ( am1, bm1      ...        amn, bmn     )

Interpretation: player 1 chooses the row; player 2 chooses the column. If the ith row and jth column are chosen, payoffs are (u1, u2) = (aij, bij).

The game is symmetric if m = n and A = Bᵀ (i.e. aij = bji for all i and j). We also require that the labelling and interpretation of the strategies be the same.

From now on, unless otherwise stated, we will be working with symmetric two player games.

7
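The symmetry condition (m = n and A = Bᵀ, i.e. aij = bji for all i, j) can be checked mechanically; a minimal sketch, with illustrative matrices:

```python
def is_symmetric_game(A, B):
    """Check the symmetry condition: the game (A, B) is symmetric
    if both matrices are n x n and A equals B transposed
    (a_ij == b_ji for all i, j)."""
    m, n = len(A), len(A[0])
    if m != n or len(B) != m or any(len(row) != n for row in B):
        return False
    return all(A[i][j] == B[j][i] for i in range(n) for j in range(n))

# The driving game from the previous slide: A == B^T holds.
A = [[1, 0], [0, 1]]
B = [[1, 0], [0, 1]]
print(is_symmetric_game(A, B))    # True

# An illustrative asymmetric game: payoffs differ by role.
A2 = [[2, 0], [0, 1]]
B2 = [[1, 0], [0, 2]]
print(is_symmetric_game(A2, B2))  # False
```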

EGT (biological interpretation)

Assume a large population of players, each of whom is genetically hardwired to play a certain strategy.

Players are randomly matched from a large population to play a symmetric two player game. (Note: it doesn't matter which is drawn as player 1 and which as player 2, since the game is symmetric.)

This gives utility to each agent as a function of their strategy and that of others. We deal with the expected payoff from an interaction.

Players' payoffs are von Neumann-Morgenstern utilities, so we don't need to concern ourselves with considerations of risk aversion.

Apply the evolutionary dynamic: agents reproduce/grow according to expected utilities. Thus strategies with higher payoffs proportionately expand; those with lower payoffs contract.

8

Population States

Symmetry implies that both players have the same (mixed) strategy set. Assuming n pure strategies, this is

    ∆ = { x = (x1, . . . , xn) ∈ R^n≥0 : ∑_{i=1}^n xi = 1 }

This forms an (n − 1)-dimensional space, since

    x ∈ ∆ ⇒ xn = 1 − ∑_{i=1}^{n−1} xi

A population state describes how the players are currently playing the game, by telling us the proportion who play each strategy.

There is a one-to-one correspondence between the set of mixed strategies and population states. Thus ∆ is the population state space.

9

Population States

When we say the population state is x, we can think of two maininterpretations:

1 Every agent plays the mixed strategy x.

2 Every agent plays a pure strategy: for each i ∈ {1, . . . , n}, xi is the proportion of agents following the ith pure strategy, ei.

Population states are still well defined even when agents themselves mix:

Example

Suppose 3 pure strategies. 10% of the population play the first strategy; 30% play the second strategy; 60% mix uniformly amongst all three. Then the population state is

    (3/10, 5/10, 2/10)

10
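The example's arithmetic (each group's population share times the weight it places on each pure strategy, summed across groups) can be sketched as follows; the helper name is my own:

```python
def population_state(groups):
    """groups: list of (share, mixed_strategy) pairs.
    Returns the aggregate proportion playing each pure strategy."""
    n = len(groups[0][1])
    return [sum(share * strat[i] for share, strat in groups) for i in range(n)]

# The slide's example: 10% play e1, 30% play e2, 60% mix uniformly.
state = population_state([
    (0.1, [1, 0, 0]),
    (0.3, [0, 1, 0]),
    (0.6, [1/3, 1/3, 1/3]),
])
print(state)  # ≈ [0.3, 0.5, 0.2], i.e. (3/10, 5/10, 2/10)
```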

Notation

As a shorthand, I use ei to denote the ith unit vector (1 in the ith position and 0's elsewhere).

Note ∆ is the convex hull of {e1, . . . , en}. Draw ∆ for n = 2 and n = 3.

Note the different ways of representing population states: for n = 3,

    (1/3)e1 + (2/3)e2 = (1/3, 2/3, 0)

Since the games we consider are symmetric, we can define u(x, y) as the utility to an agent playing strategy x when facing an agent playing strategy y.

11

Payoffs given population state

We would like:

At population state y, for each agent the probability of facing an agent taking the ith action is given by yi. We can then define u(x, y) to be the utility an agent playing strategy x obtains under population state y, where x and y are independent of each other.

With a finite population, we get an issue:

Suppose 3 people, 3 strategies. One plays e1; one plays e2; the other plays e3.

Suppose you are not matched against yourself: then each player has a half chance of being drawn against each of the other two. Each faces a different population state. Suppose you are matched against yourself: then your strategy x impacts the population state y. That is, we get u(x, y(x)).

Solution: assume an infinite population.

12

Nash Equilibrium

Definition

A strategy x is a symmetric Nash Equilibrium if u(x, x) ≥ u(y, x) for all y ∈ ∆. Denote the set of symmetric NE by ∆NE.

In words, x is a best response to itself.

Definition

A strategy x is a strict symmetric Nash Equilibrium if u(x, x) > u(y, x) for all y ∈ ∆, y ≠ x. Denote the set of strict symmetric NE by ∆NE>.

In words, x is the unique best response to itself.

13

Symmetric-ness

Theorem

Every finite symmetric game has a symmetric Nash Equilibrium.

Symmetric games can also have asymmetric Nash Equilibria; however, these are not relevant here, due to the assumption of random matching from a single population, from which both our player 1 and player 2 are drawn.

Later, when we consider two population games, where player 1 is randomly drawn from one population and player 2 randomly drawn from another, asymmetric Nash Equilibria will be relevant.

ESS is only defined for the single population model, and so only symmetric Nash Equilibria are relevant.

We will see ESS is a refinement of symmetric Nash Equilibrium.

14

ESS: introduction

ESS stands for "Evolutionarily Stable Strategy".

For ESS it is more convenient to think of the first interpretation of a population state: everyone playing the mixed strategy x.

ESS asks the following question:

Is population state x stable against small mutations? To be more precise, if a mutation takes us away from x, will the evolutionary dynamics lead us to return or not?

15

The ESS question

Suppose a proportion ε mutate away from x to y.

This shifts the population state to (1 − ε)x + εy.

Now, the big question:

At this new population state, do the incumbents playing x or the mutants playing y fare better? To be an ESS, we need the incumbents playing x to fare better, so that the mutants die out.

In other words, we need

    u(x, (1 − ε)x + εy) > u(y, (1 − ε)x + εy)

If satisfied, we say that x can resist an ε-sized mutation towards y.

16
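The resistance condition above can be checked numerically; a minimal sketch for a symmetric game given by the row player's payoff matrix A (the function names are my own):

```python
def u(x, y, A):
    """Expected payoff of mixed strategy x against mixed strategy y."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def resists(x, y, A, eps):
    """Does incumbent x resist an eps-sized mutation towards y?
    Checks u(x, (1-eps)x + eps*y) > u(y, (1-eps)x + eps*y)."""
    post = [(1 - eps) * x[i] + eps * y[i] for i in range(len(x))]
    return u(x, post, A) > u(y, post, A)

# Coordination game (2,2 0,0 / 0,0 1,1): the state "everyone plays e1"
# resists a 10% mutation towards e2.
A = [[2, 0], [0, 1]]
print(resists([1, 0], [0, 1], A, 0.1))  # True
```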

ESS definition

For x to be an ESS we require that for any y ∈ ∆, x can resist all sufficiently small mutations towards y.

Definition

(Def 3.1.1) x ∈ ∆ is ESS if for every strategy y ∈ ∆, y ≠ x, there exists some εy such that for all ε ∈ (0, εy):

    u(x, εy + (1 − ε)x) > u(y, εy + (1 − ε)x)    (2.1)

Denote the set of ESS by ∆ESS

17

ESS Theorem

It can be shown that (proof in notes)

Theorem

(Prop 3.1.1) A strategy x ∈ ∆ is ESS if and only if it meets the following first-order and second-order best reply conditions:

    u(y, x) ≤ u(x, x)    ∀y    (2.2)

    u(y, x) = u(x, x) ⇒ u(y, y) < u(x, y)    ∀y ≠ x    (2.3)

This provides a useful way to check for ESS...

First, we check how well the incumbent and the mutant fare against the incumbent. If equal, we use performance against the mutant as the tie-breaker.

18
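Prop 3.1.1's two conditions can be tested numerically for a 2x2 symmetric game by sweeping a grid of mutant strategies y. This is an approximate check, not a proof, and the function names are my own:

```python
def u(x, y, A):
    """Expected payoff of mixed strategy x against mixed strategy y,
    for a symmetric game with row-player payoff matrix A."""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def is_ess(x, A, grid=100, tol=1e-9):
    """Check conditions (2.2) and (2.3) against a grid of mutants y."""
    for k in range(grid + 1):
        p = k / grid
        y = (p, 1 - p)
        if abs(y[0] - x[0]) < tol:
            continue  # skip y == x
        if u(y, x, A) > u(x, x, A) + tol:
            return False  # fails first-order condition (2.2)
        if abs(u(y, x, A) - u(x, x, A)) < tol and u(y, y, A) >= u(x, y, A) - tol:
            return False  # fails second-order condition (2.3)
    return True

# Coordination game from the examples slide: both pure strategies are
# strict NE and hence ESS; the mixed NE (1/3, 2/3) can be invaded.
A = [[2, 0], [0, 1]]
print(is_ess((1, 0), A))      # True
print(is_ess((1/3, 2/3), A))  # False
```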

NSS

A weaker criterion is NSS ("Neutrally Stable Strategy").

Instead of requiring that mutants are driven out, this simply requires that mutants don't expand.

Definition (Def 3.1.2): obtained by replacing the strict inequality with a weak inequality in equation (2.1).

Denote the set of NSS by ∆NSS.

A theorem follows too: (Prop 3.1.2) simply replace the strict inequality with a weak inequality in equation (2.3).

19

Examples

For each of the following examples:

Draw a diagram to indicate the expected evolutionary dynamic. Find ∆NE, ∆NE>, ∆ESS, ∆NSS.

Prisoner's Dilemma:

    ( 3,3  7,1 )
    ( 1,7  5,5 )

Coordination:

    ( 2,2  0,0 )
    ( 0,0  1,1 )

Dis-coordination:

    ( 0,0  1,2 )
    ( 2,1  0,0 )

Weakly dominant strategy:

    ( 1,1  0,0 )
    ( 0,0  0,0 )

20

Relationship to NE

Fact

∆NE ⊇ ∆NSS ⊇ ∆ESS ⊇ ∆NE>

We show why no "⊇" can be replaced by "=" in the above statement.

Use the game

    G = ( α,α  1,0 )
        ( 0,1  1,1 )

and consider 3 cases: i) α < 0, ii) α = 0, iii) α > 0.

21

Finite ESS

∆NE and ∆NSS can be infinite (find an example).

∆ESS must be finite, as one can conclude from the following theorem.

Let C(x) be the carrier or support of x, that is, the set of pure strategies played with positive probability.

Theorem

(Prop 3.3.1) If x ∈ ∆ESS and C(y) ⊆ C(x) for some strategy y ≠ x, then y ∉ ∆NE (and hence y ∉ ∆ESS).

22

Non-existence

As already seen, there are examples with no ESS even when n = 2.

When n = 2 there will always be an NSS.

When n ≥ 3 even an NSS may fail to exist.

Example

    ( α,α  0,2  2,0 )
    ( 2,0  α,α  0,2 )
    ( 0,2  2,0  α,α )

When α = 2 there is no NSS.

(Taken from Lemma 3.2.2)

23

Replicator dynamic

ESS highlighted the role of mutations. Replicator dynamics focus on the selection mechanism.

Suppose each agent plays a pure strategy. So if the population state is x, it means a proportion xi of agents use the pure strategy ei. Utilities correspond to reproductive fitness.

We derive a formula for how the population state evolves with time (p.25 of notes):

    ẋi = xi [ u(ei, x) − u(x, x) ]

Thus the growth rate ẋi/xi of the population share using strategy i equals the difference between the strategy's current payoff and the current average payoff in the population as a whole.

Apply to

    G = ( 1,1  2,0 )
        ( 0,2  3,3 )

24
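A minimal Euler discretisation of the replicator dynamic, applied to the game G above (the step size, horizon and starting states are illustrative choices, not part of the slides):

```python
# Euler step of the replicator dynamic x_i' = x_i [u(e_i, x) - u(x, x)]
# for a symmetric game with row-player payoff matrix A.
def replicator_step(x, A, dt=0.01):
    n = len(x)
    payoffs = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * payoffs[i] for i in range(n))
    return [x[i] + dt * x[i] * (payoffs[i] - avg) for i in range(n)]

def simulate(x0, A, steps=5000):
    x = list(x0)
    for _ in range(steps):
        x = replicator_step(x, A)
    return x

# Row payoffs of G = (1,1 2,0 / 0,2 3,3). The mixed rest point is at
# x1 = 1/2; starting above it, strategy 1 takes over, below it,
# strategy 2 takes over.
A = [[1, 2], [0, 3]]
print(simulate([0.6, 0.4], A))
print(simulate([0.4, 0.6], A))
```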

Properties

Clearly xi = 0 ⇒ ẋi = 0. So non-existent strategies cannot appear.

Starting from any initial state x0 ∈ ∆, the shares of the population following each pure strategy will continue to add up to one, since

    ∑_{i=1}^n ẋi = 0

Thus the replicator dynamics define a dynamical system over the space ∆.

Let ξ(t, x0) ∈ ∆ denote the population state at time t when started from x0, and ξi(t, x0) ∈ [0, 1] its ith co-ordinate (the proportion following strategy i at time t).

If strategy i is initially present (i ∈ C(x0)), then it will always remain so. Although, it is possible to tend to extinction:

    lim_{t→∞} ξi(t, x0) = 0

25

Stability

A minimal stability requirement is that, if reached, we will never leave:

Definition

State x is stationary if ẋi = 0 ∀i. Let ∆St be the set of stationary states.

Two stronger requirements (defined formally on p.28 of notes) are:

Lyapunov Stability: if we start "close" to x we will forever remain "close" to x. Denote the set of Lyapunov Stable states by ∆LS.

Asymptotic Stability: in addition to being Lyapunov Stable, for any x0 "close" to x, we eventually return to x:

    lim_{t→∞} ξ(t, x0) = x

Denote the set of Asymptotically Stable states by ∆AS.

26

Examples

For each of the following examples:

Calculate the replicator dynamics. Illustrate them on a diagram. Find ∆St, ∆LS, ∆AS.

Prisoner's Dilemma:

    ( 3,3  7,1 )
    ( 1,7  5,5 )

Coordination:

    ( 2,2  0,0 )
    ( 0,0  1,1 )

Dis-coordination:

    ( 0,0  1,2 )
    ( 2,1  0,0 )

Weakly dominant strategy:

    ( 1,1  0,0 )
    ( 0,0  0,0 )

27

Link to ESS and NE

You should have seen similarities between ∆LS and ∆NSS, and also between ∆AS and ∆ESS:

∆St ⊇ ∆NE ⊇ ∆LS ⊇ ∆NSS

∆LS ⊇ ∆AS ⊇ ∆ESS

Surprisingly, a state can be Asymptotically Stable but not even NSS (Example 4.2.2 of notes).

Can we find an example of Stationary but not NE?

28

Two population model

Suppose agents can identify what role they play in the game (whether player 1 or player 2).

One population of agents in the player 1 role; another, separate population of agents in the player 2 role.

Random inter-population matching.

Changes from the one population model: now we can

support asymmetric equilibria in symmetric games.
analyse asymmetric games.

29

Formula

For i ∈ {1, 2} let ∆i be player i's strategy space (population state space).

Denote typical elements x ∈ ∆1, y ∈ ∆2, and utilities by u1(x, y) and u2(y, x).

The proportional growth rate of strategy i is its current payoff minus the current average payoff in its own population:

    ẋi = xi [ u1(ei, y) − u1(x, y) ]

    ẏi = yi [ u2(ei, x) − u2(y, x) ]

30
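The two-population dynamic can be sketched the same way; here applied, as an illustration, to the dis-coordination game (0,0 2,1 / 1,2 0,0) from the earlier examples slide, with A player 1's payoffs and B player 2's (step size, horizon and starting states are arbitrary assumptions):

```python
# One Euler step of the two-population replicator dynamic:
# x_i' = x_i [u1(e_i, y) - u1(x, y)],  y_j' = y_j [u2(e_j, x) - u2(y, x)].
def two_pop_step(x, y, A, B, dt=0.01):
    n, m = len(x), len(y)
    u1 = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
    u2 = [sum(B[i][j] * x[i] for i in range(n)) for j in range(m)]
    avg1 = sum(x[i] * u1[i] for i in range(n))
    avg2 = sum(y[j] * u2[j] for j in range(m))
    x = [x[i] + dt * x[i] * (u1[i] - avg1) for i in range(n)]
    y = [y[j] + dt * y[j] * (u2[j] - avg2) for j in range(m)]
    return x, y

A = [[0, 2], [1, 0]]  # player 1's payoffs (row i, column j)
B = [[0, 1], [2, 0]]  # player 2's payoffs (B[i][j]: row i, column j)
x, y = [0.6, 0.4], [0.3, 0.7]
for _ in range(20000):
    x, y = two_pop_step(x, y, A, B)
print([round(v, 3) for v in x], [round(v, 3) for v in y])
```

From this starting point the populations settle on the asymmetric pure equilibrium (row 1, column 2) with payoffs (2, 1); other starting points can select the other pure equilibrium instead.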

Examples

1 Let G =

    ( 0,0  2,1 )
    ( 1,2  0,0 )

Calculate and illustrate the replicator dynamics. Also find ∆St, ∆LS, ∆AS when:

1 agents are randomly matched from a single population.
2 there are separate populations for players 1 and 2, with inter-population matching.

2 Let G =

    ( 1,0  2,1 )
    ( 1,2  0,0 )

There are separate populations for players 1 and 2, with inter-population matching. Calculate and illustrate the replicator dynamics. Also find ∆St, ∆LS, ∆AS.

31