Using “Games” to Understand Incentives

Page 1: Using “Games” to Understand Incentives

Using “Games” to Understand Incentives

• Game theory models strategic behavior by agents who think about how other agents may behave and how that should influence their own choices. It is where the players think strategically.

• Useful to study:
– Company behavior in imperfectly competitive markets (such as Coke vs. Pepsi).
– Military strategies.
– Bargaining/negotiations.
– Biology.

• Aumann: Interactive Decision Theory.

• In perfectly competitive settings, we typically do not have to think strategically because everyone is “small” and no one can affect the economic environment.

• But in settings where players are “large”, their own actions can impact the economic environment – as can their consideration of others’ actions (and reactions).

• Our basic notion of equilibrium will not change all that much: an equilibrium is reached when everyone is doing the best he/she can given his/her circumstances and given what everyone else is doing.

Page 2: Using “Games” to Understand Incentives

Types of “Games”

We can broadly categorize games into four categories, with games differing across two dimensions:

                     complete information        incomplete information
simultaneous moves   “rock, paper, scissors”     sealed-bid art auction
sequential moves     chess                       ascending-bid art auction

Complete information games are games in which players know how other players evaluate the possible outcomes of the game.

Sequential move games are games in which at least some players observe the actions of another player prior to having to choose their own actions.

Simultaneous move games are games in which all players must choose their actions at the same time (and thus without observing any other player’s action).

Incomplete information games are games in which at least some players do not know at least some other player’s evaluation of possible outcomes.

Page 3: Using “Games” to Understand Incentives

The Structure of a Complete Information Game

Any complete information game has 4 elements:

Players: Who are the players? How many are there?
Example: Player 1 is an employer; Player 2 is a job applicant.

Actions: What action(s) can each of the players “play”? Is there a discrete number of possible actions, or are the actions defined by some continuous interval?
Example: The employer offers a dollar wage w (from a continuum); the applicant accepts or rejects the offer (discrete).

Sequence and Information: Do all the players choose their actions simultaneously, or do the players play in sequence? What do players know about earlier moves by others?
Example: The employer moves first; the applicant moves next, knowing the wage offer from stage 1.

Payoffs: What do players get at the end of the game? How does this depend on how the game is played?
Example: The employer gets (MRP – w) if the offer is accepted, 0 otherwise; the applicant gets w if he accepts, his next best alternative otherwise.

Page 4: Using “Games” to Understand Incentives

2-Player Simultaneous Move Game

A 2-player simultaneous move game is often represented in a payoff matrix.

Players: Player 1 is represented on the left and Player 2 at the top of the matrix.

Actions: Player 1 has 2 possible actions, and Player 2 also has 2 possible actions.

Sequence and Information: The players move simultaneously, with no information about the other’s chosen action.

Payoffs: Each player’s payoff depends on her own as well as her opponent’s action; each cell of the matrix lists both players’ payoffs from the corresponding pair of actions.

Page 5: Using “Games” to Understand Incentives

Battle of the Bismarck Sea
• We want to model the Battle of the Bismarck Sea.
• Two Admirals: Imamura (Japan) and Kenny (US).
• Japan is in retreat.
• Imamura wants to transport troops in a convoy from Rabaul to Lae.
• Kenny wants to bomb the Japanese troops.
• The Northern route takes two days; the Southern route takes three days.
• It takes one day for Kenny to switch search routes.

Page 6: Using “Games” to Understand Incentives

Imamura wants to run the convoy from Rabaul to Lae

Page 7: Using “Games” to Understand Incentives

Battle of the Bismarck Sea

                   Imamura: North   Imamura: South
Kenny: North           2, -2            2, -2
Kenny: South           1, -1            3, -3

(Payoffs listed as Kenny, Imamura. Kenny’s payoff: each day of bombing = +1; Imamura’s payoff: each day of bombing = -1.)

This representation is called a Normal Form Game.
• Imamura wants to transport troops.
• Kenny wants to bomb the Japanese troops.
• The Northern route takes two days; the Southern route takes three days.
• It takes one day for Kenny to switch routes.


Page 14: Using “Games” to Understand Incentives

Battle of the Bismarck Sea

                        Imamura: sail North   Imamura: sail South
Kenny: search North           2, -2                 2, -2
Kenny: search South           1, -1                 3, -3

For a 2x2 game, always draw the arrows!
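Drawing the best-response arrows amounts to checking each cell for profitable deviations. A brute-force sketch of that check (payoffs as in the matrix above: days of bombing, +1 per day for Kenny, -1 per day for Imamura):

```python
payoffs = {  # (Kenny's search route, Imamura's sailing route): (Kenny, Imamura)
    ("North", "North"): (2, -2),
    ("North", "South"): (2, -2),
    ("South", "North"): (1, -1),
    ("South", "South"): (3, -3),
}
routes = ["North", "South"]

def pure_nash(payoffs, actions):
    """Cells where neither player can gain by deviating unilaterally."""
    result = []
    for k in actions:            # Kenny's choice
        for i in actions:        # Imamura's choice
            k_best = all(payoffs[(k, i)][0] >= payoffs[(k2, i)][0] for k2 in actions)
            i_best = all(payoffs[(k, i)][1] >= payoffs[(k, i2)][1] for i2 in actions)
            if k_best and i_best:
                result.append((k, i))
    return result

print(pure_nash(payoffs, routes))  # [('North', 'North')]
```

Both admirals searching/sailing North is the only cell from which neither player wants to deviate.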

Page 15: Using “Games” to Understand Incentives

Example: A Coordination Game

Suppose player 1 and player 2 have to choose a side of the road on which to drive.

Neither has an inherent preference for one side of the road over the other – they just don’t want to crash into each other by driving on opposite sides.

Players: The game has 2 players.

Actions: Both players can choose from the actions Left and Right.

Sequence and Information: The players make their choices at the same time, without knowing what the other is doing.

Payoffs: The players get 10 each if they choose the same side and 0 each if they choose opposite sides.

A game of this kind is called a coordination game because the main objective of the two players is to coordinate their actions to be the same.

Page 16: Using “Games” to Understand Incentives

2-Player Sequential Move Game

A 2-player sequential move game is often represented in a game tree.

Players: Player 1 and Player 2.

Actions: Each player has 2 possible actions.

Sequence and Information: Player 1 makes her decision first. Player 2 observes Player 1’s choice before choosing.

Payoffs: Each player’s payoff depends on her own as well as her opponent’s action.

Page 17: Using “Games” to Understand Incentives

Information Nodes and Sets

In a sequential move game, players that move later in the game have multiple information nodes or information sets.

In our game, player 2 makes his decision from one of two possible information nodes – where the information is the action by player 1 that has been observed.

If player 2 were to make his choice without knowing player 1’s action, both these nodes would be in the same information set – implying player 2 cannot tell which node he is playing from when making his choice.

This would be indicated in the game tree by encircling the two nodes into one information set.

Such a game would then be equivalent to a simultaneous move game.

Page 18: Using “Games” to Understand Incentives

A Sequential Version of the Right/Left Game

In a sequential move version of our game in which players choose on which side of the road to drive, player 1 chooses her action first.

After observing player 1 on the road, player 2 then chooses his side,…

… choosing from the left information node if observing player 1 on the left

… and choosing from the right information node if observing player 1 on the right. The payoffs are then again 10 for both if they choose the same side and 0 if they choose opposite sides.

Page 19: Using “Games” to Understand Incentives

Strategies

Caution: The two games below are VERY DIFFERENT. The bimatrix game corresponding to the extensive form game on the right is a 4x2, not a 2x2 game!

Page 20: Using “Games” to Understand Incentives

Strategies

A strategy is a complete plan for how to play the game prior to the beginning of the game.

But in a sequential move game, a complete plan of action (prior to the beginning of the game) implies a plan for what action to take from each information set.

Player 2’s strategy must therefore specify what he plans to do if he observes player 1 going Left …

… AND what he plans to do if he observes player 1 going Right!

Page 21: Using “Games” to Understand Incentives

Strategies in Sequential Move Games

Because he moves first (and thus has no information about what player 2 does when he makes his choice), player 1 still has only two possible pure strategies: Left and Right. But player 2 has two possible actions from each of two possible information sets – implying four possible pure strategies:

(Left, Left) – a plan to go Left from both nodes

(Left, Right) – a plan to go Left from the left node and Right from the right node

(Right, Left) – a plan to go Right from the left node and Left from the right node

(Right, Right) – a plan to go Right from both nodes

As we next define an equilibrium, it will become important to keep in mind the difference between the actions and the strategies for a player.

Page 22: Using “Games” to Understand Incentives

Best Response Strategies

A player is playing a best response strategy to the strategies played by others if and only if he has no other possible strategy that would result in a higher payoff for him.

For instance, if player 2 plays the strategy Left, player 1 best responds by also playing the strategy Left. But if player 2 plays the strategy Right, player 1 best responds by also playing the strategy Right.

In the sequential version,

if player 2 plays the strategy (Left, Left), player 1 best responds by playing the strategy Left.

If player 2 instead plays the strategy (Left, Right), both Left and Right are best responses for player 1.

Page 23: Using “Games” to Understand Incentives

Nash Equilibrium

A Nash equilibrium is a set of strategies – one for each player – such that every player is playing a best response strategy given the strategies played by others.

A less formal way of saying the same thing is: A Nash equilibrium occurs when everyone is doing the best he/she can given what everyone else is doing.

To find a pure Nash equilibrium in a 2-player payoff matrix:

– Begin with player 2’s first strategy and ask “what is player 1’s best response?”

– Then, for this best response by player 1, ask “what is player 2’s best response to it?”

If player 2’s first strategy is a best response to player 1’s best response to player 2’s first strategy, then we have found a Nash equilibrium.

– Then move to player 2’s second strategy and go through the same steps again.

In our Left/Right game, we then find a second pure strategy Nash equilibrium.
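The procedure just described can be sketched in code (a minimal sketch, with the payoffs assumed from the text: matching pays 10 each, mismatching 0):

```python
actions = ["Left", "Right"]
u = {(a1, a2): (10, 10) if a1 == a2 else (0, 0) for a1 in actions for a2 in actions}

def best_responses(u, actions, player, other_action):
    """Actions that maximize `player`'s payoff against a fixed opponent action."""
    if player == 1:
        vals = {a: u[(a, other_action)][0] for a in actions}
    else:
        vals = {a: u[(other_action, a)][1] for a in actions}
    best = max(vals.values())
    return [a for a, v in vals.items() if v == best]

# For each player-2 strategy, find player 1's best response, then check that
# player 2 is in turn best-responding -- exactly the loop described above.
equilibria = [(a1, a2)
              for a2 in actions
              for a1 in best_responses(u, actions, 1, a2)
              if a2 in best_responses(u, actions, 2, a1)]
print(equilibria)  # [('Left', 'Left'), ('Right', 'Right')]
```

Both coordination outcomes survive the check, giving the two pure strategy Nash equilibria.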

Page 24: Using “Games” to Understand Incentives

Nash Equilibria NOT Always Efficient


Now suppose that both players were raised in England where people drive on the left side of the road. As a result, the players have an inherent preference for driving on the left, but driving on the right is still better than crashing into one another.

In our payoff matrix, this implies a lower payoff for both players in the lower right box.

Both players playing Left is still a Nash equilibrium – and it is efficient.

But if player 2 plays Right, player 1’s best response isto also play Right. And if player 1 plays Right, player 2’s best response is to still play Right.

Both players playing Right therefore is also still a Nash equilibrium – but it is not efficient.

Would these two pure strategy Nash equilibria remain if one person has an inherent preference for the left side of the road and one has an inherent preference for the right?

Yes. The basic incentives of the coordination game would remain unchanged unless someone prefers crashing into the other over driving on his less-preferred side.

Page 25: Using “Games” to Understand Incentives

Dominant Strategy Nash Equilibrium

In some games, a single strategy is always the best response to any strategy played by others. Such a strategy is called a dominant strategy.

Consider the Up/Down game – which we construct by beginning with the last Left/Right game (where we found two pure strategy Nash equilibria). Now suppose the (0,0) payoffs are changed to (7,7).

Both players playing Up remains a pure strategy equilibrium. But when player 2 plays Down, it is now not the case that Down is player 1’s best response. Rather, player 1’s best response is to play Up. And player 2’s best response to that is to go Up as well, leading us back to the one remaining Nash equilibrium.

This results directly from Up being a dominant strategy for both players: regardless of what the other player does, it is always a best response to play Up.

When every player in a game has a dominant strategy, there is only a single Nash equilibrium in the game. In our case here, this Nash equilibrium is efficient, but that is not always the case …

Page 26: Using “Games” to Understand Incentives

Inefficient Dominant Strategy Equilibrium

Let’s begin again with the coordination game where both players playing Up as well as both players playing Down are Nash equilibria.

But then suppose we change the (0,0) payoffs to (15,0) on the bottom left and to (0,15) on the top right.

We can now check whether the efficient outcome (10,10) can still arise in a Nash equilibrium.

If player 2’s strategy is Up, player 1’s best response is to play Down. So (10,10) cannot arise in equilibrium.

Down is in fact a dominant strategy for player 1,

as it is for player 2.

The unique Nash equilibrium of this game is thus for both players to play Down,which results in the inefficient payoff pair (5,5).

Note that the reason each player is playing Down is NOT because each player anticipates that the other will play Down. Rather, regardless of what the other player does, each player’s best response is to play Down.

We will later call a game of this type a Prisoner’s Dilemma.

Page 27: Using “Games” to Understand Incentives

Mixed Strategies

So far, we have dealt only with pure strategies – i.e. strategies that involve choosing an action with probability 1 at each information set.

A mixed strategy is a game plan that settles on a probability distribution that governs the choice of an action at an information set.

Pure strategies are in fact mixed strategies that select a degenerate probability distribution that places all weight on one action.

As we will see, games with multiple pure strategy equilibria (like the Right/Left game) also have (non-degenerate) mixed strategy equilibria.

And games that have no pure-strategy equilibrium always have a mixed strategy equilibrium.

It is often not easy to interpret what it is that we really mean by a mixed strategy equilibrium.

One way to interpret the concept of a mixed strategy equilibrium in a complete information game is to recognize it as a pure strategy equilibrium in an incomplete information game that is almost identical to the complete information game.

Page 28: Using “Games” to Understand Incentives

Matching Pennies

Consider a game in which You and I each choose Heads or Tails in a simultaneous move setting. If we choose the same action, I get a payoff of 1 and You get a payoff of –1.

If we choose different actions, I get a payoff of –1 and You get a payoff of 1.

If You choose Heads, my best response is to also choose Heads.

But if I choose Heads, Your best response is to choose Tails.

But if You choose Tails, my best response is to also choose Tails.

But if I choose Tails, Your best response is to choose Heads.

Thus, there is no pure strategy Nash Equilibrium in this game.

This game, called the Matching Pennies game, is then often used to motivate the idea of a mixed strategy equilibrium.

Page 29: Using “Games” to Understand Incentives

Best Response Functions with Mixed Strategies

Suppose I believe you will play Heads with probability l, and I try to determine my best response in terms of the probability r with which I play Heads.

My goal is to maximize the chance that I will match what you do …

… which means I will choose Tails if you are likely to choose Tails and Heads if you are likely to choose Heads. The only time I am indifferent between Heads and Tails is if l = 0.5.

My best response probability r to your l is then:

r = 0          if 0 ≤ l < 0.5
0 ≤ r ≤ 1      if l = 0.5
r = 1          if 0.5 < l ≤ 1

Page 30: Using “Games” to Understand Incentives

Mixed Strategy Nash Equilibrium

Your goal is to maximize the chance that you will NOT match what I do …

Your best response probability l to my r is then:

l = 1          if 0 ≤ r < 0.5
0 ≤ l ≤ 1      if r = 0.5
l = 0          if 0.5 < r ≤ 1

In a Nash equilibrium, we have to best-respond to one another … which means our best response functions must intersect. At the intersection, r = l = 0.5.

The only Nash equilibrium of the “Matching Pennies” game is a mixed strategy Nash equilibrium with both of us playing each of our actions with probability 0.5.
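The indifference conditions can be solved in closed form for any 2x2 game with an interior mixed equilibrium; a sketch (the function name and layout are my own, not from the slides), applied to Matching Pennies:

```python
from fractions import Fraction

def mixed_equilibrium_2x2(A, B):
    """Mixed equilibrium via indifference conditions. A[i][j] is the row
    player's payoff, B[i][j] the column player's. Returns (p, q): the
    probabilities of row 0 and column 0. Assumes an interior equilibrium."""
    # q makes the row player indifferent between rows:
    # q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
    q = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p makes the column player indifferent between columns:
    # p*B[0][0] + (1-p)*B[1][0] = p*B[0][1] + (1-p)*B[1][1]
    p = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching Pennies: I (row) win 1 on a match, You (column) win 1 on a mismatch.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_equilibrium_2x2(A, B))  # (Fraction(1, 2), Fraction(1, 2))
```

Both of us mixing 50-50 is exactly the intersection point of the two best response functions above.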

Page 31: Using “Games” to Understand Incentives

Existence of Nash Equilibrium

• Nash showed that every game with finitely many players and strategies has a Nash equilibrium, possibly in mixed strategies.

Page 32: Using “Games” to Understand Incentives

Penalty Kick

                   Goalie: Dive L   Goalie: Dive R
Kicker: Kick L        -1, 1             1, -1
Kicker: Kick R         1, -1           -1, 1

(Payoffs listed as Kicker, Goalie: the kicker scores when the goalie dives the wrong way.)

• A Kicker can kick a ball left or right.
• A Goalie can dive left or right.

Page 33: Using “Games” to Understand Incentives

Mixed Strategy Equilibrium

• This happens in the Penalty Kick game.
• Notice that if the Kicker kicks 50-50 (.5L+.5R), the Goalie is indifferent between diving left and right.
• If the Goalie dives 50-50 (.5L+.5R), the Kicker is indifferent between kicking left and right.
• Thus, (.5L+.5R, .5L+.5R) is a mixed-strategy Nash equilibrium.
• Nash showed that there always exists a Nash equilibrium.

Page 34: Using “Games” to Understand Incentives

Do you believe it?

• Do they really choose only L or R? Yes: Kickers 93.8% and Goalies 98.9% of the time.
• Kickers are either left- or right-footed. Assume R means kicking in the “easier” direction. Below are the percentages of scoring:

                   Goalie: Dive L   Goalie: Dive R
Kicker: Kick L        58.3             94.97
Kicker: Kick R        92.91            69.92

• Nash prediction for (Kicker, Goalie) = (38.54L + 61.46R, 41.99L + 58.01R)
• Actual data = (39.98L + 60.02R, 42.31L + 57.69R)

Page 35: Using “Games” to Understand Incentives

The Equilibrium Strategy for Player 2

u1(L, (1-q)L + qR) = u1(R, (1-q)L + qR)

58.3(1-q) + 94.97q = 92.91(1-q) + 69.92q

58.3 + (-58.3 + 94.97)q = 92.91 + (-92.91 + 69.92)q

58.3 + 36.67q = 92.91 - 22.99q

59.66q = 34.61

q* = 34.61/59.66 = 0.5801
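As a quick check, the same indifference condition can be solved directly from the four scoring percentages:

```python
# Kicker's indifference: u1(Kick L) = 58.3(1-q) + 94.97q equals
# u1(Kick R) = 92.91(1-q) + 69.92q, i.e. 59.66q = 34.61.
q_star = (92.91 - 58.3) / ((94.97 - 58.3) + (92.91 - 69.92))
print(round(q_star, 4))  # 0.5801
```

This matches the Nash prediction that the goalie dives R about 58% of the time.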

(Here the Kicker plays L with probability 1-p and R with probability p; the Goalie dives L with probability 1-q and R with probability q. Scoring percentages as in the table above.)

Page 36: Using “Games” to Understand Incentives

[Figure: the kicker’s expected payoffs u1 from Kick L and Kick R as functions of q; the two lines cross at q* ≈ 0.58.]

Page 37: Using “Games” to Understand Incentives

The Equilibrium Strategy for Player 1

u2((1-p)L + pR, L) = u2((1-p)L + pR, R)

-58.3(1-p) - 92.91p = -94.97(1-p) - 69.92p

-58.3 - 34.61p = -94.97 + 25.05p

59.66p = 36.67

p* = 36.67/59.66 = 0.6146

(The Goalie’s payoffs are the negatives of the scoring percentages: -58.3, -94.97, -92.91, -69.92, with the same mixing probabilities p and q as before.)

Page 38: Using “Games” to Understand Incentives

[Figure: the goalie’s expected payoffs u2 from Dive L and Dive R as functions of p; the two lines cross at p* ≈ 0.61.]

Page 39: Using “Games” to Understand Incentives

Parking Enforcement Game

                          University: Check   University: Don’t
Student: Park OK               -5, -5              -5, 5
Student: Park in Staff        -95, 5                5, 0

(Payoffs listed as Student, University.)

• The Student can decide to park in staff parking.
• The University can check cars in the staff parking lot.

Page 40: Using “Games” to Understand Incentives

What happens?

• If the University checks, what do the students do?
• If the students park OK, what does the University do?
• If the University doesn’t check, what do the students do?
• If the students park in the staff parking, what does the University do?
• What is the equilibrium of the game?
• What happens if the University makes the punishment less harsh, only –10? Who benefits from this? Who is hurt by this?

Page 41: Using “Games” to Understand Incentives

Best Replies of Students

• Suppose the University checks with probability 1 - p.

• Payoff for the student:
  – If he parks OK: -5
  – If he parks in the staff area: -95(1-p) + 5p

• It is better to park OK if -5 > -95(1-p) + 5p, i.e. 90 > 100p, i.e. p < 0.9, i.e. 1 - p > 0.1 – that is, if the student gets checked with more than 10% probability.
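A minimal sketch of this comparison, with the payoffs taken from the text:

```python
def park_ok():
    return -5                       # legal parking always costs -5

def park_in_staff(p):
    return -95 * (1 - p) + 5 * p    # fine of -95 if checked, +5 if not

# Parking legally is strictly better exactly when p < 0.9, i.e. when the
# check probability 1 - p exceeds 10%:
for p in (0.85, 0.9, 0.95):
    print(p, park_ok() > park_in_staff(p))  # True, False, False
```

At p = 0.9 the student is exactly indifferent, which is what pins down the university’s equilibrium mix.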

Page 42: Using “Games” to Understand Incentives

[Figure: the student’s expected payoff from parking OK and from parking in the staff area as functions of p; the lines cross at p = 0.9.]

Page 43: Using “Games” to Understand Incentives

Best Replies of Students

[Figure: the student’s best-reply correspondence in the (q, p) plane: q = 0 (park OK) for p < 0.9, q = 1 (park in staff area) for p > 0.9, and any q at p = 0.9.]

Page 44: Using “Games” to Understand Incentives

Terminology

• In industrial organization (IO), this is called a response function.
• Since it need not be single-valued (so not really a function), it is more precisely a best-reply correspondence.

Page 45: Using “Games” to Understand Incentives

Best Replies of the University

• Suppose the student parks in the staff area with probability q.

• Payoff for the University:
  – If the University checks: -5(1-q) + 5q
  – If not: 5(1-q)

• It is better to check if -5(1-q) + 5q > 5(1-q); 5q > 10(1-q); 15q > 10; q > 2/3 – that is, if the student parks in the staff area with probability greater than 66.7%.
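A sketch of the university’s side of the calculation (payoffs from the text), using exact fractions to avoid rounding at the indifference point:

```python
from fractions import Fraction

def check(q):
    return -5 * (1 - q) + 5 * q     # -5 against a legal parker, +5 on a catch

def dont_check(q):
    return 5 * (1 - q)              # +5 against a legal parker, 0 otherwise

q_star = Fraction(2, 3)             # from -5(1-q) + 5q = 5(1-q)  =>  15q = 10
print(check(q_star) == dont_check(q_star))                 # True: indifferent at q = 2/3
print(check(Fraction(3, 4)) > dont_check(Fraction(3, 4)))  # True: check pays if q > 2/3
```

The student’s equilibrium mix must therefore put probability exactly 2/3 on parking in the staff area.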

Page 46: Using “Games” to Understand Incentives

[Figure: the University’s expected payoff from checking and from not checking as functions of q; the lines cross at q = 2/3.]

Page 47: Using “Games” to Understand Incentives

Best Replies of the University

It is optimal for the University:
• to check if and only if q > 2/3;
• not to check if and only if q < 2/3;
• to randomize between the two options in any way if and only if q = 2/3.

Page 48: Using “Games” to Understand Incentives

Best Replies of the University

[Figure: the University’s best-reply correspondence in the (q, p) plane, where q is the probability of parking in the staff area: don’t check for q < 2/3, check for q > 2/3, and any mix at q = 2/3.]

Page 49: Using “Games” to Understand Incentives

Nash Equilibrium

[Figure: the two best-reply correspondences intersect at q = 2/3, p = 0.9.]

Nash equilibrium in mixed strategies: (0.1C + 0.9NC, (1/3)OK + (2/3)S)

Page 50: Using “Games” to Understand Incentives

Nash Equilibrium 1

• Student parks legally 1/3 of the time and the Uni checks 1/10 of the time.

• Penalty: 95

Page 51: Using “Games” to Understand Incentives

Change in Penalty?

• Suppose that, because of students parking illegally, the penalty has recently been increased from 10 to 95. To find out how this has improved the situation, we have to calculate the previous Nash equilibrium at the penalty of 10…

Page 52: Using “Games” to Understand Incentives

Parking Enforcement Game

                          University: Check   University: Don’t
Student: Park OK               -5, -5              -5, 5
Student: Park in Staff        -10, 5                5, 0

(Payoffs listed as Student, University; the penalty is now only -10.)

• The Student can decide to park in staff parking.
• The University can check cars in the staff parking lot.

Page 53: Using “Games” to Understand Incentives

[Figure: for comparison, the student’s expected payoffs as functions of p under the old penalty of 95, crossing at p = 0.9.]

Page 54: Using “Games” to Understand Incentives

[Figure: with the penalty of 10, the student’s expected payoff from parking in the staff area, -10(1-p) + 5p, crosses the payoff -5 from parking OK at p = 1/3.]

Page 55: Using “Games” to Understand Incentives

Nash Equilibrium

[Figure: with the penalty of 10, the best-reply correspondences intersect at q = 2/3, p = 1/3.]

Nash equilibrium in mixed strategies: ((2/3)C + (1/3)NC, (1/3)OK + (2/3)S)

Page 56: Using “Games” to Understand Incentives

Answer

• With the lower penalty, the student parked legally 1/3 of the time and the University checked 2/3 of the time.
• Whose expected payoff changes? No one’s.
• With the higher penalty, the parking “inspectors” can be more “lazy”, while the probability of “illegal” parking has not changed…

Page 57: Using “Games” to Understand Incentives

The Watchman and the Thief

• Experimental observations are broadly consistent with these comparative statics, but there are strong own-payoff effects.

Page 58: Using “Games” to Understand Incentives

Back to the Right/Left Game

In the Right/Left game, my goal continues to be to match what you do.

Letting r and l be the probabilities you and I place on Left, my best response function then looks as it did for “Matching Pennies”. But your best response function now also has you trying to match what I do …

… resulting in three intersections of our best response functions:

r = 0 and r = 1 – our previous pure strategy Nash equilibria

r = 0.5 – a new mixed strategy Nash equilibrium

Whenever there are multiple pure strategy Nash equilibria, there is also at least one mixed strategy equilibrium.

Page 59: Using “Games” to Understand Incentives

Mixed Strategy NE with Different Probabilities


Now suppose that the payoffs from coordinating on Right are 5 instead of 10, with r and l still the probabilities you and I place on Left.

My expected payoff from going Left is then 10l, and my expected payoff from going Right is 5(1 - l). These are equal to one another when l = 1/3, implying I am indifferent between Left and Right when l = 1/3, will best-respond with Left when l > 1/3, and with Right when l < 1/3.

You face the same incentives, giving rise to the same best response l to my r.

We still see the same pure strategy Nash equilibria, but now the mixed strategy equilibrium has us both playing Left with probability 1/3.
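The indifference calculation can be verified with exact fractions (a minimal sketch, with the payoffs from the text: matching on Left pays 10, matching on Right pays 5):

```python
from fractions import Fraction

def u_left(l):
    return 10 * l           # payoff 10 with probability l, else 0

def u_right(l):
    return 5 * (1 - l)      # payoff 5 with probability 1 - l, else 0

l_star = Fraction(1, 3)     # from 10l = 5(1 - l)  =>  15l = 5
print(u_left(l_star) == u_right(l_star))  # True: indifferent at l = 1/3
```

Note the twist: in the mixed equilibrium, each of us plays the action with the *higher* coordination payoff less often, since that is what keeps the other indifferent.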

Page 60: Using “Games” to Understand Incentives

Stag Hunt

[Payoff matrix for Hunter 1 and Hunter 2, each choosing Stag or Rabbit: hunting the stag together pays the most, hunting the stag alone pays the least, and hunting a rabbit is a safe middle payoff.]

Example based on Rousseau.

3 Nash equilibria: (S,S), (R,R) and ((7/9)S + (2/9)R, (7/9)S + (2/9)R)

Page 61: Using “Games” to Understand Incentives

Pure Coordination

[Payoff matrix for Amelia (Top = help, Bottom = fun) and Harry (L = help, R = fun); one of the payoffs is a parameter x.]

Page 62: Using “Games” to Understand Incentives

Coordination Problem

                  Jim: Beta    Jim: VHS
Sean: Beta          2, 2         0.5, 0
Sean: VHS           0, 0.5       1, 1

• Jim and Sean want to have the same VCR.
• Beta is a better technology than VHS.

Both choosing Beta and both choosing VHS (the starred cells on the slide) are the pure strategy Nash equilibria.

Page 63: Using “Games” to Understand Incentives

Battle of the Sexes

                   Bob: Boxing   Bob: Ballet
Alice: Boxing         2, 1           0, 0
Alice: Ballet         0, 0           1, 2

(Payoffs listed as Alice, Bob: each prefers being together, but Alice prefers boxing and Bob prefers ballet.)

All these games also have mixed strategy equilibria. Here: both play their preferred strategy with probability 2/3. Then each party gets the same expected payoff from each strategy, and so all strategies are optimal.

Page 64: Using “Games” to Understand Incentives

Chicken

                   Teen 2: Dare   Teen 2: Chicken
Teen 1: Dare          0, 0             7, 4
Teen 1: Chicken       4, 7             5, 5

• Example discussed by Bertrand Russell.
• Motivated by a film with James Dean.
• (German “Halbstarker” ≈ “teddy boys”?)

Page 65: Using “Games” to Understand Incentives

Iterated Elimination of Dominated Strategies

         L        M        R
T      5, -5    3, -3    2, -2
C      6, -6    4, -4    3, -3
B      1, -1    6, -6    0, 0

(Player 1 chooses rows T/C/B; player 2 chooses columns L/M/R; the game is zero-sum.)


Page 67: Using “Games” to Understand Incentives

Iterated Elimination of Dominated Strategies

Row C gives player 1 a strictly higher payoff than row T against every column (6 > 5, 4 > 3, 3 > 2):

T DOMINATED BY C, ELIMINATE T

Page 68: Using “Games” to Understand Incentives

Iterated Elimination of Dominated Strategies

         L        M        R
C      6, -6    4, -4    3, -3
B      1, -1    6, -6    0, 0

M DOMINATED BY R (for player 2), ELIMINATE M

Page 69: Using “Games” to Understand Incentives

Iterated Elimination of Dominated Strategies

         L        R
C      6, -6    3, -3
B      1, -1    0, 0

B DOMINATED BY C, ELIMINATE B

Page 70: Using “Games” to Understand Incentives

Iterated elimination of dominated strategies

         L        R
C:   6, -6    3, -3

L DOMINATED BY R, ELIMINATE L

Page 71: Using “Games” to Understand Incentives

Iterated elimination of dominated strategies

         R
C:   3, -3

UNIQUE NASH EQUILIBRIUM: (C, R)
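The elimination steps above can be reproduced mechanically. A sketch in Python, using the row player's payoffs from the matrix (the column player's payoffs are their negatives, since the game is zero-sum):

```python
# Row player's payoffs; the column player's payoffs are the negatives (zero-sum).
A = [[5, 3, 2],   # T
     [6, 4, 3],   # C
     [1, 6, 0]]   # B
row_names, col_names = ["T", "C", "B"], ["L", "M", "R"]

def eliminate(A, row_names, col_names):
    """Iteratively delete strictly dominated pure strategies for both players."""
    rows = list(range(len(row_names)))
    cols = list(range(len(col_names)))
    changed = True
    while changed:
        changed = False
        # Row i is dominated if some other row k beats it against every column.
        for i in list(rows):
            if any(all(A[k][j] > A[i][j] for j in cols) for k in rows if k != i):
                rows.remove(i)
                changed = True
        # Column j is dominated for the column player (whose payoffs are -A).
        for j in list(cols):
            if any(all(-A[i][k] > -A[i][j] for i in rows) for k in cols if k != j):
                cols.remove(j)
                changed = True
    return [row_names[i] for i in rows], [col_names[j] for j in cols]

print(eliminate(A, row_names, col_names))  # (['C'], ['R'])
```

The exact deletion order may differ from the slides (here L and M fall in the same pass), but iterated elimination of strictly dominated strategies always ends at the same survivor: (C, R).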

Page 72: Using “Games” to Understand Incentives

Extensive games

Player 1 moves first, choosing L or R. L ends the game with payoffs (0, 0). After R, player 2 chooses l or r, ending the game with payoffs (5, 3) or (-10, 5).

Page 73: Using “Games” to Understand Incentives

The frog and the scorpion

The frog chooses L (do not carry) or R (carry); if carried across the river, the scorpion chooses whether to sting. Payoffs (frog, scorpion): do not carry → (0, 0); carry, no sting → (5, 3); carry, sting → (-10, 5).

Page 74: Using “Games” to Understand Incentives

The subgame-perfect equilibrium point (Selten 1965)

In the frog-and-scorpion tree, the scorpion's best response once carried is to sting (5 > 3). Anticipating this, the frog prefers not to carry (0 > -10). The subgame-perfect equilibrium is (do not carry, sting).
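Backward induction on this small tree can be written out directly. A sketch in Python with the tree encoded as nested tuples/dicts (the encoding is mine; the payoffs are those of the frog-and-scorpion slide):

```python
# Leaves are payoff tuples (frog, scorpion); internal nodes are
# (player, {move: subtree}).
tree = ("frog", {
    "do not carry": (0, 0),
    "carry": ("scorpion", {
        "no sting": (5, 3),
        "sting": (-10, 5),
    }),
})
INDEX = {"frog": 0, "scorpion": 1}

def solve(node):
    """Return (payoffs, plan) of the subgame-perfect play of a subtree."""
    if not isinstance(node[1], dict):   # leaf: a plain payoff tuple
        return node, {}
    player, moves = node
    plan, best_move, best_payoffs = {}, None, None
    for move, child in moves.items():
        payoffs, sub_plan = solve(child)
        plan.update(sub_plan)           # keep every player's credible plan
        if best_payoffs is None or payoffs[INDEX[player]] > best_payoffs[INDEX[player]]:
            best_move, best_payoffs = move, payoffs
    plan[player] = best_move
    return best_payoffs, plan

payoffs, plan = solve(tree)
print(payoffs, plan)  # (0, 0) {'scorpion': 'sting', 'frog': 'do not carry'}
```

The recursion mirrors Selten's logic: it first resolves the scorpion's node (sting, since 5 > 3), then lets the frog choose against that credible plan.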

Page 75: Using “Games” to Understand Incentives

Nash Equilibria in Sequential Move Games

Now consider a sequential version of the Left/Right game where both players are British and thus have an inherent preference for driving on the left side of the road. Recall that player 2 has four possible strategies, and a Nash equilibrium is a set of strategies such that both players are best-responding to one another.

Suppose player 2 plays (Left,Left) – i.e. she always plans to go Left.

Player 1 then best-responds by playing Left. And (Left,Left) is a best response for player 2 to Left. The outcome (10,10) is then a

Nash equilibrium outcome.

But suppose player 2 plays (Right, Right) – i.e. she always plans to go Right.

Then player 1 best-responds by going Right.

And (Right, Right) is a best response for player 2 to Right …

… giving us a second possible

Nash equilibrium outcome.


Page 77: Using “Games” to Understand Incentives

Finding all NE in Sequential Games

One way to find all the pure-strategy Nash equilibria in a sequential game is to depict the game in a payoff matrix – with strategies on the axes of the matrix.

When the players end up driving on opposite sides of the road, they each get a payoff of 0. When both end up driving on the right side of the road, they each get a payoff of 5. And when both drive on the left, they each get a payoff of 10.

If player 2 picks either (Left,Left) or (Left,Right), player 1 best-responds with Left,

and the original player 2 strategies are best responses to Left – giving us two Nash equilibrium outcomes. But if player 2 picks (Right,Right), player 1 best-responds with Right, to which the original (Right,Right) is a best response for player 2 – giving us a third Nash equilibrium.
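These three equilibria can be found by brute force over the 2 × 4 strategy matrix. A sketch in Python, with the payoff numbers as in the slides (both Left gives (10, 10), both Right (5, 5), a mismatch (0, 0)):

```python
from itertools import product

def road_payoff(side1, side2):
    # Both Left -> (10, 10); both Right -> (5, 5); opposite sides -> (0, 0).
    if side1 == side2 == "L":
        return (10, 10)
    if side1 == side2 == "R":
        return (5, 5)
    return (0, 0)

p1_strats = ["L", "R"]
p2_strats = list(product("LR", repeat=2))   # (response to L, response to R)

def outcome(s1, s2):
    side2 = s2[0] if s1 == "L" else s2[1]   # player 2 moves after observing s1
    return road_payoff(s1, side2)

# A profile is a Nash equilibrium if neither player gains by deviating alone.
nash = [(s1, s2) for s1, s2 in product(p1_strats, p2_strats)
        if all(outcome(t1, s2)[0] <= outcome(s1, s2)[0] for t1 in p1_strats)
        and all(outcome(s1, t2)[1] <= outcome(s1, s2)[1] for t2 in p2_strats)]
print(nash)  # [('L', ('L', 'L')), ('L', ('L', 'R')), ('R', ('R', 'R'))]
```

The enumeration recovers exactly the three pure-strategy Nash equilibria described above, including the (Right, (Right,Right)) one that backward induction will later discard as non-credible.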

Page 78: Using “Games” to Understand Incentives

Non-Credible Plans and Threats

There is, however, a problem with the Nash equilibrium in which all players end up choosing Right. Although every player is indeed best-responding, the equilibrium requires player 1 to believe that player 2 plans to choose Right from her left node – i.e. even after observing player 1 choose Left!

Since player 2 never reaches that node in the equilibrium, planning to always go Right is still a best-response if player 1 plays Right.

But such a player 2 plan should be viewed as non-credible by

player 1.


So far, player 2 has no incentive to threaten such a plan … but that changes if player 2 is American and not British! Player 2's "plan" to play (Right,Right) is now intended as a threat to player 1 … and player 1 best-responds by playing Right IF he believes the threat. But the threat is still just as non-credible.

Page 79: Using “Games” to Understand Incentives

Eliminating Non-Credible Threats

We can eliminate Nash equilibria that are based on non-credible threats by assuming player 1 will only consider credible strategies by player 2.

Player 1 therefore looks down the game tree and knows player 2 will play Left after observing Left from player 1, and he similarly knows player 2 will play Right after observing Right.

The only credible strategy for player 2 is then (Left,Right). Such credible strategies are called subgame-perfect.

The name “subgame-perfect” is motivated by the fact that we are considering subgames that begin at each information node … and we insist that the “sub-strategies” from each node are best responses from that node.

Page 80: Using “Games” to Understand Incentives

Subgame Perfect Nash Equilibrium

Subgame-perfect strategies (Selten 1965) are then strategies that involve best responses at every information node – whether that node is reached in equilibrium or not.

As a result, all non-credible threats are eliminated when we focus only on subgame-perfect strategies.

A subgame-perfect Nash equilibrium is a Nash equilibrium in which only subgame-perfect strategies are played.

When a sequential game has a finite number of stages, it is easy to solve for the subgame-perfect Nash equilibrium:

– Begin with the last stage (at the bottom of the tree) and ask what actions the player will take from each of her information nodes.

– Go to the second-to-last stage and ask what actions the player will take from each information node, given that he knows the next player’s credible strategy.

– Keep working backwards along the tree in the same way.

(If a player cannot tell which node he is playing from, we can extend this definition by defining subgames as starting at an information node that is also an information set.)

Page 81: Using “Games” to Understand Incentives

Another example

Consider the case of an Existing Firm that is threatened by competition from a Potential Firm that might choose to enter the market.

The Existing Firm moves first and sets either a Low or a High price. The Potential Firm then either Enters or Doesn’t Enter. If it Enters after the Existing Firm sets a Low price, its expected profit is negative (because of a fixed entry cost) while the first firm’s profit is positive. If it Doesn’t Enter at a Low price, it makes zero profit while the first firm has higher profits.

If the Potential Firm Enters following a High price set by the Existing Firm, it can price lower and make positive profit while causing the first firm to make losses.

But if the Potential Firm Doesn’t Enter, the Existing Firm gets the highest possible profit (and the second firm gets zero.)

Page 82: Using “Games” to Understand Incentives

Solving for the Subgame-Perfect Equilibrium

To solve for the subgame-perfect Nash equilibrium, we solve from the bottom up.

We begin at the Potential Firm’s first node and see that, following a Low price, it does better by not entering. And from its second node (following High), the Potential Firm does better by Entering.

The Potential Firm’s subgame-perfect strategy is then (Don’t Enter, Enter).

The Existing Firm can anticipate this and therefore chooses between a payoff of 20 for Low and a payoff of -10 for High.

In the subgame-perfect equilibrium, the Existing Firm sets a Low price and the Potential Firm stays out of the market.
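The two-stage logic can be sketched numerically. Only the Existing Firm's payoffs of 20 (Low, no entry) and -10 (High, entry) are stated in the text; the other numbers below (5, -5, 10, 30, 0) are hypothetical placeholders chosen only to have the signs the text describes:

```python
# payoffs[price][entry] = (Existing Firm, Potential Firm).
# 20 and -10 come from the slides; the rest are illustrative placeholders.
payoffs = {
    "Low":  {"Enter": (5, -5),   "Don't Enter": (20, 0)},
    "High": {"Enter": (-10, 10), "Don't Enter": (30, 0)},
}

# Stage 2: the Potential Firm best-responds to each observed price.
entry_plan = {price: max(options, key=lambda e: options[e][1])
              for price, options in payoffs.items()}

# Stage 1: the Existing Firm anticipates that plan and picks its price.
price = max(payoffs, key=lambda p: payoffs[p][entry_plan[p]][0])

print(price, entry_plan)  # Low {'Low': "Don't Enter", 'High': 'Enter'}
```

Whatever the exact placeholder numbers, as long as the signs match the story, the bottom-up solution is the same: (Don't Enter, Enter) for the Potential Firm and a Low price for the Existing Firm.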

Page 83: Using “Games” to Understand Incentives

The Prisoner’s Dilemma

The Prisoner’s Dilemma is the most famous game in game theory because it applies to an extraordinarily large number of real world settings.

It derives its name from the following scenario:

– Two prisoners were caught committing a minor crime, but the prosecutor is quite convinced that they also committed a more major crime together.

– The prosecutor puts them in separate rooms and tells each of them that they can either Deny the major crime or Confess to it.

– If they both Deny, they will both be convicted of the minor crime and receive a 1-year jail sentence.

– If they both Confess, the prosecutor will agree to a plea bargain that will get both of them a 5-year jail sentence.

– If prisoner 1 Confesses while prisoner 2 Denies, the prosecutor will use the confession to get a 20-year jail sentence for prisoner 2 while letting prisoner 1 go free.

– If prisoner 2 Confesses while prisoner 1 Denies, prisoner 1 gets the 20-year jail sentence while prisoner 2 goes free.

Page 84: Using “Games” to Understand Incentives

Nash Equilibrium in Prisoner’s Dilemma

Now consider prisoner 1’s incentives:

– If prisoner 2 Denies, prisoner 1’s best response is to Confess and get no jail.

– If prisoner 2 Confesses, prisoner 1’s best response is to Confess and get 5 years in jail (rather than 20).

Prisoner 2 has the same incentives to Confess regardless of what prisoner 1 does.

Confessing is therefore a dominant strategy for both prisoners.

In the only Nash equilibrium to the game, both prisoners therefore confess and both end up with 5-year jail sentences …

… but both would much prefer the outcome of only a 1-year jail sentence in the top left corner of the payoff matrix.

Cooperating with one another by Denying the crime is the efficient outcome in this game – but it cannot be sustained in an equilibrium.
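The dominance argument is easy to verify mechanically. A sketch in Python using the jail sentences from the scenario (measured in years, so fewer is better):

```python
# Jail years (prisoner 1, prisoner 2) from the scenario; fewer years is better.
years = {("Deny", "Deny"): (1, 1),    ("Deny", "Confess"): (20, 0),
         ("Confess", "Deny"): (0, 20), ("Confess", "Confess"): (5, 5)}
actions = ["Deny", "Confess"]

# Confess minimizes prisoner 1's sentence against either choice by prisoner 2.
dominant = all(min(actions, key=lambda a1: years[(a1, a2)][0]) == "Confess"
               for a2 in actions)

# Pure-strategy Nash equilibria: nobody can cut their own sentence unilaterally.
nash = [(a1, a2) for a1 in actions for a2 in actions
        if years[(a1, a2)][0] == min(years[(b, a2)][0] for b in actions)
        and years[(a1, a2)][1] == min(years[(a1, b)][1] for b in actions)]
print(dominant, nash)  # True [('Confess', 'Confess')]
```

The check confirms both claims: Confess is a best response to everything (a dominant strategy), and (Confess, Confess) is the game's only Nash equilibrium, even though (Deny, Deny) would leave both prisoners better off.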

Page 85: Using “Games” to Understand Incentives

Hiring an Enforcer

Suppose the two prisoners entered into an agreement prior to committing the major crime – a cooperative agreement to Deny the crime if questioned.

In the absence of an enforcement mechanism, such an agreement cannot be sustained in equilibrium …

… because it remains a dominant strategy to Confess when the time comes.

It then becomes in each player’s best interest to hire an enforcer for their agreement.

The two could, for instance, join a mafia organization that changes the payoff to Confessing to some X that is much worse than 20 years in jail.


It then becomes a dominant strategy to Deny – with “efficiency” restored.

In many real-world applications of the Prisoner’s Dilemma incentives, the “outside enforcer” we hire is the government.

Examples: Taxes for public goods; combating negative externalities

In other cases, social norms and social pressures can take the role of altering the incentives of a Prisoner’s Dilemma.

Page 86: Using “Games” to Understand Incentives

“Unraveling of Cooperation” in Repeated PD

It may seem that repeated interactions may also induce cooperation. Consider then a different version of the Prisoner’s Dilemma, with Me and You as players and with payoffs in terms of utility or dollar values.

The only Nash equilibrium is again for neither of us to cooperate – thus getting payoffs (10,10) when we could have (100,100) by cooperating.

Now suppose we play this game N different times – giving us a sequential move game in which we repeatedly play a simultaneous move game at each of N stages.

In the final stage, the game becomes the same as if we only played once – implying a dominant strategy of NOT cooperating in the final stage.

To find the subgame-perfect equilibrium, we now have to work backwards.

In the second-to-last stage, we know there will be no cooperation in the last stage – which makes it a dominant strategy to NOT cooperate in the second-to-last stage.

Continuing the same reasoning, we see that cooperation unravels from the bottom of the sequential game up.

Page 87: Using “Games” to Understand Incentives

“Infinitely Repeated” Prisoner’s Dilemmas

But now suppose that the game is repeated infinitely many times, or, more realistically, that each time we encounter each other we know that there is a good chance we’ll encounter each other again to play the same game.

We can then no longer “solve the game from the bottom up” – because there is no “bottom”.

Suppose that I then propose we play the following strategy:

I will Cooperate the first time, and I will continue to Cooperate as long as you cooperated in all previous stages. But if you ever Don’t Cooperate, I will NEVER cooperate again.

Is it a best response for you to play the same strategy?

Yes, assuming the chance of us meeting again is sufficiently large so that the present discounted value of future cooperation outweighs the immediate one-time benefit from not cooperating.

Such a strategy is called a trigger strategy because it uses an action by the opposing player to “trigger” a change in behavior.
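The "sufficiently large" condition can be made concrete. The slides give 100 for mutual cooperation and 10 for mutual defection; the one-time temptation payoff T = 150 below is a hypothetical number, and d stands for the discount factor (equivalently, the continuation probability):

```python
# R = mutual cooperation, P = mutual defection (both from the slides);
# T = one-time temptation from defecting first (hypothetical placeholder).
R, P, T = 100, 10, 150

def cooperate_value(d):
    # Normalized value of cooperating forever: (R + R*d + R*d^2 + ...)(1 - d) = R.
    return R

def deviate_value(d):
    # Temptation once, then mutual defection forever after the trigger fires.
    return T * (1 - d) + P * d

# Deviating is unprofitable iff d > (T - R) / (T - P).
critical = (T - R) / (T - P)
print(round(critical, 3))  # 0.357
```

With these numbers, the trigger strategy sustains cooperation whenever d exceeds roughly 0.357; a larger temptation T or a weaker punishment P raises that threshold.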

Page 88: Using “Games” to Understand Incentives

Repeated Games

Payoffs (Mars, Venus):

                  Venus: Shoot   Venus: Not Shoot
Mars: Shoot          -10, -10        -1, -15
Mars: Not Shoot      -15, -1         -5, -5

Page 89: Using “Games” to Understand Incentives

Experiments on PD
• Pure one-shot game versus random matching: cooperation dies out quickly
• Mild gender effects
• Does Studying Economics Inhibit Cooperation? Frank, Gilovich, and Regan claim that economics students are less cooperative than other students
• Now: THEORY of repeated PDs with fixed matching

Page 90: Using “Games” to Understand Incentives

Repeated games
• 1. If the game is repeated with the same players, then there may be ways to enforce a better solution to the prisoners’ dilemma.
• 2. Suppose the PD is repeated 10 times and people know it – then backward induction says it is a dominant strategy to cheat every round.
• 3. Suppose the PD is repeated an indefinite number of times – then it may pay to cooperate.
• 4. Axelrod’s experiment: tit-for-tat

Page 91: Using “Games” to Understand Incentives

Continuation payoff
• Your payoff is the sum of your payoff today plus the discounted “continuation payoff”.
• Both depend on your choice today.
• If you get punished tomorrow for bad behaviour today and you value the future sufficiently highly, it is in your self-interest to behave well today.
• You trade off short-run gains against long-run gains.

Page 92: Using “Games” to Understand Incentives

Infinitely repeated PD
• Discounted payoff with discount factor d, 0 < d < 1 (so d^0 = 1).
• Normalized payoff: (d^0 u_0 + d^1 u_1 + d^2 u_2 + … + d^t u_t + …)(1 - d).
• Geometric series: (d^0 + d^1 + d^2 + … + d^t + …)(1 - d) = (d^0 + d^1 + d^2 + …) - (d^1 + d^2 + d^3 + …) = d^0 = 1.
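The normalization can be sanity-checked numerically by truncating the infinite sums at a large t:

```python
d = 0.9

# (d^0 + d^1 + ...)(1 - d) should equal 1 ...
total = sum(d**t for t in range(10_000)) * (1 - d)

# ... and a constant stream u each period should normalize to u itself.
u = -5
stream = sum(u * d**t for t in range(10_000)) * (1 - d)

print(total, stream)
```

With d = 0.9 the truncated sums agree with the closed forms (1 and u) to machine precision, since d^10000 is vanishingly small.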

Page 93: Using “Games” to Understand Incentives

Infinitely repeated PD
• A constant “income stream” u_0 = u_1 = u_2 = … = u each period yields total normalized income u.
• Grim Strategy: choose “Not Shoot” until someone chooses “Shoot”; always choose “Shoot” thereafter.
• It is non-forgiving; problem: it is not “renegotiation-proof”.

Page 94: Using “Games” to Understand Incentives

• Payoff if nobody shoots: (-5d^0 - 5d^1 - 5d^2 - … - 5d^t - …)(1 - d) = -5, which we can write as -5 = -5(1 - d) - 5d.
• Maximal payoff from shooting in the first period (since -15 < -10, the best deviation is to shoot while the other does not, then face mutual shooting forever): (-1d^0 - 10d^1 - 10d^2 - … - 10d^t - …)(1 - d) = -1(1 - d) - 10d.
• Deviating is unprofitable iff -1(1 - d) - 10d < -5(1 - d) - 5d, i.e. 4(1 - d) < 5d, i.e. 4 < 9d, i.e. d > 4/9 ≈ 0.44.
• Cooperation can be sustained if d > 4/9, i.e. if players weight the future sufficiently highly.
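The comparison is easy to check at specific discount factors; a minimal check in Python with the normalized payoffs from the Mars/Venus game:

```python
def cooperate(d):
    # Nobody ever shoots: -5 every period, normalized value -5.
    return -5 * (1 - d) + -5 * d          # = -5 for any d

def deviate(d):
    # Shoot in the first period (-1), then mutual shooting (-10) forever.
    return -1 * (1 - d) + -10 * d

critical = 4 / 9
print(round(critical, 2))  # 0.44
assert deviate(0.5) < cooperate(0.5)   # d = 0.5 > 4/9: deviating hurts
assert deviate(0.4) > cooperate(0.4)   # d = 0.4 < 4/9: deviating pays
```

At d = 0.5 the deviation is worth -5.5 against -5 from cooperating, so the grim strategy holds; at d = 0.4 the deviation is worth -4.6 and cooperation unravels, exactly as the d > 4/9 condition predicts.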

Page 95: Using “Games” to Understand Incentives

Selten / Stöcker, 1986
• Students play a 10-round fixed-pair repeated PD five times.
• New, random assignment of pairs for each play of the repeated game.
• Results: initially chaos; players learn to cooperate and to use punishments.
• With experience: cooperation emerges.
• With more plays: players learn to defect in the last periods (end effect).
• The final stretch of defection gets longer.

Page 96: Using “Games” to Understand Incentives

Equilibrium Trigger Strategies

Each of us playing this trigger strategy is therefore a Nash equilibrium of the infinitely repeated Prisoner’s Dilemma – with cooperation emerging in equilibrium.

Is it subgame-perfect?

Since we can no longer solve for the subgame-perfect equilibrium from the bottom up, we have to return to a more general definition of subgame-perfect:

A Nash equilibrium is subgame-perfect if all subgames – whether they are reached in equilibrium or not – also involve Nash equilibrium strategies.

Note that each subgame of the infinitely repeated Prisoner’s Dilemma is also an infinitely repeated Prisoner’s Dilemma!

Since each of us playing our trigger strategy is a Nash equilibrium to the infinitely repeated Prisoner’s Dilemma, it is therefore also a Nash equilibrium in every subgame.

Cooperation can therefore be sustained as a subgame-perfect Nash equilibrium in infinitely repeated Prisoner’s Dilemma games.

Page 97: Using “Games” to Understand Incentives

More “Forgiving” Trigger Strategies

The trigger strategy that punishes one act of non-cooperation with eternal non-cooperation is the most unforgiving trigger strategy that can lead to cooperation.

The same outcome can be achieved with more forgiving trigger strategies that punish deviations from cooperation with some period of non-cooperation but give the opposing player a chance to restore cooperation at a cost.

The most famous such strategy is known as Tit-for-Tat:

I will Cooperate the first time, and from then on I will mimic what you did the last time we met.
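A quick simulation shows how Tit-for-Tat behaves. This sketch uses the conventional PD payoffs (3 for mutual cooperation, 1 for mutual defection, 5/0 for the temptation/sucker outcomes), not the numbers used elsewhere in these slides:

```python
# Conventional PD payoffs (player 1, player 2); "C" cooperate, "D" defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards mimic the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds=10):
    h1, h2, u1, u2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = s1(h2), s2(h1)      # each strategy sees the other's history
        g1, g2 = PAYOFF[(a1, a2)]
        u1, u2 = u1 + g1, u2 + g2
        h1.append(a1)
        h2.append(a2)
    return u1, u2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation every round
print(play(tit_for_tat, always_defect))  # (9, 14): punished after round one
```

Against itself, Tit-for-Tat cooperates forever; against a defector it pays the sucker payoff exactly once and then matches defection, which is the retaliate-but-forgive property that made it successful in Axelrod's tournaments.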

A branch of game theory known as evolutionary game theory has given strong reasons to expect Tit-for-Tat to be among the most “evolutionarily stable” trigger strategies in infinitely repeated Prisoner’s Dilemma games.

In the study of how cooperation may evolve in systems governed by mechanisms akin to evolutionary biology, Tit-for-Tat is a likely candidate for a strategy that evolves as a social norm within civil society.

