+ All Categories
Home > Documents > Workbook

Workbook

Date post: 21-Nov-2014
Category:
Upload: daehongmin2475
View: 66 times
Download: 1 times
Share this document with a friend
Popular Tags:
63
Workbook for Political Economics December 29, 2009 This set of sample exercises has been created for the undergraduate course JEB064 Political Economics given by Martin Gregor at IES, Charles Univer- sity, Prague. With exception of currently assigned homeworks, each exercise includes a full solution. The workbook is a work in permanent construction; any comment is more than welcome. 1 Essentials in game theory 1.1 Centipede game (Rasmusen 2007) Use backward induction to find the equilibrium of the following game (the hor- izontal arrows mean ‘Pass’ and the vertical arrows are for ’Take’): SOLUTION This game has the following interpretation: Think of two play- ers who work on a joint project, with initial value 5. A player has an opportunity to finish the project (’Take’) and divide the value 4:1. If not (’Pass’), there is a next round, where the value doubles, but the opportunity to finish is now in the hands of the other player. Maturity of the project is 7 rounds. Thus, if the project gets to the very end, the final prize is 5 · 2 7 = 640. Intuitively, if the current values is v> 0, and I expect the other player to finish in the next round, I compare 4v/5 (4:1) with 2v/5 (1:4, but value is double), hence I finish myself. Thus, we should expect ’Take’ to be played by both players in all nodes, and the equilibrium payoff should be (4, 1). So, the project should be immediately finished. Formally, this is an extensive game with complete information. In these games, we solve for a subgame-perfect Nash equilibrium, that is identified by backward induction. Backward induction states that A plays ’Take’ in the last node (255 > 128), B plays ’Take’ in the node preceding the last node (128 > 64), and so on and so forth. We may try to look for non-subgame-perfect equilibria. (Recall bailouts, where the politician commits not to subsidize a bad project, and this leads the manager to finance a good project. The point was that the politician announced a move that is not ex post credible, i.e. he announces not to bail out a bad project.) Similarly, in our case, we would need that A announces a move that 1
Transcript
Page 1: Workbook

Workbook for Political Economics

December 29, 2009

This set of sample exercises has been created for the undergraduate courseJEB064 Political Economics given by Martin Gregor at IES, Charles Univer-sity, Prague. With exception of currently assigned homeworks, each exerciseincludes a full solution. The workbook is a work in permanent construction;any comment is more than welcome.

1 Essentials in game theory

1.1 Centipede game (Rasmusen 2007)

Use backward induction to find the equilibrium of the following game (the hor-izontal arrows mean ‘Pass’ and the vertical arrows are for ’Take’):

SOLUTION This game has the following interpretation: Think of two play-ers who work on a joint project, with initial value 5. A player has an opportunityto finish the project (’Take’) and divide the value 4:1. If not (’Pass’), there isa next round, where the value doubles, but the opportunity to finish is now inthe hands of the other player. Maturity of the project is 7 rounds. Thus, if theproject gets to the very end, the final prize is 5 · 27 = 640.

Intuitively, if the current values is v > 0, and I expect the other playerto finish in the next round, I compare 4v/5 (4:1) with 2v/5 (1:4, but value isdouble), hence I finish myself. Thus, we should expect ’Take’ to be played byboth players in all nodes, and the equilibrium payoff should be (4, 1). So, theproject should be immediately finished.

Formally, this is an extensive game with complete information. In thesegames, we solve for a subgame-perfect Nash equilibrium, that is identified bybackward induction. Backward induction states that A plays ’Take’ in the lastnode (255 > 128), B plays ’Take’ in the node preceding the last node (128 > 64),and so on and so forth.

We may try to look for non-subgame-perfect equilibria. (Recall bailouts,where the politician commits not to subsidize a bad project, and this leads themanager to finance a good project. The point was that the politician announceda move that is not ex post credible, i.e. he announces not to bail out a badproject.) Similarly, in our case, we would need that A announces a move that

1

Page 2: Workbook

is not ex post credible. That is, to play ’Pass’ in the final node. This couldchange B’s previous move to ’Pass’ and vice versa.

Anyway, in this kind of Centipede game, this cannot be an equilibrium.Why? An equilibrium is characterized such that there is no opportunity forunilateral deviation. Think of a profile of strategies with ’Pass’ everywhere, andthe game ending in payoffs (128, 512). There is no opportunity for deviationsof player B (8 < 32 < 128 < 512). For player A, there is no opportunityfor deviation only in early rounds (4 < 16 < 64 < 128); however, there is anopportunity in the final node, where 128 < 256. Hence, A must play ’Take’ inthe last node. By analogy, we can easily derive that also strategy profiles thatwould end up sooner cannot be equilibrium.

Can you see why the search for non-subgame perfect equilibrium is herefutile, unlike in bailout game? The difference to bailout is that there, thenot-ex-post-credible move was not played in equilibrium. In other words, ifthe politician announced not to bailout, and the manager believed that, thepolitician would not have to prove that he or she really does not bailout, becausethere would not be any bad project. In contrast to that, if A announces to ’Pass’in the last node, he or she must prove this by play. So, in the previous case, thenot ex-post credible move served only as a threat that is not carried out. Here,the not ex-post credible move serves as a promise that must be carried out.

To sum up, Centipede game features a single, subgame-perfect equilibrium,which is inefficient (because 4 < 128 and 1 < 512). Efficiency can be improvedonly if the players (i) either play this game repeatedly, or (ii) dispose with somecooperative devices (e.g., possibility to write a contract that will be enforcedby a third party). This is also the problem with experimental tests of theclosely related Ultimatum game (I recommend Chapter 3 in Ken Binmore’slovely book Does Game Theory Work? Bargaining Challenge, MIT Press, 2007).Participants in experiments do not play the equilibrium as we describe it, butwe don’t know if they are really maximizing payoff in this game, or if theyare misled by thinking that they should play a repeated game with threatsand retaliations, where ’Pass’ can be an equilibrium move. And why are theymisled? Because in real life, we typically play repeated games, not only singlegames where we meet a partner and when the game ends, he or she foreverdisappears in void.

1.2 Dirty campaign (lecture notes by Boehmke)

Consider a situation facing two competing candidates for a seat in the Senatein Round 2 of run-off elections (in run-off elections, two best candidates passfrom Round 1 to Round 2). They can either spend money on negative campaignadvertising, or they can hold to their pledge not to engage in “mud-slinging”.If one candidate breaks their pledge, independent voters may hold it againsthim, but if both break their pledges, most independent voters will surely bedisgusted and stay home. However, these candidates only really care aboutgetting elected, not about hurting voters belief in democracy.

Since the Round 2 follows Round 1 by only one week, we can treat their

2

Page 3: Workbook

decisions as simultaneous–neither candidate would have time to prepare dirtyads if they have not done so already. If both candidates keep it clean, theincumbent will win 60% of the time. If the challenger is the only one to getdirty, the incumbent gets embarrassed and only has a 45% chance of winning.If only the incumbent goes dirty, the challenger looks sort of like an okay guyand the incumbent will have only a 55% chance of winning. Lastly, if they bothstart slinging mud, independent voters are so disgusted they all flip coins andeach candidate has 50% chance of winning.

Draw the normal form representation (game matrix) of this simultaneousmove game and find the equilibrium.

SOLUTION It is only important to carefully reflect assumptions and rewritethem into the payoffs.

Table 1: Chances of winning (game or payoff matrix)

Incumbent/Challenger dirty clean

dirty 50, 50 55,45clean 45, 55 60, 40

The best responses are emphasized. Equilibrium in pure strategies is suchthat no player has a possibility for unilateral deviation, hence an equilibriuminvolves strategies that are best responses to best responses of other players, fromthe perspective of all players. In our case, this is only (dirty, dirty) strategyprofile, with equal chance of win for both candidates. Independent voters stayhome, watching TV, desperate and disgusted from democratic politics.

1.3 George W. Bush in 2004 (lecture notes by Boehmke)

Use a time machine to get back to year 2004. The current US President, GeorgeW. Bush, is trying to achieve legislation that will enact (i) increased defense tofight terror, (ii) economic stimulus to help the economy, and (iii) money to fightAIDS. Unfortunately, with a slight majority in the Senate, the Republicans willonly be able to pass one of these issues in order to keep the budget deficit down.

Knowing that once he says that he will compromise on the policies, there isno going back, GWB must choose wisely. If he offers to compromise on AIDS andthe Democrats agree, he has a 52% chance of getting re-elected in 2004, whichmeans that the Democrats have a 48% chance of winning. If the Democrats donot agree, Bushs hopes jump to 57%, leaving the Democrats chances at 43%. IfGWB offers to compromise on the defense budget for fighting terrorism he hasa 55% chance of winning re-election if the Democrats cooperate, but if they failto do so almost everyone will vote Republican, giving Bush a 68% chance of asecond term.

If, however, the President chooses economic stimulus, the Democrats canexpect a 55% chance of their candidate winning if they refuse to be cooperative,

3

Page 4: Workbook

since the economy will stay down the tubes and everyone will blame GWBselimination of dividend taxation, but if the Democrats do the responsible thingand cooperate, they will only have 45% chance of winning.

Draw the extensive form of the game just described, with the Presidentmoving first. Assume that Bush only cares about the chance of avenging hisdaddys loss in 1992 and the Democrats only care about their own chances. Usingthese payoffs, solve by backward induction for the equilibrium of the game andwrite out the equilibrium strategies for both players.

SOLUTION GWB play either of three policies in the initial node. Democrats(D) play subsequently. Democrat’s best responses are emphasized.

Following these best responses, GWB’s best response is also emphasized.This gives a subgame-perfect Nash equilibrium, with solid lines denoting strat-egy profile. It involves compromise on defense spending, yielding payoffs (55, 45).

Figure 1: George W. Bush vs Democrats

Can we think of a non-subgame perfect equilibrium? To check for exis-tence, we may transform an extensive game to a simultaneous game. How todo that? Well, simply construct all strategies of the players. For GWB, theset of strategies is compromise on AIDS, defense, economy. For D, a strat-egy, by definition of strategy, provides a full guide (manual) for behavior inall nodes. Since there are three nodes, each with action (yes, no), a strat-egy describes (dis)agreement for each of the three nodes. The set is thereforeyyy, yyn, ynn, nyy, nyn, nny, nnn. The final step is to construct a payoff ma-trix for any strategy profile, i.e. for any combination of strategies.

Table 2: Payoff matrix in an equivalent simultaneous game

GWB/D yyy yyn ynn nyy nny nnn

AIDS 52, 48 52, 48 52, 48 57, 43 57, 43 57, 43

4

Page 5: Workbook

Table 2: Payoff matrix in an equivalent simultaneous game

defense 55, 45 55, 45 68, 32 55, 45 68, 32 68, 32economy 55, 45 45, 55 45, 55 55, 45 55, 45 45, 55

As usual, best responses are in italic. For convenience, the equilibria are inaddition in bold . It is straightforward that in all equilibria, we have that GWBchooses defense spending and the Democrats agree. Thus, all equilibria yieldsidentical outcome as the subgame-perfect equilibrium.

1.4 The Monty Hall problem (Rasmusen 2007)

You are a contestant on the TV show, “Let’s Make a Deal.” You face threecurtains, labelled A, B and C. Behind two of them are toasters, and behindthe third is a Mazda Miata car. You choose A, and the TV showmaster says,pulling curtain B aside to reveal a toaster, “You’re lucky you didn’t choose B,but before I show you what is behind the other two curtains, would you like tochange from curtain A to curtain C?” Should you switch? What is the exactprobability that curtain C hides the car?

SOLUTION A critical assumption is that the only goal of the showmaster isto make a show for the spectators, not to save Mazda for the next round. Onceyou recognize that, the rest not so difficult.

For the showmaker, to make a show means to give you a chance for reconsid-eration of choice. Notice that show is always possible, and we can even exactlysay what will be revealed. How come? Suppose the car is in A. If your firstchoice is A (so you hit the car), then the showmaker reveals randomly B orC, each with probability 1

2 . If your first choice is B, he must reveal C withprobability 1 (he can’t reveal B, that you selected, or A, where the car actuallyis). If you first choice is C, he must reveal B with probability 1. The followingtable illustrates what will be revealed under any combination of your choice andthe true location of Mazda. In the table, we use that a priori, you are com-pletely uncertain about where the car could be, so you must treat each curtainsymmetrically, with the same apriori probability 1/3:

Table 3: Which curtain is revealed?

You/Mazda A (13 ) B ( 1

3 ) C ( 13 )

A B ( 12 ), C (1

2 ) C BB C A ( 1

2 ), C (12 ) A

C B A A ( 12 ), B ( 1

2 )

Coming back to our example: You chose A, and B was revealed. You wantto find out the posterior probability that Mazda is in A, when you selected A

5

Page 6: Workbook

and B was revealed by the showmaker.1

You have to apply Bayes’ rule. It states (recall Rasmusen, p. 57 in 4th ed)that observing actions (here, a showmaker’s revelation) helps you in updatingbeliefs.

Posterior for Nature’s Move =(Likelihood of Player’s Move)(Prior for Nature’s Move)

Marginal Likelihood of Player’s Move

In our example, the updated (posterior) belief is as follows:

Pr(Mazda in A|B revealed) =Pr(B revealed|Mazda in A)Pr(Mazda in A)

Marginal Likelihood of Player’s Move

The marginal likelihood gives you a “total” probability that B is revealed,which rewrites into Pr(B revealed|Mazda in A)·Pr(Mazda in A)+Pr(B revealed|Mazda in B)·Pr(Mazda in B)+Pr(B revealed|Mazda in C) ·Pr(Mazda in C) = 1

2 · 13 +0 · 1

3 +1 · 1

3 = 1/2. Hence:

Pr(Mazda in A|B revealed) =1/6

1/2=

1

3

Pr(Mazda in B|B revealed) =0

1/2= 0

Pr(Mazda in C|B revealed) =1/3

1/2=

2

3

This clearly shows that if B is revealed, a posterior belief on A is less that theprior belief, whereas posterior belief on C is more that the prior belief. So, youshould revise the choice towards C once B is revealed, and your initial choice is A.A webpage http://www.stat.sc.edu/∼west/javahtml/LetsMakeaDeal.html offers a(hopefully functioning) Java applet to enjoy this (a little bit) silly game.

1.5 Elmer’s apple pie (Rasmusen 2007)

Mrs Jones has made an apple pie for her son, Elmer, and she is trying to figureout whether the pie tasted divine, or merely good. Her pies turn out divinely athird of the time. Elmer might be ravenous, or merely hungry, and he will eateither 2, 3, or 4 pieces of pie. Mrs Jones knows he is ravenous half the time (butnot which half). If the pie is divine, then, if Elmer is hungry, the probabilitiesof the three consumptions are (0, 0.6, 0.4), but if he is ravenous the probabilitiesare (0, 0, 1). If the pie is just good, then the probabilities are (0.2, 0.4, 0.4) if heis hungry and (0.1, 0.3, 0.6) if he is ravenous.

Elmer is a sensitive, but useless, boy. He will always say that the pie isdivine and his appetite weak, regardless of his true inner feelings.

1For simplification, I will omit in all expressions a condition that “A is chosen”; this ispossible because in the derivation of probabilities, we don’t have to care at all about whathappens in the case of your other choices.

6

Page 7: Workbook

a) What is the probability that he will eat four pieces of pie?

b) If Mrs Jones sees Elmer eat four pieces of pie, what is the probability thathe is ravenous and the pie is merely good?

c) If Mrs Jones sees Elmer eat four pieces of pie, what is the probability thatthe pie is divine?

SOLUTION It is useful to write Elmer’s choice over actions (how much toeat, 2, 3, 4) in a table, for each combination of pie quality (divine/good) andhungriness (ravenous/hungry).

Table 4: Elmer’s choice over 2, 3, or 4 pieces

divine (13 ) good (2

3 )

ravenous(12 ) 0, 0, 1 1

10 , 310 , 6

10hungry (1

2 ) 0, 610 , 4

10210 , 4

10 , 410

Answer a): The probability that he will eat four pieces of pie is the marginallikelihood of eating 4 pieces, 1

3 · 12 · 1 + 2

3 · 12 · 6

10 + 13 · 1

2 · 410 + 2

3 · 12 · 4

10 = 1730 .

Answer b): In Bayes rule, the nominator for this case is 23 · 1

2 · 610 = 1

5 . Theposterior probability of having ravenous Elmer and good pie, when Elmer eats4 pieces, is therefore

1/5

17/30=

6

17.

Answer c): In Bayes rule, the nominator for this case is 13 · 12 ·1+ 1

3 · 12 · 410 = 7

30 .The posterior probability of having a divine pie, when Elmer eats 4 pieces, is

7/30

17/30=

7

17.

1.6 Cancer tests (McMillan 1992)

Imagine that you are being tested for cancer, using a test that is 98 percentaccurate. If you indeed have cancer, the test shows positive (indicating cancer)98 percent of the time. If you do not have cancer, it shows negative 98 percentof the time. You have heard that 1 in 20 people in the population actually havecancer. Now your doctor tells you that you tested positive, but you shouldn’tworry because his last 19 patients all died. How worried should you be? Whatis the probability you have cancer?

SOLUTION Again, a table is useful.

Table 5: Indication of cancer tests

Reality/Test Positive/Negative

7

Page 8: Workbook

Table 5: Indication of cancer tests

Cancer ( 120 ) 49

50 , 150

No cancer ( 1920 ) 1

50 , 4950

First, derive the marginal likelihood of getting a positive test: Pr(Positive) =Pr(Positive|Cancer) ·Pr(Cancer)+Pr(Positive|No cancer) ·Pr(No cancer) = 49

50 ·120 + 1

50 · 1920 = 17

250 . By Bayes rule,

Pr(Cancer|Positive) =Pr(Positive|Cancer) · Pr(Cancer)

Pr(Positive)=

49

68

.= 0.72.

Contrary to the doctor’s claim, there is a high probability of having cancer.Notice one interesting point: If Type I and Type II errors (false positivity,

false negativity) are close to each other (in our case, they are identical, 2%),then it is not very useful to test populations where a priori distribution is veryasymmetric. Rasmusen mentions, for instance, HIV testing of an entire popu-lation. The problem is that in a large subpopulation of non-infected, a smallerror will bring a large amount of false positives. This large amount will makeit difficult to distinguish between true and false positives.

In our case, the share of false positives is already high, at 38%. But itmay even increase if the probability of cancer in population drops (hence, ifthe population is very asymmetric). For instance, if the probability of cancer is1/100 instead of 1/20, we have the marginal likelihood 37/250. The probabilityof having cancer when observing positive test will be only 49/148

.= 0.33, hence

the share of false positives is extremely high, at 67%. The message is that testingis more precise in (high-risk) subpopulations where probability of infection islarger, since the share of false positives is much lower here.

1.7 The Battleship Problem (Nalebuff 1988)

The Pentagon has the choice of building one battleship or two cruisers. Onebattleship costs the same as two cruisers, but a cruiser is sufficient to carry outthe navy’s mission—if the cruiser survives to get close enough to the target.The battleship has a probability of p of carrying out its mission, whereas acruiser only has probability p/2. Whatever the outcome, the war ends and anysurviving ships are scrapped. Which option is superior?

SOLUTION Either the mission is successful or not. For the battleship,Pr(success) = p > 0. For the cruisers, we have that a first cruiser tries tocomplete the mission, and if fails, the second cruiser attempts to complete. Forthe cruisers:

Pr(success) = Pr(C1 success) + Pr(C1 failure) · Pr(C2 success)

=p

2+

(

1 − p

2

) p

2= p − p2

4< p

8

Page 9: Workbook

It is better to build a single battleship. Notice that this has been underassumption that we don’t care about saving the cost of the second cruiser, oncethe first cruiser is successful.

If we care, we have to introduce a cost c per cruiser, and also a value ofmission v. To have a battleship brings expected payoff pv−2c. To have cruisers,we have expected payoff consisting of three states of the world: (i) success ofcruiser 1 (no need for further investment), (ii) success of cruiser 2, and (iii)failure of cruiser 2:

p

2(v−c)+

(

1 − p

2

) p

2(v−2c)+

(

1 − p

2

) (

1 − p

2

)

(−2c) = (pv−c)(

1 − p

4

)

< pv−c

We use that the expected payoff of the battleship must be positive, pv−c > 0.To sum up, the battleship is better even if we account for the expected savingof the cost of cruiser 2, which occurs with probability p/2.

1.8 Joint ventures (Rasmusen 2007)

Software Inc. and Hardware Inc. have formed a joint venture. Each can exerteither high or low effort, which is equivalent to costs of 20 and 0. Hardwaremoves first, but Software cannot observe his effort. Revenues are split equally atthe end, and the two firms are risk neutral. If both firms exert low effort, totalrevenues are 100. If the parts are defective, the total revenue is 100; otherwise,if both exert high effort, revenue is 200, but if only one player does, revenue is100 with probability 0.9 and 200 with probability 0.1. Before they start, bothplayers believe that the probability of defective parts is 0.7. Hardware discoversthe truth about the parts by observation before he chooses effort, but Softwaredoes not.

a) Draw the extensive form and put lines around the information sets ofSoftware at any nodes at which he moves.

b) What is the Nash equilibrium?

c) What is Software’s belief, in equilibrium, as to the probability that Hard-ware chooses low effort?

d) If Software sees that revenue is 100, what probability does he assign todefective parts if he himself exerted high effort and he believes that Hard-ware chose low effort?

SOLUTION To start with, consider the payoffs for all combinations of effortand defection/non-defection in components. We use that the expected payoff forthe case with non-defective parts and single high effort is 0.9·100+0.1·200 = 110.See the tables, where row is for Hardware, and column for Software.

Table 6: Expected total revenues

9

Page 10: Workbook

Table 6: Expected total revenues

non-defective ( 310 ) low high

low 100 0.9 · 100 + 0.1 · 200high 0.9 · 100 + 0.1 · 200 200

defective ( 710 ) low high

low 100 100high 100 100

Dividing revenues equally, and inserting the cost for effort, we get the payoffs.These are also provided in the game tree, where uncertainty of Software (theyobserve neither Nature’s move, nor Hardware’s move) is depicted by having asingle information set.

Table 7: Expected payoffs

non-defective ( 310 ) low high

low 50, 50 55, 35high 35, 55 80, 80

defective ( 710 ) low high

low 50, 50 50, 30high 30, 50 30, 30

To solve these games, we proceed by backward induction. Denote p ∈ [0, 1]the probability that Hardware plays high effort if parts (components) are non-defective and q ∈ [0, 1] if defective. Software calculates that with probability (i)0.3(1 − p) he faces the first node (non-defective parts, low Hardware’s payoff),(ii) 0.3p the second node, (iii) 0.7(1− q) the third node, and (iv) 0.7q the fourthnode. Thus, Software prefers high effort to low effort if expected payoff fromhigh effort exceeds expected payoff from low effort,

0.3(1 − p)35 + 0.3p · 80 + 0.7 · 30 ≥ 0.3(1 − p)50 + 0.3p · 55 + 0.7 · 50.

This rewrites to p ≥ 185/120 > 1, which is impossible. Hence, we knowthat Software plays low effort irrespective of the play of Hardware. Hardwareanticipates low effort of Software. For non-defective parts, this means that loweffort is better, p = 0 (50 > 35). For defective parts, this means that low effortis again better, q = 0 (50 > 30). Thus, Nash equilibrium is characterized suchthat Hardware and Software always exert low effort. For answer c), the equi-librium Software’s belief on the Hardware’s low effort is one.

Answer d): One has to be careful. The combination of low effort of Hard-ware and high effort of Software gives 100 in two cases: (i) non-defective parts,

10

Page 11: Workbook

Figure 2: Joint venture: a game tree

but only with probability 0.9, and (ii) defective parts, with probability 1. Thus,conditional probability is 0.3 · 0.9 + 0.7 · 1 = 0.91. The posterior probabilityof having defective parts is thus 0.7/0.91

.= 0.72. (Notice that this is not an

equilibrium probability, because both players play always low effort. Here, totalrevenues are always 100, so Software keeps initial beliefs, and he assigns theprobability to defective parts 7/10.)

1.9 Political consulting

You are a foreign investor and you need to bribe a decisive senior bureaucrat.There are 2 bureaucrats (A, B), both looking identically important. Bribing asingle bureaucrat costs you b. You can hire a political consultant who chargesc < b/2 for a recommendation. He certainly knows who is the decisive bureau-crat but you cannot be sure about his advice because he charges his moneybefore you bribe. But, if he has to select from a set of alternatives, all withexactly identical monetary payoffs, he recommends the truthful one. If youhappen to bribe a non-decisive bureaucrat, you have to proceed in looking forthe decisive bureaucrat (again, you decide first whether to hire a consultant,and then whether to follow his advice or not).

1. What are your prior (initial) beliefs that bureaucrat A is decisive?

2. Draw a game tree of this extensive game.

3. By backward induction, identify and fully describe an equilibrium.

11

Page 12: Workbook

4. In the equilibrium, are you purchasing advise of the consultant? If so,once or even twice? Are you following the consultant’s advice?

5. When consultant recommends A, what is your posterior belief that bu-reaucrat A is decisive?

SOLUTION Since both bureaucrats look identically, you must treat themsymmetrically, so your priors are 1/2 for each. The game tree in its entiretyis complex, so we help ourselves by solving all subgames that follow when (i)you select bureaucrat X ∈ A,B, and he or she is not decisive. Denote yourexpected equilibrium value of any of such a subgame as E. In such a subgame,posterior belief that X is decisive is zero, and posterior belief that Y is decisiveis one. Thus, playing X leads to repetition of the subgame, only the payoffdeclines by b to E − b (or E − b − c, if the consultant is paid). Playing Yterminates the game with payoff −b (or −b − c, if the consultant is paid). It isclear that E < 0, so you select Y in all nodes. By backward induction, you don’tinvest, then play Y , and the equilibrium payoff in this subgame is E = −b < 0for you and zero for the consultant.

Figure 3: Subgame with indecisive X in Round 1

We enter this equilibrium payoff into the entire game. The full game startsby a play of Nature, that appoints bureaucrat A to be decisive with probability1/2 and bureaucrat B with probability 1/2. Then you decide on investing. In thecase of no investment, your prior beliefs remain identical (nothing is observed, soyou couldn’t update them). In such a case, you play a lottery with probabilities1/2 over A or B being decisive. A correct pick gives you −b, and a wrong pickgives you −b+E = −2b. Your expected payoff of playing A is −b 1

2 −2b 12 = − 3

2b,

12

Page 13: Workbook

and your expected payoff of playing B is also −b 12 − 2b 1

2 = − 32b. Thus, you get

always − 32b, so any mixing can be played in equilibrium.

To solve the game, you help yourself by considering that a consultant, facingidentical payoffs, reveals the truth with probability one. Since the consultantindeed always faces identical payoffs (in subgame of Round 2, you never pay theconsultant), the consultant’s action is a perfectly revealing signal of the stateof Nature, and your posterior belief on X ∈ A,B being decisive, when ob-taining recommendation on X, must be one. The same could be obtained moreformally, using Bayes rule and the fact that Pr(Xrecommended|Xdecisive) = 1and Pr(Xrecommended|Y decisive) = 0.

Figure 4: The game tree with equilibrium values in the subgames

To summarize the equilibrium path (depicted by solid lines): You invest, theconsultant recommends a decisive bureaucrat, and you follow the advice. Thepayoffs (in bold) are (−b − c, c), irrespective of the state of Nature.

13

Page 14: Workbook

2 Collective choice: preferences

2.1 Asymmetric utilities

We have a policy t ≥ 0, and two individuals, A and B, with the following indirectutility functions over the policy:

uA(t) = 2√

t − t,

uB(t) = 1 − (b − t)2, b > 1.

1. Derive bliss points of the individuals, (t∗A, t∗B) and prove that t∗B > t∗A.Prove that the preferences are quasiconcave.

2. Characterize all pairs of proposals, 0 ≤ x < t∗A < t∗B < y, that simultane-ously satisfy uB(x) > uB(y) and uA(y) > uA(x).

3. Characterize a necessary condition for the existence of a pair of proposalsdefined above.

SOLUTION

1. By FOCs, (t∗A, t∗B) = (1, b), where by assumption b > 1. Concavity of bothutility functions, u′′

A(t) = − 12t3/2

< 0, u′′B(t) = −2 < 0 implies quasicon-

cavity.

2. We look for all pairs (x, y) that satisfy all the conditions, so one of theways to identify the pairs is to fix x and look for those y that satisfy theconditions (and then repeat this for any possible x). So, we fix x and useuA(y) > uA(x). This implies that 2

√y − y > 2

√x − x = k, where k is

utility of A-individual associated with policy x. Notice that by t ≥ 0, wehave k ∈ [0, 1]. By solving the inequality, we obtain

2 − k − 2√

1 − k < y < 2 − k + 2√

1 − k.

It is easy to verify that for k ∈ [0, 1],

2 − k − 2√

1 − k ≤ 1 ≤ 2 − k + 2√

1 − k.

We combine this restriction with y > b > 1 (by assumption) to get

b < y < 2 − k + 2√

1 − k. (1)

The second condition that has to hold is uB(x) > uB(y), equivalent totB − x < y − tB , or

y > 2b − x. (2)

14

Page 15: Workbook

In total, we may characterize the pairs as correspondences of k ∈ [0, 1],X(k), Y (k). This means 2

X(k)−X(k) = k, or X(k) := 2−k−2√

1 − k.With this, we re-write the condition in (2) as

y > 2b − X(k) = 2b − 2 + k + 2√

1 − k. (3)

Therefore, all y-proposals that satisfy our conditions are Y (k) := y ≥ 0 :y > b, y > 2b − 2 + k + 2

√1 − k, y < 2 − k + 2

√1 − k. To sum up, the

pairs of proposals are defined as x = X(k), y ∈ Y (k), where k ∈ [0, 1].

3. Since X(k) is defined over an entire interval k ∈ [0, 1], a necessary condi-tion for the existence is just existence of any k such that Y (k) 6= ∅. Itamounts to ensuring:

∃k ∈ [0, 1] : 2b − 2 + k + 2√

1 − k < 2 − k + 2√

1 − k

∃k ∈ [0, 1] : b < 2 − k + 2√

1 − k

The former holds when b < 2, and the latter when b ∈ (0, 4), hence thecondition writes b < 2. In such a case, we can always find a sufficientlylarge k ∈ (0, 1) giving us non-empty Y (k).

2.2 Condorcet winner

Consider individuals A, B, C and eight alternatives a, b, . . . , h. The preferenceorderings are as follows:

1st 2nd 3rd 4th 5th 6th 7th 8thA a d b f c e g hB c e a h b g d fC b g c h a d f e

1. Is there a Condorcet winner? Explain.

2. If we eliminate two proposals, can we get a Condorcet winner? If so, whichtwo proposals?

SOLUTION

1. We begin by constructing all pairwise voting outcomes. For n proposals,

there are n(n−1)2 pairwise votes, with winning proposal in the following

table. There is no proposal beating all other proposals, hence no Con-dorcet winner. To see why, notice that there are two subsets of proposals,H = a, b, c and L = d, e, f, g, h. Any proposal in H wins over any

15

Page 16: Workbook

proposal in L. Within the subsets, there is neither a Condorcet winner,nor a Condorcet loser (a proposal losing in all pairwise contests). Thus,the structure of preferences is such that H constitutes a top cycle and La bottom cycle. The existence of a top cycle with more than just a singleelement implies non-existence of Condorcet winner.

a b c d e f g ha a c a a a a ab b b b b b bc c c c c cd d d g he f e ef g hg g

2. For Condorcet winner to exist, we require the top cycle to be a single-ton. Hence, if we have to eliminate exactly two proposals, we have twooptions: (i) Either we eliminate two proposals from the top cycle (3 com-binations) or (ii) we eliminate exactly one element from the top cycle andone element from the bottom cycle (15 combinations). In total, we have18 combinations.

2.3 Euclidean preferences

Prove analytically that contract curves for Euclidean preferences are linear.Use that on a contract curve, we cannot increase utility of one agent withoutdecreasing utility of the other agent.

SOLUTION Consider two agents with bliss points x ∈ Rn, z ∈ Rn andutility functions u1(y) = H(X) and u2(y) = G(Z), where X denotes Euclideandistance of (y,x), X =

(y1 − x1)2 + . . . + (yn − xn)2, and Z is for Euclidean

distance of (y, z), Z =√

(y1 − x1)2 + . . . + (yn − xn)2.On a contract curve, y = arg max u1|u2=u=const. To characterize the con-

tract curve, we therefore construct a Lagrangian L(y, λ) = u1 + λ(u− u2). Thefirst-order conditions write for any i ∈ 1, . . . , n

∂L

∂yi=

H ′(X)

X(yi − xi) −

λG′(Y )

Y(yi − zi).

Thus, for any i, j ∈ 1, . . . , n, we obtain

yi − xi

yi − zi=

yj − xj

yj − zj.

By rearranging, we obtain a linear relationship between yi and yj :

yi(xj − zj) − yj(xi − zi) + xizj − xjzi = 0

16

Page 17: Workbook

2.4 Dominant point

Suppose a dominant point D exists in two-dimensional space. Consider qualifiedmajority voting, with quota 1

2 < m < 1. Which policies cannot be outvoted?

SOLUTION This must be a non-empty neighborhood of D. To see whethera point A is in the neighborhood, we would construct an arbitrary line passingthrough D, and a parallel line passing through A. If the latter line separatedthe set of bliss points such that cardinality of a smaller subset would be lessthan 1 − m (or cardinality of the larger subset would exceed m), then A wouldnot be in the neighborhood. This requires A to be sufficiently close to D. Withincreasing m, the condition of large asymmetry of subsets is less likely violated,hence the neighborhood is larger.

3 Collective choice: Majority voting

3.1 3-person committee

You are a member of a 3-person committee which firstly votes to get a completeproposal and then compares the proposal with the status quo. We have 2proposals, A and B, of which you prefer A to B but the others prefer B to A.What kind of proposal C you have to propose so that A is selected, if

a) A is status quo and the others vote sincerely,

b) B is status quo and the others vote sincerely,

c) A is status quo and the others vote strategically,

d) B is status quo and the others vote strategically?

SOLUTION It is useful to use X i Y for player i preferring X to Y, andX C Y for committee voting for X instead of Y. Suppose you are player 1,hence

A 1 B, B 2 A, B 3 A.

First and foremost, we use that in Stage 2, everyone votes according to hisor her true preferences, regardless of sincere or strategic voting (by backwardinduction, there is no possibility of strategic voting for a worse alternative inthe final stage).

Case a) For A to win overall, a proposal C must win in pairwise vote with Bin Stage 1 (C C B), and then it must lose in pairwise vote with A in Stage2 (A C C). The loss of C in Stage 2 implies that at least 2 players actuallyprefer A to C. That can be either

i) both opponents 2 and 3,

17

Page 18: Workbook

ii) and/or you and one of the opponents (without loss of generality player 2).

It is straightforward to reckon that i) is impossible. By contradiction: if thisis so, then for both opponents B i A i C, and transitivity of their preferencesimplies B i C. Irrespective of sincere or strategic voting, players 2 and 3 wouldin Stage 1 vote for B, which means that A would lose in Stage 2.

Continue with ii): player 2 supports A to C, hence B 2 A 2 C. As regardsStage 1, we need that B loses with C. Since for sincere voting, player 2 votesfor B in Stage 1, player 3 must vote for C, and C 3 B 3 A.

The final thing is to determine your preferences. In Stage 2, you alwaysvote sincerely, hence A 1 C. This however doesn’t restrict your preferences toB 1 C or C 1 B.

Put intuitively: What happens is that you try to pit player 2 and 3 againsteach other. You are a partner of player 3 in Stage 1 against player 2, but thenyou become a partner of player 2 to vote against player 3. To summarize, ifplayers 2 and 3 vote sincerely, you win if you propose an amendment C where

A 1 B 1 C, B 2 A 2 C, C 3 B 3 A, or

A 1 C 1 B, B 2 A 2 C, C 3 B 3 A

Case b) For A to win, A must be voted against B in Stage 2. This is howeverthe final stage where each votes according to his or her preferences, hence B C

A. A cannot win.(If our task were just to beat B, then we can make C win for preferences

A 1 C 1 B,B 2 A 2 C,C 3 B 3 A by strategically voting for C inStage 1.)

Case c) We proceed in the same way like in a), and examine ii). Again, weneed one supporter in Stage 2 (suppose again player 2), hence B 2 A 2 C.This player in Stage 1 always votes for B to C. The logic is that he can calculateconsequence of his vote:

• vote for B: he effectively votes for B (B C A in Stage 2)

• vote for C: he effectively votes for A (A C C in Stage 2)

Therefore, player 2 votes for B and we need that player 3 supports C to B inStage 1. Regardless of his preferences on C, he can also calculate consequencesof his vote:

• vote for B: he effectively votes for B (B C A in Stage 2)

• vote for C: he effectively votes for A (A C C in Stage 2)

The reasoning is identical like for player 2, so he will support B, even if wehave C 3 B. As a result, A cannot win.

18

Page 19: Workbook

Case d) The same logic like in b) applies. A cannot win.

3.2 Ancient letter

Frequently cited is a ancient-Roman letter from Pliny the Younger to TitusAristo asking for reassurance on a matter that arose during his chairmanshipof the Senate. Consul Afranius Dexter was found dead, and it was not clearwhether he had committed suicide, had ordered his servants to kill him, orwhether they had killed him out of malice. Three possibilities were suggested:they be acquitted (A), they be banished (B), or they be executed (E). In thosedays, questions were resolved by a literal division of the house. That is, thosewho agreed with a motion sat with the person who made the motion, and thosewho disagreed sat on the other side of the room.

a) You don’t know opinions of the others, but want the servants to be ban-ished. How do you order sequence of votes? Explain in detail.

SOLUTION We will list below all possible procedures applied in all possiblehouse pairwise votes. There are 3 procedures: i) vote A and E, then winnerwith B; ii) vote A and B, then winner with E; iii) vote B and E, then winnerwith A. Pairwise votes are in rows of the table. In cells, the first item is winnerin Stage 1, and the second item is winner in Stage 2, i.e. the overall winner.Bold are the cases without Condorcet winner.

i) A to E ii) A to B iii) B to E

A E A B E B A A A A E AA E A B B E A A A A B AA E B A E B A B B E E AA E B A B E A B B B B BE A A B E B E E A E E EE A A B B E E B A E B AE A B A E B E E B E E EE A B A B E E B B B B B

In the table, we use that members of the house don’t know about preferencesof the others, so vote sincerely. It is immediately seen that for cases withCondorcet winner, it is irrelevant which order of voting is used: Condorcetwinner is always selected. However, for cases without Condorcet winner, thewinner is always the alternative not voted in Stage 1.

As a result, we recommend Pliny the Younger to use procedure A to E(denoted i)) so as to maximize chance of servants being banished (B).

19

Page 20: Workbook

3.3 Median voter

Identify median voter in your country of origin. Use publicly available statistics,justify your selection of data and discuss whether single-peakedness may holdin the criterion that you picked up.

SOLUTION Use available socioeconomic characteristics that determine pref-erence for general redistribution.

3.4 Referendum test

In McEachern’s test, think about what would happen if DSQ > Dm and GM(greater than majority referendum) would be applied instead of normal (simplemajority) referendum. What signs of coefficients (positive or negative) shouldwe expect?

SOLUTION Discussed in the lecture: the sign of GMi shall be opposite.

3.5 Amendments

There are 3 policies: status quo (SQ), original bill (B), and amendment (A).The Congress controlled by Democrats has preferences B A SQ and theRepublican President has preferences SQ A B. The order of voting is:

1. The Congress must prepare a final bill. It may or may not propose amend-ment to the original bill.

2. The President may apply veto on the final bill.

Find equilibria in the following cases:

1. Presidential veto cannot be overthrown and the amendment can be pro-posed only by the Congress.

2. Presidential veto can be overthrown and the amendment can be proposedonly by the Congress.

3. Presidential veto cannot be overthrown and the amendment can be pro-posed both by the President and the Congress.

4. Presidential veto can be overthrown and the amendment can be proposedboth by the President and the Congress.

SOLUTION We construct extensive games, apply backward induction (bestresponses highlighted by solid lines) and identify equilibria. In Cases 1 and 3(effective veto), President gets its first best, SQ. This is obvious, because vetoalways leads to his or her first-best, and veto is always possible. In Case 2,Congress gets its best, because President is powerless. In Case 3, we have anintermediate case when the President can use agenda-setting power to avoid theoriginal bill, hence a compromise (amended bill) takes place.

20

Page 21: Workbook

Figure 5: President vs Congress

3.6 4 proposals in a 3-person committee

Suppose you are one of three members of a committee that must choose anoutcome from among A, B, C, D. The preference of the members of the com-mittee are A B C D for you, D C A B for Member 2, andC B D A for Member 3.

What are the outcomes of the following agendas if everyone is strategic?

1. B versus C, the winner against A, the winner against D

2. A versus C, the winner against D, the winner against B

3. B versus D, the winner against C, the winner against A

If it were up to you, which agenda would you choose?

21

Page 22: Workbook

SOLUTION These are extensive games, that can be solved by backward in-duction. To do that quickly, first find the outcomes of sincere majority pairwisevoting:

B C D

A A C DB - C BC - - C

Agenda 1 We may use the following table:

nominal pair real pair outcome

Stage 3DA DA DDB DB BDC DC C

Stage 2BA BD BCA CD C

Stage 1BC BC C

The table is constructed such that it solves the game from backwards. InStage 3, it finds majority voting outcomes for all possible pairs. These outcomesare used in Stage 2; here, to vote nominally in favor of A means to vote reallyfor D (highlighted in bold). The outcomes are used for Stage 1. Finally, wecan see that C wins (no wonder, because C is Condorcet winnner).

Agenda 2 We again use the table to derive that Condorcet winner wins.

nominal pair real pair outcome

Stage 3AB AB ACB CB CDB DB B

Stage 2AD AB ACD CB C

Stage 1AC AC C

22

Page 23: Workbook

Agenda 3 Also here the Condorcet winner wins.

nominal pair real pair outcome

Stage 3BA BA ACA CA CDA DA D

Stage 2BC AC CAC DC C

Stage 1BD CC C

To conclude, if all agents are strategic, it is irrelevant which agenda is used;the outcome is always identical, and it is a Condorcet winner. This is a clearproperty of the fact that all proposals are voted throughout the game, so every-one anticipates Condorcet winner to pass. The role of agenda-setting for strate-gic voters—at least in these simple settings with pairwise majority voting—isrestricted only to non-existence of Condorcet winner.

3.7 Strategic voting

We have three committee members and three proposals. One proposal is Con-dorcet winner. Prove that for any complete agenda (any sequence of pairwisevoting including all proposals), Condorcet winner is in Nash equilibrium.

SOLUTION Without loss of generality, denote proposals voted in Stage 1a, b, and the remaining proposal c. Hence, Stage 2 is either vote over a, cor b, c.

• C.w. is c: Always voted in Stage 2, since in the last stage, each membervotes sincerely.

• C.w. is a: Proposal a must be preferred over b at least by 2 members, andover c at least by 2 members. There are only two possibilities: (i) Thetwo members are the same. Then, c is first-best alternative for both, andthe equilibrium is that they vote for their first-best alternative in Stage1 and then in Stage 2. (ii) The two members are different. Let the firstpair be members 1, 2 and the second pair be members 2, 3. Hence,the preferences for members 1, 3 must be b 1 a 1 c and c 3 a 3 b;for member 2, a 2 b, a 2 c.

Thus, voting for a in Stage 1 is to vote effectively for C.w. in Stage 2,and voting for b in Stage 1 is to vote for the second-best alternative ofmember 2 (b if a 2 b 2 c, and c if a 2 c 2 b). In any case, this is a

23

Page 24: Workbook

pairwise vote of a C.w. and an alternative proposal, and in such pairwisevote Condorcet winner must win (by definition of C.w.).

• C.w. is a: This is just an identical problem to the previous one (C.w. againvoted in Stage 1, and majority of members agree on passing it to Stage2).

To provide intuition even more generally: To misrepresent one’s preferenceswith respect to the Condorcet winner (i.e., not support it and eliminate through-out the agenda) could be a best response only if it implies an alternative proposalto be passed. However, in any pairwise vote over C.w. and this alternative ofcourse the majority supports Condorcet winner.

3.8 Strategic voting: The general result

Consider a set of policies with one policy being a Condorcet winner. The policy-makers are strategic and non-cooperative. Voting is such that there is thatthere is a full ordering of policies which determines the order of how policies aresequentially voted in pairwise votes. In each vote, the loser is outvoted and thewinner passes to another round. Is it possible that the equilibrium outcome isnot a Condorcet winner? Discuss formally.

SOLUTION The answer is surprisingly straightforward. There is a stage twhere Condorcet winner C is proposed and voted against alternative proposal A.By voting, the policy-makers select a subgame, where we have structurally onlytwo different types: A-subgame (C eliminated), and C-subgame (A eliminated).Each subgame has an equilibrium outcome proposal, to be denoted A∗ and C∗.Clearly, A∗ 6= C and C∗ 6= A.

In stage t, A-subgame is selected if and only if A∗ C C∗. (Recall thatpolicy-makers vote on the basis of the anticipated consequences, independentlyof the content of currently given proposals). Since A∗ 6= C, this requires C∗ 6= C.(Otherwise, C∗ = C C A∗ by the definition of Condorcet winner, and C-subgame is voter.) Thus, a subgame where C is allowed to pass must end upwith a result that is not Condorcet winner.

But if C is passed, we would be in stage t+1, where the problem would justreplicate (C facing an alternative B), and we would again require C∗ 6= C. Thisends up in the last stage, where it must be that C∗ = C, hence it is impossible tomaintain C∗ 6= C. Therefore, once C appears on the ballot, it is never outvoted.In other words, each subgame containing C leads to C, and this is true also forthe entire game (= improper subgame).

3.9 Two-party electoral competition

Suppose that preferred taxes are tM < t∗R < t∗L for median voter (M), right-wing (R) and left-wing party (L). The parties engage in simultaneous electoralcompetition with binding platforms.

24

Page 25: Workbook

a) What electoral platforms will R and L set under deterministic voting?

b) What under stochastic voting?

SOLUTION We have to distinguish between deterministic and stochasticvoting.

Deterministic voting This is simple. Suppose tR = t∗R. The best responseof L is the best of the three alternatives:

• loss, tL > t∗R: pL = 0, t = tR = t∗R

• tie, tL = t∗R: pL = 12 , t = tR = t∗R

• win, tM ≤ tL < t∗R: pL = 1, t = tL

We have policy-seeking parties, hence

UL(t∗R) > UL(tL < t∗R).

This means that L is willing to select loss or tie. The best response of R tothis choice is tR = t∗R (bliss point), so we have equilibrium

tR = t∗R ≤ tL.

Stochastic voting We use the first-order conditions imposed on expectedutilities EUR and EUL to be equal zero, as derived in the lecture:

dpL

dtL[UL(tL) − UL(tR)] = −pL

dUL(tL)

dtL

Step 1. We can obviously discuss only cases tM ≤ tL ≤ t∗L, and tL ≥ tR:

dpL

dtL︸︷︷︸

≤0

[UL(tL) − UL(tR)]︸ ︷︷ ︸

≥0

= −pL︸︷︷︸

<0

dUL(tL)

dtL︸ ︷︷ ︸

≥0

It is easy to find that this is satisfied such that 0 = 0 only if tR = tL = t∗L.Here, R would obviously decrease tR (increases both pR = 1− pL and UR(tR)),so it cannot be an equilibrium. Therefore, in equilibrium, both terms must bestrictly negative. This implies tR < tL < t∗L.

Step 2. Now focus upon R. We discuss only cases tM ≤ tR ≤ t∗R. Then:

dpR

dtR︸︷︷︸

≤0

[UR(tR) − UR(tL)]︸ ︷︷ ︸

≥0

= −pR︸︷︷︸

<0

dUR(tR)

dtR︸ ︷︷ ︸

≥0

We use tR < tL. Then, the term would be satisfied with 0 = 0 only iftR = tM = t∗R, which violates assumptions. Therefore, in equilibrium, both

25

Page 26: Workbook

terms must be strictly negative. This implies tM < tR < t∗R. Finally, UR(tR) >UR(tL) implies that tR < t∗R < tL. Overall,

tM < tR < t∗R < tL < t∗L.

We observe incomplete convergence to the median platform. Unlike in per-fect voting, R has to make a concession, and L is not willing to converge to thebliss point of R.

3.10 Redistribution

Consider our example of redistribution with distortion, but suppose that thesubsidy is paid only after the person ends working (i.e. it is a pension providedby the government) and people are of different age. Assume that an individualworks wi-time, where wi ∈ [0, 1], and earns pre-tax income yi ∈ [0, 1], which istaxed by flat tax t ≥ 0; the he/she retires and receives pension s (see lecturenotes for the definition of s). Assume that the length of retirement is 1, and(wi, yi) is uniformly distributed on [0, 1] × [0, 1].

a) Derive individually optimal ti as a function ti(yi, wi).

b) Derive density function of ti over t ∈ [0, 1].

c) Identify individual/s with median value of ti (to be denoted as tM ).

d) Is tM a Condorcet winner or not?

SOLUTION The pension is paid out of tax revenues, which is equal to

wy =

∫ 1

0

∫ 1

0

wy dy dw =

∫ 1

0

w

[y2

2

]1

0

dw =1

2

∫ 1

0

w dw =1

2

[w2

2

]1

0

=1

4.

Retirement lasts single period, hence individual consumption writes

ci = wi [yi(1 − t)]︸ ︷︷ ︸

income per period

+ (1 − λt)(twy)︸ ︷︷ ︸

pension per period

= wiyi(1 − t) +t(1 − λt)

4.

By the first order condition,

∂ci

∂t= −wiyi +

1

4− 2λ

1

4t = 0

ti =1 − 4wiyi

From above, two individuals i, j prefer identical tax, if their lifetime (factor)income Y = wy is identical, Yi ≡ wiyi = wjyj = Yj . We have thereforeti = ti(Yi), or by inverse

26

Page 27: Workbook

Yi(ti) =1 − 2λti

4.

We can define two distribution functions. Let F (t) be the share of individualswhose preferred tax is less than t, ti ≤ t, and let G(Y ) be the share of individualswhose lifetime income is less then Y , Yi ≡ wiyi ≤ Y . By equation above, wehave ∀ti ≥ t : Yi ≤ Y , hence

G(Y ) = 1 − F (t(Y )),

or alternatively

G(Y (t)) = 1 − F (t).

The following figure illustrates. For any lifetime income Y , the critical indi-viduals for whom wiyi = Y are located on the hyperbole. For these individuals,we can define their optimal t, satisfying Y = Y (t), i.e. t = (1 − 4Y )/(2λ).

Figure 6: Pensions

Now, the share of individuals whose lifetime income is less then Y , G(Y ),is defined by the density of yiwi below the hyperbole. Since we have uniformdistribution of yi and wi, the share is defined only by the size of the area belowthe hyperbole.

G(Y ) = 1 − Y −∫ 1

Y

w(y) dy = 1 − Y −∫ 1

Y

Y

ydy = 1 − Y − Y [ln y]

1Y =

= 1 − Y + Y lnY = 1 + Y (lnY − 1)

The other distribution function, F (t) or F (t(Y )), is the complementary areaabove the hyperbole. Density function f(t) is obtained by making the firstderivative on F (t):

27

Page 28: Workbook

f(t) =∂F (t)

∂t= −∂G(Y )

∂Y

∂Y

∂t=

∂Y

∂tlnY = −λ

2ln

1 − 2λt

4

The median tax is defined as F (tM ) = 12 , or G(Y (tM )) = 1 − 1

2 = 12 .

1 + Y (tM )(ln Y (tM ) − 1) =1

2

Which gives implicit solution (using 1 = ln e):

(1 − 2λtM ) ln4e

1 − 2λtM− 2 = 0

We can easily prove that consumption function ci(t) is quasiconcave in singlepolicy dimension t (ti is a unique local maximum for this function, and thesecond derivative is always negative on t ∈ [0, 1]). With quasiconcave preferenceson single dimension, tM is a Condorcet winner.

3.11 Redistribution: advanced (Hsu & Yang 2007)

In public sector economics, an important concept is the marginal cost ofpublic funds (MCF). It defines the cost of raising an extra unit of tax revenues.In economy where taxation implies no deadweight loss, it is easy to see thatMCF = 1; to get an extra dollar of tax revenue means a dollar less of after-taxincome. In economy with distorting taxation, MCF ≥ 1.

To see that, denote t ∈ [0, 1] a flat tax rate imposed on yi, a pre-tax incomeof an individual i = 1, . . . , N . Denote total tax revenues T (t), average incomey, and let L(t) be the total fall in after-tax incomes. Marginal cost of publicfunds is defined as

MCF (t) ≡ dL

dT.

With non-distorting taxation,

T (t) =∑

i

tyi = tny,

L(t) =∑

i

tyi = tny.

You can directly apply that T (t) = L(t), hence MCF (t) = 1. To identifyMCF in the general case, it is nevertheless better to write

MCF (t) =dL

dT=

dLdtdTdt

.

a) Derive T (t) and L(t) for the economy with distorting taxation that wasintroduced in the lecture ‘Majority’.

28

Page 29: Workbook

b) Derive MCF (t) as a function of tax rate t. Is it increasing, constant, ordecreasing?

c) Find MCF (t) that is in the majority voting equilibrium.

d) To measure deadweight loss (and MCF) can be very difficult. Can aneconomist identify MCF in the economy only by studying pre-tax incomedistribution? How? (This question is motivated by Hsu & Yang 2007paper in Economic Inquiry.)

SOLUTION In the lecture, we have assumed that 1 − λt share of the taxbase disappears, so the total tax revenues are T (t) = tny(1−λt). The differencebetween pre-tax and after-tax income (i.e., not yet accounting for a subsidy) istyi for each individual, which in total gives L(t) = tny.

To derive MCF (t), derive dL/dt = ny and dT/dt = ny(1 − 2λt).

MCF (t) =dLdtdTdt

=1

1 − 2λt

dMCF (t)

dt=

1 − 2λt> 0

(Be careful, you cannot divide L/T to get 1/(1 − 2λt), and then argueMCF (t) = d1/d(1 − λt) = 0. You can’t either argue that

MCF (t) =d 1

1−λt

dt=

λ

(1 − λt)2.

All of that would obviously be incorrect.)

We know from the lecture that in the majority voting equilibrium, the taxis the median voter’s preferred tax, which writes (recall again the lecture)

tM =y − yM

2λy.

Thus, MCF in the equilibrium is MCF (tM ) = 1/(1 − 2λtM ) = y/yM > 1.To conclude: If an economist can observe only the distribution of pre-tax (wage)incomes yi (e.g., from household income statistics), and believes that in politicalequilibrium, median voter is decisive, that he or she can argue that the marginalcost of public funds is simply the ratio of mean to median pre-tax income.

4 MCF: extension

Following the previous example, suppose MCF (t) = 1 + λt. What are the lossand revenue functions, L(t), T (t)? Prove that for any t > 0, T (t) < tny.

29

Page 30: Workbook

SOLUTION In our specification, the deadweight loss of taxation is for sim-plicity modeled as the loss on the part of policy maker (e.g. administrative costs),not the loss affecting pre-tax incomes (e.g., distortions). With unchanged pre-tax incomes, the loss function is, by definition, L(t) = tny.

By definition of the MCF,

1 + λt = MCF (t) =dLdtdTdt

=nydTdt

.

By integrating dT (t)dt = ny

1+λt , we obtain, using normalization T (0) = 0,

T (t) =ny

λlog(1 + λt).

The inequality T (t) < tny is equivalent to λt > log(1 + λt), and usingsubstitution x = λt, f(x) = ex − x− 1 > 0 if x > 0. To see that it holds, noticethat f(0) = 0 and f ′(x) > 0 for x > 0.

5 Public spending

5.1 Strategic deficit I

Suppose that politics is a conflict of two representative consumers, right-wing Rand left-wing L. Each has identical endowment m 0, pays head tax τ ∈ [0,m],consumes private goods in amount m − τ and a public good provided by thegovernment in amount g ≥ 0. The utility functions in period t are

uL,t =gt

2+ k ln(m − τt)

uR,t =gt

2+ ln(m − τt)

where 0 < k < 1. Lifetime utilities are UL =∑

t uL,t and UR =∑

t uR,t.

a) If government budget must be balanced in each period (2τt = gt), whatare the optimal amounts of public good for R and L?

b) Suppose we have two periods, t = 1, 2. In period t = 1, R controls thegovernment budget (sets g1 and τ1). In period t = 2, L controls the budget(sets g2 and τ2). The budget must be balanced at the end of the secondperiod, 2(τ1 + τ2) = g1 + g2, but not necessarily at the end of the firstperiod. What taxes t1, t2 will be imposed, and what is the deficit?

c) In two-period setting, let L control the budget in t = 1 and R controlthe budget in t = 2. What taxes t1, t2 will be imposed, and what is thedeficit?

For b) and c), analyze only interior solutions (i.e. with gt > 0).

30

Page 31: Workbook

SOLUTION

a) For 2τt = gt, we can write lifetime utilities as UL =∑

t [τt + k ln(m − τt)]and UR =

t [τt + ln(m − τt)]. From the first-order condition, this yieldsfor each period identical tax in optimum, τ∗

L = m − k and τ∗R = m − 1.

Hence,g∗L = 2(m − k) > 2(m − 1) = g∗R.

b) R plays in period 1 and L plays in period 2.

1. First, examine what L will do in period 2. L inherits deficit b ≡g1 − 2τ1, so he/she is restricted by necessity to balance the budget,g2 = 2τ2 − b. L sets g2, τ2 to maximize utility in period 2, uL,2:

(g2, τ2) = arg maxg2

2+k ln [m − τ2] = arg max

g2

2+k ln

[

m − g2

2− b

2

]

This yields g2 = 2m − 2k − b. (Since we consider only interior solu-tions, we have to impose b < 2m − 2k.)

2. Second, examine what R will do in period 1, anticipating g2 = 2m−2k − b. R maximizes lifetime utility UR:

(g1, τ1) = arg maxg1

2+ ln [m − τ1] +

g2

2+ ln [m − τ2]

Imposing g2 = 2m − 2k − b, τ1 = g1

2 − b2 and t2 = g2

2 + b2 , we have

UR =g1 − b

2+ln

[

m − g1 − b

2

]

+m−k+ln k = τ1+ln(m−τ1)+m−k+ln k.

This yields unique τ1 = m − 1. The pair (g1, b) is not uniquelycharacterized due to quasilinearity of the utility function; with τ1 asabove, we only have g1 = 2τ1 + b = 2m − 2 + b.

3. As a final step, derive τ2 as a function of b. We know τ2 = g2

2 + b2 =

2m−2k−b+b2 = m − k. The solution can be characterized for any

b < 2m − 2k as follows:

τ1 = m − 1 = τ∗R g1 = 2m − 2 + b

τ2 = m − k = τ∗L g2 = 2m − 2k − b

If, for some reason, R wants to set (g1, b) so as to maximize utilityin the period that he/she has in control (i.e. period 1; notice thatthe lifetime utility UL will be constant if g1 and b satisfy conditionsabove), then we have:

31

Page 32: Workbook

(g1, b) = arg maxg1

2+ ln [m − τ1] = arg max

g1

2+ ln 1 = arg max

g1

2

This implies maximal g1 and maximal deficit b, but since we need alsonon-negative spending in period 2 to have interior solution (g2 > 0),our maximal deficit is b = 2m − 2k − ε, where ε > 0 is very small.We have g1 = b + 2τ1 = 2m − 2k + 2m − 2 − ε = 4m − 2k − 2 − ε.Then, the particular solution with strategic deficit is:

τ1 = m − 1 = τ∗R g1 = 4m − 2k − 2 − ε

τ2 = m − k = τ∗L g2 = ε

c) R plays in period 2 and L plays in period 1. You proceed by analogy andget

τ1 = m − k = τ∗L g1 = 2m − 2k + b

τ2 = m − 1 = τ∗R g2 = 2m − 2 − b

In both cases, both L and R manage to get in the period when they rule theoptimal taxation, where individual marginal cost of public spending equals indi-vidual marginal benefit of public spending. The deficit not necessarily emerges.

5.2 Strategic deficit II

We have 2 periods with discount factor equal 1 (i.e., zero interest rate). Inperiod 1, a left-wing party is in power. In period 2, a right-wing party is inpower. In any period, preferences of both parties over spending x and tax tare u(x, t) = bix − t2/2, where the only difference is bL > bR. Budget must bebalanced after two periods, x1 + x2 = t1 + t2.

a) Is it optimal for a left-wing government to create a deficit in period 1?If so, is spending x1 higher or lower than in a case when deficit is notpossible at all?

SOLUTION Start with the case of zero deficit. Then, x1 = t1 and x2 = t2.In period 2, R-party selects xR = tR = arg maxbRx − x2/2 = bR. In period1, L-party selects xL = tL = arg maxbLx − x2/2 = bL.

With deficit d := x1 − t1, R-party in period 2 optimizes as follows:

x2 = arg maxbRx2 − (x2 + d)2/2 = bR − d

We may also re-write into t2 = x2 + d = bR. This says that R-party keepstax t2 at a constant level (this is property of quasilinear utility as we have ithere.)

32

Page 33: Workbook

Since this is an extensive game, where L-party plays first, L-party anticipatesthe best response of R-party. Thus, L-party optimizes over both periods, whereusing discount factor equal,

(x1, d) = arg maxbLx1 − (x1 − d)2/2 + bLx2 − (x2 + d)2/2= arg maxbLx1 − (x1 − d)2/2 + bL(bR − d) − b2

R/2.

Maximizing over both variables yields an identical condition, x1 − d = bL.This states that L-party keeps tax t1 = x1−d at a constant level bL. Otherwise,L-party is indifferent over the size of the deficit.

So, with this specification of utility, we observe strategic effect only insofaras each party minimizes deviation from the party-optimal tax. Control over taxin each period has only the party that rules that period, and all equilibrium arecharacterized simply by t1 = bL and t2 = bR.

To answer the question: L-party may choose spending x1 below bL, butthen—to keep tax constant— she sets a negative deficit (i.e., makes a budgetarysurplus). If she instead chooses x1 > bL, then she sets a positive deficit.

5.3 Spending cap given by losers

In the lecture, we discussed the possibility to agree on a spending cap T asa remedy to overspending (tragedy of budgetary commons). Suppose currentlosers are agenda-setters, and current winner only vetoes their take-or-leaveproposal.

a) For what parameters (n, δ) is the equilibrium cap socially optimal?

SOLUTION The losers’ optimal cap is Tk = δ2 < 1. Losers try to get thecap as close to this level as possible. The problem is that the winner can vetotheir proposal.

The game is a simple extensive game: (i) Loser propose a cap. (ii) Winneragrees, or vetoes. In the case of veto, there is no cap. We solve the gameby backward solution. That means, we decide when the winner is willing toapprove a cap. Approval depends on whether the winner gets more or less thanif vetoing the cap.

Winner’s utility in the case of any cap T is Wj(T ) (recall lecture), and isincreasing if T < Tj and decreasing if T > Tj , where we know 1 < Tj < n2.

Wj(T ) = 2√

T − T

n+

δ

(1 − δ)n

(

2√

T − T)

.

In the case of veto (no cap), the utility is as-if a cap were set at T = n2,where each winner in any period sets x = T = n2, so the utility is

Wj(n2) =

2δn + n2(1 − 2δ)

(1 − δ)n.

33

Page 34: Workbook

We also evaluate winner’s utility for the socially-optimal cap, T c = 1:

Wj(1) =n + (n − 1)(1 − 2δ)

(1 − δ)n.

Now, losers want to get approval for the cap that is as close as possible toTk < 1. Since no-cap is strictly worse for the losers, they will always propose acap that is acceptable to the winner. The equilibrium cap is the lowest possiblecap where the winner is still (at least weakly) better off than having no cap atall. Thus, the winner’ utility under the equilibrium cap must be exactly equalto the utility under no cap.

If the equilibrium cap should be a socially optimal cap 1, then we simplyrequire Wj(n

2) = Wj(1). By comparing,

2δn + n2(1 − 2δ) = n + (n − 1)(1 − 2δ),

(1 − 2δ)(n − 1)2 = 0.

This holds always for n = 1 (which obviously violates assumption n > 1)and for δ = 1/2, irrespective of the number of players.

5.4 Coalitional vs single-party governments

Suppose that structure of tax base is such that each of n groups in a societypays equal share of total public spending.

If a government is coalitional, assume that each of n groups proposes xi

that is collective benefit for this group. Denote the optimal spending xCi . In

a single-party government, only the winner group gets xj , the others get zero.Denote the optimal spending of the winner xS

j .Suppose utility function ui = ui(xi, ti) satisfies the following standard as-

sumptions:

∂ui

∂xi> 0,

∂2ui

∂x2i

≤ 0

∂ui

∂ti< 0,

∂2ui

∂t2i≤ 0

a) Is it possible that xCj > xS

j , i.e. the winner in the single-party governmentspends less than if he were just a member of the coalitional government?

SOLUTION Equal share of total public spending means ti =∑

j xj/n. Now,we use that in both types of government, each group that is in power determinesher own spending. Thus, xi = arg max ui. In the optimum, the total differentialis zero,

dui =∂ui

∂xidxi +

∂ui

∂tidti = 0

34

Page 35: Workbook

From ti =∑

j xj/n, we have dti = dxi/n. Thus, the differential rewrites

dui

dxi=

∂ui

∂xi+

1

n

∂ui

∂ti= 0.

Now, under coalitional government, we know that all coalitional partiesspend, so tCi =

j xj/n > xi/n. Thus, due to non-decreasing marginal cost,we have that the marginal cost of a unit of collective benefit is relativelylarger in a coalitional government that in a single-party government, wheretSi =

j xj/n = xi/n. For some identical value xi = x,

0 >∂uS

i

∂ti(x, tSi ) ≥ ∂uC

i

∂ti(x, tCi ).

Now, inserting into the implicit solutions,

∂uCi

∂xi(xC

i , tCi ) = − 1

n

∂ui

∂ti(xC

i , tCi ) ≥ − 1

n

∂uSi

∂ti(xC

i , tSi ) =∂uS

i

∂xi(xS

i , tSi ).

To conclude, from non-increasing marginal benefit, we have

∂uCi

∂xi(xC

i , tCi ) ≥ ∂uSi

∂xi(xS

i , tSi ) =⇒ xCi ≤ xS

i .

5.5 Coordinated budgeting

Assume three parties, A, B and C, and two types of public expenditures, x andy. Parties have the following preferences over budgets of total size B = x + y:

uA = −(2 − x)2 − (2 − y)2

uB = −(3 − x)2 − (1 − y)2

uC = −(xc − x)2 − (yc − y)2

a) For which xc ≥ 0, yc ≥ 0 does coordinated budgeting lead to higher Bthan sequential budgeting?

SOLUTION Both for party A and B, the optimal total budget is B = 4.Therefore, in coordinated budgeting, they will create a majority in the first stepand agree on B = 4, regardless of preferences in C. We want that in sequentialbudgeting, B < 4.

This problem can be solved both graphically or analytically. Analytically,we want x + y < 4. This implies only two cases: (i) C is decisive (median) atleast in either of dimensions, and less than median in the other dimension; (ii)C is less than median in both dimensions. In (i), consider that C is decisive inx, and less than median in y; hence, xc ∈ [2, 3) and yc < 1. If C is decisive iny, we have a mirror-case, xc < 2 and yc ∈ [1, 2). If C is decisive in both x andy, then xc ∈ [2, 3) and yc ∈ [1, 2), where obviously also xc + yc < 4. In (ii), wehave xc ≤ 2 and yc ≤ 1.

Graphically, this amounts to polygon with coordinates (0, 0)−(3, 0)−(3, 1)−(2, 2) − (0, 2) − (0, 0).

35

Page 36: Workbook

5.6 Pivotal legislator

We have n non-cooperative legislators in the Parliament (n is even number),each with circular preferences in space x × y. The bliss point of a legislatori ∈ 1, . . . , n is (xi, yi). Suppose a new legislator enters the Parliament, withbliss point (xn+1, yn+1).

1. Derive all possible (xn+1, yn+1) for which in sequential budgeting, theequilibrium policy satisfies (x, y) = (xn+1, yn+1).

2. Derive all possible (xn+1, yn+1) for which in coordinated budgeting, theequilibrium policy satisfies (x, y) = (xn+1, yn+1).

3. Derive (or just plot a graph) all possible (xn+1, yn+1) for which in bothsequential and coordinated budgeting, x 6= xn+1 and y 6= yn+1.

SOLUTION We know that in each stage, a proposal is equilibrium if it is amedian proposal. Label xi, i ∈ 1, . . . , n + 1 such that x[1] ≤ x[2] ≤ . . . ≤ x[n+1].By analogy, introduce y[i], (x + y)[i] and (x − y)[i].

Sequential budgeting: The necessary and sufficient condition is that xn+1 =x[n/2+1] and yn+1 = y[n/2+1].

Coordinated budgeting: The necessary condition and sufficient is that (xn+1−yn+1) = (x − y)[n/2+1] and (xn+1 + yn+1) = (x + y)[n/2+1].

To answer the last point: In sequential budgeting, the set Ω where x 6= xn+1

and y 6= yn+1 is characterized by xn+1 < x[n/2+1] or xn+1 > x[n/2+1] andyn+1 < y[n/2+1] or yn+1 > y[n/2+1]. In coordinated budgeting, we know thatequilibrium (x, y) satisfies (x − y) = A := (x − y)[n/2+1] and (x + y) = B :=(x + y)[n/2+1]. Alternatively,

(x, y) =

(A + B

2,B − A

2

)

.

Thus, in coordinated budgeting, the set Θ where x 6= xn+1 and y 6= yn+1 ischaracterized by xn+1 < (A + B)/2 or xn+1 > (A + B)/2 and yn+1 < y(B−A)/2

or yn+1 > y(B−A)/2. Overall, we seek (xn+1, yn+1) ∈ Ω ∩ Θ.

5.7 Order of voting

Suppose three political parties, i = 1, 2, 3, have the following preferences overtax rate t and public spending g, where t1∗ < t2∗ < t∗3 and g1∗ = g2∗ = g3∗:

ui = −(t − ti∗)2 − (g − gi∗)2

The parties vote in two stages. In each stage, they use majority voting whichends if there is no proposal able to beat the last agreed proposal. In Stage 2,public spending g is determined. Is there a difference if there a vote about taxrate or a vote about deficit in Stage 1?

36

Page 37: Workbook

SOLUTION No. In Stage 2, regardless whether we vote on constraint b =b = g − t or t = t = g − b, the pivotal party is party 2. In Stage 1, the pivotalparty is again party 2. Hence, it proposes either (t) = t2 (voting about taxrate), or b = b2 = g2− t2 (voting about deficit). Either is a Condorcet winner inRound 1. In Stage 2, the pivotal party 2 proposes g = g2, which is a Condorcetwinner. This logic can be demonstrated by drawing graphs of constraints votedin Round 1, and the resulting outcomes on these constraints in Round 2, as wedid it in the lecture.

5.8 Coordinated vs. sequential budgeting with compensa-tions

We have 3 individuals who pay tax t. Tax revenues are used to pay for privategoods g1, g2, where balanced budget must be satisfied, 3t = g1 + g2. Utilityfunctions are as follows:

u1 =√

g1 − t

u2 =√

g2 − t

u3 = −t

We have two types of budgeting. In sequential budgeting, the individualsuse majority voting to determine g1 in Stage 1, and then use majority voting todetermine g2 in Stage 2. In coordinated budgeting, the individuals use majorityvoting to determine t in Stage 1, and then use majority voting to determineallocation of 3t into g1, g2 in Stage 2. In each stage, costless compensationsbetween any pair of players are possible.

a) Derive τ∗, g∗1 , g∗2 in sequential budgeting.

b) Derive τ∗, g∗1 , g∗2 in coordinated budgeting.

c) Compare the results and explain the difference or the absence of difference.

SOLUTION

a) In Stage 2, the individuals determine g2 and t is set residually, as t =(g1 + g2)/3. This means that we can alternatively think that individualsdetermine t ≥ g1/3 and g2 is set residually. In the absence of compensa-tions, we use u2 =

√3t − g1 − t and have

∂u1

∂t=

∂u1

∂t= −1,

∂u2

∂t=

3

2(√

3t − g1)− 1.

Hence, Individuals 1 and 3 prefer the lowest feasible tax, t = g1

3 , whereas

Individual 2 prefers tax that satisfies ∂u2

∂t = 0, i.e. t = g1

3 + 34 .

In the absence of compensations, Individuals 1 and 3 would vote togetherfor the lowest possible tax t = g1

3 , where g2 = 0. Each individual would

37

Page 38: Workbook

obtain zero incremental utility (surplus) in Stage 2. This gives intuitionthat it will be Individual 2 who will be willing to compensate the otherindividuals to vote for a higher tax, and correspondingly for a strictlypositive g2. Denote compensations to Individuals 1 and 3 as c1, c3 ≥ 0.

Such a compensation vector will be successful if Individuals 1 and 3 cannotmake a counterproposal that would compensate each other to vote backfor t = g1

3 , where they earn zero. In other words, the joint net surplus ofIndividuals 1 and 3 in Stage 2 must not be less than zero in total, formally

u1 −√

g1 +g1

3+ u3 +

g1

3+ c1 + c3 ≥ 0.

Individual 2 must respect this constraint, and since c1+c3 negatively enterhis or her net utility, the constraint will be satisfied with equality:

c1 + c3 =√

g1 −2g1

3− u1 − u3

Individual 2 maximizes net surplus in Stage 2, u2 + g1

3 − c1 − c3 = u2 +u1 + u3 − √

g1 + g1, which is equivalent to maximization of total payoff,with the first-order condition

∂(u1 + u2 + u2)

∂t=

3

2(√

3t − g1)− 3 = 0.

Maximum is at t = g1

3 + 112 , or g∗2 = 1

4 . The compensations are

c1 + c3 =√

g1 −2g1

3− u1 − u3 =

g2

3+

g2

3=

2

12.

With symmetry, c1 = c3 = 112 . As a final check, observe that Individual 2

has strictly positive net surplus in Stage 2:

u2 +g1

3− c1 − c3 =

g∗2 − g∗23

− c1 − c3 =6 − 3

12> 0

In Stage 1, the players are expecting the outcome described above. Hence,they take g2 as given and face the symmetric problem like in Stage 2, onlyIndividual 1 will be the one who compensates one of the other individualsto increase tax (and, correspondingly, g1). Hence, g∗1 = 1

4 . Total tax ist∗ = 2

12 = 16 . Notice that net payoff of Individual 3 is u3 + 2c3 = 0.

b) In Stage 2, tax revenues are spent, 3t = g1 + g2. Individual 3 is indifferentbetween all the allocations, whereas interests of Individuals 1 and 2 are inconflict. If it happens that Individuals 1 and 2 cooperate with each other,they maximize joint payoff

√3t − g2 +

√g2, and the solution is symmetric,

g1 = g2 = 32 t. If Individuals 1 and 3 cooperate with each other, they

maximize joint payoff√

g1, and g1 = 3t. By analogy, if Individuals 2

38

Page 39: Workbook

and 3 cooperate with each other, they maximize joint payoff√

g2, andg2 = 3t. The maximal joint payoff is, quite paradoxically, for cooperationof Individuals 1 and 2 who are in strict conflict.

We will see that in this cooperation, where g1 = g2, there will be an equi-librium. Consider Individual 3 who would like to charge a compensation,so he offers an increase in g1 to Individual 1 (without loss of general-ity) in exchange for a compensation. Maximal willingness of Individual1 to pay is c1 =

√g1 −

3t/2. Maximal willingness of Individual 2 tomake a counterproposal to Individual 1 to restore symmetric allocationis c2 =

3t/2 − √3t − g1. It is easy to see that c2 > c1 if g1 >

3t/2,

because√

3t − g1 +√

g1 is maximized exactly for g1 =√

3t/2, so

3t − g1 +√

g1 ≤ 2√

3t/2,

√g1 −

3t/2 ≤√

3t/2 −√

3t − g1,

c1 ≤ c2.

Put simply, Individual 2 can always restore cooperation between Individ-uals 1 and 2, facing any bargain between Individuals 1 and 3. As a result,Individual 3 cannot offer an increase to Individual 1 that would not bechallenged by a counter-offer of Individual 2, and vice versa. The equi-librium is g1 = g2 = 3t/2. It is interesting that this equilibrium occurs ifIndividual 1 provides Individual 2 compensation

3t/2 to keep g1 = g2

instead of g2 = 3t, and equivalently Individual 2 provides Individual 1compensation

3t/2 to keep g1 = g2 instead of g − 1 = 3t. However,effectively there are no transfers between any players in equilibrium.

In Stage 1, the Individuals 1 and 2 expect to get u1 = u2 =√

3t/2 − t,whereas Individual 3 expects to get u3 = −t in the future. In the absenceof compensations, we would have t = 3/8 by agreement of Individuals 1and 2, maximizing u1 + u2 = 2

3t/2 − 2t. Individual 3 can compensateone of the two individuals (suppose Individual 1) to decrease t; he or sheis willing to pay maximal compensation c3 ≥ 0 as long as u3 − c3 ≥ −3/8.The maximal compensation of Individual 3 is therefore c3 = u3 + 3/8.Joint cooperation yields t = 3/32.

The reservation payoff of Individual 1 (given by cooperation with Individ-ual 3) is u1 + c3 = u1 + u3 + 3/8 =

3t/2− 2t + 3/8 = 9/16. Identically,the reservation payoff of Individual 2 (given by cooperation with Individ-ual 3) is 9/16. Now, are these reservation payoffs sufficient to persuadeIndividual 1 or 2 to break their coalition? Yes, because payoff of eachunder joint cooperation is lower,

u1 = u2 =√

3t/2 − t =√

9/16 − 3/8 = 6/16 < 9/16.

39

Page 40: Workbook

The coalition of Individuals 1 and 2 is able to face deviation only if theyprovide Individual 3 with some compensation that indirectly affects reser-vation payoffs. Since we have only compensation in pairs, part of thecompensation will be provided by Individual 1 (this is the one that servesas threat to coalition of 2 and 3), and part by Individual 2 (as a threat tocoalition 1 and 3).

We can immediately impose symmetry and denote the compensation fromeach individual c > 0, so total compensation is 2c. Now, Individual 3 getspayoff −3/8 + 2c if t = 3/8. Therefore, her maximal compensation for acooperating partner is lower, c3 = u3+3/8−2c, and the reservation payoffof Individual 1 or 2 is 9/16− 2c. Are reservation payoffs now sufficient topersuade either Individual 1 or 2 to break their coalition? Not any moreas long as

u1 − c = u2 − c = 6/16 − c ≥ 9/16 − 2c,

c ≥ 3/16.

OK, but is this a solution? Think about coalition of 1 and 2 properly. Ifthey set tax, they not only directly affect u1 and u2, but also u3 whichindirectly enters the reservations payoffs. In general, for any t12 agreedby the coalition of 1 and 2, we have u3 = −t12. The reservation payoff isu1(t = 3/32)+u3(t = 3/32)−u3(t12)−2c and the net payoffs of Individuals1 and 2 are u1(t12)−c = u2(t12)−c. We need c set as a minimum satisfying

u1(t12) − c = u2(t12) − c ≥ u1(t = 3/32) + u3(t = 3/32) − u3(t12) − 2c,

3t12/2 − t12 − c ≥ 3/8 − 3/16 + t12 − 2c,

c = 3/16 −√

3t/2 + 2t12.

Net payoff is u1(t12) − c = u2(t12) − c = 2√

3t12/2 − 3t12 − 3/16. This ismaximized for t∗ = 1/6 and g∗1 = g∗2 = 1/4.

c) With pairwise compensations, both coordinated and sequential budgetingyield identical taxation. The level of taxation is socially optimal, meaningthat it maximizes total utilities. This non-intuitive result stems from thenecessity of winning coalitions to take into account possible counter-offersof non-members.

5.9 Coalitional bargaining

Parties A and B have the following preferences over tax t and spending g:

uA = 18 + 2(g + t) − (g2 + t2)

uB = 2(4t − t2 + 2g) − g2

40

Page 41: Workbook

They bargain about the budget. We only know that they establish a Pareto-efficient budget (i.e., under such a budget, none of the parties can be better offwithout making the other party worse off). Can we expect a deficit here?

SOLUTION By rearranging, we get uA = 20− (g − 1)2 − (t− 1)2 and uB =12 − (g − 2)2 − 3(t − 2)2. In other words, their preferences are quasiconcave ineach dimension, with bliss points (1, 1) and (2, 2). Both bliss points feature zerodeficit.

In the Pareto-efficient budget, the slope of indifference curves of both partiesmust be identical.

−∂uA

∂g

∂uA

∂t

= −g − 1

t − 1= − g − 2

3(t − 2)= −

∂uB

∂g

∂uB

∂t

g =t − 4

2t − 5

The deficit exists if g > t. Thus, we check if (t − 4)/(2t − 5) > t undert ∈ (1, 2) (where the agreement will be located). By examining roots of apolynomial, we find 2t2 − 5t + 4 < 0 holds if t ∈ (1, 2), so in fact g < t. (Becareful when multiplying the inequality above by the negative term 2t− 5 < 0!)The answer is no, we don’t expect budget deficit but a budget surplus.

6 Lobbying

6.1 Winner-take-all rent-seeking

Assume a winner-take-all contest for rent R, where groups X and Y compete forthe rent. X invests into rent-seeking x ≥ 0. Y observes x and invests y ≥ 0. Noextra investments are possible. Each group maximizes expected profit (expectedrent minus rent-seeking investment).

a) Suppose that the government gives rent to group X if x ≥ b; otherwise itis given to group Y . Which (x, y) is in equilibrium?

b) Suppose that the government gives rent to group X only if x > y andto group Y if y > x. Otherwise, the rent is allocated randomly withprobability 1

2 each. Which (x, y) is in equilibrium? Is it different to theprevious case?

SOLUTION

Case a) It is only X, not competition between X and Y, that determines whois awarded a rent. Hence, Y sets y = 0. X faces a take-or-leave offer of R atprice b, which is obviously accepted if b ≤ R, and is not accepted if b ≥ R:

b ≤ R : (x, y) = (b, 0)

41

Page 42: Workbook

b ≥ R : (x, y) = (0, 0)

Case b) Solve by backward induction. Consider optimal investment of Y,y = y(x). Start with x ≥ R. Y can either i) win, ii) lose, or iii) play lottery. Win(y = x+ε > x) implies negative profit R−y = R−x−ε < 0, loss implies at bestzero profit (y = 0), and lottery implies negative profit R/2 − y = R/2 − x < 0.Hence, loss is best response, y(x ≥ R) = 0.

If x < R, win implies strictly positive profit if x < y < R. Loss gives, atbest, zero profit. Lottery gives less than win, R/2− x < R− y = R− (x + ε), iffor win we set a sufficiently small 0 < ε < R/2. Hence, win is the best response,y(x < R) > x.

X anticipates this behavior. X can either i) win, or ii) lose (lottery is un-available, because Y never prefers it!). Win is only if x ≥ R, i.e. at best forx = R. Loss is for x < R, at best x = 0. Both options give zero profits. X isindifferent, so we have two equilibria:

(x, y) = (0, ε > 0) or (x, y) = (R, 0).

Case a) and Case b) yield identical (but not unique!) equilibrium if and onlyif b = R:

(x, y) = (R, 0).

6.2 Redistribution by pressure with tax evasion

We have 2 individuals, one productive and one unproductive. Individual 1earns pre-tax income Y1 > 0, whereas Individual 2 earns nothing, Y2 = 0. Bothindividuals can invest into political influence, ci ≥ 0 (suppose that zero incomeis not a binding constraint). The unproductive individual uses influence to taxincome of the productive individual and grab the tax revenues. In contrast, theproductive individual uses the influence to reduce taxation and thereby protecthimself from expropriation. The government responds to political influence by aproportional rule (v = 1); i.e. taxable income Y is taxed by a tax rate τ ∈ [0, 1]such that the net gains are

π1 ≡ (1 − τ)Y =c1Y

c1 + c2,

π2 ≡ τY =c2Y

c1 + c2.

In other words, the flat tax is τ = c2

c1+c2

and all tax revenues are used as asubsidy to the unproductive individual. In addition, the productive individualcan invest into tax evasion; protection of eY1 part of income from taxation(where e ∈ [0, 1]) costs him e2Y1; this investment decreases taxable income fromY1 to Y = (1 − e)Y1.

a) If tax evasion is impossible, what are the equilibrium τ∗, c∗1, c∗2?

b) If tax evasion is possible, what are the equilibrium e∗, τ∗, c∗1, c∗2?

42

Page 43: Workbook

SOLUTION We know from the lecture that rent-seeking investments underimperfect competition of 2 players for prize of any value Y are for both ci = Y

4 ,

and since investments are symmetric, each gains (in net terms) Y2 − Y

4 = Y4 .

a) Without tax evasion, Y = Y1, hence c∗1 = c∗2 = Y1

4 , and τ∗ = 12 .

b) With tax evasion, the productive individual thinks about net payoff fordifferent values of e. Investing into e affects his payoff in three ways: i)eY1 is saved; ii) e2Y1 is lost, and iii) rent-seeking yields net gain Y

4 < Y1

4 .The payoff writes

π1 = eY1 − e2Y1 +(1 − e)Y1

4= Y1

[

−e2 +3

4e +

1

4

]

.

By the first-order condition, we get −2e + 34 = 0, hence

e∗ =3

8, Y =

5

8Y1, c

∗1 = c∗2 =

5

32Y1, τ

∗ =1

2.

6.3 Lobbying a bureaucrat

You are a foreign investor and you need to lobby a decisive senior bureaucrat.There are 2 bureaucrats (A, B), both looking identically important, but onlyone is decisive. Lobbying a single bureaucrat costs c > 0.

1. Describe your optimal strategy and derive your expected payoff of playingthis strategy.

2. Suppose there is a lobbyist who has exclusive access to the bureaucratsand knows who is decisive. He gives you recommendation whom to lobby,but you decide who will be lobbied. Recommendation is costless, so youonly compensate the lobbyist for the lobbying cost c. Carefully describeall equilibria. Describe posterior beliefs in equilibrium (i.e., equilibriumprobability that a recommended bureaucrat is decisive). What is yourequilibrium expected payoff? Is it always better than in the previouscase? Can it be lower than in the previous case?

3. What happens if you can sign up a contract with the lobbyist such that heor she does not charge the lobbying cost if, following his recommendation,the bureaucrat appears not to be decisive? Again, carefully describe equi-libria, posterior beliefs that a recommended bureaucrat is decisive, andalso equilibrium payoffs. Are you always better off than in the previouscase where such a contract could not be signed?

4. Now, suppose the lobbyist has non-exclusive access to the bureaucrats.If you decide to use the lobbyist as an intermediary, you compensate thelobbyist for lobbying a single bureaucrat by amount l, where c < l < 3c/2,

43

Page 44: Workbook

and you are obliged to follow the lobbyist’s recommendation. Again, care-fully describe equilibria, posterior beliefs that a recommended bureaucratis decisive, and also equilibrium payoffs.

SOLUTION

1. Your prior beliefs are 1/2 for each bureaucrat. To describe your strategy,let p ∈ [0, 1] be the probability that you select bureaucrat A in the firstround. For the second round, it is obvious that you select the remaining(decisive) bureaucrat with probability 1. The expected payoff is

p

[1

2(−c) +

1

2(−2c)

]

+ (1 − p)

[1

2(−c) +

1

2(−2c)

]

= −3

2c.

Thus, in equilibrium, you can select bureaucrat A with any p ∈ [0, 1].

2. First of all, notice that a lobbyist’s costs are always compensated, so hispayoff is always zero. If a non-decisive bureaucrat is lobbied in the firstround, it is obvious that the other (decisive) bureaucrat is lobbied, so yourextra costs in the second stage are c.

Thus, the problem can be represented by the following simultaneous game:

You/Lobbyist truth lie

follow −c, 0 −2c, 0not follow −2c, 0 −c, 0

Denote f ∈ [0, 1] your probability of following, and t ∈ [0, 1] the lobbyist’sprobability of telling truth. Your best-response correspondence f(t) is (i)t < 1/2 : f = 0, (ii) t = 1/2 : f ∈ [0, 1], and (iii) t > 1/2 : f = 1.The lobbyist is indifferent, so all these strategy profiles can be equilibriumprofiles. Your equilibrium payoffs are equilibrium-dependent: (i) t < 1/2 :t(−2c) + (1 − t)(−c) = −c(1 + t), (ii) t = 1/2 : −3c/2, and (iii) t >1/2 : t(−c) + (1 − t)(−2c) = −c(2 − t). Clearly, the equilibrium payoffis minimized when t = 1/2 (because the advice is useless, only replicatesyour prior beliefs), and equals to the payoff in the previous case. In theother cases, the payoff is bigger (because the advice is at least partiallyuseful).

Your posterior belief that the recommended bureaucrat is decisive is alsoequilibrium-dependent, and equals t. (Conditional probability of selectinga bureaucrat is t+1− t = 1, where only t is for decisive bureaucrat, henceby Bayes rule, the posterior is t/1 = t.) Again, this shows that t = 1/2 isan equilibrium where advice is useless, because the prior belief equals theposterior belief.

3. In the second stage, you know the truth with certainty, so you have to

44

Page 45: Workbook

distinguish between following when truth is recommended, f t, and follow-ing when lie is recommended, f l. To be able to study deviations, denotethe expected payoffs of the last stage as (e1, e2); since in the last stageyou have to find the decisive bureaucrat, and in this case you always haveto pay (regardless whether the decisive bureaucrat was or was not recom-mended), these payoffs must be (e1, e2) = (−c, 0).

Now, if the lobbyist recommends truth, your following ends the game withpayoffs (−c, 0) and not following proceeds the game with (−c+e1, 0+e2) =(−2c, 0). Hence, f t = 1, and the expected payoff is (−c, 0).

If the lobbyist recommends lie, your not following ends the game with(−c, 0) and following gives (0+e1,−c+e2) = (−c,−c) (i.e., game proceeds,but you don’t pay). You are indifferent, play any f l ∈ [0, 1]. The expectedpayoff in this case is (−c,−cf l). As a result, the second stage gives eithert = 1 and f l ≥ 0, or t < 1 and f l = 0. In any case, the expected payoff ofthe second stage is (−c, 0).

With payoff in the second stage, we can rewrite the game:

You/Lobbyist truth lie

follow −c, 0 −c,−cnot follow −2c, 0 −c, 0

We have two equilibria, either (truth, follow) or (lie, not follow). Yourequilibrium payoff is always (−c, 0). Irrespective of the equilibrium, thegame always ends in the first stage. The posterior belief is either zero orone.

4. In the second-stage, not to using lobbyist means cost c, whereas usinglobbyist gives you at best l > c, hence lobbyist is not used, and thepayoffs are (−c, 0). Enter into the payoff matrix:

You/Lobbyist truth lie

ask lobbyist −l, l − c −l − c, l − clobby alone −3c/2, 0 −3c/2, 0

The lobbyist is always indifferent. Your expected payoff from asking lob-byist is (−l)t + (−l − c)(1 − t) = −l + (1 − t)(−c), and expected payofffrom lobbying alone is −3c/2. There is a critical level t∗ = (2l − c)/2cthat determines your best-response: (i) t < t∗, you lobby alone, (ii)t = t∗, you are indifferent, and (iii) t > t∗, you ask the lobbyist. Fromc < l < 3c/2, you can easily derive that all three equilibria types exist,because 1/2 < t∗ = (2l − c)/2c < 1.

45

Page 46: Workbook

7 State aid

7.1 Bailouts under government’s budget constraint

Like in the lecture, let managers determine the type of business. To find a goodbusiness, the manager must pay 0 < c < b. To find bad business is costless.Firm with bad business must be bailed out to survive. The manager obtainswage b > 0 if a company survives.

Suppose we have n firms. Each manager has different ci (different ability),but obtains identical (market) wage b in the case of survival. The government iswilling to bail out all bad firms (each bailout costs 1), but is restricted by havingonly 0 < m < n in the budget (hence can make only m bailouts at maximum).

a) Suppose that firms expect the probability of an average bad firm beingbailed out to be β ∈ [0, 1]. Derive the number of firms that will choose abad business and thereby demand bailout, d = d(β).

b) Derive the equilibrium probability of rescue, β∗ (implicit solution is suffi-cient; I recommend to plot a graph to illustrate the solution).

c) What is the number of firms that are bailed out in equilibrium?

d) For the equilibrium number of firms that demand bailout, d∗, do we haved∗ < m, d∗ = m or d∗ > m?

SOLUTION It is useful to define distribution function of managers ability,F (c), where F (0) = 0 and F (b) = 1 (by assumption ∀i : ci < b).

a) A manager compares two options, good project (payoff b − ci) and badproject with demand for bailout (payoff βb). Bailout is demanded if ci >(1 − β)b. Using the distribution function, we have

d(β) = n[1 − F ((1 − β)b)].

We will denote it df (β) since this function describes behavior of firms.

b) The probability of bailout is given as follows: i) for sufficiently low bailoutdemands (d ≤ m), all projects can be bailed out, β = 1; ii) otherwise(d > m) we have rationing and probability of bailout is β = m

d . Wecan define its inverse function dg(β) = m

β , which gives us the number ofdemands corresponding to a probability of bailout β; superscript g is hereto capture that this function describes behavior of the government.

Equilibrium is characterized by df (β∗) = dg(β∗). Hence, the equilibriumprobability of rescue is implicitly given by

β∗[1 − F ((1 − β∗)b)] =m

n.

46

Page 47: Workbook

Figure 7: Demand and supply of bailouts

The equilibrium condition df (β∗) = dg(β∗) can also be described on thefollowing figure:

c) Using definition of df (β), we have

d∗ =m

b∗.

d) On the graph, we immediately see β∗ ∈ (mn , 1). From c), we get

d∗ ∈ (m,n).

In other words, we always have some firms that demand bailout but arenot satisfied (demands are rationed = rationing). Moreover, we alwayshave that some firms demand bailout and are satisfied. Probability ofbailout is strictly positive, but never equal 1.

8 Rent-seeking

8.1 Entry into a public tender

Suppose a politician needs a highway of value v > 0. There are three construc-tion companies, with costs per highway 0 < c1 < c2 < c3 < v. These costsare known to all. The company which builds a highway is identified in a ten-der. Tender is organized such that each invited company submits a sealed bidof offer price, pi. By tender law, the politician has to select the lowest price,p∗ = min pi. The winner of tender has profit pi − ci, and the losers have zero.

The law however can not determine how the politician sets tender conditions.Suppose that the politician can set conditions in any way, that is, he invites thecompanies. This opens a window of rent-seeking. Specifically, suppose eachcompany promises bi ≥ 0 to the politician. The politician then decides on

47

Page 48: Workbook

invitations. Those who are invited then have to pay the promise bi. Consider 4options how to organize rent-seeking:

• Fixed entry fee, B: A company has to promise at least bi ≥ B to beinvited.

• Winner-take-all contest: Only the company with max bi is invited.

• Pairwise contest: Only the company with min bi is not invited.

• No rent-seeking: All companies are invited.

The politician values both saved public money (v−p∗) as well as rent-seekingcontributions (

i bi). Let the relative weight be w : 1, π = w(v − p) +∑

i bi.

a) Find equilibrium prices in tender for any subset of participants (i.e., asingle bidder, two bidders, or three bidders). Solve all questions usingonly pure strategies.

b) Find total rent seeking contributions in equilibrium,∑

i bi, for all positivefixed entry fees, B > 0.

c) Which of the fixed entry fees does the politician prefer?

d) Find equilibrium for a winner-take-all contest, pairwise contest, and norent-seeking.

e) Among options 2–4, when does the politician prefer a winner-take-all con-test? When a pairwise contest? When no rent seeking?

f) Which of the four options is preferred by the politician?

SOLUTION Q1. Tender prices The non-empty subsets of participants areas follows:

• Single bidder: The politician can only reject price p ≥ v, so p∗ = v.2

• Two bidders with costs cj < ck: Any offer price of a bidder i ∈ j, k hasto recover costs, pi ≥ ci. The bidder j with the lower minimal offer pricebeats the bidder k by setting any cj < pj ≤ ck (then, bidder k has noincentive to bid less and win, because it implies negative profits; bidder jhas positive profit pj − cj > 0). Thus, p∗ = ck.

• Three bidders: Like above, bidder 1 beats bidder 2 (and also bidder 3) bysetting c1 < p1 ≤ c2 < c3. Thus, p∗ = c2.

2The politician is exactly indifferent between accepting offer and rejecting it for p∗ = v.We would need to set p∗ = v − ε, where ε > 0 is infinitesimally small, to make him strictly

reject the offer. To avoid the nuisance of introducing these negligible variables, we normallyuse assumption that if individuals are indifferent between two actions, they play the actionthat we (the modelers) need.

48

Page 49: Workbook

Q2. Equilibria The company is either invited or not. So, if a companydecides not to be invited, it is optimal to play bi = 0. If it wants to be invited,it is optimal to play bi = B, not more. We can now construct a three-playersimultaneous game.

Table 16: Tender participants if Company 3 doesn’t pay the fee

b3 = 0 b2 = 0 b2 = B

b1 = 0 ∅ 2b1 = B 1 1, 2

Table 17: Tender participants if Company 3 pays the fee

b3 = B b2 = 0 b2 = B

b1 = 0 3 2, 3b1 = B 1, 3 1, 2, 3

Now, we construct payoff tables. We use the tender prices derived in Q1.

Table 18: Payoffs if Company 3 doesn’t pay the fee

b3 = 0 b2 = 0 b2 = B

b1 = 0 0, 0, 0 0, v − c2 − B, 0b1 = B v − c1 − B, 0, 0 c2 − c1 − B,−B, 0

Table 19: Payoffs if Company 3 pays the fee

b3 = B b2 = 0 b2 = B

b1 = 0 0, 0, v − c3 − B 0, c3 − c2 − B,−Bb1 = B c3 − c2 − B, 0,−B c2 − c1 − B,−B,−B

It is easy to see that Company 3 pays the fee only if B < v − c3. The onlysubset of tender players involving Company three is 3. From Table 18, wesee that 1, 2 is not an equilibrium. So, the only suspected equilibria are asfollows:

1. No bidder (∅): This requires B > v − c1. Total rent-seeking contributionsare zero.

2. Single player 1: By checking best responses, we see that this requires justB ≤ v − c1. Total contributions is B (only Company 1 contributes).

49

Page 50: Workbook

3. Single player 2: Again check best responses, and see that this requires c2−c1 < B ≤ v − c2. Total contributions is B (only Company 2 contributes).

4. Single player 3: Again check best responses, and see that this requires c3−c1 < B ≤ v − c3. Total contributions is B (only Company 3 contributes).

Q3. Entry fees As long as B ≤ v − c1, the politician can expect totalcontributions B, irrespective of which equilibrium is selected. A single tenderbidder then submits p∗ = v, and total politician’s payoff is:

π = w(v − p∗) +∑

i

bi = B.

Entry fee maximizing politician’s payoff is B = v − c1, and π = v − c1.

Q4. Other options If all three are allowed to enter, it is clear that a uniqueequilibrium is not to pay anything, b1 = b2 = b3 = 0. The politician’s payoff is

π = w(v − p∗) +∑

i

bi = w(v − c2).

In pairwise contest, denote the company that remains out as loser. Now, canCompany 1 be a loser? If so, then winner in tender is Company 2, and payoff ofCompany 2 must be positive c2 − c3 − b2 > 0, or b2 < c2 − c3. Company 1 is notwilling to get into tender only if the payment b2 is prohibitively high. Since itsexpected rent is c2− c1, this requirement writes c2− c1− b2 < 0, or b2 > c2− c1.However, this together implies impossibility, b2 < c2 − c3 < c2 − c1 < b2.

Company 1 therefore must be in tender. If Company 1 is expected to bein tender, the expected rent of any of the other companies is zero. Hence,Companies 2 and 3 set b2 = b3 = 0 and their chance of getting into tender splitsin half. Under such bids, Company sets b1 = ε > 0 just to be sure to be intender. Notice that neither of Companies 2 or 3 has an incentive to outbid theother and improve a chance to be in tender, because this would imply a net loss.

With equal probability of Companies 2 and 3 to be in tender, the politician’sexpected payoff is precisely

π = w(v − p∗) +∑

i

bi.= w

[

v − 1

2(c2 + c3)

]

< w(v − c2).

Winner-take-all option is not difficult either. The winner’s rent is v − ci

(the winner will be a single bidder and offers price p∗ = v). Thus, Company 1expects the relatively highest rent (conditional on victory). So, if Company 1pays the politician b1 = v − c2 − ε

.= v − c2, none of Companies 2 and 3 will bid

above that to capture exclusive entry. To sustain bid b1 = v − c2, notice thatwe need b2 = v−c2, but this is only promised by Company 2, not actually paid.The politician’s expected payoff is

π = w(v − p∗) +∑

i

bi = v − c2.

50

Page 51: Workbook

Q5. Which of the contests? Clearly, pairwise contest is always worsethan no-rent seeking. The problem for the politician is that elimination of acompetitor is not valuable for the other competitors, so they do not contributeanything as a rent-seeking expenditure. In contrary, the elimination makes thecompetition less intensive, because full entry implies that the winner has to beatthe second-best alternative.

To compare winner-take-all rent-seeking with no rent-seeking, it only de-pends on w. If w < 1 (private contributions weight much more than savings inthe budget), then winner-take-all rent seeking contest is preferred as it pushesthe strongest company to outbid the competitors by valuable private contribu-tions to the politician. Tender is then virtually meaningless. If w > 1 (savingsin budget matter a lot), it is better to make the company pay high price offi-cially in tender.

Q6. Preferred options The non-dominated options give v−c1 (entry fee),w(v−c2) (no rent seeking), and v−c2 (winner-take-all contest). If w < 1, entryfee is preferred. If w > 1, it depends:

• Moderate w, 1 < w < v−c1

v−c2

: entry fee is still preferred

• Large w, w > v−c1

v−c2

: no rent seeking

In other words, either you have full competition, or you try to restrict com-petition just to a single bidder. If it pays off to restrict it to the single bidder,then fixed entry fee is more valuable. Fixed entry fee targets the bidder withthe highest valuation, and tries to share his profits without introducing simul-taneous rent-seeking competition with the other companies. This demonstrateswhy rent-seeking is difficult to detect if the politician so strong that his entryfee is taken as non-negotiable.

8.2 Rent-seeking in Gambit

Tie is important if the strategy set is discrete. We may eliminate ties in thefollowing way: Suppose 2 players, rent R = 6 and strategy sets S1 = 0, 2, 4, 6,S2 = 1, 3, 5, 7.

1. Select any 1 < v < ∞, construct a strategic game in Gambit and find allNash equilibria. Discuss (especially equilibrium payoffs).

2. Select v = ∞, construct a strategic game in Gambit and find all Nashequilibria. Discuss and compare to the previous case.

As output, I prefer PDF-converted print-outs.

51

Page 52: Workbook

SOLUTION Consider v = 2, hence πi =x2

i

x2

i +x2

−iR−xi. We construct a payoff

matrix (π1, π2) (e.g., in MS Excel3) and enter into Gambit (see the followingtable).

The unique equilibrium strategy profile consists of mixed strategy (3196 , 65

96 , 0, 0)for Player 1 and mixed strategy ( 5

96 , 9196 , 0, 0) for Player 2. This gives Player 1

expected payoff zero, and Player 2 expected payoff 74 .

In the continuous case, we would get a unique symmetric equilibrium x1 =x2 = R

2 , where the rent is completely dissipated, and surplus for players is zero.This is obtained from the following FOC and symmetry x1 = x2:

dπ1

dx1=

2x1x22

(x21 + x2

2)2R − 1 = 0

In our case, however, Player 2 can play x2 = 3 = R2 , but Player 1 not. What

we observe is that Player 1 tends to play lower offers (in 2/3 cases playingx1 = 2, and in 1/3 cases x1 = 0), and Player 2’s best response are lower offersas well. The fact that each player’s expected offer is less than the theoreticaloffer ( 65

96 ·2 < 3, 596 ·1+ 91

96 ·3 < 3) explains why surplus is positive. Interestingly,however, the surplus is fully captured by Player 2 whose strategy set is less‘constrained’, meaning that the available actions are closer to the theoretical(continuous-type) equilibrium best responses.

Consider now v = ∞. It predicts full dissipation through mixed strategiesimposing equal probability on each actions. The payoff matrix is as follows:

Gambit computes two equilibria. In each, Player 2’s mixed strategy is( 13 , 1

3 , 13 , 0), hence its expected offer is 3.

• Offensive play. Player 1 may play (0, 13 , 1

3 , 13 ), hence its expected offer

is 4. There are 9 events, each occurring with probability 19 . Player 1

wins in 6 events, and loses in 3 events. Hence, its expected payoff is zero( 69 · 6 − 4 = 0). Player 2 wins in 3 events.

3In Excel, it is convenient to compute nominators and denominators in payoffs of Player 1and 2 separately so that we can enter rational numbers into Gambit.

52

Page 53: Workbook

• Defensive play. Player 1 may play (13 , 1

3 , 13 , 0), hence its expected offer

is 2. There are 9 events, each occurring with probability 19 . Player 1

wins in 3 events, and loses in 6 events. Hence, its expected payoff is zero( 39 · 6 − 2 = 0). Player 2 wins in 6 events.

We may conclude: (i) Player 2’s advantage in previous case now may turninto a disadvantage. (ii) We may obtain both underdissipation (positive surplus)but also overdissipation (negative surplus). Player 2 cannot protect itself fromnegative surplus since his minimum offer cannot be zero, but one.

8.3 Strategic leadership in Gambit

We have an extensive game of rent-seeking with R = 4, v = 1, where Player 1plays x1 ∈ S = 0, 1, 2, then Player 2 observes his choice, and plays x2 ∈ S.

1. Construct an extensive game and find all Nash equilibra. Discuss.

2. Suppose now that Player 1, observing Player 2’s action, can add up x′1 ∈

S. Construct an extensive game and find all Nash equilibra. Createan equivalent strategic game and show the strategy sets of both players.Discuss and compare to the previous case.

SOLUTION AND DISCUSSION. Part 1 We prepare payoff matrix,where we assume that for (x1, x2) = (0, 0), rent is not allocated. (Recall ourdiscussion from the lecture.) The game tree is as follows:

Our theoretical prediction is based on subgame-perfection (and backwardinduction). In any subgame initiated by Player 1’s action, Player 2 responds byx2 = 1. Player 1 anticipating the best response selects x1 = 1, and equilibriumpayoffs are 1 each. This is equivalent to a simultaneous game with a continuousset of actions, which we solved in the lecture. Hence, leadership yields nostrategic advantage.

In simulation, Gambit computes 45 equilibria. Equilibria 1–21 are notsubgame-perfect. Why? By subgame-perfection, Player 2 on the equilibriumpath responds to x1 = 1 by playing x2 = 1. Here, Player 2 instead mixes ac-tions 0, 1, 2, and this makes Player 2 to x1 = 2. These equilibria are not veryappealing, since they involve threats that are not realized on the equilibriumpath, and remain off equilibrium path (even if they constitute Nash equilibria).Payoffs are always ( 2

3 , 13 ).

Equilibria 22–45 are all subgame-perfect. Why? Player 1 plays x1 = 1 andthen, in infoset defined by observing Player 1 to play x1 = 1, Player 2 playsx2 = 1. The only difference between these equilibria are in nodes that are offequilibrium path. Payoffs are always (1, 1), exactly as given in theory.

Part 2 We now add possibility of Player 1 to incrementally increase its offer.The game tree now looks as in the following figure. As usually, we solve firstanalytically subgame-perfect equilibria (SPNE). By backward induction, Player

53

Page 54: Workbook

1 in last Stage plays x′1 = 1 in his/her infosets 2–4 and x′

1 = 0 in infosets 5–10.(In other words, Player 1 exploits his option of incrementally increasing offeronly if he starts with zero offer, x1 = 0.) Anticipating this, Player 2 plays alwaysx2 = 1 regardless of x1. Finally, Player 1 anticipating x2 = 1 and his/her bestresponses in his/her infosets 2–10, mixes in infoset 1 actions x1 = 0 and x1 = 1.The expected payoff as well as payoff in each event are (1, 1), exactly as in abenchmark simultaneous model.

If we allow Gambit to compute equilibria, it runs into trouble if we demandall equilibria. The reason is computational complexity, as we shall see in detailbelow. Instead, we may ask for a single equilibrium. Then, Gambit computesSPNE where actions x1 = 0 and x1 = 1 are mixed with probability 1

2 each. Thisis reflected in the following figure.

We may also for as many equilibria as possible, and we obtain many non-subgame-perfect equilibria. I have got 19 non-SPNE, with expected payoffs( 13 , 2

3 ), (23 , 1

3 ), (1, 1) and (2, 0). Alternatively, one could use quantal-responseequilibria, which is a solution allowing for systematic noise in actions of players,and this converges to the SPNE equilibrium with uniform mix.

The strategic form reveals complexity of even so structurally relatively sim-ple problems: Pure strategy set of Player 1 has 310 elements (3 actions per 10infosets), Player 2’s pure strategy set comprises 33 elements. The set of pure

54

Page 55: Workbook

strategy profiles thus contains 313 = 1594 323 elements that have to be exam-ined. Even worse, in mix-strategy terms, recall that a probability distributionover three actions must be element in two-dimensional simplex, to be denotedP . Then, a Player 1’s mixed strategy x1 defines probabilities over actions ineach infoset, hence it is a 10 × 2-matrix, x1 ∈ P 10:

x1 =

p10 p1

1

p20 p2

1...

...p100 p10

1

For Player 2, a mixed strategy is a 3 × 2-matrix, x2 ∈ P 3. The set ofmixed-strategy profiles is thus P 10 × P 3 = P 13, computationally equivalent toa 26-dimensional simplex.

To conclude: The introduction of strategic leadership has changed the gameonly if we think of non-subgame-perfect equilibria. Then, however, there aremultiple equilibria based on non-realized threats. If we come back to subgame-perfection, neither the strategic leadership itself nor the leader’s possibility tounilaterally increase the initial offer affects the equilibrium payoff. We may thuscall our textbook model relatively robust to modifications in terms of leadership.

8.4 Credit constraint

Consider contest over rent R, where Player 1 can invest any amount x1 ≥ 0,but Player 2 can invest only x2 ∈ [0, z], where 0 < z < R (possibly given hislow initial income and credit constraint).

1. For the winner-take-all rent-seeking contest, derive equilibria for all pos-sible realizations of z.

2. For the proportional contest, derive equilibria for all possible realizationsof z.

SOLUTION Winner-take-all Any action of Player 1 where x1 > z bringsvictory, hence payoff R−x1. Thus, there is (in limit) a reservation payoff R− zthat must be provided by any alternative action. Similarly for Player 2, anyaction must provide at least a reservation payoff 0. Consider equilibria whereall feasible actions are in the support of players. Then, the conditions statedabove require:

F2(x1)R − x1 = R − z

F1(x2)R − x2 = 0

Rewriting, for x ∈ [0, z),

F1(x) =x

R,F ′

1(x) =1

R,

F2(x) =x

R+ 1 − z

R, F ′

2(x) =1

R.

55

Page 56: Workbook

For the sake of completeness, F1(x) = F2(x) = 1 if x ≥ z.

Proportional contest As we know from the lecture, best responses are x∗i (x−i) =√

x−i(√

R−√x−i). These are increasing in x−i up to x−i = R

4 , and decreasing

since then. The intersection is for x1 = x2 = R4 . In our case, the only difference

is that for Player 2, best response is x∗i = min√x−i(

√R − √

x−i), z. Thus,we have two possibilities:

• z ≥ R4 : The constraint doesn’t apply, and in equilibrium (x1, x2) = (R

4 , R4 ).

• z < R4 : The constraint applies, and equilibrium is characterized by (x1, x2) =

(√

z(√

R −√z), z).

8.5 Trade policy

Importers have brought m > 0 units of a good from China into a Europeancountry, and now want to sell it. Competitive domestic producers of the gooddecide on supply s > 0 of the good, where their cost of production is s2/2.Domestic demand for the good is D(p) = a − p, where p is domestic price.Assume a > m.

The politician decides either (i) not to intervene (free market), (ii) impose atax at value t on each unit sold from imports (tariff), or (iii) restrict the importsby licensing only q < m units (quota).

a) Find domestic production, total production, and market price under freemarket, when any tariff t ≥ 0 applies, and when quota system q ∈ [0,m]applies.

b) Derive rent of domestic producers under tariff and quota system.

c) Derive revenues (proceeds from sales) of importers under tariff and quotasystem.

d) Suppose tax beneficiaries and domestic producers create a coalition thatmaximizes the sum of their payoffs. Which system would they prefer?

e) Suppose tax beneficiaries and importers create a coalition that maximizesthe sum of their payoffs. Which system would they prefer?

f) Suppose domestic producers and importers create a coalition that maxi-mizes the sum of their payoffs. Which system would they prefer?

SOLUTION Answer a): The costs of imports are sunk costs for the im-porters. Thus, supply of imports from abroad is constant irrespective of price,Sm(p) = m. Domestic producers are competitive, so they supply such that theprice equals the marginal cost. The marginal cost is

∂s2/2

∂s= s,

56

Page 57: Workbook

where from equality with price s = p we get that s = Sd(p) = p.

1. Free trade: Market clears, D(p) = a − p = p + m = Sd(p) + Sm(p), i.e.,p∗ = (a − m)/2. Total amount is D∗ = D(p∗) = (a + m)/2. Domesticproduction is S∗

d = p∗ = (a − m)/2.

2. Tariff: Supply from imports is constant at m, if price is non-negative.Thus, the importers will respond to an effective price p− t by Sm(p) = mif t ≤ p, and by Sm(p) = 0 if t ≥ p. Thus, if t ≤ t∗, free market allocationpreserves.

3. Quota: Under a quota, the only difference from the free market is the lower supply from imports, Sm = q ≤ m. Market clears, D(p) = a − p = p + q = Sd(p) + Sm(p). The price is higher, p_q = (a − q)/2 ≥ p*, the total amount is lower, D_q = D(p_q) = (a + q)/2 ≤ D*, and domestic production is larger, S_d^q = p_q ≥ S*_d (strictly so whenever q < m).
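The closed-form expressions above can be verified with a small market-clearing sketch in Python (the values a = 10, m = 4 and q = 2 are purely illustrative assumptions):

# Market clearing with D(p) = a - p, domestic supply S_d(p) = p, and imports m (or quota q).
a, m, q = 10.0, 4.0, 2.0    # assumed parameter values

def clear(imports):
    """Price, total sales and domestic production when 'imports' units enter the market."""
    p = (a - imports) / 2            # from a - p = p + imports
    return p, a - p, p

p_free, D_free, Sd_free = clear(m)   # free trade (any tariff t <= p_free gives the same)
p_quota, D_quota, Sd_quota = clear(q)
print(p_free, D_free, Sd_free)       # 3.0, 7.0, 3.0  = (a-m)/2, (a+m)/2, (a-m)/2
print(p_quota, D_quota, Sd_quota)    # 4.0, 6.0, 4.0  = (a-q)/2, (a+q)/2, (a-q)/2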

Answer b): Under a tariff t ≤ p*, the domestic producers earn zero rent, because the equilibrium market price and their sales are identical to the free-market ones. Under a quota q < m, the rent is positive; it is the difference between profits in the two regimes.

1. Free market: Profits are p*·Sd − (Sd)^2/2 = (p*)^2/2.

2. Quota: Profits are p_q·Sd − (Sd)^2/2 = (p_q)^2/2.

The rent is therefore

R(q) = (1/2)[((a − q)/2)^2 − ((a − m)/2)^2] = (1/8)(m − q)(2a − m − q).

Answer c): Under the tariff, the importers receive (p* − t)m. Under the quota, they receive p_q·q = (a − q)q/2.
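Continuing with the same illustrative numbers as in the previous sketch, a short Python check of the rent formula and the importers' proceeds (the tariff level t = 2 ≤ p* is again an assumption made only for the example):

# Producers' rent from a quota q and importers' proceeds, same assumed a, m, q as above.
a, m, q, t = 10.0, 4.0, 2.0, 2.0      # t <= p* = 3, so the tariff does not distort

p_star, p_q = (a - m) / 2, (a - q) / 2
rent = p_q**2 / 2 - p_star**2 / 2             # profit difference between the two regimes
rent_formula = (m - q) * (2 * a - m - q) / 8  # closed form derived above
print(rent, rent_formula)                     # both 3.5

rev_tariff = (p_star - t) * m                 # importers' proceeds under the tariff
rev_quota = p_q * q                           # importers' proceeds under the quota
print(rev_tariff, rev_quota)                  # 4.0 and 8.0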

Answer d): As we know, a tariff t ≤ p* changes neither production nor the market price. Its only effect in our model is that part of the importers' profits is redistributed towards tax beneficiaries. Importers are not part of the coalition, so the optimal tariff is t = p*, with tariff revenue tm = (a − m)m/2.

For the quota system, it matters whether tax beneficiaries are also consumers. If they are not, tax beneficiaries are unaffected by the quota, and the quota is set to maximize the producers' rent,

q = arg max R(q) = arg max (m − q)(2a − m − q).

The rent falls in q (the slope is (2q − 2a)/8 < 0, because q ≤ m < a), hence the optimal quota is q = 0, with rent R(0) = m(2a − m)/8. Now compare the tariff revenue with this maximal rent:

(a − m)m/2 ≥ m(2a − m)/8  ⟺  m ≤ 2a/3.

Thus the prohibitive quota beats the non-distorting tariff only when imports are large, m > 2a/3; for m < 2a/3 the coalition prefers the tariff. The intuition is that the coalition does not care about the loss in consumer surplus, yet when imports are modest the free-market price (and hence the maximal non-distorting tariff) is high, so taxing the imports raises more than the modest rent created by banning them; when imports are large, the free-market price is low, the tariff raises little, and banning imports raises the price substantially and creates a large producers' rent.

What if consumers are at the same time tax beneficiaries? It is still clear that the optimal tariff is t = p* (money for nothing). How about the optimal quota? The marginal change in rent (for q < m) is

∂R(q)/∂q = (2q − 2a)/8 = (q − a)/4 < 0.

The marginal change in consumer surplus (by linearity of the demand curve, consumer surplus is simply C(p) = [D(p)]^2/2, and the quota price is p_q = (a − q)/2):

dC/dq = (dC/dp)(dp_q/dq) = −(a − p_q)·(−1/2) = (a − p_q)/2 = (a + q)/4 > 0.

The marginal change in the coalition's total payoff is therefore

d[C + R]/dq = (q − a)/4 + (a + q)/4 = q/2 ≥ 0.

The total payoff is thus non-decreasing in q, so it is maximized by the loosest quota, q = m (free trade). To see the magnitudes, compare the two endpoints. For convenience, denote for q = 0 the price p0 := p_q(0) = a/2 and production/consumption D0 := D_q(0) = a/2.

q = 0:  C(p0) + R(0) = a^2/8 + m(2a − m)/8 = (a^2 + 2am − m^2)/8
q = m:  C(p*) + R(m) = (a + m)^2/8 + 0 = (a^2 + 2am + m^2)/8

Since m > 0, the loose quota q = m dominates: the rent producers gain from restricting imports is more than offset by the consumer surplus lost. Is the tariff system better than this (unrestricted) quota system? Compare the total payoffs:

q = m:   C(p*) + R(m) = (a^2 + 2am + m^2)/8
t = p*:  C(p*) + p*m = (a + m)^2/8 + (a − m)m/2 = (a^2 + 6am − 3m^2)/8

The difference is 4m(a − m)/8 > 0 by a > m, so the tariff system is better. Thus it does matter whether tax beneficiaries are also consumers: a full restriction on imports is imposed only when the tax beneficiaries do not consume the good and imports are large (m > 2a/3); when they do consume it, the coalition leaves trade unrestricted and simply collects the tariff t = p*.
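The threshold m = 2a/3 in the first comparison can be illustrated with a short Python sketch (a = 10 and the two import levels are assumptions made only for the example):

# Coalition of tax beneficiaries and producers: tariff t = p* versus prohibitive quota q = 0.
a = 10.0

def tariff_gain(m):   # tariff revenue (producers' rent is zero under the tariff)
    return (a - m) * m / 2

def quota_gain(m):    # producers' rent R(0) = m(2a - m)/8 (beneficiaries get nothing)
    return m * (2 * a - m) / 8

for m in (4.0, 8.0):  # one value below and one above 2a/3 = 6.67
    print(m, tariff_gain(m), quota_gain(m))
# m = 4: tariff 12.0 > quota  8.0  -> tariff preferred
# m = 8: tariff  8.0 < quota 12.0  -> prohibitive quota preferred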

Answer e): The tariff does not distort the market; it only redistributes from importers to tax beneficiaries. Since both are in the coalition, the coalition is indifferent between any tariff and free trade. A quota reduces consumption and raises the price, which lowers consumer surplus C(p) and changes importers' proceeds to p_q·q. If at least some tax beneficiaries are also consumers, the coalition's payoff C(p_q) + p_q·q = (a + q)^2/8 + (a − q)q/2 = (a^2 + 6aq − 3q^2)/8 is increasing in q on [0, m], so the optimal quota is the non-restrictive (free-trade) quota q = m. As a result, this coalition has no incentive to modify the free-market regime.

Answer f): Tariff revenues go into the hands of tax beneficiaries, who are not in this coalition, hence the optimal tariff is zero and the free-trade allocation is preserved. For the quota, maximize the total payoff of domestic producers and importers:

q = arg max (p_q)^2/2 + p_q·q = arg max (1/8)(a − q)(a + 3q).

Expanding, (a − q)(a + 3q)/8 = (a^2 + 2aq − 3q^2)/8, which is concave in q with an unconstrained maximum at q = a/3. Hence, if imports are small, m ≤ a/3, the objective is still increasing at q = m and the coalition leaves trade unrestricted (q = m). If imports are large, m > a/3, the coalition restricts imports to q = a/3 < m. The intuition is that importers agree to a partial restriction of imports, which creates artificial scarcity and raises the price, because within the coalition they can be compensated for the forgone sales.

Notice that a binding quota effectively means that the unlicensed units m − q must be kept off the market, i.e., effectively destroyed. This is what we often observe with imports that involve faked brands.
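A quick grid search in Python (with the illustrative value a = 10 and two assumed import levels, one below and one above a/3) confirms where the joint payoff (a − q)(a + 3q)/8 peaks:

# Joint payoff of domestic producers and importers under a quota q <= m.
a = 10.0

def joint(q):
    p = (a - q) / 2
    return p * p / 2 + p * q        # producer profit + importers' proceeds

for m in (2.0, 8.0):                # m < a/3 and m > a/3
    grid = [i * m / 1000 for i in range(1001)]
    q_best = max(grid, key=joint)
    print(m, round(q_best, 2))      # ~2.0 (no restriction) and ~3.33 (= a/3)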

9 Reform

9.1 Enlargement and costly compliance

Recall the enlargement game where, in Stage 1, two acceding countries simultaneously decide on compliance, and in Stage 2, the club decides whom to admit. For an acceding country, the accession benefit is Y and the compliance cost is 0 < x < Y. For the club, entry benefits are 0 < b < B (b for one acceding country, B for two acceding countries), which diminish by 0 < c < C in the case of non-compliance (c for one non-compliant country, C for two non-compliant countries), where B − c < b < B − C + c.

1. Suppose there is an extra Stage 0, where the club pre-commits to an accession rule. The rule specifies which country (or countries) will be admitted for all possible combinations of the countries' decisions on compliance. Derive all plausible rules. Which rule should the club choose?

2. Suppose there is an extra Stage 0, where the club pre-commits to Country 1 that it will compensate its compliance cost x. What is the equilibrium (or equilibria)? Be careful. (Hint: Think of indifference.)

3. What happens if the club is not only willing to cover the compliance cost, but also provides an extra bonus for compliance, so that the total compensation for Country 1 (if it complies) is x̃ > x?


4. Suppose that the club needs both countries to comply with certainty, but raising money for compensations is costly. Does the club compensate a single country or both countries?

SOLUTION Recall the EU’s payoffs:

Compliance / Entry    None    One      Two
None complies         0       b − c    B − C
One complies          0       b        B − c
Both comply           0       b        B

Answer a) It is okay to solve this in pure strategies. The rule is a vector function over compliance actions (c1, c2), f(c1, c2) : {0, 1} × {0, 1} → {0, 1} × {0, 1}. The club wants to obtain the maximal payoff B with certainty. This means that compliance of both, (c1, c2) = (1, 1), must be the unique Nash equilibrium of the two acceding countries.

To achieve this, make compliance a dominant strategy:

• If Country 1 does not comply and Country 2 complies, Country 1 should be better off by complying. This requires that Country 1 is not admitted if it does not comply but is admitted if it complies; then deviating to compliance pays because 0 < Y − x. (One can check that every other admission pattern fails to provide this incentive.)

• If neither country complies, each country should be better off by complying. Again, this requires that a country is not admitted if it does not comply but is admitted if it complies, so that the deviation yields 0 < Y − x.

Thus, the optimal club rule is f(c1, c2) = (c1, c2): admit exactly those countries that comply.
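As a minimal check, the following Python sketch (illustrative values Y = 3 and x = 1, satisfying 0 < x < Y) verifies that, under the rule f(c1, c2) = (c1, c2), complying is strictly better for a country regardless of what the other country does:

# Enlargement game, Stage 1, under the admission rule f(c1, c2) = (c1, c2).
Y, x = 3.0, 1.0   # assumed values with 0 < x < Y

def payoff(ci, cj):
    """Country i's payoff: admitted iff it complies; compliance costs x (cj is irrelevant under this rule)."""
    admitted = ci
    return admitted * Y - ci * x

for cj in (0, 1):                              # whatever the other country does...
    print(cj, payoff(1, cj) > payoff(0, cj))   # ...complying is strictly better (True)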

Answer b) The payoff matrix of the non-cooperative game between the two countries looks as follows:

AC1 \ AC2         Comply      Don't comply
Comply            Y, Y − x    Y, 0
Don't comply      0, Y − x    Y, Y

Thus, in terms of equilibria nothing changes in comparison to the original game. Although Country 1 strictly prefers compliance if Country 2 complies, it is indifferent between compliance and non-compliance if Country 2 does not comply; hence the profile in which no country complies (and Country 1 receives no compensation) is also an equilibrium.

Answer c) Unlike the previous case, compliance is now a dominant strategy for Country 1, because Y − x + x̃ > 0 and Y − x + x̃ > Y. Thus, the only equilibrium is that both countries comply, and payoffs are (Y − x + x̃, Y − x).
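The role of the compensation level can be checked by enumerating the pure-strategy Nash equilibria of the Stage-1 game in Python (the values Y = 3, x = 1 and the bonus level 1.2 > x are illustrative assumptions; the admission behaviour encoded below reproduces the payoff matrix above):

# Stage-1 game when the club compensates Country 1 by 'comp' if it complies.
Y, x = 3.0, 1.0

def payoffs(c1, c2, comp):
    # Admission: compliant countries are admitted; if neither complies, both are admitted.
    a1 = 1 if (c1 or not c2) else 0
    a2 = 1 if (c2 or not c1) else 0
    u1 = a1 * Y - c1 * x + c1 * comp
    u2 = a2 * Y - c2 * x
    return u1, u2

def nash(comp):
    eq = []
    for c1 in (0, 1):
        for c2 in (0, 1):
            u1, u2 = payoffs(c1, c2, comp)
            if u1 >= payoffs(1 - c1, c2, comp)[0] and u2 >= payoffs(c1, 1 - c2, comp)[1]:
                eq.append((c1, c2))
    return eq

print(nash(1.0))   # exact compensation x:  [(0, 0), (1, 1)] -- non-compliance survives
print(nash(1.2))   # bonus above x:         [(1, 1)] only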


Answer d) From the previous answers, we can see that compensating a country by x or less does not guarantee compliance, whereas paying slightly above x to a single country is sufficient to induce full compliance. As a consequence, only one country is compensated: the strict incentive to comply for one country makes the other country comply as well.
