
The Incentive Effects of Interim Performance Evaluations

Alessandro Lizzeri, Margaret Meyer, and Nicola Persico∗

Abstract

We study a dynamic moral hazard model where the agent does not fully observe his performance. We consider the incentive effects of providing feedback to the agent: revealing to the agent how well he is doing.

We show that, if the incentive scheme is exogenously given, there is a wide range of cases where the agent works harder if feedback is provided. However, the agent earns more money in this scenario.

We then characterize the optimal incentive schemes in the two scenarios and we show that the principal is better off if feedback is not provided; the expected cost of inducing any given level of expected effort is lower in the no-revelation scenario.

Incomplete Draft

∗Department of Economics, New York University, 269 Mercer Street, New York, NY 10003; e.mail: [email protected]; web: http://www.econ.nyu.edu/user/lizzeria/.


USC FBE APPLIED ECONOMICS WORKSHOP
presented by Alessandro Lizzeri
FRIDAY, Nov. 7, 2003
1:30 - 3:00 pm; Room: HOH-601K

NOTE: Dr. Lizzeri will be presenting a more recent version of this material. However, no new write-up is available at this time.

1 Introduction

How do organizations decide whether, and to what extent, to provide feedback to individuals on how their performance to date has been evaluated?

To examine this question, we study a dynamic principal-agent model in which the agent privately observes how hard he works but cannot fully observe the consequences of his efforts, i.e. his performance. In this setting the principal has a choice of whether to conduct an interim performance evaluation (IPE) and whether to reveal the outcome of this IPE to the agent.

To the extent that providing feedback on performance helps individuals do their jobs better, or plan their futures better, it is beneficial. But what effects does performance feedback have on incentives and motivation? What effects does it have on sorting within the hierarchy, i.e., ensuring that the more able individuals advance faster than the less able?

There are several organizational settings in which choices about interim performance evaluations are potentially important. In professional service firms such as law firms and consulting firms, junior professionals know that a substantial part of their reward takes the form of a potential promotion to partner. Long before the promotion decisions are made, some information about their prospects is typically revealed to these employees. Sometimes this information is revealed through a formal process of periodic evaluations by the partners; sometimes the information revelation process is more informal. In schools and universities, teachers and professors face choices about whether to assign grades based entirely on performance on final exams or whether, and to what extent, to use a more continuous form of assessment. A midterm exam is a particularly stark form of an IPE; if given a voice, most university students appear to favor having such an exam.

There are also other types of environments where it is possible to alter the amount of interim information individuals receive. In patent races, each firm's behavior will be affected by the interim information it receives on the progress made by rivals; how would a social planner concerned with innovation and welfare wish to influence how much firms learn about each other's progress? In auctions, the choice of rules can affect how much interim information contestants receive: an ascending auction provides more information to bidders about their rivals than does a sealed-bid auction. In multi-stage sports competitions, organizers could in principle influence how much players learned about others' performances. Finally, in electoral contests, candidates' choices about campaign spending and about platforms can be affected by the frequency and scope of pre-election polling.

Surprisingly, the economics literature has devoted very little attention to the effects of interim performance evaluations on behavior. There is extensive discussion of IPE's in the human resource management literature, but there is no consensus and little formal analysis.

The most obvious aspect of behavior that will be affected by IPE's is the motivation to exert effort following the interim evaluation. In addition, IPE's can affect incentives to exert effort prior to the IPE. Furthermore, the structure of the optimal incentive scheme will depend on whether or not IPE's are provided. Finally, when individuals have heterogeneous abilities, IPE's will affect how well incentive schemes sort agents according to ability, as well as how well effort choices are tailored to ability.

Our formal model consists of two periods. In each period, the agent exerts a privately-observed level of effort that determines the probability of success. The agent does not observe whether the outcome is a success or a failure, but the principal does observe the outcome. The agent is assumed to be risk neutral but protected by limited liability. We contrast two informational scenarios. In the first, the 'no-revelation' scenario, when choosing second-period effort the agent does not know the first-period outcome. In the second, the 'revelation' scenario, the principal is assumed to provide a truthful interim performance evaluation, so the agent learns the first-period outcome before choosing second-period effort. (In order to focus on the effects of IPE's on the agent's behavior, we suppress consideration of incentive problems on the principal's part.)

We begin our analysis by contrasting the agent's behavior in the two scenarios when the incentive scheme, consisting of rewards conditional on the possible outcomes, is exogenously given and the same in the two scenarios. In this case we show that in some circumstances total expected effort is higher when the first-period outcome is revealed to the agent before the second period. However, the agent earns more money in the revelation scenario because he can tailor his effort to exploit differences in marginal compensation. Thus, the expected cost to the principal is higher in the revelation scenario. As a consequence, even if revelation raises the agent's expected effort, we cannot conclude that the principal prefers to provide an interim evaluation.

Furthermore, if the principal were to choose the incentive schemes optimally, i.e. to minimize the expected cost of inducing a given level of expected effort, he would choose different schemes in the two scenarios. When interim evaluations are not provided, the optimal incentive scheme rewards the agent if and only if he succeeds twice. By contrast, when interim evaluations are given, it is optimal to offer a strictly positive reward for a single success, while offering an even greater marginal reward for a second success.

Armed with these characterizations, we then assess the desirability of providing interim performance evaluations when incentive schemes can be designed optimally. In this setting, we show that it is better not to reveal any information, i.e. the expected cost of inducing any given level of expected effort is lower in the no-revelation scenario.

Related literature

There is a large literature on dynamic agency problems, though very little attention has been paid to the effect of interim performance evaluations. A number of papers (e.g. Fudenberg, Holmstrom, and Milgrom (1990) and Chiappori et al (1994)) investigate the conditions under which the optimal contract in a long-term agency relationship can be implemented through a sequence of short-term (spot) contracts. In our setting, whether or not IPE's are provided, a sequence of spot contracts would be strictly worse for the principal than the optimal long-term contract; this is a consequence of the assumption that the agent has limited liability.

Rogerson (1985) and Meyer (1992) study two-period moral hazard models (with one and two agents, respectively) in which agents, though not protected by limited liability, are risk averse. They assume throughout that the first-period outcome is observed by the agents before the second period. Both papers focus on how, in the optimal long-term contract, the second-period incentive scheme depends on the first-period outcome. Rogerson shows the desirability of having wage payments display "memory", and Meyer shows that the principal benefits by biasing the second-period contest in favor of the first-period winner. Neither paper addresses the issue of whether the principal could induce efforts from the agents more cheaply by not providing interim performance evaluations.

Holmstrom and Milgrom (1987), along with the accounting literature on earnings management, study a dynamic agency setting in which the agent privately learns how he is performing and adjusts his subsequent efforts accordingly.¹ Holmstrom and Milgrom present conditions under which the optimal contract in such a setting is linear in cumulative performance. The assumption that the principal knows strictly less than the agent about the agent's performance, as well as about his effort, is very natural in some settings, such as when the principal is shareholders, the agent is a CEO, and performance is cash flows. In this paper, however, we are concerned with settings where the principal is better able than the agent to evaluate the (uncertain) consequences of the agent's efforts. There may be a significant subjective element in the evaluation of performance, as with clients' evaluations of professional services, or the agents may have little experience with the tasks they are performing, as when the agents are students learning new material and the principals are their teachers.

Malcomson (1984) and Kahn and Huberman (1988), among others, examine agency settings where principals are better informed than agents about agents' performance. They assume that it is intrinsically costless for principals to misreport agents' performance and study what incentive schemes are feasible in this case. In the professional services and educational contexts mentioned above, however, principals can, albeit at a cost, make performance evaluations verifiable. We assume that final performance is made verifiable, and our focus is on the effects of making interim performance verifiable as well.

There are a few papers which attempt to examine some of the consequences of interim performance evaluations. In Prendergast (1992), the firm privately observes workers' abilities after the first period and decides whether or not to use promotions to a fast track as a credible signal to high-ability workers. The cost of providing IPE's is that fast-track promotions occur inefficiently early, while the benefit is that workers are induced to tailor their training effort to their level of ability. Gibbs (1991) presents some discussion of how interim evaluations affect subsequent efforts, when an agent's cumulative performance must surpass a threshold in order for him to receive a reward. Lazear (1999) asks a similar question in the context of rank-order tournaments.

¹For recent overviews of the accounting literature on earnings management, see Arya, Glover, and Sunder (1998) and Lambert (2001).

Finally, our characterization of optimal incentive schemes with and without IPE's is related to work on the optimal design of prize structures in contests. Krishna and Morgan (1998) and Moldovanu and Sela (2001, 2002) examine when it is optimal to use a winner-take-all prize structure. We study how the desirability of rewarding only the very best cumulative performance is affected by whether or not an IPE is provided.

2 Model

There are two periods. In each period t, there are only two possible outcomes: success or failure. The outcome in period t is denoted by X_t ∈ {f, s}. The probability of a success in period t is equal to the effort e_t in that period: P(X_t = s) = e_t. Conditional on effort choices, outputs are independent across periods.

In each period the cost of effort e_t is denoted by c(e_t), where c is increasing, three times differentiable, and convex, with c(0) = c'(0) = 0.

The agent is assumed to be risk neutral.

An incentive scheme for the agent is characterized by transfers conditional on all possible outcomes: w(f, f), w(f, s), w(s, f), w(s, s). Given that the agent is risk neutral, the problem is uninteresting unless we assume that there is a limited liability constraint. We assume that w(x, y) ≥ 0 for x, y = f, s.

We will contrast two scenarios on the information that is available to the agent when he chooses effort. In the no-revelation scenario, which we call the N-scenario, when choosing effort in the second period, the agent does not observe the first-period outcome. In this scenario, the agent's payoff is:

U^N(e_1, e_2) = w(s,s)e_1e_2 + w(s,f)e_1(1−e_2) + w(f,s)(1−e_1)e_2 + w(f,f)(1−e_1)(1−e_2) − c(e_1) − c(e_2)

In the revelation scenario, which we call the Y-scenario, the agent learns the first-period outcome before choosing second-period effort. Thus, in this scenario, the agent can choose a different effort in the second period depending on the first-period outcome. Letting e_2(s) and e_2(f) denote the second-period effort following success and failure, respectively, we can write the agent's payoff in this scenario as:

U^Y(e_1, e_2(s), e_2(f)) = w(s,s)e_1e_2(s) + w(s,f)e_1(1−e_2(s)) + w(f,s)(1−e_1)e_2(f) + w(f,f)(1−e_1)(1−e_2(f)) − c(e_1) − e_1c(e_2(s)) − (1−e_1)c(e_2(f))
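As a purely illustrative sketch (not part of the paper's analysis), these two payoff functions can be written out in code. The snippet below assumes the quadratic cost c(e) = ce²/2 used in Section 3 and symmetric single-success rewards; both are assumptions of the illustration, not of the general model.

def U_N(e1, e2, w, c=4.0):
    """No-revelation payoff U^N(e1, e2); w = (w_ff, w_s, w_ss), with w(s,f) = w(f,s) = w_s."""
    w_ff, w_s, w_ss = w
    return (w_ss*e1*e2 + w_s*e1*(1 - e2) + w_s*(1 - e1)*e2
            + w_ff*(1 - e1)*(1 - e2) - c*e1**2/2 - c*e2**2/2)

def U_Y(e1, e2s, e2f, w, c=4.0):
    """Revelation payoff U^Y(e1, e2(s), e2(f)): effort after success (e2s) may differ from effort after failure (e2f)."""
    w_ff, w_s, w_ss = w
    return (w_ss*e1*e2s + w_s*e1*(1 - e2s) + w_s*(1 - e1)*e2f
            + w_ff*(1 - e1)*(1 - e2f)
            - c*e1**2/2 - e1*c*e2s**2/2 - (1 - e1)*c*e2f**2/2)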


3 The Effect of IPE’s When the Incentive Scheme is Fixed

3.1 Preliminaries

In this section, we discuss the effects of interim performance evaluations on effort choices when the incentive scheme is exogenously given and the same in both scenarios. For now, we assume that the cost of effort function is quadratic: c(e) = ce²/2. We also assume that the rewards only depend on the number of successes, i.e., w(f, s) = w(s, f) ≡ w(s), and that they are monotonically increasing in this number, so w(f, f) ≤ w(s) ≤ w(s, s).² Finally, to guarantee interior solutions we also assume that w(s, s) < c.³

3.2 The Effect of IPE’s on Subsequent Efforts

We begin by analyzing the effect of IPE's on expected effort in the second period, taking as given an exogenously fixed probability of success p in the first period. In the no-revelation scenario the agent chooses second-period effort e_2^N to maximize

p\left(w(s,s)e_2^N + w(s)(1-e_2^N)\right) + (1-p)\left(w(s)e_2^N + w(f,f)(1-e_2^N)\right) - \frac{c}{2}(e_2^N)^2,    (1)

so the optimal value is

e_2^N(p) = \frac{p(w(s,s)-w(s)) + (1-p)(w(s)-w(f,f))}{c}.    (2)

Consider now the revelation scenario. If the first-period outcome is a success, the agent chooses e_2(s) to maximize

w(s,s)e_2(s) + w(s)(1-e_2(s)) - \frac{c}{2}(e_2(s))^2,    (3)

so the optimal value is

e_2^Y(s) = \frac{w(s,s)-w(s)}{c}.    (4)

If the first-period outcome is a failure, the agent chooses e_2(f) to maximize

w(s)e_2(f) + w(f,f)(1-e_2(f)) - \frac{c}{2}(e_2(f))^2,    (5)

so the optimal value is

e_2^Y(f) = \frac{w(s)-w(f,f)}{c}.    (6)

We can now state the following benchmark result:

² In the next section, we relax the assumption that w(f, s) = w(s, f), and we show that the optimal incentive schemes in the two scenarios are monotonic.

³ See Section 3.4 for a discussion of the case where this assumption is relaxed.


Lemma 1 Fix the first period probability of success at the same level p in both scenarios. Then expected effort in the second period is the same in the two scenarios.

Proof: Expected second-period effort in the revelation scenario is p e_2^Y(s) + (1 − p) e_2^Y(f), which, using (4) and (6), equals e_2^N(p) as given by (2).

Lemma 1 says that, with exogenously given probability of first-period success and quadratic effort costs, interim performance evaluations have no effect on average second-period effort. An IPE does, of course, induce the agent to tailor his second-period effort to reflect the marginal return, but since (by assumption) the IPE has no effect on the first-period probability of success, it has no effect on the expected marginal return to second-period effort. Since effort costs are quadratic, it then follows that an IPE leaves expected second-period effort unchanged.⁴
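For instance (an illustrative numerical example, not taken from the paper), with w(f,f) = 0, w(s) = 1, w(s,s) = 3, c = 4, and p = 1/4, equations (2), (4), and (6) give e_2^N = 1.25/4 = 0.3125, e_2^Y(s) = 2/4 = 0.5, and e_2^Y(f) = 1/4 = 0.25, so that p·e_2^Y(s) + (1 − p)·e_2^Y(f) = 0.3125 = e_2^N.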

3.3 The Effect of IPE’s on Prior and Subsequent Efforts

Denote by u^Y(s) and u^Y(f) and u^N(s) and u^N(f) the continuation utilities associated with success and failure in the revelation and no-revelation scenarios, respectively. In either scenario i, the difference between the continuation utilities in the two states in the second period, u^i(s) − u^i(f), represents the marginal benefit from increasing effort in the first period. The following result says that, if the probability of success in the first period is no larger than 1/2, this marginal benefit is larger in the revelation scenario than in the no-revelation scenario. This is the key to the effort comparison across the two scenarios.

Lemma 2 Given an exogenously fixed probability of success p in the first period, suppose that the agent chooses effort optimally in the second period. Then u^Y(s) − u^Y(f) ≥ u^N(s) − u^N(f) if and only if p ≤ 1/2. Equality holds only if (w(s,s) − w(s)) = (w(s) − w(f,f)) or p = 1/2.

Proof: To obtain u^Y(s), substitute e_2^Y(s) from (4) into (3). To obtain u^Y(f), substitute e_2^Y(f) from (6) into (5). Subtracting the second expression from the first gives:

u^Y(s) - u^Y(f) = (w(s)-w(f,f)) + \frac{1}{2c}\left[(w(s,s)-w(s))^2 - (w(s)-w(f,f))^2\right]    (7)

To obtain the difference in utilities among the two states in the no-revelation scenario, substitute the optimal second-period effort (for a fixed p) into payoffs in the two states to obtain

u^N(s) - u^N(f) = (w(s)-w(f,f)) + \frac{1}{c}\left\{ p\,[w(s,s)-w(s)]^2 - (1-p)\,[w(s)-w(f,f)]^2 + (1-2p)\,[w(s,s)-w(s)][w(s)-w(f,f)] \right\}    (8)

⁴With exogenously fixed probability of success in period 1, IPE's decrease (increase) expected second-period effort if the marginal cost of effort is convex (concave).


Define x = w(s,s) − w(s) and y = w(s) − w(f,f). Then expression (7) is greater than (8) if and only if

px^2 - (1-p)y^2 + (1-2p)xy < \frac{x^2}{2} - \frac{y^2}{2}.

We can rewrite this inequality as

\left(p - \frac{1}{2}\right)x^2 + \left(p - \frac{1}{2}\right)y^2 - 2\left(p - \frac{1}{2}\right)xy < 0,

or

\left(p - \frac{1}{2}\right)(x - y)^2 < 0.

Hence the result.

The next proposition shows that first-period effort is higher in the revelation scenario.

Proposition 1 Suppose that the agent is facing an incentive scheme defined by (w(s,s), w(s), w(f,f)). Then

(i) First-period effort is higher if information is revealed: e_1^Y ≥ e_1^N, with strict inequality if the incentive scheme is nonlinear, i.e., w(s,s) − w(s) ≠ w(s) − w(f,f).

(ii) E(e_2^Y) > (<) e_2^N if and only if w(s,s) − w(s) > (<) w(s) − w(f,f), i.e., expected second-period effort is higher (lower) when information is revealed if the incentive scheme is convex (concave).

Proof: Part (i): In the revelation scenario, the agent's objective is to choose e_1 to maximize e_1 u^Y(s) + (1 − e_1)u^Y(f) − (c/2)e_1². The first-order condition is

u^Y(s) - u^Y(f) = c\,e_1^Y.    (9)

In the no-revelation scenario, given the optimal choice of e_2^N giving rise to u^N(s) and u^N(f), the agent chooses e_1 to maximize e_1 u^N(s) + (1 − e_1)u^N(f) − (c/2)e_1². The first-order condition is

u^N(s) - u^N(f) = c\,e_1^N.    (10)

Now observe that, since c > w(s,s) − w(f,f), then

e_1^N = e_2^N = \frac{w(s)-w(f,f)}{c + w(s)-w(f,f) - (w(s,s)-w(s))} < \frac{1}{2}.    (11)

Suppose then that the first-period probability of success is fixed at the optimal effort in the N-scenario, e_1^N. Lemma 2 guarantees that given this probability of success in the first period, the left-hand side of equation (9) exceeds the left-hand side of equation (10). Thus, first-period effort must be higher in the Y-scenario.

Part (ii): In order to compare second-period efforts, observe that at the optimum, from equation (2),

e_2^N = \frac{e_1^N(w(s,s)-w(s)) + (1-e_1^N)(w(s)-w(f,f))}{c}

and, from equations (4) and (6),

E(e_2^Y) = \frac{e_1^Y(w(s,s)-w(s)) + (1-e_1^Y)(w(s)-w(f,f))}{c}.

Since, by part (i), e_1^Y > e_1^N, we have that E(e_2^Y) > e_2^N if and only if (w(s,s) − w(s)) > (w(s) − w(f,f)).

Proposition 1 says that first-period effort is higher under revelation. However, part (ii) of the Proposition says that second-period effort could be either higher or lower in the revelation scenario than in the no-revelation scenario depending on the convexity of the incentive scheme.

We now want to complete the effort comparison by comparing total expected efforts over the two periods in the two scenarios. In order to make the comparison, we need the following Lemma.

Lemma 3 Given an exogenously fixed probability of success p in the first period, suppose that the agent chooses effort optimally in the second period. In the revelation scenario, expected optimal effort in the second period changes less than one for one with p.

Proof: We need to obtain expected second-period effort in the Y-scenario as a function of p. In order to do this, multiply the right-hand side of equation (4) by p, multiply the right-hand side of equation (6) by (1 − p), and add the resulting expressions to obtain

E(e_2^Y \mid p) = \frac{p(w(s,s)-w(s)) + (1-p)(w(s)-w(f,f))}{c}    (12)

Thus,

\left|\frac{dE(e_2^Y \mid p)}{dp}\right| = \left|\frac{(w(s,s)-w(s)) - (w(s)-w(f,f))}{c}\right| < 1.

The next proposition shows that expected total effort is higher in the revelation scenario and shows that first-period effort is higher than second-period effort in the Y-scenario.

Proposition 2 Suppose that the agent is facing an incentive scheme defined by (w(s,s), w(s), w(f,f)). Then

(i) Total expected effort is higher if information is revealed: e_1^Y + E(e_2^Y) ≥ e_1^N + e_2^N, with strict inequality as long as w(s,s) − w(s) ≠ w(s) − w(f,f).

(ii) In the revelation scenario, first-period effort is higher than second-period effort: e_1^Y ≥ E(e_2^Y), with strict inequality as long as w(s,s) − w(s) ≠ w(s) − w(f,f).


Proof: Part (i):

\begin{aligned} e_1^Y - e_1^N &\ge \left|E(e_2^Y \mid e_1^Y) - E(e_2^Y \mid e_1^N)\right| \\ &\ge E(e_2^Y \mid e_1^N) - E(e_2^Y \mid e_1^Y) \\ &= e_2^N - E(e_2^Y \mid e_1^Y), \end{aligned}

where the inequality in the first line comes from Lemma 3, and the third line comes from Lemma 1. Therefore, e_1^Y + E(e_2^Y) > e_1^N + e_2^N.

Part (ii): Observe first that e_1^N = e_2^N. Furthermore, by Lemma 1, if we evaluate the expectation of second-period effort in the revelation scenario according to the probability e_1^N, we obtain E(e_2^Y) = e_1^N. Finally, by Lemma 3, E(e_2^Y) increases less than one for one with increases in p. Thus, e_1^Y > E(e_2^Y).

Corollary 1 The expected cost of effort to the agent is higher in the Y scenario.

Proof: From the above Proposition we have

e_1^N \le \frac{e_1^Y + E(e_2^Y)}{2}.

Thus, using first monotonicity, and then convexity, of c, we can write

c(e_1^N) \le c\left(\frac{e_1^Y + E(e_2^Y)}{2}\right) \le \frac{1}{2}c(e_1^Y) + \frac{1}{2}c\left(E(e_2^Y)\right) \le \frac{1}{2}c(e_1^Y) + \frac{1}{2}E\left(c(e_2^Y)\right),

whence

2c(e_1^N) \le c(e_1^Y) + E\left(c(e_2^Y)\right).

For future reference, observe that the optimum first-period effort in the revelation scenario is

e_1^Y = \frac{2c\,(w(s)-w(f,f)) + (w(s,s)-w(f,f))(w(s,s)+w(f,f)-2w(s))}{2c^2} < \frac{1}{2}.    (13)
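As a quick numerical check of Propositions 1 and 2 (illustrative parameter values only, not taken from the paper), the closed forms (2), (11), (12), and (13) can be evaluated directly:

c = 4.0
w_ff, w_s, w_ss = 0.0, 1.0, 3.0
x, y = w_ss - w_s, w_s - w_ff                 # marginal reward for a 2nd vs. a 1st success

e1_N = y / (c + y - x)                        # equation (11): e1^N = e2^N
e1_Y = (2*c*y + (w_ss - w_ff)*(w_ss + w_ff - 2*w_s)) / (2*c**2)   # equation (13)
Ee2_Y = (e1_Y*x + (1 - e1_Y)*y) / c           # equation (12) evaluated at p = e1^Y

assert e1_Y > e1_N                            # Proposition 1(i)
assert e1_Y + Ee2_Y > e1_N + e1_N             # Proposition 2(i)
assert e1_Y > Ee2_Y                           # Proposition 2(ii)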


3.4 Discussion

Proposition 2 shows that there is a wide class of incentive schemes that lead the agent to exert more effort in the case where information is revealed to the agent. It would be tempting to conclude from this result that a principal who is interested in eliciting effort from the agent would choose to conduct interim performance evaluations and give all possible feedback to the agent. However, such a conclusion would be premature. In the revelation scenario, the expected wage bill is also higher. To see this, observe that the wage bill is equal to the expected utility of the agent plus the expected cost of effort. The expected utility of the agent is obviously higher in the revelation scenario and, by Corollary 1, the total cost of effort is also higher in the revelation scenario. The inequalities are strict if the incentive scheme is nonlinear. Thus, it is not clear whether the principal would prefer to conduct interim performance evaluations. Furthermore, if the principal chooses the compensation scheme as well as the revelation policy, we have to consider the possibility that the optimal scheme in the no-revelation scenario may be quite different from the optimal scheme in the revelation scenario.

One exception to this discussion is the case of a subprincipal (say a division manager) who has no control over the compensation scheme, but can choose whether to conduct interim performance evaluations. Our analysis comparing the incentive properties of exogenously fixed incentive schemes suggests that, if the division manager is rewarded on the basis of total output and not on the wage bill, he would choose to reveal information to employees.

We now comment on two assumptions that were made in the analysis of this section. The first is the assumption that w(s,s) < c. This assumption guaranteed interior solutions. If w(s,s) > c, then effort would be identical in the two scenarios: the agent would choose e_1^N = e_2^N = e_1^Y = e_2^Y(s) = 1. While the case of w(s,s) = c seems unimportant, it turns out to be part of the optimal scheme in the no-revelation case when costs are quadratic. However, when analyzing exogenously given incentive schemes, it seems appropriate to ignore this case because when w(s,s) = c, effort is indeterminate: the agent is indifferent between all effort levels.

The second is the assumption of quadratic effort cost. If effort costs are not quadratic, then the analysis is more complex. As mentioned in footnote 4, for an exogenously fixed probability of success in the first period, expected effort in the second period is lower (higher) in the revelation scenario depending on whether the marginal cost of effort is convex (concave). However, we could find no general conditions guaranteeing a ranking of total expected efforts in the two scenarios.

The next section will not assume quadratic costs.


4 Optimal Incentive Schemes

We now allow the principal to choose the incentive scheme optimally in the two scenarios, we compare the properties of the incentive schemes in the two scenarios, and we look at which scenario is preferred by the principal. We dispense with the assumption of a quadratic cost function and the assumption that the incentive scheme must depend only on the number of successes. It is clear that in the optimal incentive scheme, w(f,f) = 0. Thus, from now on we will focus only on the remaining three values of the compensation scheme.

4.1 The Optimal Incentive Scheme in the No-Revelation Scenario

Let us express the problem of an agent in the following way:

\max_{e_1,e_2} U(e_1,e_2) = \max_{e_1,e_2}\; \alpha e_1 + \beta e_2 + \gamma e_1 e_2 - c(e_1) - c(e_2).    (14)

The α, β, γ's are synthetic parameters which, in our problem, must be a function of w(s,f), w(f,s), and w(s,s). Specifically, α = w(s,f), β = w(f,s) and γ = w(s,s) − w(s,f) − w(f,s). The limited liability assumption implies that α and β are restricted to be nonnegative. We now discuss how the principal would choose a compensation scheme that depends on e_1, e_2, and e_1 · e_2, two linear terms and a mixed term. It will become clear how such a scheme can be implemented with the instruments available to the principal. We assume that c'(0) = 0, c'(1) = ∞. These conditions guarantee that the optimal e_1 and e_2 are interior.
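For example (an illustrative scheme, not one taken from the paper), the wage profile w(s,f) = w(f,s) = 1, w(s,s) = 3 corresponds to α = β = 1 and γ = 1, whereas the pure two-success scheme w(s,f) = w(f,s) = 0, w(s,s) = 3 corresponds to α = β = 0 and γ = 3.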

Let us first solve for the case in which the principal sets β = α, meaning that the reward for just one success is independent of whether the success happened in period 1 or 2 (we will soon show that this contract is indeed optimal for the principal). Provided that problem (14) is concave in e_1, e_2 (requiring c''' ≥ 0), the agent will choose the same effort in both periods. Denote this effort by e. Then problem (14) becomes

\max_e\; 2\alpha e + \gamma e^2 - 2c(e).

The first-order condition for the agent is

2\left[\alpha + \gamma e - c'(e)\right] = 0.

Denote the solution by e*. Integrating this expression over e between zero and e* yields the surplus that the principal must allow the agent in order to implement e* in the two periods. The per-period surplus enjoyed by the agent is depicted in Figure 1 as the area between the thin straight line originating at α, and the curve c'(e). Adding the cost of exerting effort, i.e., the integral under the curve c'(e), yields the expected per-period cost to the principal of implementing e*.


Figure 1:

Any line going through e* and with intercept greater than 0 corresponds to a contract that implements e*. It is clear from Figure 1 that the contract that minimizes the per-period cost to the principal is the contract in which α = 0. In this case the per-period cost is the area of the triangle (0, c'(e*), e*). The expected total cost of implementing (e*, e*) is double the area of that triangle, i.e., exactly the area of the rectangle (0, A, c'(e*), e*). Denoting by R(e) the area of the rectangle with base of e and height of c'(e),

R(e) \equiv \int_0^{e} c'(e)\, dy,

then the expected total cost to the principal of implementing e*, e* is simply R(e*). Now let us verify that the optimal contract indeed entails α = β. Suppose not, and without loss of generality suppose that it were optimal to set α < β. Then it must be that e_1^* < e_2^*. Write the agent's surplus as

U(e_1^*, e_2^*) = U(e_1^*, 0) + \int_0^{e_2^*} \frac{\partial U(e_1^*, y)}{\partial e_2}\, dy.

In light of the first-order condition and the fact that the compensation scheme is linear in e_2, we can rewrite the above equation as

U(e_1^*, e_2^*) = U(e_1^*, 0) + \int_0^{e_2^*} \left[c'(e_2^*) - c'(y)\right] dy.


Adding the agent's cost of effort yields the expense to the principal of implementing e_1^*, e_2^*. That equals

\alpha e_1^* + \int_0^{e_2^*} c'(e_2^*)\, dy = \alpha e_1^* + R(e_2^*).

Notice that for a lesser expense of just R(e_2^*) the principal can implement e_2^*, e_2^* (a strictly larger total effort) by setting α = β = 0 and appropriately adjusting γ. This shows that it is suboptimal for the principal to set α < β.

We collect these arguments in the following proposition.

Proposition 3 Assume c''' > 0. In the no-revelation scenario, the contract that implements a given total effort E at the lowest cost entails rewarding the agent only in the case of two successes. Thus, w(s,f) = w(f,s) = 0. The expected payment by the principal to the agent in the optimal contract implementing a total effort of E is R(E/2) = c'(E/2) · E/2.
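As an illustration (with an assumed cost function chosen for the example, not one used in the paper), let c(e) = e³, so that c''' = 6 > 0. To implement effort e in both periods while rewarding only two successes, the agent's first-order condition α + γe − c'(e) = 0 with α = 0 gives γe = c'(e) = 3e², so w(s,s) = γ = 3e, and the expected payment is γ · e² = 3e³ = e · c'(e) = R(e), which is exactly R(E/2) with E = 2e.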

To complete our characterization, we consider what happens if the principal wants to implement different efforts in the two periods, say because effort is more valuable to him in one period than the other. The following proposition characterizes the optimal incentive scheme that implements 0 < e_1 < e_2. The case 0 < e_2 < e_1 has the complementary properties.

Proposition 4 Assume c''' > 0. In the no-revelation scenario, the contract that implements an effort profile 0 < e_1 < e_2 at the lowest cost entails setting w(f,s) = 0 and w(s,s) > w(s,f) > 0.

4.2 The Optimal Incentive Scheme in the Revelation Scenario

In the revelation scenario, the agent is able to condition the choice of the second-period effort on the outcome of the first-period effort. The principal, similarly, can give different second-period incentives depending on whether the first-period effort resulted in success. We shall now describe some properties of the optimal incentive scheme in the revelation scenario.

Proposition 5 Suppose the principal wants to implement an effort of e_1 ∈ (0, 1) in the first period and an expected effort of E_2 > 0 in the second period. Then the compensation scheme that minimizes the principal's expected cost induces the agent to exert a strictly positive effort after a failure, but to work even harder after a success: e_2(s) > e_2(f) > 0.

Proof: Given an incentive scheme described by w(s,s), w(s,f) and w(f,s), the agent's second-period effort choices satisfy:

w(s,s) − w(s,f) = c'(e_2(s))  and  w(f,s) = c'(e_2(f)),


so the utility of the agent conditional on a success can be written as

u(s) = w(s,f) + c'(e_2(s))e_2(s) − c(e_2(s))    (15)

and conditional on failure

u(f) = c'(e_2(f))e_2(f) − c(e_2(f))    (16)

The agent's first-period effort satisfies u(s) − u(f) = c'(e_1). For any effort triple (e_1, e_2(s), e_2(f)) the principal chooses to induce, the necessary wage payments are determined by the equations above.

The principal's objective is to minimize expected wage payments subject to inducing a period-1 effort e_1 and period-2 expected effort E_2 = e_1e_2(s) + (1 − e_1)e_2(f). The principal's expected total cost is

TC = U + c(e_1) + E(c(e_2)),

where U = e_1u(s) + (1 − e_1)u(f) − c(e_1) and E(c(e_2)) = e_1c(e_2(s)) + (1 − e_1)c(e_2(f)). TC can be rewritten as

TC = e_1c'(e_1) + u(f) + E(c(e_2)) = e_1c'(e_1) + c'(e_2(f))e_2(f) + e_1(c(e_2(s)) − c(e_2(f)))    (17)

using the agent's first-order conditions above as well as expression (16). The principal chooses e_2(s) and e_2(f) to minimize TC subject to e_1e_2(s) + (1 − e_1)e_2(f) = E_2. Using this constraint to substitute for e_2(s) in TC and differentiating with respect to e_2(f) yields the first-order condition

c''(e_2(f))e_2(f) + (1 − e_1)(c'(e_2(f)) − c'(e_2(s))) = 0    (18)

This implies that e_2(s) > e_2(f) since c'' > 0. Moreover, since at e_2(f) = 0 the left-hand side of equation (18) is strictly negative, it follows that 0 < e_2(f) < e_2(s).

Proposition 5 shows that in the revelation scenario, the principal will choose to distort the effort of the agent in the second period away from what would be optimal in the absence of moral hazard. Specifically, the agent is induced to work harder after a success than after a failure despite the fact that this variation in effort per se raises the agent's expected effort costs. This distortion is optimal because it makes it cheaper for the principal to provide incentives for effort in the first period: The marginal reward to e_1 is u(s) − u(f) = w(s,f) + [c'(e_2(s))e_2(s) − c(e_2(s))] − [c'(e_2(f))e_2(f) − c(e_2(f))], so for any e_1, the larger the gap between e_2(s) and e_2(f), the smaller the value of w(s,f) required to make u(s) − u(f) = c'(e_1).

An implication of Proposition 5 is that the probability of success is positively correlated across periods.
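As a rough numerical illustration of Proposition 5 (the cost function c(e) = e³ and the targets e_1 and E_2 below are assumptions made for the illustration only), one can minimize expression (17) over e_2(f) on a grid:

c  = lambda e: e**3                   # illustrative strictly convex cost with c''' >= 0
cp = lambda e: 3*e**2                 # its marginal cost c'
e1, E2 = 0.4, 0.5                     # assumed targets for e1 and E2 = e1*e2(s) + (1-e1)*e2(f)

def total_cost(e2f):
    # expression (17), with e2(s) pinned down by the constraint on E2
    e2s = (E2 - (1 - e1)*e2f) / e1
    if not 0.0 <= e2s <= 1.0:
        return float("inf")
    return e1*cp(e1) + cp(e2f)*e2f + e1*(c(e2s) - c(e2f))

e2f_star = min(range(1001), key=lambda k: total_cost(k/1000)) / 1000
e2s_star = (E2 - (1 - e1)*e2f_star) / e1
print(e2f_star, e2s_star)             # roughly 0.35 and 0.73, so e2(s) > e2(f) > 0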

If the principal wants to implement a total expected effort of E and does not care directly about how the agent allocates his effort across periods, then we can show that, at the optimum, 0 < e_2(f) < e_1 < e_2(s).


Proposition 6 Suppose the principal wants to implement a total expected effort of E = e_1 + e_1e_2(s) + (1 − e_1)e_2(f). Then the compensation scheme that minimizes the principal's expected cost induces the agent to choose efforts satisfying 0 < e_2(f) < e_1 < e_2(s).

Proof: By Proposition 5, we already know that 0 < e_2(f) < e_2(s). We first show that e_1 > e_2(f). The principal chooses e_1, e_2(s), and e_2(f) to minimize expression (17) subject to e_1 + e_1e_2(s) + (1 − e_1)e_2(f) = E. Using this constraint to substitute for e_2(s) in (17) and differentiating with respect to e_1 yields the following first-order condition

c'(e_1) + c''(e_1)e_1 − c'(e_2(s))(1 + e_2(s) − e_2(f)) + (c(e_2(s)) − c(e_2(f))) = 0.    (19)

The first-order condition with respect to e_2(f) is the same as when e_1 is exogenously given, namely equation (18). Subtracting (18) from (19) yields

c'(e_1) + c''(e_1)e_1 − (c'(e_2(f)) + c''(e_2(f))e_2(f)) = c'(e_2(s))(e_2(s) − e_2(f)) − (c(e_2(s)) − c(e_2(f))) + e_1(c'(e_2(s)) − c'(e_2(f))) > 0

because e_2(s) > e_2(f) and c is strictly convex. Since c'(e) + c''(e)e is strictly increasing if c''' ≥ 0, it follows that e_1 > e_2(f). Furthermore, it follows from 0 < e_2(s) − e_2(f) < 1, c''' ≥ 0, and equation (19) that

2c'(e_1) < c'(e_1) + c''(e_1)e_1 = c'(e_2(s))(1 + e_2(s) − e_2(f)) − (c(e_2(s)) − c(e_2(f))) < 2c'(e_2(s))

and therefore, since c is strictly convex, e_1 < e_2(s).

4.3 Comparing the Two Scenarios

The problem of the agent in the revelation scenario can be written as

\max_{e_1,\, e_2(f),\, e_2(s)} U(e_1, e_2(f), e_2(s)) = \max_{e_1,\, e_2(f),\, e_2(s)}\; [\delta e_1 - c(e_1)] + (1-e_1)[\zeta e_2(f) - c(e_2(f))] + e_1[\theta e_2(s) - c(e_2(s))].    (20)

Our goal is to show that for any constellation of e_1^*, e_2^*(f), and e_2^*(s) giving rise to a total expected effort E, the same total expected effort can be implemented more cheaply in the no-revelation scenario.

Proposition 7 Assume 2c''(e) + ec'''(e) > 0. Achieving a total expected effort of E under revelation has an expected cost to the principal of more than R(E/2). Thus, it is less costly to implement any given total expected effort if information is not revealed.


Proof: Case e_1^* > E/2. From the first-order conditions with respect to e_1 we have

\frac{\partial U(e_1, e_2^*(f), e_2^*(s))}{\partial e_1} = c'(e_1^*) - c'(e_1).

We can then express the agent’s surplus as

U(e_1^*, e_2^*(f), e_2^*(s)) = U(0, e_2^*(f), e_2^*(s)) + \int_0^{e_1^*} \frac{\partial U(e_1, e_2^*(f), e_2^*(s))}{\partial e_1}\, de_1 = U(0, e_2^*(f), e_2^*(s)) + \int_0^{e_1^*} \left[c'(e_1^*) - c'(e_1)\right] de_1.

Adding to this expression the cost of effort, which is at least c(e_1^*), yields the cost to the principal of implementing (e_1^*, e_2^*(f), e_2^*(s)). The cost to the principal is therefore not smaller than

U(0, e_2^*(f), e_2^*(s)) + R(e_1^*) > U(0, e_2^*(f), e_2^*(s)) + R(E/2).

The inequality follows from the assumption that e_1^* > E/2. The term U(0, e_2^*(f), e_2^*(s)) represents the expected surplus of an agent who has exerted no effort in the first period; it is nonnegative by individual rationality. This shows that a total expected effort of E is cheaper to achieve via no revelation.

Case e_1^* ≤ E/2.

Write the agent’s surplus as

U(e_1^*, e_2^*(f), e_2^*(s)) = U(e_1^*, 0, 0) + \int_0^{e_2^*(f)} \frac{\partial U(e_1^*, e_2(f), 0)}{\partial e_2(f)}\, de_2(f) + \int_0^{e_2^*(s)} \frac{\partial U(e_1^*, e_2^*(f), e_2(s))}{\partial e_2(s)}\, de_2(s).

Make use of the first order conditions to rewrite the agent’s surplus as

U(e_1^*, 0, 0) + (1-e_1^*)\int_0^{e_2^*(f)} \left[c'(e_2^*(f)) - c'(e_2(f))\right] de_2(f) + e_1^*\int_0^{e_2^*(s)} \left[c'(e_2^*(s)) - c'(e_2(s))\right] de_2(s).

Adding the cost of effort to this expression yields the cost to the principal of implementing (e_1^*, e_2^*(f), e_2^*(s)), which is

\delta e_1^* + (1-e_1^*)R(e_2^*(f)) + e_1^* R(e_2^*(s)).

The function R(e) is strictly convex when 2c''(e) + ec'''(e) > 0. This means that the average area of the two rectangles in Figure 2 is larger than the area of the rectangle with average base. Then the cost to the principal of implementing (e_1^*, e_2^*(f), e_2^*(s)) is strictly greater than

\delta e_1^* + R\left((1-e_1^*)\,e_2^*(f) + e_1^*\,e_2^*(s)\right).


Figure 2:

Since by assumption e_1^* ≤ E/2, it must be that (1 − e_1^*)e_2^*(f) + e_1^*e_2^*(s) ≥ E/2. Therefore, the previous expression is greater than

\delta e_1^* + R(E/2).

The term R(E/2) represents the cost of implementing E in the no-revelation scenario. Since δ ≥ 0, that cost is smaller than the cost to the principal of implementing (e_1^*, e_2^*(f), e_2^*(s)) under revelation.

Note that the proof of Proposition 7 can be readily adapted to prove the following:

Proposition 8 Suppose the principal wants to implement an effort of e_1 in the first period and an expected effort of E_2 in the second period. This can be done at lower cost when information is not revealed to the agent.

4.4 Example where revelation is optimal

Consider the following cost function:

c(e) = \begin{cases} \dfrac{e^2}{2} & e \le \hat{e} \\ \hat{e}\,e - \dfrac{\hat{e}^2}{2} & e > \hat{e} \end{cases}    (21)

with corresponding marginal cost:

c'(e) = \begin{cases} e & e < \hat{e} \\ \hat{e} & e > \hat{e} \end{cases}

We will show that, for this cost function, the principal's total cost of implementing a total effort over two periods of at least 2ê is lower if the principal reveals information to the agent. This is driven by the fact that the first order approach fails, and the first order conditions of the agent are not sufficient to guarantee that incentive constraints are satisfied. Once global incentive constraints are taken into account, the power of the incentive scheme under no revelation becomes very weak, and this gives the edge to revelation.

Claim: Assume that the agent's cost function is given by equation (21), and that the principal wants to implement a total effort over two periods of at least 2ê. Then, in the optimal incentive scheme in the no-revelation scenario, w(s,s) − w(s,f) − w(f,s) = 0.

This claim can easily be verified in the case in which the effort to be implemented is e* in both periods, i.e. the principal wants the agent to implement the same effort in both periods. Indeed, assume that w(s,s) − w(s,f) − w(f,s) > 0, and consider a choice of effort of e for the agent in both periods. The second-order condition for this common effort is: w(s,s) − w(s,f) − w(f,s) − c''(e*) < 0. But if e* > ê, c''(e*) = 0, and if w(s,s) − w(s,f) − w(f,s) > 0, the first-order conditions describe a local minimum. In the case in which the efforts to be implemented are different in the two periods the logic is somewhat more complex but quite similar nevertheless. Since the costs to the principal can easily be shown to be decreasing in w(s,s) − w(s,f) − w(f,s), this proves the above claim.

Given this claim, it is easy to show that the principal is better off in the revelation scenario. Because of the claim, in the no-revelation scenario we have w(s,s) − w(s,f) = w(f,s). This wage profile implements exactly the same effort in the revelation scenario since the marginal benefit of effort to the agent is the same in both states of the second period. Thus, this effort profile is implementable at the same costs in the two scenarios. However, we know from Proposition 5 that, in the revelation scenario, the principal benefits from choosing different efforts in the two states. Thus, the principal does better under revelation.

It turns out that even in the no-revelation scenario, once the first order approach fails, it can be optimal to implement different efforts in the two periods. To see this, note that to implement 0 < e_1 < ê, 1 > e_2 ≥ ê, we must have: w(f,s) = ê, w(s,f) = e_1. This leads to a total cost to the principal of TC = e_2ê + e_1², where e_1 + e_2 = 2ê. Thus, the optimal profile is obtained by e_1 = ê/2, e_2 = 3ê/2. Thus, total cost in the N-scenario is

TC^N = \frac{3\hat{e}^2}{2} + \frac{\hat{e}^2}{4} = \frac{7}{4}\hat{e}^2.

In the revelation scenario, consider the following effort profile: e_1 = ê, e_2(s) = 1, e_2(f) = 0. To obtain e_2(s) = 1, we must set w(s,s) − w(s,f) = c'(1) = ê. This leads to a utility for the agent conditional on success of

u^S = w(s,s) - \left(\hat{e} - \frac{\hat{e}^2}{2}\right) = w(s,f) + \frac{\hat{e}^2}{2}.

Furthermore, to obtain e_1 = ê, we must have u^S − u^F = ê, i.e., w(s,s) = 2ê − ê²/2 and w(s,f) + ê²/2 = ê. Thus,

w(s,f) = \hat{e} - \frac{\hat{e}^2}{2}, \qquad w(s,s) = 2\hat{e} - \frac{\hat{e}^2}{2}.

This leads to TC^Y = ê·w(s,s) = 2ê² − ê³/2. It can easily be verified that TC^Y < TC^N if

\frac{1}{4}\hat{e}^2 - \frac{1}{2}\hat{e}^3 < 0, \quad \text{i.e.,} \quad \frac{1}{2}\hat{e}^2\left(\frac{1}{2} - \hat{e}\right) < 0, \quad \text{i.e.,} \quad \hat{e} > \frac{1}{2}.
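For instance (illustrative numbers only), with ê = 0.6 the two costs are TC^N = (7/4)(0.36) = 0.63 and TC^Y = 2(0.36) − (0.216)/2 = 0.612, so revelation is indeed cheaper; with ê = 0.4 the ranking reverses: TC^N = 0.28 < TC^Y = 0.288.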

4.5 Ability

We now add to the model a component of ability. Agents can be more or less able. Ability translates into higher value of effort to the principal: effort exerted by a more able agent is more valuable to the principal. Formally, the value to the principal of effort e from an agent of ability a is given by the function v(e, a), which is increasing in a. In this formulation, a good agent is not more likely than a bad agent to succeed, nor is his effort cheaper.

After the first-period effort has been exerted, the principal draws a signal that is informative about the agent's ability. This signal is separate from and independent of the signal about effort, which as before is thought of as success or failure.

As before, we allow the contract to be conditional on the agent's revealed ability. The question for the principal is whether, in this new scenario, there should be interim evaluations. If so, what kind of evaluation should this be, i.e., should it reveal to the agent how well his effort turned out and/or reveal the signal about his ability?

We show that, while an interim evaluation is generally preferable to no interim evaluation, the subject of the evaluation should be the ability of the candidate, not how well his or her effort turned out. Formally, we show that revealing how the effort turned out is dominated by not revealing.

To this end, we first study the case in which ability is revealed but effort is not. We examine the case in which the agent is rewarded only for two successes.

Proposition 9 Suppose the principal gives an interim evaluation that only reveals the agent's ability a but not whether the agent succeeded or failed in the first period. Suppose further that the agent is rewarded only for two successes. Then the cost of implementing any implementable plan e_1^*, e_2^* is R(e_1^*) = E_a[R(e_2^*(a))].

Remark 1 By the equality, and since R is convex, we obtain e_1^* ≥ E_a[e_2^*(a)]. Thus, the agent's expected effort (though not necessarily its value to the principal) is greater in the first period.

Proof: Denote with γ(a) the reward for two successes that implements e_1^*, e_2^*(a). Notice that this reward depends on the ability of the agent as revealed by the signal a. Given this system of rewards, the agent's utility from taking plan e_1, e_2(a) is

U(e_1, e_2) = E_a\left[e_1 \cdot e_2(a) \cdot \gamma(a) - c(e_2(a))\right] - c(e_1).

Write

U(e_1^*, e_2^*) = U(0, e_2^*) + \int_0^{e_1^*} \frac{\partial U(e_1, e_2^*)}{\partial e_1}\, de_1 = E_a\left[-c(e_2^*(a))\right] + \int_0^{e_1^*} \left[c'(e_1^*) - c'(e_1)\right] de_1.

Adding the expected cost of effort yields the cost to the principal of implementing the action plan, which equals R(e_1^*). Conversely, write

U(e_1^*, e_2^*) = U(e_1^*, 0) + E_a\left\{\int_0^{e_2^*(a)} \left[c'(e_2^*(a)) - c'(e_2(a))\right] de_2(a)\right\} = -c(e_1^*) + E_a\left\{\int_0^{e_2^*(a)} \left[c'(e_2^*(a)) - c'(e_2(a))\right] de_2(a)\right\}.

Adding the expected cost of effort yields the cost to the principal of implementing the action plan, which equals E_a[R(e_2^*(a))].

Proposition 10 Suppose the principal gives an interim evaluation that only reveals the agent's ability a but not whether the agent succeeded or failed in the first period. Then it is optimal for the principal to reward the agent only for two successes.

Proof: Consider any action plan ẽ_1^*, ẽ_2^* that can be implemented by rewarding more outcomes than just two successes, and denote the agent's utility by

U(\tilde{e}_1^*, \tilde{e}_2^*) = \tilde{\alpha}\tilde{e}_1^* - c(\tilde{e}_1^*) + E_a\left[\tilde{\beta}(a) \cdot \tilde{e}_2^*(a) - c(\tilde{e}_2^*(a))\right] + E_a\left[\tilde{\gamma}(a) \cdot \tilde{e}_1^* \cdot \tilde{e}_2^*(a)\right].

Performing the usual transformations yields

U(\tilde{e}_1^*, \tilde{e}_2^*) = \tilde{\alpha}\tilde{e}_1^* + E_a\left[R(\tilde{e}_2^*(a))\right] = E_a\left[\tilde{\beta}(a) \cdot \tilde{e}_2^*(a)\right] + R(\tilde{e}_1^*).

Now we construct e_1^*, e_2^*, an action plan that is implementable by rewarding only two successes and has expected value greater than or equal to that of ẽ_1^*, ẽ_2^*. Start from a reward scheme that only rewards two successes and in which the vector γ(a) is chosen so that the resulting vector of second-period efforts equals ẽ_2^*. Now look at the resulting first-period effort level. Two scenarios are possible. Either the first-period effort is larger than or equal to ẽ_1^*, in which case we denote the resulting action plan by e_1^*, e_2^*. The action plan e_1^*, e_2^* gives the principal an expected value at least as large as ẽ_1^*, ẽ_2^* and, by Proposition 9, costs E_a[R(e_2^*(a))] = E_a[R(ẽ_2^*(a))]. This cost is no greater than what it costs to implement ẽ_1^*, ẽ_2^*, which proves our claim that it is optimal for the principal to reward the agent only for two successes.

In the second scenario, the first-period effort associated with the reward scheme γ is smaller than ẽ_1^*. This means that the expected value to the principal under scheme γ is smaller than the expected value of ẽ_1^*, ẽ_2^*. Then increase the vector γ along all its components, and keep doing this until the resulting expected value of the effort taken by the agent equals the expected value of ẽ_1^*, ẽ_2^*. Denote the resulting effort levels by e_1^*, e_2^*. Notice that since by construction e_2^*(a) > ẽ_2^*(a) for all a, therefore e_1^* < ẽ_1^*. But then the expected cost of implementing e_1^*, e_2^*, which, by Proposition 9, equals R(e_1^*), is smaller than R(ẽ_1^*) and thus smaller than the cost of implementing ẽ_1^*, ẽ_2^*.

Now we want to show that, assuming that the principal reveals the agent's ability in the interim evaluation, a given expected value of effort can be more cheaply implemented by not disclosing in the interim evaluation whether the first-period effort resulted in success or failure.

Proposition 11 Given any action plan e_1^{*R}, e_2^*(f), e_2^*(s) that is implementable with revelation of success, there is a plan e_1^*, e_2^* that is implementable under no revelation and gives the principal an expected value at least as large.

Proof: Write

U(e_1^R, e_2(f), e_2(s)) = \delta e_1^R - c(e_1^R) + (1 - e_1^R)\,E_a\left[e_2(f,a)\zeta(a) - c(e_2(f,a))\right] + e_1^R\,E_a\left[e_2(s,a)\theta(a) - c(e_2(s,a))\right].

Denote e_2^* = (1 - e_1^{*R})e_2^*(f) + e_1^{*R}e_2^*(s) as the ability-indexed vector of expected efforts in period 2 in the case of revelation. Pick the vector γ(a) so that the second-period effort vector in the case of no revelation equals e_2^*. Then look at the corresponding first-period effort in the case of no revelation, e_1. If the resulting value of effort to the principal is greater with no revelation than with revelation, then the resulting allocation in the no-revelation case is our candidate plan e_1^*, e_2^*. We call this configuration Case A.

If, instead, the resulting value of effort to the principal is smaller with no revelation than with revelation (which must mean that e_1 < e_1^{*R}), then increase all the components of the vector γ(a) until the resulting value of effort to the principal with no revelation equals that with revelation. The resulting vector of effort in the no-revelation case is our candidate plan e_1^*, e_2^*. Note that by construction in this case we have e_1^* < e_1^{*R}. We call this configuration Case B.


Case A. Write

U(e_1^{*R}, e_2^*(f), e_2^*(s)) = U(e_1^{*R}, 0, 0) + (1 - e_1^{*R})\,E_a\left[\int_0^{e_2^*(f,a)} \left[c'(e_2^*(f,a)) - c'(e_2(f,a))\right] de_2(f,a)\right] + e_1^{*R}\,E_a\left[\int_0^{e_2^*(s,a)} \left[c'(e_2^*(s,a)) - c'(e_2(s,a))\right] de_2(s,a)\right]

= \delta e_1^{*R} - c(e_1^{*R}) + (1 - e_1^{*R})\,E_a\left[\int_0^{e_2^*(f,a)} \left[c'(e_2^*(f,a)) - c'(e_2(f,a))\right] de_2(f,a)\right] + e_1^{*R}\,E_a\left[\int_0^{e_2^*(s,a)} \left[c'(e_2^*(s,a)) - c'(e_2(s,a))\right] de_2(s,a)\right].

Adding the cost of effort yields the cost to the principal of implementing e_1^{*R}, e_2^*(f), e_2^*(s), which is

\delta e_1^{*R} + (1 - e_1^{*R})\,E_a\left[R(e_2^*(f,a))\right] + e_1^{*R}\,E_a\left[R(e_2^*(s,a))\right].

Because R is convex, this is greater than

\delta e_1^{*R} + E_a\left[R\left((1 - e_1^{*R})\,e_2^*(f,a) + e_1^{*R}\,e_2^*(s,a)\right)\right] = \delta e_1^{*R} + E_a\left[R(e_2^*(a))\right].

This is not smaller than E_a[R(e_2^*(a))], the cost of implementing the plan e_1^*, e_2^* under no revelation. Since that plan gives the principal an expected value of effort at least as large as that in the revelation case, we have proved our claim.

Case B. Write

U(e_1^{*R}, e_2^*(f), e_2^*(s)) = U(0, e_2^*(f), e_2^*(s)) + \int_0^{e_1^{*R}} \left[c'(e_1^{*R}) - c'(e_1^R)\right] de_1^R = E_a\left[e_2^*(f,a)\zeta(a) - c(e_2^*(f,a))\right] + \int_0^{e_1^{*R}} \left[c'(e_1^{*R}) - c'(e_1^R)\right] de_1^R.

The first term is nonnegative by individual rationality. Adding the cost of effort, which is at least c(e_1^{*R}), yields the cost to the principal of implementing e_1^{*R}, e_2^*(f), e_2^*(s), which is therefore not smaller than

\int_0^{e_1^{*R}} c'(e_1^{*R})\, de_1^R = R(e_1^{*R}).

Since by construction we have e_1^* < e_1^{*R}, this quantity is strictly greater than R(e_1^*), which is the cost to the principal of implementing the plan e_1^*, e_2^* under no revelation. Since that plan gives the principal an expected value of effort at least as large as that in the revelation case, we have proved our claim.


Example 1 The Value of Interim Evaluations. Suppose that ability a can be 0 or 2, with equal probability. Suppose further that v(e, a) = e · a. In the absence of interim evaluations about ability (we have shown already that interim evaluations about effort are suboptimal), whatever incentive scheme the principal offers that is a function of a will be perceived as its expected value by the agent. Thus, the agent's effort will not depend on information about his ability and the situation is like that in Proposition 3. So, the optimal plan for the principal is to implement the same effort in both periods, call it e*, e*. The expected value of this effort in the first period is e*, in the second period it is (1/2)·2e* + (1/2)·0 = e*, so the expected value in total is 2e*, and that is achieved at cost R(e*).

Suppose now that the principal implements the following effort scheme with revelation of ability (but not of effort). The principal will pay the able agent γ in case of two successes, and zero otherwise. The unable agent receives zero in any case. It is clear that an agent who learns that he is unable will exert no effort in the second period. So, letting ê_2 denote the agent's effort in the second period, the agent solves

\max_{\hat{e}_1, \hat{e}_2}\; \hat{e}_1\,\frac{1}{2}\left(\hat{e}_2\gamma - c(\hat{e}_2)\right) - c(\hat{e}_1).

The resulting value of effort is ê_1 + ê_2. Pick γ so that ê_1 + ê_2 equals 2e*. The cost of implementing ê_1 + ê_2 is, by Proposition 9, R(ê_1). So, if we are able to show that ê_1 ≤ ê_2, then it follows that ê_1 ≤ e* and so it is cheaper to implement ê_1 + ê_2 with revelation of ability rather than e*, e* without revelation. To verify that ê_1 ≤ ê_2, inspect the first-order conditions that determine ê_1 and ê_2:

\frac{1}{2}\left[\hat{e}_2\gamma - c(\hat{e}_2)\right] = c'(\hat{e}_1), \qquad \gamma = c'(\hat{e}_2).

Since ê_2 ≤ 1, then (1/2)[ê_2γ − c(ê_2)] < γ, which implies that ê_1 < ê_2.


5 Multi-Outcome Model (Incomplete)

We now generalize the analysis to the case in which there are more than two outcomes in each period. Assume that in each period there are N outcomes, denoted by i = 1, ..., N. Denote

p_i(e) \equiv \Pr\left[\text{outcome } i \text{ is realized given effort } e\right].

We assume that there is an i_0 with the property that p_i'(e) ≤ 0 iff i ≤ i_0 and p_i'(e) ≥ 0 iff i > i_0. In other words, low-indexed outcomes are such that an increase in effort reduces the probability of those outcomes, whereas high-indexed outcomes are such that effort increases the probability of those outcomes.

5.1 The one-period problem via first order approach

An incentive scheme in the one-period problem is a vector of payments µ = [µ_1, ..., µ_N] that specifies a payment for each outcome realization. Given an incentive scheme µ and a probability vector p, denote the agent's surplus from taking action e as

S\left(e, \sum_i \mu_i p_i\right) = \left(\sum_i \mu_i p_i(e)\right) - c(e).

Given the incentive scheme, the agent solves

\max_e\; S\left(e, \sum_i \mu_i p_i\right).

At an interior optimum e* the first-order condition must be satisfied, i.e.

\sum_i \mu_i p_i'(e^*) = c'(e^*).    (22)

Assume that the principal wishes to implement e*. Then the principal's problem is to minimize the cost of getting the agent to choose e*. Doing so involves minimizing the agent's surplus conditional on the surplus being nonnegative and on the agent being willing to choose e*.

The principal chooses µ to solve

\min_{\mu}\; S\left(e^*, \sum_i \mu_i p_i\right)

\text{s.t.}\quad S\left(e^*, \sum_i \mu_i p_i\right) \ge 0

\sum_i \mu_i p_i'(e^*) = c'(e^*)

\mu_i \ge 0.


The first constraint is the standard individual rationality constraint. The second constraint is the agent's first-order condition. We assume that this constraint is enough to ensure that the agent's incentive constraint is satisfied; we thus follow the first order approach (Rogerson 1985). We will later remove this assumption, which turns out to be very stringent in the dynamic model. The final constraint is the limited liability constraint.

Define

\tilde{\mu}_i = \frac{\mu_i p_i'(e^*)}{c'(e^*)}, \qquad \tilde{p}_i(e; e^*) = \frac{p_i(e)\, c'(e^*)}{p_i'(e^*)}.

Note that \tilde{\mu}_i \tilde{p}_i = \mu_i p_i. Then the principal's problem can be rewritten as that of choosing the µ̃ that solves

\min_{\tilde{\mu}}\; S\left(e^*, \sum_i \tilde{\mu}_i \tilde{p}_i\right)    (23)

\text{s.t.}\quad S\left(e^*, \sum_i \tilde{\mu}_i \tilde{p}_i\right) \ge 0    (24)

\sum_i \tilde{\mu}_i = 1    (25)

\tilde{\mu}_i \le 0 \text{ iff } i \le i_0.    (26)
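To see where the last two constraints come from (a short derivation consistent with the definitions above), divide the agent's first-order condition (22) by c'(e*):

\sum_i \tilde{\mu}_i = \frac{\sum_i \mu_i p_i'(e^*)}{c'(e^*)} = 1,

which is (25); and since µ_i ≥ 0 while p_i'(e*) is nonpositive exactly for i ≤ i_0, the transformed payments satisfy µ̃_i ≤ 0 iff i ≤ i_0, which is (26).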

The next proposition characterizes the optimal incentive scheme.

Proposition 12 Assume the first order approach is valid.

(i) Assume first that the agent's individual rationality constraint is not binding at the optimum. Then, the optimal incentive scheme entails paying the agent a positive wage only when outcome k^+ \equiv \arg\min_{i>i_0} \{S(e^*, \tilde{p}_i)\} is realized. In this case, the cost of implementing action e* is \tilde{p}_{k^+}(e^*; e^*) = p_{k^+}(e^*)\,\frac{c'(e^*)}{p_{k^+}'(e^*)}.

(ii) If there is full extraction at the optimum, then the optimal scheme may entail rewarding the agent upon the realization of more than one outcome.

Proof: Let $\mathcal L$ denote the Lagrangean for problem (23) subject to constraint (24), and let $\lambda \leq 0$ denote the multiplier on that constraint. The principal's problem reduces to choosing the vector $\tilde\mu$ that satisfies constraints (25) and (26) and minimizes

$$\mathcal L(\tilde\mu) = (1+\lambda)\, S\Bigl(e^*, \sum_i \tilde\mu_i \tilde p_i\Bigr) = (1+\lambda) \sum_i \tilde\mu_i\, S(e^*, \tilde p_i),$$

where the last equality obtains because $\sum_i \tilde\mu_i = 1$. Denote

$$k^- = \arg\max_{i \leq i_0}\{S(e^*, \tilde p_i)\}; \qquad k^+ = \arg\min_{i > i_0}\{S(e^*, \tilde p_i)\}.$$

Suppose first that $\lambda = 0$. Since the Lagrangean is linear in $\tilde\mu$, the minimizer $\tilde\mu^*$ will be such that $\tilde\mu_i^* = 0$ for $i \neq k^-, k^+$, and, in light of constraint (25), $\tilde\mu_{k^+}^* = 1 - \tilde\mu_{k^-}^*$. So the Lagrangean reads

$$\mathcal L(\tilde\mu) = \tilde\mu_{k^-}^*\, S(e^*, \tilde p_{k^-}) + (1 - \tilde\mu_{k^-}^*)\, S(e^*, \tilde p_{k^+}).$$

Therefore, if $S(e^*, \tilde p_{k^-}) - S(e^*, \tilde p_{k^+}) < 0$, the optimal solution is $\tilde\mu_{k^-}^* = 0$ and the value of the Lagrangean is $S(e^*, \tilde p_{k^+})$. This vector is the solution provided that $S(e^*, \tilde p_{k^+}) \geq 0$, so that constraint (24) is not violated and indeed $\lambda = 0$; in this case the agent's surplus equals $S(e^*, \tilde p_{k^+})$. If, instead, $S(e^*, \tilde p_{k^+}) < 0$, then the solution cannot put weight $1$ on $\tilde p_{k^+}$, since otherwise constraint (24) would be violated. Given the linearity of the Lagrangean, this requires that $\lambda$ equal $-1$. But then the Lagrangean equals zero, i.e., the agent's surplus is zero in the optimal contract. If $S(e^*, \tilde p_{k^-}) - S(e^*, \tilde p_{k^+}) > 0$, then it would be optimal to choose $\tilde\mu_{k^-}^*$ arbitrarily close to $-\infty$; but then constraint (24) would be violated. This means that $\lambda$ must equal $-1$, and again the Lagrangean equals zero, i.e., the agent's surplus is zero in the optimal contract.
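To make the characterization concrete, the following sketch works through a hypothetical three-outcome example (the probability functions, the quadratic cost, and the target effort are all assumptions chosen for illustration, not part of the model above): it locates $k^+$, constructs the one-outcome payment, and checks the agent's first order condition, the expected wage bill, and the agent's rent.

```python
# Illustrative sketch of Proposition 12 in a hypothetical 3-outcome example.
# The functional forms below are assumptions, not taken from the paper.
import numpy as np

c, dc = (lambda e: e**2), (lambda e: 2*e)              # effort cost and marginal cost

# Outcome probabilities: the first two fall with effort, the third rises with effort.
p  = lambda e: np.array([0.5*(1 - e), 0.3*(1 - e) + 0.1, 0.1 + 0.8*e])
dp = lambda e: np.array([-0.5, -0.3, 0.8])

e_star    = 0.4                 # effort the principal wants to implement (assumed)
first_pos = 2                   # 0-based index of the first outcome with p_i' > 0

p_tilde = p(e_star) * dc(e_star) / dp(e_star)               # \tilde p_i(e*; e*)
k_plus  = first_pos + int(np.argmin(p_tilde[first_pos:]))   # cheapest "rising" outcome

mu = np.zeros(3)
mu[k_plus] = dc(e_star) / dp(e_star)[k_plus]                # reward only outcome k+

foc     = mu @ dp(e_star) - dc(e_star)                      # agent's FOC: ~0
cost    = mu @ p(e_star)                                    # expected wage bill
surplus = cost - c(e_star)                                  # agent's rent (IR slack)
print(k_plus, round(foc, 12), cost, p_tilde[k_plus], surplus)
```

In this example only the highest outcome's probability rises with effort, so $k^+$ is trivially the top outcome; the printout confirms that the expected wage bill equals $\tilde p_{k^+}(e^*; e^*)$ and that the agent retains a strictly positive surplus, so case (i) of the proposition applies.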

With additional assumptions on the probability vector $p$, it is possible to characterize the optimal scheme further.

Assumption (Monotone Likelihood Ratio). For every $e < e'$ and $i < j$, we have

$$\frac{p_i(e)}{p_j(e)} \geq \frac{p_i(e')}{p_j(e')}.$$

Under this assumption we can provide a sharper characterization of the incentive scheme.

Corollary 2 Assume the first order approach is valid. When $p(e)$ has the MLR property, then $k^+ = N$: unless full extraction obtains, the agent is rewarded only when the highest outcome is realized.

Proof: When $p(e)$ has the MLR property, $\tilde p_i(e^*; e^*) > \tilde p_j(e^*; e^*)$ for $j > i > i_0$. To verify this, observe first that if $p(e)$ has the MLR property, then $\tilde p(e; e^*)$ also does for outcomes $j > i > i_0$. This means that for every $e$, $j > i > i_0$ implies

$$\frac{\partial}{\partial e}\log\Bigl(\frac{\tilde p_i(e; e^*)}{\tilde p_j(e; e^*)}\Bigr) \leq 0,$$

which can be written as

$$\frac{\tilde p_i(e; e^*)}{\tilde p_j(e; e^*)} \geq \frac{\tilde p_i'(e; e^*)}{\tilde p_j'(e; e^*)}.$$

The right-hand side equals $1$ at $e = e^*$, which proves our claim. The fact that $\tilde p_i(e^*; e^*) > \tilde p_j(e^*; e^*)$ implies that $S(e^*, \tilde p_i) > S(e^*, \tilde p_j)$ and, therefore, that $k^+ = N$.
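As a quick check that the hypothetical example used above is consistent with the corollary, the following snippet verifies numerically that the assumed probabilities satisfy the MLR property, so that $k^+$ is indeed the highest outcome (again, the functional forms are illustrative assumptions):

```python
import numpy as np

p = lambda e: np.array([0.5*(1 - e), 0.3*(1 - e) + 0.1, 0.1 + 0.8*e])

grid = np.linspace(0.0, 0.99, 100)
# likelihood ratios p_i/p_j for every pair i < j, evaluated along the effort grid
ratios = np.array([[p(e)[i] / p(e)[j] for (i, j) in [(0, 1), (0, 2), (1, 2)]]
                   for e in grid])
# MLR: each ratio should be (weakly) decreasing in effort
print(bool(np.all(np.diff(ratios, axis=0) <= 1e-12)))
```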

5.2 The 2-period problem via first order approach (The Free Lunch property)

We now move to the dynamic agency environment, and we extend the properties of the incentive scheme obtained in the previous subsection to derive an important feature of the dynamic problem that will be useful for our comparison with the revelation scenario.

Denote by $h, j$ the outcome in which $h$ is realized in the first period and $j$ in the second period. For example, if the outcome can be success ($s$) or failure ($f$), the four possible two-period outcomes are $ss$, $sf$, $fs$, $ff$. The principal rewards any two-period outcome realization $h, j$ with payment $\mu_{hj}$. Therefore, an incentive scheme is a nonnegative vector $\mu_{11}, \ldots, \mu_{NN}$. The agent's surplus is

$$S\Bigl(e_1, e_2, \sum_{h,j} \mu_{hj}\, p_h p_j\Bigr) = \sum_{h=1}^N \sum_{j=1}^N \mu_{hj}\, p_h(e_1)\, p_j(e_2) - c(e_1) - c(e_2).$$

Let us start from the problem in which the principal wants to implement the same effort $e^*$ in both periods. We show that it is possible to implement this outcome at an expected cost $\tilde p_{k^+}(e^*)$, the same cost as implementing $e^*$ in one period only!

Proposition 13 (The free lunch property) Assume the first order approach is valid.
(i) The cost to the principal of implementing effort $e^*$ in both periods is $\tilde p_{k^+}(e^*)$; this is the same as the cost of implementing $e^*$ in the one-period problem.
(ii) In the 2-period problem, any principal with preferences that are strictly monotonic in $e_1^*, e_2^*$ will wish to implement the same effort in both periods.
(iii) The optimal incentive scheme to implement $e^*$ entails paying the agent only when the outcome $k^+ = \arg\min_{i > i_0}\{S(e^*, \tilde p_i)\}$ is realized in both periods, and nothing otherwise, unless there is full extraction (i.e., the agent retains no surplus). If there is full extraction, then the optimal scheme may entail rewarding the agent upon the realization of more than one outcome.

Proof: Consider the incentive scheme which pays the agent only when outcome $k^+$ is realized twice. We can write the agent's problem as

$$\max_{e_1, e_2}\; \mu_{k^+k^+}\, p_{k^+}(e_1)\, p_{k^+}(e_2) - c(e_1) - c(e_2).$$

The first order conditions read

$$\mu_{k^+k^+}\, p_{k^+}'(e_h)\, p_{k^+}(e_j) = c'(e_h),$$

so to implement $e^*$ it must be that

$$\mu_{k^+k^+} = \frac{c'(e^*)}{p_{k^+}'(e^*)\, p_{k^+}(e^*)}.$$

The cost to the principal of implementing $e^*, e^*$ is

$$\mu_{k^+k^+}\, p_{k^+}(e^*)\, p_{k^+}(e^*) = \tilde p_{k^+}(e^*; e^*).$$

This proves part (i).

Now let us show that this contract is the optimal contract for any principal with preferences that are strictly monotonic in $e_1^*, e_2^*$. We will show that, given any incentive scheme that implements $e_1^*, e_2^*$ with $e_1^* < e_2^*$, the principal can implement $e_2^*, e_2^*$ at a lower cost. To see this, let us write down the cost of implementing $e_1^*, e_2^*$ as

$$\sum_{h,j} \mu_{hj}\, p_h(e_1^*)\, p_j(e_2^*).$$

Since it is optimal for the agent to take $e_2^*$ in the second period, the first order condition must be satisfied:

$$\sum_{h,j} \mu_{hj}\, p_h(e_1^*)\, p_j'(e_2^*) = c'(e_2^*).$$

Define

$$\tilde\mu_{hj} = \frac{\mu_{hj}\, p_j'(e_2^*)}{c'(e_2^*)}.$$

Then the first order condition reads

$$\sum_{h,j} \tilde\mu_{hj}\, p_h(e_1^*) = 1,$$

and the cost of implementing $e_1^*, e_2^*$ can be written as

$$\sum_{h,j} \tilde\mu_{hj}\, p_h(e_1^*)\, \tilde p_j(e_2^*; e_2^*) \;\geq\; \sum_{h,j} \tilde\mu_{hj}\, p_h(e_1^*)\, \tilde p_{k^+}(e_2^*; e_2^*),$$

where the inequality holds because $\tilde p_j(e_2^*; e_2^*) \geq \tilde p_{k^+}(e_2^*; e_2^*) \geq 0$ when $\tilde p_j$ and $\tilde\mu_{hj}$ are positive, while when $\tilde p_j$ and $\tilde\mu_{hj}$ are negative, then $\tilde\mu_{hj}\tilde p_j \geq 0 \geq \tilde\mu_{hj}\tilde p_{k^+}$. In light of the first order condition, the right-hand side equals $\tilde p_{k^+}(e_2^*; e_2^*)$: the cost of implementing $e_2^*, e_2^*$. The inequality therefore shows that the cost of implementing $e_1^*, e_2^*$ is greater than the cost of implementing $e_2^*, e_2^*$.


We have ignored the issue of the second order conditions relative to $e_h$ and $e_j$ taken separately. But the second order conditions are satisfied with respect to a joint deviation in $e_h$ and $e_j$; that is,

$$\mu_{k^+k^+}\,[p_{k^+}(e)]^2 - 2c(e)$$

is maximized at $e^*$. Since $p_{k^+}(e^*)\,p_{k^+}(e) \leq \tfrac{1}{2}\bigl([p_{k^+}(e^*)]^2 + [p_{k^+}(e)]^2\bigr)$, it follows that

$$\mu_{k^+k^+}\, p_{k^+}(e^*)\, p_{k^+}(e) - c(e) - c(e^*)$$

is also maximized at $e = e^*$, i.e., the second order conditions are satisfied with respect to individual deviations in $e_h$ and $e_j$.

It is easy to see that the logic of the proof of this proposition extends to any number of periods. As long as the first order approach remains valid, the cost to the principal of implementing effort $e^*$ for $T$ periods is the same as the cost of implementing $e^*$ in a single period. This of course means that the first order approach cannot remain valid indefinitely: at some point, it would violate the agent's individual rationality constraint. We will later see the implications of violations of the first order approach.

5.3 Comparison of revelation policies in the two period model

With revelation the agent can condition his second-period effort on the first-period realization $h$. We denote the second-period effort by $e_{2h}$, and the vector that collects all the $e_{2h}$ is denoted by $e_2$. The agent's surplus with revelation is

$$\sum_{h=1}^N p_h(e_1)\Bigl[\sum_{j=1}^N \mu_{hj}\, p_j(e_{2h}) - c(e_{2h})\Bigr] - c(e_1). \qquad (27)$$

Suppose the principal maximizes

$$U(e_1) + \delta\, E\,U(e_2) - m,$$

where the expectation is taken using the probability generated by $p_h(e_1)$, and $m$ represents the transfer to the agent.

Theorem 1 Assume the first order approach is valid. Then the principal prefers not to reveal information.

Proof: We will show that, given any pair $e_1^*, e_2^*$ that is implementable with revelation, the principal can at a lower cost implement either $e_1^*, e_1^*$ or $e_2^*, e_2^*$ by not revealing (here $e_2^*, e_2^*$ denotes a lottery which implements $e_{2h}^*, e_{2h}^*$ with probability $p_h(e_1^*)$). Let us start by showing that $e_1^*, e_1^*$ can be implemented more cheaply than $e_1^*, e_2^*$. The first order condition with respect to $e_1$ from problem (27) reads

$$\sum_h \frac{p_h'(e_1^*)}{c'(e_1^*)}\Bigl[\sum_j \mu_{hj}\, p_j(e_{2h}^*) - c(e_{2h}^*)\Bigr] = 1.$$

The cost of implementing $e_1^*, e_2^*$ is

$$\begin{aligned}
\sum_h p_h(e_1^*) \sum_j \mu_{hj}\, p_j(e_{2h}^*)
&\geq \sum_h p_h(e_1^*)\Bigl[\sum_j \mu_{hj}\, p_j(e_{2h}^*) - c(e_{2h}^*)\Bigr] \\
&= \sum_h \tilde p_h(e_1^*; e_1^*)\, \frac{p_h'(e_1^*)}{c'(e_1^*)}\Bigl[\sum_j \mu_{hj}\, p_j(e_{2h}^*) - c(e_{2h}^*)\Bigr] \\
&\geq \sum_h \tilde p_{k^+}(e_1^*; e_1^*)\, \frac{p_h'(e_1^*)}{c'(e_1^*)}\Bigl[\sum_j \mu_{hj}\, p_j(e_{2h}^*) - c(e_{2h}^*)\Bigr] \\
&= \tilde p_{k^+}(e_1^*; e_1^*).
\end{aligned}$$

The last equality makes use of the first order condition. To verify that the second inequality holds, consider first that the term in brackets is nonnegative because it represents the agent's expected surplus in period 2 after $h$ is realized in period 1. Then examine the two cases separately depending on the sign of $\tilde p_h$. When $\tilde p_h > 0$, then $p_h' > 0$ and, since $\tilde p_{k^+}(e_1^*; e_1^*) \leq \tilde p_h(e_1^*; e_1^*)$, the inequality is verified. When $\tilde p_h < 0$, then $p_h' < 0$, and therefore substituting $\tilde p_{k^+} > 0$ for $\tilde p_h$ turns a positive term into a negative one, again consistent with the inequality. This chain of inequalities shows that implementing $e_1^*, e_1^*$ is cheaper than implementing $e_1^*, e_2^*$.

Let us now show that $e_2^*, e_2^*$ can be implemented more cheaply than $e_1^*, e_2^*$. The first order conditions with respect to $e_{2h}$ from problem (27) read

$$\sum_{j=1}^N \frac{\mu_{hj}\, p_j'(e_{2h}^*)}{c'(e_{2h}^*)} = 1.$$

The cost of implementing $e_1^*, e_2^*$ is

$$\begin{aligned}
\sum_h p_h(e_1^*) \sum_j \mu_{hj}\, p_j(e_{2h}^*)
&= \sum_h p_h(e_1^*) \sum_j \frac{\mu_{hj}\, p_j'(e_{2h}^*)}{c'(e_{2h}^*)}\, \tilde p_j(e_{2h}^*; e_{2h}^*) \\
&\geq \sum_h p_h(e_1^*) \sum_j \frac{\mu_{hj}\, p_j'(e_{2h}^*)}{c'(e_{2h}^*)}\, \tilde p_{k^+}(e_{2h}^*; e_{2h}^*) \\
&= \sum_h p_h(e_1^*)\, \tilde p_{k^+}(e_{2h}^*; e_{2h}^*),
\end{aligned}$$

where the last equality makes use of the first order conditions. To verify the inequality, split the argument into two parts. When $\tilde p_j > 0$, then $p_j' > 0$ and, since $\tilde p_{k^+}(e_{2h}^*; e_{2h}^*) \leq \tilde p_j(e_{2h}^*; e_{2h}^*)$, the inequality is verified. When $\tilde p_j < 0$, then $p_j' < 0$, and therefore substituting $\tilde p_{k^+} > 0$ for $\tilde p_j$ turns a positive term into a negative one, again consistent with the inequality. This chain of inequalities shows that $e_1^*, e_2^*$ is more expensive to implement than a lottery which implements $e_{2h}^*, e_{2h}^*$ with probability $p_h(e_1^*)$.

Let us conclude the proof by showing that, purely in terms of the value of effort, the principal must prefer either $e_1^*, e_1^*$ or $e_2^*, e_2^*$ to $e_1^*, e_2^*$. The value to the principal of implementing $e_1^*, e_2^*$, gross of the cost, is

$$U(e_1^*) + \delta \sum_h p_h(e_1^*)\, U(e_{2h}^*) \;\leq\; (1+\delta)\max\Bigl\{U(e_1^*),\; \sum_h p_h(e_1^*)\, U(e_{2h}^*)\Bigr\}.$$

The value to the principal of $e_2^*, e_2^*$, gross of the cost, is

$$(1+\delta) \sum_h p_h(e_1^*)\, U(e_{2h}^*),$$

while the value of implementing $e_1^*, e_1^*$, gross of the cost, is

$$(1+\delta)\, U(e_1^*).$$

This proves that, purely in terms of the value of effort, the principal must prefer either $e_1^*, e_1^*$ or $e_2^*, e_2^*$ to $e_1^*, e_2^*$.
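A small numerical experiment illustrates the theorem in the same hypothetical example (all functional forms and effort targets below are assumptions). Under the first order approach, the cheapest revelation contract implementing a given profile $(e_1^*, \{e_{2h}^*\})$ solves a linear program in the payments $\mu_{hj}$; the sketch compares its value with the two no-revelation benchmarks from the proof, $\tilde p_{k^+}(e_1^*; e_1^*)$ and $\sum_h p_h(e_1^*)\,\tilde p_{k^+}(e_{2h}^*; e_{2h}^*)$.

```python
# Illustrative comparison of revelation vs. no revelation (assumed example).
import numpy as np
from scipy.optimize import linprog

c, dc = (lambda e: e**2), (lambda e: 2*e)
p  = lambda e: np.array([0.5*(1 - e), 0.3*(1 - e) + 0.1, 0.1 + 0.8*e])
dp = np.array([-0.5, -0.3, 0.8])
p_tilde_kplus = lambda e: p(e)[2] * dc(e) / dp[2]   # per-period cost of effort e

e1  = 0.3                      # first-period target (assumed)
e2h = [0.2, 0.3, 0.5]          # second-period targets, one per first-period outcome h

n = 3
# expected payment: sum_{h,j} p_h(e1) p_j(e2h) mu_{hj}
obj = np.array([p(e1)[h] * p(e2h[h])[j] for h in range(n) for j in range(n)])

A_eq, b_eq = [], []
for h in range(n):             # second-period FOC after each first-period outcome h
    row = np.zeros(n * n)
    row[h*n:(h+1)*n] = dp
    A_eq.append(row)
    b_eq.append(dc(e2h[h]))
# first-period FOC: sum_h p'_h(e1) [sum_j mu_{hj} p_j(e2h) - c(e2h)] = c'(e1)
A_eq.append(np.array([dp[h] * p(e2h[h])[j] for h in range(n) for j in range(n)]))
b_eq.append(dc(e1) + sum(dp[h] * c(e2h[h]) for h in range(n)))

res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (n * n), method="highs")

cost_revelation  = res.fun
cost_e1_twice    = p_tilde_kplus(e1)                               # no revelation, (e1, e1)
cost_e2h_lottery = sum(p(e1)[h] * p_tilde_kplus(e2h[h]) for h in range(n))
print(cost_revelation, cost_e1_twice, cost_e2h_lottery)
```

With these numbers the cheapest revelation contract turns out to be more expensive than either no-revelation alternative, consistent with the chains of inequalities above.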


Quadratic 3 periods: no revelation > revelation after second period > revelation afterfirst >? full revelation

Meg’s Example: no revelation < revelation after second period < revelation after first <?full revelation

6 References

Abreu, Dilip, Paul Milgrom, and David Pearce (1991), "Information and Timing in Repeated Partnerships," Econometrica, Vol. 59, No. 6, pp. 1713-1733.

Arya, Anil, Jonathan Glover, and Shyam Sunder (1998), "Earnings Management and the Revelation Principle," Review of Accounting Studies, Vol. 3, Nos. 1-2, pp. 7-34.

Chiappori, Pierre-André, Ines Macho, Patrick Rey and Bernard Salanié (1994), "Repeated Moral Hazard: Memory, Commitment, and the Access to Credit Markets," European Economic Review, pp. 1527-1553.

Fudenberg, Drew, Bengt Holmstrom and Paul Milgrom (1990), "Short-Term Contracts and Long-Term Agency Relationships," Journal of Economic Theory, Vol. 51, pp. 1-32.

Gibbs, Michael (1991), "Economic Approach to Process in Pay and Performance Appraisals," manuscript, University of Chicago, Graduate School of Business.

Holmstrom, Bengt and Paul Milgrom (1987), "Aggregation and Linearity in the Provision of Intertemporal Incentives," Econometrica, Vol. 55, No. 2, pp. 303-328.

Kahn, Charles and Gur Huberman (1988), "Two-Sided Uncertainty and 'Up-or-Out' Contracts," Journal of Labor Economics, Vol. 6, No. 4, pp. 423-444.

Krishna, Vijay and John Morgan (1998), "The Winner-Take-All Principle in Small Tournaments," in Advances in Applied Microeconomics, Vol. 7, pp. 61-74.

Lambert, Richard (2001), "Contracting Theory and Accounting," Journal of Accounting and Economics, Vol. 32, Nos. 1-3, pp. 3-87.

Lazear, Edward (1999), "Keeping Salaries Confidential," manuscript, Stanford University, Graduate School of Business.

Malcomson, James (1984), "Work Incentives, Hierarchy, and Internal Labor Markets," Journal of Political Economy, Vol. 92, No. 3, pp. 486-507.

Meyer, Margaret (1992), "Biased Contests and Moral Hazard: Implications for Career Profiles," Annales d'Economie et de Statistique, Nos. 25-26, pp. 165-187.

Moldovanu, Benny and Aner Sela (2001), "The Optimal Allocation of Prizes in Contests," American Economic Review, Vol. 91, No. 3, pp. 542-558.

Moldovanu, Benny and Aner Sela (2002), "Contest Architecture," Working Paper, University of Mannheim, January.

Prendergast, Canice (1992), "Career Development and Specific Human Capital Acquisition," Journal of the Japanese and International Economies, Vol. 6, pp. 207-227.

Rogerson, William (1985), "Repeated Moral Hazard," Econometrica, Vol. 53, No. 1, pp. 69-76.


