
Journal of Monetary Economics 49 (2002) 1539–1566

Learning-by-doing and aggregate fluctuations$

Russell Cooper^a, Alok Johri^b,*

^a Department of Economics, Boston University, 270 Bay State Road, Boston, MA 02215, USA
^b Department of Economics, McMaster University, 1280 Main Street West, Hamilton, Ont., Canada L8S 4M4

Received 18 December 1998; received in revised form 28 August 2001; accepted 26 February 2002

Abstract

An unresolved issue in business cycle theory is the endogenous propagation of shocks yielding allocations that exhibit the persistence displayed in the data. This paper explores the quantitative implications of one propagation mechanism: learning-by-doing, whose parameters are estimated using sectoral and plant level observations in the U.S. and then integrated into a stochastic growth model with technology shocks. We conclude that learning-by-doing can be a powerful mechanism for generating endogenous persistence. Moreover, learning-by-doing implies that the employment decision of the representative agent is dynamic, which allows a re-interpretation of "taste shocks" or "cyclical labor utilization" as endogenous labor supply shifts.

© 2002 Published by Elsevier Science B.V.

JEL classification: E10; E32; E23

Keywords: Business cycles; Propagation mechanisms; Learning-by-doing

$ Early versions of this paper were presented at the 1997 NBER Summer Institute Meeting of the Macroeconomic Complementarities Group, the 1997 Canadian Macro Study Group, the 1998 North American meetings of the Econometric Society and the 1998 NBER Summer Institute meeting of the Impulse and Propagation Group, as well as the University of British Columbia, University of Toronto, Queens University, SUNY Buffalo and York. We thank participants at these conferences and seminars as well as R. Caballero, V.V. Chari, L. Christiano, M. Eichenbaum, P. Kuhn, L. Magee and a referee for useful comments. We are grateful to Jon Willis and Andrew Clarke for outstanding research assistance on this project. Cooper acknowledges financial support from the National Science Foundation. Johri thanks the Social Science and Humanities Research Council of Canada and the Arts Research Board. This research was partially conducted at the Boston Research Data Center. The opinions and conclusions expressed in this paper are those of the authors and do not necessarily represent those of the U.S. Bureau of Census. This paper has been screened to insure that no confidential information is disclosed.

*Corresponding author. Tel.: +1-905-525-9140 Ext. 23830; fax: +1-905-521-8232.

E-mail address: [email protected] (A. Johri).

0304-3932/02/$ - see front matter © 2002 Published by Elsevier Science B.V.

PII: S0304-3932(02)00180-0


1. Motivation

One of the major unresolved issues in business cycle theory is the endogenous propagation of shocks such that aggregate variables, particularly output, exhibit the persistence observed in the data. The fact that the standard real business cycle (RBC) model has a weak internal propagation mechanism is evident from early work on these models, such as King et al. (1988). This point has been made again in recent papers by Cogley and Nason (1995) and Rotemberg–Woodford (1996) in the context of output growth: the data indicate that U.S. output growth is positively autocorrelated, in contrast to the predictions of the standard RBC model.

In this paper we study the role of internal learning-by-doing (hereafter LBD) in propagating shocks. This is done by supplementing an otherwise standard representative agent stochastic growth model by introducing internal LBD.¹ As we shall see, this creates a new state variable, which we label organizational capital, that reflects past levels of economic activity. Thus LBD has the potential of creating endogenous propagation. The question is whether these effects are quantitatively important.

Section 2 provides an overview of previous work on LBD. These studies document the fact that the production process creates information about the organization of the production facility that improves productivity in the future. While informative, these studies take a very specific view of LBD which is much narrower than that envisaged by us. First, organizational capital is generally associated with firms rather than households, even though skills at one job may be transferable to another.² Second, many of these studies ignore the depreciation of organizational capital. Third, they focus almost exclusively on learning associated with new technologies or new plants while ignoring the creation and destruction of organizational capital due to reorganizations within the production unit.

In contrast, our hypothesis is that the stock of organizational capital also fluctuates over high frequencies due to learning and depreciation when: production teams are re-organized; workers are hired or fired; employees are promoted or redeployed to new tasks; new capital or software is installed; new management, supervision, or bookkeeping practices are introduced; and so on. Further, the process of matching in labor markets creates a form of organizational capital that may be destroyed during periods of job reallocation. These variations in organizational capital may be induced by changes in demand as well as in technology.³

¹ Cooper–Johri (1997) investigates the role of dynamic complementarities in the production function for the propagation of temporary shocks to productivity and tastes and finds the propagation effects can be quite strong: an iid technology shock creates serial correlation in output of about 0.95 using their estimated complementarities. In the present paper these complementarities are ignored in order to isolate the effects of internal LBD.
² Bahk and Gort (1993) provide a useful discussion of this distinction in their LBD study.
³ For example, consider the lumpy investment decision of a firm to replace old machinery with new capital, an act which may destroy organizational capital and initiate LBD. Since some of the organizational capital is very specific to a task or a match (as suggested by the work of Irwin and Klenow (1994), for example), there is probably considerable depreciation of this capital when


Section 3 presents our analysis of the stochastic growth model with organizational capital as an additional state variable. As discussed in Section 4, we find empirical support for these learning effects in both aggregate and plant level data. Finally, the parameterized model is simulated to understand the effects of productivity shocks, as described in Section 5.

Overall, we find some interesting implications of introducing internal LBD into the stochastic growth model. In particular, LBD can be a powerful propagation mechanism. Using parameters from our estimates of LBD at the 2-digit sectoral level, the first two autocorrelation coefficients of the simulated output series can be as high as 0.24 and 0.17 even when the shock process has zero persistence (serial correlation equals 0).⁴ In the presence of serially correlated but temporary technology shocks, we are able to generate the hump-shaped impulse response functions documented in Cogley–Nason (1995). Further, for the case of stochastic trends, when technology follows a random walk, we find that our model produces an autocorrelation function for real output growth with positive coefficients for several periods. This is much closer to the autocorrelation function for output growth in U.S. data and contrasts sharply with the predictions of the RBC model without LBD.

Introducing LBD into the standard model creates another state variable whose movement shifts both labor supply and labor demand. This has a number of interesting implications that we explore as well. First, as seen in Table 7, the model predicts a lower correlation between productivity and employment, in keeping with the finding of Christiano–Eichenbaum (1992). Second, this same device is helpful in understanding the source of "taste shocks" and "unobserved effort" in aggregate fluctuations. We are able to generate "spurious" taste shock and labor effort series (see Table 8) when these are calculated from our simulated data even though the model does not contain either element.

2. Previous studies of LBD

Our specification, presented in detail in the next section, has two key elements:

* increases in output lead to the accumulation of organizational capital;
* organizational capital depreciates.

We relate these components to the existing literature before proceeding with our analysis.

The study of firm-level LBD dates back to the turn of the century (see references in Bahk and Gort (1993) and Jovanovic and Nyarko (1995)). Since Wright's (1936)

(Footnote 3 continued) matches are broken. Further, the incentive to undertake a lumpy investment episode reflects the current state of profitability at the plant (or firm): higher demand today may induce machine replacement and thus LBD. Thus while underlying technological progress may provide the basis for the introduction of new machines, the timing of these innovations may be influenced by the state of demand.
⁴ These numbers come from the parameterization called IRS-PF reported in Table 4.


work, the typical specification of LBD involves estimating how costs of production fall as experience rises. Generally, these studies of LBD use cumulative output as a measure of experience, which therefore, by assumption, does not depreciate over time. These empirical studies often find considerable evidence for LBD in that costs tend to fall with cumulative output: typically, studies find that a doubling of cumulative output leads to a 20% reduction in costs.

In a widely cited micro-study of LBD, Bahk and Gort (1993) introduce experience into the production function as a factor that influences productivity. Bahk and Gort construct a dataset of new manufacturing plants and this allows them to construct two measures of the stock of experience: cumulative output since birth and time since birth. While the authors do not allow experience to depreciate, they do allow for learning to decline to zero over time.

Since they are interested in studying the effects of LBD separately by production factor, they are careful to decompose productivity enhancements into two parts: those that can really be attributed to a change in inputs if measured in efficiency units and those that result from accumulation of experience. To capture the former effects they introduce human capital (measured by average wages) and the average vintage of capital as inputs that are separate from raw labor and capital. The latter effects are captured in two ways: one formulation introduces the stock of experience as a separate input and the other proceeds by allowing the Cobb–Douglas input coefficients to change over time, i.e., the coefficients are functions of time since the birth of the plant. Notice that these specifications are only able to get at LBD that is specific to the firm; any learning that is captured by the employee in the form of skills is lumped into the human capital measure.

When experience was proxied by cumulative output per unit of labor, a 1% change in cumulative output led to a 0.08% change in output. Using the other specification, Bahk and Gort find that capital learning continues for 5–6 years, while labor or organizational learning appears to continue for 10 years, but results were relatively unstable.

Irwin and Klenow (1994) report learning rates of 20% in the semi-conductor industry. In contrast to previous studies, their estimation is structured by a dynamic model of imperfect competition allowing them to endogenize the relationship between marginal cost and price. While their accumulation equation does not explicitly contain depreciation, it should be noted that their estimates pertain to LBD within a particular generation of DRAM chip. Interestingly, they find "economically relevant" spillovers across generations of chips in only two of seven cases (see their Table 7).

Benkard (1997) studies LBD in the commercial aircraft building industry using a closely related specification of technology. In contrast to the many studies described above, Benkard explicitly allows for depreciation of experience. For his industry study, fluctuations in product demand are a significant cause of employment variation which, he hypothesizes, may lead to depreciation of organizational capital.


Using data on inputs per aircraft, Benkard estimates the following equation for labour requirements:

$$\ln N_t = -\frac{1}{a}\left(\ln A + e \ln H_t + f \ln S_t + u_t\right), \quad (1)$$

where N refers to labor used per aircraft, S to the line speed, u is a productivity shock and H to organizational capital, proxied here by experience. Experience accumulates through a linear accumulation technology which depends on past production and past experience as follows:

$$H_t = \lambda H_{t-1} + Y_{1,t-1} + b Y_{2,t-1}, \quad (2)$$

where Y_{1,t-1} is production of the same aircraft last period while Y_{2,t-1} is last period's production of similar aircraft, so that b captures the spillover of past experience on related aircraft. Notice that, like the typical study of LBD, experience or organizational capital depends on cumulative output, but unlike those studies organizational capital depreciates in this specification. In particular, 1 − λ is the rate of depreciation of organizational capital, so that setting λ = 1 and b = 0 would give us the typical accumulation equation.

Using a generalized method of moments procedure, Benkard shows that the model

with depreciation of experience (which he refers to as organizational forgetting) is better able to account for the data than the traditional learning model. Interestingly, without allowing for depreciation, Benkard finds a learning rate of roughly 18%, which is very close to the benchmark figure of 20%. Introduction of depreciation improves the fit dramatically (the residual sum of squares falls from 13.4 to 2.5), the learning rate rises to about 40%, and the monthly depreciation rate is 0.055. Benkard argues that the aircraft producers do not make significant changes in technology once production on a model is underway. He therefore focuses on changes in demand as the source of variations in organizational capital and uses demand shifters as instruments. These include the price of oil, a time trend, the number of competing models in the market, etc. Based on a careful analysis of the specific features of aircraft production technology and the nature of union contracts in the industry, he concludes that a large part of the estimated depreciation of experience may be explained by labor turnover and redeployment of existing workers to new tasks within the firm.

We view Benkard's evidence as supportive of our hypotheses that: (i) increases in output lead to the accumulation of organizational capital and (ii) organizational capital depreciates. On the issue of generality, Benkard (1997) notes other studies recording organizational forgetting in industries as diverse as ship production and pizza production.

Further, to the extent that depreciation of organizational capital is the result of firms not being able to re-hire workers previously laid off in response to a temporary shock, we can look for evidence on this issue. According to Katz and Meyer (1990), for the 1979–1982 period only about 42% of laid-off workers were recalled to their old jobs over the period of about a year.⁵ Therefore the match-specific organizational


capital embodied in the other 58% of laid-off workers was lost to the firms, suggesting that this may be an important source of depreciation.

Imai (2000) introduces the accumulation of human capital into a dynamic labor supply problem at the household level. While the point of the study is to assess the intertemporal labor supply decision in the presence of human capital accumulation, the results include estimation of a human capital production function. The specification assumes that human capital accumulation depends on the stock of human capital and hours (not output). The results, among other things, point to a significant effect of hours on human capital and also a significant depreciation rate in the human capital accumulation process.

Overall, there appears to be substantial support in the literature for LBD, and some recent evidence suggests the depreciation of experience is also empirically relevant. For our purposes, we take this existing literature as supportive of our interest in understanding the aggregate effects of LBD. To study this quantitatively, we provide additional estimates of these effects, motivated, of course, by the extant literature.

3. The model

It is convenient to represent the choice problem of the representative agent through a stochastic dynamic programming problem solved by a planner. Thus, we do not consider a decentralized problem with distinct households and firms. Using this integrated worker/firm model we are thus agnostic about the question of whether the organizational capital is firm or worker specific. As indicated by our discussion of evidence, there are arguments in favor of both firm and worker specific accumulation. Thus, our specification is intended as an abstract formulation of an economy with LBD to better evaluate whether the addition of this state variable has quantitatively important implications for the time series behavior of the aggregate economy.⁶

3.1. General specification

The representative household has access to a production technology that converts its inputs of capital (K), labor (N) and organizational capital (H) into output (Y).

⁵ Their reported numbers vary depending on the states included, the specific sub-period used and for different sectors, though the highest recall rate they reported was 57%, from a dataset suspected of being biased towards seasonal layoffs. The data was collected mainly on blue collar workers in non-agricultural sectors but included some from trade, services and administration.
⁶ For example, our specification is perfectly consistent with a model in which organizational capital is accumulated by workers through their labor market experience so that the dynamics associated with the employment choice are concentrated in the households' problem. That approach, however, ignores the accumulation of firm-specific organizational capital. A complete model of the accumulation of firm-specific organizational capital might include market power by sellers and resulting inefficiencies. In general, adding market power with fixed markups to the stochastic growth model has only modest quantitative implications.


This technology is given by Af(K, N, H), where the total factor productivity shock is denoted by A.

The household has preferences over consumption (C) and leisure (L) denoted by u(C, L), where this function is increasing in both arguments and is quasi-concave. Assume that the household has a unit of time each period to allocate between work and leisure: 1 = L + N. The household allocates current output between consumption and investment (I).

There are two stocks of capital for the household. The first is physical capital (K). The accumulation equation for physical capital is traditional and is given by

$$K' = K(1 - \delta_K) + I. \quad (3)$$

In this expression, δ_K measures the rate of physical depreciation of the capital stock. The second stock is organizational capital, which is accumulated indirectly through the process of production. The evolution of this stock is given by

$$H' = \phi(H, Y), \quad (4)$$

where φ(·) is increasing in both of its arguments. In (4) we have assumed that the accumulation of organizational capital is influenced by current output rather than current employment. As discussed above in Section 2, this is the traditional approach.

For our analysis, it is convenient to substitute the production function for Y in (4) and then to solve for the number of hours worked in order to accumulate a stock of organizational capital of H' in the next period, given the two stocks (H, K) today and the productivity shock, A. This function is defined as

$$N = \Phi(K, H, H', A). \quad (5)$$

Given that the inputs (K, N, H) are productive in the creation of output and the assumption that organizational capital tomorrow is increasing in output today, Φ will be a decreasing function of both K and H and an increasing function of H'.

The dynamic programming problem for the representative household is then given by

$$V(A, K, H) = \max_{K', H'} \; u\big(Af(K, H, N) + (1 - \delta)K - K', \; 1 - N\big) + \beta E_{A'} V(A', K', H'), \quad (6)$$

where we use (5) to substitute out for N. The existence of a value function satisfying (6) is standard as long as the problem is bounded.

The necessary conditions for an optimal solution are

$$\big[u_c(C, 1 - N)Af_N - u_L(C, 1 - N)\big]\Phi_{H'} + \beta E V_{H'}(A', K', H') = 0 \quad (7)$$

and

$$u_c = \beta E V_K(A', K', H'). \quad (8)$$

These two conditions, along with the transversality conditions, will characterize the optimal solution.


Eq. (7) is the analog of the standard first-order condition on labor supply, though in this more complex economy it includes the effects of current labor input on the future organizational capital stock. Thus the accumulation of organizational capital is one of the benefits of additional work (i.e. V_H > 0 for all points in the state space), leading to a labor supply condition in which the current marginal utility of consumption is less than the marginal disutility of work.

To develop (7), we use (6) to find that

$$V_H(A, K, H) = u_c A f_H + [u_c A f_N - u_L]\Phi_H. \quad (9)$$

So, giving the agent some additional organizational capital will directly increase utility through the extra consumption generated by this additional input into the production process. This is captured by the first term in (9). Second, given the higher level of H today, and by the assumptions on the production and accumulation technology, since Φ_H < 0 the agent can reduce labor supply in the current period which, given the excessive labor supply, is desirable. Thus the second term in (9) is the current utility gain from reducing labor supply times the reduction in employment created by the additional H. Of course, this condition is used in (7) once it is updated to the following period.

Eq. (8) is the Euler equation for the accumulation of physical capital: the marginal utility of consumption today is equated to the discounted value of more capital in the next period. As with the labor supply decision, a gain to investment is the added output in the next period plus the accumulation of organizational capital that comes as a joint product. So, again using (6),

$$V_K(A, K, H) = u_c[Af_K + (1 - \delta)] + [u_c A f_N - u_L]\Phi_K. \quad (10)$$

Here, an additional unit of physical capital increases consumption directly, the standard result, and also allows the agent to work a bit less, offsetting the effects of the physical capital on the accumulation of organizational capital. As before, the updated version of (10) is used in (8) to complete the statement of the Euler equations.

In principle, one can characterize the policy functions through these necessary conditions. Alternatively, the economy can be linearized around the steady state (when it exists) and then the linear system can be evaluated and the quantitative analysis undertaken. That is the approach taken here through a leading example presented in the next sub-section.

3.2. A leading example

Here we assume some specific functional forms for the analysis. These restrictions are used here to illustrate the model and then are imposed in some of our estimation/simulation exercises for parameter identification.

We assume that the production function is Cobb–Douglas in physical capital, labor and organizational capital:

$$Af(K, H, N) = AH^{\varepsilon}K^{\theta}N^{\alpha}. \quad (11)$$


In this specification, ε parameterizes the effects of organizational capital on output. For some of our specifications, we impose the requirement of constant returns to scale in the production process: α + θ + ε = 1.

The accumulation equation for organizational capital is specified as

$$\phi(H, Y) = H^{\gamma}Y^{\eta}, \quad (12)$$

where γ captures the influence of current organizational capital on the accumulation of additional capital and η parameterizes the influence of current output on the accumulation of organizational capital.⁷ With the additional restriction that γ + η = 1, the model will display balanced growth with all variables except labor growing at the common rate of growth of labor augmenting technological progress. Without this restriction of CRS in the accumulation equation, the model will have a steady state in which organizational capital grows at a different rate from other variables on the balanced growth path.

Using the production function in the accumulation equation implies

$$H' = H^{\gamma}(AH^{\varepsilon}K^{\theta}N^{\alpha})^{\eta} = H^{\gamma + \varepsilon\eta}(AK^{\theta}N^{\alpha})^{\eta}. \quad (13)$$

In this case, Φ(K, H, H', A) becomes

$$N = \left[\frac{H'}{H^{\gamma + \varepsilon\eta}(AK^{\theta})^{\eta}}\right]^{1/(\alpha\eta)}, \quad (14)$$

so that N is increasing in H' and decreasing in H, K and A, as noted above.

Finally, assume the utility function is given by u(C, L) = ln(C) + χL. Here χ parameterizes the contemporaneous marginal rate of substitution between consumption and leisure.

With these particular functional forms, the necessary conditions for an optimal

solution become

$$\left(\frac{\alpha Y}{CN} - \chi\right)\frac{N}{\alpha\eta H'} + \beta E\left[\frac{1}{C'}\frac{\varepsilon Y'}{H'} - \left(\frac{\alpha Y'}{C'N'} - \chi\right)\frac{(\gamma + \varepsilon\eta)N'}{\alpha\eta H'}\right] = 0 \quad (15)$$

and

$$\frac{1}{C} = \beta E\left[\frac{1}{C'}\left(\frac{\theta Y'}{K'} + 1 - \delta\right) - \left(\frac{1}{C'}\frac{\alpha Y'}{N'} - \chi\right)\frac{\theta N'}{\alpha K'}\right]. \quad (16)$$

So, (15) and (16) are the analogs of (7) and (8) for this particular specification of functional forms. These conditions, along with the accumulation equations and resource constraint, fully characterize the equilibrium of the model.
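To make the recursive structure of the leading example concrete, here is a minimal value function iteration on (6) using the functional forms (11), (12) and (14). This is only a sketch under assumed, illustrative parameter values and deliberately small grids (the paper instead linearizes around the steady state): hours are backed out from the target stock H' via (14), and the planner maximizes over (K', H'). The code also checks numerically that (14) inverts (13).

```python
# Illustrative value function iteration for Eq. (6) with the Cobb-Douglas
# forms (11)-(12). All parameter values and grids are hypothetical, chosen
# only to satisfy the two CRS restrictions; they are not the paper's
# calibration or estimates.
import numpy as np

alpha, theta, eps = 0.6, 0.3, 0.1    # production: alpha + theta + eps = 1
gamma, eta = 0.5, 0.5                # accumulation: gamma + eta = 1
beta, delta, chi = 0.95, 0.1, 1.0    # discounting, depreciation, leisure weight

A_vals = np.array([0.95, 1.05])      # iid two-state productivity shock
prob_A = np.array([0.5, 0.5])
K_grid = np.linspace(0.5, 6.0, 8)
H_grid = np.linspace(0.5, 3.0, 8)

def hours(K, H, Hn, A):
    """Eq. (14): hours needed to reach next-period organizational stock Hn."""
    return (Hn / (H**(gamma + eps * eta) * (A * K**theta)**eta))**(1.0 / (alpha * eta))

# Sanity check: hours() inverts Eq. (13) at an arbitrary point.
Hn_test = 2.0**(gamma + eps * eta) * (1.0 * 3.0**theta * 0.4**alpha)**eta
assert abs(hours(3.0, 2.0, Hn_test, 1.0) - 0.4) < 1e-10

# Precompute flow utility for every (A, K, H, K', H') combination.
A = A_vals[:, None, None, None, None]
K = K_grid[None, :, None, None, None]
H = H_grid[None, None, :, None, None]
Kn = K_grid[None, None, None, :, None]
Hn = H_grid[None, None, None, None, :]

N = hours(K, H, Hn, A)                                   # Eq. (14)
C = A * H**eps * K**theta * N**alpha + (1 - delta) * K - Kn
feasible = (N < 1.0) & (C > 0.0)
U = np.where(feasible, np.log(np.where(C > 0, C, 1.0)) + chi * (1.0 - N), -np.inf)

V = np.zeros((A_vals.size, K_grid.size, H_grid.size))
for _ in range(1000):
    EV = np.tensordot(prob_A, V, axes=1)                 # E[V(A', K', H')]
    V_new = (U + beta * EV[None, None, None, :, :]).max(axis=(3, 4))
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# The value function should be increasing in physical capital at every point.
assert np.all(np.diff(V, axis=1) > 0)
```

The monotonicity check at the end reflects the envelope logic of (10): for a fixed target (K', H'), output is pinned down by (12), so extra physical capital raises consumption one-for-one through undepreciated capital while lowering the hours required by (14).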

⁷ This constant elasticity functional form was chosen largely because it is useful in deriving the particular structure used in the production function estimation of Section 4.1 as well as the Euler equation estimation of Section 4.2. It is particularly useful because it obviates the need to use proxy series for unobserved organizational capital. In addition, the log-linear accumulation equation can be thought of as an aggregate production technology for the production of organizational capital which uses the stock of organizational capital and past production as inputs. Most of the empirical studies of LBD use a linear accumulation equation, though Irwin–Klenow (1994) consider a constant elasticity alternative and find that empirically it gives them similar results.


4. Parameterization

The parameterization of the model utilizes both estimation and calibration techniques. The point of the estimation is to focus on the parameters of the production function and the accumulation technology. This procedure and our findings are discussed in some detail in this section. These parameters are estimated in two ways: in the first sub-section we directly estimate the technology using production function estimation techniques. The next sub-section discusses results from estimating the Euler equation. The remaining parameters, as in our earlier paper, King, Plosser, and Rebelo and many of the references therein, are calibrated from other evidence and are discussed in Section 5.

4.1. Production function estimation

We provide evidence on the parameters of the production function from a number of estimation exercises. The first follows Burnside et al. (1995) and uses quarterly 2-digit manufacturing data to estimate the production function. The second uses the NBER Manufacturing Productivity Database to estimate the same set of parameters. One important difference between the datasets is that materials is included in our analysis of the latter.⁸ Finally, we also use observations at the plant level to estimate the production function.

4.1.1. Estimation using 2-digit manufacturing data

We estimate our production technology and accumulation equation fororganizational capital simultaneously using quarterly 2-digit manufacturing datafor 17 U.S. manufacturing sectors. As the quarterly data display unit roots, theestimation is done using data that have been rendered stationary using log firstdifferences. The equivalent expression for (11) in log first differences is

Δy_it = α·Δn_it + θ·Δk_it + ε·Δh_it + Δa_it,   (17)

where the lower-case letters denote logs of variables and the subscript i refers to the ith 2-digit sector. The accumulation equation (12) may be written as

Δh_t = γ·Δh_{t−1} + η·Δy_{t−1} = [η/(1 − γL)]·Δy_{t−1},   (18)

where L is the lag operator and Δ denotes first differences. Replacing this expression in (17) and rearranging yields

Δy_it = α·Δn_it − αγ·Δn_{i,t−1} + θ·Δk_it − θγ·Δk_{i,t−1} + (γ + εη)·Δy_{i,t−1} + Δâ_it.   (19)
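The substitution of (12) into the quasi-differenced (17) can be checked numerically; the sketch below simulates the two equations on random inputs and verifies that (19) then holds identically (parameter values are the CRS point estimates, used purely for illustration):

```python
import random

random.seed(0)
alpha, theta, eps, eta, gamma = 0.59, 0.33, 0.08, 0.37, 0.63  # CRS values, illustrative

# Random first-differenced inputs; dh follows (12), dy follows (17).
T = 50
dn = [random.gauss(0, 1) for _ in range(T)]
dk = [random.gauss(0, 1) for _ in range(T)]
da = [random.gauss(0, 1) for _ in range(T)]
dh, dy = [0.0] * T, [0.0] * T
for t in range(1, T):
    dh[t] = gamma * dh[t-1] + eta * dy[t-1]                        # (12)
    dy[t] = alpha * dn[t] + theta * dk[t] + eps * dh[t] + da[t]    # (17)

# (19) should hold exactly at every date t >= 2, with the quasi-differenced
# shock term written out as da_t - gamma*da_{t-1}.
for t in range(2, T):
    rhs = (alpha * dn[t] - alpha * gamma * dn[t-1] + theta * dk[t]
           - theta * gamma * dk[t-1] + (gamma + eps * eta) * dy[t-1]
           + da[t] - gamma * da[t-1])
    assert abs(dy[t] - rhs) < 1e-9
print("(19) holds on simulated data")
```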

Note that γ is overidentified, but separating ε from η is not possible without further restrictions on the production and accumulation equations. We therefore use a non-linear procedure and also impose additional restrictions on the parameters to distinguish ε from η.[9]

[8] We are grateful to the referee for urging us to supplement our analysis with a data set that includes gross output and materials.
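Since (19) is linear in composite coefficients, the structural parameters can be backed out from a reduced-form regression; the following sketch (simulated data, no technology shocks, OLS in place of the system-IV procedure we actually use) illustrates the identification, with the CRS point estimates serving as "true" values:

```python
import numpy as np

rng = np.random.default_rng(0)
a0, th0, e0, g0 = 0.59, 0.33, 0.08, 0.63   # "true" values: CRS row of Table 1
eta0 = 1 - g0                              # CRS in the accumulation equation

T = 400
dn, dk = rng.normal(size=T), rng.normal(size=T)
dy = np.zeros(T)
for t in range(1, T):   # (19), with the technology term shut off for clarity
    dy[t] = (a0 * dn[t] - a0 * g0 * dn[t-1] + th0 * dk[t]
             - th0 * g0 * dk[t-1] + (g0 + e0 * eta0) * dy[t-1])

# OLS on the reduced form of (19), then back out the structural parameters.
X = np.column_stack([dn[1:], dn[:-1], dk[1:], dk[:-1], dy[:-1]])
c, *_ = np.linalg.lstsq(X, dy[1:], rcond=None)
alpha_hat = c[0]
gamma_hat = -c[1] / c[0]                         # dn_{t-1} coefficient is -alpha*gamma
theta_hat = c[2]
eps_hat = (c[4] - gamma_hat) / (1 - gamma_hat)   # from gamma + eps*eta with eta = 1-gamma
print(np.round([alpha_hat, theta_hat, eps_hat, gamma_hat], 3))   # -> [0.59 0.33 0.08 0.63]
```

Note the overidentification of γ visible here: it could equally be recovered from the capital coefficients as −c[3]/c[2].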

The variables used for estimation are quarterly data on gross output and hours in the U.S. manufacturing sector at the 2-digit level from 1972:1 to 1992:4, as in Burnside et al. The capital input is proxied by electricity consumption, which captures capital utilization.[10]

The instrument list includes the second, third and fourth lags of innovations to the federal funds rate and to non-borrowed reserves. As is common in the literature on production function estimation, the point of the instruments is to capture variations in inputs, especially organizational capital, that are caused by variations in aggregate demand rather than by variations in technology or innovation.[11] This emphasis on demand shocks is consistent with our view that variations in organizational capital take place in response to any shock (demand or technological) which causes re-organization of some aspect of production, leading to the partial destruction of old organizational capital as well as the creation of new organizational capital. This reorganization could involve changing the size of the work force, investment or scrapping of physical capital, or any number of other activities that affect productivity.

The first row of Table 1 reports the results for a non-linear system instrumental variable procedure where 17 sectors are jointly estimated with the coefficients restricted to be the same across sectors. For this estimation we have imposed constant returns to scale in both the production and accumulation equations, so from (19) there is no problem identifying both ε and η. Hence this specification is denoted CRS.

The labor share α is estimated at 0.59, the share of physical capital θ is estimated at 0.33, the share of organizational capital is 0.08, while γ, which parameterizes the depreciation of organizational capital, is estimated at 0.63. Note though that ε is not significantly different from zero in this specification.[12]

The second row of Table 1 corresponds to a specification in which we have imposed CRS in the production function but not in the accumulation equation. Instead, we assume η = 1, so that the elasticity of organizational capital with respect to output is one by assumption. In this specification, which we denote IRS-OC (increasing returns to scale in organizational capital), γ continues to be estimated. This case brings us closer to the traditional approach taken in the empirical industrial organization studies of LBD, and especially to Benkard's work, where the exponent on current output in the accumulation of organizational capital is set to 1. For this specification we find: α is estimated at 0.57, the share of physical capital (θ) is estimated at 0.32, the share of organizational capital (ε) is 0.11, while γ is estimated at 0.55. Here again, γ is significantly different from zero while ε remains insignificant, though its t-value is much higher.

The third row of Table 1 investigates a specification in which constant returns is imposed in the accumulation of organizational capital but not in the production function. We term this IRS-PF (increasing returns in the production function). Here, instead of trying to simultaneously estimate all the coefficients of the production function, we restrict the capital and labor coefficients to sum to one and then estimate α = 0.61, ε = 0.26 and γ = 0.5. Once again, the estimate of γ is tightly bounded away from both zero and one (the standard error was 0.12).[13] Moreover, all production function parameters, especially ε, are now significantly different from zero.

[9] Our inability to distinguish ε from η reflects our inability to estimate the production function directly, as we have no direct measure of organizational capital.

[10] Hence, we follow Burnside et al. in two respects. First, we assume that gross output is the minimum of value added and materials; see that paper and Basu (1996) for evidence in favor of this specification. Further, electricity is used to proxy for capital utilization. As noted in numerous studies, it is often difficult to estimate a significant coefficient on capital from time series evidence. Using electricity is thus a proxy for the utilization of capital and may also correlate with other unobservable measures of production intensity. The next sub-section of our paper presents results based upon a specification of technology which does include capital and materials.

[11] Of course, a key concern is whether the instruments are correlated with inputs and uncorrelated with the technology shocks. See the discussion in Burnside et al. on this important point.

[12] This is a very constrained specification (CRS in both production and accumulation). The standard error on the estimate of ε is relatively large, partly reflecting the imprecise estimate of the capital coefficient, a common problem in production function estimation. This motivates us to explore other specifications in which the coefficients are not so tightly constrained.

Overall, the evidence from aggregate 2-digit manufacturing data seems to us to

suggest the presence of LBD effects at the macro level. Two patterns clearly emerge from Table 1. First, there seems to be consistent evidence that the stock of organizational capital depreciates; this result is robust across specifications in that the accumulation of organizational capital is significantly related to economic activity. Second, there is some evidence for small learning rates in the aggregate production function, which depends on the precise specification used. Allowing for increasing returns in the production function yields significant estimates of the elasticity of output with respect to organizational capital. Interestingly, the implied learning rate is very close to the 20% learning rate commonly found in numerous empirical case studies.[14]

Table 1
Estimation using aggregate 2-digit quarterly data [a] (t-statistics in parentheses)

Specification   α        θ        ε        γ        η
CRS             0.59     0.33     0.08     0.63     0.37
                (8.7)    (2.5)    (0.56)   (7.0)    (4.1)
IRS-OC          0.57     0.32     0.11     0.55     1
                (8.9)    (3.4)    (1.25)   (4.7)
IRS-PF          0.61     0.39     0.26     0.5 [b]  0.5
                (10.5)   (6.72)   (2.34)   (4.23)   (4.23)

[a] Instruments: the instrument list includes the 2nd to 4th lags each of FF and NBR. The data set is gross output, hours and electricity consumption in the US manufacturing sector at the 2-digit level used by Burnside et al., 72:1–92:4.
[b] The standard error of 0.12 implies that the estimate of γ is significantly different from both 0 and 1 at the 5% level.

[13] We thank the referee for suggesting we impose long-run capital and labor shares that sum to one rather than estimate all the coefficients.

4.1.2. Estimates using the NBER Manufacturing Productivity Database

Our second set of production function estimates is obtained from the NBER Manufacturing Productivity Database, which includes annual data on 459 manufacturing industries (using the 1987 SIC codes) from 1958 to 1996.[15] Most of the variables in the database come from the Annual Survey of Manufactures (ASM), which samples information from around 60,000 establishments. These variables are the number of employees, total payroll, number of production workers, number of production worker hours, production worker wages, and value added, which in turn equals the value of industry shipments minus material costs including electricity and fuels. Two additional variables from the ASM are end-of-year inventories and new capital spending.

The database also includes variables constructed from other sources. The capital stock measures are constructed from the Federal Reserve Board net capital stock series. The output deflators are based on the Bureau of Economic Analysis' 5-digit deflators. Further information on the construction of these measures and the materials and energy price deflators can be found in Bartelsman–Gray (1996).

All these variables are used to construct a five-factor measure of total factor productivity. Total factor productivity (TFP) is calculated using cost shares by industry by year. Output is measured as the value of shipments. The five factors included are production worker hours, non-production workers, energy, materials, and capital.[16] The share of capital is calculated residually so that all shares sum to one. Real values of shipments, materials, and energy are obtained by dividing by the deflators described above.

Since the goal of this exercise is to check the robustness of the results obtained in the section above to the inclusion of materials in the production function, we

[14] These results can be related to those obtained by Benkard, whose study of the commercial airline industry is closest to ours. Using a generalized method of moments procedure, Benkard estimated a learning rate of 40% with a depreciation rate of roughly 20% per quarter. Benkard assumes that α, the exponent on labor in the Cobb–Douglas production function, equals one, whereas we estimate it to be close to 0.6. Benkard estimates the ratio ε/α at 0.74; assuming α = 1 implies ε = 0.74, while using a labor coefficient of α = 0.57 implies ε = 0.42, which is substantially higher than our macro-estimates from two-digit data. This difference may partly be the result of imposing constant returns to scale on the production technology. The estimate of ε rises substantially in the IRS-PF case.

[15] The data as well as extensive documentation are publicly available at http://www.nber.org/data/.

[16] The fact that only the number of non-production workers, and not their hours, is available is a point of concern. This same issue appears in Bahk–Gort (1993). Note that the Burnside et al. study included only production worker hours. We attempted to correct for this omission by including production worker hours (as they are presumably correlated with non-production worker hours) in our regression analysis of TFP (described below), with little impact on our estimates.


concentrate on estimating only the parameters associated with LBD rather than the elasticity associated with each input in the production function.[17] We can do so by using the reported measures of TFP, which were constructed using sector-specific factor shares, a benefit relative to the more constrained case studied in the previous sub-section.

Under the assumption that the production function has constant returns to scale in all inputs except organizational capital, we can uncover estimates of ε and γ by regressing the Solow residual (based here on the five-factor TFP measures reported in the dataset) on its own lag and lagged output. We instrument the right-hand side variables using the same instruments used above: innovations to the federal funds rate as well as to non-borrowed reserves, lagged one year.

To see this, consider a log-linear version of our production function where x_it denotes the log of the input bundle (created using the appropriate cost shares) for industry i in period t, h_it is the log of the stock of organizational capital and a_it is a productivity shock in logs. Then, total factor productivity (TFP_it) is given by

TFP_it = y_it − x_it = ε·h_it + a_it.   (20)

Proceeding as before, h_it can be replaced using the accumulation equation (12) in log-linear form. Rearranging yields the equation we estimate:

ΔTFP_it = γ·ΔTFP_{i,t−1} + εη·Δy_{i,t−1} + Δâ_it.
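To illustrate the mapping from (12) and (20) to this regression, the sketch below simulates a single industry with demand-driven output variation and no technology shocks, so plain OLS suffices (with technology shocks the lagged regressors are endogenous, which is why we instrument). Parameter values are the IRS-OC point estimates, used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma0, eps0, eta0 = 0.55, 0.16, 1.0    # IRS-OC point estimates, illustrative

T = 500
x = rng.normal(size=T)                  # demand-driven input variation
h = np.zeros(T); y = np.zeros(T)
y[0] = x[0]                             # h[0] = 0, so y[0] = x[0] and TFP[0] = 0
for t in range(1, T):
    h[t] = gamma0 * h[t-1] + eta0 * y[t-1]   # accumulation (12), log-linear
    y[t] = eps0 * h[t] + x[t]                # output with LBD, no technology shocks
tfp = y - x                                  # TFP_t = eps * h_t, as in (20)

# Regress dTFP_t on dTFP_{t-1} and dy_{t-1}: coefficients gamma and eps*eta.
dtfp, dy = np.diff(tfp), np.diff(y)
X = np.column_stack([dtfp[:-1], dy[:-1]])
coef, *_ = np.linalg.lstsq(X, dtfp[1:], rcond=None)
print(np.round(coef, 3))   # -> [0.55 0.16]
```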

As before, the parameters can be identified under two alternative assumptions. We could follow IRS-OC and assume η = 1 (as in Benkard), or we could follow IRS-PF and assume constant returns in the accumulation equation (η = 1 − γ).

We used real shipments as our measure of output. The coefficient estimates and standard errors based on either specification are reported in Table 2. For the IRS-OC case, while the estimate of ε = 0.16 is precise, the estimate of γ = 0.55 is not significantly different from either zero or one at the 5% level. Interestingly, the point estimate is quite similar to the value reported in Table 1. For the IRS-PF case, while the estimates of both parameters are positive, since γ is measured imprecisely this contaminates ε, which must be recovered from the composite coefficient ε(1 − γ). Relative to the findings reported in the previous section, the point estimates are similar: output again matters for productivity and contributes to the accumulation of organizational capital.

Table 2
Estimation using 4-digit annual data [a] (t-statistics in parentheses)

Specification   ε         η        γ
IRS-OC          0.16      1        0.55
                (2.465)            (1.946)
IRS-PF          0.35      1 − γ    0.55
                (1.736)   (1.6)    (1.946)

[a] Instruments: the instrument list includes the first lag of each of FF and NBR. The data set is from the NBER Manufacturing Productivity Database, using the measure of TFP denoted TFP5.

[17] Put differently, as suggested by the referee, this is one way to focus our analysis on LBD without redoing the entire literature on production function estimation.

4.1.3. Estimates using plant-level data

Quite apart from the obvious advantages of estimating learning directly at the micro-level, plant-level estimation also allows one to distinguish between learning that is internal to plants and external learning at the industry level. As in Cooper–Johri (1997), our plant-level estimates come from the Longitudinal Research Database (LRD). In particular, we look at 49 continuing automobile assembly plants over the 1972–1991 period. While this is certainly only a small subset of manufacturing plants, it is a group of plants that we have studied before and thus we have a benchmark for comparison.[18]

Our results are reported in Table 3. Here, all regressions include plant-specific fixed effects.[19] The output measure is real value added at the plant level and the inputs are labor (specifically total production worker hours, labeled ph) and physical capital (machinery and equipment at the plant level, labeled k).[20] The different specifications refer to the measure of organizational capital used in the estimation and the treatment of a time trend.

Table 3
Plant level estimates [a]

Labor    k        Year      lqv(−1)   cumv
0.97     0.13     0.04
(0.04)   (0.05)   (0.003)
0.980    0.140    Dums
0.850    0.090    0.035     0.220
0.9      0.11     −0.001              0.29
(0.04)   (0.04)   (0.005)             (0.03)
0.87     0.11     Dums                0.38
(0.05)   (0.04)                       (0.04)

Notes: 1. All coefficients are significantly different from zero at the 5% level. 2. Dums refers to treatments which include a year dummy. 3. Standard errors in parentheses.
[a] Dependent variable is the log of real output.

[18] In fact, this is a slightly different sample than used in Cooper–Johri (1997), due to the inclusion of one additional plant and two more years of data. Obviously one goal in this continuing research is to go beyond this group of plants.

[19] One issue to consider is the extent of the bias created by estimation with fixed effects and lagged dependent variables.

[20] This is in contrast to our previous study, where we used electricity consumption as a proxy for capital utilization. Note that this measure of capital excludes the value of the plant. We use this measure as an alternative to the electricity-consumption proxy. As is well known, the use of total capital results in a zero or negative estimate of the elasticity of output with respect to capital.


We consider two different measures of organizational capital. Lagged output at the plant is denoted lqv and corresponds to a specification in which γ = 0. The variable cumv represents an alternative measure of experience which is a running sum of past output: i.e., it measures cumulative output and comes closest in spirit to the measure used by Bahk and Gort and most studies of LBD. This measure corresponds to the case of no depreciation of experience. To estimate this model we start the experience accumulation process off at an arbitrary date. Obviously, at that date different plants have different levels of experience, so this level effect (of different levels of initial experience) is picked up in the fixed effects.

There are also three treatments of a trend that might come from technological advance. For some specifications the trend is ignored; for others it is captured as a linear trend; and finally we also allow for year dummies. In general, allowing for some time effects seems to matter, though the distinction between a linear trend and year dummies does not seem to have important implications for the other coefficients.

The results are generally supportive of some form of LBD at the plant level: the coefficients for the various output-based measures of organizational capital are statistically significant at the 5% level and range from 0.22 to 0.38, depending on the measure used and the treatment of the trend. Note that the cumulative output results are similar to the empirical LBD literature and especially to Benkard. They are higher than our aggregate estimates; once again, increasing returns in the production function seems to make the difference.
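Both measures are special cases of a single experience-accumulation rule; a small sketch with hypothetical output numbers (levels rather than logs, purely for illustration):

```python
import numpy as np

def experience(q, gamma):
    """Experience stock H_t = gamma*H_{t-1} + q_{t-1} (levels; illustrative)."""
    h = np.zeros(len(q))
    for t in range(1, len(q)):
        h[t] = gamma * h[t-1] + q[t-1]
    return h

q = np.array([5.0, 6.0, 5.5, 7.0, 6.5])   # hypothetical plant output history
lagged = experience(q, 0.0)               # gamma = 0: lagged output (the lqv case)
cumulative = experience(q, 1.0)           # no depreciation: cumulative output (cumv)
assert np.allclose(lagged[1:], q[:-1])
assert np.allclose(cumulative, np.concatenate([[0.0], np.cumsum(q)[:-1]]))
```

The arbitrary start-up level of experience shows up here as the h[0] = 0 initialization, which in the estimation is absorbed by the plant fixed effects.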

4.2. Euler equation estimation

In this subsection we use (15) to estimate the parameters of the LBD process using national income and product accounts quarterly series on real GDP, real non-durable consumption and hours from 1964:1 to 1997:1. We stress that this is a very different approach from the direct estimation of a production function. Note, though, that our inability to distinguish ε from η remains. In particular, imposing η = 1 − γ, (15) can be rewritten as

αY/(CN) = χ·[1 + β·E((γ + ε(1 − γ))·N′/N)] − β·E[αγY′/(NC′)].

In this expression of the Euler condition, variables without primes are period-t and those with primes are period t + 1; the expectation is taken conditional on period-t information.

The discount rate β was set to 0.984, and the instruments used were the first seven lags of (C′/C) and (Y′/Y), which correspond to two years' worth of information from the information set available at date t. These variables should be uncorrelated with expectational errors at date t. The estimated coefficients were α = 0.54 (2.7), ε = 0.49 (2.43) and γ = 0.79 (5.67), where t-statistics are given in parentheses. These results are much closer to those found by Benkard in his micro-study. Clearly, the estimated LBD from the Euler equation exercise is much larger than from the sectoral production functions. Note too that the evidence from the Euler equation


implies that some of the LBD is internal, since there would not be any forward-looking elements if LBD were purely external.
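The residual behind these estimates can be sketched as follows; the function and the steady-state check are illustrative, not the authors' code (in the actual estimation the residual is interacted with the lagged instruments listed above and the resulting moment conditions are minimized):

```python
import numpy as np

def euler_resid(Y, C, N, alpha, eps, gamma, chi, beta=0.984):
    """Residual of (15) with eta = 1 - gamma; zero in expectation at the truth."""
    cur = alpha * Y[:-1] / (C[:-1] * N[:-1])
    fwd_hours = chi * (1 + beta * (gamma + eps * (1 - gamma)) * N[1:] / N[:-1])
    fwd_output = beta * alpha * gamma * Y[1:] / (N[:-1] * C[1:])
    return cur - fwd_hours + fwd_output

# In a steady state (constant Y, C, N) the residual pins down chi exactly:
alpha, eps, gamma, beta = 0.54, 0.49, 0.79, 0.984   # point estimates from the text
Y, C, N = np.full(8, 2.0), np.full(8, 1.5), np.full(8, 0.3)   # hypothetical levels
base = alpha * 2.0 / (1.5 * 0.3)
chi = base * (1 + beta * gamma) / (1 + beta * (gamma + eps * (1 - gamma)))
print(np.allclose(euler_resid(Y, C, N, alpha, eps, gamma, chi), 0.0))   # True
```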

5. Aggregate implications of LBD

The simulations are based upon a linearized version of the model specified in Section 3, parameterized using the estimates from Section 4 and other 'standard' parameters. In particular, we set β = 0.984 so that the real interest rate is 6.5% annually, δ = 0.1, and χ is chosen so that the average time spent working is 0.3.

As for the organizational capital aspects of the model, we consider several alternative specifications to evaluate their implications for the behavior of the aggregate economy. In particular, we consider four specifications, summarized in Table 4. The first three come directly from our aggregate estimation exercises and are the specifications labeled CRS, IRS-OC and IRS-PF from Table 1. These estimates reflected different identifying restrictions placed on the production function and accumulation equations, so we ought to explore the aggregate implications of these competing specifications.

The fourth specification, termed IRS-HI Gamma, leaves all parameters at the same level as the previous specification except γ, which is set to 0.99 so that the depreciation of organizational capital is minimal. This specification is quite close to that in firm-level LBD studies which impose (rather than test) zero depreciation of organizational capital. The conventional wisdom from these studies is that in many industries the learning rate is 20%, which corresponds to ε = 0.26 in our paper.[21] We instead picked ε = 0.27 so that any change in results relative to IRS-PF could be attributed solely to the much lower depreciation rate used here. Note that imposing γ exactly equal to 1 would imply that η = 0, so that past output would contribute nothing to organizational capital. Imposing γ = 0.99 implies η = 0.01, which allows for very little accumulation of organizational capital. Since the organizational

Table 4
Summary of parameterizations

Model          α      θ      γ      η      ε
RBC            0.6    0.4    1      0      0
CRS            0.58   0.33   0.63   0.37   0.08
IRS-OC         0.57   0.32   0.55   1      0.11
IRS-PF         0.6    0.4    0.5    0.5    0.27
IRS-HI Gamma   0.6    0.4    0.99   0.01   0.27

[21] Note that when you double the experience of the agent (H_t), output increases by a factor of 2^ε, where ε is the elasticity of output with respect to experience in (11); therefore the learning rate is calculated as 2^ε − 1. A 20% learning rate implies a value of ε = 0.26. We thank the referee for suggesting a parameterization that conforms to the benchmark learning rate of 20% commonly found in micro-studies.
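The conversion between the experience elasticity and the learning rate in footnote 21 is a one-liner:

```python
# Doubling experience scales output by 2**eps, so the learning rate is 2**eps - 1.
def learning_rate(eps: float) -> float:
    return 2.0 ** eps - 1.0

print(round(learning_rate(0.26), 2))   # 0.2 -> the 20% benchmark
```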


capital stock is almost constant in this case, we should expect results close to the baseline RBC case.

Since the main goal of the paper is to analyze the contribution of LBD to the propagation of shocks, we study this issue using three diagnostic tools. First, we study the model under iid technology shocks (i.e., no serial correlation in the shock process), as this seems to highlight the ability of the model to propagate shocks. Second, using highly serially correlated technology shocks, we compare the model-generated data with key properties of U.S. macro data in log-levels; the key diagnostic here will be the ability of the model to generate hump-shaped impulse response functions. Third, we analyze a version of our model assuming permanent technology shocks. With this stochastic-trends formulation, we are better equipped to investigate the implications of the models for the autocorrelation of output growth; here a key test of the model will be its ability to replicate the two positive autoregressive coefficients found in aggregate output data when measured in growth rates. Finally, in the last sub-section we conduct some counter-factual exercises based on the idea that when LBD effects are ignored, estimation exercises based on the first-order conditions for the labor input are likely to be mis-specified.

5.1. Simulation results from IID (zero persistence) technology shocks

To conduct the quantitative analysis, we consider a log-linear approximation, around the steady state, to the equilibrium conditions described by (15) and (16) plus the resource and accumulation conditions.[22] Using this system, our main question is: how much propagation is created by internal LBD? We address this question by introducing temporary technology shocks into the model.

Table 5 summarizes our findings for the four different treatments we consider. Here we report statistics from the artificial economy for major macroeconomic variables: output (Y), consumption (C), investment (In), total hours (Hr) and average labor productivity (W). For each, we present standard deviations of these variables relative to output, their contemporaneous correlations with output, and the serial correlation of output. The first column of the table lists the specification as described in Table 4.

Table 5
IID technology shocks

                Contemporaneous corr. with Y   Std. deviation relative to Y   Statistics for Y
Treatment       C     Hr    In    W            C     Hr    In    W            sd     sc
RBC             0.36  0.99  0.99  0.37         0.17  0.95  4.40  0.17         0.02   0.005
CRS             0.4   0.98  0.99  0.65         0.2   0.86  4.38  0.24         0.02   0.07
IRS-OC          0.44  0.92  0.98  0.96         0.25  0.45  3.7   0.61         0.01   0.19
IRS-PF          0.42  0.91  0.99  0.96         0.26  0.44  2.9   0.63         0.01   0.23
IRS-HI Gamma    0.35  0.98  0.99  0.35         0.19  0.95  3.5   0.19         0.02   0.02

[22] This is the same procedure followed in our earlier paper, using a version of the economy specified in King et al. (1988). For our parameterizations, the steady state was saddle-path stable.

The first row of the table provides results for the baseline real business cycle model

in which technology shocks are iid and there are no LBD effects. This simple model produces many interesting features: procyclical productivity, consumption smoothing and investment that is more volatile than output. However, for this case there is essentially zero serial correlation in output. As is well understood, the standard RBC model does not contain an endogenous propagation mechanism.

From the remaining rows of the table, all of the specifications with LBD deliver similar predictions: all variables are positively correlated with output and there is again evidence of consumption smoothing. The key difference between the baseline RBC model and those with LBD appears in the very last column, which records the first autocorrelation coefficient of output. In the CRS specification the autocorrelation coefficient is 0.07, compared to 0.005 in the baseline model. If we allow for increasing returns in the production of organizational capital (IRS-OC), the autocorrelation coefficient rises to 0.19, and it is even higher in the IRS-PF case (0.23). The final case, IRS-HI Gamma, imposes a very low depreciation rate of organizational capital as well as a very low learning rate, as explained above. As expected, this chokes off most of the propagation ability of the model; despite this, there is more persistence than in the baseline case (0.02).

The other notable difference in Table 5 between the models with strong propagation and the baseline RBC model is the behavior of average labor productivity (W). In rows 2–4, W is both more procyclical and more volatile than in the baseline model. Once again, the HI Gamma case looks more like the RBC model because of the slow learning imposed.

To understand the mechanism at work, we discuss the impulse responses of the model. To focus the discussion, we begin with Fig. 1, which displays the impulse response functions for a 1% temporary increase in total factor productivity using the IRS-PF estimates. Thereafter we discuss the key differences between the various cases.

As in the baseline (RBC) model, an increase in TFP in the IRS-PF case causes an immediate increase in labor input to take advantage of this temporary shock. Consumption and investment both increase as well, and the burst of investment leads to an increase in the capital stock. Unlike the baseline model, the resulting increase in output leads to an increase in organizational capital in the subsequent period. Thus in the period after the burst of productivity, both stocks are higher, so that output and employment remain above their steady-state values, whereas in the baseline model output is almost back to steady state while hours are below steady state in period two. After this period, the stock of organizational capital slowly falls towards steady state for 20 quarters. Employment is above steady state for only 4 quarters, while output is above steady state for about 15 quarters. Thus, a single transitory shock causes some richer dynamics relative to the standard model.
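The persistence mechanism can be isolated in a stripped-down sketch with no capital and no labor choice (purely illustrative, not the paper's linearized model): with y_t = a_t + ε·h_t and h_t = γ·h_{t−1} + η·y_{t−1}, output follows the ARMA(1,1) process y_t = (γ + εη)·y_{t−1} + a_t − γ·a_{t−1}, so iid shocks generate positive output autocorrelation, which vanishes when ε = 0:

```python
import numpy as np

gamma, eta = 0.5, 0.5   # IRS-PF accumulation values (Table 4), used illustratively

def output_autocorr(eps, T=200_000, seed=1):
    """First autocorrelation of output in the stripped-down LBD sketch."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=T)
    y = np.zeros(T); h = np.zeros(T)
    for t in range(1, T):
        h[t] = gamma * h[t-1] + eta * y[t-1]   # organizational capital
        y[t] = a[t] + eps * h[t]               # output under iid technology shocks
    y = y[1000:]                               # drop burn-in
    return np.corrcoef(y[1:], y[:-1])[0, 1]

print(round(output_autocorr(0.27), 2))   # positive (about 0.15) with LBD
print(round(output_autocorr(0.0), 2))    # about 0.00 without LBD
```

The full model's numbers in Table 5 differ because capital accumulation and the labor margin also respond, but the direction of the effect is the same.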


The IRS-OC model behaves very similarly and is not discussed further. In the two cases with weak effects of organizational capital (CRS and HI Gamma), output is persistently above steady state, but only by a small amount. The most interesting pattern is the relative behavior of hours worked across the various cases, which we discuss in some detail below.

As is well known, in the baseline RBC model hours respond strongly to the technology shock. A 1% increase in total factor productivity leads to more than a 2% increase in hours worked in the impact period. But this increase is not persistent: from the next period onward, hours are below steady state due to the income effect of an above-steady-state capital stock. Learning-by-doing modifies this pattern in two ways: the response in the impact period is dampened, and it is made more persistent. This can be seen in Fig. 1 (IRS-PF case), where the response of hours worked is around 0.7% (as opposed to over 2% in the baseline case) in the impact period. However, the response is more persistent: hours remain above steady state for four periods instead of only one period in the baseline RBC case.

Fig. 1. IID shock to IRS-PF. [Figure: impulse responses over 40 periods, in percent deviation from steady state, for technology, consumption, output, labor and organizational capital.]


Hours are pulled above steady state because the high level of organizational capital raises the marginal product of labor, which more than offsets the traditional income effect of a higher capital stock. As organizational capital slowly returns towards its steady-state value, the traditional dynamics take over and hours return to steady state from below.

The IRS-OC case displays a very similar pattern, so the graphs are not included in the paper but are available from the authors in an appendix. For the two cases where the impact of organizational capital is weak, either due to a small ε (CRS case) or due to a high γ (IRS-HI Gamma case), the pattern for hours is closer to the baseline RBC case, with some differences. In the HI Gamma case, organizational capital hardly responds to an increase in output; therefore hours fall below steady state in period two, just like the RBC model. The CRS case is different because the response of organizational capital is stronger in the impact period and stays above steady state for roughly 20 periods. As a result, hours stay above steady state for one additional period. This pattern of employment movement following a temporary shock is similar to the pattern reported in Cooper–Johri (1997) for the case of external LBD.

5.2. Serially correlated technology shocks

Another way to see the impact of the internal propagation of the model is to study the behavior of the impulse response functions when the economy is hit by highly persistent technology shocks, measured in log-levels rather than in first differences. Cogley and Nason (1995) show that the data display characteristic hump-shaped response functions in response to persistent but not permanent shocks, whereas the benchmark RBC model simply replicates the behavior of the shock series.

As in the traditional RBC model, we assume that the serial correlation of the driving process is 0.96, which matches the observed serial correlation of output. Fig. 2 reports impulse responses to a 1% shock to productivity for IRS-PF.23 The hump-shaped impulse response of output, which is not a property of the impulse response of the standard RBC model (see Fig. 2a), indicates that this LBD model possesses internal dynamics rather than just replicating the dynamics built into the shock process. Similar patterns appear for the IRS-OC case as well as the CRS case. However, the hump in the CRS case is quite small because the learning rate is just 5.7% and we have imposed constant returns in the production technology as well as in the accumulation of organizational capital. The only exception is the HI Gamma case. For that case, there is no hump-shaped output response since past output variations have a minimal effect on the stock of organizational capital.24
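The mechanics behind the hump can be illustrated with a stylized two-equation sketch, which is not the paper's calibrated model: output responds to an AR(1) technology shock plus a stock that accumulates out of past output, in the spirit of organizational capital. The coefficients `rho`, `gamma` and `eps` below are illustrative assumptions only.

```python
import numpy as np

def irf(T=40, rho=0.96, gamma=0.8, eps=0.4):
    """Stylized impulse responses: z is an AR(1) technology shock, h is a
    stock that accumulates out of past output (like organizational capital),
    and y = z + eps * h. All coefficient values are illustrative."""
    z = rho ** np.arange(T)          # AR(1) shock path after a unit impulse
    y = np.zeros(T)
    h = np.zeros(T)
    for t in range(T):
        if t > 0:
            h[t] = gamma * h[t - 1] + (1 - gamma) * y[t - 1]
        y[t] = z[t] + eps * h[t]
    return y

y = irf()
# Output peaks several periods after the impact of the shock (a "hump"),
# whereas with eps = 0 output simply inherits the monotone decay of z.
```

With `eps = 0` the stock is irrelevant and the response is monotone, which is the sense in which the standard RBC model "just replicates" the shock dynamics; a sufficiently strong feedback from past output produces the delayed peak.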

As is clear in Table 6, the various specifications with LBD do quite well in terms of replicating the basic features of the business cycle, which are reported in the final row. Once we allow for highly persistent shocks, there is not much to distinguish the models on the basis of the standard moments.

23 The other cases are available from the authors in an appendix.
24 It is interesting to note that the hump in output reappears at γ = 0.9.


Fig. 2. Persistent shock to: (a) IRS-PF; (b) baseline RBC. (Each panel plots the percent deviation from steady state over 40 periods; panel (a) shows technology, consumption, output, labor and organizational capital for IRS-PF, and panel (b) shows technology, consumption, output and labor for the baseline RBC model.)


Table 7 addresses an important issue: the contemporaneous correlation between hours and average labor productivity. It is well known that the baseline model generates too high a correlation between these variables (0.31) relative to the data (0.1). In comparison, IRS-PF generates a correlation of 0.16. However, this gain is accompanied by less procyclicality of hours. These effects result from the impact of technology shocks on the stock of organizational capital, which tends to shift ``labor supply'' at the same time as the shocks shift labor demand. This reduces the correlation between productivity and employment in much the same way as argued by Christiano–Eichenbaum (1992), without, of course, introducing a second shock in the model.25 Since the impact of organizational capital is weaker in the other specifications considered here, the reduction in the correlation between hours and average labor productivity is also smaller.

While the introduction of LBD appears to improve the performance of the model with respect to hours, output and productivity, a stronger test of the model is to look at the dynamic patterns in the correlations. In the U.S. data, the dynamic correlation between average labor productivity and hours displays a distinct pattern: productivity leads hours. In fact, the correlation between hours and lagged

Table 7
Correlation of hours (t) and ALP (t+i)

Treatment       i = -2   -1       0        1        2
RBC             0.4      0.334    0.307    0.245    0.187
CRS             0.4      0.362    0.336    0.275    0.216
IRS-OC          0.3      0.262    0.227    0.166    0.106
IRS-PF          0.2      0.203    0.163    0.108    0.054
IRS-HI Gamma    0.3      0.305    0.28     0.233    0.187
U.S. data       0.2      0.12     0.1      -0.04    -0.12

Table 6
Persistent technology shocks

                        Contemporaneous corr. with Y   Std. dev. relative to Y   Statistics for Y
Treatment               C     Hr    In    W            C     Hr    In    W       sd     sc
RBC                     0.91  0.68  0.88  0.91         0.78  0.43  2.34  0.78    0.05   0.96
CRS                     0.93  0.66  0.88  0.93         0.79  0.4   2.2   0.79    0.07   0.97
IRS-OC                  0.92  0.55  0.88  0.94         0.81  0.35  2     0.86    0.08   0.98
IRS-PF                  0.89  0.54  0.9   0.92         0.8   0.41  1.8   0.85    0.11   0.98
IRS-HI Gamma            0.93  0.62  0.9   0.93         0.82  0.39  1.8   0.82    0.08   0.98
U.S. data (log levels)  0.89  0.71  0.60  0.77         0.69  0.52  1.30  1.10    0.04   0.96

25 This point is discussed more fully in our discussion of taste shocks below.


productivity is positive while the correlation between hours and leads of productivity is negative, though all the correlations are quite small when measured using linearly detrended data. For example, the correlations between hours and the first two lags of productivity are 0.12 and 0.18. The corresponding leads of productivity are -0.04 and -0.12.

A pattern of declining correlation is clearly evident as we go from the second lag of productivity to the second lead. The models with LBD also display a similar dynamic pattern, albeit at a higher level, i.e., all the correlations are positive. However, the correlations go from being the largest for the second lag of productivity to the smallest for the second lead of productivity. These correlations are reported in Table 7 for all parameterizations.
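The lead–lag correlations of Table 7 can be computed from any pair of detrended series with a few lines of code. A minimal sketch, where the series are hypothetical stand-ins constructed so that productivity leads hours, as in the U.S. data:

```python
import numpy as np

def lead_lag_corr(hours, alp, max_lead=2):
    """Correlation of hours(t) with ALP(t+i) for i = -max_lead..max_lead."""
    out = {}
    T = len(hours)
    for i in range(-max_lead, max_lead + 1):
        if i >= 0:
            h, a = hours[:T - i], alp[i:]
        else:
            h, a = hours[-i:], alp[:T + i]
        out[i] = np.corrcoef(h, a)[0, 1]
    return out

# Hypothetical detrended series: hours lag productivity by two periods.
rng = np.random.default_rng(0)
z = rng.standard_normal(200)
alp = z
hours = np.roll(z, 2) + 0.1 * rng.standard_normal(200)
corrs = lead_lag_corr(hours, alp)
```

With this construction the correlation is largest at the second lag of productivity (i = -2) and falls off toward the leads, which is the qualitative pattern the table reports for the LBD models.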

5.3. Stochastic growth

As argued in the introduction, a major weakness of the standard real business cycle model is the lack of an internal propagation mechanism. This point was highlighted by Cogley–Nason (1995), who pointed out that the dynamics of output growth in that class of models replicate, to a very close approximation, the dynamics of the growth rate of the technology shock process built into the model. This is a serious shortcoming because, typically, the technology shock process is estimated (using the Solow residual as a proxy) as a random walk, whereas the growth rate of output displays at least two positive autoregressive coefficients. In this section we explore these issues by reworking our model to accommodate stochastic trends.26

Our results are illustrated in Fig. 3, which plots the autocorrelation function for output growth generated by all specifications of the LBD model when technology shocks follow a random walk, along with that found in U.S. data and that produced by the standard RBC model. Note that our model generates positive autocorrelation coefficients for all parameterizations except the HI Gamma case, which, like the baseline RBC model, produces almost zero coefficients.
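The Cogley–Nason diagnostic behind Fig. 3 is simply the sample autocorrelation function of output growth. A minimal sketch, with white noise standing in for RBC-style growth under a random-walk technology and an AR(1) standing in for growth with internal propagation; both series are hypothetical, not the paper's simulated data:

```python
import numpy as np

def acf(x, nlags=8):
    """Sample autocorrelation function of a series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(nlags + 1)])

rng = np.random.default_rng(1)
e = rng.standard_normal(2000)
dy_rbc = e                        # random-walk technology: growth ~ white noise
dy_lbd = np.zeros_like(e)         # internal propagation: autocorrelated growth
for t in range(1, len(e)):
    dy_lbd[t] = 0.4 * dy_lbd[t - 1] + e[t]
```

The white-noise series produces near-zero autocorrelations at all positive lags (the flat RBC line in Fig. 3), while the AR(1) series produces the positive coefficients characteristic of the LBD specifications and the U.S. data.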

5.4. Some counter-factual exercises

In traditional RBC exercises, the Solow residual is viewed as a measure of technology shocks to the economy. By now it is widely recognized that movements in the Solow residual may not represent exogenous shocks to technology, since the identifying assumptions used in early exercises may not hold. Cooper–Johri (1997) showed that the naively constructed Solow residual displayed a high degree of persistence even when the model was originally disturbed by iid technology shocks. Clearly, similar results will emerge here. While the ``endogeneity'' of the Solow residual has received a lot of attention, similar arguments can be made about a number of other unobservables that have been identified using Euler equations and

26 We are grateful to Jeff Fuhrer for supplying us with computer code.


first-order conditions in the labor market. In this section we focus on two such series and discuss the implications.

5.4.1. Taste shocks

Baxter–King (1991) study the quantitative implications of shifts in preferences as a driving force to explain economic fluctuations at the business cycle frequency. This is achieved by introducing a parameter into the utility function which varies the individual's marginal rate of substitution between consumption and leisure. This preference shift parameter is identified from the first-order condition that equates the marginal rate of substitution between consumption and leisure with wages. An important characteristic of that preference shock series is that it is extremely persistent. A possible interpretation of large preference shocks is that they measure the poor fit of the traditional equation to the macro data. In an empirical study of various driving forces, Hall (1997) argues that preference shifts appear to be the most important driving force ``explaining'' aggregate fluctuations, and that this should be viewed as a reason for focusing more on atemporal analysis as opposed to intertemporal analysis.

Since the preference shocks are unobserved, they can be uncovered only under

certain identifying assumptions. The preference shock series is calculated from the static first-order condition on labor supply, which is the term in parentheses in (7). However, this ignores the effects of labor supply on the accumulation of future experience or organizational capital. Ignoring that element can then be viewed as a potential explanation of the poor fit of the standard static equation.27

Fig. 3. Autocorrelation functions for various models subjected to random walk technology shocks as well as for U.S. data. (The figure plots the autocorrelation of output growth over 20 periods for the irs-pf, irs-oc, crs, higamma and rbc models along with the U.S. data.)

27 Johri–Letendre (2001) use a GMM structure to estimate the first-order conditions (FOC) from a series of RBC-type models and examine the behavior of the residuals from these equations. They conclude that the hours FOC in most RBC models is severely misspecified because of the high degree of persistence in the residuals. They show that models with dynamic labor supply decisions, like LBD, can remove all this persistence and dramatically improve the overall fit of the model to the data.


This point is illustrated here by using the Baxter–King specification for calculating taste shocks on the simulated data from our model. Baxter–King assume that current utility is derived from the log of current consumption and leisure. In order to be consistent with their exercise, we re-solve our model using the same specification of preferences as in Baxter–King. This gives rise to their equation for calculating the taste shock (denoted by x_t), in logs:

\hat{x}_t = \hat{c}_t - \hat{w}_t + \frac{n}{1-n}\,\hat{n}_t,

where c and n denote steady-state values of consumption and hours.

We find that using the Baxter–King procedure on our simulated data uncovers a persistent series that appears as taste shocks even though the data were generated from a model with only technology shocks. The moments for our constructed preference shock series are reported in Table 8. When the model is parameterized using the estimates from IRS-OC of Table 4 and a serially correlated (0.92) technology shock, we see that the constructed taste shock series is very persistent (autocorrelation coefficient of 0.98) and strongly co-moves with output (0.97). In comparison, the series generated by Baxter–King had an autoregressive coefficient of 0.97 and was also positively correlated with output.
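The back-out and the reported moments can be sketched in a few lines. The series below are hypothetical log-deviations driven by one common persistent factor, not the paper's simulated data, and `n_ss` is an assumed steady-state hours value:

```python
import numpy as np

def taste_shock(c_hat, w_hat, n_hat, n_ss):
    """Baxter-King taste shock in log deviations:
    x_hat = c_hat - w_hat + (n_ss / (1 - n_ss)) * n_hat."""
    return c_hat - w_hat + (n_ss / (1.0 - n_ss)) * n_hat

def ar1_coef(x):
    """OLS slope of x_t on x_{t-1} (the reported AR(1) coefficient)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# Hypothetical log-deviation series driven by a single AR(1) factor z.
rng = np.random.default_rng(2)
z = np.zeros(2000)
for t in range(1, 2000):
    z[t] = 0.92 * z[t - 1] + rng.standard_normal()
c_hat, w_hat, n_hat = 0.5 * z, 0.2 * z, 0.3 * z
x_hat = taste_shock(c_hat, w_hat, n_hat, n_ss=0.3)
```

Because the constructed shock is a linear combination of persistent endogenous series, it inherits their persistence: the "taste shock" is strongly autocorrelated even though no preference shock exists in the data-generating process.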

5.4.2. Labor utilization

Recently a number of studies have argued that there is an unobserved component of the labor input, namely effort, which becomes a source of measurement error in the labor input series. Since effort will be procyclical, this means that observed labor input series understate the contribution of labor to movements in output. Here we develop an expression for the effort series using a greatly simplified structure based on Burnside et al. (1993).

Imagine that labor input can be varied by firms along two margins: hiring more labor hours and getting workers to supply more effort per hour. Assume that firms choose the number of hours to hire before they observe the technology shock and then adjust along the effort margin once the shock is realized. Then the unobserved series on effort can be uncovered from two conditions.

The first is the intratemporal first-order condition, obtained from a specification of preferences in which current utility depends on the log of consumption and the log of leisure. This is given by

Table 8
Counterfactual experiments^a

Counterfactual       Correlation with output    AR(1)
Taste shock          0.97                       0.98
Unobserved effort    0.97                       0.98

^a The simulated data for these experiments were generated using IRS-OC in Table 4.


\frac{C_t}{1 - N_t U_t} = MPL_t,

The second expression is a Cobb–Douglas technology, which determines the marginal product of labor:

Y_t = K_t^{1-a} (N_t U_t)^{a} A_t,

where U measures unobserved effort and MPL is the marginal product of labor. Using these conditions, the effort series can be uncovered from actual or simulated data.

When the above specification is used to calculate effort on simulated data from our model, we can back out the effort series from the production function, since we observe the simulated technology shock. This procedure uncovers a highly cyclical and persistent series even though effort is constant in the data-generating model. Some important moments of the `naive' effort series are an autocorrelation coefficient of 0.98 and a contemporaneous correlation with output of 0.97.28
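Given the Cobb–Douglas expression, the back-out inverts the production function: U_t = (Y_t / (K_t^{1-a} A_t))^{1/a} / N_t. A minimal sketch, where a = 0.64 is an assumed labor share and all the numbers are made up for a round-trip check:

```python
import numpy as np

def implied_effort(Y, K, N, A, a=0.64):
    """Back out unobserved effort U from Y = K^(1-a) * (N*U)^a * A."""
    return (Y / (K ** (1.0 - a) * A)) ** (1.0 / a) / N

# Round-trip check: generate Y from a known effort level, then recover it.
K, N, A, U_true = 2.0, 0.3, 1.1, 1.2
Y = K ** 0.36 * (N * U_true) ** 0.64 * A
```

Applied to data in which true effort is constant but Y, N and A move cyclically, this inversion mechanically attributes part of the output movement to "effort", which is how a spuriously cyclical and persistent effort series arises.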

6. Conclusions

The question addressed in this paper is relatively simple: what does LBD contribute to business cycles? As a theoretical proposition, the answer is: quite a lot. From a quantitative perspective, it appears that the answer is the same. In particular, we estimate substantial and statistically significant LBD effects and find that these dynamic interactions can influence observed movements in real output. Thus, these links can serve as propagation devices.

To the extent that these interactions are excluded from standard models, they can lead to misspecification errors. This was highlighted by our reinterpretation of results concerning the presence of taste shocks and unobservable labor effort. It appears that these phenomena can be ``explained'' by a richer stochastic growth model that incorporates LBD.

Finally, the model generates a hump-shaped response in the level of output to a serially correlated shock, which highlights the internal propagation ability of the model. Hump-shaped response functions were suggested as an important diagnostic tool by Cogley and Nason for this class of models.

This analysis hinges on the presence of technology shocks. The next step in this research will be to explore the effects of other disturbances in a model with LBD.

References

Bahk, B.H., Gort, M., 1993. Decomposing learning-by-doing in new plants. Journal of Political Economy 101, 561–583.

28 Interestingly, Johri–Letendre (2001) find that the variable labor effort model reduces the size of the residuals from the hours FOC but leaves the persistence of the residuals unchanged. By contrast, the LBD model reduces the size as well as the persistence of the residuals.


Bartelsman, E., Gray, W., 1996. The NBER Manufacturing Productivity Database. NBER Technical Working Paper #205, October.
Basu, S., 1996. Procyclical productivity: overhead inputs or cyclical utilization? Quarterly Journal of Economics 111 (3), 719–751.
Baxter, M., King, R., 1991. Productive externalities and business cycles. Institute for Empirical Macroeconomics, Federal Reserve Bank of Minneapolis, Discussion Paper #53.
Benkard, C., 1997. Learning and forgetting: the dynamics of aircraft production. Mimeo, Yale University.
Burnside, C., Eichenbaum, M., Rebelo, S., 1993. Labor hoarding and the business cycle. Journal of Political Economy 101, 245–273.
Burnside, C., Eichenbaum, M., Rebelo, S., 1995. Capital utilization and returns to scale. In: Bernanke, B., Rotemberg, J. (Eds.), NBER Macroeconomics Annual. MIT Press, Cambridge, MA, pp. 67–123.
Christiano, L., Eichenbaum, M., 1992. Current real-business-cycle theory and aggregate labor-market fluctuations. American Economic Review 82, 430–450.
Cogley, T., Nason, J., 1995. Output dynamics in real-business-cycle models. American Economic Review 85, 492–511.
Cooper, R.W., Johri, A., 1997. Dynamic complementarities: a quantitative analysis. Journal of Monetary Economics 40, 97–119.
Hall, R., 1997. Macroeconomic fluctuations and the allocation of time. Journal of Labor Economics 15, 223–250.
Imai, S., 2000. Intertemporal labor supply and human capital accumulation. Working Paper, Pennsylvania State University.
Irwin, D., Klenow, P., 1994. Learning-by-doing spillovers in the semiconductor industry. Journal of Political Economy 102 (6), 1200–1227.
Johri, A., Letendre, M.-A., 2001. Labour market dynamics in RBC models. McMaster University Working Paper.
Jovanovic, B., Nyarko, Y., 1995. A Bayesian learning model fitted to a variety of empirical learning curves. Brookings Papers on Economic Activity, pp. 247–299.
Katz, L., Meyer, B., 1990. Unemployment insurance, recall expectations, and unemployment outcomes. Quarterly Journal of Economics 105 (4), 973–1002.
King, R., Plosser, C., Rebelo, S., 1988. Production, growth and business cycles: I. The basic neoclassical model. Journal of Monetary Economics 21, 195–232.
Rotemberg, J., Woodford, M., 1996. Real-business-cycle models and the forecastable movements in output, hours, and consumption. American Economic Review 86 (1), 71–89.
Wright, T., 1936. Factors affecting the cost of airplanes. Journal of Aeronautical Science 3, 122–128.


