Academy of Management Journal, 2016, Vol. 59, No. 2, 436–459. http://dx.doi.org/10.5465/amj.2013.1109

FAIL OFTEN, FAIL BIG, AND FAIL FAST? LEARNING FROM SMALL FAILURES AND R&D PERFORMANCE IN THE PHARMACEUTICAL INDUSTRY

RAJAT KHANNA
Tulane University

ISIN GULER
Sabanci University

ATUL NERKAR
University of North Carolina at Chapel Hill

Do firms learn from their failed innovation attempts? Answering this question is important because failure is an integral part of exploratory learning. In this study, we consider whether and under what circumstances firms learn from their small failures in experimentation. Building on organizational learning literature, we examine the conditions under which prior failures influence firms' R&D output, in terms of amount and quality. Our empirical analysis of voluntary patent expirations (i.e., patents that firms give up by not paying renewal fees) in 97 pharmaceutical firms between 1980 and 2002 shows that the number, importance, and timing of small failures are associated with a decrease in R&D output (patent count) but an increase in the quality of the R&D output (forward citations to patents). Exploratory interviews further suggest that the results are driven by a multilevel learning process from failures in pharmaceutical R&D. Our findings contribute to the organizational learning literature by providing a nuanced view of learning from failures in experimentation.

Failure is an integral part of the innovation process. Exploratory learning, a key building block of innovation, occurs through experimentation and search (March, 1991). The literature on organizational innovation emphasizes the importance of experimentation and the establishment of organizational structures and incentives that encourage it (e.g., Ahuja & Lampert, 2001; Cannon & Edmondson, 2005; Lee, Edmondson, Thomke, & Worline, 2004; Nohria & Gulati, 1996; Thomke & Kuemmerle, 2002). Inevitably, most experiments fail, but common wisdom suggests that such failures provide valuable feedback for future search efforts. However, learning from failure is far from automatic, given the psychological and organizational processes that attach negative meaning to failure (see Cannon & Edmondson, 2005, for a review). This observation has led to a large body of prescriptive advice to embrace failure as a necessary part of the innovation process (e.g., Edmondson, 2011; McGrath, 2011). For instance, IDEO, an influential design firm known as one of the most innovative in the world, has a slogan that encourages experimentation and trial-and-error learning: "Fail often in order to succeed sooner." In spite of the central role of failure in experimentation, there is little empirical research examining whether and how firms learn from failure in innovation.

We aim to fill this gap by shedding light on learning from failure in the course of knowledge generation (March, 1991; cf. Sitkin, 1992). In particular, we ask whether and under what circumstances firms learn from their failed attempts at innovation. While prior literature provides many important insights about learning from failure (e.g., Baum & Dahlin, 2007; Haunschild & Sullivan, 2002; Hayward, 2002; Madsen & Desai, 2010; Miner, Kim, Holzinger, & Haunschild, 1999), most of these studies focus either on catastrophic failures, such as plane or orbiter crashes (Haunschild & Sullivan, 2002; Madsen & Desai, 2010), or operational failures, such as acquisition integration problems or product defects (e.g., Haunschild & Rhee, 2004; Hayward, 2002; Henderson & Stern, 2004). Small and frequent failures arising in the natural course of experimentation are distinct from those studied in the prior literature, because failure in innovation is generally accepted as a likely, albeit unwelcome, outcome of the experimentation process. In contrast to operational failures, such as those in product safety, acquisitions, or product selection, where it is desirable to minimize the instances of failures, small failures in experimentation are often the only way to learn about causal relationships when a complete understanding of the underlying science is unavailable to boundedly rational decision makers (Fleming & Sorenson, 2004; Sitkin, 1992). Even though experimentation and failure are indispensable for innovation, learning from failed experiments in organizational contexts is far from straightforward (e.g., Eggers, 2012a). Examining if and under what conditions firms can leverage prior failures to enhance innovation performance is therefore critical to our understanding of the innovation process (Sitkin, 1992).

We thank associate editor Dovev Lavie and three anonymous reviewers for their feedback and suggestions during the review process. We are grateful to Aleksandra Rebeka and Scott Rockart, who provided valuable suggestions that substantially improved the paper. We also thank seminar participants at the Strategic Management Society Meetings, Cass Business School, and Sabanci University School of Management. Isin Guler is a recipient of the BAGEP Research Award of the Science Academy in Turkey and the TUBA Turkish Sciences Academy GEBIP Research Award.

Copyright of the Academy of Management, all rights reserved. Contents may not be copied, emailed, posted to a listserv, or otherwise transmitted without the copyright holder's express written permission. Users may print, download, or email articles for individual use only.

In this study, we focus on a particular type of failure in the context of R&D efforts in the pharmaceutical industry. Specifically, we study how failed R&D projects in the sector are associated with the subsequent R&D performance of firms. We observe firms' voluntary patent expirations before the legally allowed 20-year period, provided by the United States Patent and Trademark Office (USPTO), as an indicator of small failures in experimentation. Since the late 1980s, firms have been required to pay maintenance fees every four years to keep their patents active. Recent empirical evidence shows that firms let a substantial fraction of their patents expire before the regular expiration date by not paying maintenance fees (Serrano, 2010). Since pharmaceutical firms hold anywhere from several hundred to several thousand patents, discontinuing some patents earlier than their legally permitted expiration date may seem trivial. However, the effort and money invested in the discovery phase of each patent as well as the relative ease of renewing a patent suggest that firms make discontinuation decisions mindfully. Hence, the voluntary discontinuation of any patent is a self-admitted failure event. We ask whether and under what conditions these events create learning opportunities in R&D.

We examine how such small failures in experimentation influence an important component of innovation outcomes: R&D performance. Successful innovation entails high R&D performance, as well as competent commercialization (e.g., Fleming, 2002; Nerkar & Roberts, 2004; Schumpeter, 1934). In this study, we focus on the former step and ask how small failures influence subsequent R&D outcomes. In doing so, we distinguish between two important dimensions of R&D performance: R&D output amount (patent count) and R&D output quality (forward citations to patents). Organizational learning literature suggests a positive influence of learning on both dimensions, but most prior research on R&D either focuses on one dimension of R&D performance or combines the two into one aggregate measure. We submit that R&D output and quality are distinct outcomes, and test the influence of failures on each outcome separately in order to get a more complete picture of organizational learning following small failures.

The paper makes three important contributions. First, we contribute to the literature on learning from failure by examining small failures in exploration. Our study specifically focuses on the experimentation that underlies the concepts of search in innovation, and asks how failures in experimentation influence subsequent innovation outcomes. We extend and test ideas on learning from failures in exploration, building on prior work on intelligent failures (Sitkin, 1992) and learning from performance feedback (Greve, 2003; March & Simon, 1958). Second, we investigate conditions under which firms are more likely to learn from small failures in experimentation. Prior literature suggests that experiences differ in terms of the learning they provide (Eggers, 2012b). In particular, we focus on the timing and importance of failures as sources of learning. We argue that early failures in R&D provide better learning opportunities for firms than do failures that come later in the R&D process. We also suggest that failures of projects that are relatively more important to the firm lead to higher learning.

This study also contributes to the literature on organizational innovation by examining failed innovation attempts as a determinant of a firm's subsequent R&D performance. While prior literature has studied many different factors that can influence R&D outcomes (e.g., Ahuja, 2000; Griliches, 1994; Henderson & Cockburn, 1994; Rothaermel & Thursby, 2007), few studies have investigated learning from prior failures as a contributing factor. We aim to fill this gap by disentangling the impact of failed experiments on both R&D output and quality.

Ultimately, we find that small failures in experimentation are important for the R&D performance of pharmaceutical firms but have opposite effects on R&D output and quality. Specifically, small failures in experimentation lead to a decrease in subsequent R&D output but an increase in the quality of R&D outcomes. We develop an understanding of this counterintuitive result through our field interviews. In particular, the distinction between idea generation and idea selection suggests a multilevel model of learning in pharmaceutical R&D.

ORGANIZATIONAL LEARNING FROM FAILURES

Early studies on organizational learning focused primarily on increasing efficiencies as a function of experience (e.g., Argote & Epple, 1990; Yelle, 1979). Recent work in this field has examined the effect of experience on such outcomes as service quality and survival rates of firms in service and banking industries (Baum & Dahlin, 2007; Baum & Ingram, 1998). Past experience also shapes the trajectory of a firm's innovations by increasing its absorptive capacity (Cohen & Levinthal, 1990) and its competence (March, 1991; Sorensen & Stuart, 2000). Organizations primarily learn through a search process triggered by the feedback received from the environment (Greve, 2003; Levinthal & March, 1993; March, 1991).

Although a large body of literature has established that organizations learn from their experiences, relatively few studies distinguish whether the experience in question was a success or failure (see Sitkin, 1992). Failure offers firms many opportunities to learn, but learning from failure is far from guaranteed. Most organizations find it challenging to learn from failures (Cannon & Edmondson, 2005; Edmondson, 2002), due to the lack of sufficient information about the failure as well as a difficulty in agreeing on its causes (Eggers, 2012a). For instance, it is not uncommon for organizational members to interpret causes of failure in a way that is most beneficial to themselves (Baumard & Starbuck, 2005).

However, if firms can at least partially overcome these challenges, failures can be an important source of learning. Failures may lead to process improvements, increase reliability, reduce rates of future failure, and decrease failure-related costs (Baum & Dahlin, 2007; Haunschild & Sullivan, 2002; Kim & Miner, 2007; Madsen & Desai, 2010). They enhance learning by challenging the understanding of the cause-and-effect relationships, helping firms replace existing routines and knowledge with more useful and accurate ones (e.g., Haunschild & Sullivan, 2002; Henderson & Stern, 2004; March, Sproull, & Tamuz, 1991).

Moreover, failures may change the scope and direction of the organization's search activities. Success leads decision makers to remain on the same trajectory (Audia, Locke, & Smith, 2000), restricting the breadth of search to the neighborhood of existing knowledge (March, 1981). In contrast, failure to reach aspiration levels may trigger problemistic search, causing firms to look for solutions or alternatives that can address the problem of decreased performance (Cyert & March, 1963; Greve, 2003). Failures may also provide firms with information to focus search in new directions (Wildavsky, 1988). In firms that hold a portfolio of innovations, trial-and-error learning often influences the composition of the portfolio of projects under consideration (e.g., Bower, 1970; Burgelman, 1983, 1991; Henderson & Stern, 2004).

In sum, as this brief review of the literature suggests, there is general agreement that failures are an important source of organizational learning, although few studies have specifically focused on learning from failures in the context of experimentation. In the next section, we describe our empirical context. We then go on to develop and test our hypotheses about small failures in experimentation and R&D outcomes in subsequent sections.

CONTEXT: PATENT FAILURES IN THE PHARMACEUTICAL INDUSTRY

Patented R&D efforts of pharmaceutical firms provide the empirical context for our study of learning from small failures in experimentation. This setting provides an interesting and appealing context for several reasons. First, the highly research-intensive nature of the pharmaceutical industry (Henderson & Cockburn, 1994) makes it suitable for examining the R&D process and outcomes. More specifically, patents have a particular strategic importance in this industry (Grabowski & Vernon, 1992; Scott Morton, 2000). Pharmaceutical research is expensive and risky. It takes up to one billion dollars to bring a new drug into the market (Grabowski, 2002; Henderson & Cockburn, 1994; PhRMA, 2007), and the lengthy nature of the clinical trial process requires pharmaceutical firms to disclose sensitive proprietary knowledge, putting them at risk of imitation (DeCarolis, 2003; Polidoro & Toh, 2011). Pharmaceutical firms rely on patents to protect the knowledge created within the firm (Gilbert & Shapiro, 1990; Grabowski, 2002; Klemperer, 1990). Luckily, patents provide a relatively effective method of guarding proprietary intellectual property for pharmaceutical firms (Levin, Klevorick, Nelson, Winter, Gilbert, & Griliches, 1987). Pharmaceutical firms patent every innovation possible (Cohen, Nelson, & Walsh, 2000; Levin et al., 1987; Paruchuri, Nerkar, & Hambrick, 2006) and start patenting early in the research process (Penner-Hahn & Shaver, 2005). A survey-based study found that pharmaceutical firms have the highest propensity to patent their innovations, with around 80% of their innovations protected by patents, as compared to an average of 35% across all industries studied in the survey (Arundel & Kabla, 1998). Moreover, pharmaceutical firms often reward scientists on the number of patents produced (Stern, 2004), and previous research has established a positive correlation between the number of patents and profitability in science-based industries, such as pharmaceuticals (Cockburn & Griliches, 1988; Jaffe, 1986).

The second reason for choosing this context is that many patented inventions in the pharmaceutical industry end up as small failures in experimentation. Unlike other industries, such as electronics or computers, in which patents are granted for products, patents in the pharmaceutical industry are granted early in the R&D process for research ideas that may or may not become products (Lehman, 2003). As in many innovative industries, the outcomes of R&D in the pharmaceutical industry are highly skewed (Scherer & Ross, 1990). Although patented ideas have crossed the first hurdle within the firm, they have a long way to go before they can turn into drugs and provide returns. In fact, most patents in the pharmaceutical industry do not lead to products. Firms typically apply for patents for lead compounds before or during the preclinical trials stage (Heled, 2012; Ward, 1992). Patented compounds then go through a long process of preclinical and human trials to identify suitable drugs (PhRMA, 2007). This process involves several stages. First, scientists try to understand the disease in question, and its underlying causes, through studies of changes in genes and how these changes can lead to the disease. Second, scientists look for a "target," an altered gene or molecule, that can interact with a potential drug. Third, scientists validate the identified target for its role in the disease and successful interaction with the drug molecule through extensive experiments. Once scientists have an understanding of the disease and the potential drug, they begin the process of finding the lead compounds that can be used to treat the disease. Fourth, after conducting preliminary safety tests and optimization studies, firms select a small number of compounds that are further tested in preclinical trials, a process that establishes the safety of drugs in animals before these drugs can be tested in humans. On average, of the 5,000–10,000 compounds tested in the fourth step, only around 250 are selected for preclinical testing (lead compounds). After extensive tests in preclinical studies, one to five lead compounds (drug candidates) are selected for further study in clinical trials. Firms file an investigational new drug (IND) application with the Food and Drug Administration to begin clinical trials in humans for these drug candidates. About one out of every five drug candidates successfully clears all three phases of clinical trials and is commercialized in the market for the treatment of the disease in question (Grabowski, 2002). In sum, failures in pharmaceutical research mirror closely the idea of small failures in experimentation in this paper and are nicely captured in the patent data (Thomke, 2003; Thomke & Kuemmerle, 2002).

We examine the impact of small failures on two distinct dimensions of R&D performance: (1) R&D output and (2) R&D quality. R&D output has been used as a measure of innovation performance in many studies, as it represents a higher rate of innovation output (e.g., Ahuja, 2000; Cockburn & Henderson, 1998; Gambardella, 1992; Rothaermel & Thursby, 2007; Somaya, Williamson, & Zhang, 2007). R&D output "represents an externally validated measure of novelty" (Griliches, 1990, quoted in Ahuja, 2000: 433) and has economic significance (Scherer & Ross, 1990). An increase in R&D output not only suggests that a firm engages in more experimentation, but also that the firm's engagement in experimentation is likely to lead to a larger diversity of solutions, which in turn increases the probability of finding a high-quality solution (Terwiesch & Ulrich, 2009; Terwiesch & Xu, 2008). Regarding the second measure of R&D performance, unlike output in manufacturing, in which units without defect are identical, R&D output varies greatly in terms of quality (Cardinal, 2001). It is therefore important to understand whether and how prior failures influence the quality of R&D outcomes.


LEARNING FROM SMALL FAILURES IN EXPERIMENTATION

Small failures in experimentation provide opportunities for learning in multiple ways. First, the frequency and scale of small failures provide firms with valuable feedback that helps shape the direction of the R&D portfolio. Firms that experience small failures on a frequent basis can form a more developed understanding of causal relationships without incurring large costs and can reallocate resources accordingly (Eggers, 2012a). Second, small failures encourage learning by initiating a search for the causes of such failures without threatening the existence of the firm or the decision makers, as larger, more visible failures may do. Since these failures do not endanger the survival of the firm, it is possible for the firm to both attend to such failures and engage in the search process. The experimental nature of these failures encourages decision makers within firms to analyze the outcomes more objectively (Baumard & Starbuck, 2005). Firms can learn from experimentation-driven failures internally before they make further investments in a particular product or innovation. 3M, one of the firms in our sample, does not favor following any master plan or engaging in complex strategic planning, but, rather, promotes a culture in which employees are not afraid to "try a lot of stuff and keep what works" (Collins & Porras, 1994: 159).

Moreover, small failures in experimentation are associated with deliberate and mindful learning processes (Levinthal & Rerup, 2006; Sitkin, 1992). When firms have an understanding of the underlying scientific principles, they can come up with innovative ideas deductively (Fleming & Sorenson, 2004). But, when the underlying science is poorly understood and uncertainty with respect to outcomes is high, the firm needs to engage in experimentation in order to innovate (Terwiesch & Ulrich, 2009). As scientists can gain a better sense of the causal relationships by testing and ruling out hypotheses, new ideas generated through experimentation will on average be of higher quality.

In addition, experimentation stimulates learning by increasing the number and variety of solutions generated, and, in turn, the quality of the final solution (Terwiesch & Ulrich, 2009; Terwiesch & Xu, 2008; Thomke, 2003). As firms actively engage in experimentation, they are likely to experience more failures, which in turn could lead to more innovative and successful outcomes (Thomke, 2003). Since such failures stimulate distant search and further experimentation, they are likely to result in a larger variability in the innovation output. Firms are then more likely to produce outliers in their innovation portfolios in terms of quality. Increasing variability enhances the likelihood of hitting "home runs" rather than producing consistently mid-range innovations in terms of impact. We therefore hypothesize that:

Hypothesis 1a. As a firm's small failure experience increases, its subsequent R&D output will increase.

Hypothesis 1b. As a firm's small failure experience increases, its subsequent R&D output quality will increase.

Failure of Important Projects

Small failures in experimentation may vary in terms of how important they were to the firm before the failure occurred. For instance, a failed project that has been endorsed by top management might be considered a failure of greater importance than one that was initiated by lower levels of management. In the present study, we define the importance of a small failure in terms of the project's expected performance before it eventually failed. In other words, decision makers in the firm had higher expectations for a more important project before failure. As such, these failures are likely to be more visible within the firm. In the context of pharmaceutical R&D, a project's importance is highly related to its scientific value, or its impact on subsequent research. Note that the importance of a project is distinct from its size, which is often defined in terms of the investment at stake or consequences of failure. It is possible for a project that has required a lot of investment to generate little scientific or commercial value, just as it is possible for a small project in terms of outlays to generate high value. By definition, we focus only on small failures that are similar in the levels of investment and limited in consequences. Still, they may vary in terms of perceived potential and value.

We argue that important failures will elicit more learning for the following reasons. First, more important failures attract greater attention from decision makers within the firm. Given that managerial attention is scarce and selective, it is likely to be allocated to projects that were prominent before failure (Hoffman & Ocasio, 2001). Organizational decision makers may neglect to acknowledge failures in projects of lower importance, focusing on successful projects instead (Cannon & Edmondson, 2005; Madsen & Desai, 2010). Managers may fail to attend to the weak cues created by failures deemed peripheral to the firm (Eggers, 2012a; Rerup, 2009). Failures of projects of little importance are not likely to challenge decision makers' core beliefs (Baumard & Starbuck, 2005). In contrast, failures with a higher profile elicit surprise, are recognized more easily, and lead to changes in behavior more often, consequently affecting performance (Van de Ven, 1986).

In the context of small failures in experimentation, small failures of low importance to the firm may run the risk of going unnoticed, being deliberately ignored, or being perceived at the aspiration level. In contrast, projects that were considered important to the firm before the failure trigger more extensive search for the causes and a more careful reallocation of R&D resources, which in turn is likely to improve the firm's subsequent R&D outcomes. We therefore expect firms that experience small failures of higher importance to enjoy a subsequent increase in their R&D performance.

Hypothesis 2a. As a firm experiences small failures of higher importance, its subsequent R&D output will increase.

Hypothesis 2b. As a firm experiences small failures of higher importance, its subsequent R&D output quality will increase.

Fail Early or Fail Late?

Prior research has argued that the timing of experience is an important component of learning in the context of innovation (Eggers, 2012b). The effect of the timing of failures on R&D performance, however, is relatively understudied. As discussed in the previous section, failures provide firms with information on what might be wrong, and work as feedback that can improve performance going forward. The timing of the failure is important to the firms' learning outcomes, as it determines how quickly firms get feedback about a project. Experimental research shows that subjects who are given quick feedback will eliminate incorrect choices and learn faster. Also, the efficacy of feedback diminishes with the time elapsed in providing it (Skinner, 1954). Delayed feedback may be muddled with noise, making it hard for decision makers to assess relationships between actions and outcomes (Denrell, Fang, & Levinthal, 2004). Kettle and Haubl (2010) emphasized the importance of early feedback and its positive impact on performance. Sitkin (1992) suggested that small failures lead to most learning when they elicit quick feedback, so that the firm can learn, try new solutions, and generate new feedback.

In the context of innovation, the timing of a failure refers to the point in the R&D process at which a firm faces and acknowledges the failure. For instance, in the pharmaceutical industry, a compound may fail early in development, even before preclinical trials, or many years later, during late-stage clinical trials. Whether a firm will face failure relatively early or late in the development of an innovation depends on both the external environment and internal practices. If internal practices within the firm are oriented toward identifying failures, then the likelihood of spotting failures early and learning from them will be higher than when there is no specific attention given to the process. Sometimes, firms are so constrained by the signals from the external environment that they have no choice but to wait before labeling a technology as a success or failure. Interim feedback helps firms identify potential problems and motivates them to engage in search to find solutions (Jordan & Audia, 2012). This idea also resonates with anecdotal evidence on the approach of highly innovative firms. For instance, Rich DeVaul, head of the Rapid Evaluation Team at Google, declared, "Why put off failing until tomorrow or next week if you can fail now?" (Gertner, 2014).

Based on the discussion above, we expect the timing of feedback to influence R&D performance. Early feedback in the R&D process allows firms to manage available resources and limit allocation of resources to unproductive arenas. Also, early failures allow firms to experiment in more ways, compared to failures that come late in the R&D process. In contrast, when feedback comes late in the R&D process, it may be hard to pinpoint the exact decisions or actions that led to the failure, therefore confounding learning. Moreover, later failures may lead to escalation of commitment and cause a firm to continue related investments (Staw, 1976). R&D in high-tech industries is path dependent, and it is difficult for firms to change direction after significant progress has been made. A firm that receives early feedback on a technology may find it easier to reconfigure R&D investments and implement the learning from failures more effectively.

Hypothesis 3a. As a firm experiences small failures earlier in the R&D process, its subsequent R&D output will increase.

Hypothesis 3b. As a firm experiences small failures earlier in the R&D process, its subsequent R&D output quality will increase.


METHODS

We test our hypotheses with data on patent expirations in the pharmaceutical industry. We operationalize a small failure as a firm's decision to discontinue a patent, leading to its expiration before the end of its legal life of 20 years. In the late 1980s, USPTO made it necessary for firms to pay maintenance fees every 4, 8, and 12 years to keep their patents active. Any firm or entity failing to pay the fees at the scheduled time has its patent expire. The most likely reason for early discontinuation of a patent is the lack of relative value as perceived by the firm (Serrano, 2010).

After the introduction of the maintenance fees in the Manual of Patent Examining Procedure, Chapter 2500, in 1980, pharmaceutical firms have discontinued a large number of patents, with expired patents reaching up to 50% of total patents held (Serrano, 2010). Figure 1 shows the premature expiration of patents between 1985 and 2002 for the 97 pharmaceutical firms analyzed in the current study. A quick look at the figure suggests that, immediately following the introduction of patent law changes in 1980, firms discontinued only a small proportion of existing patents, but this share has increased dramatically in recent years, by between 40% and 60%. The small number of patents discontinued in the beginning is also suggestive of the critical role of firms' decision making. In the absence of relevant information on their patents during initial years, firms did not discontinue as many patents. With time, firms have started to use patent expirations to manage their patent portfolio.

Serrano's (2010) finding that patents that are potentially less valuable are more likely to be discontinued suggests that a firm's decision to discontinue its patents prematurely is not random but rather deliberate in nature. As we discussed in the previous sections, patents are critical to the success of R&D in the pharmaceutical industry (Cockburn & Griliches, 1988; Jaffe, 1986). Interestingly, patent maintenance fees are negligible compared to the costs of acquiring a patent. Total fees to maintain a patent that has already been granted are less than $15,000. Given the high potential value of a patent and the low maintenance fee, it is reasonable to think that firms would discontinue a patent only if they have good reason to believe that it has very limited value. In the time between the patent grant and the renewal date, the firm receives information about the future value of the patent. This information could be external (e.g., about other technological advancements) or internal (e.g., about the perceived viability of the project). Discontinuation of a patent suggests that the firm has received negative feedback. The high rate of discontinuation is consistent with the fact that most patented compounds fail before introduction to the market (Grabowski, 2002). Early patenting activity represents the large number of "bets" taken by the pharmaceutical firms, while early expirations represent culling out unpromising opportunities from the research portfolio as a result of incoming information. With a reasonable understanding that the patent discontinuations represent small failure events, the uncertainty in estimating the value of a given patent in advance and the availability of specific checkpoints to evaluate the fates of patents make this setting appealing for a study of learning from small failures in experimentation.

FIGURE 1
Number of Expired Patents for Firms Included in this Study between 1985 and 2002
[Line chart by application year, 1985–2002. Series: total number of patents, total number of patents expired, and number of patents expired after 4, 8, and 12 years. Vertical axis: number of expired patents, 0 to 12,000.]

Data and Sample

We obtained the data on patent expirations from USPTO. To be consistent with previous research, we considered the 3-digit USPTO classes 514 and 424 to identify patents in the pharmaceutical industry (Anand, Oriani, & Vassolo, 2010; Guler & Nerkar, 2012). These patents belonged to more than 200 firms. Since we are examining the effect of patent discontinuations on R&D outcomes, we only kept firms in our sample that were active in patenting. Therefore, we removed firms that did not patent for more than 20 years between 1980 and 2002, leading to the final sample consisting of 97 pharmaceutical firms. Table 1 lists 30 major pharmaceutical firms out of the 97 firms¹ in our sample. USPTO provides information on all expired patents and the stage at which patents were discontinued; that is, first stage (after 4 years from the application date), second stage (after 8 years from the application date), or third stage (after 12 years from the application date). As we propose to measure learning at the firm level, we aggregated the number of discontinued patents at the firm–year level, leading to a final panel that contains 2,015 firm–year observations.² After the introduction of the maintenance fee regime in December 1980 by USPTO, the first set of patents to expire did so in 1985 (after the first deadline of 4 years), so our dataset captures every discontinued patent following the regime change. Of 156,267 patents granted to the 97 firms in our sample, 56,630 patents (36.24%) were discontinued as of 2002. We tracked all variables between 1980 and 2002 in order to capture all patents that were at risk of discontinuation in 1985, and tracked all expired patents between 1985 and 2002.

TABLE 1
List of 30 Major Pharmaceutical Firms from the Sample of 97 Firms
Abbott, Ajinomoto, Allergan, American Cyanamid, Amgen, Astra, Aventis, Baxter, Bayer, Boehringer Ingelheim, Bristol-Myers Squibb, Chiron, Eli Lilly, Fujisawa Pharmaceutical, Genentech, Genzyme, Glaxo Wellcome, Janssen Pharmaceutical, Mallinckrodt Medical, Merck, Monsanto, Novartis, Novo Nordisk, Pfizer, Pharmacia AB, Roche, Schering, Shionogi, SmithKline Beecham, Zeneca

¹ A complete list of firms is available from the authors upon request.

² The pharmaceutical industry experienced a big wave of mergers and acquisitions (M&As) during the period of our study. In such cases, we kept the firms as separate entities before the M&A occurred and combined them into a single entity afterward.
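To make the panel construction concrete, the following is a minimal, hypothetical pandas sketch of how firm–year counts of discontinued patents by maintenance stage could be assembled from patent-level records. The column names (firm, expiry_year, expiry_stage) and toy values are illustrative assumptions, not the authors' actual data or code.

```python
import pandas as pd

# Toy patent-level records, one row per granted patent. "expiry_stage" is the
# maintenance checkpoint (4, 8, or 12 years after application) at which the
# patent was allowed to lapse; None means the patent was never discontinued.
patents = pd.DataFrame({
    "firm":         ["A", "A", "A", "B", "B"],
    "expiry_year":  [1995, 1999, None, 1997, None],
    "expiry_stage": [4, 8, None, 12, None],
})

# Keep only voluntarily discontinued patents (the "small failures").
failures = patents.dropna(subset=["expiry_year"]).copy()
failures[["expiry_year", "expiry_stage"]] = failures[["expiry_year", "expiry_stage"]].astype(int)

# Aggregate to the firm-year level: failures by stage plus the total,
# mirroring the 2,015 firm-year panel described above.
panel = (
    failures.pivot_table(index=["firm", "expiry_year"],
                         columns="expiry_stage",
                         aggfunc="size",
                         fill_value=0)
    .rename(columns={4: "failures_4yr", 8: "failures_8yr", 12: "failures_12yr"})
)
panel["failures_total"] = panel.sum(axis=1)
print(panel.reset_index())
```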

Dependent Variables

The dependent variable for this analysis is the R&D performance of the firm, measured as both firms' R&D output and the quality of their R&D output. We measured R&D output as the number of successful patent applications by a firm in a given year. Patent output has been used as a measure of R&D performance in many studies (Ahuja, 2000; Cockburn & Henderson, 1998; Gambardella, 1992; Nicholls-Nixon & Woo, 2003; Penner-Hahn & Shaver, 2005; Rothaermel & Thursby, 2007; Somaya, Williamson, & Zhang, 2007). Even though they provide an imperfect measure of a firm's innovation output, "patents are tangible manifestations of a firm's ideas, techniques, and products, and are therefore an important indicator of innovation" (DeCarolis & Deeds, 1999, quoted in Somaya, Williamson, & Zhang, 2007: 922). There are two main issues with using patents as a proxy for innovation (Griliches, 1990). First, not all innovations are patented. This is less of a problem in the pharmaceutical industry, where patents provide an effective way of protecting intellectual property (Levin et al., 1987) and are strategically essential to firms (Grabowski & Vernon, 1992; Henderson & Cockburn, 1996; Scott Morton, 2000). Second, not all patented inventions become innovations. Even so, patents have been shown to have a strong correlation with other firm-level innovation outputs, such as the number of new product introductions (Basberg, 1982; Comanor & Scherer, 1969) and sales from new products (Comanor & Scherer, 1969) as well as innovative activity (Acs & Audretsch, 1989).

It is possible that patents that contribute to R&D performance in period t can become failures in periods after t; that is, patents that firms produce in a given period can be discontinued prematurely at a future time. Although specifications of our model take into account this endogeneity, and the model provides efficient estimates, we used only the number of patents that did not get prematurely discontinued in calculating our dependent variable.³

We measured the quality of R&D output for each firm as the total number of citations to all successful patents; that is, patents that did not get discontinued prematurely. Numerous studies have provided evidence of correlations between the importance of patents and citations to patents, and have established the use of citations as a legitimate proxy for the quality of innovative or inventive performance (Jaffe, Trajtenberg, & Henderson, 1993; Pavitt, 1988; Trajtenberg, 1990). While prior literature favors a citation-weighted measure of R&D output, we examined R&D output and quality separately in order to gain a better understanding of how learning from failure influences R&D performance. Both measures were log-transformed due to skewness.
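As an illustration of the two dependent variables described above, the following hypothetical sketch computes firm–year patent counts and total forward citations from patents that were never prematurely discontinued, and log-transforms both. Column names, toy values, and the handling of zero counts (log1p) are assumptions of this sketch, not the authors' procedure.

```python
import numpy as np
import pandas as pd

# Toy patent-level data: application year, whether the patent was later
# discontinued prematurely, and its forward citations (illustrative only).
patents = pd.DataFrame({
    "firm":          ["A", "A", "A", "B", "B", "B"],
    "app_year":      [1990, 1990, 1991, 1990, 1991, 1991],
    "discontinued":  [False, True, False, False, False, True],
    "fwd_citations": [12, 3, 7, 25, 0, 1],
})

# Only patents that were never prematurely discontinued count toward the
# dependent variables, as described above.
kept = patents[~patents["discontinued"]]

firm_year = kept.groupby(["firm", "app_year"]).agg(
    rd_output=("discontinued", "size"),    # patent count
    rd_quality=("fwd_citations", "sum"),   # total forward citations
)

# Both measures are log-transformed for skewness; log1p is used here only so
# the toy example tolerates zero citations (the paper does not specify how
# zeros were transformed).
firm_year["rd_output_log"] = np.log1p(firm_year["rd_output"])
firm_year["rd_quality_log"] = np.log1p(firm_year["rd_quality"])
print(firm_year.reset_index())
```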

Independent Variables

Quantity, timing, and relative importance of small failures. The quantity of a firm's small failures was measured as the number of patents that were discontinued due to non-payment of maintenance fees by the firm each year between 1985 and 2002. To test our arguments related to the timing of failures, we calculated the number of discontinued patents at each stage of patent discontinuation (i.e., at 4, 8, and 12 years). On average, we expect a patent discontinued at 4 years to have elicited feedback earlier than a patent discontinued at 12 years.⁴ In order to measure the importance of a discontinued patent, we calculated the citations to the patent up until the year of expiration. The number of forward citations is a commonly used measure of a patent's value and its impact on future inventions (Jaffe, Trajtenberg, & Henderson, 1993; Pavitt, 1988; Trajtenberg, 1990), and captures the expectations of decision makers about the potential of a given patent before expiration.
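A brief, hypothetical sketch of the importance measure: forward citations accumulated by each expired patent up to its expiration year, aggregated to the firm–year level. The column names, toy values, and the boundary treatment of the expiration year are assumptions made only for this illustration.

```python
import pandas as pd

# Toy data: expired patents and the citations they received, with the year of
# each citing patent (illustrative values only).
expired = pd.DataFrame({
    "patent_id":   [101, 102, 103],
    "firm":        ["A", "A", "B"],
    "expiry_year": [1994, 1998, 1996],
})
citations = pd.DataFrame({
    "cited_patent": [101, 101, 101, 102, 103, 103],
    "citing_year":  [1992, 1993, 1997, 1995, 1995, 1999],
})

# Importance of a failure: citations received by the expired patent up to its
# expiration year (the boundary is treated as inclusive in this sketch).
merged = citations.merge(expired, left_on="cited_patent", right_on="patent_id")
pre_expiry = merged[merged["citing_year"] <= merged["expiry_year"]]

# Patents with no pre-expiry citations simply contribute zero and drop out here.
importance = (
    pre_expiry.groupby(["firm", "expiry_year"])
    .size()
    .rename("importance_of_failures")
    .reset_index()
)
print(importance)
```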

Control Variables

We used several control variables that could have confounding effects on R&D performance. First, we controlled for the size of firms' R&D units, as large R&D units are likely to have higher output. We calculated the size of the R&D unit by counting the number of scientists with patent applications in each firm for every year (McFadyen & Cannella, 2004). As firms may also source innovation externally (Ahuja, 2000; Ahuja & Katila, 2001; Sampson, 2007), we controlled for the count of the alliances for each firm in our sample for each year. Geographic diversity may also affect innovation in multinational firms (MNCs) (Kobrin, 1991; Lahiri, 2010). We therefore included a count of the number of countries that were represented in the patent applications of a firm in a given year.

In this paper we are specifically interested in small failures in the form of patent expirations in the pharmaceutical industry. Some patent expirations, however, may in fact represent larger failures, such as a major failure in the firm's research agenda, or termination of a project due to litigation. We included two controls to ensure that our sample indeed captures learning from small failures. First, we controlled for the technological focus of the firm's discontinued patents at the firm–year level. It is possible that a large number of failures concentrated in a few technological classes in fact represent one big failure in the firm's research portfolio, as opposed to many small failures. The technological focus variable was calculated with the Herfindahl index for the technological classes of each firm's prematurely discontinued patents in any given year. The value of this index ranges from zero to one. A higher value indicates the presence of a large number of patents in a small number of technological classes, and a value close to zero indicates that the discontinued patents were from different technological classes. Second, we included the number of litigations faced by each firm in our sample to account for the possibility of larger failures in the sample. We constructed this variable using the number of lawsuits filed against each firm at both the state and federal levels in a given year, as reported in the Lexis-Nexis Legal Research database.

³ The conclusions are robust to using the raw patent count as a measure of R&D output.

⁴ While this assumption need not always be true (firms might receive early feedback and continue to maintain some patents), this noise is likely to make our results more conservative. Please see the Discussion section for a more detailed consideration of this possibility.

We calculated the moving average of each control variable for the past three years to account for long-lasting effects and to smooth out sharp changes in these variables. In addition to the time-varying variables, time-invariant variables specific to the firm are taken into account using the Arellano–Bond method (described below; Arellano & Bond, 1991) via first differencing. Figure 2 provides a brief description of all variables and includes a sample data point for Abbott in 1998.
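Two of the transformations described above, the Herfindahl index of technological focus and the three-year moving average of the controls, can be illustrated with a short, hypothetical sketch. Column names and toy values are assumptions, and the handling of the first years of the window (min_periods=1) is a choice made only for this illustration.

```python
import pandas as pd

# Herfindahl index of technological focus for one firm-year: the sum of
# squared shares of discontinued patents across technology classes.
def herfindahl(tech_classes: pd.Series) -> float:
    shares = tech_classes.value_counts(normalize=True)
    return float((shares ** 2).sum())

# Example: 5 discontinued patents spread over 2 classes (3 and 2 patents)
# yields 0.6**2 + 0.4**2 = 0.52, i.e., moderately focused.
print(herfindahl(pd.Series(["514", "514", "424", "514", "424"])))

# Three-year moving average of a control variable within each firm, smoothing
# sharp year-to-year changes as described above.
controls = pd.DataFrame({
    "firm":      ["A"] * 4 + ["B"] * 4,
    "year":      [1990, 1991, 1992, 1993] * 2,
    "alliances": [1, 3, 2, 4, 0, 1, 5, 2],
})
controls["alliances_ma3"] = (
    controls.sort_values(["firm", "year"])
    .groupby("firm")["alliances"]
    .transform(lambda s: s.rolling(window=3, min_periods=1).mean())
)
print(controls)
```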

Empirical Model

The following models test the relationships between small failures and R&D performance in a panel dataset for 97 firms for a period of 22 years:

P_{it} = β_1 P_{i,t−1} + β_2 F_{i,t−1} + β_3 C_{it} + u_{it}    (1)

Q_{it} = β_1 Q_{i,t−1} + β_2 F_{i,t−1} + β_3 C_{it} + v_{it}    (2)

In Equation (1), P_{it} is R&D output, measured as the number of patents filed by firm i in year t; P_{i,t−1} is the lagged value of R&D output for firm i; and F_{i,t−1} is the vector of expired patent characteristics, including number, importance, and timing, for firm i in period t−1. C_{it} is the matrix of control variables and includes size of the R&D team, number of alliances, technological focus, number of litigations, and geographical diversity. Equation (2) has the same right-hand-side variables, but the dependent variable (Q_{it}) is R&D output quality, measured as citations to the patents of firm i in period t, and Q_{i,t−1} is firm i's R&D output quality in period t−1.

FIGURE 2
Sample Data Point for the Pharmaceutical Firm Abbott in 1998 with a Description of the Variables (unit of analysis: firm–year; firm: Abbott; year: 1998)

Dependent variables:
R&D output (log): log of the number of patents that did not prematurely expire for a firm in a given year = 4.52
R&D output quality (log): log of the number of citations to patents that never expired for a firm in a given year = 6.27

Independent variables:
Failures (total): total number of patents expired per firm per year = 40
Failures after 4 years: number of patents that expired after 4 years per firm per year = 14
Failures after 8 years: number of patents that expired after 8 years per firm per year = 18
Failures after 12 years: number of patents that expired after 12 years per firm per year = 8
Importance of failures: number of citations to the expired patents per firm per year = 541

Control variables:
R&D unit size: number of scientists in a firm in a given year (moving average, 3 years) = 678
Number of alliances: number of alliances made by a firm in a given year (moving average, 3 years) = 16
Technological focus: Herfindahl index for the technological classes of discontinued patents in a firm in a given year (moving average, 3 years) = 0.09
Number of countries: number of countries that were the origin of patents per firm per year (moving average, 3 years) = 4
Number of litigations: number of lawsuits at both the state and federal levels filed against a firm in a given year (moving average, 3 years) = 10

Several empirical issues with Equations (1) and (2) arise in the present estimation. First, there is a possibility that small failures are endogenous to R&D performance; that is, firms choose to give up patents because they believe that they are going to produce more patents in the next period. As such, causality can run in both directions, leading to correlation between the error term (u_{it} and v_{it} in Equations (1) and (2), respectively) and the independent variables (F_{i,t−1}).

Second, time-invariant fixed effects such as geography and firm culture may be correlated with the independent variables. Under such circumstances, unobserved fixed effects are combined with the error term, leading to a correlation between independent variables and the error term, a primary reason for biased coefficients. If Z_i denotes the firm-specific, time-invariant fixed effects, and e_{it} is the observation-specific error term, the error term in Equation (1) is:

u_{it} = Z_i + e_{it}    (3)

Third, both Equations (1) and (2) have lagged dependent variables to account for the dynamics in the process of patenting. Adding the lagged dependent variable on the right-hand side of the equation captures the effect of what firms patented in period t−1 on what firms patent in period t, instead of incorrectly attributing it to the other explanatory variables. However, this can significantly inflate the coefficient on the lagged dependent variable and deflate coefficients on other explanatory variables (Kelly, 2002).

To deal with these issues, we used the general method of moments-based estimation introduced by Arellano and Bond (1991). The Arellano–Bond method is standard when it comes to estimating dynamic panel models where time-invariant fixed effects can substantially influence the coefficients on the independent variables of interest. The Arellano–Bond method, first proposed by Holtz-Eakin, Newey, and Rosen (1988), uses appropriate lags of both dependent and independent variables as instruments for first-differenced dependent and explanatory variables, respectively. By using previous lags as instruments, the Arellano–Bond model provides efficient estimates of parameters. Since both dependent and independent variables are first differenced, time-invariant fixed effects are subtracted out in the model. Previous research has used the Arellano–Bond model to resolve similar issues (David, Yoshikawa, Chari, & Rasheed, 2006; Knott, Posen, & Wu, 2009; Milanov & Shepherd, 2013; Uotila, Maula, Keil, & Zahra, 2009). After first differencing, Equation (1) looks like this:

ΔP_{it} = β_1 ΔP_{i,t−1} + β_2 ΔF_{i,t−1} + β_3 ΔC_{it} + Δe_{it}    (4)

Since Z_i is time-invariant, it is subtracted out after differencing. As the farthest lag that appears for output is P_{i,t−2} in Equation (4), lags of four or higher can be used as instruments in the model.⁵ Analogously, lags of four or higher of the endogenous variables can be used as instruments for ΔF_{i,t−1}. To check the validity of the instruments used in the model, we performed Sargan's test of over-identifying restrictions (Sargan, 1958). We were not able to reject the null hypothesis that the over-identifying restrictions are valid (p value of approximately 0.8), suggesting that the instruments are uncorrelated with the residuals; hence, estimates from the model are not biased (Davidson & MacKinnon, 1993). In addition, after comparing Wald χ² values, which are quite high for all our models, with critical values provided in Stock and Yogo (2005), we found no evidence of weak instruments in the present study. The Arellano–Bond estimator is also designed for large N and small T, which makes the use of this model for the present study more relevant. The same approach is used to estimate Equation (2).
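For reference, the standard moment conditions exploited by the Arellano–Bond difference GMM estimator can be written against Equation (4) as follows. This is a textbook statement in the notation of Equations (1)–(4), not a reproduction of the authors' exact instrument set; s simply denotes a lag deep enough that the level of the variable is uncorrelated with the differenced error term (the nearest lags are skipped because of the first-order autocorrelation noted above).

```latex
% Arellano-Bond (difference GMM) moment conditions for Equation (4),
% with s denoting a sufficiently deep lag of the instrumenting variable.
\Delta P_{it} = \beta_1\,\Delta P_{i,t-1} + \beta_2\,\Delta F_{i,t-1}
              + \beta_3\,\Delta C_{it} + \Delta e_{it},
\qquad
\mathbb{E}\bigl[P_{i,t-s}\,\Delta e_{it}\bigr] = 0,
\quad
\mathbb{E}\bigl[F_{i,t-s}\,\Delta e_{it}\bigr] = 0
\quad \text{for sufficiently deep lags } s.
```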

RESULTS

Table 2 presents the descriptive statistics and partial correlation matrix for our variables. Since both measures of R&D performance used in this study are count variables and are skewed, we used the logs⁶ of R&D output and R&D output quality in order to correct for the non-normality of the distribution (Chaganti & Damanpour, 1991; Ruef & Patterson, 2009). As evident from Table 2, correlations between some of the independent variables are quite high. In order to ensure that multicollinearity is not an issue in our model, we estimated the variance inflation factors (VIFs). VIFs for all variables are less than 10, with an average value of 3.35 for models testing Hypothesis 1 and 2.22 for models testing Hypotheses 2 and 3. Also, as the total expired patents equals the sum of expired patents after 4, 8, and 12 years, we excluded total expired patents from the model when testing for the separate effects of expired patents after 4, 8, and 12 years.

⁵ We do not use the first lag as an instrument in our models because we found autocorrelation of first order. If, for example, we have the term P_{i,t−1} in the model, because of first-order autocorrelation, we do not use P_{i,t−2} as an instrument but instead use P_{i,t−3} and further lags as instruments.

⁶ Usually it is appropriate to use Poisson or negative binomial models when the dependent variable is a non-negative integer. However, as noted, the most appropriate model for our data was the Arellano–Bond model, and this model does not allow a negative binomial or Poisson specification. We therefore use a log-transformed dependent variable in our models. Sensitivity analyses with count models are reported below.

Tables 3 and 4 present the results of the Arellano–Bond models that use the logarithm of R&D output and R&D output quality as dependent variables, respectively. Our first set of analyses uses R&D output as a measure of R&D performance. Model 1 in Table 3 is the baseline model with only controls. Models 2 and 3 include the total number of expired patents and importance of failures, respectively. The total number of expired patents in Model 3 has a significant and negative effect on output. This is contrary to Hypothesis 1a, which predicted a positive relationship between number of failures and R&D output. Models 4, 5, and 6 contain the number of expired patents after 4, 8, and 12 years, respectively. We included all independent variables except the total number of expired patents (to avoid multicollinearity) in Model 7 (full model). The importance of failures has a negative and significant coefficient in Model 7, contrary to Hypothesis 2a, which predicted a positive relationship between the importance of failures and R&D output. Nor do the results support Hypothesis 3a, that early small failures will lead to a higher R&D output. They suggest no effect of expired patents on output after 4 and 12 years, and a negative effect of expired patents on output after 8 years. Control variables for R&D unit size and geographical diversity have expected signs and are significant at p < 0.001. Contrary to the expectation, the number of alliances has a negative and significant effect on output. With respect to the output measure of R&D performance, we found results contrary to our hypotheses. We discuss the implications of these findings in the subsequent section.

Table 4 provides results with the logarithm of the quality of innovations (measured as the number of citations to the firm's patents) as the dependent variable. As before, Model 1 contains only control variables. Models 2 and 3 include the total number of failures (expired patents) and importance of failures (citations to expired patents), respectively. Model 3 provides support for Hypothesis 1b that there is a positive relationship between a firm's failures and its R&D output quality, as the coefficient on the total

number of expired patents is positive and significant at p < 0.001. Models 4, 5, and 6 contain the number of expired patents after 4, 8, and 12 years, respectively. We included all independent variables except the total number of expired patents (to avoid multicollinearity) in Model 7 (full model). Hypothesis 2b, which predicted a positive relationship between the importance of failures and R&D output quality, is supported, as the coefficient on the importance of failures in Model 7 is positive and significant at p < 0.001. Hypothesis 3b, which predicted that early failures would increase R&D output quality more than later failures, is also supported, as the coefficient on expired patents after 4 years is positive and significant, whereas the coefficient of expired patents after 8 years is not significant, and that of expired patents after 12 years is negative and significant. R&D unit size and number of alliances have a positive effect on innovation quality. Geographic diversity has no effect. In other words, we found support for all of our hypotheses with respect to the quality measure of R&D performance, but not with respect to the output measure.

Based on calculations from Models 3 and 7 (in Tables 3 and 4), pharmaceutical firms produced 0.2% fewer patents while the quality of R&D output increased by 0.3% on average for each failure. Along the same lines, firms reduced the number of patent applications by 0.03% but increased patent quality by 0.04% following a unit increase in the importance of a failure. Firms in our sample filed for approximately 2 fewer patents and received 6 more citations per year for every 10 patents expired. Likewise, for an increase of 100 citations for expired patents, firms produced almost 2 fewer patents, and citations to their patents increased by 8 per year.
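These percentage figures follow from the usual semi-log reading of the coefficients. A brief reconstruction of the arithmetic, using the rounded Model 3 coefficients on total failures in Tables 3 and 4 (so the numbers are approximate):

    % In a semi-log model  ln Y_{it} = \beta F_{i,t-1} + \dots,
    % a one-unit increase in F changes Y by roughly 100*beta percent:
    \frac{\partial \ln Y}{\partial F} = \beta
    \quad\Longrightarrow\quad
    \%\Delta Y \approx 100\,\beta .

    % With beta ~ -0.002 in the output model and beta ~ +0.003 in the quality model:
    \%\Delta(\text{patents}) \approx 100 \times (-0.002) = -0.2\% \text{ per failure},
    \qquad
    \%\Delta(\text{citations}) \approx 100 \times 0.003 = +0.3\% \text{ per failure}.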

Toward a Multilevel Model of Learning from Failure

In order to better understand why an increase in small failures leads to an increase in the quality of R&D (citations) but a decline in R&D output (number of patents), we followed up with interviews with individuals in the pharmaceutical industry. Specifically, we conducted informal interviews with patent attorneys in pharmaceutical firms as well as in-depth interviews with four scientists who have worked with and patented in the organizations in our sample. The interviews were unstructured but all featured the following open-ended questions: "What drives firm patenting behavior?" "In your opinion, why do firms give up patents?" "Are individual


TABLE 2
Descriptive Statistics and Partial Correlations

Variable                        Mean     SD      Min.    Max.      1      2      3      4      5      6      7      8      9     10     11    12
 1. R&D output (log)            3.31     1.55    0.00      7.09    1
 2. R&D output quality (log)    4.36     2.01    0.00      9.46   -0.00   1
 3. Failures (total)           45.67    77.05    1.00   1076       0.08   0.01   1
 4. Failures after 4 years     16.37    25.16    1.00    180      -0.11  -0.02   0.87   1
 5. Failures after 8 years     22.17    37.16    1.00    544      -0.11  -0.05   0.77   0.36   1
 6. Failures after 12 years    17.65    29.37    1.00    368       0.12   0.06   0.57  -0.01   0.68   1
 7. Importance of failures    303      687       0.00   8428      -0.16   0.43  -0.10  -0.07   0.32   0.64   1
 8. Number of alliances         2.82     4.80    0.00     40      -0.07   0.10   0.05   0.07   0.01  -0.18  -0.00   1
 9. R&D unit size             248      389       1.67   2835       0.05  -0.11   0.44   0.46   0.19  -0.05   0.33   0.19   1
10. Number of countries         3.56     2.67    1.00     13.67    0.41   0.29   0.26   0.22  -0.06   0.02   0.08   0.30   0.07   1
11. Technological focus         0.24     0.22    0.03      1.00   -0.13  -0.09  -0.05  -0.05  -0.05  -0.16  -0.12  -0.01  -0.09  -0.04   1
12. Number of litigations       4.41     7.15    0         52      0.03   0.19   0.04  -0.04   0.03   0.20   0.15   0.20   0.01   0.17   0.37   1


TABLE 3
Arellano–Bond Model Estimates for R&D Output

Variable                  Model 1            Model 2            Model 3            Model 4            Model 5            Model 6            Model 7
R&D output t-1 (log)      0.48*** (0.06)     0.54*** (0.06)     0.78*** (0.09)     0.44*** (0.05)     0.68*** (0.08)     0.80*** (0.10)     0.72*** (0.10)
Failures (total)                             -0.005*** (0.00)   -0.002* (0.000)
Importance of failures                                          -0.003*** (0.000)  -0.003*** (0.000)  -0.002** (0.00)    -0.002** (0.000)   -0.003*** (0.000)
Failures after 4 years                                                             -0.003 (0.02)                                            -0.004 (0.003)
Failures after 8 years                                                                                -0.007*** (0.00)                      -0.01*** (0.00)
Failures after 12 years                                                                                                   0.01 (0.004)       0.01 (0.01)
Number of alliances       -0.10*** (0.01)    -0.06*** (0.01)    -0.04*** (0.01)    -0.04*** (0.01)    -0.06*** (0.01)    -0.06*** (0.01)    -0.05*** (0.01)
R&D unit size              0.002*** (0.00)    0.001*** (0.00)    0.001*** (0.00)    0.001*** (0.00)    0.001*** (0.00)    0.001*** (0.00)    0.001*** (0.00)
Number of countries        0.22*** (0.03)     0.23*** (0.03)     0.24*** (0.03)     0.28*** (0.03)     0.25*** (0.03)     0.27*** (0.04)     0.30*** (0.04)
Technological focus        0.24† (0.12)      -0.04 (0.16)       -0.02 (0.18)        0.06 (0.15)       -0.12 (0.20)       -0.15 (0.39)       -2.10** (0.65)
Number of litigations     -0.01 (0.00)       -0.00 (0.00)       -0.00 (0.00)       -0.00 (0.00)        0.00 (0.00)       -0.00 (0.01)       -0.00 (0.01)
Constant                  -0.32* (0.15)      -0.26 (0.17)       -0.44† (0.23)      -0.40** (0.16)     -0.18 (0.22)       -1.05*** (0.30)    -1.44*** (0.33)
Number of instruments      239                269                126                300                143                103                115
Number of observations     1003               859                849                868                753                498                447
Number of groups           90                 83                 81                 81                 81                 74                 71
Wald χ2                    1177               1138               973                1210               1018               835                876
p > χ2                     0.0000             0.0000             0.0000             0.0000             0.0000             0.0000             0.0000

a. Numbers of observations are different across different models because of the difference in the number of observations for expired patents after 4, 8, and 12 years. For example, patents that expired after 4 years first appeared in the year 1985, whereas patents that expired after 8 years first made it into our dataset in the year 1989, leading to a lower number of observations. Following the same logic, the number of observations in models with the variable "failures after 12 years" is even fewer.
b. Standard errors in parentheses.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001


TABLE 4
Arellano–Bond Model Estimates for R&D Output Quality

Variable                        Model 1           Model 2           Model 3           Model 4           Model 5           Model 6           Model 7
R&D output quality t-1 (log)    0.60*** (0.03)    0.44*** (0.04)    0.36*** (0.04)    0.37*** (0.04)    0.59*** (0.04)    0.25*** (0.05)    0.20*** (0.05)
Failures (total)                                  0.007*** (0.00)   0.003** (0.00)
Importance of failures                                              0.005*** (0.00)   0.006*** (0.00)   0.002** (0.000)   0.004*** (0.00)   0.004*** (0.00)
Failures after 4 years                                                                0.01** (0.00)                                         0.004† (0.002)
Failures after 8 years                                                                                  0.005* (0.002)                     -0.002 (0.002)
Failures after 12 years                                                                                                  -0.007** (0.00)   -0.01** (0.002)
Number of alliances             0.05*** (0.01)    0.05*** (0.01)    0.04** (0.01)     0.04** (0.01)     0.04*** (0.01)    0.01 (0.01)       0.02† (0.01)
R&D unit size                   0.00 (0.00)       0.00 (0.00)       0.001*** (0.00)   0.001*** (0.00)   0.00 (0.00)       0.00 (0.00)       0.00 (0.00)
Number of countries             0.01 (0.03)      -0.00 (0.03)      -0.03 (0.03)      -0.04 (0.03)       0.00 (0.02)      -0.03 (0.03)      -0.04 (0.03)
Technological focus            -0.52* (0.20)     -0.52* (0.24)     -0.59** (0.22)    -0.61** (0.21)    -0.16 (0.17)      -0.42 (0.30)      -0.18* (0.49)
Number of litigations           0.01† (0.00)      0.01† (0.00)      0.01† (0.00)      0.01† (0.00)      0.01 (0.01)       0.00 (0.00)       0.01† (0.00)
Constant                        1.76*** (0.19)    2.29*** (0.20)    2.69*** (0.20)    2.59*** (0.21)    2.11*** (0.25)    4.28*** (0.31)    4.58*** (0.32)
Number of instruments           172               188               204               204               103               211               270
Number of observations          936               813               805               822               778               500               451
Number of groups                86                82                80                81                82                73                71
Wald χ2                         780               853               943               936               915               178               180
p > χ2                          0.0000            0.0000            0.0000            0.0000            0.0000            0.0000            0.0000

a. Numbers of observations are different across different models because of the difference in the number of observations for expired patents after 4, 8, and 12 years. For example, patents that expired after 4 years first appeared in the year 1985, whereas patents that expired after 8 years first made it into our dataset in the year 1989, leading to a lower number of observations. Following the same logic, the number of observations in models with the variable "failures after 12 years" is even fewer.
b. Standard errors in parentheses.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001


scientists aware that firms have not renewed their patents, and how do they react to such information?" We refrained from asking leading questions that would either support or negate our hypotheses. Below are some conclusions drawn from our interviews.

First, decisions to file and renew patents are made at the firm level, through the intellectual property (IP) office. With respect to patent applications, all interviewees confirmed what prior research had documented. IP is critical for competitive advantage, and firms develop specific IP strategies to protect the drugs that they hope to launch in the market. Scientists are asked to file disclosures with the IP office as soon as they have reached a milestone in their work. This could be a new molecule, a new mechanism, or a new application. The IP office makes decisions as to whether the firm will file for a patent or not. The decision to file for a patent in large, established firms is never made by the individual scientist.

Similarly, the decision to renew (or not) a patent is also made by the firm's IP office. Typically such decisions are based on a multitude of factors that include but are not limited to the performance of the R&D program/laboratory from which the patent originates (this can include failure in field trials, additional patents that negate earlier work, and patents from competitors that negate the work done by the firm) and balancing of the overall IP portfolio in line with its corporate strategy. It is the patent attorneys who compile and distill the knowledge from failed patents and redirect the firm's patent portfolio. One interviewee suggested that one of the many responsibilities of a patent counsel is to look for commonalities between failed patents and existing patents to see if the existing portfolio can be improved. Interviewees concurred that individual scientists generally do not receive feedback about each discontinued patent, but they do receive information about future research and new technologies from the IP office.

How do non-renewed patents influence the scientists' work? The interviewees said that the non-renewal of a single patent was not a major decision either for the firm or for the scientist. One of the scientists we interviewed had a patent that was not renewed but he did not seem too concerned about it. That said, the same scientist mentioned that if many patents from the same inventor were not renewed, this would serve as feedback to the inventor. Also, most scientists are not likely to change their short-term behavior based on the non-renewal of their

patents; that is, they would continue to turn in patent disclosures to the IP office even though the firm could choose to reduce filing patents from them. All scientists explained that the IP office worked with the respective program/laboratory managers to collect and analyze information, and decide whether particular programs were effective (i.e., created knowledge that could lead to IP that was useful). There was less of an emphasis on telling individual scientists to reduce or increase patent disclosures in particular areas.

Data from these interviews suggest a multilevel perspective of organizational learning in the pharmaceutical industry. The R&D process can be conceived as a two-step process: Patentable ideas are generated in the first step, and these ideas are filtered (at multiple gates) at the second step, akin to an innovation tournament (Terwiesch & Ulrich, 2009). While individual scientists are the ones who work on research programs and produce patentable work, it is at the firm level (through the IP office) that these ideas are filtered based on value and place in the corporate R&D portfolio. The accumulated feedback is used to improve the firm's filters in selecting projects with higher expected returns. Adaptation of the firm's selection filters occurs relatively quickly, since IP officers constantly analyze past failures, have a clear view of the firm's overall R&D strategy, and can execute non-renewals relatively easily. In contrast, this knowledge may not immediately influence the generation of patents for several reasons. First, individual scientists may not necessarily possess the knowledge about the overall health and direction of the research portfolio, and where their work stands relative to this portfolio. They may receive feedback from failed patents as a signal to change research direction only after several such failures have accumulated. Second, even when scientists receive such feedback, the path-dependent nature of the search process (e.g., Levinthal & March, 1993; March & Simon, 1958; Nelson & Winter, 1982) is likely to lead them to continue working on what they did before. As put by one of our interviewees, "a general manager wants his scientists to start working on new technologies more often than scientists are willing to." In other words, starting a brand new research agenda takes time. Third, scientists' personal motives and incentives may cause them to receive the feedback in an unfavorable way. Prior work suggests that scientists value independence and intrinsic rewards, and their output is highly correlated with the perceived importance and value of their work (e.g., Sauermann & Cohen, 2010). A concern


with job security may cause threat–rigidity responses (Staw, Sandelands, & Dutton, 1981) and reduce output (Sauermann & Cohen, 2010). Feedback about non-renewals may therefore cause scientists to struggle with output.

In sum, the multilevel process of learning in the R&D process operates as follows. While the selection mechanism operates at the firm level and adapts to negative feedback relatively promptly through non-renewals, the idea-generation mechanism operates at the individual scientist level and is slower to adapt to negative feedback. As a result, while IP officers are able to influence the overall quality of the R&D portfolio through non-renewals, they have little influence on the generation of new ideas. The lag in transferring knowledge to scientists, combined with the path-dependent nature of search and scientists' motives, causes a dip in R&D output as individual scientists slowly adapt to incoming feedback.

An examination of the lag structure for our two dependent variables is consistent with this model. In supplementary analyses of different lag structures, we found that the decline in R&D output was more immediate, but the impact on R&D output quality was more gradual. This suggests that selection processes adapt to feedback from failures more promptly, causing a decline in the number of patents, followed by an increase in R&D output quality. Idea generation follows with a lag, and thus fails to compensate for the decline in R&D output in the short run.

Alternative Explanations and Robustness

We conducted several tests in order to rule out alternative explanations. A first potential explanation is that our results are due to the risk preferences of the organizational decision makers and not due to learning. Theory suggests two possible ways in which firms' risk preferences may influence their responses to failure. On the one hand, performance feedback theory suggests that firms are likely to become more risk seeking following a deviation from their aspiration level (Cyert & March, 1963; Greve, 2003). On the other hand, firms may move away from riskier projects by increasing their risk threshold due to the "hot-stove effect" (Denrell & March, 2001). In order to see if firms changed their criteria for project choice in response to failure, we examined whether and how small failures at time t - 1 influenced the total number of subclasses and number of novel subclasses in which the firms patented at time t. The

total numbers of subclasses and novel subclasses represent the breadth of the firm's search, and have been used as measures of the technological uncertainty involved in the search process as well as of the level of technological exploration (Fleming, 2001; Rosenkopf & Nerkar, 2001). The results show no significant relationship between the number of total subclasses or novel subclasses and the number of failures, providing preliminary evidence that firms do not become significantly more or less risk seeking in their project choice in response to failure.
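A minimal sketch of how breadth measures of this kind are typically built from patent-class data is shown below; the column names, and the convention that a subclass is "novel" in the first year the firm patents in it, are illustrative assumptions rather than the paper's exact operationalization:

    import pandas as pd

    # Hypothetical patent data: one row per patent-subclass assignment.
    patents = pd.DataFrame({
        "firm":     ["A", "A", "A", "A", "B", "B"],
        "year":     [1990, 1990, 1991, 1991, 1990, 1991],
        "subclass": ["424/085", "514/002", "514/002", "435/069", "424/085", "424/085"],
    }).sort_values(["firm", "year"])

    # Total breadth: distinct subclasses the firm patented in during the year.
    total_subclasses = (patents.groupby(["firm", "year"])["subclass"]
                        .nunique().rename("total_subclasses"))

    # Novel breadth: subclasses the firm had never patented in before that year.
    first_seen = patents.groupby(["firm", "subclass"])["year"].transform("min")
    novel = patents[patents["year"] == first_seen]
    novel_subclasses = (novel.groupby(["firm", "year"])["subclass"]
                        .nunique().rename("novel_subclasses"))

    print(pd.concat([total_subclasses, novel_subclasses], axis=1).fillna(0))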

A second alternative explanation is that, even though we observe premature patent expirations as small failures, they may in fact be components of a larger, systemic failure, such as a failure of a research agenda or a drug that is already in the market. As explained above, we controlled for both the number of litigations involving the firm and the technological focus of the failed patents to rule out this possibility. According to Tables 3 and 4, the former variable was not significant in either model, while the latter was negatively significant in both the R&D output and quality models, suggesting that firms might struggle in R&D activities when failures are more concentrated in certain research areas. Still, these controls did not influence our core results that past failures decreased R&D output and increased R&D output quality.

Next, we address the alternative explanation that not all patent expirations are failures, and not all continuations are successes. Firms may renew patents for competitive or legal reasons, such as fending off a threat of litigation or for cross-licensing. We find it unlikely that such alternative explanations will systematically bias our results for several reasons. First, competitive and legal uses of patents in the pharmaceutical industry are limited compared to other industries such as electronics or computers (Hall, Helmers, von Graevenitz, & Rosazza-Bondibene, 2013; Lehman, 2003). Second, our conversations with patent attorneys in pharmaceutical firms supported the notion of failed patents as a vehicle for learning in a way consistent with our results. Third, we did not find any support for such uses in our empirical design, since the number of litigations was not significant or weakly significant in our models. Finally, the noise in our measure of failure is likely to reduce the strength of our results, making our estimates conservative.

Another alternative explanation is that firms may learn from the experiences of all other firms, not just their own. This would suggest a different mechanism of vicarious learning rather than the one we put forth.


In unreported models, we controlled for the number, importance, and timing of failures of all firms in the industry other than the focal firm. These population-level controls revealed some interesting patterns. For instance, the total number of failures at the population level is associated with a drop in the firm's R&D output, but does not significantly affect the firm's R&D output quality, possibly suggesting that firms may find it harder to learn from other firms' failures. At the same time, these controls did not significantly change our results, ruling out population-level learning as an alternative explanation.

We also tested whether learning from failures in our context is cumulative in nature. Since previous values of the variables are used as instruments to estimate current values of both the dependent and the independent variables in the Arellano–Bond model, using cumulative measures can potentially lead to overestimation of the contribution of the previous periods. Instead, we used the cumulative measure of our independent variables with an annual forgetting factor of 0.8 in the fixed effects negative binomial model (Darr, Argote, & Epple, 1995). The results are consistent with those of the previous models.
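The depreciated experience stock referenced here can be sketched as a simple recursion, following the general form in Darr, Argote, and Epple (1995). The data below are hypothetical, and because the text's "forgetting factor of 0.8" could denote either the retained or the forgotten share, the sketch simply treats 0.8 as the weight placed on last year's stock:

    import pandas as pd

    def depreciated_stock(annual_values, weight_on_past=0.8):
        """Cumulative stock K_t = x_t + weight_on_past * K_{t-1}, starting from zero."""
        stock, out = 0.0, []
        for x in annual_values:
            stock = x + weight_on_past * stock
            out.append(stock)
        return out

    # Hypothetical firm-year failure counts.
    df = pd.DataFrame({
        "firm": ["A"] * 4 + ["B"] * 4,
        "year": [1990, 1991, 1992, 1993] * 2,
        "failures": [2, 0, 5, 1, 0, 3, 3, 0],
    }).sort_values(["firm", "year"])

    df["failure_stock"] = (df.groupby("firm")["failures"]
                             .transform(lambda s: depreciated_stock(s.tolist())))
    print(df)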

As we only observed discontinuations at the legally determined 4, 8, and 12 years, we may be missing the time at which discontinuation decisions were actually made. For instance, it is possible that a molecule associated with a patent fails as much as four years earlier than we observe it in our data. In order to take this possibility into account, we assumed that the discontinuation decisions were made an average of two years before we observed them, and used independent variables from t - 2 to test our hypotheses. This test led to fewer observations, but the results were robust.

We also tested for the presence of curvilinearity in our models. Although we find no support for a curvilinear relationship between the number and importance of failures and R&D output, we do find the presence of a curvilinear relationship between the number and importance of failures and R&D output quality. However, the inflection point of the curve is estimated to be at 5,263 patent expirations, well beyond the maximum of 1,076 patent expirations observed in our sample. We also repeated our models with negative binomial regression models with firm fixed effects (Hausman, Hall, & Griliches, 1984; Henderson & Cockburn, 1994), and found that the results are similar to those from previous specifications.
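The location of that turning point follows from the standard quadratic specification; a short reconstruction with generic symbols (the estimated curvilinear coefficients themselves are not reported in the text):

    % With linear and squared failure terms,  ln Y = beta_1 F + beta_2 F^2 + ...,
    % the fitted curve turns where the marginal effect is zero:
    \frac{\partial \ln Y}{\partial F} = \beta_1 + 2\beta_2 F = 0
    \quad\Longrightarrow\quad
    F^{*} = -\frac{\beta_1}{2\beta_2}.
    % The estimated F* of about 5,263 expirations lies far above the sample
    % maximum of 1,076, so the relationship is effectively monotonic in-sample.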

In short, the empirical patterns that we observed in the data, the supplementary evidence ruling out alternative mechanisms, and the qualitative evidence combined suggest that our results reflect learning processes, and that learning from failure in pharmaceutical firms is a multilevel process. Project-selection mechanisms at the firm level adapt to feedback from failed patents faster than generation mechanisms at the individual scientist level, causing us to observe an increase in R&D output quality and a decline in R&D output.

DISCUSSION AND CONCLUSION

Previous research on learning from past experience, including failures, has argued that experience in general leads to higher productivity and lower unit cost in manufacturing and service industries (Argote, 1996; Argote & Epple, 1990). However, results obtained in the current study tell a different story. Specifically, we found that, as the number of failures increases, R&D output, measured as the number of patents filed by the firm, decreases, whereas the quality of the patents, measured as citations to those patents, increases. Similarly, the importance and timing of small failures cause opposing changes in R&D output and R&D output quality. This difference in the effect of number, timing, and importance of failures on R&D output and quality points to the potentially different drivers of these two dimensions of R&D performance.

We offered a potential reconciliation of our results based on our interviews with industry participants. As opposed to many studies that depict organizations as monolithic entities, our interviews suggest a multilevel model of learning, in line with process models of resource allocation (Bower, 1970; Burgelman, 1983, 1991; Noda & Bower, 1996; see Gaba & Joseph, 2013, for a consideration of divergent aspiration levels at multiple levels). To reiterate, we suggest that patent generation occurs at the individual scientist level, whereas patent selection for the overall portfolio takes place at the level of the IP office. While the selection mechanisms adjust more easily to feedback from failed projects, the idea-generation process adjusts more slowly, due to the lag in relaying feedback to individual scientists, the path dependency of search processes, and the motives of individual scientists. As a result, fast adaptation of selection leads to higher-quality patents, whereas slow adaptation of generation leads to an immediate drop in R&D output.


We contribute to the organizational learning literature in the following ways. First, our study confirms the critical role of failures for organizational performance and adds to prior studies by examining small failures in experimentation, an understudied but important type of failure (e.g., Sitkin, 1992). We show in this paper how such failures can provide firms with critical feedback in exploration. Data presented in Figure 1 suggest that learning from failures did not reduce the number of subsequent failures in our study, as opposed to the patterns in other types of failures covered in prior research (e.g., Haunschild & Rhee, 2004; Haunschild & Sullivan, 2002; Madsen & Desai, 2010). This may suggest that firms do not use small failures to increase the reliability of their processes but to expand their search and identify new directions. This interesting observation underscores the unique nature of small failures in experimentation and the importance of studying them separately from other failures.

In addition, our finding that small failures lead to a decrease in R&D output but an increase in quality is a novel contribution to the literature on innovation. While prior studies on R&D outcomes have examined different factors influencing R&D outcomes, the role of prior failures has so far been neglected. Our findings on the timing and importance of small failures also provide novel insights on how firms learn from failures in the context of innovation and add to the understanding of the innovation process.

We believe there are certain boundary conditions for these findings. For instance, firms in the pharmaceutical industry are faced with extreme uncertainty and long development times. The outcomes of the R&D process are highly skewed, with few projects generating most of the returns. The abundance of generated ideas and the need to filter them necessitate the separation of idea-generation and selection processes. Our results are likely to hold in other innovative industries with similar characteristics, such as venture capital, corporate venturing, or creative endeavors such as film production. Future studies can examine to what extent these results are generalizable to other R&D-intensive industries. Do firms in different industries perceive patents differently in terms of their value? Do such firms vary in how they learn from such failures? These are some of the questions to be explored to broaden the understanding of learning from small failures in the context of innovation.

Our findings also have implications for practicing R&D managers. While the study by no means recommends failures, it encourages experimentation and conscious attention to the evaluation of patents as early as possible. A higher rate of experimentation will increase the number of failures but also the likelihood of finding the right bet. Similarly, our findings suggest that firms should try to remove subpar projects from their portfolio as early as possible, and not be afraid to eliminate important projects from their portfolios.

Despite its contributions, the study has several limitations. First, we could not observe the underlying reasons for patent failures or discriminate between these underlying reasons as determinants of learning. We leave it to future studies to disentangle the relationships between different reasons for patent expirations and the subsequent learning outcomes. At the same time, the process by which these failures lead to improvements in R&D performance also goes unobserved. While we believe that the improvements are both in ongoing projects as well as in the reallocation of R&D resources, we did not distinguish between these two mechanisms and their relative importance in this study. Future studies can greatly enhance our understanding of the underlying mechanisms governing learning from failures. Last, future studies could provide a more robust analysis of the multilevel model of learning at the idea-generation and selection steps.

It is worth noting that patent expirations represent a distinct type of failure in the pharmaceutical R&D process. By studying all patents, as opposed to those that failed in clinical trials, we are able to get a more complete picture of small failures in experimentation. It is also possible that some ideas fail even before reaching the patenting stage. Given the high rate of patenting in the pharmaceutical industry, these early ideas probably show little promise to begin with. Future studies may find it fruitful to compare these different kinds of failures in terms of learning outcomes.

REFERENCES

Acs, Z., & Audretsch, D. B. 1989. Patents as a measure of innovative activity. Kyklos, 42: 171–180.

Ahuja, G. 2000. Collaboration networks, structural holes, and innovation: A longitudinal study. Administrative Science Quarterly, 45: 425–455.

Ahuja, G., & Katila, R. 2001. Technological acquisitions and the innovation performance of acquiring firms: A longitudinal study. Strategic Management Journal, 22: 197–220.


Ahuja, G., & Lampert, C. M. 2001. Entrepreneurship in the large corporation: A longitudinal study of how established firms create breakthrough inventions. Strategic Management Journal, 22: 521–543.

Anand, J., Oriani, R., & Vassolo, R. S. 2010. Alliance activity as a dynamic capability in the face of a discontinuous technological change. Organization Science, 21: 1213–1232.

Arellano, M., & Bond, S. 1991. Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. The Review of Economic Studies, 58: 277–297.

Argote, L. 1996. Organizational learning curves: Persistence, transfer and turnover. International Journal of Technology Management, 11: 759–769.

Argote, L., & Epple, D. 1990. Learning curves in manufacturing. Science, 247: 920–924.

Arundel, A., & Kabla, I. 1998. What percentage of innovations are patented? Empirical estimates for European firms. Research Policy, 27: 127–141.

Audia, P. G., Locke, E. A., & Smith, K. G. 2000. The paradox of success: An archival and a laboratory study of strategic persistence following radical environmental change. Academy of Management Journal, 43: 837–853.

Basberg, B. L. 1982. Technological change in the Norwegian whaling industry: A case study in the use of patent statistics as a technology indicator. Research Policy, 11: 163–171.

Baum, J. A. C., & Dahlin, K. B. 2007. Aspiration performance and railroads' patterns of learning from train wrecks and crashes. Organization Science, 18: 368–385.

Baum, J. A. C., & Ingram, P. 1998. Survival-enhancing learning in the Manhattan hotel industry, 1898–1980. Management Science, 44: 996–1016.

Baumard, P., & Starbuck, W. H. 2005. Learning from failures: Why it may not happen. Long Range Planning, 38: 281–298.

Bower, J. 1970. Managing the resource allocation process. Homewood, IL: Irwin.

Burgelman, R. A. 1983. A process model of internal corporate venturing in the diversified major firm. Administrative Science Quarterly, 28: 223–244.

Burgelman, R. A. 1991. Intraorganizational ecology of strategy making and organizational adaptation: Theory and field research. Organization Science, 2: 239–262.

Cannon, M. D., & Edmondson, A. C. 2005. Failing to learn and learning to fail (intelligently): How great organizations put failure to work to innovate and improve. Long Range Planning, 38: 299–319.

Cardinal, L. B. 2001. Technological innovation in the pharmaceutical industry: The use of organizational control in managing research and development. Organization Science, 12: 19–36.

Chaganti, R., & Damanpour, F. 1991. Institutional ownership, capital structure, and firm performance. Strategic Management Journal, 12: 479–491.

Cockburn, I., & Griliches, Z. 1988. Industry effects and appropriability measures in the stock market's valuation of R&D and patents. The American Economic Review, 78: 419–423.

Cockburn, I., & Henderson, R. M. 1998. Absorptive capacity, coauthoring behavior, and the organization of research in drug discovery. Journal of Industrial Economics, 46: 157–182.

Cohen, W. M., & Levinthal, D. A. 1990. Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35: 128–152.

Cohen, W. M., Nelson, R. R., & Walsh, J. P. 2000. Protecting their intellectual assets: Appropriability conditions and why U.S. manufacturing firms patent (or not). Cambridge, MA: National Bureau of Economic Research.

Collins, J. C., & Porras, J. 1994. Built to last: Successful habits of visionary companies. New York, NY: HarperBusiness.

Comanor, W. S., & Scherer, F. M. 1969. Patent statistics as a measure of technical change. Journal of Political Economy, 77: 392–398.

Cyert, R. M., & March, J. G. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice Hall.

Darr, E. D., Argote, L., & Epple, D. 1995. The acquisition, transfer, and depreciation of knowledge in service organizations: Productivity in franchises. Management Science, 41: 1750–1762.

David, P., Yoshikawa, T., Chari, M. D. R., & Rasheed, A. A. 2006. Strategic investments in Japanese corporations: Do foreign portfolio owners foster underinvestment or appropriate investment? Strategic Management Journal, 27: 591–600.

Davidson, R., & MacKinnon, J. G. 1993. Estimation and inference in econometrics (2nd ed.). New York, NY: Oxford University Press.

DeCarolis, D. M. 2003. Competencies and imitability in the pharmaceutical industry: An analysis of their relationship with firm performance. Journal of Management, 29: 27–50.

DeCarolis, D. M., & Deeds, D. L. 1999. The impact of stocks and flows of organizational knowledge on firm performance: An empirical investigation of the biotechnology industry. Strategic Management Journal, 20: 953–968.


Denrell, J., Fang, C., & Levinthal, D. A. 2004. From T-mazes to labyrinths: Learning from model-based feedback. Management Science, 50: 1366–1378.

Denrell, J., & March, J. G. 2001. Adaptation as information restriction: The hot stove effect. Organization Science, 12: 523–538.

Edmondson, A. C. 2002. The local and variegated nature of learning in organizations: A group-level perspective. Organization Science, 13: 128–146.

Edmondson, A. C. 2011. Strategies of learning from failure. Harvard Business Review, 89: 48–55.

Eggers, J. P. 2012a. Falling flat: Failed technologies and investment under uncertainty. Administrative Science Quarterly, 57: 47–80.

Eggers, J. P. 2012b. All experience is not created equal: Learning, adapting, and focusing in product portfolio management. Strategic Management Journal, 33: 315–335.

Fleming, L. 2001. Recombinant uncertainty in technological search. Management Science, 47: 117–132.

Fleming, L. 2002. Finding the organizational sources of technological breakthroughs: The story of Hewlett-Packard's thermal ink-jet. Industrial and Corporate Change, 11: 1059–1084.

Fleming, L., & Sorenson, O. 2004. Science as a map in technological search. Strategic Management Journal, 25: 909–928.

Gaba, V., & Joseph, J. 2013. Corporate structure and performance feedback: Aspirations and adaptation in M-form firms. Organization Science, 24: 1102–1119.

Gambardella, A. 1992. Competitive advantages from in-house scientific research: The U.S. pharmaceutical industry in the 1980s. Research Policy, 21: 391–407.

Gertner, J. 2014, May. The truth about Google X: An exclusive look behind the secretive lab's closed doors. Fast Company. http://www.fastcompany.com/3028156/united-states-of-innovation/the-google-x-factor. Accessed April 30, 2014.

Gilbert, R., & Shapiro, C. 1990. Optimal patent length and breadth. The RAND Journal of Economics, 21: 106–112.

Grabowski, H. G. 2002. Patents, innovation, and access to new pharmaceuticals. Journal of International Economic Law, 5: 849–860.

Grabowski, H. G., & Vernon, J. M. 1992. Brand loyalty, entry, and price competition in pharmaceuticals after the 1984 Drug Act. Journal of Law & Economics, 35: 331–350.

Greve, H. R. 2003. Organizational learning from performance feedback: A behavioral perspective on innovation and change. Cambridge, England: Cambridge University Press.

Griliches, Z. 1990. Patent statistics as economic indicators: A survey. Journal of Economic Literature, 28: 1661–1707.

Griliches, Z. 1994. Productivity, R&D, and the data constraint. American Economic Review, 84: 1–23.

Guler, I., & Nerkar, A. 2012. The impact of global and local cohesion on innovation in the pharmaceutical industry. Strategic Management Journal, 33: 535–549.

Hall, B., Helmers, C., von Graevenitz, G., & Rosazza-Bondibene, C. 2013. Technology entry in the presence of patent thickets. Cambridge, MA: National Bureau of Economic Research.

Haunschild, P. R., & Rhee, M. 2004. The role of volition in organizational learning: The case of automotive product recalls. Management Science, 50: 1545–1560.

Haunschild, P. R., & Sullivan, B. N. 2002. Learning from complexity: Effects of prior accidents and incidents on airlines' learning. Administrative Science Quarterly, 47: 609–643.

Hausman, J. A., Hall, B. H., & Griliches, Z. 1984. Econometric models for count data with an application to the patents–R&D relationship. Cambridge, MA: National Bureau of Economic Research.

Hayward, M. L. A. 2002. When do firms learn from their acquisition experience? Evidence from 1990–1995. Strategic Management Journal, 23: 21–39.

Heled, Y. 2012. Why primary patents covering biologics should be unenforceable against generic applicants under the Biologics Price Competition and Innovation Act. Annals of Health Law, 21: 211–222.

Henderson, A. D., & Stern, I. 2004. Selection-based learning: The coevolution of internal and external selection in high-velocity environments. Administrative Science Quarterly, 49: 39–75.

Henderson, R., & Cockburn, I. 1994. Measuring competence: Exploring firm effects in pharmaceutical research. Strategic Management Journal, 15: 63–84.

Henderson, R., & Cockburn, I. 1996. Scale, scope, and spillovers: The determinants of research productivity in drug discovery. The RAND Journal of Economics, 27: 32–59.

Hoffman, A. J., & Ocasio, W. 2001. Not all events are attended equally: Toward a middle-range theory of industry attention to external events. Organization Science, 12: 414–434.

Holtz-Eakin, D., Newey, W., & Rosen, H. S. 1988. Estimating vector autoregressions with panel data. Econometrica, 56: 1371–1395.

Jaffe, A. B. 1986. Technological opportunity and spillovers of R&D: Evidence from firms' patents, profits and


market value. The American Economic Review, 76: 984–1001.

Jaffe, A. B., Trajtenberg, M., & Henderson, R. 1993. Geographic localization of knowledge spillovers as evidenced by patent citations. The Quarterly Journal of Economics, 108: 577–598.

Jordan, A. H., & Audia, P. G. 2012. Self-enhancement and learning from performance feedback. Academy of Management Review, 37: 211–231.

Kelly, N. J. 2002. The nature and degree of bias in lagged dependent variable models. Paper presented at the annual meeting of the Southern Political Science Association, Savannah, Georgia.

Kettle, K. L., & Haubl, G. 2010. Motivation by anticipation: Expecting rapid feedback enhances performance. Psychological Science, 21: 545–547.

Kim, J. Y., & Miner, A. S. 2007. Vicarious learning from the failures and near-failures of others: Evidence from the U.S. commercial banking industry. Academy of Management Journal, 50: 687–714.

Klemperer, P. 1990. How broad should the scope of patent protection be? The RAND Journal of Economics, 21: 113–130.

Knott, A. M., Posen, H. E., & Wu, B. 2009. Spillover asymmetry and why it matters. Management Science, 55: 373–388.

Kobrin, S. J. 1991. An empirical analysis of the determinants of global integration. Strategic Management Journal, 12: 17–31.

Lahiri, N. 2010. Geographic distribution of R&D activity: How does it affect innovation quality? Academy of Management Journal, 53: 1194–1209.

Lee, F., Edmondson, A. C., Thomke, S., & Worline, M. 2004. The mixed effects of inconsistency on experimentation in organizations. Organization Science, 15: 310–326.

Lehman, B. 2003. The pharmaceutical industry and the patent system. Washington, D.C.: International Intellectual Property Institute.

Levin, R. C., Klevorick, A. K., Nelson, R. R., Winter, S. G., Gilbert, R., & Griliches, Z. 1987. Appropriating the returns from industrial research and development. Brookings Papers on Economic Activity, 1987: 783–831.

Levinthal, D. A., & March, J. G. 1993. The myopia of learning. Strategic Management Journal, 14: 95–112.

Levinthal, D., & Rerup, C. 2006. Crossing an apparent chasm: Bridging mindful and less-mindful perspectives on organizational learning. Organization Science, 17: 502–513.

Madsen, P. M., & Desai, V. 2010. Failing to learn? The effects of failure and success on organizational learning in the global orbital launch vehicle industry. Academy of Management Journal, 53: 451–476.

March, J. G. 1981. Footnotes to organizational change. Administrative Science Quarterly, 26: 563–577.

March, J. G. 1991. Exploration and exploitation in organizational learning. Organization Science, 2: 71–87.

March, J. G., & Simon, H. A. 1958. Organizations. New York, NY: Wiley.

March, J. G., Sproull, L. S., & Tamuz, M. 1991. Learning from samples of one or fewer. Organization Science, 2: 1–13.

McFadyen, M. A., & Cannella, A. A. 2004. Social capital and knowledge creation: Diminishing returns of the number and strength of exchange relationships. Academy of Management Journal, 47: 735–746.

McGrath, R. G. 2011. Failing by design. Harvard Business Review, 89: 77–83.

Milanov, H., & Shepherd, D. A. 2013. The importance of the first relationship: The ongoing influence of initial network on future status. Strategic Management Journal, 34: 727–750.

Miner, A. S., Kim, J. Y., Holzinger, I. W., & Haunschild, P. 1999. Fruits of failure: Organizational failure and population-level learning. In J. A. C. Baum, A. S. Miner, & P. Anderson, Advances in strategic management, vol. 16: Population-level learning and industry change: 187–220.

Nelson, R. R., & Winter, S. G. 1982. An evolutionary theory of economic change. Cambridge, MA: Belknap Press.

Nerkar, A., & Roberts, P. W. 2004. Technological and product–market experience and the success of new product introductions in the pharmaceutical industry. Strategic Management Journal, 25: 779–799.

Nicholls-Nixon, C. L., & Woo, C. Y. 2003. Technology sourcing and output of established firms in a regime of encompassing technological change. Strategic Management Journal, 24: 651–666.

Noda, T., & Bower, J. L. 1996. Strategy making as iterated processes of resource allocation. Strategic Management Journal, 17(S1): 159–192.

Nohria, N., & Gulati, R. 1996. Is slack good or bad for innovation? Academy of Management Journal, 39: 1245–1265.

Paruchuri, S., Nerkar, A., & Hambrick, D. C. 2006. Acquisition integration and productivity losses in the technical core: Disruption of inventors in acquired companies. Organization Science, 17: 545–562.


Pavitt, K. 1988. Uses and abuses of patent statistics. In A. F. J. van Raan (Ed.), Handbook of quantitative studies of science and technology: 509–536. Amsterdam, The Netherlands: Elsevier.

Penner-Hahn, J. D., & Shaver, J. M. 2005. Does international research and development increase patent output? An analysis of Japanese pharmaceutical firms. Strategic Management Journal, 26: 121–140.

PhRMA. 2007. Drug discovery and development: Understanding the R&D process. Washington, D.C.: Pharmaceutical Research and Manufacturers of America.

Polidoro, F., & Toh, P. K. 2011. Letting rivals come close or warding them off? The effects of substitution threat on imitation deterrence. Academy of Management Journal, 54: 369–392.

Rerup, C. 2009. Attentional triangulation: Learning from unexpected rare crises. Organization Science, 20: 876–893.

Rosenkopf, L., & Nerkar, A. 2001. Beyond local search: Boundary-spanning, exploration, and impact in the optical disk industry. Strategic Management Journal, 22: 287–306.

Rothaermel, F. T., & Thursby, M. 2007. The nanotech versus the biotech revolution: Sources of productivity in incumbent firm research. Research Policy, 36: 832–849.

Ruef, M., & Patterson, K. 2009. Credit and classification: The impact of industry boundaries in nineteenth-century America. Administrative Science Quarterly, 54: 486–520.

Sampson, R. C. 2007. R&D alliances and firm performance: The impact of technological diversity and alliance organization on innovation. Academy of Management Journal, 50: 364–386.

Sargan, J. D. 1958. The estimation of economic relationships using instrumental variables. Econometrica, 26: 393–415.

Sauermann, H., & Cohen, W. M. 2010. What makes them tick? Employee motives and firm innovation. Management Science, 56: 2134–2153.

Scherer, F. M., & Ross, D. 1990. Industrial market structure and economic performance (3rd ed.). Boston, MA: Houghton Mifflin.

Schumpeter, J. A. 1934. The theory of economic development: An inquiry into profits, capital, credit, interest, and the business cycle, vol. XLVI. Cambridge, MA: Harvard University Press.

Scott Morton, F. M. 2000. Barriers to entry, brand advertising, and generic entry in the U.S. pharmaceutical industry. International Journal of Industrial Organization, 18: 1085–1104.

Serrano, C. J. 2010. The dynamics of the transfer and renewal of patents. The RAND Journal of Economics, 41: 686–708.

Sitkin, S. B. 1992. Learning through failure: The strategy of small losses. Research in Organizational Behavior, 14: 231–266.

Skinner, B. F. 1954. The science of learning and the art of teaching. Harvard Educational Review, 24: 86–97.

Somaya, D., Williamson, I. O., & Zhang, X. 2007. Combining patent law expertise with R&D for patenting performance. Organization Science, 18: 922–937.

Sorensen, J. B., & Stuart, T. E. 2000. Aging, obsolescence, and organizational innovation. Administrative Science Quarterly, 45: 81–112.

Staw, B. M. 1976. Knee-deep in the big muddy: A study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16: 27–44.

Staw, B. M., Sandelands, L. E., & Dutton, J. E. 1981. Threat rigidity effects in organizational behavior: A multilevel analysis. Administrative Science Quarterly, 26: 501–524.

Stern, S. 2004. Do scientists pay to be scientists? Management Science, 50: 835–853.

Stock, J. H., & Yogo, M. 2005. Testing for weak instruments in linear IV regression. In D. W. K. Andrews & J. H. Stock (Eds.), Identification and inference for econometric models: Essays in honor of Thomas Rothenberg: 80–108. Cambridge, England: Cambridge University Press.

Terwiesch, C., & Ulrich, K. 2009. Innovation tournaments: Creating and selecting exceptional opportunities. Boston, MA: Harvard Business School Press.

Terwiesch, C., & Xu, Y. 2008. Innovation contests, open innovation, and multiagent problem solving. Management Science, 54: 1529–1543.

Thomke, S. 2003. Experimentation matters: Unlocking the potential of new technologies for innovation. Boston, MA: Harvard Business School Press.

Thomke, S., & Kuemmerle, W. 2002. Asset accumulation, interdependence and technological change: Evidence from pharmaceutical drug discovery. Strategic Management Journal, 23: 619–635.

Trajtenberg, M. 1990. A penny for your quotes: Patent citations and the value of innovations. The RAND Journal of Economics, 21: 172–187.

Uotila, J., Maula, M., Keil, T., & Zahra, S. A. 2009. Exploration, exploitation, and financial performance: Analysis of S&P 500 corporations. Strategic Management Journal, 30: 221–231.


Van de Ven, A. H. 1986. Central problems in the management of innovation. Management Science, 32: 590–607.

Ward, M. R. 1992. Drug approval overregulation. Regulation, 15: 47–53.

Wildavsky, A. 1988. Searching for safety. New Brunswick, NJ: Transaction Books.

Yelle, L. E. 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences, 10: 302–328.

Rajat Khanna ([email protected]) is an assistant professor in the Management Department of the Freeman School of Business at Tulane University. His research focuses on understanding how technological failures and search processes affect innovation within firms. He received his PhD from the University of North Carolina at Chapel Hill and his undergraduate degree from the Indian Institute of Technology Delhi.

Isin Guler ([email protected]) is an associate professor of strategy at Sabanci University's School of Management in Turkey. Her current research interests are in the areas of innovation strategy and venture capital. She received her PhD from the Wharton School of the University of Pennsylvania.

Atul Nerkar ([email protected]) is the Jeffrey A. Allred Distinguished Scholar and professor of strategy and entrepreneurship at the Kenan–Flagler Business School at the University of North Carolina at Chapel Hill. His research interests are in the areas of technology, innovation, and entrepreneurship. Atul has a PhD in strategy from the Wharton School of the University of Pennsylvania.

