
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 53, NO. 3, AUGUST 2006

Evaluating the Relative Performance of Engineering Design Projects: A Case Study Using Data Envelopment Analysis

Jennifer A. Farris, Richard L. Groesbeck, Eileen M. Van Aken, and Geert Letens

Abstract—This paper presents a case study of how Data Envelopment Analysis (DEA) was applied to generate objective cross-project comparisons of project duration within an engineering department of the Belgian Armed Forces. To date, DEA has been applied to study projects within certain domains (e.g., software and R&D); however, DEA has not been proposed as a general project evaluation tool within the project management literature. In this case study, we demonstrate how DEA fills a gap not addressed by commonly applied project evaluation methods (such as earned value management) by allowing the objective comparison of projects on actual measures, such as duration and cost, while explicitly considering differences in key input characteristics across these projects. Thus, DEA can overcome the paradigm of project uniqueness and facilitate cross-project learning. We describe how DEA allowed the department to gain new insight about the impact of changes to its engineering design process (redesigned based on ISO 15288) by creating a performance index that simultaneously considers project duration and the key input variables that determine project duration. We conclude with directions for future research on the application of DEA as a project evaluation tool for project managers, program office managers, and other decision-makers in project-based organizations.

Index Terms—Concurrent engineering, ISO 15288, military organizations, performance measurement, project management.

I. INTRODUCTION

CROSS-PROJECT learning is vital for any organization seeking to continuously improve its project management practices and identify competitive strengths and weaknesses [1]. The Project Management Institute (PMI) [2] describes a knowledge base that integrates performance information and lessons learned from previous projects as a key asset of an organization. However, cross-project learning is difficult to achieve [3], in particular due to the reigning paradigm of project uniqueness [4]. As described in [1, p. 17], "the more a project is perceived as unique the less likely are teams to try and learn from others." Cross-project learning, thus, requires a method of identifying sets of similar projects for which the observed performance information and lessons learned may be "fairly" aggregated [5].

Yet, to date, the project management literature has proposed few tools for enabling comparisons of performance across projects which explicitly consider differences in key project input characteristics. Commonly applied project evaluation methods, such as earned value management (EVM) [6], provide organizations with a method of systematically comparing actual project performance to project goals, encouraging organizations to document causes of variance, as well as the reasoning behind the corrective actions taken. However, these methods do not take into account differences in many project input characteristics, such as scope, technical complexity, and staffing, which impact project performance and the suitability of making cross-project comparisons [7].

Manuscript received March 1, 2005; revised July 1, 2005 and September 1, 2005. Review of this manuscript was arranged by Department Editor, J. K. Pinto.

The authors are with Virginia Polytechnic Institute and State University, Blacksburg, VA 24061 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/TEM.2006.878100

One modeling methodology that provides a flexible and powerful approach for overcoming the paradigm of project uniqueness and facilitating cross-project learning is Data Envelopment Analysis (DEA), a linear programming-based performance evaluation methodology. DEA can simultaneously consider project performance on outcome dimensions, as well as differences in key project input characteristics. While DEA has been widely applied in the operations research and decision sciences literature, DEA has not been proposed as a general project evaluation tool within the project management literature. This paper demonstrates how DEA can be readily adopted as a project evaluation tool using a case study of an engineering department in the Belgian Armed Forces. A key feature of this case study was the close collaboration between the researchers and the case study site (one of the authors was also a manager within the case study organization), in order to ensure that the performance model and results were readily understood by organizational personnel, as well as reflective of actual practices within the organization. We describe how the application of DEA allowed the department to gain new insights about the impact of changes to its engineering design process on project duration by creating a performance index that simultaneously considered project duration and input variables.

The paper is organized as follows. Section II provides a brief review of the relevant literature. The case study organization is described in Section III, while the DEA model and results are described in Sections IV and V, respectively. Finally, Section VI concludes with directions for future research.

II. LITERATURE REVIEW

A. Project Evaluation

Extensive project management research has identified a wide variety of measures that describe the outcomes of a project and the input characteristics that impact outcomes. The most commonly cited project outcome measures include cost, schedule, technical performance outcomes [8], and client satisfaction [9], although a universal definition of project success remains elusive [10]. In addition, a wealth of research has studied factors which impact project outcomes (e.g., [3] and [11]–[16]). Based on a review of several key studies, Belassi and Tukel [17] identified four overall groups of project success factors: factors related to the project (e.g., size, urgency), factors related to the project manager and team members (e.g., competence, technical background), factors related to the organization (e.g., top management support), and factors related to the external environment (e.g., client, market). Recently, the PMI [5] identified ten dimensions of project performance measures for study in benchmarking efforts (e.g., cost, schedule performance, staffing, alignment to strategic business goals, and customer satisfaction).

Yet, despite the multidimensional nature of project performance, most organizations have traditionally evaluated project performance primarily through cost and schedule performance measures, such as EVM, which is cited by the PMI as a commonly used method of project performance evaluation [2]. These methods center on measuring ongoing and final project performance against project goals. While these approaches provide some basis for evaluating the extent of success across projects, they do not explicitly take into account differences in project characteristics which may impact cost and schedule performance, and they depend upon the appropriateness of project goals [18]. Other evaluation methods may include additional outcome measures (e.g., [19]), but still do not explicitly take into account many key project input characteristics.

Only a few project evaluation tools have been proposed to allow project managers to explicitly consider differences in input characteristics across projects when evaluating project outcomes [20]. Slevin and Pinto [21] developed the Project Implementation Profile (PIP), a questionnaire-based instrument, which managers can use to assess the relative presence of ten critical success factors [15]. While assessing the presence of key factors that are within management control, the PIP does not appear to be intended to measure cross-project differences in many less controllable project characteristics (e.g., technical complexity). Andersen and Jessen [22] present a similar questionnaire-based project evaluation tool which incorporates a different set of success factors and directly includes some success measures. Pillai et al. [20] present an integrated framework for measuring the performance of R&D projects, which calls for the measurement of both controllable and uncontrollable factors that influence project outcomes. However, the discussion in [20] is largely theoretical, primarily identifying the high-level factors that should be considered in determining the integrated performance index and an overall method of combining these factors (i.e., an index using a weighted sum), rather than a detailed implementation scheme. Furthermore, all of the above methods require the a priori specification of weights for variables, assuming that the weights assigned to each variable should be the same for all projects. As will be described in the next section, DEA overcomes this difficulty by removing the requirement for managers to specify weights for variables a priori.

B. Data Envelopment Analysis

DEA, developed by Charnes et al. [23], is a linear programming-based technique for determining the relative efficiency of decision making units (DMUs), based on the performance of all other DMUs in the data set. Using a linear programming model, the best performing ("best practice") DMUs in the data set are used to define an efficient frontier, against which all other DMUs are benchmarked. The "best practice" DMUs are assigned an efficiency score of "1" or 100%. The efficiency score for each inefficient DMU is calculated based on its distance from the efficient frontier. The method of calculating distance from the frontier depends on the type of DEA model used (e.g., input minimizing, output maximizing, additive, etc.); however, each inefficient DMU is measured against the portion of the efficient frontier defined by the "best practice" DMUs with the most similar input/output mix, thus allowing each inefficient DMU to achieve its maximum efficiency score. For brevity, the mathematical formulations, as well as detailed descriptions of the different types of DEA models, are not presented here. Instead, the reader is referred to several excellent, comprehensive texts (e.g., [24] and [25]). Currently, several DEA software packages exist to allow managers and researchers to implement DEA models without directly solving a linear program for each DMU [26], [27]. The analysis in this study used Banxia Frontier Analyst®, a commercially available DEA software package [28].
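For readers who want to experiment beyond commercial packages, the envelopment linear program underlying these efficiency scores can be solved directly. The sketch below is a minimal illustration only (not the analysis performed in this study, which used Banxia Frontier Analyst): it solves the classic input-oriented CCR model for each DMU with SciPy, and the data matrix at the bottom is invented purely for demonstration.

# Minimal sketch of the input-oriented CCR (constant returns to scale)
# envelopment model, solved once per DMU with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y):
    """Return the CRS input-oriented efficiency score for each DMU.

    X: (num_inputs, num_DMUs) input matrix; Y: (num_outputs, num_DMUs) output matrix.
    """
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.zeros(n + 1)
        c[0] = 1.0
        # Inputs: sum_j lambda_j * x_ij - theta * x_io <= 0
        A_inputs = np.hstack([-X[:, [o]], X])
        # Outputs: sum_j lambda_j * y_rj >= y_ro, written as -sum_j lambda_j * y_rj <= -y_ro
        A_outputs = np.hstack([np.zeros((s, 1)), -Y])
        res = linprog(
            c,
            A_ub=np.vstack([A_inputs, A_outputs]),
            b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
            bounds=[(0, None)] * (n + 1),
            method="highs",
        )
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: 2 inputs, 1 output, 4 DMUs (not the TSI project data).
X = np.array([[4.0, 2.0, 3.0, 6.0],
              [3.0, 3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print(ccr_input_efficiency(X, Y))  # efficient DMUs score 1.0

An output-maximizing variant, the orientation actually used in this case study (see Section IV), scales the outputs by a factor to be maximized instead of scaling the inputs.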

DEA has many strengths as a performance analysis method (e.g., as described in [29] and [30]). Two primary strengths are of particular importance to this research. First, DEA is a multidimensional measurement method that can incorporate multiple input and output variables, including variables with different units of measure and levels of managerial control, such as exogenous variables, as well as categorical variables [24]. Second, as a nonparametric benchmarking methodology, DEA does not require that all units weigh inputs and outputs the same way. Instead, DEA utilizes the weights for each input and output that give each DMU its maximum possible efficiency score [24]. Therefore, DEA is appropriate for analyzing complex activities that require considerable flexibility in the detailed "production" approach [31]. Engineering design projects are clearly one such activity, in which the detailed approach of design activities can vary from project to project. The only caveat to this point is that DMUs must be similar enough to form a relatively homogeneous set [32]. That is, they must complete similar types of activities (although the detailed approach to completing these activities may differ), produce similar products and services, consume similar types of resources, and perform under similar environmental constraints (although differences in environment can be accounted for through nondiscretionary or categorical variables).

In the nearly 30 years since its introduction, DEA has been widely applied to a multitude of problems in a variety of domains [33]. Several application areas related to this research have been explored in previous research. For instance, DEA has been applied to study the performance of military vehicle maintenance units [34]–[37]. These prior applications are relevant to the present research since the case study organization is also a military support unit. However, the case study organization differs from those studied in [34]–[37] in that it is an engineering department that designs new communications equipment for military vehicles, rather than a maintenance department. In addition, while [34]–[37] compared departments within the organization, the present research compares the performance of projects within a given department. In another related application, Paradi et al. [38] used DEA to analyze the performance of engineering design teams at Bell Canada. This work is related to the present research since both use DEA to examine the performance of engineering activities, although the unit of analysis in [38] was the team, while the unit of analysis in the present research is the project.

To date, the application of DEA to projects appears to be almost solely limited to software and R&D projects. Examples of software project applications include [29], [30], and [39]–[47]. R&D project applications have focused primarily on selecting the best set of R&D projects to receive funding (e.g., [31] and [48]–[55]). However, there have been some applications that used DEA to measure the efficiency of completed or ongoing R&D projects (e.g., [56]–[58]).

Our literature review revealed only four project-level application areas that did not involve either software projects or R&D projects. These related studies have focused on the selection of projects [59], [60] or on assessing the performance of completed projects [61], [62]. In the application most similar to the present research, Busby and Williamson [62] applied DEA to the project and "work package" (subassembly) level of engineering design projects in the aerospace industry. Their research focused on studying methods of measuring performance for engineering activities, rather than investigating the usefulness of DEA as a general project evaluation tool for organizational decision makers in project-based organizations.

III. BACKGROUND ON CASE STUDY ORGANIZATION

The Department of Technical Studies and Installations (TSI) designs and installs communication and information systems (CISs) for military vehicles in the Belgian Armed Forces (BAF). The work of the Department is entirely project-based and consists of engineering design projects to develop CISs, addressing specific design problems such as vibrations and electromagnetic compatibility.

Historically, the TSI Department used a sequential engineering (SE) process based on "over the wall" hand-offs between functions, with little or no interaction between these functions. This way of working appeared to lead to rework and, ultimately, increased project duration, due to conceptual misunderstandings between functions, lack of information sharing and formal documentation of project knowledge, and bottlenecks of projects waiting for action between functions.

To address these issues, the TSI Department had completed the first phase of redesigning its engineering design process prior to the start of the current research (see [63] for more information on the TSI Department and the redesign of the engineering design process). The new process incorporated concurrent engineering (CE) concepts and a stage-gate system based on the ISO 15288 standard for the systems engineering life-cycle [64]. Key CE practices included in the new design process were a dedicated, cross-functional project team and overlapping of project activities. The dedicated project team cooperatively planned project work from the conceptual stages until completion. This involvement of all project personnel from the conceptual stages enabled the overlapping of many design activities. In addition to the use of a dedicated team and overlapping activities, the stage-gate system structured decision-making throughout the project and improved documentation and communication of project knowledge, reducing the potential for misunderstandings leading to costly delays and/or rework. The TSI Department chose to pilot this new approach in its most technically complex projects. ("Technical complexity" describes the technical difficulty and uncertainty of a project.)

Fig. 1. Project duration versus engineering design process (n = 15 projects).

To empirically evaluate the effectiveness of the new engineering design process, the TSI Department needed a way to assess the relative performance of projects before and after the implementation of the new design process. Prior to the introduction of the new process, the Department rarely attempted to compare performance across projects, due to the paradigm of project uniqueness. Therefore, soon after the introduction of the new design process, in addition to continuing to evaluate the performance of each project against its targets, the Department began to use two measures to compare performance across multiple projects. The two measures selected were project duration (in working days) and a measure of efficiency called the project workflow index, defined as effort (the work content of the project, measured in person-days) divided by project duration. While the measures used by the TSI Department are less sophisticated than other existing project evaluation approaches (e.g., EVM or measures of perceived success), they share the same major weakness: they fail to adequately account for differences in all the key project input characteristics, which would be necessary to enable "fair" comparison of outcomes across projects.
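As a quick worked illustration of the workflow index (the numbers here are invented, not taken from the TSI data set): a project consuming 300 person-days of effort and completed in 600 working days has a workflow index of 300 / 600 = 0.5, while the same effort completed in 400 working days yields 300 / 400 = 0.75, reflecting a higher concentration of productive work per elapsed day.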

The Department's internal evaluation of the first four projects completed using the new CE design process (using project duration and the project workflow index) did not unequivocally support the continued use of the new process. When the duration of the first four projects completed under the new development method was compared to the eleven projects of similar technical complexity completed under the old SE design process, it appeared that project duration had decreased significantly under the new method (see Fig. 1). However, there was no evidence that project workflow index performance had changed (see Fig. 2); that is, there was no strong indication that projects under the new process had proportionally fewer delays and less nonproductive waiting time than projects under the old SE process. Furthermore, variation in staffing (personnel, officers) and priority was evident even for these fifteen projects in the same technical complexity category. Thus, many personnel within the TSI Department were not convinced that the new engineering design process was effective in reducing project duration. These personnel argued that the shorter project duration for the projects completed using the new engineering design method may be due to differences in other project characteristics compared to projects completed under the old SE approach.

Fig. 2. Project workflow index versus engineering design process (n = 15 projects).

Findings from the CE literature provided even more evidence for questioning the expected impact of the new engineering design process, particularly given the characteristics of the TSI Department's projects. Although CE practices have become common in the last twenty years and many studies suggest that CE practices are successful in reducing project duration [65], some research has found that the success of CE practices has varied across organizations [66]–[68], and other research has suggested that outcomes may depend upon project characteristics. In particular, the impact of the two key CE practices included in the new design process, overlapping activities and dedicated cross-functional project teams, may be affected by project uncertainty (the degree to which the information needed to complete a project is incomplete or unstable), which is one aspect of technical complexity. For instance, [69] found that overlapping activities actually increased project duration in high uncertainty projects due to increased rework. Similarly, [70] found that overlapping reduced project duration for low uncertainty projects, but not for high uncertainty projects. The use of cross-functional project teams, on the other hand, has been found to reduce project duration more for high uncertainty than for low uncertainty projects (e.g., [69], [71], and [72]), but this finding has been questioned [65]. Further, most studies of CE practices focus only on the effects of the practices on project duration and do not consider other project characteristics, such as effort or quality [65], [69].

Thus, it became clear that the key question for analysis posed by the TSI Department was: Does the new CE design process appear to result in shorter project duration than the old SE design process, given differences in characteristics across projects? To address this question, the TSI Department required a multidimensional index of project performance, capable of measuring relative performance across similar projects. DEA was identified as the modeling methodology to be applied to generate this index. Other modeling methodologies that were considered were performance indexes (ratio measures) and regression. These modeling methodologies contain certain assumptions that were not desirable in the present case. For instance, the development of performance indexes requires the a priori specification of variable weights, assuming that all projects combine inputs in exactly the same way to produce outputs. For complex tasks, such an assumption may be too limiting [31]. Although not requiring the a priori specification of weights, regression similarly assumes that all projects in a data set use the same set of input and output weights. Further, regression identifies the mean performance for a set of inputs, while DEA seeks to identify the best performing units given the set of inputs and outputs. For additional comparison of DEA to ratio and regression modeling methodologies, see [73]. The development of the DEA model is discussed in the next section.

IV. MODEL SPECIFICATION

The TSI Department's data set contained 15 projects with comparable technical complexity completed over a period of eight years, with 23 input and output variables recorded for each project. As previously indicated, eleven of these projects were conducted under the old engineering design process, and four were conducted under the new process. The authors made the decision to build the model using only the performance measures extant in the data set, due to the difficulty of collecting accurate post-hoc data for completed projects, as well as precedent in the DEA literature (e.g., [41]). Therefore, the data set employed does not capture all potential project characteristics (input variables) that could impact project duration or other project outcomes (such as measures of customer satisfaction). In addition, since the TSI Department's database did not preserve information such as project cost and schedule goals, it is impossible to compare the performance of the DEA model to cost and schedule variance measures. Table I defines the variables referenced in the model specification discussion in this section.

The first step in modeling project performance using DEA was to work with Department managers to identify the input and output variables of interest. TSI Department management agreed that project duration (Variable 1) was the key output of interest for the case study application. The driving force behind the process redesign was the need to reduce project duration. In the project management literature, "time" (along with cost, scope, technical performance, and satisfaction) represents a key category of project performance measures [74], [75], and minimizing project duration is one objective the project-based organization can pursue [76]. Project duration has frequently been included in DEA models of software projects (e.g., [30], [41], [45], and [61]). Other potentially relevant output measures, such as quality and customer satisfaction, were not tracked by the TSI Department at the time. Project cost was included indirectly through the input variable effort (person-days), which reflects the major component of project cost. The decision to incorporate project cost in the model as an input rather than as an output is based on the fact that cost represents the consumption of organizational resources used to create project outcomes. This decision is aligned with previous DEA applications to project work (e.g., [41], [47], [54], and [55]), which have also modeled project cost as an input variable. Thus, a single-output DEA model using project duration was used in the case study application.

TABLE I: VARIABLE DESCRIPTIONS

After identifying the output variable of interest, the next step in specifying the DEA model was to identify the input variables necessary to capture important differences between projects. Input variables were identified through consultation with TSI Department management (to accurately describe their practices) and through a review of the DEA and project management literature. While it is important that the input variables that most impact project duration are identified, if too many variables are included, a DEA model loses discriminatory power; that is, all or most units become efficient due to their unique levels of inputs and outputs. The recommended maximum number of input and output variables is equal to one-half the number of DMUs in any given category or analysis [32]. Because this analysis concerns the 15 projects of greatest technical complexity, the maximum number of input and output variables that could be included in the DEA model was seven. In addition to the single output variable, four input variables were identified for inclusion in the model, for a total of five variables. The four input variables identified were: effort, project staffing, priority, and number of officers. In addition, because all 15 projects had the same level of technical complexity, technical complexity served as a control variable in the present analysis.

Effort (Variable 2) describes the total number of person-days consumed by the project (i.e., the work content of the project). This variable is under the influence of the project manager, but is fixed beyond a certain minimum point. While inefficient project management practices can increase effort through rework, there is a certain minimum amount of work that must be completed to meet the objectives of the project; that is, there is a minimum level of effort. Therefore, effort can be viewed as a cost measure, and also as a measure related to project scope or size. Effort, measured as labor hours, has been studied as an input in DEA applications to software projects (e.g., [29], [40], [43], and [46]) and R&D projects (e.g., [31]). In the project management literature, project size or scope is considered a dimension of project performance, as is cost [8].

Project staffing (Variable 3) describes the concentration of labor resources on the project. Specifically, project staffing describes the average number of people scheduled to work on a project each project day, thus capturing resource assignment decisions within the TSI Department. Obtaining and scheduling labor resources is a significant portion of any project manager's job, and is also a concern of top management. All else being equal, scheduling more people to concurrently work on a project (that is, increasing overlapping) could decrease project duration, although, as previously indicated, this has been debated in the CE literature. Project staffing, in the form of average team size during the life of the project, has been studied in past DEA applications to project work (e.g., [45]). In addition, the project management literature has studied the relationship between the average number of employees assigned to a project during its lifespan and project outcomes [77]. Finally, project staffing relates to top management support and project personnel, which have been identified as critical project success factors [15].

Priority (Variable 4) indicates the importance (urgency) assigned to a project by top management. The TSI Department rated project priority on a nine-point scale, with "1" representing the lowest level of priority and "9" representing the highest. Thus, while priority is actually an interval variable, the relatively large number of intervals suggests that it can be treated like a continuous variable. All else being equal, a higher-urgency project would be expected to achieve shorter project duration than a lower-urgency project, because higher urgency projects would receive more attention and experience shorter turnaround times in resource requests and other administrative tasks. Priority is, therefore, a constraint reflecting the satisfaction of top management objectives (i.e., stakeholder satisfaction). Aspects of project priority have been considered in past applications of DEA to project work (e.g., [47]). In the project management literature, priority relates to both top management support [15] and project urgency [16].

Number of officers (Variable 5) indicates the number of officers available in the TSI Department to support a project, not the actual number of officers directly assigned to a project. All else being equal, increasing the number of officers should allow officers to give more attention to individual projects, thereby reducing the turnaround time for administrative tasks and, ultimately, reducing project duration. Increasing the number of officers could also allow officers to specialize in a particular type of project, thereby increasing the efficiency of project oversight. Clarke [36] included number of officers as an input variable in his application of DEA to military maintenance units. Number of officers can be related to both top management support and project personnel (since officers are a personnel resource available to the team), which have long been identified as key project success factors [15].

Finally, technical complexity (Variable 6) describes the technical difficulty and uncertainty of a project. Although related to effort (i.e., more technically complex projects tend to involve more work content), technical complexity captures additional elements that affect project duration, such as the extent of risk, the need for testing, the need for increased coordination between functions, and the degree of technological uncertainty. Uncertainty, in particular, is likely to increase the need for rework under overlapping of project tasks [69], [78]. The TSI Department categorized projects according to general level of technical complexity, with "1" representing the most technically complex projects, "2" representing projects of medium technical complexity, and "3" representing the least technically complex projects. For categorical inputs under DEA, projects in a disadvantaged category (e.g., greater technical complexity) are not directly compared to projects in a more advantaged category (e.g., lower technical complexity). However, projects in a more advantaged category can be benchmarked against "best practice" projects in a disadvantaged category [33]. That is, technical complexity category 2 or 3 projects would not form part of the efficient frontier for category 1 projects, since category 1 projects are more difficult. This is why projects in categories 2 and 3 were excluded from the analysis of the impact of the new engineering design process, since the new process had only been piloted with projects in technical complexity category 1. Technical complexity has been considered in some past applications of DEA to project work (e.g., [30], [40], [56], and [61]). Technical complexity and related variables (uncertainty, technical difficulty) have also emerged as a key performance dimension in the project management literature (e.g., [14] and [79]) and the CE literature (e.g., [69], [70], and [78]).

It should be noted here that the Department's project workflow index (Variable 7) is equal to effort divided by project duration. Therefore, the project workflow index was not used as a separate input variable, as the variables used to calculate the index are already in the model.

After identifying the output and input variables to be studied in the case study application, the final steps before executing the DEA model included specifying the particular DEA model to be applied and transforming two of the model variables. Two basic DEA model orientations are output maximizing and input minimizing (see [25] for a more detailed discussion of the differences summarized here). The model employed for a particular case depends upon the objectives and questions of interest, as well as the relative controllability of inputs versus outputs. An output maximizing model determines the maximum proportional increase in outputs possible for the given level of inputs. Output maximizing models are generally used when output levels are discretionary, but input levels are relatively fixed, or when it is desirable to set targets for maximum output levels. Input minimizing models determine the decrease in inputs that should be possible while producing the current level of outputs. Input minimizing models are generally used when input levels are relatively controllable, but output levels are pre-specified, or when it is desirable to evaluate internal process efficiency. Because the key analysis question here involved estimating maximum project duration, an output maximizing model was used. This was also appropriate since many of the input variables were not under the direct control of the project manager: priority and number of officers were determined by top management, technical complexity is an exogenous variable, and effort, the work content of a project, is only partially controlled by the project manager. Only project staffing is, arguably, primarily controlled by the project manager.

DEA models can also assume a variety of returns to scale. Two basic models are the Charnes, Cooper, Rhodes (CCR) model [23], which assumes constant returns to scale (CRS), and the Banker, Charnes, Cooper (BCC) model [80], which assumes variable returns to scale (VRS). CRS models provide the most conservative measure of efficiency (i.e., the most aggressive DEA project duration targets). Under CRS, all units are compared against a frontier defined by units operating under the most productive scale size. Units operating under any diseconomies of scale, therefore, cannot be 100% efficient. On the other hand, VRS models allow units operating under diseconomies of scale to form part of the frontier, as long as they perform better than their most similar peers (e.g., those operating under similar diseconomies of scale).

Choosing which model to use depends on both the characteristics of the data set and the question being analyzed. In the case study application, the TSI Department sought to determine to what extent the use of the new engineering design process had succeeded in improving the overall performance of projects (e.g., reducing project duration, given project inputs). This analysis focuses on a small group of projects (15 in total), all in technical complexity category 1. In the case study application, diseconomies of scale could exist for many input variables. For instance, increasing project staffing beyond a certain level may yield diminishing returns in project duration, due to congestion. Similarly, projects with large amounts of effort could also experience diminishing returns to scale. This could be due either to inherent project characteristics (project scope) or to poor project management practices (coordination difficulties or rework). However, for this analysis, the researchers and the TSI Department did not want units operating under diseconomies of scale (e.g., over-staffed projects or projects with increased effort due to rework or other inefficient practices) to be considered 100% efficient. Instead, they wanted to draw aggressive comparisons based on the performance of best practice units.


TABLE II: PROJECT DATA FOR 15 PROJECTS IN TECHNICAL COMPLEXITY CATEGORY 1

TABLE III: MEAN VALUES AND STATISTICAL SIGNIFICANCE TEST RESULTS FOR ENGINEERING DESIGN PROCESS (n = 15 projects)

For these reasons, the output-oriented CRS model was used, since CRS identifies inefficiency due to diseconomies of scale and benchmarks performance against units that are operating under the most productive scale size.
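For reference, a standard output-oriented CRS envelopment formulation of the kind described above (stated here in generic textbook notation, e.g., following [24] and [25], rather than reproduced from the paper) is, for project (DMU) o with inputs x_{ij} and outputs y_{rj}:

\[
\begin{aligned}
\max_{\varphi,\;\lambda}\quad & \varphi \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j \, x_{ij} \;\le\; x_{io}, \qquad i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j \, y_{rj} \;\ge\; \varphi\, y_{ro}, \qquad r = 1,\dots,s,\\
& \lambda_j \ge 0, \qquad j = 1,\dots,n,
\end{aligned}
\]

where the reported efficiency score is 1/\varphi^* and a project lies on the efficient frontier when \varphi^* = 1.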

Finally, executing the DEA model required transforming two variables: project duration and effort. Because increasing project duration is undesirable, project duration is an "undesirable output" in DEA terminology. There are several different methods for modeling undesirable outputs in DEA. The method used for the case study application was the K - y transformation [81], a common practice in the DEA literature (e.g., [32] and [41]). In this transformation, the undesirable output y is subtracted from a sufficiently large scalar K, such that all resulting (transformed) values K - y are positive and increasing values are desirable. The chosen K is generally a value just slightly larger than the maximum value of the undesirable output observed in the data set, since choosing a value much greater than this maximum can distort model results [32]. In this case application, the maximum project duration was 1930 days; thus, K = 2000 was chosen, and all project durations were subtracted from this value to create the variable quickness. Similarly, the input effort had to be transformed, because an increase in an input should contribute to increased output, in this case quickness; thus, the same transformation was applied to effort. The maximum effort value in the data set was 815 person-days, and the selected K was 850 person-days. Following full specification, the DEA model was executed with the DEA software.
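The transformation itself is a one-line computation. The sketch below uses the K values reported in the text (2000 for duration, 850 for effort); the example project values are illustrative placeholders, not the actual TSI data.

# Sketch of the K - y transformation described above (illustrative values only).
import numpy as np

K_DURATION = 2000   # just above the maximum observed duration of 1930 working days
K_EFFORT = 850      # just above the maximum observed effort of 815 person-days

durations = np.array([1930.0, 1200.0, 750.0])   # working days (placeholder values)
efforts = np.array([815.0, 400.0, 300.0])       # person-days (placeholder values)

quickness = K_DURATION - durations       # transformed output: larger is better
transformed_effort = K_EFFORT - efforts  # transformed input, handled analogously

print(quickness)           # [  70.  800. 1250.]
print(transformed_effort)  # [ 35. 450. 550.]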

V. RESULTS

Table II presents the project data, as well as the DEA results, for the 15 projects in technical complexity category 1. Table III presents summary statistics and statistical test results for projects completed under the new CE design process versus the old SE design process. Kolmogorov-Smirnov tests of normality suggested that several variables, including the DEA scores, departed significantly from the normal distribution. Thus, the nonparametric Mann-Whitney U test was used to test for differences across groups in the current analysis. In addition, nonparametric tests have been specifically recommended for the analysis of DEA results [82], [83].
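As an illustration of this testing approach (not the authors' original computation; the score vectors below are invented, not the Table II values), the normality check and the Mann-Whitney U comparison can be run with SciPy as follows.

# Illustrative normality check and Mann-Whitney U test with SciPy.
import numpy as np
from scipy import stats

# Placeholder DEA efficiency scores for the two process groups (not the real data).
dea_old = np.array([0.55, 0.62, 0.48, 0.71, 1.00, 0.66, 0.59, 0.80, 1.00, 0.52, 0.63])
dea_new = np.array([1.00, 1.00, 1.00, 0.88])

# Kolmogorov-Smirnov test against a normal distribution fitted to the old-process scores.
print(stats.kstest(dea_old, "norm", args=(dea_old.mean(), dea_old.std(ddof=1))))

# Nonparametric comparison of the two groups.
print(stats.mannwhitneyu(dea_new, dea_old, alternative="two-sided"))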


Fig. 3. DEA efficiency versus engineering design process (n = 15 projects).

A Mann-Whitney U test revealed a significant difference in project duration between projects completed under the new CE design process and those completed under the old SE design process. Differences among input variables were not statistically significant, although for number of officers and effort the p-values were fairly low. Given the small sample size, the statistical power of the U test to detect significant effects was low. Thus, one could argue that the number of projects was too small to be very confident that effort and the number of officers did not have an effect on project duration. Therefore, when individual study variables were compared for the new CE process versus the old SE process, it was not clear initially how much of the noted reduction in project duration was due to the new CE design process and how much was due to subtle differences in project characteristics.

Examination of the DEA results, however, clearly revealed the impact of the new design process. DEA efficiency scores for projects completed under the new design process were significantly higher than the efficiency scores for projects completed under the old process (see Fig. 3 and Table III). That is, when differences in input characteristics between projects are explicitly taken into account, and each project is compared only to its most similar peers in the data set, projects completed under the new engineering design process are superior to projects completed under the old engineering design process in terms of project duration. This result indicates how decision-makers in project-based organizations can use DEA to overcome the paradigm of project uniqueness, ensuring that variations in project input characteristics are explicitly taken into account when drawing cross-project comparisons of project outcomes. In fact, a closer examination of the DEA scores (Table II) reveals that three out of four projects completed under the new engineering design process form part of the efficient frontier (i.e., are 100% efficient). Two technical complexity category 1 projects completed under the old engineering design process are also 100% efficient.

Reference frequencies and cross-efficiencies for technical complexity category 1 projects also provide support for the effectiveness of the new engineering design process in reducing project duration. Reference frequency indicates the number of inefficient units that calculate their efficiency scores at least in part from a particular efficient unit. An efficient unit with a high reference frequency is, therefore, similar to many other less efficient DMUs and is more likely to have achieved its efficiency score through best practices, rather than simply by virtue of uniqueness. DEA reference frequencies (Table II) indicated that, out of the five efficiency leaders in the current analysis, two projects completed under the new process accounted for 41% (nine) and 27% (six), respectively, of all references by their less efficient peers. The three remaining "best practice" units are more unique in their input mix and, therefore, had lower reference frequencies.

A cross-efficiency score is the efficiency score that would result for a particular unit if its score were calculated using the input and output weightings chosen by another unit; it can thus be viewed as a form of sensitivity analysis of a unit's DEA efficiency score. Examination of the cross-efficiency scores (Table II) reveals that the two projects with the highest reference frequencies also had the highest average cross-efficiency scores: 97% and 91%, respectively. That is, these projects are highly efficient regardless of the weighting of inputs used to evaluate them, so their efficiency scores are robust. The mean cross-efficiency score for the 11 projects completed under the old process was 43% (with a low of 11% and a high of 82%), while the mean cross-efficiency score for the 4 projects completed under the new process was 88% (with a low of 78% and a high of 97%).
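To make the cross-efficiency idea concrete, the sketch below computes a simple cross-efficiency matrix from the CCR multiplier model, ignoring the aggressive/benevolent secondary goals sometimes used to resolve ties in optimal weights. This is an illustration under those assumptions, not the Frontier Analyst computation used in the study, and the toy data are not the TSI projects.

# Illustrative cross-efficiency computation from the CCR multiplier model.
import numpy as np
from scipy.optimize import linprog

def ccr_multiplier_weights(X, Y):
    """Return the (output weights u, input weights v) chosen by each DMU."""
    m, n = X.shape
    s = Y.shape[0]
    weights = []
    for o in range(n):
        # Variables: [u_1..u_s, v_1..v_m]; maximize u . y_o (minimize its negative).
        c = np.concatenate([-Y[:, o], np.zeros(m)])
        # Feasibility for every DMU j: u . y_j - v . x_j <= 0
        A_ub = np.hstack([Y.T, -X.T])
        b_ub = np.zeros(n)
        # Normalization: v . x_o = 1
        A_eq = np.concatenate([np.zeros(s), X[:, o]]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.array([1.0]),
                      bounds=[(0, None)] * (s + m), method="highs")
        weights.append((res.x[:s], res.x[s:]))
    return weights

def cross_efficiency(X, Y):
    """E[k, j]: efficiency of DMU j evaluated with the weights chosen by DMU k."""
    n = X.shape[1]
    E = np.zeros((n, n))
    for k, (u, v) in enumerate(ccr_multiplier_weights(X, Y)):
        E[k, :] = (u @ Y) / (v @ X)
    return E

# Toy data: 2 inputs, 1 output, 4 projects (not the TSI data).
X = np.array([[4.0, 2.0, 3.0, 6.0],
              [3.0, 3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print(cross_efficiency(X, Y).mean(axis=0))  # average cross-efficiency per project

The column averages of this matrix correspond to the average cross-efficiency scores discussed above.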

The reference frequencies and cross-efficiencies provide powerful tools for organizational decision-makers to identify appropriate projects to benchmark, as well as to indicate what types of other projects the lessons learned from these projects will apply to. Thus, DEA provides additional project performance evaluation information not attained through the application of traditional project evaluation techniques.

Finally, evidence suggests that the new engineering design process shifted the efficient frontier, indicating enhanced potential for shorter project duration. In Table IV, Column 2 presents the resulting DEA project duration targets when the DEA model was executed using the data set containing all 15 projects. Column 3 in Table IV presents the resulting DEA project duration targets when the DEA model was executed using a data set containing only the 11 projects completed under the old engineering design process. Thus, as Table IV indicates, executing the DEA model with (Column 2) versus without (Column 3) the four projects completed under the new process showed that new-process projects shifted the frontier by an average of 22%. That is, individual project duration targets for projects completed under the old process were an average of 22% shorter when projects completed under the new engineering design process were included in the analysis. The maximum observed change was 48%. In terms of actual project duration, this shift indicates that the "average" project completed under the old process would be expected to be 206 days shorter if performed under the new process. The maximum expected reduction in project duration for an inefficient project under the old design process, if conducted under the new design process, was 533 days.


TABLE IV: EFFECT OF NEW ENGINEERING DESIGN PROCESS ON DEA TARGET DURATIONS

VI. CONCLUSIONS AND FUTURE RESEARCH

This case study demonstrated how DEA allowed the TSI Department to gain new insight about the impact of changes to its engineering design process by creating a performance index that simultaneously considered project duration and the key input variables that determine project duration. Analysis of the DEA output unequivocally demonstrated that changes to the engineering design process were successful in reducing project duration.

Although DEA has been widely applied in the operations research and decision sciences literature, DEA has not been proposed as a general project evaluation tool within the project management literature. Yet, DEA fills a gap not addressed by commonly applied project evaluation methods used by organizations. By explicitly considering differences in project input characteristics when evaluating performance, DEA allowed the case study organization to draw more unequivocal and robust conclusions regarding the effectiveness of its new engineering design process than it could have achieved through its current project evaluation practices or other methods. In particular, DEA created comparison groups by utilizing projects with similar input characteristics and comparing the performance of each project to its most similar peers. Thus, DEA can overcome the paradigm of project uniqueness by providing organizations with a method of accounting for differences in project input characteristics when measuring performance across projects. Unlike ratio and regression methods, DEA does not require that all projects combine inputs in exactly the same way to produce outputs, thus allowing the flexibility necessary for the analysis of a complex task. A final outcome of the DEA analysis reported in this case study, as well as additional analyses not reported here, is that the TSI Department became convinced that DEA is a useful project evaluation tool and has purchased DEA software to perform model computations on an ongoing basis. Additional work includes further analysis of the effectiveness of the engineering design process within the TSI Department, particularly as projects are completed in technical complexity categories 2 and 3.

A more general research area is the application of DEA to project portfolio management, a challenging competency area still emerging in the project management domain [3]. Project portfolio management has already been somewhat explored for R&D projects (e.g., [84]), but additional research is needed for other types of projects. Another general research area where DEA can be particularly useful is identifying best practices and problem areas in project management practices, both within and across organizations. For instance, DEA efficiency scores can be used to guide case study research by identifying "best performing," "worst performing," and "average" projects, which can then be analyzed in detail to identify specific project management practices that contributed to their relative efficiency or inefficiency (e.g., [56]). For example, the TSI Department is currently using DEA to develop a profile of the types of projects that are good candidates for outsourcing. By examining the worst-performing DMUs in the data set, the TSI Department can identify distinguishing characteristics of these projects and develop an outsourcing profile. From this analysis, the TSI Department found that projects requiring the purchase of certain types of components with long supply lead-times are one category of projects that is a good candidate for outsourcing. Legal requirements for supplier selection for the military result in substantially longer supply lead-times if these projects are completed by the TSI Department, while industry can more quickly purchase the needed components, or may even possess an inventory on hand, thus enabling faster completion of these projects. While the TSI Department can complete these projects at significantly lower cost than industry, this cost efficiency had to be weighed against the tradeoff in project duration, since responsiveness (e.g., reduced project duration) is critical for the Belgian military.

One additional promising avenue for future research is the application of DEA to project planning. The target-setting capability of DEA is a powerful tool, which project managers can apply to determine appropriate, objective targets for project duration or other project variables. Accurate and appropriate estimates are vital both to project planning and to goal-based project evaluation methods such as EVM. However, obtaining accurate estimates has often been problematic [18]. DEA could be used to obtain estimates (targets) for the project duration of new projects, by including estimates of the input variables and a dummy variable (i.e., a very large value) for project duration. Finally, research can further investigate how DEA can be applied to evaluate the performance of ongoing projects, by generating an efficiency score using estimated final input and output levels (based on current project performance). If the efficiency score is lower than a desired threshold and/or indicates that the estimated final levels of some input(s) or output(s) differ from acceptable levels, action can be taken to bring project performance more in line with management targets.

ACKNOWLEDGMENT

The authors wish to thank Lt. Col. I.M.M. Duhamel and Col. I.M.M. Rotsaert for their support of this research.

REFERENCES

[1] S. Newell, “Enhancing cross-project learning,” Eng. Manag. J., vol. 16, no. 1, pp. 12–20, Mar. 2004.

[2] A Guide to the Project Management Body of Knowledge, 3rd ed., ANSI/PMI 99-001-2004, Project Management Institute, Newtown Square, PA, 2004.

[3] T. Cooke-Davies, “The ’real’ success factors on projects,” Int. J. Project Manag., vol. 20, no. 3, pp. 185–190, Apr. 2002.

[4] J. P. Lewis, The Project Manager’s Desk Reference. New York: McGraw-Hill, 2000.

[5] “Effective Benchmarking for Project Management,” White Paper, Project Management Institute, Newtown Square, PA, 2004.

[6] A Practice Standard for Earned Value Management, Project Management Institute, Newtown Square, PA, 2005.

[7] A. J. Shenhar, A. Tishler, D. Dvir, S. Lipovetsky, and T. Lechler, “Refining the search for project success factors: A multivariate, typological approach,” R&D Manag., vol. 32, no. 2, pp. 111–126, Mar. 2002.

[8] R. J. Might and W. A. Fischer, “The role of structural factors in determining project management success,” IEEE Trans. Eng. Manag., vol. EM-32, pp. 71–77, May 1985.

[9] J. K. Pinto and D. P. Slevin, “Project success: Definitions and measurement techniques,” Project Manag. J., vol. 19, no. 1, pp. 67–73, Feb. 1988.

[10] A. M. M. Liu and A. Walker, “Evaluation of project outcomes,” Construction Manag. Econ., vol. 16, no. 2, pp. 209–219, Mar. 1998.

[11] D. Dvir, S. Lipovetsky, A. Shenhar, and A. Tishler, “In search of project classification: A non-universal approach to project success factors,” Res. Policy, vol. 27, pp. 915–935, 1998.

[12] H. Kerzner, “In search of excellence in project management,” J. Syst. Manag., vol. 38, no. 2, pp. 30–39, Feb. 1987.

[13] M. E. Pate-Cornell and R. L. Dillon, “Success factors and future challenges in the management of faster-better-cheaper projects: Lessons learned from NASA,” IEEE Trans. Eng. Manag., vol. 48, no. 1, pp. 25–35, Feb. 2001.

[14] J. K. Pinto and S. J. Mantel Jr., “The causes of project failure,” IEEE Trans. Eng. Manag., vol. 37, no. 4, pp. 269–275, Nov. 1990.

[15] J. K. Pinto and D. P. Slevin, “Critical factors in successful project implementation,” IEEE Trans. Eng. Manag., vol. EM-34, no. 1, pp. 22–27, Feb. 1987.

[16] ——, “Critical success factors in R&D projects,” Res. Technol. Manag., vol. 32, no. 1, pp. 31–35, Jan./Feb. 1989.

[17] W. Belassi and O. I. Tukel, “A new framework for determining critical success/failure factors in projects,” Int. J. Project Manag., vol. 14, no. 3, pp. 141–151, Jun. 1996.

[18] M. Freeman and P. Beale, “Measuring project success,” Project Manag. J., vol. 23, no. 1, pp. 8–17, Mar. 1992.

[19] S. W. Hughes, D. D. Tippett, and W. K. Thomas, “Measuring project success in the construction industry,” Eng. Manag. J., vol. 16, no. 3, pp. 31–37, Sep. 2004.

[20] A. S. Pillai, A. Joshi, and K. S. Rao, “Performance measurement of R&D projects in a multi-project, concurrent engineering environment,” Int. J. Project Manag., vol. 20, no. 2, pp. 165–177, Feb. 2002.

[21] D. P. Slevin and J. K. Pinto, “The project implementation profile: New tool for project managers,” Project Manag. J., vol. 17, no. 4, pp. 57–70, Sep. 1986.

[22] E. S. Andersen and S. A. Jessen, “Project evaluation scheme: A tool for evaluating project status and predicting results,” Project Manag., vol. 6, no. 1, pp. 61–69, 2000.

[23] A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” Eur. J. Oper. Res., vol. 2, no. 6, pp. 429–444, Nov. 1978.

[24] A. Charnes, W. W. Cooper, A. Y. Lewin, and L. M. Seiford, Eds., Data Envelopment Analysis: Theory, Methodology, and Application. Boston, MA: Kluwer Academic, 1994.

[25] E. Thanassoulis, Introduction to the Theory and Application of Data Envelopment Analysis. Boston, MA: Kluwer Academic, 2001.

[26] B. Hollingsworth, “A review of data envelopment analysis software,” Econ. J., vol. 107, no. 443, pp. 1268–1270, Jul. 1997.

[27] I. Herrero and S. Pascoe, “Estimation of technical efficiency: A review of some of the stochastic frontier and DEA software,” CHEER, vol. 15, no. 1, 2002 [Online]. Available: http://www.economicsnetwork.ac.uk/cheer/ch15_1/dea.htm

[28] Frontier Analyst® Professional, 3.0.3 ed., Banxia Software Ltd., Kendal, Cumbria, U.K., 2001.

[29] M. A. Mahmood, K. J. Pettingell, and A. I. Shaskevich, “Measuring productivity of software projects: A data envelopment analysis approach,” Decision Sci., vol. 27, no. 1, pp. 57–80, Winter 1996.

[30] P. D. Chatzoglou and A. C. Soteriou, “A DEA framework to assess the efficiency of the software requirements capture and analysis process,” Decision Sci., vol. 30, no. 2, pp. 503–531, Spring 1999.

[31] P. Kauffmann, R. Unal, A. Fernandez, and C. Keating, “A model for allocating resources to research programs by evaluating technical importance and research productivity,” Eng. Manag. J., vol. 12, no. 1, pp. 5–8, Mar. 2000.

[32] R. G. Dyson, R. Allen, A. S. Camanho, V. V. Podinovski, C. S. Sarrico, and E. A. Shale, “Pitfalls and protocols in DEA,” Eur. J. Oper. Res., vol. 132, no. 2, pp. 245–259, Jul. 2001.

[33] W. W. Cooper, L. M. Seiford, and K. Tone, Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software. Boston, MA: Kluwer Academic, 2000.

[34] A. Charnes, C. T. Clark, W. W. Cooper, and B. Golany, “A developmental study of data envelopment analysis in measuring the efficiency of maintenance units in the US air forces,” Ann. Operations Res., vol. 2, pp. 95–112, 1985.

[35] Y. Roll, B. Golany, and D. Seroussy, “Measuring the efficiency of maintenance units in the Israeli air force,” Eur. J. Oper. Res., vol. 27, no. 2, pp. 136–142, Nov. 1989.

[36] R. L. Clarke, “Evaluating USAF vehicle maintenance productivity over time: An application of data envelopment analysis,” Decision Sci., vol. 23, no. 2, pp. 376–384, Mar./Apr. 1992.

[37] S. Sun, “Assessing joint maintenance shops in the Taiwanese army using data envelopment analysis,” J. Oper. Manag., vol. 22, no. 3, pp. 233–245, Jun. 2004.

[38] J. C. Paradi, Smith, and Schaffnit-Chatterjee, “Knowledge worker performance analysis using DEA: An application to engineering design teams at Bell Canada,” IEEE Trans. Eng. Manag., vol. 49, no. 2, pp. 161–172, May 2002.

[39] R. D. Banker, S. M. Datar, and C. F. Kemerer, “Factors affecting software maintenance productivity: An exploratory study,” in Proc. 8th Int. Conf. Information Systems, 1987, pp. 160–175.

[40] ——, “A model to evaluate variables impacting the productivity of software maintenance projects,” Manag. Sci., vol. 37, no. 1, pp. 1–18, Jan. 1991.

[41] J. C. Paradi, D. N. Reese, and D. Rosen, “Applications of DEA to measure the efficiency of software production at two large Canadian banks,” Ann. Oper. Res., vol. 73, pp. 91–115, 1997.

[42] C. Parkan, K. Lam, and G. Hang, “Operational competitiveness analysis on software development,” J. Oper. Res. Soc., vol. 48, no. 9, pp. 892–905, Sep. 1997.


[43] R. D. Banker and C. F. Kemerer, “Scale economies in new software development,” IEEE Trans. Softw. Eng., vol. 15, pp. 1199–1205, Oct. 1989.

[44] R. D. Banker, H. Chang, and C. F. Kemerer, “Evidence on economies of scale in software development,” Inf. Softw. Technol., vol. 36, no. 15, pp. 275–282, May 1994.

[45] R. D. Banker and S. A. Slaughter, “A field study of scale economies in software maintenance,” Manag. Sci., vol. 43, no. 12, pp. 1709–1725, Dec. 1997.

[46] E. Stensrud and I. Myrtveit, “Identifying high performance ERP projects,” IEEE Trans. Softw. Eng., vol. 29, no. 5, pp. 398–416, May 2003.

[47] Z. Yang and J. C. Paradi, “DEA evaluation of a Y2K software retrofit program,” IEEE Trans. Eng. Manag., vol. 51, no. 3, pp. 279–287, Aug. 2004.

[48] M. Oral, O. Kettani, and P. Lang, “A methodology for collective evaluation and selection of industrial R&D projects,” Manag. Sci., vol. 37, no. 7, pp. 871–885, Jul. 1991.

[49] W. D. Cook, M. Kress, and L. M. Seiford, “Data envelopment analysis in the presence of both quantitative and qualitative factors,” J. Oper. Res. Soc., vol. 47, no. 7, pp. 945–953, Jul. 1996.

[50] R. H. Green, J. R. Doyle, and W. D. Cook, “Preference voting and project ranking using DEA and cross-evaluation,” Eur. J. Oper. Res., vol. 90, no. 3, pp. 461–472, May 1996.

[51] J. D. Linton, S. T. Walsh, and J. Morabito, “Analysis, ranking and selection of R&D projects in a portfolio,” R&D Manag., vol. 32, no. 2, pp. 139–147, Winter 2002.

[52] S. A. Thore and L. Lapao, “Prioritizing R&D projects in the face of technological and market uncertainty: Combining scenario analysis and DEA,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 87–110.

[53] S. A. Thore and G. Rich, “Prioritizing a portfolio of R&D activities, employing data envelopment analysis,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 53–74.

[54] C. C. Liu and C. Y. Chen, “A two-dimensional model for allocating resources to R&D programs,” J. Amer. Acad. Bus., vol. 5, no. 1/2, pp. 469–473, Sep. 2004.

[55] H. Eliat, B. Golany, and A. Shtub, “Constructing and evaluating balanced portfolios of R&D projects with interactions: A DEA based methodology,” Eur. J. Oper. Res., 2005, to be published.

[56] D. Verma and K. K. Sinha, “Toward a theory of project interdependencies in high tech R&D environments,” J. Oper. Manag., vol. 20, no. 1, pp. 451–468, Sep. 2002.

[57] B. Yuan and J. N. Huang, “Applying data envelopment analysis to evaluate the efficiency of R&D projects – A case study of R&D energy technology,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 111–134.

[58] E. Revilla, J. Sarkis, and A. Modrego, “Evaluating performance of public-private research collaborations,” J. Oper. Res. Soc., vol. 54, no. 2, pp. 165–174, Feb. 2003.

[59] D. K. Chai and D. C. Ho, “Multiple criteria decision model for resource allocation: A case study in an electric utility,” INFOR, vol. 36, no. 3, pp. 151–160, Aug. 1998.

[60] S. A. Thore and F. Pimentel, “Evaluating a portfolio of proposed projects, ranking them relative to a list of existing projects,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 233–246.

[61] J. D. Linton and W. D. Cook, “Technology implementation: A comparative study of Canadian and U.S. factories,” INFOR, vol. 36, no. 3, pp. 142–150, Aug. 1998.

[62] J. S. Busby and A. Williamson, “The appropriate use of performance measurement in non-production activities: The case of engineering design,” Int. J. Operations Production Manag., vol. 20, no. 3, pp. 336–358, Mar. 2000.

[63] E. Van Aken, D. Van Goubergen, and G. Letens, “Integrated enterprise transformation: Case application in a project organization in the Belgian armed forces,” Eng. Manag. J., vol. 15, no. 2, pp. 3–16, Jun. 2003.

[64] Systems Engineering – System Life Cycle Processes, ISO/IEC 15288:2002, International Organization for Standardization, Geneva, 2002.

[65] D. Gerwin and N. J. Barrowman, “An evaluation of research on integrated product development,” Manag. Sci., vol. 48, no. 7, pp. 938–953, Jul. 2002.

[66] M. Ainscough and B. Yazdani, “Concurrent engineering within British industry,” Concurrent Eng.: Res. Applicat., vol. 8, no. 1, pp. 2–11, Mar. 2000.

[67] H. Maylor and R. Gosling, “The reality of concurrent new product development,” Integrated Manuf. Syst., vol. 9, no. 2, pp. 69–76, 1998.

[68] B. Haque, “Problems in concurrent product development: An in-depth comparative study of three companies,” Integrated Manuf. Syst., vol. 14, no. 3, pp. 191–207, May 2003.

[69] N. Bhuiyan, D. Gerwin, and V. Thomson, “Simulation of the new product development process for performance improvement,” Manag. Sci., vol. 50, no. 12, pp. 1690–1703, Dec. 2004.

[70] C. Terwiesch and C. H. Loch, “Measuring the effectiveness of overlapping development activities,” Manag. Sci., vol. 45, no. 4, pp. 455–465, Apr. 1999.

[71] K. Eisenhardt and B. Tabrizi, “Accelerating adaptive processes: Product innovation in the global computer industry,” Admin. Sci. Quart., vol. 40, no. 1, pp. 84–110, Mar. 1995.

[72] A. Griffin, “The effect of project and process characteristics on product development cycle time,” J. Marketing Res., vol. 34, no. 1, pp. 24–35, Feb. 1997.

[73] H. D. Sherman, “Managing productivity of health care organizations,” in Measuring Efficiency: An Assessment of Data Envelopment Analysis, R. H. Silkman, Ed. San Francisco: Jossey-Bass, 1986, pp. 31–46.

[74] H. Kerzner, In Search of Excellence in Project Management. New York: Van Nostrand Reinhold, 1998.

[75] A. P. C. Chan and A. P. L. Chan, “Key performance indicators for measuring construction success,” Benchmarking, vol. 11, no. 2, pp. 203–221, 2004.

[76] O. I. Tukel and W. O. Rom, “An empirical investigation of project evaluation criteria,” Int. J. Oper. Production Manag., vol. 21, no. 3, pp. 400–416, Mar. 2001.

[77] A. J. Shenhar and D. Dvir, “Toward a typological theory of project management,” Res. Policy, vol. 25, no. 4, pp. 607–632, Jun. 1996.

[78] V. S. D. Krishnan, “Managing the simultaneous execution of coupled phases in concurrent product development,” IEEE Trans. Eng. Manag., vol. 43, no. 3, pp. 210–217, May 1996.

[79] T. Raz, A. J. Shenhar, and D. Dvir, “Risk management, project success, and technological uncertainty,” R&D Manag., vol. 32, no. 2, pp. 101–109, Mar. 2002.

[80] R. D. Banker, A. Charnes, and W. W. Cooper, “Some models for estimating technical and scale inefficiencies in data envelopment analysis,” Manag. Sci., vol. 30, no. 9, pp. 1078–1092, Sep. 1984.

[81] H. Scheel, “Undesirable outputs in efficiency valuations,” Eur. J. Oper. Res., vol. 132, no. 2, pp. 400–410, Jul. 2001.

[82] P. L. Brockett and B. Golany, “Using rank statistics for determining programmatic efficiency differences in data envelopment analysis,” Manag. Sci., vol. 42, no. 3, pp. 466–472, Mar. 1996.

[83] T. Sueyoshi and S. Aoki, “A use of a nonparametric statistic for DEA frontier shift: The Kruskal and Wallis rank test,” Omega, vol. 29, no. 1, pp. 1–18, Feb. 2001.

[84] B. Golany and S. A. Thore, “On the ranking of R&D projects in a hierarchical organizational structure subject to global resource constraints,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 253–274.

Jennifer A. Farris received the B.S. degree in industrial engineering from the University of Arkansas, Fayetteville, and the M.S. degree in industrial and systems engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg. She is working toward the Ph.D. degree in the Enterprise Engineering Research Laboratory, the Grado Department of Industrial and Systems Engineering, Virginia Tech.

She is a Graduate Research Assistant in the Enterprise Engineering Research Laboratory, the Grado Department of Industrial and Systems Engineering, Virginia Tech. Her research interests are in performance measurement, lean production, kaizen event processes, product development, and project management. She is a member of IIE and Alpha Pi Mu.


Richard L. Groesbeck received the B.S. degree in civil engineering from Brigham Young University, Provo, UT, the M.B.A. degree from Case Western Reserve University, Cleveland, OH, and the Ph.D. degree in industrial engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg.

He is currently a Research Assistant Professor in the Grado Department of Industrial and Systems Engineering at Virginia Tech. His research interests are performance measurement and team-based work systems. Prior to receiving the Ph.D. degree, he spent more than 20 years in industry, working for U.S. Steel, Ore-Ida, and Clorox, in a variety of engineering and manufacturing management positions, including field engineer, design engineer, production supervisor, plant manager, and Division Technology Manager.

Dr. Groesbeck is a member of IIE, a senior member of ASQ, a Certified Quality Engineer, and an Examiner for the U.S. Senate Productivity and Quality Award for Virginia.

Eileen M. Van Aken received the B.S., M.S., and Ph.D. degrees in industrial engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg.

She is currently an Associate Professor and Assistant Department Head in the Grado Department of Industrial and Systems Engineering, Virginia Tech. She also serves as Director of the Enterprise Engineering Research Laboratory, where she conducts research and teaches in the areas of performance measurement, organizational transformation, lean production, and team-based work systems. Prior to joining the faculty at Virginia Tech, she was a Process Engineer with AT&T Microelectronics, Richmond, VA.

Dr. Van Aken is a member of ASEM, ASQ, and ASEE, a senior member of IIE, and a fellow of the World Academy of Productivity Science.

Geert Letens received the M.S. degree in telecommunications engineering from the Royal Military Academy, Brussels, Belgium, the M.S. degree in mechatronics from Katholieke Universiteit, Leuven, Belgium, and the M.S. degree in total quality management from Limburg University Center, Diepenbeek, Belgium.

He is currently the TQM-Coordinator of the Competence Center on Communications and Information Systems of the Belgian Armed Forces. He also serves as an external military Professor in the Department of Management and Leadership, the Royal Higher Defense Institute, Laken, Belgium. His research interests are performance measurement, organizational transformation, lean six sigma for service, product development, and project management. He has several years of consulting experience in the areas of organizational change and management systems as President of ChI Consulting.

Dr. Letens is a member of ASEM, IIE, SAVE, and PMI.

