
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 47, NO. 3, AUGUST 2000

Developing Performance Metrics for a Design Engineering Department

Robert K. Buchheim

Abstract—Performance metrics can be effectively used by Design Engineering organizations, to improve competitiveness, highlight areas needing improvement, help to focus design emphasis on the customer’s desires and priorities, and build teamwork between engineering and the other functions of the corporation. This paper describes one organization’s successful implementation of performance metrics for mechanical and electronic design.

Index Terms—Design engineering, metrics, performance measurement.

I. INTRODUCTION

IN the past several years, quite a number of books and papers have highlighted the virtues of nonfinancial performance metrics as an aid in improving operations and competitive posture, and for evaluating the results of investments in new technologies, techniques, or capital assets [1]–[4]. There is also an expanding literature on the application of performance metrics to project management, and product development teams. However, with the notable exception of the software engineering discipline [5], there are few published descriptions of the practical application of performance metrics to the hardware design engineering activities of large corporations. At the Aeronutronic Division of Lockheed Martin Corporation, we developed and used a set of engineering performance metrics which helped guide our productivity and quality improvement efforts in the Mechanical and Electronic Design Departments, and demonstrated the benefits of these efforts. This paper describes what we did, how we did it, and offers some advice based on our experience.

II. WHY DEVELOP PERFORMANCE METRICS?

If we view the engineering design activity as a process, and we want to improve that process, then we need a way to measure its results, and the trend of those results over time. Most engineering managers and design engineers already monitor cost and schedule variances, prepare design reviews, audit specification compliance, and perform design-for-producibility studies. Yet, these parameters are not sufficient to measure the performance of the corporation’s engineering function because they are project or product focused, and as a result, they do not extend beyond the life of a single project. Most measurements of the effectiveness of product design effort—including such cost-oriented metrics as design-to-cost or value engineering—have the same problem: they apply to a single product or single project, and do not give us data with which to assess and improve the overall engineering process.

Manuscript received September 17, 1997; revised July 24, 2000. Review of this manuscript was arranged by Department Editor B. V. Dean.
The author is with Lockheed Martin Undersea Systems, Irvine, CA 92623–4340 USA (e-mail: [email protected]).
Publisher Item Identifier S 0018-9391(00)06630-7.

In order to support efforts to achieve continuous improvement across multiple projects, and over a longer time frame than a single product-development effort, a longer lived set of metrics is needed. These metrics will indicate the performance of the design process (rather than the performance of a specific design project), and should target competitiveness, provide a focus for “continuous improvement” initiatives, and facilitate teamwork [6].

A. Demonstrate Competitiveness

The design engineering function is subject to both internal and external competitive pressures. Internally, we compete with other departments for capital, R&D, and training funds. Externally, we are critical players in the corporation’s competition for project awards: the cost, quality, responsiveness, and timeliness of engineering activities can be a huge lever on the success or failure of a proposal, or of a new-product development. We also compete (either explicitly or implicitly) against the engineering departments at other companies: senior management must periodically consider the merits of outsourcing the design engineering function.

Financial metrics cannot adequately describe the design engineering competitive environment. They do not directly address the dimensions of productivity, quality, timeliness, and responsiveness to customer needs, which are precisely the dimensions on which design engineering is most likely to compete. It is wise to have appropriate, objective data which indicate our level of competitiveness, and progress at improving them, along these dimensions [7].

B. Provide Focus for Improvement Efforts

Financial metrics rarely provide a meaningful focus for engineering performance improvement activities or criteria for evaluating the results of these activities. Metrics such as sales, profits, or cash flow are too far removed from the daily concerns of engineering leaders. What is needed is a set of nonfinancial indicators which engineering supervisors and working engineers can relate to, and which senior managers can accept as objective, quantitative evidence of the benefits which accrue from investments in equipment, processes, and people. The periodic reporting of these engineering performance metrics provides a focus on the continuous improvement of parameters which are closely related to the tasks which engineers actually perform, and which are “results oriented” so that they are indicative of improvements in the design process.

It is a common cliché that, in order to manage something, we must be able to measure it. In order to improve something, we must measure it consistently over time. Happily, it is also a common experience that the mere fact of measuring something will yield an increment of improvement, as the organization picks off the “low hanging fruit” to make rapid, inexpensive improvements in measured performance.

C. Contribute to Competitive Bidding on Future Projects

By capturing trend data over several years, productivity and quality metrics can play a useful role in planning and competing for new projects. A consistent trend of improved quality, shorter cycle time, and lower costs can be projected into the future, and translated into more attractive proposal estimates.

D. Focus on Customers’ Desires and Priorities

The selected engineering process metrics should focus on those aspects of design engineering performance which are of particular interest to the customers [8]. The process of developing a set of metrics can be an important step toward explicitly identifying just what the customers want, expect, or are particularly concerned about. This identification is a critical step in progressing toward a more “customer-oriented” design process.

E. Analogy to Manufacturing Metrics

Our effort to develop quantitative performance metrics for the design engineering activity was guided by the idea that there is a conceptual similarity between the design function and the manufacturing function. The manufacturing function uses its processes to transform raw materials and component parts into a finished, physical product which is described by the engineering drawings and specifications. The design engineering function applies its processes (such as system engineering, requirements allocation, functional analysis, tradeoff studies, prototyping, and testing) to transform market needs or customer desires into a description of the end product. This description is embodied in the engineering drawings and specifications used by the manufacturing process. The manufacturing function has a long history of using an array of metrics to evaluate its performance. These include some financial data (e.g., achievement of cost targets or “learning-curve” rates), but are highlighted by such nonfinancial items as manufacturing process metrics (throughput and cycle time), compliance metrics (reject rates, yields, scrap, and rework rates), and customer-satisfaction metrics (on-time delivery rate, returns, complaints, or back orders).

Starting with the conceptual similarity between the manufacturing process and the design engineering process—the goal of each is to transform an input into an output—it seemed reasonable to create a set of metrics which would indicate the level of performance of engineering activities in terms analogous to efficiency, cycle time, quality, scrap, and rework. There are limits to this analogy, of course: the output of the engineering design activity is a mix of service, knowledge, and invention, and it is unlikely that a set of metrics will completely describe the “output” of a particular organization. Nevertheless, the analogy can go a long way toward clarifying the results which are expected from the engineering process, and make the idea of quantitative metrics a bit less foreign to the engineering staff.

F. Objections to the Use of Metrics

We faced a few immediate objections to the idea of establishing quantitative engineering performance metrics. These included the following.

• “There is no way to quantify the creativity, innovation, and nearly artistic skill which characterizes the very best engineering design efforts.”

• “Any metric which is defined can be achieved in a way which will be counterproductive to the true goals of the organization and the project.”

• “No finite set of metrics will completely describe the ‘health’ of the engineering process.”

• “Because design engineering is a creative function, it cannot be ‘pigeon-holed’ by rigid constraints defined in performance measures; sometimes tradeoffs are needed between competing goals.”

These objections are reasonable. However, they apply equally well to functions such as manufacturing (where quantitative performance metrics have a long and successful history) or, for that matter, the entire corporation, where the “bottom line” is still the final word on success or failure. Most certainly, the sort of engineering process performance metrics which we developed cannot be treated in isolation. They must be used in conjunction with project-oriented metrics (which help ensure that each project is meeting its particular goals), and engineering management controls (which help ensure the subjective quality of the design results). Happily, these objections turned out not to be serious problems because we recognized the potential for conflict, were straightforward in expressing to the design staff what our purpose in using the metrics was, and worked together to minimize potential ill effects. Most importantly, we maintained as our goal that of improving the design process—“the system”—rather than using the metrics to judge the performance of individual engineers and designers.

III. ENGINEERING DEPARTMENT METRICS “FIT IN” TO CORPORATE GOALS

Metrics which indicate the health of the design engineering process will be different from those that indicate the merits of a particular product design (such as its functionality or production cost), or those that indicate the health of the overall corporation (which are typically financial, such as earnings or sales growth rate). For example, most organizations cannot unambiguously calculate the sales or profit traceable to an improved engineering process (although we have certainly tried, with some hand-waving about how the process improvement translated into lower R&D costs, or faster time-to-market, or some other “nonengineering” parameter). Still, the engineering performance metrics must represent a quantification of the mission statement, so that improvement in the engineering process will be congruent with improvement in the overall corporate goals [9]. By analogy with the manufacturing process, any activity which improves productivity, reduces cycle time, reduces scrap, and reduces rework (all other things being equal) will contribute to improved corporate operating performance.

In order to support overall corporate operating performance improvement, Engineering metrics should also be harmonious with the metrics and goals of other departments, particularly those which Engineering views as “internal customers” or “internal teammates.” The Manufacturing Department is almost always the “internal customer” to whom Engineering delivers new product designs and drawings. The Purchasing Department is almost always an “internal teammate” with Engineering in the selection of suppliers. Engineering must specify the characteristics of purchased parts and assemblies, while Purchasing must negotiate prices and contractual terms consistent with overall product strategy. Not surprisingly, we found it expeditious to mimic the metrics which were being used by those other organizations, so that improvement efforts by Engineering would automatically improve our support to these internal customers.

IV. THE PROCESS OF CREATING METRICS

A. The “Textbook Approach”

The published approaches to developing, implementing, and using performance measurement systems for operating departments can be distilled into the following guidelines [10].

A performance metric must include three elements: a defined unit of measure (e.g., hours per drawing or cycle time for completion), a “sensor” which gathers and records the raw data (e.g., a clerk or a data file from an automated test station), and a frequency with which measurements and reports are to be made (e.g., monthly average reject rate or annual average productivity). The unit of measure should be as simple as possible, so that the metric will be easy to understand. The “sensor” should be placed at an appropriate step in the overall design process, to minimize the effort associated with data collection, and to facilitate the use of data for identifying process improvements. The frequency of data collection and reporting should be appropriate to the nature and use of the data. Some data will be gathered on a fixed-calendar basis (e.g., daily, weekly, or monthly), while other data will become available at certain performance milestones (e.g., drawing completion or conduct of final testing). The more frequently the data can be reported, the more timely they will be, and the more opportunities you will have to attempt corrective actions. For example, if you have to report performance results once per year, you will want to check on your progress at least quarterly, so that you have some opportunity to correct any areas of deficiency. On the other hand, the more difficult or expensive it is to gather the data, the longer the appropriate interval between reports will be, and hence the slower the rate at which meaningful corrective actions can be made.
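To make the three elements concrete, the following is a minimal sketch of a metric-definition record. It is an illustration only: the field names, the Frequency values, and the example metric shown are assumptions for the sketch, not artifacts of the system described in this paper.

```python
from dataclasses import dataclass
from enum import Enum

class Frequency(Enum):
    MONTHLY = "monthly"
    QUARTERLY = "quarterly"
    ANNUAL = "annual"

@dataclass
class MetricDefinition:
    """The three required elements, plus the "direction of better" test
    discussed later in the paper. Field names are hypothetical."""
    name: str
    unit_of_measure: str    # defined unit, e.g., hours per drawing
    sensor: str             # who or what gathers and records the raw data
    frequency: Frequency    # how often measurements and reports are made
    better_direction: str   # "up" or "down"

# Example instance, phrased around the drawing-quality metric discussed below.
first_time_pass = MetricDefinition(
    name="first-time pass through Design Check",
    unit_of_measure="percent of drawings approved on first submittal",
    sensor="Design Check intake log",
    frequency=Frequency.MONTHLY,
    better_direction="up",
)
print(first_time_pass)
```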

The selected metrics should measure parameters which your customer cares about, should measure results rather than activities (e.g., measure “level of demonstrated skill” rather than “number of hours spent in training classes”), and focus on parameters which correlate to the overall corporate mission. Typical examples of such metrics are quality of product, timely delivery, cost reduction, cycle time reduction, and customer satisfaction.

TABLE I
GENERALLY ACCEPTED APPROACH TO CREATING AND USING PERFORMANCE METRICS (BASED ON [10])

The process of creating the performance measurement system generally includes the steps outlined in Table I.

B. Our Implementation Philosophy

We used a process very similar to that described in Table I to develop our engineering performance metric system. In addition, we tried to adhere to three principles which greatly simplified the task of creating our metric system, reduced the cost of implementing it, and simplified the challenge of maintaining harmony with other functional departments. These were the following.

1) Get the Measurement System “Up and Running” Quickly: One of our implicit goals was to be “up and running” as quickly as possible with a set of metrics which we were reasonably confident were of value, based on easily gathered data, and then add to them or modify them as indicated by experience and opportunity. This approach had several benefits. It led us to the analogy with manufacturing, which forced us to consider the aspects of the engineering process which were analogous to productivity, cycle time, quality, scrap, and rework. By focusing on the need to generate real metrics, and show real data quickly, it forced us to take maximum advantage of work that had already been done elsewhere within the Corporation. This led to the second principle.

2) Use Data and Measurement Systems which Already Exist: We investigated the parameters which were already being collected by our management information systems, and found that many of them could be easily reformatted to meet our needs. This was valuable because the data were already accepted as being important, and also because it offered an expedient defense against “gaming.” As an example of this, consider one amusing anecdote which came out of our quality management information system. QC routinely tracked the costs and causes of scrap and rework in the factory. Each incident of scrap/rework was analyzed, and its cause attributed to a specific factor, such as poor workmanship, discrepant supplier parts, design error, etc. Thanks to diligent efforts by our Manufacturing Manager to improve the overall quality of workmanship, and the aggressive stance taken by our Purchasing Manager to improve the quality of supplier parts, the people who analyzed the rework/scrap data knew that their monthly report would be carefully scrutinized. They also knew that if the cause of an incident was less than crystal clear, but it was blamed on poor workmanship, then the Manufacturing Manager was sure to demand further investigation. Similarly, if the incident was blamed on supplier errors, the Purchasing Manager would demand more careful investigation. But if the incident was blamed on design error, no one demanded extra effort! Is it any surprise that the reported percentage of design error in our rework/scrap report had gradually increased over the years?

Once we in Engineering started using the scrap/rework report to track our own performance, and took serious action to reduce our contribution to scrap/rework problems, we became members of the team which determined the root cause of each incident. This contributed to agreement on the causes, teamwork to implement corrective actions, and elimination of “gaming” of the reporting system. As a result of our efforts to reduce the incidence of factory scrap/rework caused by design error, we improved the product design (drawings and specifications), the manufacturing process (fabrication and assembly methods), and the overall product-development process (by improving the communication and mutual understanding between design engineering and manufacturing).

3) Determine What is Important to Your Customers: Our third principle was that engineering performance metrics should relate directly to attributes which are of importance to the customers. Therefore, it was necessary to find out what was important to them. This could be done literally or figuratively. Literally asking, “What do you expect from engineering?” is most easily done with internal customers.

a) Ask your internal customers: We asked our Program Offices to define what they expected from Engineering, and asked them to rank order their priorities. We asked Manufacturing what they expected of us; some of their top concerns were quality of data packages, and responsiveness in providing engineering help when production problems arose. These were core concerns which led to the definition of specific metrics: productivity, cost compliance, data package quality, and cycle time for engineering changes. Similarly, we asked for and received specific expectations from our other internal customers and teammates.

b) Infer your external customer’s needs if you can’t ask directly: With external customers, it may not be appropriate to literally ask, “What aspects of engineering are most important to you?” Still, the contracts with these customers can offer obvious clues to the answer. What products does engineering deliver directly to the customer? What award fee or profit penalties are included? What design-quality parameters are specified? What complaints have customers made in the recent past, or what engineering issue contributed to the loss of recent orders?

Many customers buy more than just hardware. As a design contractor, we prepared drawings and reports which were submitted to our customers for approval. If they were found wanting, they were returned to us for revision or correction. This was a classic example of “rework”—if the customer rejected a report, not only did we bear the cost of preparing it in the first place, and the customer bear the cost of reviewing (and rejecting) it, but we then had to revise it, and the customer had to rereview it, and there was likely to be an adverse schedule impact elsewhere in the project because of the lateness of satisfactory design data. For example, structural loads testing might be delayed, awaiting correction of the stress analysis report. This example formed the core of one of our metrics—data item quality as measured by customer acceptance rate.

Once we set about improving our performance in this area, we found our customers to be not just cooperative, but enthusiastic about helping us since our success would reduce their costs for review and rereview.

C. Challenges in Implementing Performance Metrics

In developing our set of engineering performance metrics, we faced some challenges which will doubtless be faced by other design engineering organizations attempting to implement such a system. Our experiences and solutions may be useful to others.

1) The Challenge of Rigorous Definition: A candidate metric may start out as a vague concept. It must be translated into a careful definition which explains what data are used (where they come from, who will provide them, and what format they will come in), how they are manipulated (e.g., the equation or procedure), and any special culling of the raw data before using them. Consider, for example, “improve design quality.” This was an important goal, but it required quite a bit of further definition to develop a useful metric.

Our drawings had to pass through Design Check before being released to Manufacturing or Purchasing. It seemed that “first-time pass through design check” was analogous to the manufacturing measurement of quality based on “reject rate” at an inspection station. We set up a simple system which would log each drawing as it came into Design Check, record whether it was approved or rejected, and prepared a monthly report of the percentage of “first-time pass.” Since some tooling and test equipment drawings were prepared by elements of the Manufacturing organization, we removed these from the raw data before preparing the monthly reports. Thus, a vague concept was translated into a specific prescription for gathering, analyzing, and reporting a metric. A recent report of performance against this metric is shown in Fig. 1.
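The paper does not give the implementation of this log, but the prescription it describes (log each submittal, cull Manufacturing-prepared tooling and test-equipment drawings, report a monthly percentage) can be sketched in a few lines. The record layout and the sample data below are illustrative assumptions, not the original system's format.

```python
from collections import defaultdict

# Illustrative Design Check log. Each record: (month, originating department,
# whether the drawing was approved on its first submittal).
check_log = [
    ("Jan", "Design Engineering", True),
    ("Jan", "Design Engineering", False),
    ("Jan", "Manufacturing", True),     # tooling/test-equipment drawing
    ("Feb", "Design Engineering", True),
]

def monthly_first_time_pass(log):
    """Percent of drawings approved on first submittal, by month, after culling
    drawings prepared by the Manufacturing organization, as the text prescribes."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for month, dept, first_pass in log:
        if dept == "Manufacturing":
            continue  # special culling of the raw data
        total[month] += 1
        passed[month] += first_pass
    return {m: 100.0 * passed[m] / total[m] for m in total}

print(monthly_first_time_pass(check_log))   # {'Jan': 50.0, 'Feb': 100.0}
```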

This single metric could have permitted a situation of “perfect drawings, but terrible designs,” so we augmented it with a measure of the change rate on drawings after release, and the cause of the changes. This metric, shown in Fig. 2, served as an indicator of problems which were not caught until they reached Manufacturing. We also tracked the number of rework/scrap incidents caused by design flaws, as a third indicator of design quality. Together, these three metrics provided a useful picture of the quality of our designs. For each of these metrics, we also developed a report which described the circumstances of any “failure” (e.g., a rejected drawing or a scrapped part), and implemented a standard practice of determining the root cause for failures and identifying corrective/preventive measures to eliminate the root cause of the most common failures. This is, of course, a never-ending process: once the “most common” failure mode has been tackled, then the next one on the list becomes the target, and so on ad infinitum.

Fig. 1. A desirable objective—improve drawing quality—is translated into a metric which demonstrates “continuous improvement” success.

Once a clear definition of a candidate metric was developed, our final test was to ask: “What is the direction of ‘better’?” If you plot your performance over time, do you want the curve to “go up” or “go down?” In most cases, the answer was obvious: first-time pass through check should rise toward 100%; data report rejections should descend toward 0%. This seems so trivial that it can be easily overlooked. However, we did, in fact, drop a few proposed metrics when we found it impossible to reach agreement on the direction of “better.” An example of a problem in this area was drawing change rate: during the design process, is a low change rate desirable (indicating that “good quality” designs are being made the first time), or is a high change rate desirable (indicating that the design is being optimized, and problems are being identified and corrected before the start of production)? Does the rate of design change (prior to final production release) indicate the rate of error in the original version, or does it indicate that additional features are being added to the product? The answer may depend more on personal philosophy than on objective criteria, and has generated heated arguments, not just at our company, but at others as well [11]. Absent a consensus on the direction of “better,” we searched for more appropriate metrics in this area.

2) The Challenge of “Buy-In”: The simple metric of first-time pass at Design Check (i.e., “how many of our drawings are judged acceptable by our own engineering quality-inspection process?”) was a real eye opener in several ways. We suspected that our quality rate was less than stellar, but we had no idea just how poor it was until we saw the first few data points in Fig. 3. In the first two months of tracking this metric, we averaged less than 15% “first-time pass” rate at Design Check!

Fig. 2. A “lagging indicator” of design quality—change rate during production—is more difficult to improve, but is still subject to continuous improvement when worked over several years.

Surprisingly, not everyone thought that this represented a problem. We had to contend with designers who propounded the theory that this low “pass rate” was actually a good thing. In their theory, improving the “pass rate” would require the engineers and drafters to take longer to prepare the drawings; there were so many elements of information contained in the average engineering drawing that it was unreasonable to expect it to be error free; and therefore, the most efficient approach would be to continue letting Design Check catch the mistakes, so that the drafters could focus on just fixing those few problems found. Clearly, before we could take actions to improve the “first-time pass” rate, we had to convince the design staff that it was, in fact, better to do it right than to do it over.

We countered this theory with three lines of argument. First, every designer knew that he or she had complained about drawings which were rejected by Check, corrected by drafting, then resubmitted to Check only to be rejected again for errors which weren’t noticed on the first check cycle. Sometimes, these drawings went around and around until the engineer, the designer, and the checker were all angry at each other, and the manager was also getting quite upset about being so late at releasing drawings. From this experience, it could be seen that getting at least a little bit better would be useful. Second, most people recognize that no inspector is perfect. If you were the inspector on a production line which made six or seven “bad” parts to every “good” part, pretty soon you would be overwhelmed by the mistakes, and inevitably “bad” parts would slip through. These would then have to be dealt with at a later, and more expensive, stage of the process. Third, while it is true that an engineering drawing contains a vast array of information, all of that information is there for a reason. If any single piece of it is wrong or hard to understand, bad things are likely to happen during fabrication, assembly, or test. Hence, striving for something a bit closer to perfection than an 85% reject rate seemed worthwhile. Finally, for the person who did not see the merit in these arguments, I offered that, “I’m the Manager, and this is the way I want it. Please humor me in this for a while, and we’ll see how it goes.”

Fig. 3. Gathering and tracking performance data enables you to take corrective actions based on facts, and to measure the results of actions taken.

3) The Challenge of Trust: “How it went” played right into a trap which Deming [12] had warned of: the importance of achieving trust between management and employees, and driving fear out of the workplace. In order to track the “pass rate,” we used a computer database which identified each drawing, the drafter, the checker, whether the drawing “passed” or “failed,” and the reason for rejection. The existence of this database caused some very real concerns among the design staff: Would it be used to attack individual employees? Would it be an adverse factor in salary reviews, or layoff decisions? In short, was the design staff being “set up?” I made two promises. One was that this database would be used to identify technical problem areas, not “problem people.” The second was that no manager other than their direct supervisor would be privy to the data regarding their individual results. The staff took these promises with a very large grain of salt. Then, following a month of particularly poor results, I was asked by an executive to identify who the poor performers were. With some trepidation, I reminded him that: 1) the objective of tracking this metric was to improve the design process, not to judge the individuals; and 2) in any report of a statistical nature, you need to look at the trend, and not be overly exercised by a single data point. He agreed, and never asked again (which gave me some comfort). More importantly, the rumor mill somehow heard of this discussion, and it contributed toward relieving the fears of the design staff that they were being put at hazard as individuals by this performance tracking system.

“How it went” was a delight for those who believe that numbers speak for themselves. Over the course of a few years, the overall pass rate climbed from our starting point of less than 15% to a sustained level of over 95%. The pass rate became so high that we felt comfortable with implementing a self-assessment system in which only a random sample of drawings actually went through Check, the balance going directly from drafting to release. The combination of higher quality, reduced rework, and selective sample inspection so dramatically reduced the Design Check workload that we reduced its staff to only 20% of what it had been when we started. There was no noticeable increase in design or drafting time. In fact, the evidence was that drawing costs actually went down while quality was going up. And there was no increase in “downstream” errors caught by the manufacturing or quality processes.

4) The Challenge of Surveys: Metrics based on data collected in the normal process of design and production minimized the effort to gather and interpret the data, and were comfortably “solid” and “objective.” However, sometimes, the desired information could only be gathered from survey instruments, in which we periodically asked our internal customers how we were doing. For example, in order to judge the effectiveness of Engineering’s teamwork with other departments (Procurement, Finance, Program Management Offices), we instituted annual surveys in each of those organizations. Surveys are problematic because few of us are qualified developers of questionnaires (which is a science in its own right), and because the gathering of survey data is a greater imposition on the customer organization than is the analysis of “normal” quality or productivity data.

In order to provide data which were useful as a goad to substantive improvement, our surveys tended to be long and multidimensional. A survey which simply asks, “How are we doing (poor, good, excellent)?” and “What is the trend (improving, no change, getting worse)?” will provide no information which can be used to develop corrective actions. The use of complex surveys was feasible because we were dealing with “internal” customers, who would benefit from our improved performance, and because we were not an overly large organization. When focused information on a specific topic or resolution of a time critical problem was needed, we could quickly and easily organize a tiger team with the other organizations involved. Larger or geographically dispersed organizations might benefit from more frequent, shorter survey instruments to augment the global survey of performance.

Our approach to surveys had two cardinal features. First, the survey questions were developed jointly by a small team of engineers and practitioners from the “internal customer” organization, and approved by the manager of the customer organization to ensure that they captured the most important expectations. For example, a group of three engineers and four purchasing experts (buyers and subcontract administrators) constructed the survey which judged Engineering’s support to the Procurement function. The result was a survey consisting of 67 questions covering four key areas of Engineering–Procurement interaction. This was typical of the surveys of each internal customer organization.

Fig. 4. Survey instruments can provide metrics for “continuous improvement,” but they impose more burdens on both the user and the customer organization.

Second, the surveys were taken only once per year, to minimize the imposition on the customer organization. This long cycle time contained the seeds of potential failure: there might be a temptation to let the survey results gather dust during the intervening year, and there might be no sense of urgency to identifying issues and taking corrective actions. These concerns were dealt with by assigning each of our Engineering managers to be the “champion” for improvement of support to one of the internal customer organizations. The “champion” was required to meet with the “customer” VP at least quarterly for an informal discussion of issues and actions. This approach worked well, and the net result was a consistent effort to open communications, share problems, and improve working results. An example of these results is shown in Fig. 4.

An interesting observation about these subjective surveys was that, as Engineering’s performance improved, the expectations of the customer organization tended to increase. Performance which was judged “excellent” one year would be considered only “satisfactory” the next year. This phenomenon of higher expectations was sometimes explicit (a clear change in the scoring criteria), but most often implicit (reflected in lower survey scores, despite objectively constant or even improved performance). We groused about this a bit, of course, but decided that it was a fair replica of our external competitive environment. Today’s customer wants higher quality, more features, faster delivery, and lower cost than did yesterday’s customer. Stasis is equivalent to descent. We must continually get better, just to stay even.

5) The Challenge of Data-Driven Improvement Actions: One driving reason for developing and tracking metrics is that they encourage the use of facts—statistical data—rather than anecdotes as the basis for process improvement and problem solving. This means that the data must be viewed in a similar way to the SPC data which are gathered in the factory: there will always be some fluctuation in the results from month to month due to the natural variability of the process. Recognizing this, we tried to not get overly exercised by a modest drop in a single month, nor greatly excited by a modest improvement in one month. Although we did not implement the standard SPC practice of X-bar and R charts, we did emphasize that the important feature of our charted metrics was the trend line, not the minor fluctuation of data points about the trend.
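The paper does not say how the trend line was computed; as one possible way to separate the trend from month-to-month scatter, a least-squares slope over the charted values can be used. The monthly numbers below are invented for illustration.

```python
def trend_slope(values):
    """Ordinary least-squares slope of a monthly series: the trend line,
    as opposed to the natural month-to-month scatter around it."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Hypothetical monthly "first-time pass" percentages: noisy, but trending upward.
monthly_pass_rate = [14, 22, 19, 31, 28, 40, 37, 45, 52, 49, 58, 63]
print(f"trend: {trend_slope(monthly_pass_rate):+.1f} points per month")
```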

Nevertheless, translating the trend in a particular metric into substantive actions for continuous improvement can take a fair amount of imagination. It may be necessary to take actions in areas which are not quite aligned to the metric in order to achieve positive results. This should not be surprising since the metrics display the results of the design process, but improvement actions must alter the process itself. For example, productivity may be improved more by offering focused training and improved tools than by exhorting the troops to work harder and faster.

a) Sometimes they are obvious: Sometimes, the indicated efforts are obvious, from the definition of the metric and the data being gathered. For example, in the case of our “first-time pass through design check” metric, Pareto analysis characterized the most common reasons for drawing rejection, which were attacked head on. We instituted mandatory training sessions covering the areas which were the most common causes of rejection, to “refresh” the design staff on drawing requirements. We updated the drawing standards to modern practice. We developed custom modules within our CAD system to simplify some aspects of meeting the drawing format standards. We had no illusion that a single iteration of these activities would solve the problem of improving drawing quality. This iterative approach was to be continued indefinitely: examine the quality results to determine the cause of rejections, take corrective measures to improve the process of creating drawings, and monitor the results. As shown in Fig. 3, our initial efforts did lead to fairly rapid improvements, although some of the improvement was doubtless attributable to the “Hawthorne effect” of improved results simply because of the fact of measurement. The multiple-year results (Fig. 1) show that continuous, unending improvement is achievable.
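A Pareto analysis of rejection causes is a simple ranking by frequency with a cumulative share; the sketch below illustrates the idea. The category names and counts are hypothetical, not the ones used in the original reports.

```python
from collections import Counter

# Hypothetical rejection causes pulled from a Design Check log.
rejection_reasons = [
    "dimensioning/tolerancing", "missing notes", "format standard",
    "dimensioning/tolerancing", "parts list error", "dimensioning/tolerancing",
    "format standard", "missing notes", "dimensioning/tolerancing",
]

def pareto_report(reasons):
    """Rank rejection causes by frequency with a cumulative share, so the
    'most common' cause can be attacked first, then the next, and so on."""
    counts = Counter(reasons).most_common()
    total = sum(count for _, count in counts)
    cumulative = 0
    for reason, count in counts:
        cumulative += count
        print(f"{reason:26s} {count:2d}   cumulative {100 * cumulative / total:5.1f}%")

pareto_report(rejection_reasons)
```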

b) Sometimes they are subtle: Sometimes, it requires a bit of imagination to identify an appropriate corrective action. One such case was our metric of data item quality, which tracked the “acceptance rate” of engineering documents submitted to our external customers for approval. One of the most important steps toward improving our performance in this area occurred when our Data Manager asked a major customer to identify his reviewers before we started preparing the documents. That way, our engineer was able to discuss any peculiar issues with his customer counterpart, go over any potential problem areas, and agree on expectations before the formal review began. Since both parties agreed on the details of the objectives of the report and the review criteria, our engineers were better able to tailor the reports to the needs of the customer. This, and other steps, led to a significant improvement in the “acceptance rate,” as shown in Fig. 5.

Fig. 5. Products delivered directly from engineering to the external customer—e.g., technical reports—can provide performance metrics for “continuous improvement.”

c) Sometimes they require trial and error: Sometimes, the effort to improve a metric will generate changes in a cross-functional process. One of our metrics tracked Engineering’s responsiveness to design changes which were requested by Manufacturing. The original idea was to track—and reduce—our “cycle time” from the approval of a change request to the release of a revised design. This was another case in which the data were readily available. Our configuration management system had been recording the relevant data for years, despite the fact that no one had taken the trouble to use them. The cycle time, as measured by the average “age” of approved design changes, was found to be quite stable, and quite long, as shown in Fig. 6(a). We tried increasing the frequency of Configuration Control Board meetings, but this had no effect on the overall cycle time. We assigned “responsible engineers” for each subsystem, with responsibility to expedite all design change activities, but this had only a minor effect. We issued “aging reports” on long-cycle-time design changes to the responsible supervisors, but this, too, had little substantive effect. We began assigning “due dates” to each new design change request, and therein discovered a source of the problem. There had been an age-old informal understanding at the working level of Engineering and Manufacturing that there were two classes of change requests: those which were needed critically, right now, and those which would be “nice to have someday, when you get around to it,” but for which there was no customer requirement, nor any cost or schedule benefit. This recognition prompted an intensive review of the backlog. As a result, more than half of the “open” change requests were canceled, and the metric was revised. We began tracking “compliance to due date” rather than cycle time. With these changes, “compliance to due date” skyrocketed to a sustained 100% [see Fig. 6(b)], and the cost of responding to design change requests dropped substantially since we had eliminated many change requests which would have required engineering labor, but would have provided no substantive benefit in return. The metric change is sketched below.

Fig. 6. (a) Process changes and “working of the backlog” did not significantly improve design-change cycle time. (b) But with backlog reduced and processes improved, “on-time” responsiveness to the internal customer skyrocketed.
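The shift from "cycle time" to "compliance to due date" can be sketched as two small computations over the same change-request records. The record layout and dates below are invented for illustration; they are not from the original configuration management system.

```python
from datetime import date

# Hypothetical change-request records: (approved, due date, released);
# None means the change is still open.
changes = [
    (date(1996, 1, 10), date(1996, 2, 15), date(1996, 2, 10)),
    (date(1996, 1, 20), date(1996, 3, 1), date(1996, 3, 5)),
    (date(1996, 2, 5), date(1996, 4, 1), None),
]
today = date(1996, 3, 31)

def average_age_days(records):
    """Original metric: average "age" of approved design changes; open requests
    keep aging, which is why a stale backlog dominates the number."""
    ages = [((released or today) - approved).days for approved, _, released in records]
    return sum(ages) / len(ages)

def due_date_compliance(records):
    """Revised metric: percent of released changes that met their assigned due date."""
    decided = [(due, released) for _, due, released in records if released is not None]
    on_time = sum(released <= due for due, released in decided)
    return 100.0 * on_time / len(decided)

print(f"average age: {average_age_days(changes):.0f} days")
print(f"compliance to due date: {due_date_compliance(changes):.0f}%")
```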

6) The Challenge of Data Reporting: Most of the published examples of performance metrics, like the figures in this report, provide a simple picture of trends which can be presented to upper management, but which are not a very useful source of data for developing corrective actions. The issue of reporting the data to the people who must develop and take corrective measures should be given serious consideration. In most cases, we found that each metric required three separate reports, for three separate groups of users.

• A summary graph, like the figures in this report, was presented to senior management.

• An intermediate breakdown, usually segregated by department, provided a statistical summary (e.g., a Pareto analysis) of the nature of problems.

• A detailed report containing all of the raw data which went into the metric was used by task teams chartered with specific improvement initiatives.

Especially in regard to the detailed reports, and most especially during the early months of implementing the engineering metrics, it is wise to be sensitive to the twin issues of data security and human insecurity, as noted above. It cannot be repeated too many times that the purpose of gathering, analyzing, and tracking these metrics was to improve the engineering process, not to “target” individual engineers.

7) Issues Related to Defining Metrics: We dealt with three issues in our evaluation of candidate metrics, which have been mentioned [6], but not elaborated on in the literature.

a) The issue of “fluctuations in the denominator”: Many metrics will be presented in the form of ratios. For example, our “first-time pass through design check” metric was

first-time pass = (number of drawings passed this month) / (total number submitted this month)

Before using such a ratio, it is wise to evaluate the effect which typical month-to-month fluctuations in the denominator will have on the reported results. For example, consider a month in which 50 drawings were submitted, and only one failed. This would be scored as 98% pass. Suppose that the next month only two drawings were submitted, and one failed. The score would be an abysmal 50% pass. Did our performance really plummet, or did we fall into the trap of a wildly fluctuating denominator? In the case of drawing quality, examination of our historical data showed that the total number of drawings submitted did not vary too much, and therefore we decided to accept the modest variation which might come about because of fluctuations in the number of drawings completed each month.

For other cases, it might be appropriate to use a longer data interval or to calculate the metric using a “moving window” in order to average out the month-to-month variations in the denominator.
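The paper leaves the smoothing choice open; one common way to implement such a "moving window" (a sketch only, not the original computation) is to pool the numerator and the denominator over the last few months before dividing.

```python
def windowed_pass_rate(passed, submitted, window=3):
    """Moving-window form of the ratio: pool the numerator and the denominator
    over the last `window` months before dividing, which damps the effect of a
    month in which very few drawings were submitted."""
    rates = []
    for i in range(len(submitted)):
        lo = max(0, i - window + 1)
        num = sum(passed[lo:i + 1])
        den = sum(submitted[lo:i + 1])
        rates.append(round(100.0 * num / den, 1))
    return rates

# The two months from the example above: 49 of 50 pass, then 1 of 2 passes.
print(windowed_pass_rate(passed=[49, 1], submitted=[50, 2]))
# month-by-month scores would read 98% then 50%; the pooled window gives [98.0, 96.2]
```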

b) The issue of “competition in the denominator”: A trickier issue was raised by a metric which we used to assess design quality, as measured by rework/scrap incidents. Its original definition was

rework/scrap performance = (no. incidents caused by design error per month) / (total no. incidents per month)

The pitfall here was that the denominator equaled the sum of Engineering, Manufacturing, and supplier-caused incidents. If, for example, Manufacturing and Purchasing improved their performance, the denominator would shrink. If Engineering (the numerator) improved a bit less rapidly than the other organizations, our performance would appear to be worsening, rather than improving slowly. If the mix of incidents changed a bit from month to month, the resulting data could be so erratic as to be unintelligible. This is exactly what happened, as shown in Fig. 7. The denominator was changing as rapidly as the numerator [Fig. 7(a)], and as a result, the reported performance [Fig. 7(b)] was erratic.


Fig. 7. (a) Examine the denominator of a ratio metric, to avoid “competition in the denominator.” (b) “Competition in the denominator” can generate erratic results which are difficult to interpret, and hence nearly impossible to act on.

In effect, the choice of denominator in this metric created a “zero sum” competition among Engineering, Manufacturing, and Purchasing—if one organization improved, the others would be viewed as having worsened. In addition to causing confusion, this could be an impediment to teamwork, and could encourage counterproductive “gaming” of the measurement system, to protect an organization’s reputation.

In order to provide a more useful indicator, we revised the metric to be

rework/scrap performance = (no. incidents caused by design error) / (total number of tests and inspections)

The result is shown in Fig. 8. This definition not only eliminated the “competition in the denominator,” but had the further virtue of normalizing our performance against changes in production rate. It was far simpler to interpret, and improved teamwork by enabling all departments to show simultaneously improving performance.
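With a few made-up monthly counts (the numbers below are illustrative, not the original data), the effect of the two denominators can be seen side by side: under the original ratio, Engineering appears to worsen even while its absolute incident count falls, because the shared denominator shrinks faster; the revised ratio shows the improvement.

```python
# Illustrative monthly counts showing the behavior of the two definitions.
design_incidents  = [10, 9, 8, 8, 7]        # incidents caused by design error
other_incidents   = [40, 30, 22, 16, 12]    # workmanship- and supplier-caused incidents
tests_inspections = [500, 500, 480, 510, 490]

for m in range(5):
    total_incidents = design_incidents[m] + other_incidents[m]
    original = 100.0 * design_incidents[m] / total_incidents        # share of all incidents
    revised = 1000.0 * design_incidents[m] / tests_inspections[m]   # per 1000 tests/inspections
    print(f"month {m + 1}:  original {original:5.1f}%   revised {revised:5.1f} per 1000")
# Engineering's absolute count falls every month, yet the "original" ratio rises
# (appears to worsen) as the other departments improve; the revised ratio falls steadily.
```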

c) The issue of time lag and counting rules: The final issues which came up in connection with several metrics were those of “time lag” or data latency, and establishment of “counting rules.” Most of our metrics were reported monthly.

Fig. 8. An improved definition of “design quality” impact on production eliminated “competition in the denominator,” and demonstrated substantive improvement in performance.

This meant that we had to define the method for dealing with items which were “partly done” at the end of the month. As a trivial example, while tracking drawing quality through check, what should be done with the drawing which is submitted to check on January 30, but which the Checker does not finish reviewing until February 3? Should it be counted as part of the “January” score, or part of the “February” score? In the big scheme of things, it does not matter which is chosen—but a procedure must be selected and followed consistently. We chose to count items as being scored in the month in which a final decision was made (not necessarily the month in which the “product” was made).

In some cases, this meant the data were a bit “stale” by the time they were reported. For example, we tracked the percentage of data reports which were rejected by the customer. Should a report be counted in the month it was prepared, or in the month the customer gave us his decision? We chose the latter, recognizing that there would be some staleness to the data (e.g., a report prepared in January might not complete the customer review cycle until March; hence, it would be reported as a “March” acceptance or rejection). This seemed preferable to the alternative of having to explain why we were not able to provide “January” performance data until March. If we were successful in our efforts to continuously improve our performance (which we were), then the ever-rising performance curve would mitigate most of the issues associated with this data latency. On the other hand, the more latency there is in the data, the more sluggish the feedback loop for corrective actions will be; this sluggishness can be observed in Fig. 2.
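The counting rule chosen here (score each item in the month of the final decision, and do not count undecided items at all) is easy to state in code. The record layout and dates below are hypothetical, used only to show the rule.

```python
from datetime import date

# Hypothetical data-report records: (prepared, customer decision date, accepted);
# None means the report is still in customer review.
reports = [
    (date(1996, 1, 12), date(1996, 3, 4), True),
    (date(1996, 1, 25), date(1996, 2, 18), False),
    (date(1996, 2, 9), None, None),
]

def acceptance_by_decision_month(records):
    """Counting rule from the text: score each report in the month in which the
    final decision was made; reports with no decision yet are not counted."""
    buckets = {}
    for _prepared, decided, accepted in records:
        if decided is None:
            continue
        month = decided.strftime("%Y-%m")
        acc, tot = buckets.get(month, (0, 0))
        buckets[month] = (acc + accepted, tot + 1)
    return {m: f"{100 * acc / tot:.0f}% accepted" for m, (acc, tot) in sorted(buckets.items())}

print(acceptance_by_decision_month(reports))
# {'1996-02': '0% accepted', '1996-03': '100% accepted'}
```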

8) Continuous Improvement of the Measurement Plan: It is unrealistic to hope that a set of engineering process metrics, and an approach for using them, will spring up fully formed and remain unchanged for all time. Any useful measurement plan must contain a provision that enables the metrics to evolve as experience and circumstances dictate. Some of the examples given in previous sections have noted that we were flexible in redefining metrics when problems were identified with their use. In some cases, the information could not be used to define meaningful actions, and in other cases, we decided that we were measuring the wrong things.

Our approach to improvement of the measurement plan was conducted with a few simple groundrules (which, intriguingly, were never written down, but were always followed). First, whenever it was determined that a particular metric was not providing the information that was intended, we would analyze the problem, and then either change the metric or change the way the raw data were handled, so that the resulting report would provide useful information. Second, whenever such a change was made, we would continue calculating the “old” metric for several months, to be sure that we were not inadvertently hiding useful information; and once a firm decision was made to replace a particular metric, we would use historical data to recalculate prior years’ results using the “new” metric. That way, there was less chance of our falling into the trap of appearing to change the scorecard every year. Third, we were quite liberal in accepting recommendations for adding new metrics; but before a new metric would be added to the “official” list, at least three months worth of data had to be gathered and analyzed, to confirm that the metric met all of the criteria to be both useful and understandable. Finally, no change to the metrics or measurement plan was made unless it was reviewed and approved by all of the affected Managers, both in Engineering and in our “internal customer” organizations.

V. USING THE METRICS

Our engineering performance metrics met the expectations we had for them. They helped focus our ongoing improvement efforts on substantive results. They helped us make decisions based on facts, rather than impressions or anecdotes. They demonstrated measurable benefits from engineering-process and capital investments. They enabled us to project future progress when preparing cost proposals for new contracts. They contributed to improving organizational teamwork.

Demonstration of the benefits of investment in design automation tools, training, and process improvement efforts did not depend on an artificial translation of our performance metrics into financial terms. With a reasonably complete set of engineering performance metrics, a generally improving trend across the board was compelling evidence of movement in the “right” direction, and of positive payback from the investments being made.

As we achieved favorable trends in our metrics, they played an increasing role in the preparation of proposal estimates. Consider the example of Fig. 9, which presents one measure of mechanical design productivity, and which formed a basis for parametric estimating of future costs. Not only did it reduce the uncertainty in estimates, but it also provided a quantitative, objective way of discounting estimates for the anticipated effect of future productivity improvement. Since we had demonstrated a consistent productivity trend, it was reasonable to anticipate that a similar rate of productivity improvement would accrue in the future, and therefore to incorporate this anticipated improvement in our proposal estimates.

Fig. 9. Long-term trends of cost reduction and quality improvement are valuable tools for estimating new-project costs.
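A hedged illustration of that kind of parametric use (the hours-per-drawing figures, quarters, and drawing count below are invented for the example, not taken from Fig. 9): fit a trend to the historical productivity data and extrapolate it to the period of the proposed effort.

    # Hypothetical history of mechanical design productivity, by quarter:
    # (quarter index, hours per released drawing).
    history = [(0, 42.0), (1, 40.5), (2, 38.8), (3, 37.9), (4, 36.1)]

    # Ordinary least-squares line through the history.
    n = len(history)
    sx = sum(q for q, _ in history)
    sy = sum(h for _, h in history)
    sxx = sum(q * q for q, _ in history)
    sxy = sum(q * h for q, h in history)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # change in hours/drawing per quarter
    intercept = (sy - slope * sx) / n

    # Project the trend to the midpoint of the proposed effort and price the job.
    future_quarter = 8
    projected = intercept + slope * future_quarter
    print(f"Projected productivity: {projected:.1f} hours per drawing")
    print(f"Estimate for 250 drawings: {250 * projected:.0f} hours")

Whether such an extrapolation is defensible depends, of course, on how consistent the demonstrated trend has been and on how far beyond the data it is pushed.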

Most investments in engineering performance improvement provided benefits to other organizations as well. Design quality is one of these. Improved design quality reduces some engineering costs (such as drawing change rates and problem-resolution teams), but it has a far larger impact on other organizations. Manufacturing, for example, may be able to significantly improve on cost targets, thanks to Engineering's design-for-producibility initiatives, and more trouble-free drawings. Since Engineering's performance metrics were correlated with Manufacturing's metrics, and used the same database, our fellow managers in Manufacturing graciously acknowledged that some of their excellent results were aided by Engineering's improvement efforts. Such acknowledgment can go a long way toward clearing a path for next year's budget request!

VI. WHAT DID IT COST?

We instituted our “metrics” system very much on the cheap, by striving for rapid implementation and maximum use of existing data, so that funds could be dedicated to the more important task of implementing improvement initiatives. From the starting gun (when the VP of Engineering directed that he wanted a set of performance metrics) to publication of the first monthly report (which included several months' worth of historical data) spanned a little over two months, and required no more than 300 man-hours of total effort. The task of gathering data, manipulating them, and preparing monthly reports was handled by one clerk on a part-time basis. Since much of the data was already available in our existing computer systems, during the first year, we invested effort in programming routines which would automate some of the data analysis and report formatting, and in improving the definition of some of the metrics (another few hundred man-hours). After that, the clerical effort was reduced by about half, requiring no more than 700 man-hours per year, to support an engineering organization of over 300 people.

VII. SUMMARY AND THOUGHTS ON WIDER IMPLICATIONS

This paper has described how we successfully applied objective, quantitative process metrics to the nonclerical, creative aspects of a hardware design activity. Doing so was an important contributor to our year-to-year improvement in design quality, responsiveness to customer needs, reduced error rates, and improved productivity.

From this experience, a few observations which have wider applicability may be inferred.

First, no matter how daunting the task of developing quantitative performance metrics may be, it is probably worth doing. Other people (customers, executives) are certainly judging the performance of your organization. It makes sense to replicate their judgment internally, and to take actions to improve your performance in the areas which are important to these external judges.

Second, your improvement efforts should be focused on improving the process, not directly focused on improving the results. Process changes are the lever which will yield sustained, repeatable improvements in results. The task of management is to alter the process so that it achieves improved results in a repeatable way. The purpose of process metrics is to highlight problem areas, and to measure whether process changes have truly improved the results. It seems paradoxical, but spending too much management energy on the results is counterproductive because, if the design process remains unchanged, improved results on “Project A” will be impossible to replicate on “Project B.”

Third, be absolutely certain that all affected people and organizations agree on the definition of “better” for each of your metrics. This may require some painful clarification of organizational goals and values. It may occasionally require that certain metrics be modified, or dropped altogether, to maintain the necessary agreement with organizational goals.

Fourth, improvement will not happen overnight. Each increment of improvement is likely to be small, but the continuous compounding of all those little increments can add up to enormous improvements in the course of a few years.

Fifth, never forget that these metrics are designed to measure organizational performance and the capability of the organization's processes, not to judge individuals. Even the best people may be terribly hampered by a poorly designed or ill-conceived process. Assuming that you have selected competent people, the “continuous improvement” goal is to continually improve the process, so that these competent people can achieve ever higher results.


Robert K. Buchheim received the Bachelor of Science degree in physics from Arizona State University, Tempe.

His work experience includes a variety of engineering, management, and program leadership assignments in the aerospace industry.

