The Nesma year - Ben Linders (The Nesma year.book, Friday, November 5, 2004)

MEASURE AND CONTROL SOFTWARE TESTING BY QUANTIFICATION

• Number of defects in time;
This appears to be a useful metric for software quality. It will be explained in the next sections.

• Number of defects found/not found;
This 'detection rate' is in fact the essential metric for testing. The main question here is how to derive and/or estimate the proper values for these two variables. This will also be elaborated further.

■ Metric application for planning, control and stop decisions

We will now discuss the application of the metrics described in the preceding paragraph for planning, control and stop-decision purposes.

Estimation and Planning

At the start of a test project an important issue is the required testing effort. Several estimation techniques are available. However, in general it remains unclear how many actual test cases have to be executed to enable valid conclusions about the software quality. The traditional, rather technical test depth measures are not very useful in everyday practice, with its time pressure and priority issues. A practical estimation method can be implemented by using function points together with some straightforward depth and quality indicators.

The regular use of function points for estimation purposes is:

number of function points * productivity factor (hours per function point)

The productivity factor then applies to the activity to be performed, e.g. technical design and programming.

One could also use this for estimating the testing effort. For testing, however, it remains unclear what will be done in these hours. To get a more solid result, some indication of test depth has to be included. This can be accomplished by means of the metric described earlier: the number of test cases per function point.

This leads to an elaboration of the formula, to:

number of function points * number of test cases per function point * hours per test case
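The elaborated formula can be sketched directly. The function name and the example figures below are illustrative, not taken from the article:

```python
def estimate_test_effort(function_points, test_cases_per_fp, hours_per_test_case):
    """Test effort (hours) = function points * test cases per
    function point * hours per test case."""
    return function_points * test_cases_per_fp * hours_per_test_case

# e.g. 400 function points, a depth of 1.0 test case per function
# point and 2.0 hours per test case:
effort = estimate_test_effort(400, 1.0, 2.0)  # 800.0 hours
```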


THE NESMA ANNIVERSARY YEARBOOK

As said before, the number of function points can be calculated with anything from a quick-and-easy to a detailed approach, depending on the available time and information. The number of hours per test case is a simple, easily understandable measure that can be established and maintained very well in a testing environment.

The number of test cases per function point causes more practical problems at first sight: how can we determine the required number of test cases per function point in the planning stage?

Within DataCase we have collected data from a set of test projects. For these projects (currently about 15) the test depth as defined here is compared to the test results.

At first an elementary classification for a set of projects was executed:

• Which of these projects were good, sufficient tests? In other words: after test execution the application could be transferred to the regular use and maintenance phases without major problems.

• Which of these projects had less quality than necessary? In other words: the application could not be transferred to regular use and maintenance without further measures. The application had to go into a special extra phase of (e.g.) tight production control, limited use, shadow production, continued testing, etc.

• Which projects had a quality somewhere in between those extreme cases, classified as Moderate (low) or Fair (reasonable)?

This resulted in a first, very global impression of test depth and results, shown in the following table:

Number of test cases per function point    Test quality
0.25                                       Minimum
0.50                                       Moderate
1                                          Fair
1.5                                        Good

As a second study, a more detailed quantitative analysis was performed. For a number of projects a 'detection rate' could be calculated from the available data. This detection rate was calculated as the ratio between the number of defects found in the project's own test (for which the test depth was also known) and the total
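The table can be read as a simple lookup from planned test depth to an indicative quality class. A sketch (thresholds from the table; the function name and the fall-to-lower-class rule for intermediate values are my interpretation):

```python
def indicative_test_quality(test_cases_per_fp):
    """Map test depth (test cases per function point) to the
    indicative quality classes from the table."""
    if test_cases_per_fp >= 1.5:
        return "Good"
    if test_cases_per_fp >= 1.0:
        return "Fair"
    if test_cases_per_fp >= 0.5:
        return "Moderate"
    if test_cases_per_fp >= 0.25:
        return "Minimum"
    return "Below minimum"
```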



number of defects known. The total number consists of the defects found in the own test and the defects that emerged during a certain period after this test. This period could be an acceptance test period by the user organisation, a period of production, etc. The numbers are 'normalized' (whenever necessary by extrapolation) to a comparable period of four months.

Figure 5 shows the detection rate for a number of projects as a function of the test depth, expressed in the number of test cases per function point.

Of course these dots are not in a straight line, but a relationship can be seen. The relationship can be investigated in different ways. The diagram shows, for instance, a linear, an exponential and an s-curve relation that can be determined from these points. The average of these shows a pattern that can be expected by intuition: adding more test cases results in a higher detection rate, but the effect gradually decreases.
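The diminishing-returns pattern can be illustrated with a simple saturating model. The exponential form and the parameter k below are assumptions for illustration, not the article's fitted curve:

```python
import math

def detection_rate(depth, k=1.5):
    # assumed exponential model: the rate approaches 100% as depth grows
    return 1.0 - math.exp(-k * depth)

# the marginal gain of extra test cases shrinks:
gain_first_half = detection_rate(0.5) - detection_rate(0.0)   # ~0.53
gain_later_half = detection_rate(1.5) - detection_rate(1.0)   # ~0.12
```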

With this table and diagram information it is possible to estimate the percentage of defects one will probably find (as well as the percentage probably to be missed) by applying a given number of test cases per function point.

This enables the choice of planned test depth at the start of a test project. Of course there will be substantial uncertainty, because this choice is based upon statistical and still limited data. Besides that, the detection rate does not give any clue about the severity of the defects found or missed.
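Choosing a planned test depth for a target detection rate amounts to inverting such a curve. A sketch, again assuming an illustrative exponential model rate = 1 - exp(-k * depth) rather than the article's fitted data:

```python
import math

def required_depth(target_rate, k=1.5):
    """Test cases per function point needed to reach target_rate,
    under the assumed model rate = 1 - exp(-k * depth)."""
    if not 0 <= target_rate < 1:
        raise ValueError("target rate must be in [0, 1)")
    return -math.log(1.0 - target_rate) / k
```

Under these assumptions, required_depth(0.8) gives about 1.07 test cases per function point; the uncertainty noted in the text applies in full.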

Figure 5: The detection rate for a number of projects as a function of the test depth, expressed in the number of test cases per function point.



Control

The quantitative data and metrics that are used in the planning stage can also be used very well during project control. Test projects by definition have a certain degree of uncertainty, due to the influence of the unknown software quality on the test itself. Tight project control by means of proper, quantifiable indicators is necessary. The variables that can be used for control are:

• Number of function points
Whenever required by the situation, one can limit the number of function points by putting some application parts out of scope. This can be done, for instance, for parts that appear to have very low quality after some initial testing. These parts can then be shifted back to further development. Another strategy is to shift application parts that are not immediately necessary for production to later, additional test projects.
These parts (when not too small) can be expressed in function points, and the impact on the test planning can then be calculated directly. The parts cannot be too small because the function point measure is not suitable for small application bits like individual user functions.

• Number of test cases
The number of test cases and their distribution over the application parts can be changed during the test when forced by the circumstances. Testing of weak parts with a lot of defects can be increased by adding test cases; testing of parts already proven firm and stable can be limited.

• Treating the defects
This is a typical control factor, a little beyond the scope of this article, but also important. By limiting the defect-solution activity, the project control for testing can be improved. There are two extreme strategies, with a lot of possibilities in between:
• Repair all defects. This leads to high application quality, but gives a very unpredictable end to the test project.
• Repair no defects at all. The application will be transferred 'as is', or will not be transferred at all when defects are too numerous or severe. For defect repair a separate new project including a test will be defined, e.g. a next release.
The optimum strategy often lies in between these extremes. A proper decision on defect repair, however, is an important instrument for control.



Stop decision

Finally, the moment of finishing the test work has to be established, especially with conclusions about the remaining errors that can be expected. Here also quantification is necessary. It is important to find and use the proper measurements. There are different possibilities, which can be combined. We chose two practical approaches:

• Determine the number of test cases executed: is this sufficient for the intended detection rate? The assumption here is the correctness of the initial estimation of depth and detection rate in the planning stage.

• Determine the actual defects in time. The defect pattern shows whether or not the intended detection rate has been reached. This is a type of metric that already exists in several technical forms.

The cumulative defect pattern in time appears to be a suitable instrument for the second approach. The time axis can be the days of actual testing activity. The cumulative number of defects is the total number of defects found up to the actual day. It is of course possible to distinguish defects, for instance according to severity, but for tracking the test process and making the proper stop decision this is not relevant.
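Building the cumulative defect pattern from raw defect registrations is straightforward. A minimal sketch (the data layout, a list of per-defect test-day numbers, is an assumption):

```python
from collections import Counter

def cumulative_defects(defect_days, total_test_days):
    """defect_days: the test-day number on which each defect was found.
    Returns the cumulative number of defects per test day."""
    found_per_day = Counter(defect_days)
    cumulative, running = [], 0
    for day in range(1, total_test_days + 1):
        running += found_per_day[day]
        cumulative.append(running)
    return cumulative

# two defects on day 1, one on day 2, one on day 4:
cumulative_defects([1, 1, 2, 4], 4)  # [2, 3, 3, 4]
```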

The defect pattern shows a curve that in general can be approximated by a so-called s-curve. The formula of these types of curves is:

S = a * X / (X + exp(b - c * X))

S is the cumulative number of defects and X is the sequential test day number (or, in other words, the number of test days up to this moment).

The formula has three parameters by which the curve can be 'tuned' to the real defect data. Parameter a determines the total level, parameter b the length of the initial flat part of the s-curve, and parameter c the slope of the mid-part of the s-shape. With some basic statistical alignment, by means of correlation and quadratic-deviation analysis, one can determine the optimum parameter setting.
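A minimal sketch of the curve and its tuning; the text's statistical alignment is approximated here by a crude quadratic-deviation grid search over candidate parameters (the grids are assumptions):

```python
import math

def s_curve(x, a, b, c):
    # S = a*X / (X + exp(b - c*X))
    return a * x / (x + math.exp(b - c * x))

def fit_s_curve(days, cumulative, a_grid, b_grid, c_grid):
    """Pick (a, b, c) minimizing the quadratic deviation from the data."""
    best, best_err = None, float("inf")
    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                err = sum((s_curve(x, a, b, c) - y) ** 2
                          for x, y in zip(days, cumulative))
                if err < best_err:
                    best, best_err = (a, b, c), err
    return best
```

Evaluating s_curve with the fitted parameters beyond the last test day indicates the expected final defect level.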

Figure 6 shows the actual and calculated defect percentages as a function of the test days for some test projects.

During the test execution, estimations can be made by determining the s-curve. The s-curve is used for extrapolation of the actual data, to see the final level of the number of defects and the time when this level will be reached. The theoretical curve is placed upon the real data and tuned by means of the three parameters.



There are some known alternatives to this method. One is plotting the number of defects found per day. This should show a so-called Rayleigh curve. The s-curve can be seen as the integrated curve, with the advantage of less statistical noise and therefore a better view.

Figure 7 shows the calculated s-curves for a number of projects. These are the theoretical curves that have been fitted as closely as possible to the real defect data. The diagram shows that the curves can vary rather heavily. In general this can be explained by the differences between the test projects involved. A test that mainly consists of limited regression testing clearly gives a picture that is different from a first-time test on a newly built and not yet stabilized software application.

When applying these types of metrics, some issues have to be kept in mind:

Figure 6: The actual and calculated defect percentages as a function of the test days for some test projects.

Figure 7: The calculated s-curves for a number of projects.



• The underlying assumption is a stable, constant "test load". When the test effort is substantially decreased from a certain day on, this will lead to a decrease in defects found. Filtering and adjusting for minor changes in test effort is not very useful, but major changes in the test have to be accounted for. Examples are complete stops due to major problems in the software or test environment.

• Registration of the day each defect originated, but also of test case execution, is necessary to keep track of the defect curve and the actual test depth. The general time unit is a test day.

• Basic, simple and pragmatic statistics will do. Too much filtering, categorizing, etc. leads to incomparable detail. This also applies to categorizing test cases or defects: it limits the available data and thus gives less usable statistical results. Ultimately every project, test case or defect could be considered unique, but this point of view does not provide any statistical result.

Finally

Applying metrics and techniques for planning, controlling and stopping tests at the right moment requires very little additional effort for registration, analysis and calculation. The additional costs are minor compared to the total costs of a test project. The application does not require specific tools, education or support. In general the tools available in a test environment will be sufficient.

From the point of view of project costs there is hardly a reason for not implementing these instruments, while the benefits are very clear.

About the author

Henry Peters ([email protected]) is a manager and consultant on software engineering. He is a board member of NESMA and general manager of DataCase, an IT organisation specialised in software testing. He has over 25 years of IT experience, mainly in the area of quality management and testing, in many organisations, projects and software applications, with a focus on the measurement of test activities and results. He developed an experience-based test method, with special focus on estimation, planning, control and quantitative evaluation of test projects.



4 MEASURING DEFECTS TO CONTROL PRODUCT QUALITY

Wouldn't it be nice if you could have more insight into the quality of a product while it is developed, and not afterwards? Would you like to be able to estimate how many defects are inserted in the product in a certain phase, and how effective a (test) phase is in capturing these defects? The simple but very effective model described in this paper makes it possible! The model is used at Ericsson to develop software for telecommunications products. It supports controlling projects by putting quality next to planning and budget, evaluating risks, and taking decisions regarding release and maintenance.

This paper will first highlight why there was a need for such a model, and why existing measurements didn't fulfill this need. Then the model itself and its deployment in the projects are described. Conclusions that were drawn from the model, using feedback sessions, are described, explaining how the projects have benefited from it. At the end we look briefly into the future, regarding both the model and the needs of the organization for measurements of product quality.

Why a Model?

Within Ericsson there has always been a focus on the quality of developed products, next to planning and budget. Initially, measurements like fault density were used. But fault density has major drawbacks: one is that you can only measure it after a phase is completed, and another is that it does not give any insight into the causes if a product has a quality risk.

For instance, a high fault density can mean that there is a quality risk, that the product was more thoroughly tested than other products, or both. The same applies to a low fault density: the reason could be that insufficient testing was done and that defects remain undetected in the product (a product quality risk), or that the product has better quality and thus fewer defects were found, or both.

BEN LINDERS



Studies outside of Ericsson have also revealed the limited value of fault density; see for instance [1]. Other studies showed defect measurements that successful organizations used [2]. So there was a need for new measurements that would give more insight. The GQM metric approach was used to define the measurements [3].

Goals:

1 Control verification activities (optimize defect detection).

2 Control development activities (minimize defect injection).

3 Predict release quality of the product.

4 Improve the quality of the development and test processes.

There is a need for measurements usable to steer quality: measurements to plan quality at the start of the project and track it during project phases, enabling corrective and preventive actions and reducing quality risks in a project. An additional project need is to estimate the number of latent defects in a product at release. The purpose is twofold.

In the first place, it is usable to decide if the product can be delivered to customers, or released, knowing the quality. Secondly, it helps to plan the support and maintenance capacity needed to resolve the defects that are anticipated to be reported by customers.

Finally, it should be possible to have quality data that is analyzed together with the applied processes and the way a project is organized. This analysis provides insight into process and organizational bottlenecks, and therefore enables cost-efficient improvements.

Questions:

1 What will be the quality of the released product?

a Per requirement?

b As perceived by customers?

2 How good are inspections?

a How effective is the preparation?

b How effective is this review meeting?

3 How good are the test phases?

a How many test cases are needed?

b How effective is a test phase?



4 What is the quality of the requirement definition?

5 What is the quality of the high-level and detailed design?

6 What is the initial quality of the code (before inspections/test)?

7 Which phase/activity has the biggest influence on quality?

This list of questions is not exhaustive, but these are the first ones that come to mind when you want to measure and control quality. Certain questions can trigger additional questions; for instance, when it appears that a certain test phase is ineffective in finding defects, additional questions are needed to investigate the activities and their effectiveness.

Metrics:

1 Number of undetected defects in the released product.

2 Number of defects found per requirement/feature.

3 Number of latent defects in a product before an inspection or a test phase (available).

4 Number of defects expected to find in an inspection.

5 Actual number of defects found in an inspection (detected).

6 Number of defects expected to be found in a test phase.

7 Actual number of defects found in a test phase (detected).

8 Size of the document/code.

9 Detection rate: percentage of defects detected (detected/available).

The metrics listed above can be collected in most projects, since the data is usually available in inspection records and defect tracking tools. But to analyze the metrics, a measurement model is needed. This is because the metrics are related: only by looking at a combination of several metrics can conclusions be drawn that help answer the questions and reach the goals of the measurements.

To get more insight into the quality of the product during development, the software development processes must be measured from two views: introduction and detection of defects. To develop the model, descriptions from Watts Humphrey [4] and Stephen H. Kan [5] have been used.



Introduction happens during the specification, design and coding phases; defects are introduced either into documents or into the actual product. Measuring introduction gives an indication of development phase quality. Detection of defects is done via inspections and tests during all phases of the project. Measuring detection gives insight into the effectiveness of verification phases. By using these two measurements, a project can determine if there is a quality risk, and what its origin is: too many defects in the product and/or insufficient testing to capture the defects.

Development Phase Quality

The quality of a product depends on the number of defects that are inserted during the development phases. Mistakes are made in every phase, from specification to implementation. Defects that are detected and removed increase the likely quality of the end product. However, those defects also reflect the inefficiency of the development process.

Defects that are not detected in the phase in which they are inserted lead to more, and more expensive, rework, and can decrease product quality if they remain in the product after release and surface when customers use the product. The aim is to remove defects as early as possible and to have as few defects as possible in the product when it is released, thereby delivering quality products.

At the start of a phase the number of inserted defects is estimated. During execution of the phase this estimate is adjusted based on the number of defects actually detected. Since it is sometimes difficult to estimate the number of defects directly, an alternative method is to estimate the size of the produced documents or code, and use size multiplied by the defect density to estimate the number of defects. In all cases, it

is better to do a rough estimate, and adjust it during the project, than to do no estimate at all.

Figure 8: Defect flow.

Historical data from earlier projects is very useful when estimating defect introduction. Industry data can also be used when no historical data is available.

Defect Detection Effectiveness

The aim of verification is to detect the inserted defects, preferably in the earliest phase in which they can economically be detected. The effectiveness is expressed with a detection rate, that is:

Detection Rate = Number of defects detected / Number of defects present in the product

An organization has a detection rate for a certain phase, which is estimated within certain statistical limits. Initially, when no historical data of an organization is available, industry figures can be used. An alternative to estimating the detection rate directly is to estimate the absolute number of defects that are likely to be found in the current phase. Based on that number and the number of defects present, the detection rate is derived.
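The rate follows directly from the two counts, whichever of them is estimated first. A sketch (variable names and the example figures are mine):

```python
def detection_rate(defects_detected, defects_present):
    """Detection Rate = detected / present in the product."""
    return defects_detected / defects_present

# deriving the rate from an absolute estimate: if 40 defects are
# believed present and 22 are expected to be found in this phase,
rate = detection_rate(22, 40)  # 0.55
```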

During the execution of a phase, the detection rate is adjusted based on the actual number of defects detected. If, for instance, a detection rate of 50% is expected, and 46% of the expected defects are detected halfway through the phase, then either the number of defects that was inserted is higher than initially expected, or the actual detection rate is higher because fewer defects were inserted than predicted. If the first is true, then there is a quality risk in the product, which needs to be investigated. It also gives a signal usable to improve the process phase where the defects were introduced; in a next increment of the project, defect introduction can thus be reduced. If fewer defects were inserted, and thus the resulting detection rate is higher, then further investigation is warranted to understand how this was accomplished. That would make it possible to learn and improve verification in other projects, based on the positive experiences from this one.
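The mid-phase comparison can be sketched as follows. The function name, the uniform-detection-over-the-phase assumption, and the reading of the example as "46 of an estimated 100 inserted defects found halfway" are mine:

```python
def midphase_signal(expected_rate, estimated_inserted, found_so_far, progress):
    """Compare actual defect finds with the plan partway through a phase.
    Assumes detection is spread roughly evenly over the phase."""
    expected_by_now = expected_rate * estimated_inserted * progress
    if found_so_far > expected_by_now:
        # either more defects were inserted than estimated (quality risk),
        # or the phase detects better than planned -- investigate which
        return "ahead of plan"
    if found_so_far < expected_by_now:
        return "behind plan"
    return "on plan"

# 50% detection rate expected, 46 of an estimated 100 inserted defects
# already found halfway through the phase (25 would be on plan):
midphase_signal(0.50, 100, 46, 0.5)  # "ahead of plan"
```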

The combination of measurements on defect insertion and defect detection gives a more detailed view of the quality of the product and the effectiveness of the development processes. This provides a project with better means to track and control quality.

Pilot Project

The defect introduction and detection model as described in the earlier paragraphs was implemented in a pilot project for a network management product. Since the project has two distinct requirements, the project was divided into two increments



with separate teams, overlapping in time. The model copes with these two increments separately, since different processes are used. As the first part of the project was combined for the two increments, and final testing was also combined, the basic introduction/detection model becomes the one shown in Figure 9.

A tool for the model was developed using a spreadsheet: the Project Defect Model. The purpose of the Project Defect Model is to estimate defects inserted and detected per phase, and to track defects from inspections and tests against the estimates. The model supports analysis of the data with both calculated values and graphs comparing actuals to estimates, in terms of current status and trends.

In the pilot, 420 defects were collected, which were analyzed and classified by introduction phase, requirement, and the phase where they could have been detected. The resulting data gives an estimate of 21 latent defects in the released product, expected to be found in the first six months of operation. This estimate was used as one of the criteria in the release decision; it was decided that this would be an acceptable quality level, provided that sufficient maintenance support would be available to solve the 21 defects when detected by customers. The six-month operation period ended in June 2003, and 20 defects were actually found, a difference of 1 defect from the estimate at release.

[Figure 9 shows the project phases: Requirements, Architecture, Design-Impl-Unit Test (Incr 1 and Incr 2), FT/ST (Incr 1 and Incr 2), Network Test, 1st customer Test, and Maintenance.]

Activity          Available defects   Detected #   Detected %   Test-only %
Inspection               420              197          47%
Test in project          223              194          87%
MDA/FOA                   24                9          38%
Project totals           420              400          95%          91%
Maintenance               20               20         100%
Average/Total            420

Figure 9: Project phases.

Figure 10: Defect Figures from Pilot Project.


Based on the estimated number of latent defects, the project has a defect detec-

tion rate of 95%, i.e. 400 of the 420 defects made in the project have been detected

before the product is released. If we exclude the phases before test (that used inspec-

tions for verification) from the measurement, the detection rate is lower: only 91% of

the defects left after inspections were detected in the test phases. This shows that

inspection has contributed towards the quality of the released product. However, the

average detection rate from inspections is 47%. According to industry data, inspections can detect between 60% and 80% of the available defects, so there is room for improvement.
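As a sanity check, both detection rates quoted above follow directly from the pilot figures in Figure 10. The short calculation below is our own illustration, not part of the original tooling:

```python
# Recompute the pilot's detection rates from the Figure 10 numbers.
total_defects = 420          # defects estimated to be made in the project
found_by_inspection = 197    # detected by inspections
found_in_test = 194 + 9      # detected in project test phases plus MDA/FOA

detected_pre_release = found_by_inspection + found_in_test      # 400

# Overall rate: all pre-release detections over all defects made.
overall_rate = detected_pre_release / total_defects             # ~0.952 -> 95%

# Test-only rate: of the defects left after inspections, the share
# the test phases managed to catch.
left_after_inspection = total_defects - found_by_inspection     # 223
test_only_rate = found_in_test / left_after_inspection          # ~0.910 -> 91%

print(round(100 * overall_rate), round(100 * test_only_rate))   # 95 91
```

The gap between 95% and 91% is precisely the contribution of the inspections: they remove defects before testing even starts.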

Even more important than the data are the benefits the project received by using

the model. During the project, data feedback and analysis sessions are held, in which corrective actions based on the data are implemented. Major conclusions/actions included:

• A slip-through of requirement defects is detected early, in the architecture phase. Investigation showed, however, that good high-level design, combined with effective architecture inspections, revealed many requirement clarifications. The action defined is to monitor requirement defect detection in the design phase for quality risks; it turned out that both the number and impact of the detected defects are limited. No more requirement defects are detected in later phases; the final conclusion is that the requirements, after initial clarification, reached a high quality.

• Data from defects inserted/detected, test requirement coverage, and Orthogonal Defect Classification shows that inspection effectiveness depends on several factors: good and focused inspectors, qualified moderators, sufficient preparation, and thorough inspection planning. The detailed conclusions on inspections are

used to further improve reviews and inspections in future projects. Though it was known that inspections are an effective way of detecting defects (as was to be expected from many earlier studies), our data confirmed this and has led to more focus from management and buy-in for further improving inspections.

Figure 11: Defects Inserted per Phase (bar chart of the number of defects inserted in Req, Arch, Design and Impl; vertical scale 0 to 250).

• Data also made clear that test phases discover defects that could have been found in earlier phases: Function Test finds many defects that inspections should have caught, while System Test discovers many that Function Test should have caught. Based on Trigger analysis with Orthogonal Defect Classification, we determine our test progress. Together with a requirement-based test matrix, the project is able to predict where requirements are sufficiently verified, and where there are risks of latent defects. Test focus and scope were changed during the project, based on data from the model, and the remaining quality risks are on requirements that are seldom used.

The Project Defect Model is beneficial to the project. It helps in estimating, planning, and tracking quality during the project. This quality data is used in the project, together with time and cost data, to make better decisions. The model also identifies quality risks at an early stage, helping the project to take corrective actions and decisions on product release and maintenance capacity planning. The teams using the model gain significant quantitative insight into their design and test processes, which they will use in future projects. Feedback sessions in which the team itself analyzes the defect data prove to be very powerful.

More detailed information about the model and the results from the pilot can be

found in [6].

Data from finished and ongoing projects

Based on the results in the pilot project, the management team has decided that all future projects will use the Project Defect Model to estimate and track their quality. Until now (March 2004), 7 projects from R&D in Rijen have used or are using the model. The model is also used for retrospective analysis of some older projects, to obtain data from which planning constants for future projects can be derived. The data collected from the projects is shown below:


The detection rate expresses which percentage of the defects is found within the project, before delivery to the customer. Size is a relative indication of how big the project is (man-hours and lead time). The table shows that, on average, 90% of the defects are detected in the project, while the customers detect 10% of the defects. If we exclude project D from the figures (a project that was expected to find fewer defects, since it integrated earlier developed and tested components), the average becomes 92%. Industry figures for “best in class” vary between 90% and 95% (see [7]).

We conclude that bigger projects have a better detection rate. This has to do with more extensive test phases, more insight into the interdependencies and risks between projects, and the use of incremental development. This leads to fewer defects being made, and to defects being found earlier.

These figures show what kinds of defects are made in projects. We see that most of the defects are coding defects, while architecture and design are the second and third biggest categories. Given that much effort is put into exploring, defining and verifying the architecture, including formal inspections on architecture documents, this is an expected and desired result.

Project detection rates (inspections & test)

        Proj A   Proj B   Proj C   Proj D   Proj E   Proj F   Proj G   Proj H
Rate     95%      95%      90%      59%      94%      86%      89%      90%
Size      1        4        1        1        5        3        1

Phase injection rates

        Requirements   Architecture   Design   Code
Rate         6%             21%         15%     58%

Phase detection rates

        Requirements   Architecture   Design   Code   Function Test   System Test   Network Test   Total
Rate         30%            67%         66%     40%        48%            48%            27%         47%


The phase detection rate expresses which percentage of the defects available in a product at the start of a phase is captured in that phase. We see that the architecture and design inspections have a high detection rate, while requirements and code inspections detect fewer defects. For requirements, this has to do with the early stage the project is in at the time of inspection; we know from other data that the requirement defects that slip through are mostly detected during architecture and design inspections.

Function test and system test both find 48% of the defects; in absolute figures, however, function test finds more defects and removes them before system test starts. The average figure over all detection phases is 47%; given industry figures this is acceptable, but there is room for improvement.

The figures above from all projects help us to define planning constants that are

used for future projects, thereby improving our estimation accuracy.
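To illustrate how such planning constants combine, the sketch below (our own illustration, not the article's tool) cascades the published phase injection and detection rates through the phases: defects not caught in one phase remain available to the next. With these rates, roughly 92% of all defects end up detected before release, consistent with the project detection rates reported above:

```python
# Cascade the published phase injection and detection rates: defects
# injected in a phase join the pool of available defects; each phase
# then removes its detection-rate share of whatever is available.
injection = {"Req": 6, "Arch": 21, "Design": 15, "Code": 58}   # % of all defects
detection = {"Req": 0.30, "Arch": 0.67, "Design": 0.66, "Code": 0.40,
             "FT": 0.48, "ST": 0.48, "NT": 0.27}               # per-phase rates

remaining = 0.0
for phase in ["Req", "Arch", "Design", "Code", "FT", "ST", "NT"]:
    remaining += injection.get(phase, 0)       # new defects made in this phase
    remaining -= remaining * detection[phase]  # share caught in this phase

print(f"latent after Network Test: {remaining:.1f}% of all defects")
print(f"project detection rate:    {100 - remaining:.0f}%")
```

Run with estimated project-specific rates instead of the historical averages, the same loop gives an up-front latent-defect estimate of the kind used in the pilot's release decision.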

Feedback sessions

Organizations are increasingly relying on measurements, but many struggle to

implement them. There are the usual technical problems associated with collecting

data, storing it efficiently, and creating usable reports. However, the biggest chal-

lenges are often related to using the data to actually make decisions and steer the

activities of the organization. Miscommunication, incomplete analysis, and corrective

actions that seem to come from nowhere create resistance to the whole idea of meas-

urements.

Feedback is based on the assumption that you should give the raw data to the

people who did the work, and that they should perform the analysis. Why? Because

they know the story behind the data. For instance, defect detection rates are dis-

cussed with the test team leader: he knows how much and what kind of testing they have been doing, and what they expect to find.

With the Project Defect Model we do regular feedback sessions. On average once

a week we look at the data, compare it to our estimates, and check where there are

differences, trends, or signals in the data that something is going wrong. This is compared with the development status from the design and test teams; based on that, we draw conclusions and take the necessary actions.

We see that development teams learn a lot from the feedback they receive on defects. They see which kinds of defects they discover too late, and use the data to improve the early test and inspection processes. For instance, for defects that slip through many times, checks are added to the design and inspection checklists. Using


these checks, the defects are found earlier, at the latest in inspection. One project found, by analyzing the data, that lacking product knowledge and test skills caused too few defects to be found in early test. The test team now takes time to study product behavior documentation, and uses coaches to support newcomers on the team. As a result, the detection rate of the test phase has increased significantly, thus reducing the number of defects available in the product when it is delivered to system test. During the improvements, the team uses the data from the model to check whether they are actually making progress.

The teams also gain insight into where they make the most defects; using the data, they are able to determine root causes and improve quality right at the start. In one project, a team found that a specific part of the product was more complex and difficult to verify. They put extra time into investigating possible solutions, preventing many small but disturbing defects during design and coding, and reducing the risk that defects would slip through to late testing.

An effective feedback process doesn’t come easily. In the beginning it will need a

lot of attention and perseverance, but once the benefits of the effort become clear,

which is usually early in the process, people will start to give their support. More

information about feedback in measurement systems, including the key success fac-

tors & pitfalls, can be found in [8].

Conclusions

The Project Defect Model is beneficial for our projects. It helps in estimating, planning, and tracking quality during the projects. This quality data is used in the projects, together with time and cost data, to improve decision-making. The model identifies quality risks at an early stage, helping the projects take corrective actions and decisions on product release and maintenance capacity planning. The design and test teams using the model also gain significant quantitative insight into their design and test processes, which is used in future projects.

Future extensions of the model will include effort spent in design and test phases. This will enable trade-offs between appraisal cost (pre-release defect detection), rework cost (pre-release defect removal) and operational cost (post-release defect removal). By extending the model with cost data, it will evolve into a true implementation of a Cost of Quality model.


About the Author

Ben Linders ([email protected]) is Specialist Operational Development & Quality at Ericsson Telecommunicatie B.V., the Netherlands. He has a Bachelor's in Computer Science, and did a Master's study on Organizational Change. He has worked in process and organizational improvement for more than 15 years, implementing high-maturity practices to improve organizational performance and bring business benefits.

Since 2000 he has led the Defect Prevention program. He coaches the implementation of Root Cause Analysis, Reviews and Inspections, and has defined and applied a Project Defect Model, used for quantitative management of the quality of products and the effectiveness of verification.

He also introduces and supports the measurement system, manages continuous improvement, and is an expert and coach in several Organizational Performance & Quality areas.

He is a member of several (national and international) SPI and quality related net-

works, has written several papers, and regularly gives presentations.

References

1 Norman Fenton and Martin Neil, "A Critique of Software Defect Prediction Models", IEEE Transactions on Software Engineering, September/October 1999.

2 Capers Jones, "Software Measurement Programs and Industry Leadership", CrossTalk, February 2001, http://www.stsc.hill.af.mil/CrossTalk/2001/feb/jones.asp.

3 R. van Solingen and E.W. Berghout, The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development, McGraw-Hill.

4 Watts Humphrey, Managing the Software Process, Chapter 16, "Managing Software Quality".

5 Stephen H. Kan, Metrics and Models in Software Quality Engineering, Chapter 6, "Defect Removal Effectiveness".

6 Ben Linders, "Controlling Product Quality During Development with a Defect Model", in: Proceedings of the 8th European SEPG Conference, London, 2003.

7 A Business Case for Software Process Improvement Revised, DACS State-of-the-Art Report: Measuring Return on Investment from Software Engineering and Management.

8 Ben Linders, "Make What's Counted Count: How One Company Found a Way to Use Measurements to Steer the Actions of Their Organization", Better Software magazine, March 2004.


SOFTWARE DEVELOPMENT


5 COSTS OF APPLICATION MAINTENANCE MADE MORE TRANSPARENT THROUGH THE USE OF METRICS

“Quality costs money, and money determines quality”

IT includes not only the development of new software products, but also the maintenance of applications. Even more importantly, software development accounts for only a limited part of the costs generated by IT. Nevertheless, software development is, for many organisations, the source of all that is unpleasant, and thus constitutes a critical element of quality management.

Many organisations make a strict distinction between, on the one hand, those

individuals who provide system development and, on the other, those in charge of

application maintenance. As a consequence of the ‘fixed price / fixed time’ approach,

the ‘developers’ are not always able, or willing, to devote additional effort to make

easily and cheaply maintained systems available to the ‘maintainers.’ The maintainers,

in turn, are not always able to make clear what this additional effort will entail.

The aim of the present article is to provide insight into the most important factors

influencing the costs of application maintenance. There is a wealth of literature con-

cerning related topics, such as the quality of software products, quality management

and quality systems. However, hardly any study has been done into the relationship

between system development and application maintenance from the standpoint of

costs.

Customers not only expect an acceptable level of prompt service provision for an acceptable price, but are also thinking about what their situation will be in three to five years. Reducing the costs of application maintenance is the aspiration of many customers. Not only technological knowledge, but also knowledge of IT processes, is a prerequisite for this.

RICHARD SWEER


Software Quality Model

It is undeniable that the use of software quality models has yielded substantial

improvements within organisations. Most software quality models are nothing more

or less than the distillation of practical experience of proven value. Indeed, this is

where the greatest strength of such models lies: it is no longer necessary to prove

that a given approach works – ‘all’ that is needed is to ensure that the approach is

applied with the necessary expertise. Another great advantage of these models is that

they attempt to provide clues toward the attainment of an ideal software engineering

process.

In general, much time and effort is devoted to devising method and technology

oriented procedures for software improvement. However, when it comes to software

development, people continue to be the prime means of production. The importance

of the human factor is usually underestimated. If no attention is paid to, e.g., the cul-

ture, knowledge and expertise of the developers involved, or the motivation/style of

management, it is difficult to study the effect of a change in the process on the qual-

ity of the product. After all: the customer purchases a product, not a process.

Info Support has developed a Software Quality Model (SQM) based on both accumulated practical experience and analysis of the literature. With the help of this SQM model, an IT organisation can obtain insight into the most important factors influencing quality and the resulting costs of application maintenance. The model's strength lies in its integral approach to a number of (less well-known) factors. These factors are divided into five different but closely related and continually interacting areas, viz.: staff member, team, process, product and software factory (Figure 12).

In addition to these factors, the context in which software development takes

place also has an important effect on software quality. Here, we distinguish between two categories of factors, namely, organisation-dependent and environment-dependent factors.


SQM model: context

The context in which software development takes place is divided into two cate-

gories: organisation-dependent and environment-dependent factors. The first cate-

gory involves the influence of, e.g., policy, culture and structure, and, as well, the

degree to which the experiencing of quality is anchored in an organisation. The sec-

ond category, which involves context, includes the influence exerted by, e.g., the cus-

tomer, user, legislator or branch of trade in question.

SQM model: staff member

Regardless of how much is invested in improving processes, methods and tech-

niques, these investments will yield little if the staff who must work within these

improved processes and with these newest of methods, techniques and tools are not

capable of adequately functioning within this context or are not willing to. A lack of

commitment, motivation, feedback, or poor/neglected training programmes and

insufficiently mature behaviour are all factors which can lead to non-realisation of

one’s quality objectives.

[Figure 12 depicts the Software Quality Model as five interacting areas with their contributing factors: staff member (commitment, motivation, feedback, behaviour, knowledge and expertise), team (leadership style, combination, means, maturity, stages of development), process (validation and verification, organisational management, methods and techniques, adequate management, application management, maintenance/development), product (specifications, needs and requirements, characteristics, interested parties, quality tree) and software factory (infrastructure, generators, re-use, design patterns).]

Figure 12: Software Quality Model.


SQM model: team

The ideal objective of software development is having the ‘best’ team produce the

‘best’ software. Many of the problems of software development can be blamed on

poorly functioning teams, which is why the importance of teamwork and creating

and putting together teams is generally viewed as one of the prime factors influenc-

ing quality. It is important to bear in mind in this connection that the life cycle of a

team is necessarily coupled to the various phases in the life cycle of the project in

question. Team behaviour (or team maturity) can raise the quality of the software

process to a higher level. Such behaviour is characterised by qualities such as proactiveness and a focus on solutions.

Management often tries to motivate staff to change by creating for them a posi-

tive, attractive and challenging work context and making available to them the

means they require in order to deliver good software. An optimal working environment, with modern tools and equipment, contributes greatly to motivating software developers to produce good work; economising on such things will not have a positive effect on the software produced. This becomes all the more clear once one considers that such expenditures often constitute only a small fraction of total project costs.

SQM model: process

Paying attention only to development, as is traditionally customary in project

management, is clearly no longer sufficient. Project management must extend over

the entire software life cycle, i.e., over development, marketing, use and maintenance.

Thus, project management becomes product management. ‘Multiproduct manage-

ment,’ the management of an entire range of products, goes one step further. Here,

re-use plays a crucial role. In the transition from development management to prod-

uct management, two organisational aspects play crucial roles. The first is directed

toward distinguishing a discrete function within the organisation, responsible for the

evolution of a software product through its entire life cycle, i.e.: the product man-

ager. The second aspect has to do with the organisation of this function.

Typically, when examining the requirements to be placed on information provi-

sion, while taking account of the needs, wishes and requirements of users and the

requirements stemming from a given business process or set of business operations,

the organisation forms one’s sole starting point. However, application maintenance,

as it relates to information provision, should not, in fact, be carried out solely from

the standpoint of the existing organisation, but also from the standpoint of marketing and the effects and possibilities of new technologies upon and for the organisation. This two-sided orientation is a feature of adequate application maintenance,

and is essential to modern information provision.

It can be stated that, in this connection, validation and verification are indispen-

sable parts of supplying software with reasonable to good ‘demonstrable’ quality. For

this reason, it is necessary always to combine different types of validation and verifi-

cation activities, e.g., inspections, walk-throughs and tests.

SQM model: software factory

As a result of the immense increase in the scope and complexity of software,

development teams continue to increase in size, such that communication within

teams and coordination between them constantly increase in difficulty. In order to be

able to deal with the phenomenon of greater and more complex software, several

development organisations have sought solutions, one of which is the software fac-

tory. An organisation which manages development, maintenance and re-use, is

referred to as a software factory: a noble aspiration, whose key word is re-use. Important basic elements within the software factory are infrastructure, model-supporting generators, and templates for design and analysis.

SQM model: product

Quality management should begin at the earliest possible stage, i.e., from the first

phase of development (planning). This helps prevent delays at later stages. It is impor-

tant to formulate good specifications, i.e., ones that are unambiguous, complete, ver-

ifiable, consistent, alterable, traceable and usable.

Various writers have, in recent decades, striven toward an optimal description of

software (product) quality. The similarities and differences between these different

versions have prompted an intensified call for one universal and operational set of

terms for software quality. Only standardisation would seem to be able to meet this

need.

Within the ISO 9126 standard, a distinction is made between internal software

quality, external software quality and software quality-in-use. In particular with the

recently developed term quality-in-use, an attempt is being made to direct one’s

product more clearly toward the customer’s daily operations.

If one distinguishes between quality needs, quality requirements and product

characteristics, it is possible to distinguish two different possible paths. The first path

entails the execution of activities directed toward specifying quality requirements (=


the specification path). The second path entails the execution of activities directed

toward realising quality requirements (= the realisation path).

The measuring of performance indicators for application maintenance

Several factors play a role in assessing software quality and the application main-

tenance costs resulting from it. The 25 most important factors have been identified in

the SQM model. Experience teaches that many of these factors are not or hardly

quantifiable, such that, unavoidably, some performance indicators in the model are

quite subjective in nature. This makes it very important to define such indicators as

clearly and unambiguously as possible in order to be able to approach the ideal situa-

tion (objectivity and explicitness) as closely as possible. An aid in determining the

right set of performance indicators is the use of the quality score matrix in which a

given performance indicator can be scored by means of the quality criteria selected.
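As a simple illustration of such a quality score matrix, each candidate performance indicator can be scored against the selected quality criteria and ranked; the indicators, criteria and weights below are invented for the example, not taken from the SQM model:

```python
# Hypothetical quality score matrix: candidate performance indicators are
# scored (1-5) against selected quality criteria; higher totals suggest
# indicators closer to the ideal of objectivity and explicitness.
criteria = ["objective", "unambiguous", "cheap to collect"]

scores = {
    "defects per function point": {"objective": 5, "unambiguous": 4, "cheap to collect": 3},
    "perceived maintainability":  {"objective": 2, "unambiguous": 2, "cheap to collect": 5},
    "mean time to repair":        {"objective": 4, "unambiguous": 4, "cheap to collect": 4},
}

def total(indicator):
    """Sum an indicator's scores over all selected criteria."""
    return sum(scores[indicator][c] for c in criteria)

# Rank indicators from most to least suitable.
ranked = sorted(scores, key=total, reverse=True)
for ind in ranked:
    print(f"{ind}: {total(ind)}")
```

In practice the criteria would be weighted and chosen per organisation; the point of the matrix is only to make the (often subjective) selection of indicators explicit and comparable.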

NESMA has developed a number of matrices which can be used to measure application maintenance performance indicators. These matrices provide insight into: the size of a system, its manner of documentation, how the system was designed, how the system has been built in terms of technology, the system's 'ancestry', and what its recent maintenance has entailed.

Aside from NESMA, ISO has also developed a model whose aim is to provide points of reference in determining and measuring software quality. An updated version of the ISO 9126 model is now available. It distinguishes between the internal and external quality (ISO 9126-3 and ISO 9126-2) of software, with accompanying metrics. Another term used in the new ISO 9126 model is quality-in-use (ISO 9126-4).

The metrics of NESMA and ISO 9126 differ on a number of points. ISO's metrics are highly process-oriented, whereas those of NESMA are strongly product-directed. Both institutes indicate that the user can augment or alter the metrics as he sees fit. Both the NESMA and ISO models include, in addition to metrics for application maintenance, metrics for other purposes as well. Both the NESMA metrics and those of ISO 9126 can be used during the development process. There are, however, clear differences in the applicability of the different metrics. E.g., the ISO 9126-2 metrics are better suited for use on software programming, while the ISO 9126-3 metrics are better suited for use on products created during the requirements, analysis and design phases.

The Nesma year.book Page 68 Friday, November 5, 2004 4:34 PM


Aside from the applicability of these metrics, it is also important to look at how

work-intensive their use is. Collecting, registering and maintaining the data per met-

ric can vary sharply in terms of time and means used.

In daily practice, the use of metrics is still quite limited. For the development and

use of metrics, the connection to practical experience is indispensable. As a result of

time limitations and limited IT budgets, there are actually few possibilities for apply-

ing software metrics in daily operations. Both customers and IT suppliers fail to

devote sufficient attention to performance measurement when assessing the costs of

application maintenance. However, this applies not only to maintenance costs, but, as

well, to IT performance measurements in general.

Experience teaches that one’s choice of software supplier is often determined

solely by the price which the customer pays for the first delivery. This can have

unforeseen consequences, both with regard to the quality of the application in ques-

tion and the resulting (long-term) maintenance costs.

About the author

As manager of the Business Unit Finance, Richard Sweer ([email protected]) leads about forty to fifty ICT professionals who perform (turn-key) projects for customers of Info Support. At this moment the Business Unit Finance is active in about fifteen projects for more than ten customers in the Netherlands.

Besides his work as a business team manager Richard is responsible for the devel-

opment and implementation of a Professional Development Center for Info Support.

Within this unit a development-factory is being implemented for three platforms:

J2EE, .NET and Open Source. This development-factory supports customers of all business units with the development of software applications.


6 SOFTWARE RELEASING:

DO THE NUMBERS MATTER?

The software industry is growing exponentially. Due to its enormous impact on

today’s society, the software industry has become critical. However, the ad-hoc and

immature way of working is leading to an increasing number of reported serious

problems. Software products are released without knowing their exact behaviour and

without knowing the expected operational cost.

In this article a control system for (software) product development is defined and used as a reference to conduct a series of case studies in industry. These studies

revealed serious deficiencies with respect to evaluating innovation proposals, defining

project scope, designing products, and implementing them. As a consequence, release

decisions are characterized by a lack of quantitative information (e.g. financial conse-

quences cannot be predicted).

An economic model as often used in the semiconductor industry is introduced in

this paper. This model enhances a long-term perspective on software development

and enables an organisation to build, implement, monitor and evaluate a business

case. As such, it also enables software release decision-making from a financial point

of view. Effective application of the model requires having an understanding of the

expected product lifetime, the revenue, the development cost and operational cost,

and the resulting profit. An important factor influencing these parameters is the type

of relationship between the software manufacturer and the buyer/user of the soft-

ware. This relationship determines the product development strategy used as input to

business cases. It is concluded that both software manufacturers and the users of

software products could benefit greatly from applying the model given the relative

immaturity of the software industry in comparison to other engineering disciplines.

On the other hand, there will remain practical limitations with respect to a purely

financial approach. Information will always be imperfect and incomplete and deci-

sion-makers are inevitably confronted with cognitive limitations that cannot be

ignored.

HANS SASSENBURG


Introduction and Outline

The amount and variety of software applications is growing exponentially. As a

consequence, the impact of software applications on society is increasing rapidly. As

software spreads from computers to the engines of automobiles to robots in factories

to X-ray machines in hospitals, defects are no longer a problem to be managed. They

have to be predicted and excised. Otherwise, unanticipated uses will lead to unin-

tended consequences.

However, ongoing research continues to reveal that the development process of

most IT-suppliers can be characterized as immature. This immaturity in the software

engineering discipline surfaces when new software products are developed or existing

products are maintained.

In 1998 roughly 28% of all software projects in the United States were stopped

prematurely [1], due to either changed economic conditions or, in most cases, project

failures. This does not mean that projects that do release a software product are nec-

essarily successful.

These projects do release a product to their customers, but may be confronted

with considerable budget overruns, schedule delays and poor quality. Many software

manufacturers have a short-term horizon that focuses on controlling the cost and schedule of the current product release, often neglecting other aspects of the software lifecycle [2]. This potentially leads to sub-optimisation instead of a strategic long-term approach and, as a consequence, to the premature release of software products.

This leaves the manufacturer exposed to the following risks:

• Unpredictable product behaviour. It is very difficult to guarantee to the user(s)

what the exact functionality of the product will be. This may lead to user dissatis-

faction and to unforeseen, even potentially dangerous situations which may put

people's lives at risk.

• Unknown operational cost. The post-release or operational cost of the software

products may be unexpectedly high. For example the exact status of the software

product and its documentation may be unknown leading to high corrective main-

tenance costs. In addition, adaptive and perfective maintenance activities may be

severely hampered.


Over the last decades, an increasing number of serious problems that illustrate

these risks have been reported. Leveson has published a collection of well-researched

problems along with brief descriptions of industry-specific approaches to safety [3].

Safety problems are described in the fields of medical devices, aerospace, the chemi-

cal industry and nuclear power. Other descriptions of the consequences of software

failures can be found in [4] and [5].

In this article, the results of seven case studies are presented, revealing

current industry practices with respect to software development. The focus of these

studies was to determine to what extent software release decisions are based on a

financial analysis.

These case studies were conducted using the control system described in section 1

as a reference. The results of the case studies are described in section 2. Based on the

case study results, an economic model is presented in section 3 whose purpose is to

enable an organisation to build, implement, monitor and evaluate a business case

throughout the lifecycle of a software product. The model will provide financial fig-

ures which can be used as input to the release decision process. In section 4, some

implementation factors are described focussing on the characteristics of a software

manufacturer and the resulting product development strategy. In section 5 conclu-

sions are drawn, answering the question whether software releasing should be based

on financial figures or not. Some limitations with respect to a purely quantitative

approach are addressed as well.

Control System

De Leeuw described a general approach to the effective control of a target system

[6]. He represents a control situation by a controlling organ, a target system and an

environment. The controlling organ exerts goal-directed influence on a target system,

while the environment affects both the controlling organ and the target system. Hol-

lander adapted the control system to the controlling power of business development

teams [7] in the following way:

• The environment is based on Porter’s five forces model, being the company and its

competitors, the customers or buyers of the product, the suppliers, the substitutes

for the product and new potential entrants from other markets [8].

• The controlling system consists of the project management function.

• The target system is the business development project.


It will now be described how this control system could be practically implemented

for software product development, using a business case as the underlying rationale

and monitoring instrument for a project.

Business Strategy

Senior Management at a strategic level defines a business strategy, which

describes the long-term expectations of business and technology developments. Busi-

ness developments are addressed in terms of changes in the marketplace and organi-

sation. Technology developments are addressed in terms of adoption of new

technologies and new application of existing technologies.

The business strategy is the input for Product Management (or the department

responsible for information planning) at a tactical level to derive business cases. It is

assumed here, that in general the definition of a business case and its further imple-

mentation at operational level afterwards are executed in five sequential steps. They

will be further described. For each step examples will be given of possible methods

which can be used to support a quantitative approach.

Step 1: Investment proposal

A business case is used to define the rationale for a project that is initiated to

develop a product (either a new product or a newer version of an existing product)

[9]. It is in fact a proposal to start investing in a project definition. It describes the

expected revenue for the vendor organisation taking into account the expected

development or pre-release cost (to develop the product) and operational or post-

release cost (to produce, deploy and maintain the product).

The business case defines in high-level terms the external product needs and con-

straints as input to a project at operational level. The external product needs describe

the required functionality seen from the perspective of the customer(s). Distinction

can be made into functional needs and non-functional needs. The functional needs

describe the functionality that must be offered by the product.

The non-functional needs define product properties and put the constraints upon

the functional needs (e.g. reliability, safety and accuracy). These are often referred to

as quality attributes. In the non-functional needs, the compliance to external stand-

ards is an additional requirement. Constraints determine the boundaries of a project

and may, for example, be limitations with respect to budget and lead-time of the

project and cost price of the final product. Calculation methods such as the tradi-


tional discounted cash flow [10] and the newer real option approach [11] may be used

here to build the case.
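Such a discounted cash flow calculation can be sketched in a few lines. The cash flows, discount rate and horizon below are hypothetical, purely to illustrate the mechanics of building the case:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows.

    cashflows[0] is the cash flow at T=0 (typically the negative
    initial investment); cashflows[t] is the net flow after year t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical business case: invest 1000 now, expect net
# inflows of 400, 500 and 600 over the next three years.
case = [-1000, 400, 500, 600]
print(round(npv(0.10, case), 2))  # → 227.65 at a 10% discount rate
```

A positive net present value supports starting the investment; the real option approach [11] would additionally value the flexibility to defer, expand or abandon the project.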

Step 2: Project definition

Internal stakeholders define internal product needs and constraints. The internal

product needs are also expressed in functional and non-functional needs. Functional

needs describe for instance the documentation that is needed to produce, deploy and

maintain the resulting product. Non-functional product needs describe for instance

the compliance to internal standards.

The combination of the external product needs and constraints and the internal

product needs and constraints are the inputs to the project. They are further analysed

and detailed to the level where one or more project alternatives can be defined, that

meet the formulated needs and constraints. The project alternative that most satisfies

them will be selected.

At this stage, the release criteria can be defined. They are the particular criteria of

a project and its resulting products that are taken into account to make the decision

whether or not to release the product. Project estimation methods like COCOMO II

[12] and SLIM Estimate [13] may be used here to make the optimal trade-off between functional needs, non-functional needs, lead-time and cost. Different project alternatives may be evaluated with multiple stakeholders using the Win-Win Negotiation

Model [14]. The Project Definition step may lead to changes in the business case as

better insights are gained.

Step 3: Product design

After the project has been defined and accepted, the project starts. Further analy-

sis of all needs and constraints will lead to the formulation of different product

design alternatives. The design alternative that most satisfies the business case will be

selected. Supporting methods here are for instance ATAM [15], SAAM [16] and CBAM [17].

After the product design has been selected the release criteria are deployed to

lower-level process and product attributes. Suppose that lead-time and budget are

constraints and thus release criteria. They will put constraints on each component as

defined in the product design.

If for example, reliability and maintainability are part of the non-functional

needs, they will have to be deployed in some way to the defined components in the

product design. It may not always be possible to conduct a simple mathematical


breakdown of a non-functional need. In that case, implementation rules may be defined that will implicitly contribute to meeting the non-functional need at product

level. Parnas for instance describes how a high level of extension or maintainability

can be obtained through design rules [18]. This step may again lead to additional

changes in the business case.

During further implementation of the product the project must stay aligned with

the business case. The status of the project is obtained by evaluating the defined and

deployed release criteria. Currently measured values and predictions of final values

form the pre-release data. A steering committee may be in place to discuss the pre-

release data, combined with any new insights. For instance, the business case may

have been changed due to market developments or the service department may come

up with additional product needs.

Step 4: Product release

The continuous alignment of the status of the project with the status of the

external product needs and constraints and the internal product needs and con-

straints will finally lead to a situation where the release decision can be made.

Release alternatives to be considered are:

• Release now.

• Release later after the successful implementation of some corrective actions.

• Do not release the product and stop the project.

To support this decision, so-called software defect prediction or reliability models have been developed. The usefulness of these models can be questioned: most models assume a way of working that often does not reflect reality. As a result, several

models can produce dramatically different results for the same data set [19, 20].

Sometimes, no parametric prediction model can produce reasonably accurate

results. Because no two models provide exactly the same answers, care must be taken

to select the most appropriate model for a project, and not to give too much weight to the value of the results [21].
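As an illustration of why such models must be handled with care, consider the simple exponential (Goel-Okumoto) reliability growth model, in which the expected number of defects found after t time units of testing is m(t) = a · (1 − e^(−b·t)). The parameter values below are hypothetical; in practice a (total defect content) and b (detection rate) are estimated from the project's own defect data, and different estimates can coexist for the same data set:

```python
import math

def goel_okumoto(a, b, t):
    """Expected cumulative defects found after t time units,
    given total defect content a and detection rate b."""
    return a * (1.0 - math.exp(-b * t))

# Two hypothetical parameter sets fitted to the same defect data.
models = {"optimistic": (120, 0.15), "pessimistic": (160, 0.10)}

t = 12  # weeks of testing so far
for name, (a, b) in models.items():
    found = goel_okumoto(a, b, t)
    print(f"{name}: ~{found:.0f} found, ~{a - found:.0f} remaining")
```

Here the two fits disagree by more than a factor of two on the number of remaining defects (about 20 versus about 48), which is exactly the kind of divergence reported in [19, 20].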

Step 5: Investment and project evaluation

After the product has been released, assuming that the project is not stopped,

data is needed to determine the result of the business case. A distinction is made

between end-user data (for instance the revenues of the product and the buyer/user


satisfaction level) and post-release data (for instance the cost of corrective mainte-

nance). Evaluation of these data might result in changes to the business strategy and

future business cases, as well as removal of organisational process deficiencies (root-

cause analysis).

In Figure 13, the resulting overview is presented.

Case study results

The control system describing a business-case driven approach to software prod-

uct development was used to conduct case studies in seven large organisations devel-

oping software products both for internal use and for external markets. These case

studies revealed the following findings [22]:

• Alignment between business case and project. In all cases but one a business case

was used as the rationale for a project, stating both the expected cost and bene-

fits.2 During the project however, in most cases the Project Steering Committee

and the Software Development Team failed to inform each other explicitly about

the current status of the business case (new insights) and the current status of

the project (progress so far and estimates to completion).

• Comparison and evaluation of alternatives. This happened in most cases implic-

itly. However at crucial decision moments (defining the project scope, selecting

the product design) no evidence was found why one alternative was selected

above the other, using criteria derived from the business case. Available methods

2. In one case it was found impossible to allot benefits to a specific product release, as the clients of the product pay an annual fee for a larger set of products or services.

Figure 13: Control system for software product development. (The diagram shows Senior Management, the Project Steering Committee and the Software Development Team exchanging the business strategy, business case results, product needs and constraints, pre-release data, post-release data and end-user data, with internal and external releases flowing through Production, Deployment and Maintenance to the end-user(s).)


and techniques for comparison and evaluation (like software estimation methods

and architecture evaluation methods) were in most cases known but not used.

• Estimation of operational cost. In all cases, reliability and maintainability were

considered to be important non-functional product needs as they determine to a

great extent the operational cost after product release. High reliability reduces

corrective maintenance effort and high maintainability reduces both corrective

maintenance effort and adaptive/perfective maintenance effort. In nearly all

cases, these non-functional needs were not deployed to lower level components

as identified in the selected product design or software architecture. It was only

during testing that much effort was spent on trying to meet a high level of relia-

bility. No cases have been found where the level of maintainability was evaluated.

In all cases reliability and maintainability could not be expressed in financial

terms.

• Evaluation of business case and project. After the final product release, there

were no specific actions undertaken to evaluate the result of the business case as

a whole and the results of the implemented decisions at crucial moments during

development (defining the project scope, selecting the product design, releasing

the product). In only one situation was a plan available to evaluate the business case at predefined moments after product release, by the chairman of the Project Steering Committee, who was assigned the responsibility for the investments made. In none of the cases was a defined process in place to analyse the defects

found after product release and to use the results to remove process deficiencies

in product development.

In Figure 14, the results are illustrated in the control system.


An Economic Model

The challenge facing organizations today is to focus continuously on profit maximization. The software industry is no different in this respect. This

can only be accomplished by managing software development and releasing software

products from an economic perspective, using a business case approach throughout

the subsequent phases of innovation proposal, project definition, product design,

product release and post release evaluation.

Figure 14: Results of the case studies. (The four deficiencies found (alignment between business case and project; comparison and evaluation of alternatives; estimation of operational costs; evaluation of business case and project) are mapped onto the control system of Figure 13.)

Figure 15: Examples of profit models [23]. (Four panels of profit against time: baseline model; delayed entry, limited competition; rushed entry, poor reliability; delayed entry, heavy competition.)


If the exact relationships between the four main development parameters (func-

tional needs, non-functional needs, schedule and development cost) and additional

parameters (revenue, operational cost) were known, it would enable a software manufacturer to continuously apply trade-off rules among these parameters. For different

combinations, the resulting profit functions could be calculated and compared.

In Figure 15, some examples of profit models are given for a software manufacturer faced with a release decision. When, for instance, the entry of a new product is delayed in a market with heavy competition, the probability of the manufacturer capturing the advantages of early adopters will decrease, with a negative impact on revenue and thus on profit.

In this section a generic economic model is used as the basis for a business case.

The objective of this model is to illustrate how business decisions during the software

lifecycle affect profit. For this purpose a simple product lifecycle which is frequently

used in the semiconductor industry was employed [24]. The product lifecycle as illustrated in Figure 16 is approximated by a triangle. It is assumed that market ramp-up and market decline have the same rate and duration.3

3. Extended models have been described as well. Others describe for instance a model distinguishing a period maturation in which no market growth occurs [25, 26]. At this stage, the simplified triangle model will only be used as an example to illustrate the effect of lost revenue due to a delayed market entry. Factors influencing the shapes of the product lifecycle curves, revenue curve and cost curves are not addressed in this paper.

Figure 16: Approximation of product lifecycle to a triangle [24]. (The curve plots revenue against time across the phases Introduction, Growth, Maturity, Saturation and Decline.)


This model will be used to make a comparison between delivering a software

product on-time and delivering it with a delay. It is a typical issue a software manu-

facturer is confronted with during the development of a product prior to the release

decision.

On-time Entry

In this section, a generic economic model is presented as the basis for a business

case whose objective is to determine profits.

Three models are defined, using the following assumptions: 4

• Revenue model (Figure 17):

• Product lifetime is equal to 2W with peak P at Tr + W.

• Time of market entry defines a triangle, representing market penetration.

• Triangle area equals total revenue.

• Development cost model (Figure 18):

• Product development time is equal to Tr with peak Cd at Tr/2.

• Start of project at T=0 defines a triangle, representing development cost dis-

tribution.

• Triangle area equals total development cost.

• Operational cost model (Figure 18):

• Peak Co at Tr + W.

• Time of market entry defines a triangle, representing operational cost distri-

bution.

• Triangle area equals total operational cost.

4. These assumptions are used to define simplified functions for the case that a product is developed for a new market. The objective here is to demonstrate how a profit level can be calculated. In reality, there will be many drivers that determine the exact shape of the functions. The development cost function will for instance heavily depend on cost drivers like the level of reuse, experience and the maturity of the organisation. Further, a software manufacturer might define a strategy with multiple releases, where the first one is used to capture the market and further ones are aimed at adding functionality and improving quality.


This leads to the following equations:

Revenue = ½ · 2W · P (1)

Development cost = ½ · Tr · Cd (2)

Operational cost = ½ · 2W · Co (3)

Combining the Revenue model, Development cost model and Operational cost

model, the resulting profit can be calculated:

Profit = ½ · 2W · P − ½ · Tr · Cd − ½ · 2W · Co (4)
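Equations (1) to (4) translate directly into a small calculation. The parameter values below are hypothetical, chosen only to show how a profit level follows from the triangle areas:

```python
def profit_on_time(W, P, Tr, Cd, Co):
    """Profit for on-time market entry, per equations (1)-(4)."""
    revenue = 0.5 * 2 * W * P             # eq. (1): triangle area
    development_cost = 0.5 * Tr * Cd      # eq. (2)
    operational_cost = 0.5 * 2 * W * Co   # eq. (3)
    return revenue - development_cost - operational_cost  # eq. (4)

# Hypothetical figures: half market window W = 24 months with
# peak revenue P = 10, development time Tr = 12 months with
# peak development cost Cd = 3, and peak operational cost Co = 2.
print(profit_on_time(W=24, P=10, Tr=12, Cd=3, Co=2))  # → 174.0
```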


Figure 17: Revenue model (on-time entry).

Figure 18: Development cost and Operational cost model (on-time entry).


The resulting breakeven point and profit level are given in Figure 19.

Delayed Entry

In this section the model for the product lifecycle will be used to calculate the

profit in case of delayed delivery of a product.

Three models are defined, using the following assumptions:

1 Revenue model (Figure 20):

• Product lifetime is equal to 2W; the product is released at Tr + D, with peak P' at Tr + W, where P' = ((W − D)/W) · P.

• Time of market entry defines a triangle, representing market penetration.

• Triangle area equals total revenue.

2 Development cost model (Figure 21):

• Product development time is equal to Tr + D with peak Cd' at (Tr + D)/2, where Cd' = ((Tr + D)/Tr) · Cd.

• Start of project at T=0 defines a triangle, representing development cost dis-

tribution.

• Triangle area equals total development cost.


Figure 19: Profit model (on-time entry).


3 Operational cost model (Figure 21):

• Peak Co' at Tr + W, where Co' = ((W − D)/W) · Co.

• Time of market entry defines a triangle, representing operational cost distri-

bution.

• Triangle area equals total operational cost.

This leads to the following equations:

Revenue = ½ · (W − D + W) · ((W − D)/W) · P (5)

Development cost = ½ · (Tr + D) · ((Tr + D)/Tr) · Cd (6)

Operational cost = ½ · (W − D + W) · ((W − D)/W) · Co (7)

[Figure 20: Revenue model (delayed market entry). Revenue plotted against time: market rise from Tr + D to peak revenue P' at Tr + W, market fall to Tr + 2W.]

[Figure 21: Development cost and Operational cost model (delayed market entry). Cost plotted against time: development cost peaks at Cd' and operational cost peaks at Co' (Tr + W).]



Combining the Revenue model, Development cost model and Operational cost model, the resulting profit can be calculated:

Profit = ½ · (2W − D) · ((W − D)/W) · P
       − ½ · (Tr + D) · ((Tr + D)/Tr) · Cd
       − ½ · (2W − D) · ((W − D)/W) · Co (8)
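Stated as code, the triangle-area model of equations (5) to (8) can be sketched as follows. This is a minimal illustration in Python; the function names and the example parameter values are assumptions for demonstration, not part of the original model.

```python
# Triangle-area profit model, following equations (5)-(8).
# W: half the product lifetime, D: delay, Tr: planned release time,
# P: peak revenue, Cd: peak development cost, Co: peak operational cost.

def revenue(W, D, P):
    # Eq. (5): triangle with base (2W - D) and peak P' = ((W - D)/W) * P.
    return 0.5 * (2 * W - D) * ((W - D) / W) * P

def development_cost(Tr, D, Cd):
    # Eq. (6): triangle with base (Tr + D) and peak Cd' = ((Tr + D)/Tr) * Cd.
    return 0.5 * (Tr + D) * ((Tr + D) / Tr) * Cd

def operational_cost(W, D, Co):
    # Eq. (7): triangle with base (2W - D) and peak Co' = ((W - D)/W) * Co.
    return 0.5 * (2 * W - D) * ((W - D) / W) * Co

def profit(W, D, Tr, P, Cd, Co):
    # Eq. (8): revenue minus development cost minus operational cost.
    return (revenue(W, D, P)
            - development_cost(Tr, D, Cd)
            - operational_cost(W, D, Co))

# Illustrative values: on-time entry (D = 0).
print(profit(W=50, D=0, Tr=50, P=8, Cd=5, Co=5))  # 400 - 125 - 250 = 25.0
```

With a delay the same call shows the double squeeze the model captures: revenue and operational cost shrink while development cost grows, so profit erodes faster than revenue does.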

The resulting breakeven point and profit level are given in Figure 22: Profit model (delayed market entry).

In Figure 23 an example is presented of the relative consequences a delayed market entry can have on the profit level (see footnote 5).

5. Implementation of this model could in practice be supported with a sensitivity analysis to see which parameters affect the outcome more strongly than others. Further, confidence limits could be considered given the uncertainties on all quantities.
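A one-at-a-time sensitivity check of the kind the footnote suggests can be sketched in a few lines of Python. The parameter values below are illustrative assumptions only; the profit function restates equation (8).

```python
# Perturb each model parameter by +10% and report the change in profit
# (one-at-a-time sensitivity analysis). Illustrative values only.

def profit(W, D, Tr, P, Cd, Co):
    # Eq. (8): revenue minus development and operational cost.
    return (0.5 * (2 * W - D) * ((W - D) / W) * P
            - 0.5 * (Tr + D) * ((Tr + D) / Tr) * Cd
            - 0.5 * (2 * W - D) * ((W - D) / W) * Co)

base = dict(W=50, D=5, Tr=50, P=8, Cd=5, Co=5)
p0 = profit(**base)

for name, value in base.items():
    perturbed = dict(base, **{name: value * 1.1})
    delta = profit(**perturbed) - p0
    print(f"{name} +10%: profit changes by {delta:+.2f}")
```

Note that perturbing W moves both the revenue and the operational cost triangles, so its effect combines two terms; a fuller analysis would also vary parameters downward and attach confidence limits, as the footnote indicates.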

[Figure 22: Profit model (delayed market entry). Revenue, total cost and profit plotted against time (Tr, Tr + D, Tr + W, Tr + 2W), with the break-even point marked.]



Figure 23: Example of consequences for delayed market entry.

The objective of the presented model is not only to support release decision-making. It is also meant to support the business case definition process (innovation phase) and the comparison and evaluation of alternatives (project definition phase, product design phase). Further, a proper evaluation of a business case after having released a software product should include a financial evaluation of the profit made (revenue versus cost). In other words, the objective of applying the model is to support the elimination of the negative consequences of the case study findings.

Application of the Model

Software manufacturers must have an understanding of the expected product lifetime, the revenue curve, the development cost and operational cost curves, and the resulting profit. This information, part of the business case, is not only gathered during the investment proposal phase. It must be continuously updated with respect to new market insights and the actual project status.

This information will be specific to the external and internal characteristics of an organization. An important factor influencing the shapes of the revenue curve and cost curves (and thus the profit function) is the relationship between the software manufacturer and the buyer/user of the software.

In Figure 24: Characteristics of software manufacturer types [26], some typical characteristics are given for different software manufacturer types.

Figure 23 data (Tr = 50 weeks, W = 50 weeks; P = 8, Cd = 5, Co = 5):

                    D = 0   D = 2.5 wk   D = 5 wk   D = 7.5 wk   D = 10 wk
Revenue               -        -7%         -14%        -21%         -28%
Development Cost      -        10%          21%         32%          44%
Operational Cost      -        -7%         -14%        -21%         -28%
Profit                -       -25%         -50%        -75%        -100%
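The revenue, development cost and operational cost consequences of delay follow directly from equations (5) to (7); the short Python sketch below recomputes them for the parameters used here (Tr = 50, W = 50, P = 8, Cd = 5, Co = 5). The function names are illustrative, and the deltas are percentages relative to on-time entry (D = 0).

```python
# Relative consequences of delay for revenue, development cost and
# operational cost, per equations (5)-(7). Deltas are relative to D = 0.

Tr, W, P, Cd, Co = 50, 50, 8, 5, 5

def rev(D):
    return 0.5 * (2 * W - D) * ((W - D) / W) * P   # eq. (5)

def dev(D):
    return 0.5 * (Tr + D) * ((Tr + D) / Tr) * Cd   # eq. (6)

def op(D):
    return 0.5 * (2 * W - D) * ((W - D) / W) * Co  # eq. (7)

for label, f in (("Revenue", rev), ("Development cost", dev),
                 ("Operational cost", op)):
    deltas = [100 * (f(D) - f(0)) / f(0) for D in (2.5, 5, 7.5, 10)]
    print(label, ["%+.1f%%" % d for d in deltas])
```

The profit consequence combines all three curves, so it depends on the absolute levels of P, Cd and Co rather than on the component percentages alone.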

Software manufacturer types, with typical characteristics:

Custom systems written on contract:
• Software made for one particular buyer
• Budget and schedule fixed
• Penalties for late delivery



Figure 24: Characteristics of software manufacturer types [26].

The characteristics of the relationship between a software manufacturer and its potential buyers/users are input to the determination of a product development strategy. An important aspect here is also the possibly prescribed compliance to standards. In several markets standards have been defined to ensure the safety of products at a cost to the manufacturer, examples being the defence industry, aerospace and medical devices. These standards will have an effect on the way software is produced and released, and thus on the development cost curve. Further, if warranty or liability conditions apply, both the development cost curve (increased need for a higher reliability level) and the operational cost curve (higher penalty for software failures) might be influenced.

Custom systems written in-house:
• Software used to improve efficiency/effectiveness of the internal organisation
• Limited number of end-users
• Annual budget divided amongst different projects
• Possible conflicting interests between IT-department and end-users

Commercial software (business-to-business):
• Software sold to other businesses
• Many different buyers
• Critical to the buyer's business

Mass-market software (business-to-consumer):
• Software sold to individual buyers
• High-volume buyers
• Market windows and buying seasons

Commercial/mass-market firmware (physical items):
• Cost of distributing fixes very high
• Many to high-volume buyers
• Failures can have fatal consequences



Knowing the relationship between a software manufacturer and the potential buyers/users of the software, a product development strategy can be defined, providing the framework to orient a software manufacturer's development projects as well as its development process. As a starting point to develop a product development strategy, the software manufacturer must determine its primary strategic orientation. A software manufacturer must recognize that it cannot be all things to all people and that it must focus on what will distinguish it in the marketplace. Some possible product development strategic orientations are:

• First Mover. This involves an orientation to getting a product to market fastest. This is typical of software manufacturers involved with rapidly changing technology or products with rapidly changing fashion (small market window). Pursuit of this strategy typically leads to tradeoffs in optimising functional product needs, development cost and non-functional product needs.

• Lowest Development Cost. This orientation is focused on minimising development cost or developing products within a constrained budget. It occurs, for instance, when software manufacturers are developing under contract for other parties, or where a company has severely constrained financial resources. It involves tradeoffs between functional product needs, time-to-market and non-functional product needs.

• Unique Functional Product Needs. This orientation focuses on the highest level of product features (including aspects like the latest technology and/or product innovation). It involves a tradeoff between time-to-market, development cost and non-functional product needs.

• Highest Non-Functional Product Needs. This orientation focuses on assuring high levels of product quality (reliability, safety, etc.). It is typical of industries requiring high quality because of the significant costs incurred in fixing post-release defects (e.g. recalls in a mass market), the need for high levels of reliability (e.g. the aerospace industry), or significant safety issues (e.g. medical devices). It corresponds to the orientation of minimising operational cost. It involves a tradeoff between functional product needs, time-to-market and development cost.



How does the selected product development strategy influence the economic model? Theoretically it does not, as it is assumed that each software manufacturer will strive for maximized profit. The chosen product strategy does, however, influence the shapes of the revenue and cost curves and, as a result, the potential profit level. Card suggests that the number of potential buyers and the competition level together determine the kind of strategy that makes the most profit in the long run [27]. See Figure 25: Model of Software Markets [27].

Figure 25: Model of Software Markets [27].

It is assumed here, however, that the strategy to be chosen also depends on the company's capabilities (strengths, weaknesses and core competences), market needs and opportunities, goals, and financial resources. There is no one right strategy for a software manufacturer, but it is considered important that a strategy is chosen as input to the business case definition during the investment proposal phase.

Conclusions

Do the numbers really matter? Yes, there is no doubt that they matter. Software manufacturers will only invest in new software products as a means of profit maximization. This is true both for manufacturers selling their products to an external market and for manufacturers investing in information technology to support their internal processes.

The case studies revealed, however, that although business cases were often used as the rationale for investments, decision-making in subsequent phases is characterized by a lack of quantitative information. This is especially true of software release decisions, which lack a clear expectation of operational cost. It is more the exception than common practice that software release decisions are heavily influenced by financial considerations.

Figure 25 data (strategy by number of buyers and competition level):
• Few buyers, low competition: Unique Functional Product Needs
• Few buyers, high competition: Lowest Development Cost
• Many buyers, low competition: First Mover
• Many buyers, high competition: Highest Non-Functional Product Needs (Lowest Operational Cost)



How can this be explained? In the first place, Etzioni argues that it is impossible to perform the precise analysis necessary to maximize economic objectives, because limitations with respect to the information will normally exist. Information is incomplete and imperfect [28]. Decision-makers will make a trade-off between the amount of information (perfection, completeness) and the cost related to searching for additional information. Beyond a certain point, obtaining additional information would lead to diminishing returns. Secondly, it is very difficult if not impossible for decision-makers to escape the diverse psychological forces that influence their individual behaviour. These forces lead to cognitive limitations. A decision-maker simplifies reality, leaves out information and prefers simple rules of thumb as a consequence of limited cognitive capabilities.

Although these limitations cannot be ignored, they are no excuse to avoid a more financial approach to software development in general and the software release process in particular. Both software manufacturers and the users of software products could benefit greatly from applying sound economic principles. It offers the possibility to select only those projects that offer increasing business benefits, and it might help avoid the release of software products that impose an unacceptably high risk on both the user(s) of the product and its manufacturer.

About the author

Hans Sassenburg ([email protected]) received a Master of Science degree in Electrical Engineering from the Eindhoven University of Technology (The Netherlands) in 1986. He worked as an independent consultant until 1996, when he co-founded a consulting and training firm. From 1996 until 2001 he also worked as a guest lecturer and assistant professor at the Eindhoven University of Technology.

Having sold his company, he moved in 2001 to Switzerland, where he founded a new consulting firm (SE-CURE AG, www.se-cure.ch), offering services in the field of applied business/software metrics. In 2002, in parallel with his consulting activities, he started a PhD at the Faculty of Economics at the University of Groningen (The Netherlands). The objective of this research is to design a decision-making software release model for strategic software applications.

References

1 Chaos Report, Standish Group Report.


