8/10/2019 Software Metrics, A Roadmap.ppt
1/46
Software Metrics: Roadmap
By Norman E. Fenton and Martin Neil
Presentation by Karim Dhambri
Authors (1/2)
- Norman Fenton is Professor of Computing at Queen Mary (University of London) and is also Chief Executive Officer of Agena, a company that specialises in risk management for critical systems. He is head of the RADAR (Risk Assessment and Decision Analysis) Group.
Authors (2/2)
- Martin Neil is a Reader in "Systems Risk" at the Department of Computer Science, Queen Mary, University of London, where he teaches decision and risk analysis and software engineering. Martin is also a joint founder and Chief Technology Officer of Agena Ltd (UK).
Plan
- Introduction
- Brief history of software metrics
- Weaknesses of traditional approaches
- Causal models
- Future work
- Comments on the article
Introduction (1/9)
- The car accidents example
  - Data on car accidents in both the US and the UK reveal that January and February are the months when the fewest fatalities occur.
Introduction (2/9)
- The car accidents example
  - Thus, if you collect a database of fatalities organised by month and use this to build a regression model, your model would predict that it is safest to drive when the weather is coldest and the roads are at their most treacherous.
Introduction (3/9)
- The car accidents example
  - Such a conclusion is perfectly sensible given the data available, but intuitively we know it's wrong.
  - The problem is that you do not have all the relevant data to make a sensible decision about the safest time to drive.
Introduction (4/9)
- The car accidents example
Introduction (5/9)
- So what has this got to do with software metrics? Well, software metrics has been dominated by statistical models, such as regression models, when what is really needed are causal models.
Introduction (6/9)
- Software resource estimation
  - Much work in software metrics has been driven by the need for resource prediction models.
  - Usually this work has involved models of the form

        effort = f(size)
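The criticism of this form of model is easier to see with a concrete instance. The sketch below fits the common power-law variant effort = a * size^b by least squares in log-log space; the project data, the fitted constants, and the `predict_effort` helper are all invented for illustration and do not come from the paper.

```python
import math

# Hypothetical (size in KLOC, effort in person-months) project data --
# invented for illustration, not taken from the paper.
projects = [(10, 24), (25, 62), (50, 135), (100, 300), (200, 680)]

# Fit the power-law form effort = a * size^b by ordinary least squares
# in log-log space: log(effort) = log(a) + b * log(size).
xs = [math.log(size) for size, _ in projects]
ys = [math.log(effort) for _, effort in projects]
n = len(projects)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = math.exp(mean_y - b * mean_x)

def predict_effort(size_kloc):
    """Predict effort from size alone -- exactly the single-factor,
    non-causal style of model the slides criticise."""
    return a * size_kloc ** b

print(f"effort = {a:.2f} * size^{b:.2f}")
print(f"predicted effort for 75 KLOC: {predict_effort(75):.0f} person-months")
```

Such a model can interpolate its historical data well, but it offers no causal levers: nothing in it tells a manager what to change in order to reduce effort.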
Introduction (7/9)
- Problems with effort = f(size)
  - Size cannot cause effort.
  - Such models cannot be used for risk assessment because they lack an explanatory framework.
  - Managers can't decide how to improve things from the model's outputs.
Introduction (8/9)
- Solution: causal modelling
  - Provide an explanatory structure to explain events that can then be quantified.
  - Provide information to support quantitative managerial decision-making during the software lifecycle.
  - Provide support for risk assessment and reduction.
Introduction (9/9)
- Software resource estimation
History of metrics (1/13)
- Definition: Software metrics is a collective term used to describe the very wide range of activities concerned with measurement in software engineering.
History of metrics (3/13)
- Software metrics have been in use since the mid-1960s.
- At that time, Lines of Code was used as a measure of productivity and effort.
History of metrics (4/13)
- Problems using metrics:
  - Theory and practice have been out of step
  - Metrics often misunderstood, misused, and even reviled
  - Industry is not convinced of metrics' benefits
  - Metrics programs are used when things go bad, to satisfy some assessment body (CMM)
History of metrics (5/13)
- The two components of software metrics:
  - The component concerned with defining the actual measures
  - The component concerned with how we collect, manage and use the measures
History of metrics (6/13)
History of metrics (7/13)
- Rationale for using metrics
  - The desire to assess or predict effort/cost of development processes
  - The desire to assess or predict quality of software products
History of metrics (8/13)
- The key in both cases has been the assumption that product size should drive any predictive models.
History of metrics (9/13)
- LOC/programmer-month as a productivity measure
- Regression-based resource prediction by Putnam and Boehm:

        Effort = f(LOC)

- Program quality measurement (usually defects/KLOC)
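The defects/KLOC measure mentioned above is trivial to compute, which is part of its historical appeal. The sketch below shows it on invented module data; the module names and figures are hypothetical.

```python
def defect_density(defects_found, lines_of_code):
    """Classic quality measure: defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical module data: {name: (defects found, lines of code)} --
# names and figures are invented for illustration.
modules = {"parser": (12, 4000), "ui": (3, 2500), "db": (9, 1500)}
for name, (defects, loc) in sorted(modules.items()):
    print(f"{name}: {defect_density(defects, loc):.1f} defects/KLOC")
```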
History of metrics (11/13)
- From the mid-1970s, interest in measures of software complexity and functional size (such as function points)
- The rationale for these metrics is still to assess quality and effort/cost
History of metrics (12/13)
- Study of software metrics has been dominated by defining specific measures and models.
- Much recent work has been concerned with collecting, managing, and using metrics in practice.
History of metrics (13/13)
- Most notable advances
  - Work on the mechanics of implementing metrics programs
    - Grady and Caswell: first company-wide software metrics program
    - Basili, Rombach: GQM
  - The use of metrics in empirical software engineering
    - Benchmarking and evaluating the effectiveness of s.e. methods, tools and technologies (Basili)
Weaknesses of traditional approaches (1/11)
- The approaches to both quality prediction and resource prediction have remained fundamentally unchanged since the early 1980s.
Weaknesses of traditional approaches (3/11)
- Regression-based model for quality prediction:

        f(complexity metric) = defect density

- Problems
  - Incapable of predicting defects accurately
  - No explanation of how defect introduction and detection variables affect defect counts
Weaknesses of traditional approaches (4/11)
- A further empirical study (Fenton) showed:
  - Size metrics (while correlated with gross number of defects) are poor indicators of defects
  - Static complexity metrics are not significantly better as predictors
  - Counts of defects pre-release are a very bad indicator of quality
  - The lunch story
Weaknesses of traditional approaches (5/11)
- [Scatter plot of pre-release vs. post-release faults per module]
Weaknesses of traditional approaches (6/11)
- These results invalidate models:
  - using pre-release faults as a measure of operational quality
  - using complexity metrics to predict which modules will be fault-prone post-release
- Complexity metrics were judged valid if correlated with pre-release fault density
Weaknesses of traditional approaches (7/11)
- Empirical phenomenon observed by Adams (1984):
  - "[...] most operational system failures are caused by a small proportion of the latent faults."
- The fact that fault density (in terms of pre-release faults) was used as a measure of user-perceived software quality led us to wrong conclusions.
Weaknesses of traditional approaches (8/11)
- Explanations of the scatter plot
  - Most of the modules that had a high number of pre-release faults and a low number of post-release faults just happened to be very well tested.
  - A module that is never executed will never reveal latent faults (no matter how many), hence operational usage must be taken into account.
Weaknesses of traditional approaches (9/11)
- Other problems with regression-based models for resource prediction:
  - Lack causal factors to explain variation
  - Based on limited historical data
  - Resource constraints not modeled
  - Black-box models
  - Cannot handle uncertainty
  - Little support for risk assessment and reduction
Weaknesses of traditional approaches (10/11)
- The classic problem: is this system sufficiently reliable to ship?
- Useful information:
  - Measurement data from testing (such as defects found in various testing phases)
  - Empirical data about the process and resources used
  - Subjective information about the process/resources
  - Very specific and important pieces of evidence (proof of correctness)
Weaknesses of traditional approaches (11/11)
- In practice, we only possess fragments of such information.
- The question is how to combine such diverse information and then how to use it to help solve a decision problem that involves risk.
Causal models (1/7)
- We need a model that takes account of the concepts missing from regression-based approaches:
  - Diverse process and product variables
  - Empirical evidence and expert judgement
  - Genuine cause-and-effect relationships
  - Uncertainty
  - Incomplete information
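The causal models the authors advocate are Bayesian belief networks (BBNs). The sketch below hand-rolls a toy four-node network (complexity -> defects introduced -> defects found, with testing quality as a second parent of defects found) and shows how evidence such as "thorough testing, yet few defects found" revises belief about latent defects. The structure and all probabilities are invented for illustration, not taken from the authors' published networks.

```python
from itertools import product

# Toy BBN with boolean nodes; all probabilities are illustrative guesses.
P_complexity = 0.3                 # P(high complexity)
P_testing = 0.5                    # P(thorough testing)
P_in = {True: 0.8, False: 0.2}     # P(many defects introduced | complexity)
P_found = {                        # P(many defects found | defects_in, testing)
    (True, True): 0.9, (True, False): 0.3,
    (False, True): 0.1, (False, False): 0.05,
}

def joint(c, t, d_in, d_found):
    """Joint probability of one full assignment, via the BBN factorisation."""
    p = P_complexity if c else 1 - P_complexity
    p *= P_testing if t else 1 - P_testing
    p *= P_in[c] if d_in else 1 - P_in[c]
    p *= P_found[(d_in, t)] if d_found else 1 - P_found[(d_in, t)]
    return p

def posterior_defects_in(evidence):
    """P(many defects introduced | evidence) by brute-force enumeration."""
    num = den = 0.0
    for c, t, d_in, d_found in product([True, False], repeat=4):
        world = {"complexity": c, "testing": t,
                 "defects_in": d_in, "defects_found": d_found}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(c, t, d_in, d_found)
        den += p
        if d_in:
            num += p
    return num / den

# Few defects found despite thorough testing: belief in many latent
# defects drops well below the prior.
prior = posterior_defects_in({})
posterior = posterior_defects_in({"testing": True, "defects_found": False})
print(f"prior: {prior:.3f}, posterior: {posterior:.3f}")
```

This is the reasoning the slides' scatter-plot discussion calls for: the same observation "few defects found" means something different depending on whether testing was thorough, which a regression on counts alone cannot express.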
Causal models (3/7)
Causal models (5/7)
Causal models (6/7)
Comments on the article
- Positive
  - Application of simulation to software engineering
  - Causal models can constantly be tuned
- Negative
  - Would have liked more details concerning BBNs
  - In practice, how can we determine the probabilities for each node?