Page 1: Metrics

Metrics

Sudipto Ghosh, CS 406, Fall 99

November 30, 1999

Page 2: Metrics

Learning objectives

• Software metrics
• Metrics for various phases
• Why metrics are needed
• How to collect metrics
• How to use metrics

Page 3: Metrics

Questions

• How big is the program?
  • Huge!!
• How close are you to finishing?
  • We are almost there!!
• Can you, as a manager, make any useful decisions from such subjective information?
• You need information such as cost, effort, and the size of the project.

Page 4: Metrics

Metrics

• Quantifiable measures that can be used to measure characteristics of a software system or of the software development process
• Required in all phases
• Required for effective management
• Managers need quantifiable information, not subjective information
  • Subjective information goes against the fundamental goal of engineering

Page 5: Metrics

Kinds of software metrics

• Product metrics
  • quantify characteristics of the product being developed
  • size, reliability
• Process metrics
  • quantify characteristics of the process being used to develop the software
  • efficiency of fault detection

Page 6: Metrics

CMM

• Level 4: Managed level
  • Process measurement performed
  • Quality and productivity goals set
  • Continually measured and corrective actions taken
  • Statistical quality controls in place
• Level 5: Optimizing level
  • Statistical quality and process control in place
  • Positive feedback loop used for improvement in productivity and quality

Page 7: Metrics

Issues [1]

• Cost of collecting metrics
  • Automation is less costly than manual methods
  • A CASE tool may not be free
    • development cost of the tool
    • extra execution time for collecting metrics
  • Interpretation of metrics consumes resources
• Validity of metrics
  • Does the metric really measure what it should?
  • What exactly should be measured?

Page 8: Metrics

Issues [2]

• Selection of metrics for measurement
  • Hundreds are available, each with some cost
• Basic metrics
  • Size (e.g., LOC)
  • Cost (in dollars)
  • Duration (months)
  • Effort (person-months)
  • Quality (number of faults detected)

Page 9: Metrics

Selection of metrics

• Identify problems from the basic metrics
  • e.g., high fault rates during the coding phase
• Introduce a strategy to correct the problems
• To monitor success, collect more detailed metrics
  • e.g., fault rates of individual programmers

Page 10: Metrics

Utility of metrics

• LOC
  • measures the size of the product
  • take it at regular intervals to find out how fast the project is growing
• What if the number of defects per 1000 LOC is high?
  • Then even if the LOC count is high, most of the code has to be thrown away.

Page 11: Metrics

Applicability of metrics

• Throughout the software process, such as
  • effort in person-months
  • staff turnover
  • cost
• Specific to a phase, such as
  • LOC
  • number of defects detected per hour of reviewing specifications

Page 12: Metrics

Metrics: planning

• When can we plan the entire software project?
  • At the very beginning?
  • After a rapid prototype is made?
  • After the requirements phase?
  • After the specifications are ready?
• Sometimes there is a need to do it early.

Page 13: Metrics

Metrics: planning

[Figure: relative range of the cost estimate (ticks at 2, 3, 4) versus the phase during which the estimate is made: requirements, specifications, design, implementation, integration. The range narrows as the project progresses.]

Page 14: Metrics

Planning: Cost estimation

• Client wants to know: how much will I have to pay?
• Problems with
  • underestimation (possible loss by the developer)
  • overestimation (the client may offer the bid to someone else)
• Cost
  • internal (salaries of personnel, overheads)
  • external (usually cost + profit)

Page 15: Metrics

Cost estimation

• Other factors:
  • desperate for work: charge less
  • client may think low cost => low quality, so raise the amount
• Too many variables
  • Human factors
    • quality of programmers, experience
    • what if someone leaves midway?
  • Size of product

Page 16: Metrics

Planning: Duration estimation

• Problem with underestimation
  • unable to keep to the schedule, leading to
    • loss of credibility
    • possible penalty clauses
• Problem with overestimation
  • the client may go to other developers
• Difficult for reasons similar to those for cost estimation

Page 17: Metrics

Metrics: planning - size of product

• Units of measurement
  • LOC = lines of code
  • KDSI = thousand delivered source instructions
• Problems
  • creation of code is only a part of the total effort
  • effect of using different languages on LOC
  • how should one count LOC? (see the sketch below)
    • executable lines of code?
    • data definitions?
    • comments? What are the pros and cons?
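
The counting choices above can be made concrete in code. The following is a minimal sketch of a LOC counter; its rules (count code lines and data definitions, optionally count comment-only lines, never count blanks) are illustrative assumptions, not a standard definition.

def count_loc(source: str, count_comments: bool = False) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                      # blank lines never count
        if stripped.startswith("#"):      # comment-only line (Python syntax)
            loc += 1 if count_comments else 0
            continue
        loc += 1                          # executable lines and data definitions
    return loc

program = """\
# compute a square
x = 5          # a data definition
print(x * x)
"""
print(count_loc(program))                        # -> 2
print(count_loc(program, count_comments=True))   # -> 3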

Page 18: Metrics

Problems with lines of code

• More problems with how to count
  • Job control language statements?
  • What if lines are changed or deleted?
  • What if code is reused?
• Not all code is delivered to clients
  • code may be for tool support
• What if you are using a code generator?
• Early on, you can only estimate the lines of code. So the cost estimate is based on another estimated quantity!!!

Page 19: Metrics

Estimating size of product

• FFP metric for cost estimation of medium-scale products
• Files, flows and processes (FFP)
  • File: collection of logically or physically related records that are permanently resident
  • Flow: a data interface between the product and the environment
  • Process: functionally defined logical or arithmetic manipulation of data
• Size and cost (see the sketch below):

  S = #Files + #Flows + #Processes,  C = d × S
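
A minimal sketch of the FFP computation above; the constant d (a cost per unit of S, calibrated from past projects) and the example counts are assumed values for illustration.

def ffp_size(files: int, flows: int, processes: int) -> int:
    # S = #Files + #Flows + #Processes
    return files + flows + processes

def ffp_cost(size: int, d: float) -> float:
    # C = d * S, where d converts size units to cost
    return d * size

s = ffp_size(files=4, flows=10, processes=7)   # S = 21
print(ffp_cost(s, d=1000.0))                   # C = 21000.0 (d is assumed)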

Page 20: Metrics

Techniques of cost estimation

• Take into account the following:
  • Skill levels of the programmers
  • Complexity of the project
  • Size of the project
  • Familiarity of the development team
  • Hardware
  • Availability of CASE tools
  • Deadline effect

Page 21: Metrics

Techniques of cost estimation

• Expert judgement by analogy
• Bottom-up approach
• Algorithmic cost estimation models
  • Based on mathematical theories
    • resource consumption during s/w development obeys a specific distribution
  • Based on statistics
    • a large number of projects are studied
• Hybrid models
  • mathematical models, statistics and expert judgement

Page 22: Metrics

COCOMO

• COnstructive COst MOdel
• Series of three models
  • Basic: macro-estimation model
  • Intermediate COCOMO
  • Detailed: micro-estimation model
• Estimates total effort in terms of person-months
  • Cost of development, management, and support tasks included
  • Secretarial staff not included

Page 23: Metrics

Intermediate COCOMO

• Obtain an initial estimate (nominal estimate) of the development effort from the estimate of KDSI (see the sketch below):

  Nominal effort = a × (KDSI)^b person-months

  System          a      b
  Organic         3.2    1.05
  Semi-detached   3.0    1.12
  Embedded        2.8    1.20
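
A minimal sketch of the nominal-effort formula above, showing how the three system types diverge for the same product size; the coefficients come from the table, the 10 KDSI product size is made up.

COEFFICIENTS = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def nominal_effort(kdsi: float, system: str) -> float:
    a, b = COEFFICIENTS[system]
    return a * kdsi ** b                      # person-months

for system in COEFFICIENTS:
    print(f"{system}: {nominal_effort(10.0, system):.1f} PM")
# organic: 35.9 PM, semi-detached: 39.5 PM, embedded: 44.4 PM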

Page 24: Metrics

Kinds of systems

• Organic
  • Organization has considerable experience in that area
  • Requirements are less stringent
  • Small teams
  • Simple business systems, data-processing systems
• Semi-detached
  • New operating system
  • Database management system
  • Complex inventory management system

Page 25: Metrics

Kinds of systems

• Embedded
  • Ambitious, novel projects
  • Organization has little experience
  • Stringent requirements for interfacing, reliability
  • Tight constraints from the environment
  • Embedded avionics systems, real-time command systems

Page 26: Metrics

Intermediate COCOMO (contd)

• Determine a set of 15 multiplying factors from different attributes (cost driver attributes) of the project
  • Page 274 of the book
• Adjust the effort estimate by multiplying the initial estimate with all the multiplying factors
• Also has a phase-wise distribution

Page 27: Metrics

Determining the rating

• Module complexity multiplier
  • Very low: control operations consist of a sequence of constructs of structured programming
  • Low: nested operators
  • Nominal: intermodule control and decision tables
  • High: highly nested operators, compound predicates, stacks and queues
  • Very high: reentrant and recursive coding, fixed-priority handling

Page 28: Metrics

COCOMO example

• System for office automation
• Four major modules
  • data entry: 0.6 KDSI
  • data update: 0.6 KDSI
  • data query: 0.8 KDSI
  • report generator: 1.0 KDSI
  • Total: 3.0 KDSI
• Category: organic
• Initial effort: 3.2 × (3.0)^1.05 = 10.14 PM (PM = person-months)

Page 29: Metrics

COCOMO example (contd)

• From the requirements, the ratings were assessed:
  • Complexity: High, 1.15
  • Storage: High, 1.06
  • Experience: Low, 1.13
  • Programmer capability: Low, 1.17
  • Other ratings are nominal
• EAF = 1.15 × 1.06 × 1.13 × 1.17 = 1.61
• Adjusted effort = 1.61 × 10.14 = 16.3 PM (see the sketch below)
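
A minimal sketch reproducing this worked example: the nominal effort for a 3.0 KDSI organic system, adjusted by the effort adjustment factor (EAF) formed from the four non-nominal multipliers.

import math

def cocomo_effort(kdsi, a, b, multipliers):
    nominal = a * kdsi ** b
    eaf = math.prod(multipliers)      # product of the cost-driver multipliers
    return nominal, eaf, nominal * eaf

nominal, eaf, adjusted = cocomo_effort(
    kdsi=3.0, a=3.2, b=1.05,          # organic coefficients
    multipliers=[1.15, 1.06, 1.13, 1.17],
)
print(round(nominal, 2), round(eaf, 2), round(adjusted, 1))  # -> 10.14 1.61 16.3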

Page 30: Metrics

Metrics: requirements phase

• Number of requirements that change during the rest of the software development process
  • if a large number changed during specification, design, …, something is wrong in the requirements phase
• Metrics for rapid prototyping
  • Are defect rates and mean-time-to-failure useful?
  • Knowing how often requirements change?
  • Knowing the number of times features are tried?

Page 31: Metrics

Metrics: specification phase

• Size of the specification document
  • may predict effort required for subsequent products
• What can be counted?
  • Number of items in the data dictionary
    • number of files
    • number of data items
    • number of processes
• Tentative information
  • a process in a DFD may be broken down later into different modules
  • a number of processes may constitute one module

Page 32: Metrics

Metrics: specification phase

• Cost
• Duration
• Effort
• Quality
  • number of faults found during inspection
  • rate at which faults are found (efficiency of inspection)

Page 33: Metrics

Metrics: design phase

• Number of modules (measure of size of target product)
• Fault statistics
• Module cohesion
• Module coupling
• Cyclomatic complexity
• Fan-in, fan-out

Page 34: Metrics

Cyclomatic complexity

• Number of binary decisions + 1 (see the sketch below)
• The number of branches in a module
• Proposed by McCabe
• The lower the value, the better
• Captures only control complexity, not data complexity
• For OO, cyclomatic complexity is usually low because methods are mostly small
  • also, the data component is important for OO, but it is ignored by cyclomatic complexity
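
A minimal sketch of the "binary decisions + 1" count for a Python function, using its AST; treating each if/while/for and each and/or as one binary decision is a simplifying assumption.

import ast

def cyclomatic_complexity(source: str) -> int:
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1                       # one binary decision each
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1    # 'and'/'or' chains
    return decisions + 1

code = """
def classify(n):
    if n < 0 and n % 2 == 0:
        return "negative even"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "other"
"""
print(cyclomatic_complexity(code))  # -> 5 (two ifs, one for, one 'and', plus 1)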

Page 35: Metrics

Architecture design as a directed graph

• Fan-in of a module:
  • the number of flows into the module plus the number of global data structures accessed by the module
• Fan-out of a module:
  • the number of flows out of the module plus the number of data structures updated by the module
• Measure of complexity (see the sketch below):

  length × (fan-in × fan-out)^2

Page 36: Metrics

OO design metrics

• Assumption: the effort in developing a class is determined by the number of its methods.
• Hence the overall complexity of a class can be measured as a function of the complexity of its methods.
• Proposal: Weighted Methods per Class (WMC)

Page 37: Metrics

WMC

• Let class C have methods M1, M2, ..., Mn.
• Let ci denote the complexity of method Mi.
• How do we measure ci?
• WMC is the sum of the method complexities (see the sketch below):

  WMC = c1 + c2 + ... + cn
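
A minimal sketch of WMC for a Python class. Assigning every method a complexity of 1 (so WMC reduces to the number of methods) is a common simplification and an assumption here; any per-method measure, such as cyclomatic complexity, could be plugged in.

import inspect

def wmc(cls, complexity=lambda method: 1) -> int:
    methods = [m for _, m in inspect.getmembers(cls, predicate=inspect.isfunction)]
    return sum(complexity(m) for m in methods)

class Stack:
    def __init__(self): self.items = []
    def push(self, x): self.items.append(x)
    def pop(self): return self.items.pop()
    def peek(self): return self.items[-1]

print(wmc(Stack))  # -> 4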

Page 38: Metrics

WMC validation

• Most classes tend to have a small number of methods, are simple, and provide some specific abstraction and operations.
• The WMC metric has a reasonable correlation with the fault-proneness of a class.

Page 39: Metrics

Depth of inheritance tree

• The depth of a class in a class hierarchy determines its potential for reuse. Deeper classes have higher potential for reuse.
• Inheritance increases coupling: changing classes becomes harder.
• The depth of inheritance (DIT) of class C is the length of the shortest path from the root of the inheritance tree to C.
• In the case of multiple inheritance, DIT is the maximum length of a path from the root to C. (see the sketch below)
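
A minimal sketch of DIT for Python classes, taking the maximum path length to the root (object) as the definition above prescribes for multiple inheritance; the class hierarchy is made up.

def dit(cls) -> int:
    if not cls.__bases__:              # 'object' is the root, at depth 0
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass                    # multiple inheritance

print(dit(A), dit(D))                  # -> 1 3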

Page 40: Metrics

DIT evaluation

• Basili et al. study, 1995
• Chidamber and Kemerer study, 1994
• Most classes tend to be close to the root.
• The maximum DIT value found was 10.
• Most classes have DIT = 0.
• DIT is significant in predicting the error-proneness of a class: higher DIT leads to higher error-proneness.

Page 41: Metrics

Metrics: implementation phase

• Intuition: more complex modules are more likely to contain faults
• Redesigning complex modules may be cheaper than debugging complex faulty modules
• Measures of complexity:
  • LOC
    • assumes a constant probability of fault per line of code
    • empirical evidence: the number of faults is related to the size of the product

Page 42: Metrics

Metrics: implementation phase

• McCabe's cyclomatic complexity
  • Essentially the number of branches in a module
  • The number of tests needed for branch coverage of a module
  • Easily computed
  • In some cases, good for predicting faults
  • Validity questioned
    • on theoretical grounds
    • experimentally

Page 43: Metrics

Metrics: implementation phase

• Halstead's software metrics (see the sketch below)
  • Number of distinct operators in the module (+, -, if, goto)
  • Number of distinct operands
  • Total number of operators
  • Total number of operands
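
A minimal sketch of Halstead's four base counts for a toy expression language; the token pattern and the operator set are assumptions for illustration, and the operator/operand split for a real language involves many more decisions (as the constructor question on the next slide shows).

import re
from collections import Counter

OPERATORS = {"+", "-", "*", "/", "=", "if", "goto"}

def halstead_counts(source: str) -> dict:
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[+\-*/=]", source)
    operators = Counter(t for t in tokens if t in OPERATORS)
    operands  = Counter(t for t in tokens if t not in OPERATORS)
    return {"n1": len(operators),           # distinct operators
            "n2": len(operands),            # distinct operands
            "N1": sum(operators.values()),  # total operators
            "N2": sum(operands.values())}   # total operands

print(halstead_counts("a = b + c - b"))
# -> {'n1': 3, 'n2': 3, 'N1': 3, 'N2': 4}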

Page 44: Metrics

Metrics: implementation phase

• High correlation shown between LOC and other complexity metrics
• Complexity metrics provide little improvement over LOC
• Problem with Halstead metrics for modern languages
  • Constructor: is it an operator? An operand?

Page 45: Metrics

Metrics: implementation and integration phase

• Total number of test cases
• Number of tests resulting in failure
• Fault statistics
  • Total number of faults
  • Types of faults
    • misunderstanding the design
    • lack of initialization
    • inconsistent use of variables
• Statistics-based testing:
  • zero-failure technique

Page 46: Metrics

Zero-failure technique

• The longer a product is tested without a single failure being observed, the greater the likelihood that the product is free of faults.
• Assume that the chance of failure decreases exponentially as testing proceeds.
• Figure out the number of test hours required without a single failure occurring. (see the sketch below)
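
A minimal sketch of one published form of this calculation, usually attributed to Brettschneider (1989), which follows from the exponential-decay assumption above; treat the formula and all numbers as illustrative rather than as the slide's prescribed method.

import math

def zero_failure_hours(target_failures: float,
                       observed_failures: int,
                       test_hours_so_far: float) -> float:
    """Failure-free test hours still needed before release."""
    fd, f, h = target_failures, observed_failures, test_hours_so_far
    return h * math.log(fd / (0.5 + fd)) / math.log((0.5 + fd) / (f + fd))

# e.g., willing to ship with about 1 residual failure, after observing
# 15 failures in 500 hours of testing so far:
print(round(zero_failure_hours(1.0, 15, 500.0)))   # -> 86 hours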

Page 47: Metrics

Metrics: inspections

• Purpose: measure the effectiveness of inspections
  • may reflect deficiencies of the development team, quality of code
• Measure fault density (see the sketch below)
  • Faults per page: specification and design inspections
  • Faults per KLOC: code inspection
  • Fault detection rate: #faults / hour
  • Fault detection efficiency: #faults / person-hour
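
A minimal sketch of the inspection measures above; the meeting data (faults found, KLOC inspected, duration, team size) are made-up values.

def inspection_metrics(faults: int, kloc: float, hours: float, inspectors: int) -> dict:
    return {
        "fault density (faults/KLOC)": faults / kloc,
        "detection rate (faults/hour)": faults / hours,
        "detection efficiency (faults/person-hour)": faults / (hours * inspectors),
    }

print(inspection_metrics(faults=12, kloc=1.5, hours=2.0, inspectors=4))
# -> fault density 8.0, detection rate 6.0, detection efficiency 1.5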

Page 48: Metrics

Metrics: maintenance phase

• Metrics related to the activities performed. What are they?
• Specific metrics:
  • total number of faults reported
  • classification by severity, fault type
  • status of fault reports (reported/fixed)

Page 49: Metrics

References

• Textbook
  • S. R. Schach, Classical and Object-Oriented Software Engineering (see "metrics" in the index)
• Other books
  • P. Jalote, An Integrated Approach to Software Engineering (see "metrics" in the index)

