Page 1: Software Metrics.

Software Metrics.

• Views on Testing

• Categories of Metrics

• Review of several OO metrics

CEN 5076 Class 7 – 10/17

Page 2: Software Metrics.

CEN 5076 Class 7 - 10/17 2

Intro to TT – Views on Testing

Exhaustive Testing:

• Usually cited as an impractical means of achieving a reliable and valid test.

• Questions arising:

1. Is there a way to test a program which is equivalent to exhaustive testing (in the sense of being reliable and valid)?

2. Is there a practical approximation to exhaustive testing?

3. If so, how good is the approximation?

Page 3: Software Metrics.


Intro to TT – Views on Testing

Is proving really better than testing?

• One should not rely solely on testing to demonstrate program correctness, since testing is not completely reliable.

• Proofs aren’t completely reliable either. Proofs can only provide assurance of correctness if all of the following are true:

1. There is a complete axiomatization of the entire running environment of the program – all languages, operating system, and hardware processors.

Page 4: Software Metrics.

Intro to TT – Views on Testing

2. The processors are proved consistent with the axiomatization.

3. The program is completely and formally implemented in such a way that a proof can be performed or checked mathematically.

4. The specifications are correct in that if every program in the system is correct w.r.t. its specifications, then the entire system performs as desired.

• The above is far beyond the state of the art of program specification and mechanical theorem proving (1975).

Page 5: Software Metrics.

Intro to TT – Views on Testing

• Despite the practical fallibility of proving, attempts at proof are valuable:

- Can reveal errors and assist in their prevention.

- Requires an understanding of the program.

- Provability sets a worthy standard for good languages, program structure, specifications, and documentation, and thereby assists in preventing errors.

• Neither proofs nor tests can in practice provide complete assurance that programs will not fail.

Page 6: Software Metrics.

Intro to TT – Views on Testing

Points to note:

• Tests provide accurate information about a program’s actual behavior in its actual environment.

• A proof is limited to conclusions about behavior in a postulated (i.e., axiomatized) environment.

• Testing and proving are complementary methods for decreasing the likelihood of program failure.

Page 7: Software Metrics.


Intro to TT – Axiomatizing Software Test Data Adequacy

Defns:

The specification S is a partial function. It defines what a program should compute.

The domain of S is the set of values for which S is defined.

The domain of a program P is the set of all values for which the program is defined.

For program P, let P(x) denote the result of executing P on input vector x.

Page 8: Software Metrics.


Axiomatizing S/w Test Data Adequacy

If x is in the specification’s domain, then we let S(x) denote the value which a program intended to fulfill S should produce on input x.

For programs P and Q using the same set of identifiers, P;Q denotes the program formed by replacing P’s unique exit and output statements with Q, with Q’s input statements deleted. Programs are assumed to be single-entry/single-exit.

P and Q denote programs, S denotes a spec, and T, T’, Ti, i = 1, 2, . . . denote test sets.

Page 9: Software Metrics.


Axiomatizing S/w Test Data Adequacy

Axioms [Weyuker ’86; Perry & Kaiser ’90]:

Applicability: For every program there exists an adequate test set.

Non-Exhaustive Applicability: There is a program P and a test set T s.t. P is adequately tested by T, and T is not an exhaustive test set.

Monotonicity: If T is adequate for P, and T is a subset of T’ then T’ is adequate for P.
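To make an adequacy predicate concrete, here is a minimal Python sketch using statement coverage as the criterion (the function names and the choice of statement coverage are illustrative assumptions, not the axioms' prescribed criterion). Monotonicity holds by construction: adding tests can only grow the set of executed lines.

```python
import sys

def covered_lines(program, test_set):
    """Execute `program` on each input, collecting the body line numbers it hits."""
    hit = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is program.__code__:
            hit.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for x in test_set:
            program(x)
    finally:
        sys.settrace(None)
    return hit

def adequate(program, test_set, all_lines):
    """Statement-coverage adequacy: T is adequate iff it executes every line."""
    return all_lines <= covered_lines(program, test_set)

def sign(x):          # a toy program P under test
    if x >= 0:
        return 1
    return -1
```

Here {1, -1} is adequate for `sign`, any superset of it is too (Monotonicity), and the empty set never is.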

Page 10: Software Metrics.


Axiomatizing S/w Test Data Adequacy

Axioms cont:

Inadequate Empty Set: The empty set is not an adequate test set for any program.

Renaming: Let P be a renaming of Q; then T is adequate for P iff T is adequate for Q.

Complexity: For every n, there is a program P, s.t. P is adequately tested by a size n test set, but not by any size n-1 test set.

Page 11: Software Metrics.


Axiomatizing S/w Test Data Adequacy

Axioms cont:

Antiextensionality: If two programs compute the same function (that is, they are semantically equivalent), a test set adequate for one is not necessarily adequate for the other.

There are programs P and Q s.t. P ≡ Q, T is adequate for P, but T is not adequate for Q.

E.g., A test set generated from S may be adequate for one implementation of S, (P) but not another implementation of S, (Q).

Page 12: Software Metrics.

Axiomatizing S/w Test Data Adequacy

Axioms cont:

General Multiple Change: When two programs are syntactically similar (that is, they have the same shape) they usually require different test sets.

There are programs P and Q which are the same shape, and a test set T such that T is adequate for P, but is not adequate for Q.

Two programs are the same shape if they are the same size, have the same form, and compute the same function in essentially the same way, using the same variables.

Page 13: Software Metrics.


Axiomatizing S/w Test Data Adequacy

Axioms cont:

Antidecomposition: Testing a program component in the context of an enclosing program may be adequate w.r.t. that enclosing program but not necessarily adequate for other uses of the component.

There exists a program P and component Q such that T is adequate for P, T’ is the set of vectors of values that variables can assume on entrance to Q for inputs t in T, and T’ is not adequate for Q.

Page 14: Software Metrics.

Axiomatizing S/w Test Data Adequacy

Axioms cont:

Anticomposition: Adequately testing each individual program component in isolation does not necessarily suffice to adequately test the entire program. Composing two program components results in interactions that cannot arise in isolation.

There exist programs P and Q, and test set T, s.t. T is adequate for P, and the set of vectors of values that variables can assume on entrance to Q for inputs in T is adequate for Q but is not adequate for P;Q. [P;Q is the composition of P and Q]

Page 15: Software Metrics.


Axiomatizing S/w Test Data Adequacy

OO testing and axioms [Perry and Kaiser ’90]:

Encapsulation – anticomposition

Inheritance – anticomposition, antidecomposition

Overriding of methods – antiextensionality

Multiple inheritance – general multiple change, antiextensionality

Page 16: Software Metrics.


Software Metrics

• Software metrics deal with the measurement of the software product and the process by which it is developed. [SEI Curriculum Module SEI-CM-12-1.1, 1988]

• Metrics are used to estimate/predict product costs and schedules and to measure productivity and product quality

• Good metrics should facilitate the development of models that are capable of predicting process or product parameters, not just describing them.

Page 17: Software Metrics.


Software Metrics

To identify the metrics that can support testing we ask the following questions:

1. When can we stop testing?

2. How many bugs can we expect?

3. Which testing technique is most effective?

4. Are we testing hard or are we testing smart?

5. Do we have a strong program or a weak test suite?

Page 18: Software Metrics.


Software Metrics

• Ideal metrics should be:

- Simple, precisely definable – it is clear how the metric can be evaluated,

- Objective, to the greatest extent possible,

- Easily obtainable (i.e., at a reasonable cost),

- Valid – the metric should measure what it is intended to measure,

- Robust – relatively insensitive to insignificant changes in the process or product.

Page 19: Software Metrics.

Software Metrics

Categories of metrics (Beizer ’90)

1. Linguistic metrics – metrics based on measuring properties of program or specification text without interpreting what the text means or the ordering of its components.

E.g. lines of code, number of statements, number of unique operators, number of unique operands, total number of operators, total number of operands, total number of keyword appearances, total number of tokens.
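Such counts can be collected mechanically. The sketch below uses Python's own tokenizer on Python source; the function name and the simple operator/operand split (punctuation and keywords vs. identifiers and literals) are illustrative assumptions, and Halstead's exact classification rules differ in detail.

```python
import io
import keyword
import token
import tokenize

def linguistic_metrics(source: str) -> dict:
    """Count token-level linguistic metrics over a piece of Python source text."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == token.OP or (tok.type == token.NAME and keyword.iskeyword(tok.string)):
            operators.append(tok.string)          # punctuation and keywords
        elif tok.type in (token.NAME, token.NUMBER, token.STRING):
            operands.append(tok.string)           # identifiers and literals
    return {
        "loc": source.count("\n"),
        "unique_operators": len(set(operators)),
        "unique_operands": len(set(operands)),
        "total_operators": len(operators),
        "total_operands": len(operands),
        "total_tokens": len(operators) + len(operands),
    }
```

For `def f(x):\n    return x + 1\n` this reports 6 total operators (`def ( ) : return +`) and 4 total operands (`f x x 1`), 3 of them unique.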

Page 20: Software Metrics.

Software Metrics

Categories of metrics (Beizer ’90)

2. Structural metrics – metrics based on structural relations between objects in the program, i.e., metrics on properties of control flowgraphs, or data flowgraphs; e.g., number of links, number of nodes, nesting depth.

3. Hybrid metrics – metrics based on some combination of structural and linguistic properties of a program, or on a function of both structural and linguistic properties.

Page 21: Software Metrics.


Software Metrics

• Software metrics can be broadly classified as either product metrics or process metrics.

• Product metrics are measures of the software product at any stage of its development, from requirements to installed system.

• Process metrics are measures of the software development process, such as overall development time, type of methodology used, or the average level of experience of the programming staff.

Page 22: Software Metrics.


Software Metrics – Product Metrics

• Most of the initial work on product metrics dealt with the characteristics of source code.

Type of product metrics:

1. Size Metrics:

a) Lines of code (LOC) - possibly the most widely used.

2. Complexity Metrics:

a) Cyclomatic complexity ( v(G) )

b) Extensions to v(G)
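Cyclomatic complexity is v(G) = E − N + 2P, with E edges, N nodes, and P connected components of the control flowgraph. A sketch (the edge-list representation and function name are illustrative assumptions):

```python
def cyclomatic_complexity(edges, num_components=1):
    """McCabe's v(G) = E - N + 2P over a flowgraph given as (src, dst) edge pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * num_components

# Flowgraph of a single if/else: entry branches to A or B, both rejoin at exit.
branching = [("entry", "A"), ("entry", "B"), ("A", "exit"), ("B", "exit")]
```

One decision gives v(G) = 4 − 4 + 2 = 2, matching the rule of thumb v(G) = decisions + 1 for a single-entry/single-exit graph.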

Page 23: Software Metrics.


Software Metrics – Product Metrics

3. Halstead’s Product Metrics – a unified set of metrics that apply to several aspects of programs, as well as to the overall software product.

a) Program vocabulary, n = n1 + n2, where n is the total number of unique tokens, n1 the number of unique operators, and n2 the number of unique operands from which the program is constructed.

b) Estimated program length, H = n1*log2 n1 + n2*log2 n2

Page 24: Software Metrics.


Software Metrics – Product Metrics

c) Halstead (observed) length, N = N1 + N2, where N1 is the program’s total operator count and N2 its total operand count.

d) Bug prediction (Beizer ’90):

B = (N1 + N2) * log2(n1 + n2) / 3000
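The Halstead quantities fit in a few lines of Python; the function name and the sample counts are illustrative assumptions, and the bug predictor is the B = (N1 + N2)·log2(n1 + n2)/3000 formula quoted from Beizer.

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead metrics from unique (n1, n2) and total (N1, N2) operator/operand counts."""
    n = n1 + n2                                   # vocabulary
    N = N1 + N2                                   # observed program length
    H = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    B = N * math.log2(n) / 3000                   # predicted bugs (Beizer '90)
    return {"vocabulary": n, "length": N, "estimated_length": H, "bugs": B}

# Illustrative counts, e.g. from a very small function:
m = halstead(n1=6, n2=3, N1=6, N2=4)
```

For these counts the predicted bug count is 10·log2 9 / 3000 ≈ 0.011, i.e., far less than one expected bug for so small a program.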

Page 25: Software Metrics.


Software Metrics – Product Metrics

4. Quality Metrics – software quality is a characteristic that can be measured at every phase of the software development cycle.

a) Defect Metrics

- Number of design changes

- Number of errors detected by code inspections

- Number of errors detected in program tests

- Number of code changes required

b) Reliability Metrics

- Mean time to failure (MTTF)
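MTTF is the total observed operating time divided by the number of observed failures; a minimal sketch (names assumed for illustration):

```python
def mttf(operating_hours: float, failures: int) -> float:
    """Mean time to failure: total observed operating time per observed failure."""
    if failures == 0:
        raise ValueError("no failures observed; MTTF cannot be estimated from this data")
    return operating_hours / failures
```

A system that ran 1000 hours with 4 failures has an MTTF of 250 hours.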

Page 26: Software Metrics.


Metrics for OO Systems

• We will review 9 metrics used by the Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center. [Dr. Linda Rosenberg]

1. Cyclomatic complexity (CC)

2. Lines of Code (LOC)

3. Comment percentage (CP)

4. Weighted methods per class (WMC)

5. Response for a class (RFC)

Page 27: Software Metrics.


Metrics for OO Systems

6. Lack of cohesion of methods (LCOM)

7. Coupling between objects (CBO)

8. Depth of inheritance tree (DIT)

9. Number of children (NOC)
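Three of these metrics (WMC with unit method weights, DIT, and NOC) can be sketched with Python introspection on a toy hierarchy. The class names are illustrative assumptions; the analysis-heavy metrics (CC, CP, RFC, LCOM, CBO) need real static analysis and are not shown.

```python
import inspect

class Shape:                      # root of a toy hierarchy (DIT 0)
    def area(self): ...
    def perimeter(self): ...

class Circle(Shape):              # DIT 1, overrides one method
    def area(self): ...

class Square(Shape):              # DIT 1
    def area(self): ...

def wmc(cls):
    """Weighted Methods per Class, unit weights: methods defined directly in cls."""
    return sum(1 for v in vars(cls).values() if inspect.isfunction(v))

def dit(cls):
    """Depth of Inheritance Tree: steps from cls to the hierarchy root (object excluded)."""
    return len(cls.__mro__) - 2

def noc(cls):
    """Number of Children: count of direct subclasses of cls."""
    return len(cls.__subclasses__())
```

Here wmc(Shape) = 2, dit(Circle) = 1, and noc(Shape) = 2.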

