Page 1: Lecture 7

Lecture 7

Advanced Topics in Testing

Page 2: Lecture 7

Mutation Testing

• Mutation testing concerns evaluating test suites for their inherent quality, i.e. their ability to reveal errors.
• We need an objective method to determine quality.
• Mutation testing differs from structural coverage, since it tries to define "what is an error".
• The basic idea is to inject defined errors into the SUT and evaluate whether a given test suite finds them – "killing a mutation".

Page 3: Lecture 7

Basic Idea

• We can statistically estimate (say) the number of fish in a lake by releasing a number of marked fish, and then counting the marked fish in a subsequent small catch.
• Example: release 20 marked fish.
• Catch 40 fish, of which 4 are marked.
• Then 1 in 10 is marked, so we estimate 200 fish in the lake.
• Pursue the same idea with marked SW bugs?
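This is the standard capture–recapture (Lincoln–Petersen) estimate, where M is the number of marked fish released, C the catch size and R the number of marked fish recaptured:

    \hat{N} = \frac{M \cdot C}{R} = \frac{20 \cdot 40}{4} = 200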

Page 4: Lecture 7

Mutations and Mutants

• The "marked fish" are injected errors, termed mutations.
• The mutated code is termed a mutant.
• Example: replace < by > in a Boolean expression:
  if ( x < 0 ) then … becomes if ( x > 0 ) then …
• If the test suite finds the mutation, we say this particular mutant is killed.
• Make a large set of mutants – typically using a checklist of known mutations – with a mutation tool.
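As a minimal sketch (hypothetical code, not from the lecture), here is such a relational-operator mutant in Java, together with an input that kills it:

    public class MutantDemo {
        // Original SUT: returns true iff n is negative.
        static boolean isNegative(int n) {
            return n < 0;
        }

        // Mutant: the relational operator < replaced by >.
        static boolean isNegativeMutant(int n) {
            return n > 0;
        }

        public static void main(String[] args) {
            // n = -1 kills the mutant: the original returns true, the mutant false.
            // n = 0 would NOT kill it: both return false.
            int n = -1;
            System.out.println("original: " + isNegative(n)
                    + ", mutant: " + isNegativeMutant(n));
        }
    }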

Page 5: Lecture 7

Mutation Score

• The idea is that if we can kill a mutant, we can identify a real bug too.
• Mutants which are semantically equivalent to the original code are called equivalents.
• Write Q ≡ P if Q and P are equivalents.
• Clearly we cannot kill equivalents.
• Mutation score (%) = (number of killed mutants / total number of non-equivalent mutants) × 100
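For instance (hypothetical numbers), if a tool generates 50 mutants, 5 prove to be equivalents, and the suite kills 36 of the remaining 45, the score is 36/45 × 100 = 80%:

    public class MutationScore {
        // Mutation score as a percentage, excluding equivalent mutants.
        static double score(int killed, int total, int equivalent) {
            return 100.0 * killed / (total - equivalent);
        }

        public static void main(String[] args) {
            System.out.println(score(36, 50, 5));  // prints 80.0
        }
    }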

Page 6: Lecture 7

Why should it work?

• Two assumptions are used in this field:
  – the Competent Programmer Hypothesis, i.e. "the program is mostly correct",
  – and the Coupling Effect.

Page 7: Lecture 7

Semantic Neighbourhoods

• Let Φ be the set of all programs semantically close to P (defined in various ways).
• Φ is the neighbourhood of P.
• Let T be a test suite, and let f : D → D be a functional specification of P.
• Traditionally we assume:
  ∀t ∈ T : P.t = f(t)  implies  ∀x ∈ D : P.x = f(x)
• i.e. T is a reliable test suite.
• This requires exhaustive testing.

Page 8: Lecture 7

Competent Programmer Hypothesis

• P is pathological iff P ∉ Φ.
• Assume programmers have some competence.

Mutation testing assumption: either P is pathological, or else
  ∀t ∈ T : P.t = f(t)  implies  ∀x ∈ D : P.x = f(x)

• We can now focus on building a test suite T that would distinguish P from all other programs in Φ.

Page 9: Lecture 7

Coupling Effect

• The competent programmer hypothesis limits the problem from infinite to finite.
• But the remaining problem is still too large.

Coupling effect: there is a small subset μ ⊆ Φ such that we only need to distinguish P from the programs in μ by tests in T.

Page 10: Lecture 7

Problems

• Can we be sure the coupling effect holds? Do simple syntactic changes define such a set?
• Can we detect and count equivalents? If we can't kill a mutant Q, is Q ≡ P, or is Q just hard to kill?
• How large is μ? It may still be too large to be practical.

Page 11: Lecture 7

Equivalent Mutants

• Offutt and Pan [1997] estimated that 9% of all mutants are equivalent.
• Bybro [2003] concurs, finding 8%.
• Automatic detection algorithms (basically static analysers) detect about 50% of these.
• Theorem proving (verification) techniques can also be used.

Page 12: Lecture 7

Coupling Effect

• For Q an incorrect version of P:
  semantic error size = Pr[ Q.x ≠ P.x ] for x drawn at random from D.
• If for every semantically large fault there is an overlap with at least one syntactically small fault, then the coupling effect holds.
• Selective mutation is based on a small set of semantically small, "hard to kill" errors.
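Semantic error size can be estimated empirically by random sampling. A minimal sketch, with toy stand-ins for P and Q (hypothetical, not from the lecture):

    import java.util.Random;

    public class SemanticErrorSize {
        // Toy "correct" program P and an incorrect version Q.
        static int p(int x) { return Math.abs(x); }
        static int q(int x) { return x < -1 ? -x : x; }  // faulty at x = -1 only

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int trials = 1_000_000, disagreements = 0;
            for (int i = 0; i < trials; i++) {
                // Sample x uniformly from the domain D = [-1000, 1000].
                int x = rnd.nextInt(2001) - 1000;
                if (p(x) != q(x)) disagreements++;
            }
            // Estimate of Pr[ Q.x != P.x ]: about 1/2001, a semantically small fault.
            System.out.println((double) disagreements / trials);
        }
    }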

Page 13: Lecture 7

Early Research: 22 Standard (Fortran) mutation operators

AAR  Array reference for array reference replacement
ABS  Absolute value insertion
ACR  Array reference for constant replacement
AOR  Arithmetic operator replacement
ASR  Array reference for scalar replacement
CAR  Constant for array reference replacement
CNR  Comparable array name replacement
CRP  Constants replacement
CSR  Constant for scalar variable replacement
DER  DO statement end replacement
DSA  DATA statement alterations
GLR  GOTO label replacement
LCR  Logical connector replacement
ROR  Relational operator replacement
RSR  RETURN statement replacement
SAN  Statement analysis
SAR  Scalar for array replacement
SCR  Scalar for constant replacement
SDL  Statement deletion
SRC  Source constant replacement
SVR  Scalar variable replacement
UOI  Unary operator insertion

Page 14: Lecture 7

Recent Research: Java Mutation Operators

• The first letter of each operator names its category:
  A = access control
  E = common programming mistakes
  I = inheritance
  J = Java-specific features
  O = method overloading
  P = polymorphism

Page 15: Lecture 7

AMC  Access modifier change
EAM  Accessor method change
EMM  Modifier method change
EOA  Reference assignment and content assignment replacement
EOC  Reference comparison and content comparison replacement
IHD  Hiding variable deletion
IHI  Hiding variable insertion
IOD  Overriding method deletion
IOP  Overriding method calling position change
IOR  Overridden method rename
IPC  Explicit call of parent's constructor deletion
ISK  super keyword deletion
JDC  Java-supported default constructor creation

Page 16: Lecture 7

JID  Member variable initialisation deletion
JSC  static modifier change
JTD  this keyword deletion
OAO  Argument order change
OAN  Argument number change
OMD  Overloaded method deletion
OMR  Overloaded method contents change
PMD  Instance variable declaration with parent class type
PNC  new method call with child class type
PPD  Parameter variable declaration with child class type
PRV  Reference assignment with other compatible type
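As a hypothetical illustration (example code, not from the lecture), here is what two of the operators above do to a small Java class pair:

    // AMC would mutate the access modifier below, e.g. protected -> private.
    class Account {
        protected double balance = 0.0;
        double fee() { return 1.0; }
    }

    class SavingsAccount extends Account {
        // IOD deletes this overriding method, so calls fall through
        // to Account.fee() and return 1.0 instead of 0.0.
        @Override
        double fee() { return 0.0; }
    }

    public class OperatorDemo {
        public static void main(String[] args) {
            Account a = new SavingsAccount();
            // The original prints 0.0; the IOD mutant prints 1.0, so any test
            // asserting a zero fee for savings accounts kills that mutant.
            System.out.println(a.fee());
        }
    }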

Page 17: Lecture 7

Practical Example

• The triangle program.
• Myers' "complete" test suite (13 test cases).
• Bybro's [2003] Java mutation tool and code.
• Result: 88% mutation score, 96% statement coverage.
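For reference, a minimal Java sketch of the classic triangle classification program (a common reconstruction of Myers' example, not Bybro's actual code):

    public class Triangle {
        // Classify a triangle by its three side lengths.
        static String classify(int a, int b, int c) {
            // Reject non-positive sides and triangle-inequality violations.
            if (a <= 0 || b <= 0 || c <= 0
                    || a + b <= c || b + c <= a || a + c <= b) {
                return "not a triangle";
            }
            if (a == b && b == c) return "equilateral";
            if (a == b || b == c || a == c) return "isosceles";
            return "scalene";
        }

        public static void main(String[] args) {
            System.out.println(classify(3, 4, 5));  // scalene
            System.out.println(classify(2, 2, 3));  // isosceles
            System.out.println(classify(1, 2, 4));  // not a triangle
        }
    }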

Page 18: Lecture 7

Status of Mutation Testing

• Various strategies: weak mutation, interface mutation, specification-based mutation.
• Our version is called strong mutation.
• Many mutation tools are available on the internet.
• The cost of generating mutants and detecting equivalents has come down.
• Not yet widely used in industry.
• Still considered "academic", not understood?

Page 19: Lecture 7

Learning-based Testing

1. Specification-based black-box testing
2. The learning-based testing (LBT) paradigm
   – connections between learning and testing
   – testing as a search problem
   – testing as an identification problem
   – testing as a parameter inference problem
3. Example frameworks:
   1. procedural systems
   2. Boolean reactive systems

Page 20: Lecture 7

Specification-based Black-box Testing

1. System requirement (Sys-Req)
2. System under test (SUT)
3. Test verdict pass/fail (oracle step)

[Architecture diagram: the Sys-Req feeds a test case generator (TCG), built on a constraint solver, which sends test cases to the SUT running on a language runtime; the SUT's output goes to an oracle, built on a constraint checker, which delivers the pass/fail verdict.]

Page 21: Lecture 7

Procedural System Example: Newton's Square root algorithm

Precondition: x ≥ 0.0. Postcondition: | y*y − x | ≤ ε.

[Diagram: the TCG → SUT → oracle pipeline, instantiated for the Newton code. The constraint solver generates the input x = 4.0, which satisfies the precondition x ≥ 0.0; the Newton code returns the output y = 2.0; the constraint checker verifies that x = 4.0, y = 2.0 satisfies | y*y − x | ≤ ε, giving the verdict.]
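A minimal Java sketch of the SUT in this example (an assumed implementation; any Newton-style square root code fits the pattern):

    public class NewtonSqrt {
        // Newton's method for the square root of x. Precondition: x >= 0.0.
        static double sqrt(double x, double eps) {
            double y = x > 1.0 ? x : 1.0;          // initial guess
            while (Math.abs(y * y - x) > eps) {    // postcondition as loop test
                y = (y + x / y) / 2.0;             // Newton update step
            }
            return y;
        }

        public static void main(String[] args) {
            double eps = 1e-9, x = 4.0;
            double y = sqrt(x, eps);
            // Oracle step: check the postcondition | y*y - x | <= eps.
            System.out.println(y + "  pass=" + (Math.abs(y * y - x) <= eps));
        }
    }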

Page 22: Lecture 7

Reactive System Example: Coffee Machine

Sys-Req: always( in = $1 implies after(10, out = coffee) )

[Diagram: the pipeline instantiated for a coffee machine. The constraint solver generates the input in0 := $1; the coffee machine responds with out11 := coffee; the constraint checker confirms that the trace in0 := $1, out11 := coffee satisfies always( in = $1 implies after(10, out = coffee) ), giving the pass/fail verdict.]
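A minimal sketch of what the oracle's check amounts to on a finite trace (hypothetical encoding; a real tool would use a temporal logic checker):

    public class CoffeeCheck {
        // Check always( in = $1 implies after(delay, out = coffee) )
        // over a finite trace of inputs and outputs indexed by time step.
        static boolean check(String[] in, String[] out, int delay) {
            for (int t = 0; t + delay < in.length; t++) {
                if ("$1".equals(in[t]) && !"coffee".equals(out[t + delay])) {
                    return false;  // requirement violated at step t
                }
            }
            return true;
        }

        public static void main(String[] args) {
            String[] in = new String[12], out = new String[12];
            in[0] = "$1";
            out[10] = "coffee";  // coffee appears 10 steps after the coin
            System.out.println(check(in, out, 10));  // true
        }
    }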

Page 23: Lecture 7

Key Problem: Feedback

Problem: how can we modify this architecture to …

1. improve the next test case using previous test outcomes?
2. execute a large number of good quality tests?
3. obtain good coverage?
4. find bugs quickly?

Page 24: Lecture 7

Learning-Based Testing

"Model-based testing without a model"

[Diagram: the TCG → SUT → oracle pipeline extended with a learner. Test inputs and SUT outputs also feed the learner, which infers a system model (Sys-Model); the model, together with Sys-Req, drives generation of the next test case, and the oracle delivers the pass/fail verdict.]

Page 25: Lecture 7

Basic Idea …

LBT is a search heuristic that:

1. incrementally learns an SUT model,
2. uses generalisation to predict bugs,
3. uses the best prediction as the next test case,
4. refines the model according to the test outcome.

Page 26: Lecture 7

Abstract LBT Algorithm

1. Use (i1, o1), …, (ik, ok) to learn a model Mk.
2. Model check Mk against Sys-Req.
3. Choose the "best counterexample" ik+1 from step 2.
4. Execute ik+1 on the SUT to produce ok+1.
5. Check whether (ik+1, ok+1) violates Sys-Req:
   a) Yes: terminate with ik+1 as a bug.
   b) No: go to step 1.

The difficulties lie in the technical details …
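A schematic Java sketch of this loop, with all component names as hypothetical stand-ins (a real implementation would plug in a concrete learner, model checker and oracle):

    import java.util.ArrayList;
    import java.util.List;

    public class LbtLoop {
        interface Model {}
        interface Learner { Model learn(List<String[]> observations); }
        interface ModelChecker { String findCounterexample(Model m, String sysReq); }
        interface Sut { String execute(String input); }
        interface Oracle { boolean violates(String in, String out, String sysReq); }

        static String run(Learner learner, ModelChecker mc, Sut sut,
                          Oracle oracle, String sysReq, int maxIters) {
            List<String[]> obs = new ArrayList<>();
            for (int k = 0; k < maxIters; k++) {
                Model m = learner.learn(obs);                 // step 1
                String i = mc.findCounterexample(m, sysReq);  // steps 2-3
                if (i == null) return null;                   // no predicted bug
                String o = sut.execute(i);                    // step 4
                if (oracle.violates(i, o, sysReq)) return i;  // step 5a: real bug
                obs.add(new String[]{i, o});                  // step 5b: refine model
            }
            return null;  // budget exhausted without finding a bug
        }
    }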

Page 27: Lecture 7

General Problems

The difficulty is to find combinations of models, requirements languages and checking algorithms (M, L, A) so that …

1. the models M are:
   – expressive,
   – compact,
   – partial and/or local (an abstraction method),
   – easy to manipulate and learn;

2. M and L are feasible to model check with A.

Page 28: Lecture 7

Incremental Learning

• Real systems are too large to be completely learned.
• Complete learning is not necessary to test many requirements (e.g. use cases).
• We use incremental (on-the-fly) learning:
  – generate a sequence of refined models M0, M1, …, Mi
  – convergence in the limit

Page 29: Lecture 7

Example: Boolean reactive systems

1. SUT: reactive systems
2. Model: deterministic Kripke structures
3. Requirements: propositional linear temporal logic (PLTL)
4. Learning: the IKL incremental learning algorithm
5. Model checker: NuSMV

Page 30: Lecture 7

LBT Architecture

Page 31: Lecture 7

A Case Study: Elevator Model

Page 32: Lecture 7

Elevator Results

Req     t first (sec)   t total (sec)   MCQ first   MCQ tot   PQ first   PQ tot    RQ first   RQ tot
Req 1   0.34            1301.3          1.9         81.7      1574       729570    1.9        89.5
Req 2   0.49            1146            3.9         99.6      2350       238311    2.9        98.6
Req 3   0.94            525             1.6         21.7      6475       172861    5.7        70.4
Req 4   0.052           1458            1.0         90.3      15         450233    0.0        91
Req 5   77.48           2275            1.2         78.3      79769      368721    20.5       100.3
Req 6   90.6            1301            2.0         60.9      129384     422462    26.1       85.4

Page 33: Lecture 7

Conclusions

• A promising approach …
• A flexible, general heuristic:
  – many models and requirement languages seem possible.
• Many SUT types might be testable:
  – procedural, reactive, real-time, etc.

Open Questions

• Benchmarking?
• Scalability? (abstraction, infinite state?)
• Efficiency? (model checking and learning?)

