
Computer Science

Automated Test Data Generation for Aspect-Oriented Programs

Mark Harman (King’s College London, UK)

Fayezin Islam (T-Zero Processing Services, US)

Tao Xie (North Carolina State University, US)

Stefan Wappler (Berner & Mattner, Germany)


Background

Automated testing of aspect-oriented programs

• Testing aspectual composition behavior (pointcut behavior) [Ferrari et al. ICST 08, Anbalagan&Xie ISSRE 08, …]

• Testing aspectual behavior (advice behavior) [Xie&Zhao AOSD 06, Xie et al. ISSRE 06, ...]


Testing Aspectual Behavior

• Aspect weaving (e.g., with ajc, abc)
  – Aspect class (bytecode)
  – Advice method (bytecode)

• Straightforward unit testing: feed aspect classes to OO test generation tools based on bytecode
  – Issue: arguments can be thisJoinPoint or AroundClosure objects (see the sketch below)

• Aspectra [Xie&Zhao AOSD 05]: generate tests for woven classes but focus on aspectual behavior
  – Feed woven classes to OO test generation tools
  – Base classes serve as "test drivers"

• Leverage existing OO tools for testing AOP programs, e.g., Parasoft Jtest (a random testing tool)
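To make the thisJoinPoint/AroundClosure issue concrete, here is a minimal annotation-style AspectJ sketch; the Account and OverdraftCheck names are illustrative, not from the paper. After weaving with ajc, the @Around advice becomes a method on the aspect class whose first parameter is a ProceedingJoinPoint, exactly the kind of argument an off-the-shelf OO test generator cannot construct, which is why Aspectra targets the woven base classes instead.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Illustrative aspect (names are hypothetical). Compiled with ajc, the
// @Around advice turns into an advice method on the aspect class that
// takes a ProceedingJoinPoint/AroundClosure argument -- an object an
// OO test generator cannot meaningfully instantiate on its own.
@Aspect
public class OverdraftCheck {
    @Around("execution(void Account.debit(double)) && args(amount)")
    public Object checkDebit(ProceedingJoinPoint pjp, double amount)
            throws Throwable {
        if (amount < 0) {
            throw new IllegalArgumentException("negative debit: " + amount);
        }
        return pjp.proceed();  // continue with the original Account.debit
    }
}
```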


Example Program Under Test

[Figure: base-class code + aspect code of the example program]


New Contributions

• A new system for automated test data generation for AOP based on Search-Based Testing, i.e., evolutionary testing
  – Input domain: receiver object, method parameters
  – E.g., account.debit(amount)

• Empirical studies demonstrating the benefits of the system in AOP structural testing
  – Effectiveness: better than random testing
  – Efficiency: AOP domain reduction techniques
  – Efficiency: focusing test effort on aspectual branches


What is Search-Based Testing?

In search-based testing, we apply search techniques to explore large input spaces, guided by a fitness function.

The fitness function measures how good (how close) an input is with respect to reaching the goal, e.g., covering the true branch of a given branching condition.


Evolutionary Algorithms

[Diagram: the evolutionary-algorithm loop over a population of chromosomes: fitness evaluation → End? → selection → recombination → mutation → insertion → back to fitness evaluation]
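A minimal Java sketch of this loop, with all names illustrative (a production tool such as EvoUnit is considerably richer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Skeleton of the loop in the diagram; Chromosome and its operators are
// assumed to be supplied by the concrete problem (names illustrative).
interface Chromosome {
    double fitness();                         // lower is better; 0 = goal reached
    Chromosome recombine(Chromosome other, Random rnd);
    Chromosome mutate(Random rnd);
}

final class EvolutionLoop {
    static Chromosome evolve(List<Chromosome> population, int maxGenerations, Random rnd) {
        Chromosome best = population.get(0);
        for (int gen = 0; gen < maxGenerations; gen++) {
            // Fitness evaluation + "End?" check
            for (Chromosome c : population) {
                if (c.fitness() < best.fitness()) best = c;
            }
            if (best.fitness() == 0.0) return best;   // target reached

            // Selection -> recombination -> mutation -> insertion
            List<Chromosome> next = new ArrayList<>();
            while (next.size() < population.size()) {
                Chromosome p1 = tournament(population, rnd);
                Chromosome p2 = tournament(population, rnd);
                next.add(p1.recombine(p2, rnd).mutate(rnd));
            }
            population = next;
        }
        return best;  // budget exhausted: best individual seen so far
    }

    // Binary tournament selection: the fitter of two random individuals wins.
    private static Chromosome tournament(List<Chromosome> pop, Random rnd) {
        Chromosome a = pop.get(rnd.nextInt(pop.size()));
        Chromosome b = pop.get(rnd.nextInt(pop.size()));
        return a.fitness() <= b.fitness() ? a : b;
    }
}
```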


Evolutionary Testing

[Diagram: the same loop specialised to testing: chromosomes are decoded into test cases (method sequences), which are executed; execution is monitored, and the monitoring results feed fitness evaluation, followed by selection, recombination, mutation, and insertion]
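In the loop above, a chromosome is decoded into a method sequence before execution. A minimal, hypothetical sketch for the running account.debit(amount) example (the Account class is a stand-in, not code from the paper):

```java
// Minimal Account stand-in so the sketch compiles (hypothetical).
class Account {
    private double balance;
    Account(double balance) { this.balance = balance; }
    void debit(double amount) {
        if (amount > balance) throw new IllegalStateException("overdraft");
        balance -= amount;
    }
}

// Decode a numeric chromosome into a test case (method sequence):
// gene 0 initialises the receiver object, gene 1 is the method parameter.
final class AccountTestDecoder {
    static void runTestCase(double[] genes) {
        Account account = new Account(genes[0]); // receiver object state
        account.debit(genes[1]);                 // method parameter
        // Execution is monitored (e.g., via instrumentation) to compute
        // the fitness of this chromosome for the target branch.
    }
}
```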


Structural Evolutionary Testing

Target: true branch of the branching condition if (A < B)

Fitness = (A - B) + 1 (the branch distance, evaluated on the predicate of the branching condition):
  101 = (100 - 0) + 1
  51 = (100 - 50) + 1
  0 = (100 - 101) + 1

The lower the fitness value, the better; a fitness value of 0 means the target is reached.
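The branch distance above translates directly to code. A small sketch, assuming the standard constant K = 1 added while the comparison is unsatisfied:

```java
// Branch distance for covering the TRUE branch of "if (A < B)":
// distance is 0 once A < B holds; otherwise (A - B) + K measures how
// far the inputs are from flipping the condition (K = 1 penalises an
// exactly-equal comparison).
final class BranchDistance {
    static final double K = 1.0;

    static double trueBranch(double a, double b) {
        return a < b ? 0.0 : (a - b) + K;
    }

    public static void main(String[] args) {
        System.out.println(trueBranch(100, 0));    // 101.0
        System.out.println(trueBranch(100, 50));   // 51.0
        System.out.println(trueBranch(100, 101));  // 0.0
    }
}
```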


AOP Domain Reduction - Motivation

• Input domain: receiver object, method parameters
  – E.g., account.debit(amount)

• Not all input variables are relevant to covering the target branch inside aspects

[Figure: example code; target: false branch]


Slicing-based Domain Reduction

• Irrelevant-input-variable identification
  – Start with the slicing criterion: the predicates of the target branches
  – Extract backward program slices (based on data and control dependence)
  – Identify the relevant input variables: those appearing in the slices, e.g., in account.debit(amount)

• Domain reduction: search over only the relevant input variables (see the sketch below)

[Figure: example code; target: false branch]
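Below is a sketch of the reduction step, assuming the backward slice (e.g., computed with the Indus slicer) has already been projected onto the set of variable names it mentions; only input variables in that set remain in the search space. All identifiers are illustrative:

```java
import java.util.List;
import java.util.Set;

// Sketch of slicing-based domain reduction (names hypothetical).
// Given the input variables of the test driver and the set of variables
// that appear in the backward slice of the target branch's predicate,
// search only over the relevant positions of the input vector.
final class DomainReduction {
    static List<String> relevantInputs(List<String> inputVars, Set<String> sliceVars) {
        return inputVars.stream()
                .filter(sliceVars::contains)   // keep vars occurring in the slice
                .toList();
    }

    public static void main(String[] args) {
        // account.debit(amount): suppose the slice mentions only "amount",
        // so "account" is excluded from the search space.
        List<String> inputs = List.of("account", "amount");
        Set<String> slice = Set.of("amount", "balance");
        System.out.println(relevantInputs(inputs, slice)); // [amount]
    }
}
```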


EvolutionaryAspectTester (EAT) System Implementation

Components:
• Indus slicer [Ranganath et al. 07]
• EvoUnit [Wappler 08]
• Aspectra [Xie&Zhao 06]


Evaluation Benchmarks

14 benchmarks from [Xie&Zhao 06, Rinard et al. 04, Dufour et al. 04, Hannemann&Kiczales 02]


Study 1: Assessment of evo testing

RQ 1.1. Can evolutionary testing outperform random testing for AOP testing?


RQ 1.1: Assessment of evo testing

[Chart: coverage improvement of evolutionary testing over random testing; better branch coverage on 5/14 benchmarks, by up to 43%]


RQ 1.1: Assessment of evo testing cont.

[Chart: effort reduction of evolutionary testing over random testing; effort reduction on 9/14 benchmarks, by up to 61%]


Findings: Assessment of evo testing

RQ 1.1. Can evolutionary testing outperform random testing for testing aspect-oriented programs?

• Better branch coverage on 5/14 benchmarks (0%~43%)
• Effort reduction on 9/14 benchmarks (0%~61%)


Study 2: Impact of domain reduction

RQ 2.1. How many branches have irrelevant parameters, and what percentage of parameters is irrelevant for each such branch?

RQ 2.2. What is the effort reduction (%) for each such branch?

RQ 2.3. What is the effort reduction (%) for each program?


RQ 2.1: Impact of domain reduction

90/434 aspectual branches have irrelevant parameters


RQ 2.1: Impact of domain reduction

[Chart: input domain reduction for branches with non-zero reduction; reductions range from 25% to 100%]


RQ 2.2: Impact of domain reduction

[Chart: per-branch effort reduction from domain reduction, ranging from -88% to 94%; effort increases on 25% of branches (mostly easy/trivial branches), stays the same on 6%, and decreases on 69%]


RQ 2.3: Impact of domain reduction

[Chart: per-program effort reduction from domain reduction, ranging from 17% to 93%]


Findings: Impact of domain reduction

RQ 2.1. 99/434 branches have irrelevant parameters; for each such branch, 25%~100% of its parameters are irrelevant.

RQ 2.2. Effort reduction per branch ranges from -88% to 94%, with 69% of branches seeing a reduction.

RQ 2.3. Effort reduction per program ranges from 17% to 93%.


Study 3: Impact of focusing on testing aspectual behavior

RQ 3.1. How much test data generation effort is saved by focusing on aspectual behavior instead of all behavior?


RQ 3.1: Impact of aspect focusing

[Chart: effort reduction from focusing on aspectual behavior rather than all behavior; effort reduction on all 14 benchmarks, from 3% to 99.99%]


RQ 3.1: Impact of aspect focusing cont.

[Chart: coverage improvement from focusing on aspectual behavior rather than all behavior; improvement on 6/14 benchmarks, by up to 62%]


Findings: Impact of focusing on testing aspectual behavior

RQ 3.1. How much test data generation effort is saved by focusing on aspectual behavior instead of all behavior?

• Effort reduction on all 14 benchmarks (3%~99.99%)
• Coverage improvement on 6/14 benchmarks (0%~62%)


Conclusion

• A new system of automated test data generation for AOP based on Search-Based Testing

• Empirical studies demonstrating the benefits of the system in AOP structural testing
  – Effectiveness: better than random testing
  – Efficiency: AOP domain reduction techniques
  – Efficiency: focusing test effort on aspectual branches

• Future work: more advanced techniques (e.g., symbolic execution), more testing objectives, larger AOP programs


Questions?


Structural Evolutionary Testing

Fitness = Approximation_Level + Local_Distance

[Figure: control-flow graph with branching nodes at approximation Levels 4 down to 1 on the path to the target]

Approximation level: identify the relevant branching statements using control dependence, e.g., (#expected/#actual - 1).

Local distance for the predicate of the branching condition if (A < B): Local_Distance = (A - B) + 1
  101 = 0 + (100 - 0) + 1
  51 = 0 + (100 - 50) + 1
  0 = 0 + (100 - 101) + 1

The lower the fitness value, the better; a fitness value of 0 means the target is reached.
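Combining the two components in code, as a compact sketch (the approximation-level bookkeeping is simplified here to a precomputed count of control-dependent branching nodes still separating execution from the target):

```java
// Sketch of the combined fitness (hypothetical bookkeeping): the
// approximation level counts the control-dependent branching nodes
// still between the execution path and the target; the local distance
// is the branch distance at the node where execution diverged.
final class Fitness {
    static double of(int approximationLevel, double localDistance) {
        return approximationLevel + localDistance;
    }

    public static void main(String[] args) {
        // The slide's examples: approximation level 0, target if (A < B)
        System.out.println(of(0, (100 - 0) + 1));   // 101.0
        System.out.println(of(0, (100 - 50) + 1));  // 51.0
        System.out.println(of(0, (100 - 101) + 1)); // 0.0
    }
}
```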


RQ 2.4: Impact of domain reduction

Collateral coverage effect of domain reduction: 9 branches show a statistically significant change in collateral coverage.