Date posted: 06-Feb-2015
Category: Technology
Uploaded by: aung-thu-rha-hein
Presented by:
Aung Thu Rha Hein (5536871), Boonya Suwanmane (5436284), Nattachart Tamkittikhun (5637378)

Partition-Based Regression Verification [1]

[1] Marcel Böhme, Bruno C. d. S. Oliveira, Abhik Roychoudhury. School of Computing, National University of Singapore. Published at ICSE '13, San Francisco, USA.
Outline
- Introduction
- Partition-Based Regression Verification
- Empirical Study
- Results and Analysis
- Threats to Validity
- Related Works
- Discussion & Conclusion
Introduction – Software Regression
- Software regression: software bugs that occur after changes to software functionality
- Regression testing: selective retesting of a software system after changes
- Regression verification: verifying the correctness of a program relative to its earlier versions
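To make the distinction concrete, here is a minimal, hypothetical example (not from the paper) of a change that introduces a software regression between a version P and its changed version P':

```python
# Hypothetical example: a change that introduces a software regression.
# Version P computes a discount correctly; version P' "refactors" the
# boundary and breaks one class of inputs.

def price_p(amount):
    # Original version P: 10% discount for orders of 100 or more.
    if amount >= 100:
        return amount * 0.9
    return amount

def price_p_prime(amount):
    # Changed version P': the boundary was accidentally tightened,
    # so amount == 100 no longer gets the discount (a regression).
    if amount > 100:
        return amount * 0.9
    return amount

# amount = 100 is a regression-revealing input: the outputs differ.
print(price_p(100), price_p_prime(100))    # 90.0 100
# amount = 50 behaves identically in both versions.
print(price_p(50) == price_p_prime(50))    # True
```

Regression testing would retest such inputs; regression verification aims to show that no such input exists.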
Introduction – Motivation
- Verify the correctness of a software version relative to its predecessor
- Propose a more practical approach than classical regression verification
Introduction – Problem Statement
- Classical regression verification requires specifications
- The verification process is time-consuming
- It offers no partial verification (all-or-nothing, no intermediate guarantees)
Introduction – Research Contributions
- Introduces partition-based regression verification (PRV)
- Proposes a differential partitioning technique
- Provides an alternative to regression test generation techniques
Introduction – PRV
- A gradual approach to regression verification based on the exploration of differential partitions
- Verifies inputs partition by partition
- Shares the advantages of regression verification (RV) and regression testing (RT)
Introduction – Differential Partitions
- Paths are computed by symbolic execution
- Requires deterministic program execution
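The idea of differential partitions can be illustrated with a small sketch (hypothetical programs, not the paper's algorithm): concrete inputs are grouped by the pair of paths they take in the old version P and the changed version P'. Inputs in the same partition follow the same paths in both versions and therefore behave alike, so one representative test speaks for the whole partition. A real implementation derives these partitions with symbolic execution rather than by enumerating a concrete domain, as done here for illustration.

```python
# Hypothetical sketch: grouping concrete inputs into differential
# partitions by the pair of paths they take in P and P'.

def run_p(x):
    # Old version P: discount at x >= 100. Returns (path, output).
    if x >= 100:
        return ("discount",), x * 0.9
    return ("full",), x

def run_p_prime(x):
    # Changed version P': condition tightened to x > 100.
    if x > 100:
        return ("discount",), x * 0.9
    return ("full",), x

# Partition a small input domain by (path in P, path in P').
partitions = {}
for x in range(0, 200):
    (path_old, out_old), (path_new, out_new) = run_p(x), run_p_prime(x)
    key = (path_old, path_new)
    partitions.setdefault(key, []).append((x, out_old == out_new))

# A partition is difference-revealing if any member's outputs differ.
for key, members in partitions.items():
    same = all(eq for _, eq in members)
    label = "same-output" if same else "difference-revealing"
    print(key, label, len(members))
```

In this sketch the domain splits into three partitions; only the partition where P takes the discount path but P' does not (the single input x = 100) is difference-revealing.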
Partition-Based Regression Verification
- Emphasis: regression errors, i.e., errors that interrupt or terminate the program
- In the PRV experiments, exploration is continuous: detected regression errors are reported without stopping the analysis
PRV: A) Computing Differential Partitions
- Based on detecting the test-suite input values that lead to a regression error
- Output: a set of test cases and a set of conditions
PRV: B) Computing Reachability Conditions
- Based on the detected condition
- Output: a condition that, depending on the input value, serves as the criterion for a reachable or unreachable condition
PRV: C) Computing Propagation Conditions
- Based on detecting where the differential program states converge
- Output: the statement instance at line Ni (where i stands for the line number)
PRV: D) Computing Difference Conditions
- Based on detecting where the differential program states converge
- Same algorithm as C), plus a value-change check: are the converged outputs different?
- Output: a set of changed statement instances at line Ni (where i stands for the line number) that cause the outputs at the convergence line to differ
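A minimal sketch of the idea behind difference conditions (hypothetical programs, not the paper's implementation): both versions converge at the return statement, and the difference condition holds for an input exactly when the converged states differ.

```python
# Hypothetical sketch: classifying an input at the convergence point.
# Both versions converge at the 'return' statement; the difference
# condition holds when the converged states (here, the output) differ.

def version_p(x):
    y = x + 1 if x % 2 == 0 else x       # old computation
    return y                             # convergence point

def version_p_prime(x):
    y = x + 1 if x % 2 == 0 else x + 2   # changed branch
    return y                             # convergence point

def difference_condition_holds(x):
    # True iff the states at the convergence point differ for input x.
    return version_p(x) != version_p_prime(x)

# Even inputs take the unchanged branch: the condition does not hold.
print(difference_condition_holds(4))    # False
# Odd inputs take the changed branch: the condition holds.
print(difference_condition_holds(3))    # True
```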
PRV: E) Generating Adjacent Test Cases
- Based on detecting where the differential program states converge
- Algorithm: if an adjacent condition has been deleted, check whether the program still computes the output beyond the existing condition
- If a condition is added, replaced, or reordered, check whether the program still computes the output
- If the output can be computed, determine whether it is the same or different at the convergence statement
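A concolic-style sketch of how an adjacent test case might be generated (hypothetical: the paper works with symbolic conditions and a constraint solver, whereas this sketch scans a small concrete domain): negate the last clause of the current input's path condition and find an input that satisfies the negation, which lands in the adjacent partition.

```python
# Hypothetical concolic-style sketch of adjacent test-case generation.
# The current input x = 100 takes the path where "x > 100" is False in
# the changed version; flipping that last clause and solving yields an
# input in the adjacent differential partition.

# Path-condition clauses as (description, predicate) pairs.
clauses = [("x > 100", lambda x: x > 100)]
current = 100
assert not clauses[-1][1](current)      # current input falsifies the clause

# "Solve" the flipped clause by scanning a small domain; a real
# implementation would hand the negated path condition to an SMT solver.
adjacent = next(x for x in range(0, 200) if clauses[-1][1](x))
print("adjacent test case:", adjacent)  # 101
```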
PRV: F) Theorems
- In practice, the absence of regression errors can be guaranteed for all inputs to the same extent that symbolic execution can guarantee the absence of program errors.
- Specifically, the theorems assume deterministic program execution.
Empirical Study
- Evaluates the relative efficiency of PRV and discusses its practicability based on the authors' experience
- Does not prove the scalability of PRV: it suffers from the same limitations as symbolic execution
- However, it can benefit from optimizations such as domain reduction, parallelization, and better search strategies
Empirical Study – Setup and Infrastructure
- PRV is built into the authors' dynamic backward slicing tool, JSlice
- Differential partitions are explored in a breadth-first manner, starting from the same initial input, within 5 minutes unless stated otherwise
- Every version of the same subject uses the same test driver to construct the necessary inputs
- Subject programs are analyzed on a desktop computer with an Intel 3 GHz quad-core processor and 4 GB of memory
Empirical Study – Subject Programs
Subject programs were chosen according to two criteria:
- They represent a variety of evolving programs
- They are discussed in related work, which allows comparison with the authors' own experimental results
There are 83 versions of programs, ranging from 20 to almost 5,000 lines of code:
- Some are derived by seeding faults into the original versions (so-called mutants)
- Some are real versions committed to a version control system
Empirical Study – Subject Programs
- The authors compare their empirical results with those of the referenced work on regression verification and regression test generation
- No empirical results are available for the regression test generation techniques or for differential symbolic execution
- All programs are tested as whole programs, except Apache CLI, for which the command-line component was tested for regression
Empirical Study – Research Questions
- RQ1: How efficiently does PRV find the first input that exposes a semantic difference?
- RQ2: How efficiently does PRV find the first input that exposes software regression?
- RQ3: How practical is PRV in an example usage scenario?
Results and Analysis – RQ1: Efficiency – Semantic Difference
Two aspects are measured when searching for the first difference-revealing input:
- Average time (runs exceeding 5 minutes are not included)
- Mutation score: the fraction of versions for which a difference-revealing input can be found within 5 minutes
Results and Analysis – RQ1: Efficiency – Semantic Difference
Answer to RQ1: PRV generates a difference-revealing test case for, on average, 21% more version pairs in 41% less time than the eXpress-like approach that analyzes only the changed version P'.
Results and Analysis – RQ2: Efficiency – Software Regression
- In practice, not every difference-revealing test case reveals software regression
- A difference-revealing test case can be checked formally or informally against the programmer's expectation
Results and Analysis – RQ2: Efficiency – Software Regression
Answer to RQ2: PRV generates a regression-revealing test case for, on average, 48% more version pairs in 63% less time than the eXpress-like approach that analyzes only the changed version P'.
Results and Analysis – RQ3: Practicability – Usage Scenario: Apache CLI
- Apache CLI is used to evaluate PRV in a practical usage scenario
- PRV generates difference-revealing test cases within a bound of 20 minutes for every version pair
- A developer checks these test cases for regression and relates the regression-revealing test cases to changes that semantically interfere
Results and Analysis – RQ3: Practicability – Usage Scenario: Apache CLI
Answer to RQ3: For the evolution of Apache CLI over 6 years, tests generated as witnesses of differential behavior between successive versions suggest, toward the latest revision: an average progression of 49%, regression of 18%, and intermediate semantic changes of 33%.
Threats to Validity
- Main threat to internal validity: the implementation of PRV in JSlice. The authors mitigated this by using the same implementation to gather results for the DART-like and eXpress-like approaches.
- Main threat to external validity: the generalization of the results. The limited choice and number of subjects does not suggest generalizability; the subjects serve mainly as a comparison to related work and give an idea of the practicability of PRV.
Related Works
- Regression Verification (RV): based on semantic equivalence; time-consuming; no intermediate guarantees
- Differential Symbolic Execution (DSE): based on symbolic summaries; less scalable
- Regression Test Generation (RTG): constructs sample inputs that can expose software regression
Discussion & Conclusion
- Introduces the differential partitions technique
- Enables partial verification
- Retains regression guarantees
- Detects more regression errors
Thank you.
Questions?