
System/Software Testing

• Error detection and removal
• Determine the level of reliability
• Well-planned procedure
• Done by an independent quality assurance group (except for unit testing)

Test Strategy

• UNIT TESTING (Module testing)
– tools: debuggers, tracers
– done by programmers (a minimal sketch follows this slide)
• Why unit testing?
– Reduces the complexity of overall testing
– Makes it easier to pinpoint and correct faults
– Allows parallelism in testing activities

• INTEGRATION TESTING
– communication between modules
– start with one module, then add incrementally
• SYSTEM TESTING
– manual procedures, restart and recovery, user interface
– real data is used
– users involved
• ACCEPTANCE TESTING
– user-prepared test data
– Verification Testing, Validation Testing, Audit Testing
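To make the unit-testing level concrete, here is a minimal sketch of a unit test; it assumes JUnit 4 and the MyGregorianCalendar.isLeapYear method shown later in these slides, and the test names are illustrative only.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Minimal unit-test sketch for a single module (assumes JUnit 4).
public class IsLeapYearUnitTest {

    @Test
    public void yearDivisibleByFourIsLeap() {
        assertTrue(MyGregorianCalendar.isLeapYear(2004));
    }

    @Test
    public void ordinaryYearIsNotLeap() {
        assertFalse(MyGregorianCalendar.isLeapYear(2003));
    }
}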

Test Case

• Set of inputs and expected results that exercises a component with the purpose of causing failures

• Attributes of a Test Case:
– name: name of the test case

– location: full path of the executable

– input: Input data or commands

– oracle: Expected test results against which the output is compared

– log: output produced by the test
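These attributes map naturally onto a small data structure; the class below is a hypothetical sketch, with names and types chosen for illustration rather than taken from the slides.

// Hypothetical sketch of a test-case record mirroring the attributes above.
public class TestCase {
    private final String name;     // name of the test case
    private final String location; // full path of the executable under test
    private final String input;    // input data or commands
    private final String oracle;   // expected results the output is compared against
    private String log;            // output actually produced by the test run

    public TestCase(String name, String location, String input, String oracle) {
        this.name = name;
        this.location = location;
        this.input = input;
        this.oracle = oracle;
    }

    // Record the actual output and report whether it matches the oracle.
    public boolean evaluate(String actualOutput) {
        this.log = actualOutput;
        return oracle.equals(actualOutput);
    }

    public String getLog() {
        return log;
    }
}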

White Box and Black Box Testing

• White Box Testing
– knowing the internal workings, and exercising different parts
– test various paths through the software; if exhaustive testing is impossible, test high-risk paths
• Black Box Testing
– knowing the functions to be performed and testing whether these are performed properly
– correct outputs from inputs
– databases accessed/updated properly
– test cases designed from user requirements
– appropriate at the Integration, System and Acceptance testing levels

Black Box and White Box Testing

• Two types of tests:
– Black-box

– White-box

• Techniques:
– Equivalence testing (black-box)

– Boundary testing (black-box)

– Path testing (white-box)

– State based testing (white-box)

White Box Testing

• Another technique for keeping the number of test cases limited: use knowledge about the inner workings of the unit being tested
• Focus: thoroughness (coverage). Every statement in the component is executed at least once
• Four types of white box testing:

– Statement testing

– Loop testing

– Path testing

– Branch testing

White-box Testing (contd.)

• Statement Testing (Algebraic Testing): test single statements (choice of operators in polynomials etc.)

• Loop Testing: (see the sketch after this slide)
– Cause execution of the loop to be skipped completely (Exception: Repeat Loops)
– Loop to be executed exactly once
– Loop to be executed more than once
• Path testing:
– Make sure all paths in the program are executed

• Branch Testing (Conditional testing): Make sure that each possible outcome from a condition is tested at least once.
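To illustrate the loop-testing cases above, here is a hedged sketch; the sumOfFirst method and the use of JUnit 4 are assumptions added for illustration, not part of the slides.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical method with a single loop, exercised by the three loop-testing cases.
public class LoopTestingExample {

    // Sums the integers 1..n; the loop body runs n times.
    static int sumOfFirst(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    @Test
    public void loopSkippedCompletely() {      // zero iterations
        assertEquals(0, sumOfFirst(0));
    }

    @Test
    public void loopExecutedExactlyOnce() {    // one iteration
        assertEquals(1, sumOfFirst(1));
    }

    @Test
    public void loopExecutedMoreThanOnce() {   // many iterations
        assertEquals(15, sumOfFirst(5));
    }
}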

White-box Testing (contd.)

if (answer) {
    out.println("Yes");
} else {
    out.println("No");
}

Test cases: 1) answer == true; 2) answer == false
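A runnable version of this branch-testing example might look as follows; the answerToText wrapper and the use of JUnit 4 are assumptions added for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Branch testing: one test case per outcome of the condition.
public class BranchTestingExample {

    // Hypothetical wrapper around the if/else shown on the slide.
    static String answerToText(boolean answer) {
        if (answer) {
            return "Yes";
        } else {
            return "No";
        }
    }

    @Test
    public void trueBranch() {
        assertEquals("Yes", answerToText(true));   // covers the "then" branch
    }

    @Test
    public void falseBranch() {
        assertEquals("No", answerToText(false));   // covers the "else" branch
    }
}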

Path Testing

• White-box testing technique
– Identifies faults in the implementation of the component

– Assumption: by exercising all possible paths through the code at least once, most faults will trigger failures

– Requires knowledge of source code and data structures (whitebox - can look inside!)

• Starting point of path testing: the flow graph
– Nodes represent executable blocks; associations (edges) represent flow of control
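As a small illustration of how a flow graph drives path selection, the hypothetical method below has two decision nodes, so its flow graph contains four paths, and each path gets one test input (a sketch only, not from the slides).

// Hypothetical example: two decisions => four paths through the flow graph.
public class PathTestingExample {

    static String classify(int x, int y) {
        String result;
        if (x > 0) {            // decision 1
            result = "x-positive";
        } else {
            result = "x-nonpositive";
        }
        if (y > 0) {            // decision 2
            result += ",y-positive";
        } else {
            result += ",y-nonpositive";
        }
        return result;
    }

    // One test input per path:
    //   (1, 1)   -> decision 1 true,  decision 2 true
    //   (1, -1)  -> decision 1 true,  decision 2 false
    //   (-1, 1)  -> decision 1 false, decision 2 true
    //   (-1, -1) -> decision 1 false, decision 2 false
}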

Black box Testing

• Focus: I/O behavior. If for any given input we can predict the output, the module passes the test.
– Almost always impossible to generate all possible inputs ("test cases")
• But the number of test cases can get very large!
• Goal: reduce the number of test cases by equivalence partitioning - equivalence testing!

Equivalence testing

• Black box testing technique
• Minimizes the number of test cases
– Possible inputs are partitioned into equivalence classes and a test case is developed for each equivalence class

– Assumption: systems usually behave similarly for all members of a class

• Two steps:
– Identification of equivalence classes
– Selection of test inputs

Equivalence Testing

• Determining equivalence classes:
– Coverage: every possible input belongs to one of the equivalence classes

– Disjointedness: No input belongs to more than one equivalence class

– Representation: If the execution demonstrates an erroneous state when one member of the equivalence class is used as input, then the same erroneous state can be detected using another member of the same class

Equivalence Testing

• What should the equivalence classes be?
Three classes for the month parameter:

- months with 31 days (1, 3, 5, 7, 8, 10, 12)

- months with 30 days (4, 6, 9, 11)

- February (28 or 29 days)

Two classes for the year parameter:

- leap year

- non leap year
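One representative input per class is enough. Below is a sketch of the resulting test cases, assuming JUnit 4 and the getNumDaysInMonth method shown later on the Flow Graph slide; against that implementation (which returns 32 for the 31-day months) the first test would fail, which is exactly the kind of fault equivalence testing is meant to expose.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// One representative per equivalence class (month x year).
public class EquivalenceTestingExample {

    @Test
    public void monthWith31Days() throws Exception {
        assertEquals(31, MyGregorianCalendar.getNumDaysInMonth(7, 2001));
    }

    @Test
    public void monthWith30Days() throws Exception {
        assertEquals(30, MyGregorianCalendar.getNumDaysInMonth(6, 2001));
    }

    @Test
    public void februaryInLeapYear() throws Exception {
        assertEquals(29, MyGregorianCalendar.getNumDaysInMonth(2, 2004));
    }

    @Test
    public void februaryInNonLeapYear() throws Exception {
        assertEquals(28, MyGregorianCalendar.getNumDaysInMonth(2, 2003));
    }
}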

Equivalence Testing

• Testing equivalence cases
• Two inputs are in the same equivalence class if they are handled similarly by the system
– e.g. a data field with valid values in 1-50
– so 20, 38, 1, 47 belong to the same equivalence class
– no need to test multiple values from the same equivalence class
– bounds testing
• e.g. test 38, then the end points 1 and 50
– test valid and invalid equivalence classes
– reduces the number of test cases required

Example: 3 inputs

I1 has 10 equivalence classes
I2 has 10 equivalence classes
I3 has 10 equivalence classes

Total test cases required: 10 x 10 x 10 = 1000 test cases.

Boundary Testing

• Special case of equivalence testing

• Focuses on the conditions at the boundary of the equivalence classes

• Must select input from the edges of the equivalence classes

• Example: the month of February represents several boundary cases
– Years that are multiples of 4 are leap years
– Years that are multiples of 100 are not, unless they are also multiples of 400
• 2000 was a leap year, but 1900 was not!

– 1900 and 2000 are both good boundary cases that should be tested!
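A sketch of these boundary cases as tests, assuming JUnit 4 and the getNumDaysInMonth method shown on the next slide; its simplified isLeapYear (which only checks divisibility by 4) would make the 1900 case fail, which is exactly why this boundary deserves a test.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Boundary cases around the leap-year rules for February.
public class BoundaryTestingExample {

    @Test
    public void year2000IsALeapYear() throws Exception {
        assertEquals(29, MyGregorianCalendar.getNumDaysInMonth(2, 2000));
    }

    @Test
    public void year1900IsNotALeapYear() throws Exception {
        assertEquals(28, MyGregorianCalendar.getNumDaysInMonth(2, 1900));
    }
}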

Flow Graph

public class MonthOutOfBounds extends Exception { /* ... */ }
public class YearOutOfBounds extends Exception { /* ... */ }

class MyGregorianCalendar {
    public static boolean isLeapYear(int year) {
        boolean leap;
        if ((year % 4) == 0) { leap = true; } else { leap = false; }
        return leap;
    }

    public static int getNumDaysInMonth(int month, int year)
            throws MonthOutOfBounds, YearOutOfBounds {
        int numDays;
        if (year < 1) throw new YearOutOfBounds(year);
        if (month == 1 || month == 3 || month == 5 || month == 7 || month == 10 ||
                month == 12) {
            numDays = 32;
        } else if (month == 4 || month == 6 || month == 9 || month == 11) {
            numDays = 30;
        } else if (month == 2) {
            if (isLeapYear(year)) { numDays = 29; } else { numDays = 28; }
        } else {
            throw new MonthOutOfBounds(month);
        }
        return numDays;
    }
    // ...
}

(Note: the 32-day count and the missing month 8 in the first condition are faults in this example implementation; the path and equivalence tests discussed above are precisely the kind of tests that would expose them.)

Comparison of White and Black-box testing

• White-box Testing:
– Potentially infinite number of paths have to be tested

– White-box testing often tests what is done, instead of what should be done

– Cannot detect missing use cases

• Black-box Testing:
– Potential combinatorial explosion of test cases (valid and invalid data)

– Often not clear whether the selected test cases uncover a particular problem

– Does not discover extraneous use cases ("features" :) )

Comparison of White and Black-box testing

• Both types of testing are needed
• White-box testing and black-box testing are the extreme ends of a testing continuum.
• Any choice of test cases lies in between and depends on the following:
– Number of possible logical paths

– Nature of input data

– Amount of computation

– Complexity of algorithms and data structures

Integration Testing:

– Groups of subsystems and eventually the entire system

– Carried out by developers
– Goal: test the interfaces among the subsystems

System Testing: tests all the components together as a single system

• Recovery Testing
– forces software failure and verifies complete recovery
• Security Testing
– verifies that proper controls have been designed
• Stress Testing
– abnormal resource demands (frequency, volume, etc.)

System Testing

Impact of requirements on system testing:
– The more explicit the requirements, the easier they are to test.

– Quality of use cases determines the ease of functional testing

– Quality of subsystem decomposition determines the ease of structure testing

– Quality of nonfunctional requirements and constraints determines the ease of performance tests.

Acceptance Testing

• Alpha Testing (Verification Testing)
– simulated data, in a lab setting that mimics the real-world operating environment
– systems professionals present as observers; they record errors and usage problems
• Beta Testing (Validation Testing)
– live environment, using real data

– no systems professionals present
– performance (throughput, response time)
– peak workload performance, human factors test, methods and procedures, backup and recovery, audit test

