CSE 7314 Software Testing and Reliability Robert Oshana Lectures 5 - 8


CSE 7314 Software Testing and Reliability

Robert Oshana

Lectures 5 - 8

oshana@airmail.net

Chapter 3

Master test planning

Introduction

• Test planning is key to successful software testing

• Often omitted due to time constraints, lack of training, or cultural bias

• A test schedule is not a test plan!
• Testing is usually measured by the number of test cases run

Levels of test planning

• Master test plan: orchestrates testing at all levels
– Unit
– Integration
– System
– Acceptance
– Others: alpha, beta, customer acceptance, build, string, development

Levels of test planning

Importance of test planning

• Goal is to deal with the important issues of testing
– Strategy
– Resource utilization
– Responsibilities
– Risks
– Priorities

• Process is more important than the document

Importance of test planning

Audience analysis

• The audience for a unit test plan is very different from the audience for a system test plan

• Different audiences have different tolerances for what they will read

Activity timing

• Test planning should be started as soon as possible
– At the same time as the requirements spec and project plan are being developed

• Could have significant impact on the project plan

• Use TBDs!

Timing of test planning

Standard templates

• Important to have a template

• IEEE Std 829-1998, IEEE Standard for Software Test Documentation

• Customize as necessary

• Strive to improve usability over time

IEEE test plan template

• Test plan identifier
• Table of contents
• References
• Glossary
• Introduction
• Test items
• Software risk issues
• Features to be tested
• Features not to be tested
• Approach
• Item pass/fail criteria
• Suspension criteria and resumption requirements
• Test deliverables
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing, training, schedule, risks

1.0 Test Plan Identifier

• Keep track of most current version

• Use configuration management (CM)

• Must be kept up to date !

• No “post-implementation” test planning!

2.0 Table of Contents

• List each topic in the plan

• References

• Glossaries

• Appendices

• Reader can use to quickly review topics of interest

Introduction or Scope

• Scope of the project

• Scope of the plan

• A master test plan may cover the entire project (e.g., an embedded system), or a project may have many MTPs

Scope of test and evaluation plans

Test items

• Describes programmatically what should be tested

• Oriented to the level of the test plan

• Must reference requirements spec, design spec, user’s guide, operations guide, etc

Risk issues section

• Used to determine what the primary focus of testing should be

• Hard to test everything in a given release

• Software risks that drive testing
– Interfaces to other systems
– Modules with a history of defects
– Security, performance, and reliability software
– Features difficult to change or test

Features to be tested

• What is to be tested from the customer's point of view (as opposed to test items, which take the developer's viewpoint)

• May help determine which features can be removed

Prioritized list with cut line

Features not to be tested

• Some features do not need to be tested
– Not changed
– Not available for use

• Used to reduce risk by raising awareness
– Can help you obtain additional resources

• May grow if project falls behind

Approach/strategy section

• Description of how testing will be performed (approach)

• Explain any issues that have a major impact on the success of testing and the project (strategy)

• Include entry and exit criteria for each level of testing
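A minimal sketch of how entry criteria for a test level might be recorded and checked before testing begins; the criteria names below are hypothetical examples, not taken from any particular plan.

```python
# Hypothetical entry criteria for system test, evaluated before testing begins.
entry_criteria = {
    "all unit and integration tests pass": True,
    "build deployed to the system test environment": True,
    "no open blocker defects": False,
}

unmet = [name for name, met in entry_criteria.items() if not met]
if unmet:
    print("System test entry criteria not met:")
    for name in unmet:
        print(" -", name)
else:
    print("Entry criteria satisfied; system test may begin")
```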

Influences on strategy decisions

Methodology decisions

• “Off the shelf” methodology or create your own
– How many testers and when?
– How many beta sites?
– Testing techniques?
– Testing levels?

• The test environment becomes more realistic at higher test levels

Test level decisions

Typical test levels

[Flowchart: fully specified and compiled code feeds static analysis (code inspection, correctness verification, tools such as lint); if unit testing is chosen, unit test (code coverage, oracle compare) is followed by run-time code analysis (leak detectors, instrumentation); operational profiles and field data drive statistical testing with usage models; usage model creation draws on function-theoretic sequence enumeration, a testing oracle and test grammar, and function mapping rules.]

Model for testing

Automating the Testing Process

[Diagram: a software test station in which test scripting software, driven by test sequences, crafted test cases, and a random test case generator built from usage models (via a modeling tool), controls and sequences data and control inputs to the software under test; output data/responses are checked by test results verification against expected results from the oracle, under test station control software, with results reviewed by the certification team.]

Resources

• The best laid plans can be ruined
– Development running late (testers wait)
– Ship date moved forward

• Testing schedule should contain contingencies

• Finding testing resources is a strategy decision

• Can also become a political issue

Test coverage decisions

• Several types of coverage
– Code coverage
– Requirements coverage
– Design and interfaces
– Model coverage
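Code coverage is normally gathered by a tool (coverage.py, gcov), while requirements coverage can be tracked with a simple traceability matrix. A minimal sketch, with hypothetical requirement and test case IDs:

```python
# Map each requirement ID to the test cases that exercise it (IDs are hypothetical).
traceability = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],  # not yet covered by any test case
}

def requirements_coverage(matrix):
    """Return the fraction of requirements covered and the uncovered IDs."""
    uncovered = sorted(req for req, tests in matrix.items() if not tests)
    covered = len(matrix) - len(uncovered)
    return covered / len(matrix), uncovered

fraction, gaps = requirements_coverage(traceability)
print(f"{fraction:.0%} of requirements covered; gaps: {gaps}")
```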

Walkthroughs and inspections

• Reviews of requirements, design, code, etc. are an important verification technique

• Complementary activities to testing

• Walkthrough: a peer review

• Inspection: a formal evaluation technique

Software evaluation process

Walkthroughs vs inspections

• Participants
– Walkthroughs: peer(s) led by the author
– Inspections: peers in designated roles
• Rigor
– Walkthroughs: informal to formal
– Inspections: formal
• Training required
– Walkthroughs: none, informal, or structured
– Inspections: structured, preferably by teams
• Purpose
– Walkthroughs: judge quality, find defects, training
– Inspections: measure/improve quality of the product and process
• Effectiveness
– Walkthroughs: low to medium
– Inspections: low to very high, depending on training and commitment

Configuration management

• Describes how CM should be handled during testing
– Change management
– Process for reviewing, prioritizing, fixing, and re-testing bugs

• Tradeoff between fixing too many bugs and freezing code too early

• Use a change control board (CCB)

Collection and validation of metrics

• Metrics collection and validation can be a significant overhead

• What metrics to collect?

• What will they be used for?

• How will they be validated?

• Need a way to measure testing status, effectiveness, quality, etc
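One possible way to compute a few such status metrics from raw counts; the metric names and example numbers are illustrative assumptions, not prescribed by any plan.

```python
from dataclasses import dataclass

@dataclass
class TestStatus:
    planned: int       # test cases planned for this level
    executed: int      # test cases run so far
    passed: int        # test cases that passed
    defects_open: int  # defects still open against this release

def summarize(status: TestStatus) -> dict:
    """Compute a few common test-status metrics."""
    return {
        "execution_rate": status.executed / status.planned if status.planned else 0.0,
        "pass_rate": status.passed / status.executed if status.executed else 0.0,
        "open_defects": status.defects_open,
    }

print(summarize(TestStatus(planned=200, executed=150, passed=138, defects_open=9)))
```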

Tools and automation

• Testing tools can be a big advantage but, if not used correctly, can also be a disadvantage

• Automated testing may take more time than manual testing
• Regression testing may take less time
• Use of testing tools should be well planned (plan for training and integration)

Changes to the test plan

• Include all the key stakeholders in development and review cycles

• Small changes may not have to go through approval cycle

• How often to update (weekly, monthly)

• How should the plan be reviewed?

Meetings and communication

• Standard meetings should be described here
– CCB
– Presentations to users and management

• Status reporting guidelines
• Chains of command for conflict resolution

Other strategy issues

• Multiple production environments

• Beta testing

• Test environment setup and maintenance

• Use of contractual support

• Unknown quality of software

• Feature creep

Item pass/fail criteria

• Each item has to have an expected result
– Test cases passed and failed
– Number, type, severity, and location of bugs
– Usability
– Reliability and stability

• Not all test cases are created equal

Suspension criteria and resumption requirements

• Conditions that warrant temporary suspension of testing

• Sometimes continuing on can be wasted effort

• Metrics can be used to flag these conditions

• Testing may be halted if a certain number of bugs are found
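A sketch of how such a metric-based suspension rule might look; the thresholds (five severe defects, a 70% pass rate) are invented for illustration and would come from the plan's suspension criteria.

```python
def should_suspend(open_severe_defects: int, pass_rate: float,
                   max_severe: int = 5, min_pass_rate: float = 0.70) -> bool:
    """Flag when testing should be temporarily suspended (thresholds are examples)."""
    return open_severe_defects > max_severe or pass_rate < min_pass_rate

if should_suspend(open_severe_defects=7, pass_rate=0.82):
    print("Suspend testing and return the build to development")
else:
    print("Continue testing")
```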

Simple Gantt chart

Test deliverables

• List of documents, tools, and other components to be developed and maintained
– Test plans
– Design specs
– Test cases
– Custom tools
– Defect reports
– Simulators

Testing tasks

• Identifies the set of tasks necessary to prepare for and perform testing

• Intertask dependencies and any special skills

Environmental needs

• Hardware, software, data, interfaces, facilities, publications, security access pertaining to the testing effort

• Attempt to configure testing environment similar to real world if possible

• Where will the data come from?

Environmental needs

Responsibilities

• Define major responsibilities
– Establishment of the test environment
– CM
– Unit testing

• Putting a name next to a task helps to get it done!

Responsibilities matrix

Staffing and training needs

• Number and type of people required depend on the scope

• Various training needs
– Tools
– Methodologies
– Management systems (defects)
– Basic business knowledge

Schedule

• Testing schedule should be built around milestones in the project plan

• Milestones built around major events

• Initial generic schedule is useful (no dates)

• Plan for risks and contingencies

• Record the schedule (audit trail)

Risks and contingencies

• Planning risks
– Unrealistic delivery dates
– Staff availability
– Budget
– Tool inventory
– Training needs
– Scope of testing
– Usage assumptions
– Feature creep
– Poor quality software

Risks and contingencies

• Contingencies
– Reducing the scope of the application
– Delaying implementations
– Adding resources
– Reducing quality processes

Approvals

• Should be the person who can declare the software ready for the next stage
– Unit test plan (developer)
– Acceptance test plan (customer)

• Should be technical or business experts (not managers)

CSE 7314 Software Testing and Reliability

Robert Oshana

End of Lecture

oshana@airmail.net