
SOFTWARE TESTING

Description:
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. This presentation is intended to help the reader learn about software testing.
Transcript
Page 1: SOFTWARE TESTING

Software Testing

Submitted by: C. PRIYANKA KARANCY, M.TECH (CCE), REG. NO. PR13CS2013.

Page 2: SOFTWARE TESTING

INTRODUCTION TO TESTING AS AN ENGINEERING ACTIVITY

UNIT-I

Page 3: SOFTWARE TESTING

Software development as an engineering discipline implies the following:
The development process is well understood.
Projects are planned.
Life cycle models are defined and adhered to.
Standards are in place for product and process.
Measurements are employed to evaluate product and process quality.
Components are reused.
Validation and verification processes play a key role in quality determination.
Engineers have proper education, training, and certification.

1.1 The Evolving Profession of Software Engineering

Page 4: SOFTWARE TESTING

Type | Purposes | Activities | Roles
1 | Test Products | Test Development, Test Execution | Testers (Extension of Development)
2 | Measure Products | Test Oversight, Reporting Results | Measurers (Quality Hurdle)
3 | Measure Processes | Metrics | Information Engineers
4 | Define Processes | Process and Risk Management | Quality and Process Engineers
5 | Guidance Resource | Quality Reference | Quality Engineers

1.2 The Role of Process in Software Quality

Page 5: SOFTWARE TESTING

Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements.

Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

1.3 Testing as a Process

Page 6: SOFTWARE TESTING

Software Development Process

Page 7: SOFTWARE TESTING

Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.

Debugging, or fault localization, is the process of locating the fault or defect, repairing the code, and retesting the code.

Page 8: SOFTWARE TESTING

The benefits of test process improvement are the following:
smarter testers
higher-quality software
the ability to meet budget and scheduling goals
improved planning
the ability to meet quantifiable testing goals

1.4 Testing Maturity Model

Page 9: SOFTWARE TESTING

Maturity goals
Each maturity level, except level 1, contains certain maturity goals. For an organization to reach a given level, the corresponding maturity goals must be met by the organization.

Maturity subgoals
Maturity goals are supported by maturity subgoals.

ATRs (Activities, Tasks, and Responsibilities)
Maturity subgoals are achieved by means of ATRs. ATRs address issues concerning the implementation of activities and tasks. ATRs also address how an organization can adapt its practices so that it can move in line with the TMM model. ATRs are refined into “views” from the perspectives of three groups: managers; developers and test engineers; customers (users/clients).

Page 10: SOFTWARE TESTING

1.5 TMM Levels

Page 11: SOFTWARE TESTING

Level 1 – Initial
There are no maturity goals to be met at this level.
Testing begins after code is written.
An organization performs testing to demonstrate that the system works.
No serious effort is made to track the progress of testing.
Test cases are designed and executed in an ad hoc manner.
In summary, testing is not viewed as a critical, distinct phase in software development.

Page 12: SOFTWARE TESTING

Level 2 – Phase Definition: The maturity goals are as follows:

Develop testing and debugging goals.
Some concrete maturity subgoals that can support this goal are as follows:
Organizations form committees on testing and debugging.
The committees develop and document testing and debugging goals.

Initiate a test planning process. (Identify test objectives. Analyze risks. Devise strategies. Develop test specifications. Allocate resources.)
Some concrete maturity subgoals that can support this goal are as follows:
Assign the task of test planning to a committee.
The committee develops a test plan template.
Proper tools are used to create and manage test plans.
Provisions are put in place so that customer needs constitute a part of the test plan.

Institutionalize basic testing techniques and methods.
The following concrete subgoals support this maturity goal:
An expert group recommends a set of basic testing techniques and methods.
Management establishes policies to execute the recommendations.

Page 13: SOFTWARE TESTING

Level 3 – Integration: The maturity goals are as follows:

Establish a software test group.
Concrete subgoals to support this goal are:
An organization-wide test group is formed with leadership, support, and funding.
The test group is involved in all stages of software development.
Trained and motivated test engineers are assigned to the group.
The test group communicates with the customers.

Establish a technical training program.

Integrate testing into the software life cycle.
Concrete subgoals to support this goal are:
The test phase is partitioned into several activities: unit, integration, system, and acceptance testing.
Follow the V-model.

Control and monitor the testing process.
Concrete subgoals to support this goal are:
Develop policies and mechanisms to monitor and control test projects.
Define a set of metrics related to the test project.
Be prepared with a contingency plan.

Page 14: SOFTWARE TESTING

Level 4 – Management and Measurement: The maturity goals are as follows:

Establish an organization-wide review program.
Maturity subgoals to support this goal are as follows:
Management develops review policies.
The test group develops goals, plans, procedures, and recording mechanisms for carrying out reviews.
Members of the test group are trained to be effective reviewers.

Establish a test measurement program.
Maturity subgoals to support this goal are as follows:
Test metrics should be identified along with their goals.
A test measurement plan is developed for data collection and analysis.
An action plan should be developed to achieve process improvement.

Evaluate software quality.
Maturity subgoals to support this goal are as follows:
The organization defines quality attributes and quality goals for products.
Management develops policies and mechanisms to collect test metrics to support the quality goals.

Page 15: SOFTWARE TESTING

Level 5 – Optimization, Defect Prevention, and Quality Control: The maturity goals are as follows:

Application of process data for defect prevention.
Maturity subgoals to support this goal are as follows:
Establish a defect prevention team.
Document defects that have been identified and removed.
Each defect is analyzed to get to its root cause.
Develop an action plan to eliminate recurrence of common defects.

Statistical quality control.
Maturity subgoals to support this goal are as follows:
Establish high-level measurable quality goals (e.g., test case execution rate, defect arrival rate, …).
Ensure that the new quality goals form a part of the test plan.
The test group is trained in statistical testing and analysis methods.

Page 16: SOFTWARE TESTING

2.1 Basic Definitions

Errors: An error is a mistake, misconception, or misunderstanding on the part of a software developer.

Faults (Defects): A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification.

Failures: A failure is the inability of a software system or component to perform its required functions within specified performance requirements.

CHAPTER-2
TESTING FUNDAMENTALS
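As a hedged illustration of these three terms (the function and scenario below are hypothetical, not from the source), a developer's error produces a fault in the code, and executing the faulty code produces an observable failure:

```python
# A small, hypothetical illustration of the error -> fault -> failure chain.
def is_valid_percentage(value):
    """Intended specification: accept integers from 0 to 100 inclusive."""
    # Fault (defect): the developer's error (misreading the boundary rule)
    # led to a strict comparison, so the legal value 100 is wrongly rejected.
    return 0 <= value < 100


if __name__ == "__main__":
    # Failure: the observable deviation from required behaviour at run time.
    result = is_valid_percentage(100)
    print("expected True, got", result)   # prints: expected True, got False
```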

Page 17: SOFTWARE TESTING

A test case in a practical sense is a test-related item which contains the following information:

1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.

2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.

3. Expected outputs. These are the specified results to be produced by the code under test.

2.2 Test Case
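A minimal sketch of how such a test case might be recorded; the TestCase record and the login scenario are hypothetical illustrations of the three parts listed above, not a prescribed format:

```python
# A hypothetical test-case record with the three parts named above.
from dataclasses import dataclass


@dataclass
class TestCase:
    test_id: str
    inputs: dict                 # 1. test inputs received from an external source
    execution_conditions: dict   # 2. required environment/state for the run
    expected_output: object      # 3. specified result for the code under test


tc_1 = TestCase(
    test_id="TC-001",
    inputs={"username": "alice", "password": "s3cret"},
    execution_conditions={"database_state": "one registered user 'alice'"},
    expected_output="login accepted",
)
```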

Page 18: SOFTWARE TESTING

Test: A test is a group of related test cases, or a group of related test cases and test procedures.

Test Oracle: A test oracle is a document, or a piece of software, that allows testers to determine whether a test has been passed or failed.

Test Bed: A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.

Page 19: SOFTWARE TESTING

1. Quality relates to the degree to which a system, system component, or process meets specified requirements.

2. Quality relates to the degree to which a system, system component, or process meets customer or user needs, or expectations.

Metric: A metric is a quantitative measure of the degree to

which a system, system component, or process possesses a given attribute.

2.3 Software Quality

Page 20: SOFTWARE TESTING

The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so that the resulting software conforms to established technical requirements.

REVIEW: A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts.

2.4 Software Quality Assurance Group

Page 21: SOFTWARE TESTING

Principle 1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.

Principle 2. When the test objective is to detect defects, a good test case is one that has a high probability of revealing a yet-undetected defect.

2.5 Software Testing Principles

Page 22: SOFTWARE TESTING

Principle 3. Test results should be inspected meticulously.
Principle 4. A test case must contain the expected output or result.
Principle 5. Test cases should be developed for both valid and invalid input conditions.
Principle 6. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.
Principle 7. Testing should be carried out by a group that is independent of the development group.
Principle 8. Tests must be repeatable and reusable.
Principle 9. Testing should be planned.
Principle 10. Testing activities should be integrated into the software life cycle.
Principle 11. Testing is a creative and challenging task.

Page 23: SOFTWARE TESTING

This chapter examines the term defect and its relationship to the terms error and failure in the context of the software development domain.

Sources of defects are Education, Communication, Oversight, Transcription, Process.

CHAPTER-3
3.1 DEFECTS, HYPOTHESES, AND TESTS

Page 24: SOFTWARE TESTING

3.2 Defect Classes, the Defect Repository, and Test Design

Page 25: SOFTWARE TESTING

Defect classes and the defect repository.

1. Functional Description Defects
2. Feature Defects
Features may be described as distinguishing characteristics of a software component or system.
3. Feature Interaction Defects
4. Interface Description Defects

Page 26: SOFTWARE TESTING

Design defects occur when system components, interactions between system components, or interactions between the components and outside software/hardware or users are incorrectly designed.

This covers defects in the design of algorithms, control, logic, data elements, module interface descriptions, and external software/hardware/user interface descriptions.

3.3 Design Defects

Page 27: SOFTWARE TESTING

Algorithmic and Processing Defects
Control, Logic, and Sequence Defects
Data Defects
Module Interface Description Defects
Functional Description Defects
External Interface Description Defects

3.4 Design Defect Classes

Page 28: SOFTWARE TESTING

Coding defects are derived from errors in implementing the code. Coding defects classes are closely related to design defect classes especially if pseudo code has been used for detailed design.

Algorithmic and Processing Defects
Control, Logic, and Sequence Defects
Typographical Defects
Initialization Defects
Data-Flow Defects
Data Defects
Module Interface Defects
Code Documentation Defects
External Hardware/Software Interface Defects

3.5 Coding Defects

Page 29: SOFTWARE TESTING

Test Harness Defects
Test Case Design and Test Procedure Defects

3.6 Testing Defects

Page 30: SOFTWARE TESTING

The Smart Tester:
Software components have defects, no matter how well our defect prevention activities are implemented. Developers cannot prevent or eliminate all defects during development.

Test Case Design Strategies:
A smart tester who wants to maximize the use of time and resources knows that she needs to develop what we will call effective test cases for execution-based testing.

CHAPTER-4
Strategies and Methods for Test Case Design - I

Page 31: SOFTWARE TESTING

The black box testing approach uses only the inputs and outputs of the software as the basis for designing test cases.

Infinite time and resources are not available to exhaustively test all possible inputs.

The goal for the smart tester is to effectively use the resources available by developing a set of test cases that gives the maximum yield of defects for the time and effort spent.

4.1 Using the Black Box Approach to Test Case Design

Page 32: SOFTWARE TESTING

Each software module or system has an input domain from which test input data is selected. If a tester randomly selects inputs from the domain, this is called random testing.

Issues in random testing:
Are the selected values adequate to show that the module meets its specification when the tests are run?
Should additional or fewer values be used to make the most effective use of resources?
Are there any input values, other than those selected, more likely to reveal defects?
Should any values outside the valid domain be used as test inputs?

4.2 Random Testing
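A small sketch of random testing under these ideas, assuming a hypothetical abs_value() module whose input domain is the integers from -1000 to 1000 and whose oracle is Python's built-in abs():

```python
# Random testing sketch: inputs are drawn at random from the declared input
# domain and each actual result is compared against a simple oracle.
import random


def abs_value(x):
    """Hypothetical code under test."""
    return x if x >= 0 else -x


def run_random_tests(trials=100, seed=1):
    rng = random.Random(seed)            # fixed seed keeps the test repeatable
    for _ in range(trials):
        x = rng.randint(-1000, 1000)     # input domain chosen by the tester
        assert abs_value(x) == abs(x), f"failed for input {x}"


if __name__ == "__main__":
    run_random_tests()
    print("all random tests passed")
```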

Page 33: SOFTWARE TESTING

If a tester is viewing the software-under-test as a black box with well-defined inputs and outputs, a good approach to selecting test inputs is to use a method called equivalence class partitioning.

Equivalence class partitioning results in a partitioning of the input domain of the software under test. The technique can also be used to partition the output domain, but this is not a common usage.

4.3 Equivalence Class Partitioning

Page 34: SOFTWARE TESTING

1. It eliminates the need for exhaustive testing, which is not feasible.

2. It guides a tester in selecting a subset of test inputs with a high probability of detecting a defect.

3. It allows a tester to cover a larger domain of inputs/outputs with a smaller subset selected from an equivalence class.

4.4 Advantages

Page 35: SOFTWARE TESTING

1. ‘‘If an input condition for the software-under-test is specified as a range of values, select one valid equivalence class that covers the allowed range and two invalid equivalence classes, one outside each end of the range.’’

2. ‘‘If an input condition for the software-under-test is specified as a number of values, then select one valid equivalence class that includes the allowed number of values and two invalid equivalence classes that are outside each end of the allowed number.’’

4.5 List of Conditions

Page 36: SOFTWARE TESTING

3. ‘‘If an input condition for the software-under-test is specified as a set of valid input values, then select one valid equivalence class that contains all the members of the set and one invalid equivalence class for any value outside the set.’’

4. ‘‘If an input condition for the software-under-test is specified as a “must be” condition, select one valid equivalence class to represent the “must be” condition and one invalid class that does not include the “must be” condition.’’

5. ‘‘If the input specification or any other information leads to the belief that an element in an equivalence class is not handled in an identical way by the software-under-test, then the class should be further partitioned into smaller equivalence classes.’’
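A brief sketch applying rule 1 above to a hypothetical input specified as an integer age in the range 18 to 65 (the field, the range, and the accepts_age() function are assumptions for illustration): one valid class and two invalid classes are formed, and one representative value is drawn from each.

```python
# Equivalence class partitioning sketch for a hypothetical 18..65 range input.
equivalence_classes = {
    "valid_in_range": {"description": "18 <= age <= 65", "representative": 30},
    "invalid_below":  {"description": "age < 18",        "representative": 10},
    "invalid_above":  {"description": "age > 65",        "representative": 70},
}


def accepts_age(age):
    """Hypothetical code under test."""
    return 18 <= age <= 65


for name, ec in equivalence_classes.items():
    expected = name == "valid_in_range"
    assert accepts_age(ec["representative"]) == expected, name
```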

Page 37: SOFTWARE TESTING

Equivalence class partitioning gives the tester a useful tool with which to develop black box based-test cases for the software-under-test. The method requires that a tester has access to a specification of input/output behavior for the target software.

The test cases developed based on equivalence class partitioning can be strengthened by the use of another technique called boundary value analysis.

4.6 Boundary Value Analysis
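Continuing the hypothetical 18-to-65 age range from the earlier sketch, boundary value analysis picks test values on and immediately around each boundary rather than from the middle of a class:

```python
# Boundary value analysis sketch for the hypothetical 18..65 range.
def accepts_age(age):
    """Hypothetical code under test."""
    return 18 <= age <= 65


boundary_values = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for value, expected in boundary_values.items():
    assert accepts_age(value) == expected, f"boundary value {value}"
```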

Page 38: SOFTWARE TESTING

Cause-and-Effect Graphing
Cause-and-effect graphing is a technique that can be used to combine conditions and derive an effective set of test cases that may disclose inconsistencies in a specification.

State Transition Testing
State transition testing is useful for both procedural and object-oriented development. It is based on the concepts of states and finite-state machines, and allows the tester to view the developing software in terms of its states, transitions between states, and the inputs and events that trigger state changes.

4.7 Other Black Box Test Design Approaches
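A minimal state transition testing sketch; the login states, events, and transition table below are hypothetical, and one test case walks a path through the machine while checking the state reached after each event:

```python
# Hypothetical finite-state machine for state transition testing.
TRANSITIONS = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
}


def next_state(state, event):
    """Return the state reached from `state` when `event` occurs."""
    return TRANSITIONS[(state, event)]


# One state-transition test case: a path through the machine with the
# expected state after each event.
path = [
    ("login_fail", "logged_out"),
    ("login_ok",   "logged_in"),
    ("logout",     "logged_out"),
]
state = "logged_out"
for event, expected in path:
    state = next_state(state, event)
    assert state == expected, f"after {event} expected {expected}, got {state}"
```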

Page 39: SOFTWARE TESTING

Designing test cases using the error guessing approach is based on the tester’s/developer’s past experience with code similar to the code under test, and their intuition as to where defects may lurk in the code.

Code similarities may extend to the structure of the code, its domain, the design approach used, its complexity, and other factors.

4.8 Error Guessing

Page 40: SOFTWARE TESTING

As software development evolves into an engineering discipline, the reuse of software components will play an increasingly important role.

Reuse of components means that developers need not reinvent the wheel; instead they can reuse an existing software component with the required functionality.

Black Box Testing and Commercial Off-the-Shelf (COTS) Components

Page 41: SOFTWARE TESTING

Black box methods have ties to the other maturity goals at TMM level 2.

The defect/problem fix report should contain the following information:
• project identifier
• the problem/defect identifier
• testing phase that uncovered the defect
• a classification for the defect found
• a description of the repairs that were done
• the identification number(s) of the associated tests
• the date of repair
• the name of the repairer

4.9 Black Box Methods and TMM Level 2 Maturity Goals

Page 42: SOFTWARE TESTING

STRATEGIES AND METHODS FOR TEST CASE DESIGN - II

UNIT -II

Page 43: SOFTWARE TESTING

Testers need a framework for deciding which structural elements to select as the focus of testing, for choosing the appropriate test data, and for deciding when the testing effort is adequate, so that the process can be terminated with confidence that the software is working properly.

Such a framework exists in the form of test adequacy criteria.

CHAPTER-5
Test Adequacy Criteria

Page 44: SOFTWARE TESTING

The application scope of adequacy criteria also includes:
(i) helping testers to select properties of a program to focus on during test;
(ii) helping testers to select a test data set for a program based on the selected properties;
(iii) supporting testers with the development of quantitative objectives for testing;
(iv) indicating to testers whether or not testing can be stopped for that program.

A program is said to be adequately tested with respect to a given criterion if all of the target structural elements have been exercised according to the selected criterion.

Page 45: SOFTWARE TESTING

The application of coverage analysis is typically associated with the use of control and data flow models to represent program structural elements and data.

The logic elements most commonly considered for coverage are based on the flow of control in a unit of code.

5.1 Coverage and Control Flow Graphs

Page 46: SOFTWARE TESTING

Logic-based white box–based test design and use of test data adequacy/coverage concepts provide two major payoffs for the tester: (i) quantitative coverage goals can be proposed, and (ii) commercial tool support is readily available to facilitate the tester’s work.

The tester must decide, based on the type of code, the reliability requirements, and the resources available, which criterion to select, since the stronger the criterion selected, the more resources are usually required to satisfy it.

5.2 Covering Code Logic
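As a hedged illustration of such quantitative coverage goals, the hypothetical unit below contains two decisions; the two test inputs shown drive each decision to both its true and false outcomes, giving full branch (decision) coverage:

```python
# Hypothetical unit with two decisions; two inputs give 100% branch coverage.
def classify(x):
    if x < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity


assert classify(-3) == ("negative", "odd")      # decision 1 true,  decision 2 false
assert classify(4) == ("non-negative", "even")  # decision 1 false, decision 2 true
```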

Page 47: SOFTWARE TESTING

The cyclomatic complexity attribute is very useful to a tester.

The complexity value is usually calculated from the control flow graph G by the formula

V(G) = E - N + 2

where E is the number of edges in the control flow graph and N is the number of nodes.

5.3 Paths: Their Role in White Box–Based Test Design
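A worked example of the formula, assuming a hypothetical single-entry/single-exit unit with two back-to-back if/else decisions: its control flow graph has 7 nodes and 8 edges, so V(G) = 8 - 7 + 2 = 3, meaning three basis paths to cover with tests.

```python
# Cyclomatic complexity V(G) = E - N + 2 for a SESE control flow graph.
def cyclomatic_complexity(edges, nodes):
    """Return V(G) given edge and node counts of the control flow graph."""
    return edges - nodes + 2


# Hypothetical graph for a unit with two back-to-back if/else decisions.
assert cyclomatic_complexity(edges=8, nodes=7) == 3
```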

Page 48: SOFTWARE TESTING

In addition to the use of software logic and control structures to guide test data generation and to evaluate test completeness, there are alternative methods that focus on other characteristics of the code.

To satisfy the all def-use criterion the tester must identify and classify occurrences of all the variables in the software under test.

5.4 Additional White Box Test Design Approaches
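A small data-flow sketch (the sum_positive() unit is hypothetical): the definition (def) and use occurrences of the variable total are marked in comments, and two test inputs are chosen so that its def-use pairs are exercised.

```python
# Data-flow sketch: def and use occurrences of `total` in a hypothetical unit.
def sum_positive(values):
    total = 0                  # def 1 of total
    for v in values:
        if v > 0:
            total = total + v  # use of total (right side), def 2 (left side)
    return total               # use of total

# Covering the def-use pairs needs at least:
#   a list with no positive values -> pair (def 1, return use)
#   a list with a positive value   -> pairs involving def 2
assert sum_positive([]) == 0
assert sum_positive([-1, 2, 3]) == 5
```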

Page 49: SOFTWARE TESTING

Loops are among the most frequently used control structures. Experienced software engineers realize that many defects are associated with loop constructs.

Loop testing strategies focus on detecting common defects associated with these structures.

5.5 Loop Testing
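A loop testing sketch for a hypothetical unit with a single simple loop; the test values exercise the loop zero times, exactly once, twice, and a typical number of times (a maximum iteration count would also be tested if the specification stated one):

```python
# Loop testing sketch for a hypothetical unit containing one simple loop.
def total(values):
    result = 0
    for v in values:      # loop under test
        result += v
    return result


loop_test_cases = {
    "zero iterations": ([], 0),
    "one iteration":   ([4], 4),
    "two iterations":  ([2, 4], 6),
    "typical":         ([1, 2, 3, 4, 5], 15),
}

for name, (inputs, expected) in loop_test_cases.items():
    assert total(inputs) == expected, name
```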

Page 50: SOFTWARE TESTING

Mutation testing is another approach to test data generation that requires knowledge of code structure, but it is classified as a fault-based testing approach.

It considers the possible faults that could occur in a software component as the basis for test data generation and evaluation of testing effectiveness.

5.6 Mutation Testing
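A toy sketch of the idea, with a hypothetical unit and a single hand-written mutant (a real mutation tool would generate many such mutants automatically): the test data set is evaluated by whether it kills the mutant, i.e. makes the mutant's output differ from the original's.

```python
# Mutation testing sketch with one hand-written mutant of a hypothetical unit.
def max_of_two(a, b):            # original unit under test
    return a if a >= b else b


def mutant(a, b):                # mutant: `>=` changed to `<=`
    return a if a <= b else b


tests = [(3, 1, 3), (2, 5, 5)]   # (a, b, expected)

# The original passes every test; the mutant is killed if any test
# distinguishes its output from the expected result.
assert all(max_of_two(a, b) == expected for a, b, expected in tests)
killed = any(mutant(a, b) != expected for a, b, expected in tests)
assert killed, "test set is too weak: the mutant survives"
```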

Page 51: SOFTWARE TESTING

Mutation testing makes two major assumptions:

1. The competent programmer hypothesis. This states that a competent programmer writes programs that are nearly correct. Therefore, it is assumed that there are no major construction errors in the program; the code is correct except for simple errors.

2. The coupling effect. This effect relates to questions a tester might have about how well mutation testing can detect complex errors since the changes made to the code are very simple.

Page 52: SOFTWARE TESTING

Testers are often faced with the decision of which criterion to apply to a given item under test, given the nature of the item and the constraints of the test environment (time, costs, resources).

Testers can use the axioms to:
◦ Recognize both strong and weak adequacy criteria; a tester may decide to use a weak criterion, but should be aware of its weakness with respect to the properties described by the axioms;
◦ Focus attention on the properties that an effective test data adequacy criterion should exhibit;
◦ Select an appropriate criterion for the item under test;
◦ Stimulate thought for the development of new criteria; the axioms are the framework with which to evaluate these new criteria.

5.7 Evaluating Test Adequacy Criteria

Page 53: SOFTWARE TESTING

The axioms are based on the following set of assumptions:

(i) programs are written in a structured programming language;

(ii) programs are SESE (single entry/single exit);
(iii) all input statements appear at the beginning of the program;
(iv) all output statements appear at the end of the program.

Assumptions

Page 54: SOFTWARE TESTING

Applicability Property
Non-Exhaustive Applicability Property
Monotonicity Property
Inadequate Empty Set Property
Antiextensionality Property
General Multiple Change Property
Antidecomposition Property
Anticomposition Property
Complexity Property
Statement Coverage Property

Properties

Page 55: SOFTWARE TESTING

Use of different testing strategies and methods has the following benefits:

1. The tester is encouraged to view the developing software from several different views to generate the test data.

2. The tester must interact with other development personnel, such as requirements analysts and designers, to review their representations of the software.

3. The tester is better equipped to evaluate the quality of the testing effort (there are more tools and approaches available from the combination of strategies).

4. The tester is better able to contribute to organizational test process improvement efforts based on his/her knowledge of a variety of testing strategies.

5.8 White Box Testing Methods and the TMM

Page 56: SOFTWARE TESTING

Levels of testing include the different methodologies that can be used while conducting Software Testing.

They are:
◦ Unit Test
◦ Integration Test
◦ System Test
◦ Acceptance Test

CHAPTER-6
Levels of Testing

Page 57: SOFTWARE TESTING
Page 58: SOFTWARE TESTING

This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases.

Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.

The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.

6.1 Unit Test
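A minimal unit test sketch using Python's standard unittest module; the apply_discount() function and its rules are hypothetical stand-ins for a developer's assigned unit:

```python
# Unit test sketch with Python's unittest framework.
import unittest


def apply_discount(price, percent):
    """Return the price reduced by `percent`; reject invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)


if __name__ == "__main__":
    unittest.main()
```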

Page 59: SOFTWARE TESTING

Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing.

There is a limit to the number of scenarios and test data that the developer can use to verify the source code. So, after the developer has exhausted all options, there is no choice but to stop unit testing and merge the code segment with other units.

6.2 Limitation of unit testing

Page 60: SOFTWARE TESTING

The testing of combined parts of an application to determine whether they function correctly together is integration testing. There are two methods of doing integration testing: bottom-up integration testing and top-down integration testing.

The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter in customers' computers, systems and network.

6.3 Integration Test

Page 61: SOFTWARE TESTING

Integration Testing Methods:

1. Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.

2. Top-down integration: In this testing, the highest-level modules are tested first, and progressively lower-level modules are tested after that.
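A small top-down integration sketch (all names below are hypothetical): the higher-level OrderModule is tested first, while the lower-level pricing module it depends on is replaced by a stub until the real module is integrated.

```python
# Top-down integration sketch: a stub stands in for a lower-level module.
class PricingServiceStub:
    """Stub replacing a lower-level pricing module not yet integrated."""

    def unit_price(self, item_code):
        return {"A": 10.0, "B": 2.5}[item_code]


class OrderModule:
    """Higher-level module under test; depends on a pricing service."""

    def __init__(self, pricing_service):
        self.pricing = pricing_service

    def order_total(self, items):
        return sum(self.pricing.unit_price(code) * qty for code, qty in items)


orders = OrderModule(PricingServiceStub())
assert orders.order_total([("A", 2), ("B", 4)]) == 30.0
```

In bottom-up integration the roles reverse: the low-level modules are tested first, and temporary driver code exercises them until the higher-level modules exist.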

Page 62: SOFTWARE TESTING

This is the next level in the testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This type of testing is performed by a specialized testing team.

System testing is so important because of the following reasons:
System testing is the first step in the software development life cycle where the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical specifications.
The application is tested in an environment that is very close to the production environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business requirements as well as the application architecture.

6.4 System Testing

Page 63: SOFTWARE TESTING

Functional testing
Performance testing
Stress testing
Configuration testing
Security testing
Recovery testing

6.4.1 Types of System Testing

Page 64: SOFTWARE TESTING

This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.

6.5 Acceptance Testing

Page 65: SOFTWARE TESTING

This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing when combined are known as alpha testing. During this phase, the following will be tested in the application:

Spelling mistakes
Broken links
Cloudy directions
The application will be tested on machines with the lowest specification to test loading times and any latency problems.

6.6 Alpha testing

Page 66: SOFTWARE TESTING

Beta test is performed after Alpha testing has been successfully performed. In beta testing a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. In this phase the audience will be testing the following:

Users will install, run the application, and send their feedback to the project team.
Typographical errors, confusing application flow, and even crashes.
Getting the feedback, the project team can fix the problems before releasing the software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of your application will be.
Having a higher-quality application when you release it to the general public will increase customer satisfaction.

6.7 Beta Testing

Page 67: SOFTWARE TESTING

The development, documentation, and institutionalization of goals and related policies is important to an organization.

The goals/policies may be business-related, technical, or political in nature.

They are the basis for decision making; therefore setting goals and policies requires the participation and support of upper management.

CHAPTER-7
Testing Goals, Policies, Plans and Documentation

Page 68: SOFTWARE TESTING

1. Business goal: to increase market share by 10% in the next 2 years in the area of financial software.

2. Technical goal: to reduce defects by 2% per year over the next 3 years.

3. Business/technical goal: to reduce hotline calls by 5% over the next 2 years.

4. Political goal: to increase the number of women and minorities in high management positions by 15% in the next 3 years.

7.1 Testing Goals

Page 69: SOFTWARE TESTING

A plan is a document that provides a framework or approach for achieving a set of goals.

Test plans for software projects are very complex and detailed documents.

The planner usually includes the following essential high-level items.

Overall test objectives
What to test (scope of the tests)
Who will test
How to test
When to test
When to stop testing

7.2 Test Planning

Page 70: SOFTWARE TESTING

7.3 Test Plan Components

Page 71: SOFTWARE TESTING

Test Plan Attachments

Page 72: SOFTWARE TESTING

A Test Item Transmittal Report is not a component of the test plan, but it is necessary to locate and track the items that are submitted for test. Each Test Item Transmittal Report has a unique identifier.

It should contain the following information for each item that is tracked.

Version/revision number of the item;
Location of the item;
Persons responsible for the item (e.g., the developer);
References to item documentation and the test plan it is related to;
Status of the item;
Approvals: space for signatures of staff who approve the transmittal.

7.4 Locating Test Items

Page 73: SOFTWARE TESTING

The test plan and its attachments are test-related documents that are prepared prior to test execution. There are additional documents related to testing that are prepared during and after execution of the tests.

The test log should be prepared by the person executing the tests. It is a diary of the events that take place during the test.

Test Log Identifier: Each test log should have a unique identifier.

7.5 Reporting Test Results

Page 74: SOFTWARE TESTING

7.6 The Role of the Three Critical Groups in Testing Planning and Test Policy Development

Page 75: SOFTWARE TESTING

Working with management to develop testing and debugging policy and goals.

Participating in the teams that oversee policy compliance and change management.

Familiarizing themselves with the approved set of testing/debugging goals and policies, keeping up-to-date with revisions, and making suggestions for changes when appropriate.
When developing test plans, setting testing goals for each project at each level of test that reflect organizational testing goals and policies.
Carrying out testing activities that are in compliance with organizational policies.

7.7 Tasks of Developers

Page 76: SOFTWARE TESTING

THANK YOU

