NATIONAL SOFTWARE TESTING GUIDELINE
2019
National Software Testing Guideline
1 | Page
Table of Contents
TERMS AND DEFINITIONS ...................................................................................................................... 4
EXECUTIVE SUMMARY ......................................................................................................................... 19
1 INTRODUCTION ............................................................................................................................ 20
1.1 Authority ............................................................................................................................... 20
1.2 Scope ..................................................................................................................................... 20
1.3 Benefits of a National Software Testing Guideline ............................................................. 20
1.4 Objectives of The National Software Testing Guideline ..................................................... 21
2 THE SOFTWARE DEVELOPMENT LIFECYCLE (SDLC) ..................................................................... 21
2.1 The Software Testing Life Cycle ........................................................................................... 23
2.2 Software Testing Principles .................................................................................................. 24
2.3 Project Organisation Structure ............................................................................................ 24
2.4 Minimum Requirements to become a Software Test Professional .................................... 26
3 SOFTWARE TESTING GUIDELINE APPROACH ............................................................................... 26
3.1 Organisational Test Process ................................................................................................. 27
3.1.1 Organisational Test Specifications ....................................................................................... 27
3.1.1.2 Purpose................................................................................................................................. 28
3.1.2 Practical Guideline................................................................................................................ 29
3.2 Test Management Processes ............................................................................................... 30
3.2.2 Practical Guidelines .............................................................................................................. 31
3.3 Test Planning Process ........................................................................................................... 31
3.3.1 Test Planning Input Requirements ...................................................................................... 31
3.3.2 Practical Guideline................................................................................................................ 36
3.3.3 Information Items ................................................................................................................ 40
3.4 Test Monitoring and Control Process .................................................................................. 42
3.4.1 Overview ............................................................................................................................... 42
3.4.2 Purpose ................................................................................................................................. 43
3.4.3 Outcomes .............................................................................................................................. 43
3.4.4 Practical Guideline................................................................................................................ 43
3.4.5 Information Items ................................................................................................................ 44
3.5 Test Completion Process ...................................................................................................... 45
3.5.1 Overview ............................................................................................................................... 45
3.5.2 Purpose ................................................................................................................................. 45
3.5.3 Outcome ................................................................................................................................ 45
3.5.4 Activities and Tasks ............................................................................................................... 45
3.5.5 Practical Guideline ............................................................................................................... 45
3.5.6 Information Items................................................................................................................ 46
3.6 Dynamic Testing Process ...................................................................................................... 46
3.6.2 Test Design and Implementation Process ........................................................................... 47
3.6.3 Purpose ................................................................................................................................. 47
3.6.4 Practical Guidelines .............................................................................................................. 48
3.6.5 Information Items ................................................................................................................ 49
3.7 Test Environment Set-Up and Maintenance ....................................................................... 49
3.7.1 Overview................................................................................................................................ 49
3.7.2 Purpose .................................................................................................................................. 50
3.7.3 Outcomes............................................................................................................................... 50
3.7.4 Activities and tasks ................................................................................................................ 50
3.7.5 Practical Guideline ................................................................................................................ 50
3.7.6 Information Items ................................................................................................................. 50
3.8 Test Execution Process ......................................................................................................... 51
3.8.1 Overview ............................................................................................................................... 51
3.8.2 Purpose ................................................................................................................................. 51
3.8.3 Outcomes............................................................................................................................... 51
3.8.4 Activities and tasks ................................................................................................................ 51
3.8.5 Practical Guideline ................................................................................................................ 51
3.8.6 Information Items ................................................................................................................. 52
3.9 Test Incident Report Process ............................................................................................... 52
3.9.1 Overview................................................................................................................................ 52
3.9.2 Purpose .................................................................................................................................. 52
3.9.3 Outcomes............................................................................................................................... 52
3.9.4 Activities and tasks ................................................................................................................ 52
3.9.5 Practical Guideline ................................................................................................................ 53
3.10 Software Testing Practical Steps .......................................................................................... 53
4 LEVELS OF TESTING ....................................................................................................................... 58
4.1 Unit Testing........................................................................................................................... 59
4.1.1 Purpose ................................................................................................................................. 59
4.2 Integration Testing ............................................................................................................... 59
4.2.1 Purpose ................................................................................................................................. 59
4.3 System Testing ...................................................................................................................... 59
4.3.1 Purpose ................................................................................................................................. 60
4.4 Acceptance Testing ............................................................................................................... 60
4.4.1 Purpose ................................................................................................................................. 60
4.5 Software Test Automation Tools ......................................................................................... 61
5 TEST TECHNIQUES ........................................................................................................................ 62
5.1 Specification-Based Testing Techniques (Black-Box Testing Techniques) ......................... 62
5.2 Structure-Based Testing Techniques (White Box Testing Techniques) .............................. 62
5.2.1 White Box Testing Techniques ............................................................................................. 62
5.3 Experience-Based Testing Technique .................................................................................. 63
6 TEST DOCUMENTATION ............................................................................................................... 63
6.1 Overview ............................................................................................................................... 63
6.2 Organisational Test Policy Documentation ......................................................................... 64
6.2.1 Test Policy Document Template .......................................................................................... 64
6.3 Organisational Test Strategy Documentation ..................................................................... 65
6.3.1 Organisational Test Strategy Template ............................................................................... 65
6.4 Test Plan Documentation ..................................................................................................... 67
6.4.1 Test Plan Template ............................................................................................................... 67
6.5 Test Status Report Documentation ..................................................................................... 70
6.5.1 Test Status Report Template ............................................................................................... 70
6.6 Test Completion Report Documentation ............................................................................ 71
6.6.1 Test Completion Report Template....................................................................................... 71
6.7 Test Design Specification Documentation ........................................................................... 73
6.7.1 Test Design Specification Template ..................................................................................... 73
6.8 Test Case Specification Documentation .............................................................................. 74
6.8.1 Test Case Specification Templates ....................................................................................... 74
6.9 Test Procedure Specification Documentation ..................................................................... 75
6.9.1 Test Procedure Specification Template ............................................................................... 75
6.10 Test Data Requirements Documentation ............................................................................ 76
6.10.1 Test Data Requirements Template ...................................................................................... 76
6.11 Test Environment Requirements Documentation .............................................................. 76
6.11.1 Test Environment Requirements Template ........................................................................ 77
6.12 Test Execution Log Documentation ..................................................................................... 77
6.12.1 Test Execution Log Template ............................................................................................... 77
6.13 Incident Report Documentation .......................................................................................... 77
6.13.1 Incident Report Template .................................................................................................... 77
APPENDIX A ............................................................................................................................. 79
6.14 Software Testing Certifications ............................................................................................ 79
REFERENCES .......................................................................................................................................... 80
TERMS AND DEFINITIONS
Actual Results Set of behaviours or conditions of a test item, or set of
conditions of associated data or the test environment,
observed as a result of test execution
EXAMPLE: outputs to hardware, changes to data, reports,
and communication messages sent
BSI British Standards Institution, the national standards body
of the United Kingdom
CEO Chief Executive Officer, the most senior executive with
responsibility for running an organization
CFO Chief Financial Officer, a senior executive with
responsibility for the financial affairs of an organization
Compliance A general concept of conforming to a rule, standard, law,
or requirement such that the assessment of compliance
results in a binomial result stated as "compliant" or
"noncompliant"
(A Guide to the Project Management Body of Knowledge
(PMBOK® Guide) -- Fifth Edition)
EXAMPLE: comply with a regulation.
Conformance Fulfilment by a product, process or service of specified
requirements
Defect Management A test practice involving the investigation and
documentation of test incident reports and the retesting
of defects when required
Dynamic Testing (1) Testing that requires the execution of the test item
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.9)
(2) Testing that requires the execution of program code
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering-- Software testing--Part 2: Test processes, 4.4)
Exhaustive Testing A testing approach where every possible combination of
input values is executed (infeasible for all practical test
items)
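The infeasibility noted above can be made concrete with a back-of-the-envelope calculation. The input domains below are hypothetical, chosen only to illustrate how quickly the number of combinations grows for even a single small form:

```python
# Why exhaustive testing is infeasible: count the input combinations
# for a hypothetical login form with three fields.
from math import prod

# Assumed (hypothetical) input domain sizes, for illustration only.
domain_sizes = {
    "username": 36 ** 8,   # 8 alphanumeric characters
    "password": 94 ** 8,   # 8 printable ASCII characters
    "remember_me": 2,      # checkbox: on/off
}

combinations = prod(domain_sizes.values())
print(f"{combinations:.3e} input combinations")

# Even at one million test executions per second, testing every
# combination for this one form would take longer than the age
# of the universe.
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.3e} years at 10^6 tests/second")
```

This is why practical test design relies on techniques such as equivalence partitioning and boundary value analysis to select a small, representative subset of inputs.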
Expected Results Observable predicted behaviour of the test item under
specified conditions based on its specification or another
source
(ISO/IEC/IEEE 29119- 1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.15)
Experience-Based Testing A testing practice in which testing is based on the tester's
previous experience, such as their domain knowledge and
knowledge of particular software and systems, along with
metrics from previous projects
Feature Set (1) Collection of items which contain the test conditions of
the test item to be tested which can be collected from
risks, requirements, functions, models, etc.
(ISO/IEC/IEEE 29119- 1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions)
(2) Logical subset of the test item(s) that could be treated
independently of other feature sets in the subsequent test
design activities
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering-- Software testing --Part 2: Test processes,
4.10)
ICT Information and Communication Technology
(ISO/IEC/IEEE 23026:2015 Systems and software
engineering--Engineering and management of websites for
systems, software, and services information, 5)
IEC The International Electrotechnical Commission is a global
organization for the preparation and publication of
International Standards for all electrical, electronic and
related technologies.
IEEE The Institute of Electrical and Electronics Engineers is the
world's largest technical professional organization
dedicated to educational and technical advancement of
electrical and electronic engineering, telecommunications,
computer engineering and allied disciplines.
Incident Report Documentation of the occurrence, nature, and status of
an incident
(ISO/IEC/IEEE 29119- 1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.18)
Information Item Separate identifiable body of information that is
produced, stored, and delivered for human use
(ISO/IEC/IEEE 15289:2015 Systems and software
engineering--Content of life-cycle information products
(documentation). 5.11)
ISO The International Organization for Standardization (ISO) is
an international standard-setting body composed of
representatives from various national standards
organizations.
Lessons Learned The knowledge gained during a project which shows how
project events were addressed or should be addressed in
the future with the purpose of improving future
performance
(A Guide to the Project Management Body of Knowledge
(PMBOK ® Guide) -- Fifth Edition)
One-to-One Test Case Design
An approach to test case design, where one test case is
derived for each test coverage item
Organizational Test Specification
A document that provides information about testing for
an organization, i.e. information that is not project-specific
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.24)
Organizational Test Strategy Document that expresses the generic requirements for the
testing to be performed on all the projects run within the
organization, providing detail on how the testing is to be
performed
(ISO/IEC/IEEE 29119- 1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.25)
Product Risk Risk that a product could be defective in some specific
aspect of its function, quality, or structure
(ISO/IEC/IEEE 29119-3:2013 Software and systems
engineering--Software testing-- Part 3: Test
documentation, 4.8)
Project Risk Risk related to the management of a project
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.31)
EXAMPLE: lack of staffing, strict deadlines, changing
requirements
Regression Testing Testing following modifications to a test item or to its
operational environment, to identify whether regression
failures occur
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.32)
Requirements-Based Testing An approach to deriving test cases based on exercising
specified user requirements
Retesting Re-execution of test cases that previously returned a "fail"
result, to evaluate the effectiveness of intervening
corrective actions
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing-- Part 1: Concepts and
definitions, 4.34)
SYN: confirmation testing
Risk Combination of the probability of an abnormal event or
failure and the consequence(s) of that event or failure to a
system's components, operators, users, or environment
(IEEE 1012-2012 IEEE Standard for System and Software
Verification and Validation, 3.1)
Risk Exposure Potential loss presented to an individual, project, or
organization by a risk
(ISO/IEC 16085:2006 Systems and software engineering--
Life cycle processes--Risk management, 3.10)
Risk Register A document in which the results of risk analysis and risk
response planning are recorded
(A Guide to the Project Management Body of Knowledge
(PMBOK® Guide) -- Fifth Edition)
NOTE: The risk register details all identified risks, including
description, category, cause, probability of occurring,
impact(s) on objectives, proposed responses, owners, and
current status. It can be kept in a database. See Also: risk
management plan
Risk-Based Testing Testing in which the management, selection, prioritization,
and use of testing activities and resources are consciously
based on corresponding types and levels of analyzed risk
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.35)
Scripted Testing (1) Dynamic testing in which the tester's actions are
prescribed by written instructions in a test case
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.37)
(2) Testing performed based on a documented test script
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering--Software testing--Part 2: Test Processes,
4.23)
NOTE: Normally applies to manually executed testing,
rather than the execution of an automated script
Static Testing Testing in which a test item is examined against a set of
quality or other criteria without code being executed
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.42)
EXAMPLE: reviews, static analysis
SEE ALSO: inspection
Test Automation A test practice involving the automation of testing using
software that is usually referred to as test tools
NOTE: Automated testing is often considered to be mainly
concerned with the execution of scripted tests rather than
having testers execute tests manually; however, many
additional testing activities can be supported by
software-based tools.
Test Basis Body of knowledge used as the basis for the design of
tests and test cases
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering --Software testing--Part 1: Concepts and
definitions, 4.47)
NOTE: The test basis can take the form of documentation,
such as a requirements specification, design specification,
or module specification, but can also be an undocumented
understanding of the required behaviour.
Test Case Set of test case preconditions, inputs (including actions,
where applicable), and expected results, developed to
drive the execution of a test item to meet test objectives,
including correct implementation, error identification,
checking quality, and other valued information
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering -- Software testing--Part 1: Concepts and
definitions, 4.48)
NOTE: A test case is the lowest level of test input (i.e. test
cases are not made up of test cases) for the test
sub-process for which it is intended.
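As a sketch, the elements of this definition — preconditions, inputs and expected results — map directly onto an executable test case. The test item `apply_discount` below is a hypothetical function invented purely for illustration:

```python
# A minimal scripted test case, assuming a hypothetical test item
# `apply_discount(price, percent)`.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical test item: returns price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_nominal():
    # Precondition: a valid price and discount percentage exist.
    price, percent = 200.0, 15.0
    # Input/action: execute the test item.
    actual = apply_discount(price, percent)
    # Expected result, derived in advance from the specification.
    expected = 170.0
    assert actual == expected, f"expected {expected}, got {actual}"

test_apply_discount_nominal()
```

Note that the expected result is fixed before execution; comparing it with the actual result is what yields the test result (pass/fail).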
Test Completion Criteria A set of criteria to determine whether a specific test sub-
process can be considered as completed
EXAMPLE: 100% of statements exercised by test cases
EXAMPLE: Zero critical defects remain open
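Completion criteria of this kind can be evaluated mechanically at the end of a test sub-process. In this sketch the coverage figure and defect count are hypothetical placeholders for values that a coverage tool and a defect tracker would supply:

```python
# Evaluating hypothetical test completion criteria for a test
# sub-process: all statements exercised and no critical defects open.
statement_coverage = 100.0   # percent, hypothetical coverage-tool value
open_critical_defects = 0    # hypothetical defect-tracker count

criteria = [
    statement_coverage >= 100.0,   # 100% of statements exercised
    open_critical_defects == 0,    # zero critical defects remain open
]
complete = all(criteria)
print("Test sub-process complete:", complete)
```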
Test Completion Report A report that summarizes the testing that was performed
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.51)
SYN: test summary report
Test Condition Testable aspect of a component or system, such as a
function, transaction, feature, quality attribute, or
structural element identified as a basis for testing
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.52)
Test Coverage Item An attribute or combination of attributes that is derived
from one or more test conditions by using a test design
technique that enables the measurement of the
thoroughness of the test execution
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.54)
Test Data Data created or selected to satisfy the input requirements
for executing one or more test cases, which can be
defined in the test plan, test case, or test procedure
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering-- Software testing--Part 2: Test processes,
4.34)
Test Environment Facilities, hardware, software, firmware, procedures and
documentation intended for or used to perform testing of
software
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.60)
Test Execution A process of running a test on the test item, producing
actual results
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.64)
Test Execution Log A document that records details of the execution of one or
more test procedures
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.65)
Test Item Work product that is an object of testing
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.68)
EXAMPLE: A system, a software item, a requirements
document, a design specification, a user guide
SYN: test object
Test Level Specific instantiation of a test sub-process
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.69)
EXAMPLE: Component, component integration, system,
and acceptance testing
SYN: test phase
Test Management Planning, estimating, monitoring, reporting, control and
completion of test activities
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering--Software testing--Part 2: Test processes,
4.49)
Test Measures Variable to which a value is assigned as the result of
measuring an attribute of the testing
EXAMPLE: 80% branch coverage
Test Objective Purpose of performing a testing activity associated with a
feature or feature set
EXAMPLE: Provision of information about product
qualities
Test Phase Specific instantiation of a test sub-process
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering--Software testing--Part 2: Test Processes,
4.52)
SYN: test level
Test Plan Detailed description of test objectives to be achieved and
the means and schedule for achieving them, organized to
coordinate testing activities for some test item or set of
items
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.75)
EXAMPLE: A project test plan (also known as a master test
plan) that encompasses all testing activities on the project;
further detail of particular test activities can be defined in
one or more test sub-process plans (i.e. a system test plan
or a performance test plan)
NOTE: It identifies test items, the features to be tested,
the testing tasks, who will do each task, and any risks
requiring contingency planning. Typical contents identify
the items to be tested, tasks to be performed,
responsibilities, schedules, and required resources for the
testing activity.
Test Policy Executive-level document that describes the purpose,
goals, principles and scope of testing within an
organization
(ISO/IEC/IEEE 29119-3:2013 Software and systems
engineering--Software testing--Part 3: Test
documentation, 4.26)
SYN: organizational test policy
Test Procedure Sequence of test cases in execution order, associated
actions to set up the initial preconditions, and wrap-up
activities post execution
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering--Software testing--Part 1: Concepts and
definitions, 4.78)
Test Procedure Specification Document specifying one or more test procedures, which
are collections of test cases to be executed for a particular
objective
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions, 4.79)
NOTE: A test procedure specification for an automated
test run is usually called a test script.
Test Process Process that provides information on the quality of a
software product
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing--Part 1: Concepts and
definitions. 4.80)
NOTE: Often composed of a number of activities, grouped
into one or more test sub-processes
Test Result Indication of whether or not a specific test case has
passed or failed, i.e. if the actual result observed as test
item output corresponds to the expected result or if
deviations were observed
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.82)
Test Script Test procedure specification for manual or automated
testing
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.83)
Test Set (1) Set of one or more test cases with a common
constraint on their execution
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definition, 4.84)
(2) Collection of test cases for the purpose of testing a
specific test objective
(ISO/IEC/IEEE 29119-2:2013 Software and systems
engineering-- Software testing-- Part 2: Test processes,
4.62)
NOTE: The test sets will typically reflect the feature sets,
but they could contain test cases for a number of feature
sets. Test cases for a test set could be selected based on
the identified risks, test basis, retesting, or regression
testing.
Test Status Report Report that provides information about the status of the
testing that is being performed in a specified reporting
period
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.86)
Test Strategy Part of the Test Plan that describes the approach to
testing for a specific test project or test sub-process or
sub-processes
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.87)
NOTE: The test strategy usually describes some or all of
the following: the test practices used; the test sub
processes to be implemented; the retesting and
regression testing to be employed; the test design
techniques and corresponding test completion criteria to
be used; test data; test environment and testing tool
requirements; and expectations for test deliverables.
Test Sub-process Test management and dynamic (and static) test processes
used to perform a specific test level (e.g. system testing,
acceptance testing) or test type (e.g. usability testing,
performance testing) normally within the context of an
overall test process for a test project
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.88)
NOTE: Depending on the life cycle model used, test sub-
processes are also typically called test phases, test levels,
test stages or test tasks.
SYN: test sub process
Test Type Group of testing activities that are focused on specific
quality characteristics
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.91)
Example: security testing, functional testing, usability
testing, performance testing
Traceability Matrix Matrix that records the relationship between two or more
products of the development process
(ISO/IEC/IEEE 24765:2017 Systems and software
engineering-Vocabulary)
EXAMPLE: a matrix that records the relationship between
the requirements and the design of a given software
component
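As a minimal sketch of the example above (the requirement and test case identifiers are invented for illustration), a traceability matrix can be held as a mapping from requirements to the test cases that cover them; the same structure also exposes requirements with no test coverage:

```python
# Illustrative traceability matrix: requirements vs. test cases.
# All identifiers are hypothetical.

requirements = ["REQ-001", "REQ-002", "REQ-003"]

traceability = {
    "REQ-001": ["TC-101", "TC-102"],  # requirement covered by two test cases
    "REQ-002": ["TC-103"],
    "REQ-003": [],                    # no coverage yet
}

def uncovered(reqs, matrix):
    """Return the requirements with no associated test case."""
    return [r for r in reqs if not matrix.get(r)]

print(uncovered(requirements, traceability))  # ['REQ-003']
```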
Unscripted Testing Dynamic testing in which the tester's actions are not
prescribed by written instructions in a test case
(ISO/IEC/IEEE 29119-1:2013 Software and systems
engineering-- Software testing-- Part 1: Concepts and
definitions, 4.94)
Validation Confirmation, through the provision of objective evidence,
that the requirements for a specific intended use or
application have been fulfilled
(ISO/IEC 25000:2014 Systems and software Engineering--
Systems and software product Quality Requirements and
Evaluation (SQuaRE)-- Guide to SQuaRE, 4.41)
Verification Confirmation, through the provision of objective evidence,
that specified requirements have been fulfilled
(ISO/IEC 25000:2014 Systems and software Engineering--
Systems and software product Quality Requirements and
Evaluation (SQuaRE)-- Guide to SQuaRE, 4.64)
EXECUTIVE SUMMARY
The importance of software testing cannot be overemphasised in today's rapidly growing
technological environment; according to a Forrester publication (2018), the global tech market grew
by 3.4% in 2017 and was projected to grow by 4% in 2018, reaching a three (3) trillion dollar high in
2018. The acquisition of technology-driven solutions to drive growth in the public and private sectors
has continued to increase in Nigeria. The indigenous software market has not been left out of this
growth trend, but it continues to suffer stiff competition from foreign off-the-shelf software used to
meet local needs where indigenous software could have provided the appropriate solutions.
According to industry analysts, this trend has stunted the growth of the local software sector, which,
if well harnessed, should by now be worth in excess of ten (10) billion dollars annually. However,
quality assurance challenges, amongst other factors, have contributed to the low adoption of
indigenous software products in the country.
In view of this, the National Information Technology Development Agency (NITDA) advocates a
structured approach to software quality assurance through a holistic software testing framework. The
National Software Testing Guideline (NSTG) adopts a multi-layer approach to software testing, which
is easily adaptable to any software domain to improve the quality of software and reduce cost of
development. Developed with the public and private sector in mind, the NSTG provides a practical
guide towards software testing as defined by the ISO/IEC/IEEE 29119 series of standards.
The guideline begins with a brief introduction covering NITDA's purpose and the guideline's scope and objectives. It identifies
the relationship and importance of software testing in the Software Development Life Cycle (SDLC)
and introduces the multi-layer approach to testing, principles of software testing and the project
organisation structure.
The third section of the guideline focuses on the test approach, detailing the three processes defined
by ISO/IEC/IEEE 29119: the Organisational Test Process, Test Management Processes and the
Dynamic Test Process. It provides activities and tasks required during the test processes and practical
steps to implement software testing from planning to test execution.
Section four introduces the important concept of testing levels, where every unit or component of
the software is tested for defects and conformance with the intent of the system.
The fifth section provides details on testing techniques and their role in software testing. The final
section of the guideline provides details on test documentation and how it fits into the multi-layer
testing approach.
1 INTRODUCTION
In a bid to create the necessary enabling environment for Information Technology (IT) for national
growth and development, the National Information Technology Development Agency (NITDA) is
focused on developing and regulating the IT sector in Nigeria, through the development of
appropriate guidelines and frameworks to promote the trust, efficiency, accuracy and reliability of
information systems in delivering information technology services. The National Software Testing
Guideline (NSTG) is developed on the premise that mitigating software vulnerability risks through the
promotion of structured software testing practice for safety and quality of software development will
create the enabling environment for growth of the indigenous testing sector in Nigeria.
The NSTG is based on international software testing standards that support a wide range of
application areas in any software development life cycle; it is therefore generic and can be applied to
different software domains.
1.1 Authority
In exercise of the mandate conferred on NITDA, specifically by sections 6(b) and 6(e) of the National
Information Technology Development Agency Act of 2007, the National Software Testing Guideline is hereby issued.
1.2 Scope
This document serves as the National Software Testing Guideline, developed in line with
international standard practices, to guide software testing teams in initiating, planning, managing,
implementing and maintaining testing of software for public and private organisations. It is
applicable to the testing of software developed, purchased or modified in-house.
1.3 Benefits of a National Software Testing Guideline
Every guide or standard brings its own specific value and inherent benefits. This is the case with the
NSTG, which offers the following benefits, among others:
Assists NITDA to achieve its stated mandate as required by law, through effective
regulation of software practices in Nigeria;
Provides a standard guideline for identifying and fixing errors before the software
becomes operational, considerably reducing the risk of software failures; this would
improve the quality of software products developed in the country, thereby increasing
user satisfaction and demand for local software products;
Reduces the cost of mitigation by using structured risk-based techniques for testing and
maintaining software development and products;
Improves the reliability of software products through adherence to set software testing
technical guidelines;
Improves the integration of legacy and varying software platforms by providing assurance,
through structured testing processes, that integration will work optimally without
performance degradation;
Helps to reduce production time spent on software integration for the public and private
sectors; and,
Helps attain professional discipline in software testing practices in the country.
1.4 Objectives of The National Software Testing Guideline
The objectives of this document are:
To provide the software industry with a structured software testing guideline, which
offers best practices in software testing to promote safety and quality of software
development in Nigeria; and,
To provide an enabling environment for indigenous software testing administration,
management and sustainability through structured testing guidelines that improve the
quality, availability and reliability, and reduce the cost, of software developed in Nigeria.
2 THE SOFTWARE DEVELOPMENT LIFECYCLE (SDLC)
The Software Development Life Cycle (SDLC) comprises the phases of software development, from
initiation through to system maintenance. Each phase produces outputs required by the next phase in
the lifecycle. A typical SDLC is initiated by gathering business requirements, which are then analysed
and translated into a design; code is then produced based on the design specification
(development phase). Software testing typically starts after coding is complete and before software is
deployed for use (deployment phase). After deployment, the software is maintained
(maintenance/operation phase) until it is retired. Many SDLC models are used during the software
development process, including:
Waterfall model
Incremental model
V-model
Iterative model
RAD model
Agile model
Spiral model
Prototype model
The NSTG is not developed upon any specific SDLC model but built in line with the ISO/IEC/IEEE 29119
standard which defines test terminology, processes, documentation and techniques that can be
adopted within the testing phase of any SDLC model. The following phases are predominately found
in all SDLC models:
Requirement gathering and analysis phase
Design phase
Implementation phase
Testing phase
Deployment phase
Maintenance Phase
STLC: Requirement Specification; Test Planning; Test Case Development; Test Environment Set-up;
Test Execution; Test Cycle Closure.
SDLC: Requirement & Analysis Phase; Design Phase; Coding Phase; Testing Phase; Deployment
Phase; Maintenance Phase.
Figure 1: SDLC and STLC Relationship Diagram
The software testing phase starts after the code has been developed; the code is then tested
against the system requirement specification (the test basis) gathered during the SDLC requirement
phase to ensure the software conforms with the intent of the development. During the testing
phase, various tests are conducted to check conformity, including:
Functional testing;
Non-functional testing;
Unit testing;
Integration testing;
System testing; and,
Acceptance testing.
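The unit testing level listed above can be sketched with Python's built-in unittest module; the apply_discount function and its expected values are hypothetical examples, not taken from this guideline:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    def test_valid_discount(self):
        # Expected result derived from the requirement (the test basis).
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_discount_rejected(self):
        # Invalid input must raise an error, not pass silently.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(argv=["nstg-unit-sketch"], exit=False)
```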
2.1 The Software Testing Life Cycle
Software developers follow the Software Development Life Cycle (SDLC), while software testers adopt
the Software Testing Life Cycle (STLC). The STLC refers to a testing process with distinct steps to be
executed in sequence to ensure that the quality goals of the software have been achieved. Each
phase of the STLC has different goals and deliverables, which provides a quality management process
for testing software. The STLC is initiated from the testing phase of the SDLC (See Figure 1: SDLC and
STLC Relationship Diagram) after coding is completed, and again after deployment, when the
software is operational and being maintained until it is retired.
The NSTG follows the STLC multi-layer testing process defined by ISO/IEC/IEEE 29119 standard, which
includes the Organisational Test Process, Test Management Process and the Dynamic Test Process.
ORGANISATION TEST PROCESS
TEST MANAGEMENT PROCESS
DYNAMIC TEST PROCESS
Figure 2: Multi-layer Approach Software Testing Process
2.2 Software Testing Principles
While conducting software testing, the aim is to achieve the highest practicable test coverage within
the constraints of the testing process. To achieve this, the following testing principles are common
practice and should be maintained:
1. Testing should be conducted through risk-based assessment and not on exhaustive
testing methods;
2. Testing should start early at the development phase of the SDLC, so that any defects in
the requirements and design phase are detected early. The earlier a defect is detected,
the cheaper it is to repair;
3. Software testing should be carried out by external test teams, independent of those
who developed the software;
4. All software testing processes must be comprehensive to test for invalid and unforeseen,
as well as valid and anticipated input conditions;
5. Any software testing specification must be detailed, complete and clear;
6. Test cases need to be regularly reviewed and revised, with new and different test cases to
find more defects;
7. Finding and fixing defects is of no value if the software product does not meet the
user’s requirements; and,
8. Do not assume in your test planning that no errors would be found in the system.
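Principle 4 above, testing invalid and unforeseen as well as valid and anticipated input conditions, can be sketched as follows; the parse_age validator and its input values are hypothetical examples:

```python
# Sketch of principle 4: exercise both valid/anticipated and
# invalid/unforeseen inputs. The validator below is a hypothetical example.

def parse_age(value):
    """Parse an age field; reject invalid or unforeseen input."""
    age = int(value)          # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Valid, anticipated inputs.
assert parse_age("30") == 30
assert parse_age("0") == 0

# Invalid and unforeseen inputs must be rejected, not silently accepted.
for bad in ["-5", "200", "abc", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the defect would be a *missing* exception
    else:
        raise AssertionError(f"invalid input accepted: {bad!r}")
```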
2.3 Project Organisation Structure
In IT projects, it is important to create a project organisation structure for the management of the
day-to-day activities of the project. In software testing projects, several organisational structures are
possible, each having its own inherent requirements depending on the use. For example, in safety
systems where strict adherence to safety is required, a Software Quality Assurance team would be
required in the project organisational structure as a subject specialist on quality of the system.
However, for this guideline, the Test Project Team is streamlined and composed of the Test Team
(Test Manager/Test Lead, Testers, Developer in Test and Test Administrator), Development Team
(Development Manager and Programmers) and Software Quality Assurance Members; and they all
report to the Project Manager.
The Project Manager in turn reports to the organisation’s management (Project Board). The test team
is external to the organisation and different from those who developed the software. This scenario
can also be applied to off-the-shelf purchased software and to external software development
contractors. Depending on the nature of the software and the size of the organisation, the
organisation structure below can be adapted to fit any testing scenario as required.
PROJECT MANAGER
TEST MANAGER
DEVELOPMENT MANAGER
TEST ADMINISTRATOR
PROGRAMMERS
DEVELOPER IN TESTING
TESTERS
SQA
Figure 3: Software Testing Project Organisation
Position Responsibilities
Project Manager (PM) Represents the client and manages the day to day activities on the
project level.
Test Manager (TM) TM manages the whole testing project processes:
Test Planning Process;
Test Management Process;
Test Execution Process; and,
defines the test project direction.
Tester Develops the Test Cases;
Generates the Test Procedures;
Executes the tests;
Logs the results of the tests; and,
Reports defects.
Developer in Test Creates the program to test the codes created by the developers
(unit and integration testing); and,
Creates the test automation scripts.
Test Administrator Builds the Test Environment;
Manages and maintains the test environment assets;
Supports the test team to use the test environment for test
execution; and,
Decommissions and archives the test environment when testing
is completed.
Software Quality Assurance Provides quality assurance support on the test project.
2.4 Minimum Requirements to become a Software Test Professional
It is expected that a software tester shall possess the following:
a. A bachelor’s degree in Computer Science, Information Systems, Software Engineering or a
related technical field;
b. Should be able to create and document automated and manual test plans and procedures,
execute tests, analyse results and report on test problems and anomalies (document bugs);
c. Perform software testing at all levels of the design-develop-test-release-maintain software life
cycle;
d. Understand various development methodologies;
e. Possess thorough knowledge of several testing tools;
f. Be fluent in UNIX, Linux and/or Windows, as well as scripting and command-line tools;
g. Be an excellent communicator (written and verbal) with development, operations, product
management and customers;
h. Have working knowledge of various programming languages, such as Java, JavaScript, C# or
C++, SQL, Python, PHP and Ruby on Rails;
i. Have Professional Certification such as:
I. ISTQB Certified Tester: The American Software Testing Qualifications Board (ASTQB);
II. Certified Software Tester (CSTE): The International Software Certification Board
(ISCB);
j. 3 to 6 years of cognate experience depending on the certification level attained.
See the full list of Software testing certifications in Appendix A.
3 SOFTWARE TESTING GUIDELINE APPROACH
The NSTG adopts the multi-layer approach to software testing process (See Figure 4: Multi-Layer
Testing Process of ISO/IEC/IEEE 29119). This approach combines various standards that have been
integrated into a single structured standard for software testing. Conformance to the standard is
flexible and simple to adopt in any software testing domain. The ISO/IEC/IEEE 29119 structure
consists of the following standards:
BS 7925-1
ISO/IEC 33063
IEEE 829
ISO/IEC 20246
BS 7925-2
Figure 4: Multi-Layer Testing Process
3.1 Organisational Test Process
This process is adopted for the development and management of the Organisational Test
Specifications (OTS). The Organisation Test Process is instantiated twice to develop the Organisational
Test Policy and Organisational Test Strategy. The OTS generally applies to testing throughout the
organisation.
3.1.1 Organisational Test Specifications
The following documents are components of the Organisational Test Specifications:
a. Organisational Test Policy: Executive level document that lays out the purpose, goals and
scope of testing in the organisation. It also provides the mechanism for establishment,
review and updates of the Organisation’s Test Policy, Test Strategy and approach to
project test management (See Test Management Process in section 3.3).
Organisational Test Process: creation and maintenance of the Organisational Test Policy, Strategy,
Processes, Procedures and other assets.
Test Management Processes: Test Planning Process; Test Monitoring and Control Process; and,
Test Completion Process.
Dynamic Test Processes: Test Design and Implementation Process; Test Environment Set-up and
Maintenance Process; Test Execution Process; and, Test Incident Reporting Process.
b. Organisational Test Strategy: A detailed technical document, defining how testing is
carried out in the organisation. It’s a generic document (programme level), which
provides guidelines for software testing projects in the organisation.
ORGANISATION TEST PROCESS
ORGANISATION TEST PROCESS (TEST POLICY)
ORGANISATION TEST PROCESS (TEST STRATEGY)
The Organisation Test Process is initiated twice, once each to create and maintain the Organisational
Test Policy and the Organisational Test Strategy.
Figure 5: Organisation Test Specification Process
3.1.1.1 Purpose
The purpose of the Organisation Test Process is to develop, monitor conformance to, and maintain
the Organisational Test Specifications.
3.1.1.2 Outcomes
a. Requirements for the Organisational Test Specifications are identified;
b. Organisation Test Specifications document is developed;
NOTES:
The OTS is generic (programme oriented) and not specific to a test project. For example, testing at
different test levels and for different test types may require approaches that are not defined in
the Organisational Test Strategy but are defined in the project Test Strategy during test planning by
the Test Manager.
c. Stakeholder buy-in is solicited for the Organisational Test Specifications;
d. The Organisational Test Specifications are made accessible across the organisation;
e. Conformance to the OTS is monitored;
f. Stakeholders agree to updates to the OTS; and,
g. Updates are effected in the OTS document.
3.1.1.3 Activities and tasks
The following activities and tasks are to be implemented in line with the applicable organisational
policies and procedures by the department (e.g. ICT department) or person (Director/Head of ICT)
responsible for the Organisational Test Specifications:
a. Develop the Organisational Test Specifications by gathering requirements from
existing testing practices, stakeholders, documents, workshops, interviews and other
relevant resources;
b. The developed requirements are used to create the Organisational Test Specifications
document;
c. Stakeholders’ approval is required on the content of the Organisational Test
Specifications; and,
d. The availability of the Organisational Test Specifications is communicated to relevant
stakeholders.
3.1.2 Practical Guideline
The following steps should be followed while maintaining the OTS.
3.1.2.1 Monitor and Control Use of Organisational Test Specifications
a. Monitor usage of OTS to determine if conformance is maintained; and,
b. Keep stakeholders aligned with the Organisational Test Specifications document
updates.
3.1.2.2 Update Organisational Test Specifications
a. Review all practice feedback on the Organisational Test Specifications;
b. The effectiveness of the use and management of the Organisational Test Specifications
should be considered, and changes to improve its effectiveness should be determined
and approved by stakeholders;
c. Implement all improvement changes that have been approved; and,
d. Communicate all changes to the document to all stakeholders.
3.2 Test Management Processes
This process covers the management of testing for a test project (project test management), and test
management for test levels (unit test, integration test, system test, acceptance test) and test types
(usability, accessibility, performance, configuration, security, availability tests etc) within a test
project.
TEST MANAGEMENT PROCESS
TEST MANAGEMENT PROCESS (PROJECT LEVEL)
TEST MANAGEMENT PROCESS (TEST LEVELS & TEST TYPES)
The Test Management Process is initiated once at the project level (i.e. small test projects), or as
several instances at the test levels and for test types (i.e. large test projects).
Figure 6: Test Management Process
The Test Management Process includes:
a. Test Planning;
b. Test Monitoring and control; and,
NOTES:
Stakeholders in this process refers to the organisation’s management (Directors, CEO, CTO,
Heads of ICT, etc.) and internal Project Managers involved in testing projects.
Find the Organisational Test Policy and Organisational Test Strategy document templates in the
Test Documentation section.
c. Test Completion.
3.2.1 Practical Guidelines
The Test Management Process is initiated and managed by the Test Manager.
1. Apply the Test Management Processes at the project level according to the project test
plan (i.e. unit testing, integration testing, system testing, performance testing and
acceptance testing)
2. The Test Management Process should be applied separately to each individual test level
and test type based on separate test plans,
o Test levels: i.e. unit test plan, integration test plan, system test plan, and acceptance
test plan;
o Test types: i.e. performance test plan, load test plan, stress test plan, usability test
plan etc.
3. The Test Management Process must align with the outputs of the Organisation Test
Process (Organisation Test Policy and Organisation Test Strategy).
4. On small projects, the Test Management Process should be initiated once, with a single
Test Plan, one instance of the Test Monitoring and Control process and one instance of
the Test Completion process.
5. On larger projects, the Test Management Process would be instantiated many times
depending on the system test requirements highlighted in the Test Plan.
o Initial Instance: Project Test Management process to manage the test project as a
whole; this instance will have its own Project test plan; and,
o Subsequent Instances: Test Management Process for each level or type of testing that
are managed separately with corresponding test plans.
3.3 Test Planning Process
The Test Manager’s first task is to develop the Test Plan (TP) for the software testing project, in line
with the Organisational Test Policy and Organisational Test Strategy. These documents are usually
developed and found in large organisations (public and private sector), but are rarely found in small
ones. Where a Test Manager has no Organisational Test Specifications to follow, s/he will need to
develop the Test Plan from an understanding of the requirements of the software system being
tested.
3.3.1 Test Planning Input Requirements
The initial development of the test plan requires the following inputs into the test planning process:
Organisational Test Policy;
Organisational Test Strategy;
Available regulatory standards;
Project Test Plan, where planning is for a specific:
o Test Phase
o Test Type
Incident report;
Project management plan (from the Project Manager);
Applicable product documentation (from the Development Manager);
Software development Plan (from the Development Manager);
Project and Product Risk; and,
Test Plan Updates.
The following activities are required to develop a comprehensive Test Plan for the project depending
on what tests are being executed:
a. Understand the requirements of the system (scope);
b. Organise test plan development (organise stakeholders’ workshops and interviews);
c. Identify and assess risks (provide risk scores considering impact on the system);
d. Identify risk mitigation approaches (determine testing approach/techniques);
e. Design the Test Strategy (develop the test strategy considering the Organisational Test Strategy);
f. Determine staffing and scheduling (schedule, staffing requirements);
g. Document the Test Plan (draft Test Plan);
h. Get consensus on the test plan from relevant stakeholders (Project Manager); and,
i. Communicate and publish the Test Plan (Test Plan).
Understand the Requirements; Organise Test Plan Development (stakeholders’ workshops and
interviews); Identify and Assess Risks; Identify Risk Mitigation Methods; Design Test Strategy;
Determine Staffing & Scheduling; Document Test Plan; Get Consensus on Test Plan; Distribute Test
Plan. Outputs: scope, scored risks, mitigation methods, test strategy, scheduling & staffing profile,
draft Test Plan, approved Test Plan.
Figure 7: High Level Test Planning Process
NOTES:
In practice the sequence above can be iterative; the plan may need to be modified due to newly
identified risks, changes in testing staff and scheduling requirements, or test environment changes;
the process may be revisited at the appropriate test planning step and updated to reflect the testing
realities.
Document system architecture to gather the system functional and non-functional features for testing.
Having understood the context of the project, document all findings in the Project Test Plan
documentation. Find Test Plan template in Test Documentation section.
Stakeholders during the test planning process refers to the Project Manager, Development Manager,
Test Team and Software Quality Assurance members.
TEST PLANNING work breakdown activities: Understand Requirements; Organise Test Plan
Development; Identify Tasks; Initial Scheduling & Estimations; Manage Risks; Identify & Assess Risks;
Identify Risks; Assess Risks; Mitigate Risks; Design Test Strategy; Select Test Phases; Select Test
Techniques; Select Test Completion Criteria; Define Test Environment; Select Test Tools; Determine
Staffing & Scheduling; Assign Staff; Detailed Estimations; Document Test Plan; Agree & Publish Test
Plan.
Figure 8: Test Planning Work Breakdown Structure
3.3.1.1 Purpose
The purpose of the Test Planning Process is to set the scope, approach and requirements of testing in
terms of identification of resources, testing environments etc. early in the testing process.
3.3.1.2 Outcomes
a. The context or scope of the test project is determined and analysed;
b. The stakeholders participating in the test planning are identified and informed;
c. Risks from the risk identification, assessment and mitigation processes are
identified, analysed and classified for testing;
d. Test strategy: test phases, test techniques, test completion criteria, test environment and
test tools are identified;
e. Staffing and training requirements are identified, and staff are assigned to tasks;
f. Scheduling is determined for each activity;
g. Testing tasks are broken down and scheduling/estimations are determined, justified and
recorded in the Test Plan documentation; and,
h. Test Plan is agreed by relevant stakeholders and distributed.
3.3.1.3 Activity and tasks
a. The Test Manager is responsible for test planning; s/he is primarily concerned with
the functional and non-functional risk attributes of the test items;
b. All activities and tasks should be carried out in line with the organisation’s policy and
procedures for the Test Planning Process. However, where organisational policy and
procedures are non-existent, assessing existing project risks from the Project
Manager’s Risk Register, organising risk identification workshops and conducting
individual stakeholder interviews are common practices for determining risks. It is
essential for the Test Manager to confirm all identified risks with stakeholders for
validation.
3.3.2 Practical Guideline
3.3.2.1 Understanding the context:
The first activity is understanding the context, as it establishes what is being tested and the scope
of testing. The following documents can be used:
a. Organisational Test Specifications;
b. Project Management plan, which provides details on project testing budget and
resources;
c. Project Risk Register;
d. Applicable product document specifications;
e. Software development plan;
f. Verification and validation plan;
g. The context and testing requirements can also be determined by identifying and
interacting with relevant stakeholders; and,
h. A communication plan should be developed and recorded, based on the project
organisation’s accepted lines of communication.
3.3.2.2 Organising Test Plan Development:
At this stage, the test plan activities need to be organised, most especially risks and how to mitigate
them through testing; to achieve this, stakeholder engagement is required to help with the risk
assessment. Test planning activities can be broken-down using a Work Breakdown Structure (WBS),
by breaking down activities into manageable hierarchical structures for organisation (see Figure 8:
Work Breakdown Structure).
Tasks included in this activity are:
a. Based on the understanding of the context, the activities needed to complete test
planning are identified and scheduled;
b. Identify stakeholders required for these activities; and,
c. Approval of activities, schedule and resources are obtained from the Project
Manager.
Testing tasks are broken down and scheduling/estimations are determined.
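As a hypothetical illustration of the WBS breakdown described above, the hierarchy can be held as nested structures and flattened into a schedulable task list (all activity names here are invented for the example):

```python
# Hypothetical sketch of a Work Breakdown Structure (WBS) for test
# planning: activities are broken into manageable hierarchical tasks,
# then flattened into a schedulable list (names are illustrative only).
wbs = {
    "Test Planning": {
        "Understand Context": ["Review Project Management Plan", "Review Risk Register"],
        "Risk Assessment": ["Risk Identification Workshop", "Stakeholder Interviews"],
        "Design Test Strategy": ["Estimate Resources", "Identify Test Data"],
    }
}

def flatten(node, path=()):
    """Walk the WBS hierarchy and yield (path, task) pairs for scheduling."""
    for name, child in node.items():
        if isinstance(child, dict):
            yield from flatten(child, path + (name,))
        else:
            for task in child:
                yield (path + (name,), task)

tasks = list(flatten(wbs))
```

Each flattened task keeps its full hierarchical path, so it can be scheduled and traced back to its parent activity.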
3.3.2.3 Risk Identification and Estimation:
Once all possible project and product risks have been identified, the next step is to assess them to
determine their individual relative risk scores (risk exposure levels); this activity can be combined with
the risk identification workshop.
a. A Test Manager is primarily concerned with the functional and non-functional risk
attributes of the test items (product); however, it is also necessary to assess existing
project risks already recorded in the Project Risk Register by other project
stakeholders, such as the Project Manager;
b. Additional risks related to test items should be identified; project level risks not
related to software testing should be communicated to the relevant stakeholders
(Project Manager or Development Manager). Gathering risks related to the product
can be determined through risk identification workshops and individual stakeholder
interviews;
c. Risks should be categorised into project and product risk for ease of planning;
d. Each risk should be assigned a level of exposure, considering its impact and likelihood,
and given a risk score; and,
e. The results of the risk assessment should be recorded in the Test Plan risk register,
separated into project and product risk sections respectively.
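A minimal sketch of the scoring step, assuming a simple impact x likelihood model on 1-5 scales (the guideline does not mandate particular scales, and the risk entries below are invented):

```python
# Minimal sketch: risk exposure score as impact x likelihood on 1-5
# scales; the scales and the example risks are illustrative assumptions.
def risk_score(impact, likelihood):
    """Risk exposure score combining impact and likelihood."""
    return impact * likelihood

risks = [
    {"name": "Payment calculation defect", "category": "product", "impact": 5, "likelihood": 3},
    {"name": "Test environment delay",     "category": "project", "impact": 3, "likelihood": 4},
    {"name": "Minor UI misalignment",      "category": "product", "impact": 1, "likelihood": 2},
]

# Assign scores, then split into the project and product sections of the
# Test Plan risk register, highest exposure first.
for r in risks:
    r["score"] = risk_score(r["impact"], r["likelihood"])
register = {
    "product": sorted((r for r in risks if r["category"] == "product"),
                      key=lambda r: r["score"], reverse=True),
    "project": sorted((r for r in risks if r["category"] == "project"),
                      key=lambda r: r["score"], reverse=True),
}
```

Sorting by score puts the highest-exposure risks first, which supports the prioritisation described in the notes on risk identification.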
3.3.2.4 Identify Risk Mitigation Approaches:
a. Once the risk exposure levels have been determined for each risk item, the next step is to
determine which risk items are addressed by testing; these are included in the Test Plan.
For risk items outside the scope of testing, responsibility should be passed to the relevant
stakeholder (Project Manager or Development Manager). Risks affecting the complete
system should be mitigated at the system test level, while risks associated with core
system functionality are mitigated at the acceptance test level.
b. Risk mitigation results are recorded, covering both project and product risks. Risk
scores are assigned, and all product risk mitigations addressed in system testing are
expected to achieve 100% coverage.
Test techniques for risk mitigation comprise functional and non-functional techniques.
Functional Test Techniques
o Scenario testing
o Decision table testing
o State transition testing
o Equivalence partitioning
Non-Functional Test Techniques
o Usability testing
o Accessibility testing
o Performance testing
o Configuration testing
o Security testing
o Availability testing
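As an illustration of one listed functional technique, equivalence partitioning divides an input domain into partitions expected to behave alike, so one representative value covers each partition; the age ranges below are hypothetical requirements:

```python
# Illustrative equivalence partitioning: the input domain is divided into
# partitions expected to behave the same, and one representative value is
# tested per partition. The age ranges are hypothetical requirements.
def partition_for(age):
    """Return the equivalence partition an age value falls into."""
    if age < 0:
        return "invalid: negative"
    if age < 18:
        return "minor"
    if age <= 65:
        return "adult"
    return "senior"

# One representative test value per partition is enough to cover it.
representatives = [-1, 10, 30, 70]
covered = {partition_for(v) for v in representatives}
```

Four test values here cover all four partitions, rather than testing every possible age.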
3.3.2.5 Design Test Strategy:
The Test Strategy is developed by the Test Manager and is based on the risks that have been
identified, assessed and mitigated. The Test Strategy addresses each identified risk factor
individually. The activity consists of the following tasks:
a. The Organisational Test Specifications are the initial reference point for estimating the
resources required, as defined in the Organisational Test Policy and Organisational
Test Strategy;
b. Identify the test data required for testing;
c. Identify Test environment and test tools requirements;
d. Provide a resource estimate required to complete the activities listed in the test
strategy (e.g. time, money, skills, environment and tools), and compare this
estimate with the project estimates. In most cases, test techniques are chosen
after identifying, assessing and mitigating risks; these may not have been
budgeted for, but should be included in the Test Plan due to the risk mitigations
determined by the Test Manager;
e. Retesting and regression testing estimates must be included in the Test Strategy,
indicating how frequently each piece of code is expected to be repaired and retested
before being released;
f. The identified risk mitigations should be used to estimate resources required to
perform the individual mitigations listed. Risks with higher scores or exposure should
be prioritised;
g. Test Monitoring and Control metrics should be identified in terms of estimating the
progress, quality and health of software testing efforts;
h. Document the test strategy; and,
i. Send test strategy to relevant stakeholders to obtain approval.
3.3.2.6 Test Activity:
Once the Test Strategy is approved by relevant stakeholders, individual test activities are
identified and implemented based on the estimated time and effort required to perform
them.
3.3.2.7 Staffing & Scheduling:
a. Determine availability of testers based on the role and skill required to perform the
testing described in the Test Strategy. If required testers are not available, train or
recruit required testers;
b. The test activity schedule and the project test plan should be harmonised; the test
activity schedule can be developed using Work Breakdown structure (WBS), Gantt and
PERT charts to document the testing schedules within the constraints of the project test
schedule. Once the staffing and scheduling are completed and updated on the Test
Plan, it is sent to the Project Manager to gain consensus;
c. All views of stakeholders are gathered through workshops, interviews or by other
means;
d. Discrepancies between the Test Plan and stakeholders’ views should be resolved, and
the Test Plan updated with their feedback; and,
e. The approved Test Plan is made available and communicated to stakeholders.
3.3.3 Information Items
The information items derived from the Test Planning Process are:
a. Test Strategy (technical document); and,
b. Test Plan document.
NOTES:
Organising Test Plan Development
As the project and testing progress, the initial estimates may change due to new
information arising during test planning.
Risk Identification and Estimates:
The risk results are used to determine what part of the system should be tested more
than others (prioritising risks).
Identify Risk Mitigation Approaches:
All test completion criteria are expected to reach one hundred percent coverage for the
results corresponding to the chosen test technique.
Design Test Strategy:
It is likely that the cost of performing the recommended testing will exceed the available
budget, and the Test Manager must decide which risk mitigations will not be included in
the Test Strategy. The decision to drop or adopt a test technique should be based on
regulatory standards, contractual requirements, and the availability of test skills, tools
and environments. On safety-related projects, regulatory standards must be adhered to.
Test completion criteria that mitigate the risk of not adhering to regulatory standards
should be included in the Test Strategy.
Choosing the testing technique that makes up the Test Strategy is an iterative process,
usually Test Managers do not get it right the first time. Many of the entries in the Test
Strategy are usually based on the experience of the Test Manager and historic test data
from within the organisation.
It is necessary for the Test Manager to confirm all identified risks with the stakeholders
for validation.
See the Test Documentation section for the Test Plan documentation template.
3.4 Test Monitoring and Control Process
[Figure: process flow diagram. Set-Up, Monitor, Control and Decision activities are driven by the
Test Plan; test measures and test progress information flow in from reviews, lower-level test
management and dynamic testing, while control directives, the Test Status Report and the Test
Completion Report flow out, with a decision on whether testing is complete or incomplete.]
Figure 9: Test Monitoring and Control Process
3.4.2 Overview
The testing processes are frequently inspected to determine whether they are in line with the Test
Plan and Organisational Test Specifications. If there are significant variances from planned activities,
corrective actions need to be initiated to control the variances.
The Monitoring and Control process can be applied in managing test projects or in test levels and test
type management (Dynamic testing management).
Inputs into the Test Management and Control Process are:
The Test Plan;
Applicable product documentation (system requirements, contracts etc.);
Organisational Test Policy;
Organisational Test Strategy;
Control directives;
Measures.
3.4.3 Purpose
The Test Monitoring and Control process is the day-to-day management of testing, where the Test
Manager ensures the Test Plan and Organisational Test Specifications are followed; any deviation
from the plan is managed and put back on track. The process can identify additional test
requirements, which can be included as an update to the Test Plan.
3.4.4 Outcomes
a. Processes for gathering appropriate yardsticks (measures) to monitor test progress and
changing risks are developed;
b. Planned testing is monitored;
c. New and changed test related risks are identified, analysed and control directives are
identified;
d. Control actions are communicated to relevant stakeholders (Project Manager);
e. Checking if test completion criteria have been achieved and decision to stop testing is
approved; and,
f. Stakeholders should be informed of test progress through the Test Manager's Test
Status Report (see Test Documentation section), which is regularly delivered to the
Project Manager. If a test activity is managed at a lower level (e.g. system test
management), the status report is delivered to the project Test Manager.
3.4.5 Practical Guideline
The Test Team is responsible for the test monitoring and control process; the following activities and
tasks are in accordance with the Test Plan and Organisational Test Specification (Organisational Test
Policy and Organisational Test Strategy).
3.4.5.1 Test Set-Up Activity
a. Measures defined as part of the Test Strategy during test planning should be identified;
the measures can also be determined from the Organisational Test Strategy if available.
Additional measures can be identified during Set-Up activity and means of monitoring
them; and,
b. Commence monitoring activities and setup Test Status Report and Test Metrics
collection mechanism to collect measures identified in the Test Plan and Organisational
Test Strategy (see Test Status Report and Test Metrics documentation template in Test
Documentation section).
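The metrics collection mechanism might be sketched as follows; the counters are illustrative measures only, since the actual measures come from the Test Strategy:

```python
# Hedged sketch of a Test Metrics collection mechanism; the counters are
# illustrative measures (real measures are defined in the Test Strategy).
from dataclasses import dataclass

@dataclass
class TestMetrics:
    planned: int = 0
    executed: int = 0
    passed: int = 0
    failed: int = 0

    def record(self, passed: bool):
        """Record one executed test case result."""
        self.executed += 1
        if passed:
            self.passed += 1
        else:
            self.failed += 1

    def progress(self):
        """Fraction of planned tests executed, for the Test Status Report."""
        return self.executed / self.planned if self.planned else 0.0

metrics = TestMetrics(planned=10)
for outcome in [True, True, False, True]:
    metrics.record(outcome)
```

The recorded counts feed directly into the Test Status Report, and the progress fraction supports measuring against the Test Plan in the monitor activity.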
3.4.5.2 Test Monitor Activity
a. Record test measures collected;
b. Measure progress against Test Plan using collected measures;
c. Identify if testing is diverging from Test Plan and record factors blocking test progress;
d. Identify new or changed risks to be monitored and mitigated by testing;
e. New and changed risks that require the Project Manager’s attention should be
communicated.
3.4.5.3 Test Control Activity:
Establish the actions needed to:
a. Ensure testers implement instructions received from the Project Manager and Test Manager;
b. Implement Test Plan;
c. Reduce divergence from Test Plan;
d. Mitigate new or changed risks;
e. Where necessary, recommend changes to Test Plan;
f. Test Manager to check and approve the completion of testing activities that have
been assigned to testers (e.g. usability testing assigned to usability test manager); and
g. Check if test completion criteria have been achieved and gain approval from Project
Manager to stop testing and commence test completion process.
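Checking whether the test completion criteria have been achieved can be reduced to comparing collected measures against agreed minimums; the criterion names and thresholds below are hypothetical:

```python
# Minimal sketch of checking test completion criteria before seeking
# Project Manager approval to stop testing; names/thresholds are invented.
def completion_criteria_met(measures, criteria):
    """True only if every criterion's minimum value is reached."""
    return all(measures.get(name, 0) >= minimum
               for name, minimum in criteria.items())

criteria = {"requirement_coverage": 1.0, "test_cases_executed": 120}
measures = {"requirement_coverage": 1.0, "test_cases_executed": 125}
```

Only when every criterion is satisfied would the Test Manager seek approval to stop testing and commence the Test Completion Process.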
3.4.5.4 Test Report
a. Create Test Status Report with test progress details;
b. Distribute the Test Status Report to relevant stakeholders; and,
c. Report any new or changed risks to relevant Stakeholders.
3.4.6 Information Items
As a result of carrying out this process the following information documents are created:
a. Test status report;
b. Test Plan updates;
c. Control directives; and
d. Project and product risks information documents.
3.5 Test Completion Process
3.5.1 Overview
The Test Completion Process commences when testing is finished; the process is made up of four
distinct activities. These activities are performed by the test team to complete the testing carried out
at specific test phases.
3.5.2 Purpose
The purpose of the Test Completion process is to document test assets for future use.
3.5.3 Outcome
a. Test assets are archived or given directly to relevant stakeholders (Project Manager);
b. Test environment should be handed over in its agreed state;
c. All test requirements are satisfied and verified;
d. Test Completion Report is recorded;
e. Test Completion Report is approved; and
f. Test Completion Report is distributed to relevant stakeholders.
3.5.4 Activities and Tasks
a. All reusable test assets should be recorded in the Test Completion Report;
b. Inform relevant stakeholders of available reusable test assets; and,
c. Any valuable test assets created during testing should be stored so they can be reused
in the future or provided in case of an audit.
3.5.5 Practical Guideline
3.5.5.1 Clean-up Test environment:
This activity ensures that the test environment is left in an agreed state, with no test data left on
it and ready to be used for another test.
3.5.5.2 Lessons Learned:
a. This activity is for future testing process improvements, identifying experiences that can
be reused and those that need to be avoided. It is not limited to testing; it can extend
to improvements in development and project-related activities;
b. Record lessons learned in the Test Completion Report; and,
c. Inform relevant stakeholders of the lessons learned.
3.5.5.3 Reporting Test Completion:
a. This involves gathering information about the testing performed and documenting it into
a Test Completion Report;
b. Get approval for the Test Completion Report from relevant stakeholders; and,
c. Circulate the approved Test Completion Report.
3.5.6 Information Items
As a result of this process, the following information item is produced:
a. Test Completion Report (see Test Completion Report template in Test Documentation)
3.6 Dynamic Testing Process
The Dynamic Testing Process is used for testing within a particular test level (unit, integration,
system and acceptance testing) or test type (performance testing, security testing, usability
testing). Dynamic testing has four (4) distinct test processes. It is driven by the Test Plan and
control directives issued by the Test Manager, and provides testing progress back to the Test
Manager through test measures.
Dynamic test Processes are:
a. Test design & implementation;
b. Test environment set-up and maintenance;
c. Test execution, and
d. Test incident reporting.
The dynamic test processes interact with the test management process; dynamic testing is usually
initiated as part of the Test Strategy documentation of the Test Plan for test levels or test types being
executed.
[Figure: diagram of the dynamic test process. Test Design & Implementation produces test
procedures; Test Environment Set-Up produces the Test Environment Readiness Report and Test
Data Readiness Report from the test environment and test data requirements; Test Execution
produces test results leading to a test decision — a defect or retest result feeds the Test Incident
Report, while no defect leads to the Test Completion Report. The process is driven by the Test
Plan and control directives, and returns test measures.]
Figure 10: Dynamic Testing Process Diagram
3.6.2 Test Design and Implementation Process
Test cases are developed to exercise test coverage items, which are derived by applying a test
case design technique to the features being tested.
3.6.3 Purpose
The purpose of the Test Design & Implementation Process is to derive test procedures that will be
executed during the Test Execution Process. As part of this process the test requirements are
analysed, features are combined into feature sets, test conditions, test coverage items, test cases,
test procedures are derived, and test sets are assembled.
As a result of the successful implementation of the Test Design & Implementation Process:
a. The test requirements for each test item are analysed;
b. The features to be tested are combined into Feature Sets;
c. The Test Conditions are derived;
d. The Test Coverage Items are derived;
e. Test Cases are derived;
f. Test Sets are assembled;
g. Test Procedures are derived.
3.6.4 Practical Guidelines
3.6.4.1 Identify feature sets:
a. This describes the expected behaviour of the test item from the test requirements;
b. Based on risk exposure scores prioritise the feature sets;
c. The feature sets and test requirements should be documented for traceability between
them;
d. Group features into sets which can be tested independently of other feature sets
(equivalence partitioning);
e. Get buy-in from stakeholders on the grouping of features into sets and their prioritisation.
3.6.4.2 Derive Testing Conditions
a. Identify test design techniques (Testing Techniques) based on test completion criterion
specified in the Test Plan:
o Boundary Value Analysis (full boundary coverage)
o Equivalence Partitioning
o Decision Table Testing
o State Transition Diagrams
o Use Case Testing
b. Derive test conditions by applying the design technique to each feature;
c. Prioritise test conditions based on risk exposure scores;
d. Document the test conditions in the Test Design Specification document;
e. Document traceability between test conditions and feature sets; and,
f. Get approval for the Test Design Specification document.
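For example, boundary value analysis (listed above) derives test values from just below, on, and just above each boundary of an input range; the 1-100 range here is a hypothetical requirement:

```python
# Illustrative boundary value analysis for a numeric input range: the
# derived values sit just below, on, and just above each boundary.
# The 1-100 range is a hypothetical requirement, not from the guideline.
def boundary_values(minimum, maximum, step=1):
    """Derive test values around both boundaries of a valid range."""
    return [minimum - step, minimum, minimum + step,
            maximum - step, maximum, maximum + step]

items = boundary_values(1, 100)
```

The six derived values achieve full boundary coverage of the range, matching the completion criterion named for this technique.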
3.6.4.3 Derive test Coverage Items:
a. Derive Test Coverage items by applying test design techniques to the test condition;
b. Select the highest priority test coverage item based on risk exposure scores to achieve
the test completion criterion specified in the Test plan;
c. Document the selected test coverage items in the Test Case Specification;
d. Document traceability between test coverage items and test condition;
3.6.4.4 Derive Test Cases:
a. Derive test cases to cover the selected test coverage items;
b. Prioritise the test case based on risk exposure scores;
c. Document the test cases in the Test Case Specification documentation;
d. Document traceability between the test cases and the test coverage items;
e. Get approval for Test Design Specification;
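The traceability documented between test cases and coverage items (and, earlier, between conditions and feature sets) can also be checked mechanically; a sketch, assuming a simple mapping with invented identifiers:

```python
# Hedged sketch of a traceability check: each requirement should trace
# forward to at least one test case (identifiers are illustrative).
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # gap: no test case yet covers this requirement
}

def uncovered(matrix):
    """Requirements with no tracing test case, flagged for follow-up."""
    return sorted(req for req, cases in matrix.items() if not cases)

gaps = uncovered(traceability)
```

A non-empty gap list signals that the Test Design Specification is not yet ready for approval.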
3.6.4.5 Assemble Test Sets:
a. Document test sets in the Test Procedure Specification Documentation;
b. Document traceability between the test sets and test cases;
c. Group test cases into sets based on common execution constraints such as required test
data or test environment; alternatively, all test cases can be executed as a single test set.
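Grouping by common execution constraints can be sketched as keying each test case on its environment and test data; all identifiers below are illustrative:

```python
# Sketch of assembling test sets by grouping test cases on common
# execution constraints (test environment and test data); the case IDs,
# environment names and data files are invented for the example.
from collections import defaultdict

test_cases = [
    {"id": "TC-01", "environment": "staging", "data": "customers.csv"},
    {"id": "TC-02", "environment": "staging", "data": "customers.csv"},
    {"id": "TC-03", "environment": "staging", "data": "orders.csv"},
    {"id": "TC-04", "environment": "perf-lab", "data": "orders.csv"},
]

test_sets = defaultdict(list)
for case in test_cases:
    # Cases sharing an environment and data set can run as one test set.
    test_sets[(case["environment"], case["data"])].append(case["id"])
```

Each resulting test set can then be executed against a single environment set-up without switching test data mid-run.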
3.6.4.6 Derive Test Procedure:
a. Assemble test cases based on any interdependencies to create test procedures;
b. Document any required test data not already specified in the Test Plan in the Test Data
Requirements;
c. Document any test environment requirements not already specified in the Test Plan in
the Test Environment Requirements;
d. Prioritise the test procedure based on risk exposure scores;
e. Document the test procedures in the Test Procedure Specification documentation;
f. Document traceability between the test procedures and test cases;
g. Get approval for the Test Procedure Specification.
3.6.5 Information Items
As a result of this process, the following information items are derived:
a. Test Specifications and related traceability information;
b. Test data requirements; and,
c. Test environment requirements.
3.7 Test Environment Set-Up and Maintenance
3.7.1 Overview
The Test Environment Set-Up and Maintenance Process is used to establish and maintain the
environment in which tests are executed. Maintenance of the test environment may involve changes
based on the results of previous tests. The requirements for a test environment are initially described
in the Test Plan; only after the Test Design and Implementation Process commences do the
requirements for a test environment become clearer.
3.7.2 Purpose
The purpose of the Test Environment Setup and Maintenance Process is to establish and maintain the
required test environment and to communicate its status to all relevant stakeholders.
3.7.3 Outcomes
As a result of the implementation of the Test Environment Setup and Maintenance Process:
a. Test environment is set-up and ready for testing;
b. All relevant stakeholders are informed about the status of the test environment; and,
c. The test environment is maintained.
3.7.4 Activities and tasks
The IT support technicians are responsible for test environment set-up and maintenance; they will
implement the following activities and tasks in accordance with applicable organisational policies and
procedures with respect to the Test Environment Set-Up and Maintenance Process.
3.7.5 Practical Guideline
3.7.5.1 Establish Test Environment
a. Plan set-up of test environment considering the requirements from the Test Plan and
the Test Environment Requirements;
b. Design and build the test environment;
c. Set-up any tools and test data required to support testing;
d. Install the test item on the test environment;
e. Check that the implemented test environment meets the Test Environment
Requirements; and,
f. Inform relevant stakeholders that the environment and test data are ready through the
Test Environment Readiness Report.
3.7.5.2 Maintain Test Environment
a. Maintain test environment based on requirements from the Test Plan and Test
Environment Requirements; and,
b. Communicate to relevant stakeholders of any changes to the test environment.
3.7.6 Information Items
As a result of this process, the following information items are derived:
a. Test Environment;
b. Test Data;
c. Test Environment Readiness report;
d. Test Data Readiness Report; and,
e. Test Environment Updates.
3.8 Test Execution Process
3.8.1 Overview
The Test Execution Process is used to execute the test procedures developed from Test Design and
Implementation Process on the test environment established by the Test Environment Set-Up and
Maintenance Process.
3.8.2 Purpose
The purpose of the Test Execution Process is to run the test procedures created during the Test
Design and Implementation Process in the prepared test environment and record all results.
3.8.3 Outcomes
As a result of implementing the Test Execution Process:
a. Test procedures are executed;
b. The actual results are recorded and compared with the expected results; and,
c. The test results are determined.
3.8.4 Activities and tasks
The team of Software Testers shall implement the following activities and tasks in accordance with
applicable organisation policies and procedures with respect to the Test Execution Process. This
process applies to the manual and automated execution of test procedures (test scripts). The process
involves running the tests, checking the results, and recording the results.
3.8.5 Practical Guideline
3.8.5.1 Test Execution Procedures
a. Execute test procedures; and,
b. Record the actual results for each test case.
3.8.5.2 Compare Test Results
a. Compare actual test results with expected results; and,
b. Determine if the tests passed or failed.
3.8.5.3 Record Test Execution
a. Document test execution occurred, including all significant events in the Test Execution
Log.
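The execute/compare/record cycle above can be sketched as follows; the discount function stands in for a hypothetical test item, and the expected results are invented test data:

```python
# Minimal sketch of the execute/compare/record cycle; the discount
# function is a hypothetical test item standing in for the system under
# test, and the test cases are invented example data.
def apply_discount(total):
    """Test item (illustrative): apply a 10% discount."""
    return round(total * 0.9, 2)

test_cases = [
    {"id": "TC-01", "input": 100.0, "expected": 90.0},
    {"id": "TC-02", "input": 19.99, "expected": 17.99},
]

execution_log = []
for case in test_cases:
    actual = apply_discount(case["input"])                      # execute
    verdict = "pass" if actual == case["expected"] else "fail"  # compare
    execution_log.append({"id": case["id"], "actual": actual,
                          "verdict": verdict})                  # record
```

Each entry records the actual result alongside the pass/fail determination, which is the content carried into the Test Execution Log.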
3.8.6 Information Items
The following information items are produced from this process:
a. Actual test results;
b. Test results; and,
c. Test Execution Log.
3.9 Test Incident Report Process
3.9.1 Overview
The Test Incident Process is used for reporting test incidents; the process is initiated as a result of test
failures, unusual or unexpected occurrences during test execution, or when a retest passes. Inputs to
the activities in this process are:
Test Results
Test Procedures
Test Cases
Test Items
Test basis
Test Execution logs
3.9.2 Purpose
The purpose of the Test Incident Reporting process is to report to all relevant stakeholders any
incidents requiring further action identified as a result of test execution.
3.9.3 Outcomes
As a result of the implementation of the Test Incident Reporting Process:
a. Test results are analysed;
b. New incidents are confirmed;
c. New incident report details are created;
d. The status and details of previously raised incidents are determined;
e. Previously raised incident report details are updated; and,
f. New and/ or updated incident reports are communicated to the relevant stakeholders.
3.9.4 Activities and tasks
The Test Manager and system test managers are responsible for test incident reporting on the project
and system levels respectively.
3.9.5 Practical Guideline
3.9.5.1 Analyse Test Results
a. Where a test relates to an existing incident report, decide how to update it based on
the test result;
b. Where a failed test is unrelated to an existing incident report, decide whether to raise an
incident report, handle it informally (create an action item) or take no action; and,
c. Where the decision is made to handle a failed test informally, assign an action item for
resolution.
3.9.5.2 Create/Update Incident Report
a. Where a test relates to an existing incident report, update the incident report;
b. Where a failed test is unrelated to an existing incident report, create a new incident
report; and,
c. Communicate the status of changed or new incident reports to the relevant
stakeholders.
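The analyse/create/update decision flow in 3.9.5.1 and 3.9.5.2 can be sketched as follows; the verdicts, statuses and incident identifiers are illustrative only:

```python
# Hedged sketch of the analyse/create/update decision flow for incident
# reporting; statuses, verdicts and incident IDs are invented examples.
def handle_result(result, open_incidents):
    """Decide whether a test result updates or creates an incident report."""
    if result["related_incident"] in open_incidents:
        # Retest of a known incident: update its status from the verdict.
        open_incidents[result["related_incident"]] = (
            "closed" if result["verdict"] == "pass" else "open")
        return "updated"
    if result["verdict"] == "fail":
        # New failure with no existing report: raise a new incident.
        open_incidents[f"INC-{len(open_incidents) + 1:03d}"] = "open"
        return "created"
    return "no action"

incidents = {"INC-001": "open"}
outcome1 = handle_result({"related_incident": "INC-001", "verdict": "pass"}, incidents)
outcome2 = handle_result({"related_incident": None, "verdict": "fail"}, incidents)
```

A passing retest closes its related incident, while an unrelated failure raises a new report; either way the changed status is then communicated to the relevant stakeholders.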
3.10 Software Testing Practical Steps
The software testing level practical guideline summarises all the steps to be taken during dynamic
testing execution at test levels and test types.
1. Step 1: Test Planning Process
a. The Test Plan has been developed for testing an application developed for an
organisation and approved by the Project Manager.
2. Step 2: Test Manager will use the Test Plan to implement software testing of the application.
3. Step 3: Test Team commence testing the application based on the Test Strategy developed
during Test Planning process.
4. Step 4: Commence Test Monitoring and Control Process
a. Test Set-Up activities: all measures defined in the Test Strategy, and the means of
monitoring them, are identified; additional measures can be identified during Test
Set-Up activities;
I. Set up the Test Status Report and Test Metrics collection form to collect test
measures (see the sample Test Status Report documentation in the Test
Documentation section).
b. Test team starts monitoring activities;
I. Test measures are collected and recorded in the Test Status Report;
II. Measure progress against Test Plan;
III. Determine if testing is diverging from the Test Plan;
IV. Record any details of factors that may be interrupting test progress; and,
V. Identify new or changed risk if any.
c. Test Control activities:
I. Testers implement instructions from the Project Manager and Test Manager by:
II. Implementing the project Test Plan;
III. Controlling divergence from the Test Plan;
IV. Mitigating new or changed risks;
V. Recommending changes to the Test Plan to relevant stakeholders where
necessary, based on new or changed risks;
VI. Test Manager to check satisfactory completion of test activities assigned to
testers (e.g. performance testing assigned to performance test manager) and
obtains approval for the test completion activities from the Project Manager.
Note: Software Quality Assurance (SQA) must review and approve the Test Status
Report before the Project Manager's final consent. Testing is performed at
various test levels (e.g. unit testing, integration testing, system testing,
acceptance testing) and test types (performance testing, usability testing
etc); each test level and type testing may be assigned to test level or test type
test managers, who are in charge of testing at their assigned test levels or
test types.
d. Reports generated:
I. The Test Status Report describing test activities progress (see Test
Documentation section for Test Status Report);
II. Test Manager reports any new or changed risks to relevant stakeholders
(Project Manager and Development Manager);
III. Risk Register is updated with new or changed risks (Test Plan document
updated);
IV. Project Manager circulates approved Test Status Report to all relevant
stakeholders.
5. Step 5: Test Completion Process
a. Clean-up Test Environment:
I. Test Administrators clean up the test environment for use by another test.
b. Lessons Learned Meeting:
I. The lessons learned meeting is attended by all involved in the test project
(Project, Test & Development Manager, Testers, Programmers, SQA
members, Test Administrator etc);
II. Meeting identifies test experiences that can be reused or avoided; and
improvements in development and project related activities;
III. Lessons learned outcomes are recorded in the Test Completion Report (see
Test Documentation for template); and,
IV. All relevant stakeholders are informed of the lessons learned.
c. Reporting Test Completion:
I. Test Manager gathers all information about the testing performed and
documents it in the Test Completion Report;
II. Solicit approval from relevant stakeholders (Project Manager & SQA); and,
III. Distribute the Test Completion Report.
6. Step 6: Dynamic Test Process:
a. Test Design and implementation:
I. Identify feature sets:
NOTES:
A test that is monitored and controlled could be a dynamic top-level test (e.g. system testing),
a review (e.g. design review) or a lower-level, separately managed test type (e.g. performance
testing). The test technique usually used for dynamic testing (validation) is a black box test
technique. Therefore, in the hypothetical application example above, the technique used to
monitor and control testing of the system and its performance is a black box technique, used
to test for defects and to validate whether the test completion criteria documented in the
Test Plan have been achieved. See Figure 9: Test Monitoring and Control process flow
diagram.
i. Test team analyses the test requirements for each test item and
describes the expected behaviour of the test items;
ii. Identify features to be tested from the test items and group them
into feature sets;
iii. Based on the risk exposure scores determined during risk
identification and estimation (see the Test Planning Process section),
the Test team prioritises the feature sets of the application;
iv. Document the traceability between feature sets and test
requirements in the traceability matrix found in the Test Design
Specification documentation; and,
v. Get stakeholder approval on the grouped feature sets and their
prioritisation.
b. Derive Test Conditions:
I. Identify test technique to use in testing feature sets based on the test
completion criteria;
II. Extract the testable features of the test item (Derive test condition);
III. Prioritise test conditions based on risk exposure scores;
IV. Document test condition in Test Design Specification;
V. Document traceability between test condition and feature sets; and,
VI. Get the Test Design Specification document approved by the Project Manager.
c. Derive Test Coverage Items:
I. Apply test design techniques to test conditions to extract test coverage
items;
II. Prioritise test coverage items based on risk exposure scores to obtain test
completion criterion specified in the Test Plan;
III. Document traceability between test coverage items and test conditions; and,
IV. Document test coverage items in Test Case Specification;
d. Derive Test Case:
I. Extract test cases to cover the selected test coverage items;
II. Prioritise test cases based on risk exposure scores;
III. Document traceability between the test cases and test coverage items; and,
IV. Get Project Manager approval for the Test Design Specification.
e. Assemble Test Sets
I. Group test cases into test sets and document them in the Test Procedure
Specification; grouping should be based on common test execution
restrictions (e.g. on the test execution method, manual or automated); and,
II. Document traceability between test sets and test cases.
f. Derive Test Procedures:
I. Developer in Test organises test cases based on related dependencies to
generate test procedures (test script);
II. Document any required test data and test environment requirements not
already specified in the Test Plan in the Test Data Requirements and Test
Environmental Requirements respectively;
III. Prioritise test procedures based on risk exposure scores;
IV. Document traceability between the test procedures and test cases;
V. Document test procedures in Test Procedure Specification; and,
VI. Get Project Manager approval for the Test Procedure Specification.
7. Step 7: Test Environment Setup & Maintenance Process:
a. Establish Test Environment:
I. Test Administrator plans the set-up of the test environment based on the
requirements from the Test Plan and Test Environment Requirements;
II. Design and build test environment;
III. Set-up tools and test data required for testing;
IV. Install the test item on test environment;
V. Confirm test environment set-up meets the requirements of the Test Plan
and Test Environment Requirements; and,
VI. Test Administrator to inform stakeholders of test environment readiness
through the Test Environment Readiness Report.
b. Maintain Test Environment:
I. Test Administrator to maintain test environment based on requirements
from the Test Plan and the Test Environment Requirements.
8. Step 8: Test Execution Process:
a. Execute Test Procedures (manual or automated)
I. After the Developer in Test has developed the test procedures (test script),
the Tester executes (runs) the test procedures; and,
II. Record actual results of each test case.
b. Compare Test Results
I. Compare actual results with expected (planned) results; and,
II. Determine the test status (pass/fail).
c. Record Test Execution
I. Record that test execution was completed, together with any significant
events, in the Test Execution Log.
9. Step 9: Test Incident Report Process:
a. Analyse Test Results and Create/Update Incident Report
I. If a test is related to an already existing incident report, update the report
with the current test results;
II. Where a test failed and is not related to an existing incident report, it is
recommended to raise an incident report if the failed test is critical to the
system; otherwise, handle it informally and resolve it; and,
III. Report the status of changed or new incident reports to relevant
stakeholders.
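The execution, comparison, logging and incident-handling flow of Steps 8 and 9 can be sketched as follows; this is a minimal illustration with hypothetical names (`TestCaseResult`, `process_result`), not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class TestCaseResult:
    case_id: str
    expected: str
    actual: str
    critical: bool  # whether a failure is critical to the system

    @property
    def status(self) -> str:
        # Step 8b: compare actual and expected results to determine status
        return "pass" if self.actual == self.expected else "fail"

def process_result(result, execution_log, incident_reports):
    """Step 8c: record the execution; Step 9a: create or update incident reports."""
    execution_log.append((result.case_id, result.status))
    if result.status == "fail":
        if result.case_id in incident_reports:
            # Related to an existing incident report: update it
            incident_reports[result.case_id].append(result.actual)
        elif result.critical:
            # New failure critical to the system: raise an incident report
            incident_reports[result.case_id] = [result.actual]
        # Otherwise: handle informally and resolve within the test cycle

execution_log, incident_reports = [], {}
process_result(TestCaseResult("TC-001", "200 OK", "500 Error", critical=True),
               execution_log, incident_reports)
process_result(TestCaseResult("TC-002", "Saved", "Saved", critical=False),
               execution_log, incident_reports)
```

The status of changed or new incident reports would then be reported to the relevant stakeholders, as described in Step 9a.III.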
4 LEVELS OF TESTING
The levels of testing are the stages at which every unit or component of a software system is tested,
starting from unit testing and ending at acceptance testing. The four levels of software
testing are:
1. Unit Testing - Test individual components
2. Integration Testing - Test integrated components
3. System Testing - Test the entire system
4. Acceptance Testing - Test the final system.
Figure 11: Software Testing Levels
Each test level has its purpose and provides value to the Software Development Life Cycle.
4.2 Unit Testing
Units are the smallest testable portions of a system; unit testing aims to verify these portions by
testing each component separately to determine its correctness against its requirements and the
desired functionality. Unit testing is performed at the early stages of the software development
process by developers before the system is handed over to testers for integration, system and
acceptance testing. The advantage of unit testing is that errors are caught early in the development
lifecycle thereby reducing risks and resource waste (time and money).
4.2.1 Purpose
The purpose of unit testing is to improve the code quality by checking each unit of code used to
implement the functional requirements of the system.
Unit testing is a white box testing technique, which involves evaluating the internal code structure of
the program to determine the operations performed are in line with the requirement specification
(test basis) of the system.
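As a minimal illustration of unit testing (the `apply_discount` function and its requirement are hypothetical; Python's standard `unittest` framework is used), a developer might verify a single component like this:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Unit under test (hypothetical): applies a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Verify correctness against the stated requirement
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_discount_rejected(self):
        # Catch errors early, before the system reaches integration testing
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the internal code structure is visible to the developer, the tests can be designed to exercise each path through the unit.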
4.3 Integration Testing
During integration testing, different software modules are combined and tested as a group to assess if
they work correctly together. By testing the units in groups, faults can be detected in the way they
interact together. Testers can use the big bang, bottom-up, top-down or sandwich integration testing
methods to test modules. The method used depends on the test feature sets available during
integration testing. Where some modules are absent during testing, DRIVERS and STUBS are used in
their place: drivers call completed lower-level modules in the bottom-up method, while stubs stand in
for missing lower-level modules in the top-down method.
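To illustrate (all module names here are hypothetical): in the top-down method a stub stands in for a lower-level module that does not yet exist, while in the bottom-up method a throwaway driver exercises a completed lower-level module in place of its missing caller. A minimal Python sketch:

```python
# STUB: replaces a lower-level module that does not exist yet (top-down).
def tax_service_stub(amount: float) -> float:
    # Returns a canned value instead of calling the real tax module
    return amount * 0.10

# Higher-level module under test, wired to the stub
def checkout_total(amount: float, tax_fn=tax_service_stub) -> float:
    return round(amount + tax_fn(amount), 2)

# DRIVER: a throwaway caller that exercises a completed lower-level
# module when the real higher-level module is absent (bottom-up).
def driver_for_tax_module(tax_fn) -> bool:
    return abs(tax_fn(100.0) - 10.0) < 1e-9

print(checkout_total(100.0))            # exercises the integration via the stub
print(driver_for_tax_module(tax_service_stub))
```

When the real modules become available, the stub and driver are simply replaced and the same tests re-run.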
4.3.1 Purpose
The purpose of integration testing is to expose defects in the interaction between integrated units.
Testing at this level is usually executed by testers with coding skills; depending on the test feature sets,
white box or black box testing techniques can be adopted for testing at this level.
4.4 System Testing
System testing is performed on a complete, integrated system to ensure that the overall product
meets the requirements specified. At this stage of testing the software is almost complete and can be
tested in a test environment by specialised testers. Black box testing techniques are used during
system testing; these do not require internal knowledge of the system, such as its design and code.
4.4.1 Purpose
The purpose of System Testing is to evaluate functional and non-functional interoperability of the
system from the user's perspective and the requirement specification document; it plays an
important role in delivering a quality product to the customer.
a. System testing is performed on the integrated system (hardware and software), with the aim
of confirming whether the specified requirements have been met, and whether the system
works as expected;
b. System testing helps deliver a quality product and reduces post-deployment defects,
thereby reducing the cost of fixing bugs after deployment; and,
c. The system testing criteria focuses on:
Program complex functionality testing
Security testing
Adaptability testing
Compatibility testing
Regression testing
Recovery testing
Performance testing
Usability testing
Load and stress testing
Installation testing
User interaction with the system
Scalability testing
Procedure testing
4.5 Acceptance Testing
At this level the software development is complete, and user acceptance testing (UAT) is performed
when functional, system and regression testing is complete; the UAT validates the software against
the requirements specifications (test basis). Black box testing techniques are employed during user
acceptance testing, where internal knowledge of the code is not required for testing.
4.5.1 Purpose
The purpose of Acceptance Testing is to evaluate whether the system complies with end user
requirements and is ready for deployment. The UAT should be approached as an independent
project and should have planning, design and execution phases. By performing the acceptance tests,
testers can discover how the product performs in the user environment. User acceptance, alpha and
beta testing are different types of acceptance testing approaches.
4.6 Software Test Automation Tools
There are many third-party open source and commercial test automation tools available to make
testing easy to manage; some popular test tools used for different test levels and types are:
Selenium
qTest
PractiTest
Zephyr
Test Collab
TestFLO for JIRA
XQual
TestCaseLab
QAComplete
QACoverage
Plutora Test
Inflectra
Meliora Testlab
aqua
Panaya
Testpad
TestingWhiz
HP-UFT
TestComplete
Ranorex
Sahi
Watir
Tosca Testsuite
Telerik TestStudio
WatiN
5 TEST TECHNIQUES
There are many software testing techniques available, choosing a technique is dependent on the test
completion criterion specified in the Test Plan. Test techniques can be categorised as follows:
Specification-Based Testing Techniques
Structure-Based Testing Techniques
Experience-Based Testing Techniques
5.1 Specification-Based Testing Techniques (Black-Box Testing Techniques)
Specification-based testing techniques, also known as behavioural testing techniques, are
commonly called black-box testing techniques; they are applicable when the internal structure of the
test item is unknown. These are input-output driven test techniques; the testers have no knowledge
of the system or its components and are concerned with what the software does, not how it does it.
The following are specification-based or black-box techniques:
Equivalent partitioning (EP)
Boundary value analysis (BVA)
State transition testing (STT)
Decision table testing (DTT)
Combinatorial test techniques
Cause-effect graphing
Classification tree method
Random testing
Scenario testing
Syntax testing
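As an illustration of the first two techniques, consider a hypothetical field that accepts ages 18 to 65 inclusive: equivalence partitioning picks one representative input per partition, while boundary value analysis tests values on and around each boundary.

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical test item: accepts ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
partitions = {"below": 10, "valid": 40, "above": 80}
assert not is_valid_age(partitions["below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["above"])

# Boundary value analysis: values on and either side of each boundary
boundary_values = [17, 18, 19, 64, 65, 66]
results = [is_valid_age(v) for v in boundary_values]
print(results)  # [False, True, True, True, True, False]
```

Both techniques derive the inputs purely from the specification, without looking at the code.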
5.2 Structure-Based Testing Techniques (White Box Testing Techniques)
The structure-based test technique, or white box testing technique, involves evaluating the code
and internal structure of the test item (unit and integration testing). The test is conducted to ensure
that the internal operations perform according to the requirement specification. White box testing is
performed to discover logical errors, design errors and typographical errors (syntax checking).
5.2.1 White Box Testing Techniques
Statement testing;
Decision testing;
Branch condition combination testing;
Data flow testing;
Modified condition decision coverage testing.
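Statement and decision testing can be illustrated with a hypothetical function: statement coverage requires every statement to execute at least once, while decision coverage also requires each decision outcome (true and false) to be exercised.

```python
def grade(score: int) -> str:
    if score >= 50:   # decision: needs both a True and a False outcome
        return "pass"
    return "fail"

# A single test (score=70) executes only the True branch: it covers the
# statements on that path but only half of the decision outcomes.
assert grade(70) == "pass"

# Adding score=30 exercises the False outcome, achieving full decision
# coverage (which, for this function, also yields full statement coverage).
assert grade(30) == "fail"
```

The coverage criterion chosen (statement, decision, branch condition combination, etc.) determines how many such test cases are required.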
5.3 Experience-Based Testing Technique
Experience-based techniques rely on the previous knowledge, skill and background of the tester,
which are important in determining test conditions and test cases. These techniques work well
together with specification-based and structure-based techniques, or when there are limited or no
specifications to reference.
Experience-based test techniques include:
Error guessing;
Checklist-based testing;
Attack testing;
Exploratory testing.
6 TEST DOCUMENTATION
6.1 Overview
Test documentation requires data inputs from the multi-layer test process, which in turn generates
information about testing, which needs to be documented for analysis, decision making or used in
another testing process. This section provides generic information on testing documentation and
reporting used to plan for testing, monitoring and control testing and in test execution. Each test
documentation template is mapped to its associated sections within this document to provide a
seamless flow of data and information required to implement this guideline for test planning, test
management and dynamic test processes.
The documents covered in this section are:
Organisational Test Policy
Organisational Test Strategy
Test Plan
Test Status Report
Test Completion Report
Test Design Specification
Test Case Specification
Test Procedure Specification
Test Data Requirements
Test Environment Requirements
Test Execution Log
Incident Report
These test documents can be used in a flexible manner; they do not have to be generated the same
way as presented in this guideline. Documents can be combined, split or even renamed to suit
organisational requirements and terminology for individual items. Conformance is at the information
level, and all required information should be recorded in a form (hard copy, digital) or a test
management tool.
6.2 Organisational Test Policy Documentation
The Organisational Test Policy is a high-level management document, which spells out the purpose,
goals and scope of testing within the organisation. It provides details on how software testing
supports the organisation’s overall business strategy and mission. It is a non-technical document
describing what testing is expected to be performed across the organisation.
6.2.1 Test Policy Document Template
Item Item Description Mandatory
Objective of Testing * This describes the objective of performing testing
within the organisation
Yes
Test Processes This describes the test processes that would be
followed in the organisation, as documented in
ISO/IEC/IEEE 29119-2; alternatively, an
organisation can develop its own test processes
in line with its test objectives
No
Test Organisation Structure This describes how testing fits within the
organisation’s structure
No
Tester Qualification and
Training
This section describes the required qualification
and training testers should have considering their
role in testing, e.g. ISTQB qualifications to be
attained in the organisation
No
Tester Ethics Describes code of ethics testers are to follow in the
organisation
No
Standards This describes the standards used when testing in
the organisation, e.g. the ISO/IEC/IEEE 29119 set
of standards.
No
Other Relevant Policy Relevant documents like QA Policy and IT Policy as
input into the Test Policy.
No
Measuring the Value of Testing This describes the value of testing in detecting
defects early in the SDLC, in terms of cost saved
or cost which would be incurred after
deployment
No
Test Asset Archiving and Reuse Describes how test assets are to be archived for
reuse by another test project
No
Test Process Improvements Describes how the organisation plans to improve its
testing processes. Input from the project lessons
learned document can be referenced and used in this
section
No
6.3 Organisational Test Strategy Documentation
The Organisational Test Strategy is a generic document which describes how the Test Policy should be
executed. It defines how testing should be implemented in projects within the organisation. The
document is usually found in large organisations and is a useful reference for Test Managers when
developing the project Test Plan.
Note: During the process of developing the Test Plan, the Test Manager may require an asset or
resource which may not be included in the Organisational Test Strategy; in this situation, the Test
Manager would need to determine the appropriate solution even if it contradicts the
Organisational Test Strategy; such a deviation would require stakeholder approval before it can be
adopted in the Test Plan.
6.3.1 Organisational Test Strategy Template
Items Item Description Mandatory
Project-Wide Information This provides guidelines that apply to all projects
in the organisation
No
Generic Risk Management Describes the approach to risk-based testing to
be used.
No
Test Selection/Prioritisation Test case and test procedure prioritisation
approach to be used.
No
Test Documentation and
Reporting
Describes what project wide documentation is to
be developed.
No
Test Automation and Tools Describes the project-wide tools that are to be
used during testing, e.g. test management tools or
defect management tools.
No
Configuration Management Describes test asset configuration management
approach to be used.
No
Incident Management Describes how defects will be managed during
testing
No
Test Sub-Processes Test sub-processes that would be included in
the test project should be listed, e.g. test levels
and test types
No
Test Sub-Process Specification
Information
Detailed information on each test sub-process
should be provided
No
Entry and Exit Criteria Provides entry and exit criteria for each test
process and test sub-process, e.g. when does
system testing start and when does it end.
No
Test Completion Criteria Provides criteria for test completion for a test
sub-process, e.g. 100% test coverage, test incident
report and test completion report.
No
Test Documentation and
Reporting
Lists and describes what documentation is
required as a result of performing the test
sub-process, e.g. Unit Test Case Specification,
Test Completion Report.
No
Degree of Independence Indicates the degree of independence of those
performing testing.
No
Test Design Techniques Specifies the test design technique to be
performed in the test sub-process with its
associated test coverage criteria.
No
Test Environment Describes the test environment to be used for
testing and who is responsible for managing it,
e.g. developer environment for unit testing,
operational cloud environment for performance
testing.
No
Metrics to be collected Describes test metrics to be collected to estimate
the progress, quality and health of software
testing effort.
No
Retesting and Regression
Testing
Describes the approach for regression testing in
the organisation, e.g. regression testing should be
executed only when retests have passed.
No
6.4 Test Plan Documentation
The Test Plan is an important input into the test management process; it describes what will be
tested, when it will be tested, how it will be tested, and by whom. The Test Plan is not a generic
plan for the whole organisation, but a guideline developed for a specific test project.
6.4.1 Test Plan Template
Items Item Description Mandatory
Context of the testing
Test Plan Type Describes what level or type of test plan is being
prepared:
Project Test Plan
Unit Test Plan/ Integration Test Plan/ System
Test Plan/ Acceptance Test Plan
Performance Test Plan/ Usability Test Plan/
Security Test Plan
Yes
Test Item(s)
Describes what should be tested:
A Complete System
One or more Subsystems/ Components/ Units
A mix of both
Yes
Test Scope Describes by listing the features to be tested and
those that will not be tested.
Yes
Assumptions and Constraints Describes the assumptions made concerning
testing and constraints the testing must work
within.
Stakeholders List of who will be involved in testing the project.
Yes
Testing Lines of
Communication
This describes the line of communication on the
test project.
Yes
Risk Register Risk is managed by identifying, analysing and
mitigating risks; all identified risks with their
corresponding risk exposure scores are
documented in the Test Plan along with their
recommended mitigations. Risks should be
classified as project or product risks; the Test
Manager is concerned with product risks, while
all related project risks are forwarded to the
relevant stakeholders (Project Manager or
Development Manager)
Yes
Test Strategy The Test Strategy is part of the Test Plan, which is
developed for the implementation of the
recommended mitigations from the risk analysis.
1. Test Sub-Processes List the test levels and test types that will be
performed during testing, based on the risks
identified.
Yes
2. Test Deliverables Describes a list of all documents expected for a
specific lower level test plan covering a single
dynamic test phase of a test level, e.g. integration
testing.
Test Plan;
Test Status Reports;
Test Design Specifications;
Test Case Specifications;
Test Procedure Specifications;
Test Environment Requirements and
Readiness Report;
Test Results;
Test Execution Log;
Test Completion Report.
Yes
3. Test Design
Techniques
Lists the test techniques to be used and when.
Equivalence Partitioning
Decision Transition Testing
Boundary Value Analysis etc
Yes
4. Test Completion
Criteria
It describes the criteria used to show a test is
complete; the criteria are usually set to 100% for
any test coverage.
Yes
5. Metrics to be
Collected
List of metrics to be measured during testing.
Yes
6. Test Data
Requirements
Lists the needed test data and when it is needed
in the test process
Yes
7. Test Environment
Requirements
Describes the test environment required and when it
is needed.
Hardware
Software
Test tools
External Interfaces
Yes
8. Retesting and
Regression Testing
Describes the approach for, and expected amount
of, retesting and regression testing, which is
useful in estimating test costs.
Yes
9. Suspension and
Resumption Criteria
Explains the situations in which testing would be
suspended and resumed during the test project,
e.g. if expected test items are not available.
Yes
10. Deviations from the
Organisational Test
Strategy
Justifies why the Organisational Test Strategy was
not complied with.
No
Testing Activities and Estimates Lists all activities and corresponding estimates
that are required to be performed to implement
the Test Strategy.
Yes
1. Schedule Indicates when each test activity will be
performed. All activities should be scheduled in
line with the time and resource constraints of the
test project
Yes
2. Staffing
Roles, tasks, and responsibilities: determine
the roles required to complete the activities
listed in the schedule by identifying the
individuals who will fill these roles;
Training: identify training required for staff or
testers to take to allow them to perform the
activities required for the roles; and
Hiring needs: if roles required to perform
scheduled test activities cannot be filled
because skilled testers are insufficient, the
need to hire testers to fill those roles should
be documented
Yes
6.5 Test Status Report Documentation
Use the Test Status Report to document the status of testing performed in a specific reporting period.
6.5.1 Test Status Report Template
Items Item Description Mandatory
Reporting Period Describes the period being reported Yes
Progress against the Test Plan Describes testing performed during the period,
comparing it with planned testing
Yes
Factors Blocking Progress Describes issues that have prevented scheduled
progress of the plan and what action is required
to mitigate them
Yes
Test Measures Provides test measures for the reporting period,
e.g. tests that have been executed, tests that
failed and tests that passed according to the Test
Plan
Yes
New and Changed Risks Indicates details of changes to known risks and
any newly identified risks for the reporting period.
Yes
Planned Testing Describes what testing is planned for the next
period as indicated in the Test Plan
No
6.6 Test Completion Report Documentation
Use the Test Completion Report to record summaries of all testing performed during the test project
6.6.1 Test Completion Report Template
Items Item Description Mandatory
Scope Defines the scope of the Test Completion Report
e.g. report for the whole test project (project test
completion report) or for a test level (acceptance
test completion report) or for a test type
(performance test completion report). The scope
should also describe any exclusions for tests
performed by another organisation reported
separately.
Yes
Testing Performed Provides a summary of test activities performed,
which should refer to the associated Test Plan.
Yes
Deviation from Planned Testing Describes any deviations from the planned testing
(Test Plan) and any new or changed risks.
Yes
Test Completion Evaluation Provides report on whether the achieved level of
test coverage met the specified test completion
criteria. If it failed, it would be categorised as an
outstanding risk or Residual Risks; comments on
why it failed should be included in the report.
Yes
Factors that Blocked Progress Lists the factors slowing or preventing test
progress and actions taken to mitigate. E.g. test
environment unavailable, testers unavailable and
skill shortages and tools unavailable are some of
the factors.
Yes
Test Measures Test measures should be compared with the
expected figures. The measures should match
those metrics required to be collected in the
Test Plan
Yes
Residual Risks Provides details of the unmitigated risks based on
the type of Test Completion Report for the
project, integration testing (test level), or usability
testing (test type).
No
Test Deliverables Provides a list of all deliverables that have been
developed during the testing process and their
repositories.
Test Plan
Test Design Specification; Test Case
Specification; Test Procedure Specification;
Test Data Readiness Report; Test
Environment Readiness Report;
Incident Reports;
Test Status Report;
Test Input Data; Test Output Data
Test Tools created during the testing
activities. etc
No
Reusable Test Assets Provides a list of all reusable test assets and
where they are stored.
Test Input Data
Test Procedure Specifications; and
Test Tools created during testing activity.
No
Lessons Learned Provides details of the outcome of the lessons
learned meeting.
No
6.7 Test Design Specification Documentation
The Test Design Specification identifies the features of the test item that are to be tested and describes
the corresponding test conditions, which are derived from the requirement specification (test basis). This
document is used at all levels of dynamic testing.
6.7.1 Test Design Specification Template
Items Item Description Mandatory
Feature Sets
Overview Lists all feature sets of the test item that are to be
tested based on the test level. Similar feature sets
are grouped for ease of testing.
Yes
Unique Identifier Provides a unique identifier so each feature set
can be uniquely identified from other feature sets
and traced.
Yes
Objectives Describes the objective of the feature sets. Yes
Priority Ranks your list of feature sets so the ones with
the highest priorities are tested ahead of others.
Yes
Specific Strategy References the Test Plan for the test strategy
associated with the feature sets. E.g. Test
technique to be applied and test completion
criteria to be achieved.
Yes
Traceability Tracing feature sets to its test basis. Yes
Test Condition Yes
1. Overview Lists Test Conditions in the feature sets that are
to be executed by Test Cases. These are items of
interest that need to be tested.
2. Unique Identifier Each Test Condition must have a unique
identifier.
Yes
3. Description Describes the Test Condition.
4. Priority Prioritising Test Conditions of the system to be
tested ahead of others.
Yes
5. Traceability Provided to determine the features the Test
Condition is testing; it is usually in the form of a
table determining the completeness of the
relationship by correlating any two baselines, e.g.
features to associated Test Conditions.
Yes
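The traceability entries above can be represented as a simple table correlating two baselines; a minimal sketch with hypothetical identifiers:

```python
# Traceability: each Test Condition points back to the feature set
# (and ultimately the test basis) it covers.
traceability = {
    "TC-COND-001": {"feature_set": "FS-LOGIN", "basis": "REQ-4.1"},
    "TC-COND-002": {"feature_set": "FS-LOGIN", "basis": "REQ-4.2"},
    "TC-COND-003": {"feature_set": "FS-REPORTS", "basis": "REQ-7.3"},
}

def conditions_for(feature_set: str) -> list:
    """Completeness check: which Test Conditions cover a given feature set?"""
    return [cond for cond, link in traceability.items()
            if link["feature_set"] == feature_set]

print(conditions_for("FS-LOGIN"))  # ['TC-COND-001', 'TC-COND-002']
```

A feature set that maps to no Test Conditions would indicate a gap in the test design.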
6.8 Test Case Specification Documentation
The Test Case Specification is a dynamic testing document that describes the test cases and the test
coverage items each test case executes.
6.8.1 Test Case Specification Template
Items Item Description Mandatory
Test Coverage Items
1. Overview Lists the Test Coverage Items traceable to the
specific Test Conditions or system features.
2. Unique Identifier Each Test Coverage Item should be uniquely
identified with a unique identifier.
Yes
3. Description Describes what the Test Coverage Item is meant
to do.
Yes
4. Priority Prioritises Test Coverage Items to be executed
ahead of others with lower priority
Yes
5. Traceability Provided to determine what Test Conditions the
Test Coverage Item is executing
Yes
Test Cases
1. Overview Lists Test Cases related to a specific Test
Coverage Item by applying a given test design
technique.
2. Unique Identifier Provided so each Test Case can be uniquely
identified.
Yes
3. Objectives Describes the objectives of the Test Case
4. Priority Prioritises Test Cases so ones with higher priority
are executed ahead of those with a lower priority
value.
Yes
5. Traceability Provided to relate the Test Coverage Item the Test
Case is executing.
Yes
6. Preconditions Provides information on the required state of the
test environment for the test to run
Yes
7. Inputs Describes steps that need to be performed to run
the Test Case.
Yes
8. Expected Results Describes the expected behaviour and outputs
resulting from the execution of the Test Case.
9. Actual Results Records Test Case results
6.9 Test Procedure Specification Documentation
The Test Cases should be grouped into Test Sets and listed in order of execution with any necessary
set-up and shut-down activities. A test procedure is also known as a test script.
6.9.1 Test Procedure Specification Template
Items Item Description Mandatory
Test Sets
1. Overview Create Test Sets by grouping related Test Cases in
order of execution.
2. Unique Identifier For each Test Set to be uniquely identified. Yes
3. Objectives Describes the objective of the Test Set. Yes
4. Priority Test Sets with higher priority are executed ahead
of those with lower priority.
Yes
5. Test Cases List of related Test Cases that creates the Test
Set.
Yes
Test Procedures
1. Overview Test Procedures (test script) are derived from
Test Cases. They list Test Case in order of
execution.
2. Unique Identifier Test Procedure unique identifier to uniquely
identify each Test Procedure.
Yes
3. Objectives Describes the objective of the Test Procedure
4. Priority Test Procedures with higher priority are executed
ahead of those with lower priority.
Yes
5. Start-Up Provides necessary information on test
environment set-up ahead of executing the first
Test Case in the Test Procedure.
Yes
6. Test Cases Lists Test Cases in order of execution. Yes
7. Relationships to other
procedures
Describes any dependencies between Test
Procedures.
No
8. Stop and Wrap-Up Describes actions required to be performed after
all Test Cases have been executed.
Yes
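The grouping of Test Cases into Test Sets by a common execution restriction (here, manual versus automated execution; all identifiers are hypothetical) can be sketched as:

```python
from collections import defaultdict

# Hypothetical test cases, each carrying a common execution restriction
test_cases = [
    {"id": "TC-01", "execution": "automated", "priority": 2},
    {"id": "TC-02", "execution": "manual",    "priority": 1},
    {"id": "TC-03", "execution": "automated", "priority": 1},
]

# Group test cases into test sets by execution method, then order each
# set by priority so higher-priority cases are executed first.
test_sets = defaultdict(list)
for case in test_cases:
    test_sets[case["execution"]].append(case)
for cases in test_sets.values():
    cases.sort(key=lambda c: c["priority"])

print([c["id"] for c in test_sets["automated"]])  # ['TC-03', 'TC-01']
```

Each resulting Test Set would then be documented in the Test Procedure Specification with its own unique identifier and execution order.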
6.10 Test Data Requirements Documentation
Test data is created to satisfy the input requirements for running Test Cases; the test data
requirements are described in the Test Plan and Test Strategy.
6.10.1 Test Data Requirements Template
Items Item Description Mandatory
Unique Identifier Provided for the Test Data Requirements to be
uniquely identified.
Description Description of the test data.
Responsibility Describes who will organise the provision of the
test data.
Period Needed Describes when the test data is required.
Resetting Needs Describes requirement to reset the data during
testing.
Archiving or Disposal Describes requirement for the archiving or
disposal of test data when testing ends.
Yes
6.11 Test Environment Requirements Documentation
The Test Environment Requirement describes the requirements of the test environment needed to
execute the test. It’s usually described in the Test Plan and Test Strategy.
6.11.1 Test Environment Requirements Template
Items Item Description Mandatory
Unique Identifier Provided so the Test Environment Requirements
can be uniquely identified.
Yes
Description Description of the test environment, with all
necessary hardware, software, and network
details that constitute the test environment,
including set-up and shut-down procedures.
Yes
Responsibility Describes who will organise the set-up and
maintenance of the test environment.
Yes
Period Needed Describes when test environment is required. Yes
6.12 Test Execution Log Documentation
The Test Execution Log documents events that occur during test execution, from start to end.
6.12.1 Test Execution Log Template
Items Item Description Mandatory
Unique Identifier Provided to uniquely identify the Test Execution
Logs.
Yes
Time Describes when the event was observed. Yes
Description Describes unusual events that occur during test
execution.
Impact Describes the effect of the event on test
execution.
6.13 Incident Report Documentation
Incident reports describe any issues found during testing that need to be recorded; usually the
incidents recorded are those that cannot be resolved during the current test cycle.
6.13.1 Incident Report Template
Items Item Description Mandatory
Timing Information Describes when incident occurred Yes
Originator Name of person who identified the incident. Yes
Context | Describes the location of the incident and whether it relates to the test item or a related item specification. Includes a description of the test environment and its configuration, and the Test Procedure and Test Case being executed. | Yes
Description of Incident | Provides an observational account of the incident, describing the steps required to recreate it, with supporting evidence such as logs and screenshots. | Yes
Originator's Assessment of Severity | Describes the level of impact (critical, major, medium, minor) of the incident on users and the business, and any possible known remedy. | Yes
Originator's Assessment of Priority | Provides feedback on the urgency of fixing the defect, whether it affects the system adversely, and whether there is a workaround. Assign one of four levels: immediate, urgent, normal, or low. | Yes
Risks | Indicates whether the incident introduces a new risk or changes an existing risk. Use existing risk scores to estimate the risk. | Yes
Status of the Incident | Describes the current status in the life cycle of the incident; when first found, it is new or open. Subsequent stages could be: assigned, rejected, fixed, tested, and closed. | Yes
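The severity and priority levels and the incident life cycle named in the template can be encoded directly in an incident-tracking tool. The sketch below is illustrative: the allowed transitions between statuses are an assumption (the guideline lists the stages but does not prescribe the transitions between them).

```python
from enum import Enum

class Severity(Enum):                 # levels from the template
    CRITICAL = "critical"
    MAJOR = "major"
    MEDIUM = "medium"
    MINOR = "minor"

class Priority(Enum):                 # levels from the template
    IMMEDIATE = "immediate"
    URGENT = "urgent"
    NORMAL = "normal"
    LOW = "low"

# Assumed life-cycle transitions between the statuses listed in the template
TRANSITIONS = {
    "new": {"open"},
    "open": {"assigned", "rejected"},
    "assigned": {"fixed", "rejected"},
    "fixed": {"tested"},
    "tested": {"closed", "open"},     # assumption: re-open if the re-test fails
    "rejected": set(),
    "closed": set(),
}

def advance(status: str, new_status: str) -> str:
    """Move an incident to a new status if the assumed life cycle allows it."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move incident from {status!r} to {new_status!r}")
    return new_status
```

Restricting status changes to an explicit transition table prevents, for example, a closed incident being silently marked fixed without being re-opened first.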
APPENDIX A
6.14 Software Testing Certifications
Institution | Certification
Quality Assurance Institute, Florida, USA | CAST - Certified Associate in Software Testing
International Software Testing Qualifications Board (ISTQB) | ISTQB Certified Tester, Foundation Level
Quality Assurance Institute, Florida, USA | CSTE - Certified Software Test Engineer
HP | HPO-M102 for UFT
International Software Testing Qualifications Board (ISTQB) | ISTQB Advanced Level - Test Analyst
International Software Testing Qualifications Board (ISTQB) | ISTQB Advanced Level - Technical Test Analyst
International Software Testing Qualifications Board (ISTQB) | ISTQB Advanced Level - Test Manager
International Software Testing Qualifications Board (ISTQB) | ISTQB Expert Level - Test Manager
International Software Testing Qualifications Board (ISTQB) | ISTQB Agile Tester Certification
International Institute for Software Testing | CASTP - Certified Agile Software Test Professional, Practitioner Level (CASTP-P)
International Institute for Software Testing | CASTP - Certified Agile Software Test Professional, Master Level (CASTP-M)
Professional Scrum Developer | PSD Certification
International Software Testing Qualifications Board (ISTQB) | ISTQB Advanced Level - Test Automation Engineer
V Skill | Certified Automation Functional Testing Professional
International Institute for Software Testing | Certified Software Test Automation Specialist
International Institute for Software Testing | Certified Software Test Automation Architect
REFERENCES
Reid, S. (2017) ISO/IEC/IEEE 29119 Software Testing Standards: A Practitioner's Guide. Kindle format [e-book reader]. Available at: www.amazon.com (Accessed: 2 August 2018).
International Standard (2013) ISO/IEC/IEEE 29119-1: Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions. IEEE Xplore [e-book]. Available at: https://ieeexplore.ieee.org/Xplore/home.jsp (Accessed: 14 July 2018).
International Standard (2013) ISO/IEC/IEEE 29119-2: Software and Systems Engineering - Software Testing - Part 2: Test Processes. IEEE Xplore [e-book]. Available at: https://ieeexplore.ieee.org/Xplore/home.jsp (Accessed: 14 July 2018).
International Standard (2013) ISO/IEC/IEEE 29119-3: Software and Systems Engineering - Software Testing - Part 3: Test Documentation. IEEE Xplore [e-book]. Available at: https://ieeexplore.ieee.org/Xplore/home.jsp (Accessed: 14 July 2018).
International Standard (2013) ISO/IEC/IEEE 29119-4: Software and Systems Engineering - Software Testing - Part 4: Test Techniques. IEEE Xplore [e-book]. Available at: https://ieeexplore.ieee.org/Xplore/home.jsp (Accessed: 14 July 2018).
QA Framework (2004) W3C Working Draft: Test Guidelines [online]. Available at: https://www.w3.org/TR/2004/WD-qaframe-test-20040225/#Types (Accessed: 14 July 2018).
Cohen, E. (2018) A Beginner's Friendly Guide to Work Breakdown Structures [online]. Available at: https://www.workamajig.com/blog/guide-to-work-breakdown-structures-wbs (Accessed: 17 July 2018).
Try QA (2018) [online]. Available at: http://tryqa.com/ (Accessed: 20 July 2018).