7/29/2019 Module6 and 4
MODULE-6
Quality Standards
A standard is a document that provides requirements, specifications, guidelines or characteristics
that can be used consistently to ensure that materials, products, processes and services are fit for
their purpose. ISO has published over 19,500 International Standards, which can be purchased
from the ISO Store or from ISO members.
ISO International Standards ensure that products and services are safe, reliable and of good
quality. For business, they are strategic tools that reduce costs by minimizing waste and errors,
and increasing productivity. They help companies to access new markets, level the playing field
for developing countries and facilitate free and fair global trade.
The ISO 9000 series of standards is based on eight quality management principles that senior
management can apply for organisational improvement:
1. Customer focus
2. Leadership
3. Involvement of People
4. Process Approach
5. System approach to management
6. Continual improvement
7. Factual approach to decision-making
8. Mutually beneficial supplier relationships
ISO 9000 series
Developed by the ISO Technical Committee 176, published in 1987 and updated approximately
every five years, the standards comprise five documents whose focus is Quality Assurance
Systems.
These five documents are:
ISO 9001
ISO 9001 is titled Quality Systems - Model for Quality Assurance In Design/Development,
Production, Installation and Servicing.
This is the most comprehensive quality model. Companies pursue this standard when they assure
conformance to specified requirements during several stages, which may include design and
development, production, installation and servicing.
ISO 9002
ISO 9002 is titled Quality Systems - Model for Quality Assurance in Production and Installation.
Companies pursue registration to this standard when they assure conformance to specified
requirements during production and installation.
ISO 9003
ISO 9003 is titled Quality System - Model for Quality Assurance in Final Inspection and Test.
Companies pursue registration to this standard when they assure conformance to specified
requirements only during final inspection and test.
Guideline Standards
ISO 9000 and 9004 are two supporting guideline documents.
ISO 9000
ISO 9000 is titled Quality Management and Quality Assurance Standards - Guidelines for Their
Selection and Use.
This standard provides an overview of the entire series. It describes quality concepts, defines
quality terms and helps a company decide which quality model standard to use.
ISO 9004
ISO 9004 is titled Quality Management and Quality System Elements - Guidelines.
This standard provides further insight into the three quality model standards. It helps a company
develop and implement an internal quality system or evaluate an existing system for ISO 9000
compliance.
SIX SIGMA
Six Sigma at many organizations simply means a measure of quality that strives for near
perfection. Six Sigma is a disciplined, data-driven approach and methodology for eliminating
defects (driving toward six standard deviations between the mean and the nearest specification
limit) in any process from manufacturing to transactional and from product to service.
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per million
opportunities. A Six Sigma defect is defined as anything outside of customer specifications. A
Six Sigma opportunity is then the total quantity of chances for a defect. Process sigma can easily
be calculated using a Six Sigma calculator.
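As a sketch of that calculation (the function name and sample figures below are invented for illustration, not taken from any particular Six Sigma tool), the sigma level can be derived from defects per million opportunities (DPMO) using the inverse normal CDF plus the conventional 1.5-sigma long-term shift:

```python
from statistics import NormalDist

def process_sigma(defects, units, opportunities_per_unit):
    """Return (DPMO, sigma level) for a process, applying the
    conventional 1.5-sigma long-term shift used in Six Sigma."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    # Fraction of opportunities that are defect-free:
    yield_fraction = 1 - dpmo / 1_000_000
    sigma = NormalDist().inv_cdf(yield_fraction) + 1.5
    return dpmo, sigma

# 3.4 defects per million opportunities corresponds to roughly 6 sigma:
dpmo, sigma = process_sigma(34, 10_000_000, 1)
```

At 3.4 DPMO this yields a sigma level of approximately 6.0, matching the definition above.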
The fundamental objective of the Six Sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation reduction
through the application of Six Sigma improvement projects. This is accomplished through the
use of two Six Sigma sub-methodologies: DMAIC and DMADV. The Six Sigma DMAIC
process (define, measure, analyze, improve, control) is an improvement system for existing
processes falling below specification and looking for incremental improvement. The Six Sigma
DMADV process (define, measure, analyze, design, verify) is an improvement system used to
develop new processes or products at Six Sigma quality levels. It can also be employed if a
current process requires more than just incremental improvement.
SEI CMMI Model
SEI CMMI is a process improvement approach that provides organizations with the essential
elements of effective processes.
CMMI can help you make decisions about your process improvement plans.
What is CMM ?
CMM stands for Capability Maturity Model.
Focuses on elements of essential practices and processes from various bodies of knowledge.
Describes common sense, efficient, proven ways of doing business.
CMM is a method to evaluate and measure the maturity of the software development process of an organization.
CMM measures the maturity of the software development process on a scale of 1 to 5.
CMM v1.0 was developed by the Software Engineering Institute (SEI) at Carnegie
Mellon University in Pittsburgh, USA.
CMM was originally developed for Software Development and Maintenance, but it was later extended to:
o Systems Engineering
o Supplier Sourcing
o Integrated Product and Process Development
o People CMM
o Software Acquisition
o Others...
Levels
There are five levels defined along the model and, according to the SEI: "Predictability,
effectiveness, and control of an organization's software processes are believed to improve as the
organization moves up these five levels."
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or
undocumented repeat process.
2. Repeatable - the process is at least documented sufficiently such that repeating the
same steps may be attempted.
3. Defined - the process is defined/confirmed as a standard business process, and
decomposed to levels 0, 1 and 2 (the last being Work Instructions).
4. Managed - the process is quantitatively managed in accordance with agreed-upon
metrics.
5. Optimizing - process management includes deliberate process
optimization/improvement.
Within each of these maturity levels are Key Process Areas which characterise that level, and for
each such area there are five factors: goals, commitment, ability, measurement, and verification.
These are not necessarily unique to CMM, representing as they do the stages that
organizations must go through on the way to becoming mature.
The model provides a theoretical continuum along which process maturity can be developed
incrementally from one level to the next. Skipping levels is not allowed/feasible.
Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in
a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it
may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place and used to establish consistency of process
performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the AS-IS process (e.g., for software development). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability
is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving
process performance through both incremental and innovative technological
changes/improvements.
Module IV: Software Quality Control
Testing Concepts - ad hoc, white box, black box and integration; Cost Effectiveness of
Software Testing - credibility & ROI, right methods; Developing Testing Methodologies -
acquire and study the test strategy, building the system test plan and unit plan.

What is Software Testing?
Software testing is more than just error detection. Testing software is operating the software
under controlled conditions, to (1) verify that it behaves as specified; (2) detect errors; and
(3) validate that what has been specified is what the user actually wanted.
1. Verification is the checking or testing of items, including software, for conformance and
consistency by evaluating the results against pre-specified requirements. [Verification:
Are we building the system right?]
2. Error Detection: Testing should intentionally attempt to make things go wrong to
determine if things happen when they shouldn't, or things don't happen when they should.
3. Validation looks at system correctness, i.e., it is the process of checking that what has
been specified is what the user actually wanted. [Validation: Are we building the right
system?]
AD HOC Testing
Ad hoc testing is an informal testing type whose aim is to break the system. It is usually an
unplanned activity and does not follow any test design techniques to create test cases; in fact,
it does not create test cases at all. It is primarily performed when the testers' knowledge of
the system under test is very high: testers randomly test the application without any test cases
or business requirement document.
Ad hoc testing does not follow any structured way of testing; it is done randomly on any part
of the application. Its main aim is to find defects by random checking. Ad hoc testing can
be achieved with the testing technique called Error Guessing, which can be done by people
having enough experience of the system to guess the most likely sources of errors.
This testing requires no documentation, planning or process to be followed. Since it aims
at finding defects through a random approach, without any documentation, defects will not be
mapped to test cases. Hence it is sometimes very difficult to reproduce defects, as there are
no test steps or requirements mapped to them.
White Box Testing
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method
in which the internal structure/design/implementation of the item being tested is known to the
tester. The tester chooses inputs to exercise paths through the code and determines the
appropriate outputs. Programming know-how and implementation knowledge are essential.
This method is named so because the software program, in the eyes of the tester, is like a
white/transparent box, the inside of which one can clearly see.
EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a
webpage, determines all legal (valid and invalid) and illegal inputs, and verifies the outputs
against the expected outcomes, which are also determined by studying the implementation code.
LEVELS APPLICABLE TO
White Box Testing method is applicable to the following levels of software testing:
Unit Testing: For testing paths within a unit
Integration Testing: For testing paths between units
System Testing: For testing paths between subsystems
However, it is mainly applied to Unit Testing.
White Box Testing Techniques
A major white box testing technique is Code Coverage analysis. Code Coverage analysis
eliminates gaps in a test case suite: it identifies areas of a program that are not exercised by a set
of test cases. Once gaps are identified, you create test cases to verify untested parts of the code,
thereby increasing the quality of the software product.
There are automated tools available to perform code coverage analysis. Below are a few
coverage analysis techniques:
Statement Coverage - This technique requires every possible statement in the code to be
tested at least once during the testing process. Tools: An example of a tool that handles
statement coverage for C++ applications is Cantata++.
Branch Coverage - This technique checks every possible path (if-else and other conditional
loops) of a software application. Tools: An example of a tool that handles branch coverage
testing for C, C++ and Java applications is TCAT-PATH.
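To make the distinction concrete, here is a small illustrative sketch (the function and its values are invented for this example): a single test case can achieve full statement coverage while still leaving a branch untested.

```python
# A toy function (invented for illustration): members get half price.
def discount(price, is_member):
    rate = 0.0
    if is_member:
        rate = 0.5
    return price * (1 - rate)

# One call executes every statement, including the 'if' body, so this
# single test already gives 100% STATEMENT coverage...
assert discount(100, True) == 50.0

# ...but BRANCH coverage is still incomplete: the false branch of the
# 'if' (where the assignment is skipped) has never been taken.
# This second case exercises it, completing branch coverage:
assert discount(100, False) == 100.0
```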
WHITE BOX TESTING ADVANTAGES
Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
Testing is more thorough, with the possibility of covering most paths.
WHITE BOX TESTING DISADVANTAGES
Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
Test script maintenance can be a burden if the implementation changes too frequently.
Since this method of testing is closely tied to the application being tested, tools to
cater to every kind of implementation/platform may not be readily available.
White Box Testing is like the work of a mechanic who examines the engine to see why
the car is not moving.
Black Box Testing
Black Box Testing, also known as Behavioral Testing, is a software testing method in which the
internal structure/design/implementation of the item being tested is not known to the tester.
These tests can be functional or non-functional, though usually functional.
This method is named so because the software program, in the eyes of the tester, is like a black
box, the inside of which one cannot see.
This method attempts to find errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors
EXAMPLE
A tester, without knowledge of the internal structures of a website, tests the web pages by using a
browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected
outcome.
LEVELS APPLICABLE TO
Black Box Testing method is applicable to all levels of the software testing process:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more black box testing
method comes into use.
BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests.
Equivalence partitioning
Equivalence Partitioning is a software test design technique that involves dividing input values
into valid and invalid partitions and selecting representative values from each partition as test
data.
Boundary Value Analysis
Boundary Value Analysis is a software test design technique that involves determination of
boundaries for input values and selecting values that are at the boundaries and just inside/outside
of the boundaries as test data.
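As an illustrative sketch of these two techniques (the age field and its 18-60 valid range are invented for this example):

```python
# Hypothetical system under test: a field that accepts ages 18 to 60.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {"below-range": 5, "valid": 35, "above-range": 99}

# Boundary value analysis: test at, just inside, and just outside
# each boundary of the valid range.
def boundary_values(low, high):
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

for name, age in partitions.items():
    assert is_valid_age(age) == (name == "valid"), name

for age in boundary_values(18, 60):
    assert is_valid_age(age) == (18 <= age <= 60), age
```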
Cause Effect Graphing
Cause Effect Graphing is a software test design technique that involves identifying the cases
(input conditions) and effects (output conditions), producing a Cause-Effect Graph, and
generating test cases accordingly.
BLACK BOX TESTING ADVANTAGES
Tests are done from a user's point of view and will help in exposing discrepancies in the
specifications
Tester need not know programming languages or how the software has been implemented
Tests can be conducted by a body independent from the developers, allowing for an
objective perspective and the avoidance of developer bias
Test cases can be designed as soon as the specifications are complete
BLACK BOX TESTING DISADVANTAGES
Only a small number of possible inputs can be tested and many program paths will be left
untested
Without clear specifications, which is the situation in many projects, test cases will be
difficult to design
Tests can be redundant if the software designer/ developer has already run a test case.
The differences between Black Box Testing and White Box Testing are listed below.
Criteria | Black Box Testing | White Box Testing
Definition | A software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester | A software testing method in which the internal structure/design/implementation of the item being tested is known to the tester
Levels Applicable To | Mainly applicable to higher levels of testing: Acceptance Testing, System Testing | Mainly applicable to lower levels of testing: Unit Testing, Integration Testing
Responsibility | Generally, independent Software Testers | Generally, Software Developers
Programming Knowledge | Not Required | Required
Implementation Knowledge | Not Required | Required
Basis for Test Cases | Requirement Specifications | Detail Design
Integration Testing
In Integration Testing, individual software modules are integrated logically and tested as a group.
A typical software project consists of multiple software modules, coded by different
programmers. Integration testing focuses on checking data communication amongst these modules.
Hence it is also termed as I & T (Integration and Testing), String Testing and sometimes
Thread Testing.
Need of Integration Testing:
Although each software module is unit tested, defects still exist for various reasons:
A module, in general, is designed by an individual software developer whose understanding
and programming logic may differ from other programmers'. Integration testing becomes
necessary to verify that the software modules work in unity.
At the time of module development, there are wide chances of changes in requirements from
the clients. These new requirements may not be unit tested, and hence integration testing
becomes necessary.
Interfaces of the software modules with the database could be erroneous
External Hardware interfaces, if any, could be erroneous
Inadequate exception handling could cause issues.
Integration Test Case:
An Integration Test case differs from other test cases in the sense that it focuses mainly on the
interfaces and the flow of data/information between the modules. Here, priority is to be given to
the integrating links rather than the unit functions, which are already tested.
Sample Integration Test Cases for the following scenario: the application has three modules, say
Login Page, Mail Box and Delete Mails, and each of them is integrated logically.
Here, do not concentrate much on Login Page testing, as it has already been done; instead, check
how it is linked to the Mail Box page. Similarly for the Mail Box: check its integration with the
Delete Mails module.
Test Case ID | Test Case Objective | Test Case Description | Expected Result
1 | Check the interface link between the Login and Mailbox modules | Enter login credentials and click on the Login button | To be directed to the Mail Box
2 | Check the interface link between the Mailbox and Delete Mails modules | From the Mail Box, select an email and click the delete button | Selected email should appear in the Deleted/Trash folder
Approaches/Methodologies/Strategies of Integration Testing:
The software industry uses a variety of strategies to execute integration testing, viz.:
Big Bang Approach
Incremental Approach, which is further divided into the following:
o Top Down Approach
o Bottom Up Approach
o Sandwich Approach - a combination of Top Down and Bottom Up
Below are the different strategies, the way they are executed, and their limitations as well as
advantages.
Big Bang Approach:
Here all component are integrated together at once, and then tested.
Advantages:
Convenient for small systems.
Disadvantages:
Fault Localization is difficult.
Given the sheer number of interfaces that need to be tested in this approach, some interface links to be tested could be missed easily.
Since the integration testing can commence only after all the modules are designed,
testing team will have less time for execution in the testing phase.
Since all modules are tested at once, high-risk critical modules are not isolated and tested
on priority. Peripheral modules which deal with user interfaces are also not isolated and
tested on priority.
Incremental Approach:
In this approach, testing is done by joining two or more modules that are logically related. Then
the other related modules are added and tested for proper functioning. The process continues until
all of the modules are joined and tested successfully.
This process is carried out by using dummy programs called Stubs and Drivers. Stubs and
Drivers do not implement the entire programming logic of the software module but just simulate
data communication with the calling module.
Stub: is called by the Module under Test.
Driver: calls the Module to be tested.
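A minimal sketch of these two roles (the module and function names are invented; real stubs and drivers would match the project's actual interfaces):

```python
def billing_total(cart, tax_service):
    """Module under test: sums the cart and asks a tax service for tax."""
    subtotal = sum(cart)
    return subtotal + tax_service(subtotal)

# Stub: stands in for the real tax module CALLED BY the module under
# test; it returns a canned response instead of real tax logic.
def tax_stub(subtotal):
    return round(subtotal * 0.10, 2)

# Driver: CALLS the module under test, playing the role of the
# not-yet-built checkout module that will eventually invoke it.
def checkout_driver():
    return billing_total([10.0, 20.0], tax_stub)

assert checkout_driver() == 33.0
```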
The Incremental Approach in turn is carried out by two different methods:
o Bottom Up
o Top Down
Bottom up Integration
In the bottom-up strategy, each module at the lower levels is tested with higher modules until all
modules are tested. It takes the help of Drivers for testing.
Diagrammatic Representation:
Advantages:
Fault localization is easier.
No time is wasted waiting for all modules to be developed, unlike the Big Bang approach.
Disadvantages:
Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
Early prototype is not possible
Top down Integration:
In the top-down approach, testing takes place from the top down, following the control flow of the
software system.
It takes the help of Stubs for testing.
Diagrammatic Representation:
Advantages:
Fault Localization is easier.
Possibility to obtain an early prototype.
Critical Modules are tested on priority; major design flaws could be found and fixed first.
Disadvantages:
Needs many Stubs.
Modules at the lower levels are tested inadequately.
Developing Testing Methodologies
The eight considerations listed below provide the framework for developing testing tactics. Each is described in the
following sections.
Acquire and study the test strategy
Determine the type of development project
Determine the type of software system
Determine the project scope
Identify the tactical risks
Determine when testing should occur
Build the tactical test plan
Build the unit test plans
Acquire and Study the Test Strategy
A team familiar with the business risks associated with the software normally develops the test strategy, and the
test team develops the tactics. Thus, the test team needs to acquire and study the test strategy, focusing on the
following questions:
What is the relationship of importance among the test factors?
Which of the high-level risks are the most significant?
Who has the best understanding of the impact of the identified business
risks?
What damage can be done to the business if the software fails to perform
correctly?
What damage can be done to the business if the software is not completed
on time?
Determine the Type of Development Project
The type of project refers to the environment in which the software will be developed, and the methodology used.
Changes to the environment also change the testing risk. For example, the risks associated with a traditional
development effort are different from the risks associated with off-the-shelf purchased software.
Determine the Project Scope
The project scope refers to the totality of activities to be incorporated into the software system
being tested; that is, the range of system requirements and specifications to be understood. The
scope of the testing effort is usually
defined by the scope of the project. New system development has a much different scope from modifications
to an existing system. When defining the scope, consider the following characteristics and then expand the
list to encompass the requirements of the specific software system being tested.
New Systems Development
Automating a manual business process?
Which business processes will or won't be affected?
Which business areas will or won't be affected?
Interfacing to existing systems?
Which existing systems will or won't be affected?
Changes to Existing Systems
Corrective only?
Maintenance re-engineering standards?
Correction to known latent defects in addition to enhancements?
Other systems affected?
Risk of regression?
Identify the Tactical Risks
Strategic risks are the high-level business risks faced by the software system. They
are decomposed into tactical risks to assist in creating the test scenarios that will
address those risks. It is difficult to create test scenarios for high-level risks.
Tactical risks are divided into three categories:
Structural Risks: These risks are associated with the application and the
methods used to build it.
Technical Risks: These risks are associated with the technology used to build
and operate the application.
Size Risks: These risks are associated with the magnitude in all aspects of the
software.
Determine When Testing Should Occur
The previous steps have identified the type of development project, the type of
software system, the type of testing, the project scope, and the tactical risks. That
information should be used to determine the point in the development process at
which testing should occur.
For new development projects, testing can, and should, occur throughout the
phases of a project. For modifications to existing systems, any or all of these may
be applicable, depending on the scope. Examples of test activities to be performed
during these phases are:
Requirements Phase Activities
Determine test strategy
Determine adequacy of requirements
Generate functional test conditions
Design Phase Activities
Determine consistency of design with requirements
Determine adequacy of design
Determine adequacy of the test plans
Generate structural and functional test conditions
Program (Build) Phase Activities
Determine consistency with design
Determine adequacy of implementation
Generate structural and functional test conditions for modules and units
Test Phase Activities
Test application system
Installation Phase Activities
Place tested system into production
Maintenance Phase Activities
Modify and retest
Build the System Test Plan
Using information from the prior steps, develop a System Test Plan to describe the
testing that will occur. This plan will provide background information on the system
being tested, test objectives and risks, the business functions to be tested, and the
specific tests to be performed.
The Test Plan is the road map that will be followed when conducting testing. The
plan is then decomposed into specific tests and lower-level plans. After execution,
the results from the specific tests are rolled up to produce a Test Report.
Build the Unit Test Plans
During internal design, the system is divided into the components or units that
perform the detailed processing. Each of these units should have an individual Test
Plan. The plans can be as simple or as complex as the organization requires based on its quality expectations.
The importance of a Unit Test Plan is to determine when unit testing is complete. It
is not cost effective to submit units that contain defects to higher levels of testing.
The extra effort spent in developing Unit Test Plans, testing units, and assuring that
units are defect free prior to integration testing can have a significant payback in
reducing overall test costs.
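As a sketch of what one unit's tests might look like (the parse_quantity unit and its plan conditions are invented for illustration), each test in the suite maps to one condition from that unit's Test Plan, and unit testing is complete when all planned conditions pass:

```python
import unittest

# Hypothetical unit under test.
def parse_quantity(text):
    """Parse a positive integer quantity; raise ValueError otherwise."""
    value = int(text)          # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class ParseQuantityTests(unittest.TestCase):
    # Each test corresponds to one condition in the unit's Test Plan.
    def test_valid_quantity(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_zero_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

    def test_non_numeric_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("abc")

# Run with: python -m unittest <module>
```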
Verification and validation
The terms Verification and Validation are frequently used in the software testing world.
Criteria | Verification | Validation
Definition | The process of evaluating work-products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase. | The process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements.
Objective | To ensure that the product is being built according to the requirements and design specifications; in other words, to ensure that work products meet their specified requirements. | To ensure that the product actually meets the user's needs, and that the specifications were correct in the first place; in other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.
Question | Are we building the product right? | Are we building the right product?
Evaluation Items | Plans, Requirement Specs, Design Specs, Code, Test Cases | The actual product/software
Activities | Reviews, Walkthroughs, Inspections | Testing
Defect management
A Software Defect / Bug is a condition in a software product which does not meet a software
requirement (as stated in the requirement specifications) or end-user expectations (which may
not be specified but are reasonable). In other words, a defect is an error in coding or logic that
causes a program to malfunction or to produce incorrect/unexpected results.
A program that contains a large number of bugs is said to be buggy.
Reports detailing bugs in software are known as bug reports. (See Defect Report)
Applications for tracking bugs are known as bug tracking tools.
The process of finding the cause of bugs is known as debugging.
The process of intentionally injecting bugs in a software program, to estimate test
coverage by monitoring the detection of those bugs, is known as bebugging.
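A toy sketch of the seeding estimate behind bebugging (the helper name and numbers are invented): if the test suite catches a known fraction of deliberately seeded bugs, that fraction estimates its detection rate for real bugs.

```python
def estimate_real_defects(seeded, seeded_found, real_found):
    """Classic defect-seeding estimate: if the suite found seeded_found of
    `seeded` injected bugs alongside real_found genuine bugs, the
    estimated total number of real bugs is real_found / detection_rate."""
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# If 8 of 10 seeded bugs were caught alongside 24 real ones, we estimate
# roughly 30 real defects exist in total.
assert estimate_real_defects(10, 8, 24) == 30.0
```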
Causes of defects
Human factor
Communication failure
Unrealistic development timeframe
Poor design logic
Poor coding practices
Buggy third-party tools
Lack of skilled testing
Last minute changes