Course Materials Final


CONTENTS

1.1 Evolution of Software Testing ..............................................................................................................7

1.2 What is Software Testing? ....................................................................................................................7

1.3 Goals of Testing ....................................................................................................................7

1.4 Advantages of Testing...........................................................................................................................7

2.1 Introduction to Software process...........................................................................................................9

3.1 CAPABILITY MATURITY MODEL (CMM)SM ...............................................................................13

3.2 COMPONENTS OF CMM.................................................................................................................13

3.3 CMM FRAMEWORK .......................................................................................................................14

3.4 KEY PROCESS AREAS (KPAs) .......................................................................................................18

3.4.1 GOALS.........................................................................................................................................20

4.1 What is ISO? .......................................................................................................................................26

4.2 What is the ISO Process Approach?....................................................................................................26

What are the ISO Elements? .............................................................................................................27

4.3 ISO Benefits ........................................................................................................................28

4.4 ISO Costs.............................................................................................................................................28

4.5 ISO Modules........................................................................................................................................29

4.5.1 General: ISO 9001........................................................................................................................29

4.5.2 Environmental: ISO 14001...........................................................................................................29

4.5.3 Automotive: ISO/TS 16949..........................................................................................................30

4.5.4 Calibration and Testing labs: ISO 17025 .....................................................................................30

5. 1 Description .........................................................................................................................................32

5.2 Use.......................................................................................................................................................32

5.3 Examples ..............................................................................................................................33

5.3.1 Personal Improvement...................................................................................................33

5.3.2 Improving Patient Compliance in Personal Health Maintenance ................................................34

5.3.3 Student Section: Improving Your History-Taking Skills............................................................35

5.3.4 Clinician Section: Improving Your Office ..................................................................................36

6.1 Software Development Life Cycle (SDLC) ........................................................................................39

6.2 Waterfall Model ..................................................................................................................................39

6.3 Prototyping Model...............................................................................................................................42

6.4 Incremental Model ..............................................................................................................................43

6.5 Spiral Model ........................................................................................................................................44

7.1 What is Quality? ..................................................................................................................................47

7.2 How do I manage Quality?..................................................................................................................47

7.3 Definitions of Quality..........................................................................................................................48

7.4 Why does quality matter?....................................................................................................................48

7.5 Why is quality important? ...................................................................................................................49

7.6 Quality management and software development ................................................................................50

7.7 Quality planning ..................................................................................................................................50

7.8 Quality attributes ..................................................................................................................51

7.9 What is a quality assurance system? ...................................................................................................51

7.10 Quality control...................................................................................................................................52

7.11 Difference between QA & QC ..........................................................................................................52

7.12 QA Activity .......................................................................................................................................53


8.1 V & V ..................................................................................................................................................55

8.1.1 Verification: ...................................................................................................................55

8.1.2 Verification Techniques ...............................................................................................................55

8.2 Validation: ...........................................................................................................................................55

8.2.1 Validation Techniques..................................................................................................................55

9.1 Phases of Testing Life cycle................................................................................................................58

10.1 Methods of Testing ...........................................................................................................60

10.1.1 Functional or Black Box Testing ..................................................................................60

10.1.2 Logical or White box Testing. ................................................................................................60

11 White Box Testing...............................................................................................................................62

11.1 The purpose of white box testing ..................................................................................................62

11.2 Types of testing under White/Glass Box Testing Strategy: ..........................................................62

11.2.1 Unit Testing:...............................................................................................................................62

11.2.1.1 Statement Coverage: .............................................................................................63

11.2.2 Branch Coverage: .......................................................................................................................63

11.2.3 Security Testing:.........................................................................................................................63

11.2.4 Mutation Testing: .......................................................................................................................63

11.2.5 Basis Path Testing ......................................................................................................................63

11.2.6 Flow Graphs ...............................................................................................................................63

11.2.7 Cyclomatic Complexity.............................................................................................................65

11.2.8 Deriving Test Cases....................................................................................................................65

11.2.9 Graphical Matrices .....................................................................................................................65

11.2.10 Graph Matrix ............................................................................................................................66

11.2.11 Control Structure Testing .........................................................................................................66

11.2.12 Condition Testing .....................................................................................................................67

11.2.13 Loop Testing.............................................................................................................................67

11.2.13.1 Simple loops:.....................................................................................................................67

11.2.13.2 Nested Loop ......................................................................................................................68

11.2.13.3 Concatenated loops: ..........................................................................................................68

11.3 Advantages of White box testing: .....................................................................................................70

11.4 Disadvantages of white box testing:..................................................................................................70

12.1 Black Box Testing: ............................................................................................................................72

12.2 Testing Strategies/Techniques...........................................................................................................72

12.3 Advantages of Black Box Testing.....................................................................................................73

12.4 Disadvantages of Black Box Testing ................................................................................................73

13.1 Levels of testing ................................................................................................................................75

13.1.1 Unit Testing................................................................................................................................75

13.1.2 Integration testing.......................................................................................................................81

13.1.2.1 Different Approach of Testing ............................................................................................82

13.1.3 System Testing: ..........................................................................................................................86

13.1.3. 1 Testing ................................................................................................................................86

13.1.3. 2 Compatibility Testing.........................................................................................................86

13.1.3. 3 Recovery Testing................................................................................................................86

13.1.3. 4 Usability Testing: ...............................................................................................................87

13.1.3. 5 Exploratory Testing:...........................................................................................................87

13.1.3. 6 Ad-hoc Testing:..................................................................................................................87

13.1.3. 7 Stress Testing: ....................................................................................................................87

13.1.3. 8 Volume Testing: .................................................................................................................87


13.1.3. 9 Load Testing:......................................................................................................................87

13.1.3. 10 Regression testing.............................................................................................................87

13.1.4 Acceptance Testing ....................................................................................................................89

13.1.4.1 User Acceptance Testing:....................................................................................................90

14.1 TEST PLAN ......................................................................................................................................92

14.2 Purpose of Software Test Plan: .........................................................................................................92

14.3 Advantages of test plan .....................................................................................................................92

14.4 Process of the Software Test Plan .....................................................................................................93

14.5 Test plan template .............................................................................................................................93

14. 5.1 Test plan identifier.....................................................................................................................94

14. 5.2 References .................................................................................................................................94

14. 5.3 Objective of the plan .................................................................................................................94

14. 5.4 Test items (functions) ................................................................................................................94

14. 5.5 Software risk issues ...................................................................................................................94

14. 5.6 Features to be tested ..................................................................................................................95

14. 5.7 Features not to be tested ............................................................................................................95

14. 5.8 Approach (strategy) ...................................................................................................................95

14. 5.9 .Item pass/fail criteria ................................................................................................................96

14. 5.10 Entry & exit criteria.................................................................................................................96

14. 5.10 .a Entrance Criteria..............................................................................................................96

14. 5.10 .b Exit Criteria .....................................................................................................................97

14. 5.11 Suspension Criteria and Resumption Requirements: ..............................................................97

14. 5.12 Risks and Contingencies .........................................................................................................97

15.1 Test Case Template .........................................................................................................................102

15.2 Test Case Design Techniques..........................................................................................................103

15.2.1 Equivalence Class Partitioning.................................................................................................103

15.2.2 Boundary value analysis:..........................................................................................................104

15.2.3 Cause-Effect Graphing .............................................................................................................106

15.2.4 State Transition ......................................................................................................107

15. 3 Sample Test Cases..........................................................................................................................109

Test execution..................................................................................................................................111

16.1 Test execution..................................................................................................................................112

16. 2 When to stop testing? .....................................................................................................................112

16. 3 Defect .............................................................................................................................................112

16.3.1 Defect Fundamental .................................................................................................................112

16.4 Report a Bug....................................................................................................................................113

16.4.1 Template to add a Bug..............................................................................................................114

16.4.2 - Contents of a Bug report ........................................................................................................115

16.5 Defect Severity ................................................................................................................................116

16.5.1 Critical ......................................................................................................................................116

16.5.2 Major ........................................................................................................................................116

16.5.3 Average ....................................................................................................................................117

16.5.4 Minor .........................................................................................................................117

16.5.5 Exception..................................................................................................................................117

16.6 Defects Prioritized.......................................................................................................................117

16.6 .1 Urgent ......................................................................................................................................117

16.6 .2 High .........................................................................................................................................118

16.6 .3 Medium ...................................................................................................................................118


16.6 .4 Low..........................................................................................................................................118

16.6 .5 Defer ........................................................................................................................................118

16. 7 Bug Life Cycle ...............................................................................................................................118

16.8 Bug Statuses Used During a Bug Life Cycle ..................................................................................121

16. 8.1 Statuses associated with a bug: ...............................................................................................121

16. 8.1 a New:..................................................................................................................................121

16. 8.1 b Assigned: ..........................................................................................................................122

16. 8.1 c Open: ................................................................................................................................122

16. 8.1 d Fixed:................................................................................................................................122

16. 8.1 e Pending Retest: .................................................................................................................122

16. 8.1 f Retest:................................................................................................................................122

16. 8.1 g Closed:..............................................................................................................................122

16. 8.1 h Reopen:.............................................................................................................................122

16. 8.1 i Pending Rejected:..............................................................................................................122

16. 8.1 j Rejected:............................................................................................................................122

16. 8.1 k Postponed: ........................................................................................................................122

16. 8.1 l Deferred:............................................................................................................................123

16.9 Defect report template .............................................................................................................123

16.10 Defect tracking ..............................................................................................................................124

16. 10 .1 Different Phases of Defect Tracking ....................................................................................124

16. 10 1 a Requirements phase........................................................................................................124

16. 10 1 b Design and analysis phase..............................................................................................124

16. 10 1 c Programming phases.......................................................................................................125

16. 10 1 d Maintenance and enhancement phases...........................................................................125

17.1 What is a test metric? .........................................................................................................127

17.1.a Example of Base Metrics: ...........................................................................................127

17.1.b Example of Derived Metrics: ......................................................................................127

17.3 Objective of test metrics................................................................................................................127

17.4 Why testing metrics? .....................................................................................................................128

17.5 The benefits of having testing metrics ............................................................................................128

17.6 Deciding on the Metrics to Collect..................................................................................................129

17.7 Types of test metrics........................................................................................................................132

17.7.1 Product test metrics ..................................................................................................................133

17.7.2 Project test metrics: ..................................................................................................................136

17.7.3 Process test metrics ..................................................................................................................137


MANUAL TESTING


Introduction to Software Testing


1.1 Evolution of Software Testing

The ability to produce software in a cost-effective way is a key factor in the effective functioning of modern systems, and a number of activities have been put into practice to produce cost-effective software.

The attitude towards software testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered an activity separate from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster, cost-effective, and secure software. This has increased the acceptance of testing as a technical discipline and as a career choice.

1.2 What is Software Testing?

Software testing is the process of exercising the software, or part of the software, with a set of inputs to check whether the required results are obtained. It is the process used to identify the correctness, completeness, and quality of developed software, and a means of evaluating a system or system component to determine whether it meets the requirements of the customer.
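To make the definition concrete, here is a minimal sketch of such a check in Python; the function under test (add) and the expected values are hypothetical and purely illustrative, not part of the course material.

```python
def add(a, b):
    # Hypothetical unit under test: returns the sum of two numbers.
    return a + b

def test_add():
    # Exercise the software with a set of inputs...
    cases = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]
    for (a, b), expected in cases:
        actual = add(a, b)
        # ...and check whether the results are obtained as required.
        assert actual == expected, f"add({a}, {b}) returned {actual}, expected {expected}"

if __name__ == "__main__":
    test_add()
    print("All cases passed")
```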

1.3 Goals of Testing

• Determine whether the system meets the requirements
• Determine whether the system meets the specification
• Find the bugs
• Ensure quality

1.4 Advantages of Testing

• Detects defects early
• Reduces the cost of defect fixing
• Prevents bugs from being detected by the customer
• Ensures that the product works to the satisfaction of the customer


SOFTWARE PROCESS


2.1 Introduction to Software process

The Capability Maturity Model (CMM) has found its way from Carnegie Mellon University's (CMU) Software Engineering Institute (SEI) to major software developers all over the world. Some consider it an answer to the software industry's chaotic problems, and some consider it just another exhaustive framework that requires too much to do with too little to show for it. This article is not intended to be a comprehensive introduction to CMM; interested readers should read the official CMM documentation available from SEI's web site for a comprehensive discussion of CMM. This article is intended to show that CMM is not a framework that advocates magical and revolutionary new ideas, but is in fact a tailored compilation of the best practices in software engineering.

The intention of this article is to introduce CMM as a logical and obvious evolution of software engineering practices. The article does not require any prior knowledge of CMM; however, it is assumed that the reader is cognizant of the issues involved in software development.

Before we move any further, we must define one term that is central to almost every industry: Process. This term has also found its rightful place in the software industry. It was Deming who popularized the term, and the Japanese managed a miraculous industrial revolution based on the simple concept of a process. "Process is a means by which people, procedures, methods, equipment, and tools are integrated to produce a desired end result" [quoted from CMM for Software, version 2B]. Humphrey, in his book Introduction to the PSP (1997), defines a process in the software development context as follows: a process defines a way to do a project; a project typically produces a product; and a product is something you produce for a co-worker, an employer, or a customer.

Now that we know what Process means, how can we use this knowledge to achieve success? The answer lies in the following three-step strategy:

1. Analyze the current process by which your organization executes its projects.

2. Figure out the strengths and weaknesses of the current process.

3. Improve upon your process's strengths and remove its weaknesses.

Bingo: you have a simple recipe for success!

The above seemingly simple steps have baffled the software industry for years. Different software developers have adopted different techniques to implement the three-step recipe, with varying degrees of success.

Having noted down the above "three-step approach to success", we will now concentrate on mastering each of the three steps.

Let us start by considering the normal course of events that follow when a software project is undertaken. We will only outline the steps, without going into the details of each, since our purpose is to highlight the most common events and not their particular details, which may vary depending on the contract and the nature of the project.


Step-1 – The Requirements:

The client gives a set of Requirements of the product to the contracting company (referred to as “the Company”). The first step is to discuss these requirements with the client. The discussion will focus on removing any ambiguity, conflict, or any other issues related to the product in question. The outcome of this discussion will ideally be a “Well-defined set of functionalities that the product will achieve”.

Step-2 – The Planning (cost, time estimates):

The next step is to plan the project. Given the required set of functionalities, the million-dollar question is: "How much time and money will the Company require to complete the project?" Based on these estimates, resources (both human and non-human) will be allocated to the project. Various milestones will be defined to facilitate project monitoring. Plans will also be made to outsource any part of the project, if deemed necessary.

Step-3 – On with the Project:

Assuming that the plans have been made, the team formed, and the estimates put in place, the Company is now ready to start actual work on the project.

Step-4 – How is the Project doing (continuous monitoring):

Once the project is under way, the project team will continuously monitor its progress against the plans and milestones made in Step-2.

Step-5 – How are the sub-contractors doing:

In Step-2, if the Company decided to outsource or sub-contract a part of the project, then the sub-contractors will also be managed and their progress monitored closely. This will ensure that no delays occur due to lapses caused by the sub-contractor.

Step-6 – Software Quality Assurance:

In Step-4 the Company monitored the project for any cost overrun or schedule slippage, but that is not all that needs to be monitored. An in-budget, within-time project may still have serious problems. In Step-4 the Company ensured that the project is going according to schedule and is within budget, but is it doing what it is supposed to do? That is, are all the tasks completed according to the Company's standards and according to the requirements agreed in Step-1? In Step-6, the Company ensures that no work is done in violation of any standard or any system requirement.

Step-7 – Handling the inevitable changes:

A software project usually involves different teams working on different aspects of the project; for example, one team may be coding a module while another is writing the user manual. Although each team works on a certain aspect of the project, the project is eventually going to be delivered as a single product. It is evident that all the teams MUST coordinate their work to produce a well-integrated final product. In Step-2, the plan was well laid, and all the involved personnel were assigned their share of the work. But some changes will almost always have to be made, and these changes may affect more than one team. Therefore it is necessary for the Company to ensure that all the bits and pieces of the project remain well coordinated. The Company must determine whether a change made to one piece of the product also necessitates a change to one or more other pieces, and if it does, then those changes must be made accordingly. In software terms this is called "Configuration Management".

One can come up with many other activities that a software company would normally follow, but we will stop here and focus only on the above-mentioned activities.

It is obvious that the above-mentioned activities are performed by almost all software companies; so what is it that makes one company a Microsoft while another goes belly up?


The answer is simple: not all companies observe the above steps with the same vigor. These steps are all very simple to understand but extremely difficult to execute effectively.

The purpose of the above discussion was to enable the readers to appreciate the need for a guideline, or a road map, that software companies can follow to produce quality software within budget and within time. One such roadmap is the Capability Maturity Model (CMM).


CAPABILITY MATURITY MODEL (CMM)


3.1 CAPABILITY MATURITY MODEL (CMM)SM

The Capability Maturity Model, as already mentioned, is the outcome of decades of research and study of successful and unsuccessful projects. The major philosophy of CMM is very similar to life itself. When a child is born, it is at a very "initial" level of maturity. The child grows up, learns, and attains a higher level of maturity. This keeps going until he or she becomes a fully mature adult, and even after that the learning goes on.

According to CMM, a software company also goes (or should go) through similar maturity evolutions. The CMM maturity levels are discussed later.

Readers should notice that CMM is NOT a software development life cycle model. Instead it is a strategy for improving the software process irrespective of the actual life-cycle model used [Schach 1996].

Let's dive right into the intricacies of CMM.

3.2 COMPONENTS OF CMM

Given below is a brief explanation of the various components of CMM. This explanation has been extracted from SEI's official documents. This section is followed by a more detailed explanation of each component.

Maturity levels

A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. The five maturity levels provide the top-level structure of the CMM.

Process capability

Software process capability describes the range of expected results that can be achieved by following a software process. The software process capability of an organization provides one means of predicting the most likely outcomes to be expected from the next software project the organization undertakes.

Key process areas

Each maturity level is composed of key process areas. Each key process area identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important for establishing process capability at that maturity level. The key process areas have been defined to reside at a single maturity level. For example, one of the key process areas for Level 2 is Software Project Planning.

Goals

The goals summarize the key practices of a key process area and can be used to determine whether an organization or project has effectively implemented the key process area. The goals signify the scope, boundaries, and intent of each key process area. An example of a goal from the Software Project Planning key process area is "Software estimates are documented for use in planning and tracking the software project." See "Capability Maturity Model for Software, Version 1.1" [Paulk93a] and Section 4.5, Applying Professional Judgment, of this document for more information on interpreting the goals.


Common features

The key practices are divided among five Common Features sections: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation. The common features are attributes that indicate whether the implementation and institutionalization of a key process area is effective, repeatable, and lasting. The Activities Performed common feature describes implementation activities. The other four common features describe the institutionalization factors, which make a process part of the organizational culture.

Key practices

Each key process area is described in terms of key practices that, when implemented, help to satisfy the goals of that key process area. The key practices describe the infrastructure and activities that contribute most to the effective implementation and institutionalization of the key process area. For example, one of the practices from the Software Project Planning key process area is "The project's software development plan is developed according to a documented procedure."

As mentioned earlier, the above description of the various components of CMM has been taken from SEI's official documents. The readers need not worry if they don't understand some or all of what has been written above; I will explain each component in detail in the sections below.

3.3 CMM FRAMEWORK

MATURITY LEVELS

CMM defines five levels of maturity:

Level 1: Initial Level

This is the lowest of the maturity levels; you may consider it the immature level. At this level the software process is not documented and is not fixed; everything in these companies is done on an ad-hoc basis. Projects are usually late, over budget, and have quality issues. This does not mean that a company at this level cannot do successful projects. As a matter of fact, the author himself works for a company that is somewhere between Level 1 and Level 2, and despite this it has a very impressive track record of producing quality software, within budget and within time. Companies at Level 1 manage to produce good software mainly because of the immense competence of their personnel. These companies are characterized by heroes: individuals with good programming, communication, and people skills. It is because of these individual heroes that companies at Level 1 manage to complete successful projects. Most of the companies around the world are at Level 1. These companies make their decisions on the spur of the moment, rather than anticipating problems and fixing them before they occur. Software developers in these companies are usually over-worked, over-burdened, and spend a major portion of their time re-working or fixing bugs. The success of a project depends totally on the team working on the project and on the project manager's abilities, rather than on the company's processes. As the team changes, or some key individuals of the team leave, the project usually falls flat on its face.

Level 2: Repeatable Level

At this level basic software project management practices are in place. Project planning, monitoring, and measurement are done according to certain well-defined processes. Typical measurements include tracking of costs and schedule. The results of these measurements are used in future projects to make better and more realistic project plans. Projects have a stronger chance of being successful, and if a project is unsuccessful the mistakes are recorded and thus avoided in future projects. The key point is that without measurements it is impossible to foresee and detect problems before they get out of hand.

Level 3: Defined Level

At this level the process of software development is fully documented. Both the managerial and technical aspects are fully defined, and continued efforts are made to improve the process. At this level CASE tools are used for the development of software. If a Level 1 company tries to follow the activities involved in Level 3, the results are usually disastrous, because in CMM a preceding level lays the groundwork for the next level. In order to achieve Level 3, one must first achieve Level 2.

An example of a documented process could be "the process for identifying software defects/bugs". This process may be documented by using a checklist for the identification of common defects; the checklist may contain entries like "all variables initialized, all pointers initialized, all pointers deleted, all exceptions caught", etc. The process of defect identification may also include the total count of defects and the category of each software defect. A company may use any method to document its processes; CMM places no compulsion on how a process should be documented. The only compulsion is that the process should be documented in such a manner that a new recruit to the company can easily do his or her job by reading the documentation.
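As a rough illustration of what such a documented checklist might look like in practice, here is a minimal Python sketch; the checklist items, module name, and review results are hypothetical, not something prescribed by CMM.

```python
# Hypothetical sketch of a documented defect-identification checklist.
# The checklist items and the review findings below are illustrative only.
CHECKLIST = [
    "All variables initialized",
    "All pointers initialized",
    "All pointers deleted",
    "All exceptions caught",
]

def review_module(module_name, findings):
    """Record, per checklist item, how many defects were found in a module."""
    defects = []
    for item in CHECKLIST:
        count = findings.get(item, 0)
        if count > 0:
            defects.append({"check": item, "count": count})
    return {"module": module_name, "defects": defects}

# Example usage with made-up review results.
report = review_module("billing.c", {"All pointers deleted": 2, "All exceptions caught": 1})
print(report)
```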

Level 4: Managed Level

Level 3 provides a way to document the processes; Level 4 allows that documentation to be used in a meaningful manner. Level 4 involves software metrics and statistical quality control techniques. In Level 3, I gave an example of documenting a software defect/bug identification process. Imagine that the total count of defects per thousand lines of code turns out to be 500. Level 4 would have activities aimed at identifying the root cause(s) of these bugs, and would set goals to decrease the defect count to a reasonable level.
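A minimal sketch of the kind of measurement this implies, in Python; the defect records, line count, and root-cause labels are invented purely for illustration.

```python
# Hypothetical Level 4 style measurement: defect density plus root-cause grouping.
# All figures and labels below are invented for illustration.
from collections import Counter

defects = [
    {"id": 1, "root_cause": "uninitialized pointer"},
    {"id": 2, "root_cause": "uninitialized pointer"},
    {"id": 3, "root_cause": "missing exception handler"},
]
lines_of_code = 6000

# Defect density = number of defects / thousands of lines of code (KLOC).
density_per_kloc = len(defects) / (lines_of_code / 1000.0)
print(f"Defect density: {density_per_kloc:.1f} defects per KLOC")

# Grouping defects by root cause shows where improvement goals should be set.
for cause, count in Counter(d["root_cause"] for d in defects).most_common():
    print(f"{cause}: {count}")
```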

Level 5: Optimizing Level

The software environment changes all the time. Technology changes and so do the techniques. Level 5 deals with the ongoing changes and with ways to improve the current processes to meet the changing environment. In essence Level 5 provides a positive feedback loop. Level 5 is about continuous improvement. A company at Level 5 uses statistical methods to incorporate future changes and to be receptive to ideas and technologies for continuous growth.

The above discussion will make sense only to readers who already know about CMM; for others, the above lines may just add to the confusion. Once again I remind the readers that "patience has its virtue": CMM is a vast subject, and a few lines cannot even begin to explain it. The rest of the article breaks the above levels down further, with the hope that this will help the readers in understanding CMM. So if the above discussion has left you confused and has not added much to your understanding of CMM, then keep reading, as the best is yet to come :)


3.4 KEY PROCESS AREAS (KPAs)

Each level (Level 1, 2, 3, 4, 5) has been divided into certain KPAs. For a company to achieve a certain maturity level, it must fulfill all the KPAs of the desired maturity level. Since every company is at least at Level 1, there are no Key Process Areas for Level 1, meaning that a software company does not need to do anything to be at Level 1. You may think of Key Process Areas as the "TO DOs of a maturity level", or a task list that must be performed. A Key Process Area contains a group of common activities that a company must perform to fully address that Key Process Area. Given below is the list of KPAs for each maturity level.

Level 1 – Initial

Level 2 - Repeatable

• Requirements Management
• Software Project Planning
• Software Project Tracking & Oversight
• Software Subcontract Management
• Software Quality Assurance
• Software Configuration Management

Level 3 - Defined

• Organizational Process Focus
• Organizational Process Definition
• Training Program
• Integrated Software Management
• Software Product Engineering
• Intergroup Coordination
• Peer Reviews

Level 4 - Managed

• Quantitative Process Management
• Software Quality Management

Level 5 - Optimizing

• Defect Prevention
• Technology Change Management
• Process Change Management


There are 18 KPAs in CMM. So what should the reader make of the above KPAs? A detailed book on CMM would explain what each KPA means, but within the space and scope restrictions of this article I cannot delve deep into each KPA. Just by reading the KPAs, readers will realize that some of them immediately make sense while others are difficult to understand. For example, the "Peer Reviews" KPA of Level 3 is easily understood, and so are most of the KPAs of Level 2. However, KPAs like "Organizational Process Focus", "Organizational Process Definition", and "Integrated Software Management" are difficult to understand without some explanation. There is a reason why some of the KPAs are easily understood while others take considerable effort: the KPAs that are usually practiced by many companies (namely the KPAs of Level 2) are the ones that are easily understood, while the other KPAs alienate us, not because they are abstract terms churned out in the labs of CMU, but simply because most of the companies in the world do not follow the activities encompassed by those KPAs. And that is why CMM is such a wonderful roadmap to follow: it tells us exactly what successful, big software companies have been doing to achieve success.

Unfortunately the scope of this article restricts me from explaining the above KPAs in detail.

What CMM tells us by virtue of the above KPAs is this: for a company to level with the best, it MUST address all 18 KPAs. Failing to address one or more of them results in a relatively immature company, and hence in decreased productivity and increased risk.

3.4.1 GOALS

Looking at the KPAs, an obvious question comes to mind: how can a company be sure that it has successfully addressed a KPA? CMM assigns GOALS to each KPA. In order to successfully address a KPA, a company must achieve ALL the goals associated with that KPA. Given below is the complete list of goals associated with each of the above 18 KPAs.

Level 2 - Repeatable

• Requirements Management
  o GOAL 1: System requirements allocated to software are controlled to establish a baseline for software engineering and management use.
  o GOAL 2: Software plans, products, and activities are kept consistent with the system requirements allocated to software.

• Software Project Planning
  o GOAL 1: Software estimates are documented for use in planning and tracking the software project.
  o GOAL 2: Software project activities and commitments are planned and documented.
  o GOAL 3: Affected groups and individuals agree to their commitments related to the software project.

• Software Project Tracking & Oversight
  o GOAL 1: Actual results and performances are tracked against the software plans.
  o GOAL 2: Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans.
  o GOAL 3: Changes to software commitments are agreed to by the affected groups and individuals.

• Software Subcontract Management
  o GOAL 1: The prime contractor selects qualified software subcontractors.
  o GOAL 2: The prime contractor and the software subcontractor agree to their commitments to each other.
  o GOAL 3: The prime contractor and the software subcontractor maintain ongoing communications.
  o GOAL 4: The prime contractor tracks the software subcontractor's actual results and performance against its commitments.

• Software Quality Assurance
  o GOAL 1: Software quality assurance activities are planned.
  o GOAL 2: Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively.
  o GOAL 3: Affected groups and individuals are informed of software quality assurance activities and results.
  o GOAL 4: Noncompliance issues that cannot be resolved within the software project are addressed by senior management.

• Software Configuration Management
  o GOAL 1: Software configuration management activities are planned.
  o GOAL 2: Selected software work products are identified, controlled, and available.
  o GOAL 3: Changes to identified software work products are controlled.
  o GOAL 4: Affected groups and individuals are informed of the status and content of software baselines.

Level 3 - Defined

• Organizational Process Focus (GOAL 1, GOAL 2)
• Organizational Process Definition (GOAL 1, GOAL 2)
• Training Program (GOAL 1, GOAL 2)
• Integrated Software Management (GOAL 1, GOAL 2)
• Software Product Engineering (GOAL 1, GOAL 2)
• Intergroup Coordination (GOAL 1, GOAL 2)
• Peer Reviews (GOAL 1, GOAL 2)

Level 4 - Managed


• Quantitative Process Management (GOAL 1, GOAL 2)
• Software Quality Management (GOAL 1, GOAL 2)

Level 5 - Optimizing

• Defect Prevention (GOAL 1, GOAL 2)
• Technology Change Management (GOAL 1, GOAL 2)
• Process Change Management (GOAL 1, GOAL 2)

• Common Features
• Key Practices

The interrelationship of the terms discussed above can be best shown by the following diagram:


[Figure: The Structure of the Capability Maturity Model]


ISO


4.1 What is ISO?

ISO or the International Organization for Standardization is a non-governmental organization that was established in 1947. ISO includes a network of 146 national standards bodies (as of 12/31/02) from the world’s leading industrial nations. One of the main goals of ISO is to develop worldwide standardization by promoting adoption of international quality standards. By doing so, barriers of trade are eliminated.

ISO has created 13,736 standards as of 12/31/02 in a variety of industries. Examples of standards ISO has created include the standardized codes for country names, currencies and languages, standardized format of worldwide telephone and banking cards, as well as sizes and colors of road signs, and automobile bumper heights.

ISO includes 2,937 technical working bodies (as of 12/31/02), in which some 30,000 experts from industry, labor, government, and standardization bodies in all parts of the world develop and revise standards. ISO has created standards for the automotive, manufacturing, mechanics, packaging, and health care fields amongst many others.

4.2 What is the ISO Process Approach?

The ISO standards are structured around the Process Approach concept. Two of the eight quality management principles are key to understanding this approach:

• Process Approach - Understand and organize company resources and activities to optimize how the organization operates.

• System Approach to Management - Determine sequence and interaction of processes and manage them as a system. Processes must meet customer requirements.

Therefore, when company resources and activities are optimally organized, and managed as a system, the desired result is achieved more efficiently.

In order to effectively manage and improve your processes, use the Plan-Do-Check-Act or PDCA cycle as a guide. First, you Plan by defining your key processes and establishing quality standards for those processes. Next, you Do by implementing the plan. Thirdly, you Check by using measurements to assess compliance with your plan, and finally, you Act by continuously improving your product performance.


What are the ISO Elements?

ISO standards are documented rules and guidelines for implementing a quality system into your company. Specific technical specifications and/or other specific criteria may also be included depending on the standard you select.

The ISO 9001 standard is a model of a quality system, describing the processes and resources required for registration of a company's quality system. This ISO System diagram shows the management system and processes that are part of the ISO quality management standard. A brief summary of the key requirements is detailed below.

• QMS - Document processes necessary to ensure product or service is of high quality and conforms to customer requirements.

• Management Responsibility - Provide a vision. Show commitment. Focus on the customer. Define policy. Keep everyone informed.

• Resource Management - Assign the right person to the job. Create and maintain a positive workspace.

• Product Realization - Clearly understand customer, product, legal and design requirements. Ensure specifications are followed. Check your suppliers.

• Measurement, Analysis & Improvement - Identify current and potential problems. Monitor and measure customer satisfaction. Perform internal audits. Fix problems.

Implementing ISO in your company is a management decision that requires consideration of your organization’s operations, strategy, staff and, most importantly, your customers.

ISO standards are now readily being applied by organizations in industries ranging from manufacturers and labs to auto suppliers and pharmaceuticals. In many instances, the choice to implement an ISO standard into a company is not only the result of a company seeking to improve quality, efficiency, and profitability, but also as a result of ISO implementation being:

• Mandated by certain Industry Leaders, as the Big Three (DaimlerChrysler, Ford and GM) have required of automotive suppliers (see ISO/TS 16949 for more information on deadlines)

• Required by your Customers, especially internationally-focused businesses

• Required by overseas regulatory bodies for suppliers of quality-sensitive products, e.g. medical devices

• Necessary to maintain market presence and a competitive advantage


For whatever reason your company decides to pursue or update its ISO certification, you need to consider the benefits and costs involved with this process.

4.3 ISO Benefits

ISO standards are a guide that can help transform your company’s quality system into an effective system that meets and exceeds customer expectations. Your company will start to realize these benefits as you implement and adhere to the quality standards, and you will see the internal and external benefits accrue over time.

Internally, processes will be aligned with customer expectations and company goals, therefore forming a more organized operating environment for your management and employees. Product and service quality will improve which decreases defects and waste. Process improvements will help to motivate employees and increase staff involvement. Products and services will be continually improved. All of these internal benefits will continually drive better financial results, hence creating more value for your business.

As for the external benefits, ISO certification shows your customers and suppliers worldwide that your company desires their confidence, satisfaction and continued business. Your company also has the opportunity to increase its competitive advantage, retain and build its customer list, and more easily respond to market opportunities around the world.

4.4 ISO Costs

Although the costs of implementation can be offset with increased sales, reduced defects and improved productivity throughout the organization, the investment of implementing and maintaining an ISO quality system needs to be considered.

Many factors should be considered when calculating your company’s ISO implementation costs. The time, effort and money your organization puts into ISO registration depends on the number of employees, locations, the ISO standard selected for registration and the current state of your quality system and processes. Typical costs include:

• Management and employee time and effort


• Upgrading and creating documentation
• Training employees
• Registration fees
• Maintenance

As with implementation of any new tool, the key to minimizing costs is to arm yourself with knowledge about the process, and then to design a sensible plan that has realistic objectives, adequate resources and a practical time schedule. Having a leader or consultant to guide you through the process and manage deadlines can also help you to control costs and achieve your goals more quickly. In addition, if you have multiple locations or departments, costs can be minimized by leveraging the information you learn and the resources you use as you move through the implementation and maintenance process.

4.5 ISO Modules

Overman & Associates specializes in the following ISO modules:

4.5.1 General: ISO 9001

ISO 9001 defines the rules and guidelines for implementing a quality management system into organizations of any size or description. The standard includes process-oriented quality management standards that have a continuous improvement element. Strong emphasis is given to customer satisfaction. ISO 9001 registered companies can give their customers important assurances about the quality of their product and/or service.

If your company is currently registered to the ISO 9001:1994 standard, you must update your quality system to the ISO 9001:2000 standard by December 15, 2003. Additionally, companies registered to the discontinued ISO 9002 or ISO 9003 must also transition to the ISO 9001:2000 standard by December 15, 2003 to maintain a valid certification.

4.5.2 Environmental: ISO 14001

ISO 14001 defines Environmental Management best practices for global industries. The standard is structured like the ISO 9001 standard. ISO 14001 gives Management the tools to control environmental aspects, improve environmental performance and comply with regulatory standards. The standards apply uniformly to organizations of any size or description.


4.5.3 Automotive: ISO/TS 16949

ISO/TS 16949 defines global quality standards for the automotive supply chain. These QMS standards are gradually replacing the multiple national specifications now used by the sector. The main focus of these standards is on COPs (Customer Oriented Processes) and how each key process relates to the company strategy.

Depending on your place in the automotive supply chain or the current standard to which you subscribe, ISO/TS 16949 compliance dates vary:

• For DaimlerChrysler’s Tier 1 suppliers worldwide, the transition from QS-9000 to ISO/TS 16949 must be complete by July 1, 2004

• For the Big Three’s Tier 1 suppliers worldwide, the transition from QS-9000 to ISO/TS 16949 must be complete by December 14, 2006

• For other OEMs, evidence suggests that the transition will most likely also be required by a 2006 deadline

Additionally, ISO 9001/2/3:1994 registered companies are required to upgrade their system to the ISO 9001:2000 standard by December 15, 2003. If you are one of many automotive suppliers currently registered to both QS-9000 (the standard on which ISO/TS 16949 is based) and ISO 9001/2/3:1994, your company should also consider a transition to ISO/TS 16949 by December 15, 2003. For practical reasons, it may be difficult, confusing and costly to meet both the QS-9000 and the revised ISO 9001 standard requirements and then have to upgrade your system to ISO/TS 16949 shortly thereafter.

4.5.4 Calibration and Testing labs: ISO 17025

ISO 17025 contains specific calibration and testing lab requirements in addition to the ISO 9001 quality standards. The central focus of these standards is on calculation of measurement uncertainty as well as assuring quality and repeatability of measurement results. ISO 17025 applies to independent and in-house labs.


PDCA


5.1 Description

The PDCA (or PDSA) Cycle was originally conceived by Walter Shewhart in the 1930s, and later adopted by W. Edwards Deming. The model provides a framework for the improvement of a process or system. It can be used to guide the entire improvement project, or to develop specific projects once target improvement areas have been identified.

5.2 Use

The PDCA cycle is designed to be used as a dynamic model. The completion of one turn of the cycle flows into the beginning of the next. Following in the spirit of continuous quality improvement, the process can always be reanalyzed and a new test of change can begin. This continual cycle of change is represented in the ramp of improvement. Using what we learn in one PDCA trial, we can begin another, more complex trial.

Plan - a change or a test, aimed at improvement.

• In this phase, analyze what you intend to improve, looking for areas that hold opportunities for change. The first step is to choose areas that offer the most return for the effort you put in - the biggest bang for your buck. To identify these areas for change, consider using a flow chart or Pareto chart.

Do - Carry out the change or test (preferably on a small scale).

• Implement the change you decided on in the plan phase.

Check or Study - the results. What was learned? What went wrong?

• This is a crucial step in the PDCA cycle. After you have implemented the change for a short time, you must determine how well it is working. Is it really leading to improvement in the way you had hoped? You must decide on several measures with which you can monitor the level of improvement. Run charts can be helpful with this measurement.

Act - Adopt the change, abandon it, or run through the cycle again.

• After planning a change, implementing and then monitoring it, you must decide whether it is worth continuing that particular change. If it consumed too much of your time, was difficult to adhere to, or even led to no improvement, you may consider aborting the change and planning a new one. However, if the change led to a desirable improvement or outcome, you may consider expanding the trial to a different area, or slightly increasing your complexity. This sends you back into the Plan phase and can be the beginning of the ramp of improvement.
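The repeating, cumulative nature of the cycle can also be sketched in code. The short Python sketch below is purely illustrative and is not part of the PDCA literature: the change, the measurement (a random score stands in for a real one) and the target are invented for the example. The point is only to show one turn of the cycle feeding the next, with the Act step deciding whether to raise the baseline or replan.

    import random

    def plan(baseline):
        """Plan: choose a change aimed at improvement and set a measurable target."""
        return {"change": "study notes while exercising", "target": baseline + 5}

    def do(change):
        """Do: carry out the change on a small scale and take a measurement.
        (A random score stands in for a real measurement in this sketch.)"""
        return random.randint(60, 95)

    def check(result, target):
        """Check/Study: did the change produce the improvement we hoped for?"""
        return result >= target

    def act(improved, baseline, result):
        """Act: adopt the change (raise the baseline) or abandon it and replan."""
        return result if improved else baseline

    def pdca(baseline, turns=3):
        for _ in range(turns):              # each completed turn flows into the next
            trial = plan(baseline)
            result = do(trial["change"])
            improved = check(result, trial["target"])
            baseline = act(improved, baseline, result)
        return baseline

    print(pdca(baseline=70))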

5.3 Examples

Personal Improvement - Example 1: The student with poor grades
Improving Patient Compliance in Personal Health Maintenance - Example 2: The businesswoman who wants to lose weight
Student Section: Improving Your History-Taking Skills - Example 3: Feedback for the medical student
Clinician Section: Improving Your Office - Example 4: The Medical Student who made a difference

5.3.1 Personal Improvement

The PDCA cycle is a valuable process that can be applied to practically anything. In this chapter, we discuss cases related to patient care and medical student performance, but the PDCA cycle can be used in everything from making a meal to walking your dog. An immediate concern of yours may be improving your study skills.

Example 1: The Student with Poor Grades

Isabel is a first-year medical student who has just taken her first set of examinations and is very unhappy with the results.

• What is she trying to accomplish? Isabel knows that she needs to improve her studying skills in order to gain a better understanding of the material.

• How will she know that a change is an improvement? Isabel considers the most important measure of her study skills to be her exam grades. However, she does not want to risk another exam period just to find out that her skills are still not good. She decides that a better way to measure improvement is by taking old exams.

• What changes can she make that will result in improvement? Isabel thinks that she has spent too little time studying. She feels that the best way to improve her study skills is by putting in more hours.

Cycle 1 Plan: Isabel decides to add an additional thirty hours per week to her already busy schedule. She resolves that she must socialize less, get up earlier, and stay up later. At the end of the week she will take an old exam to see how she is progressing. Do: By the end of the week, Isabel finds that she was able to add only fifteen hours of studying. When she takes the exam she is dismayed to find that she does no better. Check: The fifteen extra hours of studying has made Isabel feel fatigued. In addition, she finds that her ability to concentrate during those hours is rather limited. She has not exercised all week and has not seen any of her friends. This forced isolation is discouraging her. Act: Isabel knows that there must be another way. She needs to design a better, more efficient way to study that will allow her time to exercise and socialize.

Cycle 2 Plan: Isabel contacts all her medical school friends who she knows are doing well yet still have time for outside lives. Many of these friends have similar advice that Isabel thinks she can use. Based on her findings, she decides to always attend lectures, to rewrite her class notes in a format she can understand and based on what the professor has emphasized, and to use the assigned text only as a reference. Do: Isabel returns to her original schedule of studying. However, instead of spending a majority of her time poring over the text, she rewrites and studies her notes. She goes to the text only when she does not understand her notes. When Isabel takes one of the old exams, she finds that she has done better, but she still sees room for improvement. Check: Isabel now realizes that she had been spending too much time reading unimportant information in the required text. She knows that her new approach works much better, yet she still feels that she needs more studying time. She is unsure what to do, because she doesn't want to take away from her social and physically active life. Act: Isabel decides to continue with her new studying approach while attempting to find time in her busy day to study more.


Cycle 3 Plan: In her search for more time to study, Isabel realizes that there are many places that she can combine exercising and socializing with studying. First, she decides to study her rewritten notes while she is exercising on the Stairmaster. Next, she intends to spend part of her socializing time studying with her friends. Do: Isabel's friends are excited about studying together, and their sessions turn into a fun and helpful use of everyone's time. Isabel has found that she enjoys studying while she exercises. In fact, she discovers that she remains on the Stairmaster longer when she's reading over her notes. When Isabel takes her exams this week, she is happy to find that her grades are significantly higher. Check: Isabel now knows that studying does not mean being locked up in her room reading hundreds of pages of text. She realizes that she can gain a lot by studying in different environments while focusing on the most important points. Act: Isabel chooses to continue with the changes she has made in her studying habits. What Isabel initially thought would be an improvement turned out to only discourage her further. Many people who are in Isabel's place do not take the time to study their changes and continue them even though they lead down a disheartening path. By using the PDCA cycle, Isabel was able to see that her initial change did not work and that she had to find one that would better suit her. With perseverance and the willingness to learn, Isabel was able to turn a negative outcome into a positive improvement experience.

5.3.2 Improving Patient Compliance in Personal Health Maintenance

Designing and implementing a patient's plan for health care is a dynamic process. Therefore, it is not uncommon for even the best-intentioned care plans to fail on the first attempt. When this happens, the provider and patient must carefully reconsider, reevaluate, and redesign the health improvement plan to make it more compatible with the patient's lifestyle and needs. The PDCA cycle aids in this reevaluation process by providing a systematic approach to improvement.

Example 2: The Business Woman Who Wants to Lose Weight

Mrs. T is a 55-year-old white woman, a successful buyer. She is 10 pounds overweight, suffers from high blood pressure, and lacks muscle tone.

• What is she trying to accomplish? Mrs T. and her doctor are trying to find and implement a viable exercise regimen for her. The goal is to design an exercise schedule that the patient can maintain despite traveling four days a week on business.

• How will she know that a change is an improvement? Improvement will be measured by how frequently she exercises and for how long, and whether her blood pressure decreases.

• What changes can she make that will result in improvement? The doctor and patient need to design a plan that she enjoys as well as one that she can (and will) follow, even when she is traveling.

Cycle 1 Plan: Ride an exercise bike four days a week for twenty minutes. To continue her exercise program while traveling, Mrs. T will make reservations only at hotels equipped with gyms. She will also lease an exercise bike for her home. Do: Mrs. T tries to exercise four days a week for twenty minutes. The patient finds that the exercise bike is too difficult and makes her back sore. She can ride for only three minutes before she gets dizzy and has to stop. Mrs. T finds that at hotels, it is hard to get time on the bike, since there are usually many people who want to use it. Check: Mrs. T exercised only one day a week and could go for only three minutes. The patient is not motivated to use the exercise bike because she doesn't enjoy it. Also, the hassle about using bikes at hotels is a big hindrance. Mrs. T needs to find an exercise that permits her to set her own pace and her own hours. Act: Mrs. T and her doctor decide to find a different program.

Cycle 2 Plan: Mrs. T will try a treadmill instead of the exercise bike. Do: Mrs. T tries to exercise four days a week for twenty minutes, but can go for only about five minutes before she gets bored. Also, she feels sick after getting off the treadmill. There was no problem finding an available treadmill at the hotels. Check: Mrs. T exercised twice a week for five minutes. However, the patient did not enjoy it. She enjoys the walking but has trouble with motion sickness. Act: Mrs. T will continue to walk but will walk outside to avoid inconvenient gym hours and the motion sickness. The patient considers purchasing a dog, knowing that this will provide greater motivation to walk and make it more enjoyable.


Cycle 3 Plan: Mrs. T will get a dog and walk it every morning she is home. When she is away, she will try to take a short sight-seeing trip on foot, while her husband takes care of their dog at home. Do: Mrs. T exercises as frequently as possible. She finds walking her dog very enjoyable and does it every day she is home (approximately three days a week) for about forty-five minutes. When she is away, she tries to take a walking tour of the city. This isn't always possible but occurs about 50 percent of the time. Check: Mrs. T exercises three to six days a week for at least twenty minutes. She finds walking the dog most enjoyable because of the early-morning fresh air. Her blood pressure has become less elevated as well. Act: Now that she has found a program she enjoys, Mrs. T decides to commit herself to this new exercise regimen: walking the dog and sight-seeing by foot. By directly considering Mrs. T's needs as well as Mrs. T's likes and dislikes, the physician and the patient were able to design and implement an unconventional but highly effective exercise program that improved both the physical and the emotional wellness of the patient.

5.3.3 Student Section: Improving Your History-Taking Skills

In the first year of medical school, many students are taught to take histories from patients. Some students are comfortable with this process, but others feel like they're barely keeping their heads above water. Whether you are the former or the latter, it would be beneficial to get feedback on your strengths and weaknesses so that you can become a better history taker. The PDCA cycle does just that. It allows medical students to gather knowledge about their interviewing skills and then walks them through different tests of change to see whether the desired improvement really works.

Example 3: Feedback for the Medical Student

Jake is a first-year medical student at Dartmouth Medical School (DMS). He visits a local primary care provider's office twice a month, where he works on interviewing different patients. Although he is comfortable talking to patients, he is unsure whether he's asking them the right questions. Sometimes he is at a loss for things to ask, and there are moments of awkward silence. The provider that Jake works with, Dr. Eastman, is a kind man who teaches Jake a lot about medicine but never gives Jake feedback on how he is doing.

• What is he trying to accomplish? Jake would like to improve his history-taking skills.

• How will he know that a change is an improvement? Jake knows that he needs more information concerning his history-taking skills. The only way he can get that information is through feedback from others in the medical field. He decides that the most important measure of his performance should come from Dr. Eastman.

• What changes can he make that will result in improvement? Jake is unsure how to answer this question. He feels confident in his ability to take a patient history. The only weakness he feels is a lack of questions to ask.

Cycle 1 Plan: Jake asks Dr. Eastman to sit in on at least two interviews so that he can receive immediate feedback. On any interview that Dr. Eastman doesn't sit in on, Jake will see the patient first and report all his findings. Do: Dr. Eastman is very busy the next time Jake visits him, and he sits in on only one interview. However, he has his nurse practitioner, Ms. Irvine, observe Jake for two additional interviews. Because Dr. Eastman is so busy, Jake doesn't have time to report his findings to him. Check: The feedback that Dr. Eastman and Ms. Irvine gave Jake was very different. Dr. Eastman told Jake that he was doing a good job but that he forgot to ask a couple of questions in the HPI. Ms. Irvine said that Jake needed to work on asking open-ended questions and pausing to let the patient think. In addition, she mentioned that he completely left out the social history. Act: Jake decides to make some changes that will affect both his history taking and the feedback he is receiving. He needs more feedback from both Dr. Eastman and Ms. Irvine, in addition to other sources such as his classmates and the doctors he works with at school.

Cycle 2 Plan: Jake decides to continue receiving regular feedback from both Dr. Eastman and Ms. Irvine. He specifically asks Dr. Eastman what questions he may have missed while interviewing and what the doctor thinks of his interviewing style. Jake also works with other medical students at mock interviewing. He tries to find a group of four so that two can watch and critique while Jake interviews the fourth student. Finally, DMS tests its students' interviewing skills twice a year during observed structural clinical encounters (OSCEs). In this process, medical students are videotaped while they interview patients (paid actors). Jake just went through his first OSCE a month ago. He received feedback from the mock patient he interviewed, but he also wants feedback from some of the physicians who run the OSCE program. He sets up a time to meet with them to watch his video. Do: It takes only two weeks for Jake to receive more feedback. Dr. Eastman seems more comfortable criticizing Jake now that he knows what he wants. Also, Jake and his fellow classmates have a lot of fun doing the mock interviews. Check: Jake receives a lot more feedback from Dr. Eastman, who notes that Jake tends to rush patients and ask closed-ended (yes or no) questions. "Take the time to let them tell their story," Ms. Irvine tells him. In the OSCE videotape, Jake and the physician who watched it with him notice that he needs to work on his skills taking blood pressures, that he missed the social history, and that he didn't ask any questions regarding the patient's habits. In addition, the videotape reveals Jake's poor habit of rushing the patient and asking closed-ended questions. In the mock interviews with his peers, Jake notices that he is slowing down and does a better job covering the social history aspect of the interview. Act: Jake decides to continue receiving regular feedback from Dr. Eastman and Ms. Irvine. He also continues to meet with his peers to work on his interviewing skills and receive criticism from them. Jake works on all the weaknesses he discovers in these learning sessions when he sees real patients in Dr. Eastman's office. Jake's major improvements came from his ability to study his changes in the check phase of the PDCA cycle. In this phase, Jake was able to recognize that Dr. Eastman and Ms. Irvine provided different kinds of feedback. This knowledge led him to a second PDCA cycle in which he experimented with using more and different health care professionals to test his history-taking performance. As Jake proceeds with each cycle, he will gain more knowledge and continue to improve his history-taking skills.

5.3.4 Clinician Section: Improving Your Office

As a first-year medical student, your role can extend far beyond just practicing your history-taking skills. You have an untainted perspective that attacks problems with a freshness that your office is probably unaccustomed to and will probably treasure. But simply throwing out ideas for change every time one pops into your head is not the way to effect change; instead, use the PDCA cycle.

Let us see how it works in an office setting like yours.

Example 4: The Medical Student Who Made a Difference

Tucker is a first-year medical student who follows a preceptor in a small family practice office. At a recent lunch break at this office, Tucker listened in as the four physicians complained about the high volume of patients they were referring to specialists.

• What are they trying to accomplish? Improvement is certainly needed in this referral process.

• How will they know that a change is an improvement? The major measure that this practice is interested in is the number and type of referrals. Another metric the practice is concerned about is financial productivity.

• What changes can they make that will result in improvement? Tucker knew that there were opportunities for improvement here, so he decided to apply the PDCA cycle.

Cycle 1 Plan: Tucker asked his preceptor for all her referrals in the past six months. After stratifying the referrals by specialty, Tucker realized that 70 percent of the patients went to the orthopedics department at the local tertiary care center, mostly for sprained ankles and knee trauma. He also noted that a number of the initial calls to the family practice came when the office was closed, on weekends and after 5 p.m. Tucker presented this information to his preceptor, and together they realized that the practice might benefit from a change in its delivery of orthopedic care. Their plan was simple: have the orthopedics department at the local hospital train the four physicians in the practice how to treat sprained ankles and some knee trauma. Since the local hospital physicians are on a salaried status, not fee-for-service, there is no disincentive for this training. Do: The family practitioners arranged for a one-week, after-hours training session in these two areas of high-volume injuries. They decided that they would test this change for two months to determine whether they would be able to reduce the number of referrals and maintain their patients' continuum of care at the practice. They also decided to stay open until 9 p.m. every Wednesday and from 10 a.m. to 1 p.m. every Sunday as an open clinic. One physician, one nurse, and one administrator would staff each open clinic.


Check: The practice is interested in the number and type of referrals, as well as financial productivity. After two months of implementing this change, the number of orthopedic referrals fell by 30 percent compared with the same period in previous years. By staying open longer, treating more patients, and referring less, the profits at the practice were 18 percent higher than they were during those two months in any previous year. Further, although they had no formal metric for patient satisfaction, all four physicians received positive feedback for the orthopedic care they were delivering and for their new convenient open clinic. Act: Clearly, this change resulted in major improvement. The physicians decided to institute this change permanently. Because of its success, the physicians are considering applying this technique to other specialties to which they refer patients.

As demonstrated by this case study, the PDCA cycle can be applied to any situation. By employing the PDCA cycle, the family practice first carefully assessed what needed to be changed and then implemented an effective improvement plan. Implementing an improvement plan that is hastily selected rarely leads to effective change. This family practice did not fall into the trap of shooting without properly aiming.


SDLC


6.1 Software Development Life Cycle (SDLC)

The software development life cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product. The phases of the SDLC can vary somewhat but generally include the following: conceptualization; requirements and cost/benefit analysis; detailed specification of the software requirements; software design; programming; testing; user and technical training; and finally, maintenance.

There are several methodologies or models that can be used to guide the software development lifecycle. Some of these include:

- Linear or waterfall model (which was the original SDLC method)
- Rapid application development (RAD)
- Prototyping model
- Incremental model
- Spiral model

6.2 Waterfall Model

The waterfall model derives its name from the cascading effect from one phase to the next, as illustrated in Figure 1.1. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase.

Note that this model is sometimes referred to as the linear sequential model or the software life cycle.


The model consists of six distinct stages, namely:

1. In the requirements analysis phase

(a) The problem is specified along with the desired service objectives (goals)

(b) The constraints are identified

2. In the specification phase the system specification is produced from the detailed definitions of (a) and (b) above. This document should clearly define the product function.

Note that in some texts, the requirements analysis and specification phases are combined and represented as a single phase.

3. In the system and software design phase, the system specifications are translated into a software representation. The software engineer at this stage is concerned with:

• Data structure
• Software architecture
• Algorithmic detail and
• Interface representations


The hardware requirements are also determined at this stage, along with a picture of the overall system architecture. By the end of this stage the software engineer should be able to identify the relationship between the hardware, software and the associated interfaces. Any faults in the specification should ideally not be passed 'downstream'.

4. In the implementation and testing phase, the designs are translated into the software domain

• Detailed documentation from the design phase can significantly reduce the coding effort.

• Testing at this stage focuses on making sure that any errors are identified and that the software meets its required specification.

5. In the integration and system testing phase all the program units are integrated and tested to ensure that the complete system meets the software requirements. After this stage the software is delivered to the customer [Deliverable – The software product is delivered to the client for acceptance testing.]

6. The maintenance phase is usually the longest stage of the software life cycle. In this phase the software is updated to:

• Meet changing customer needs

• Adapt to changes in the external environment

• Correct errors and oversights previously undetected in the testing phases

• Enhance the efficiency of the software

Observe that feedback loops allow for corrections to be incorporated into the model. For example, a problem or update discovered in the design phase requires a 'revisit' to the specification phase. When changes are made at any phase, the relevant documentation should be updated to reflect that change.

Advantages

• Testing is inherent to every phase of the waterfall model

• It is an enforced disciplined approach

• It is documentation driven, that is, documentation is produced at every stage

Disadvantages

The waterfall model is the oldest and the most widely used paradigm. In practice, however, projects rarely follow its sequential flow, owing to the inherent problems associated with its rigid format. Namely:

• It only incorporates iteration indirectly, thus changes may cause considerable confusion as the project progresses.

• As the client usually has only a vague idea of exactly what is required from the software product, the waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of the project.

• The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems are precipitated to this stage.


6.3 Prototyping Model

The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved from which the complete system or product can now be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users.

There are several steps in the Prototyping Model:

1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the departments or aspects of the existing system.

2. A preliminary design is created for the new system.

3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

4. The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users.

5. The first prototype is modified, based on the comments supplied by the users, and a second prototype of the new system is constructed.

6. The second prototype is evaluated in the same manner as was the first prototype.

7. The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype represents the final product desired.

8. The final system is constructed, based on the final prototype.

The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
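The iterate-until-satisfied structure of these steps can be sketched as a simple loop. Everything in the Python snippet below is hypothetical (the feature names, the stubbed user feedback and the iteration cap are invented for illustration); it only captures the trial-and-error flow between developers and users described above.

    # Hypothetical sketch of the prototyping loop: build, evaluate, rework, repeat.

    def build_prototype(requirements):
        """Construct a scaled-down approximation of the final product."""
        return {"features": list(requirements)}

    def collect_feedback(prototype):
        """Users note strengths, weaknesses, additions and removals (stubbed here)."""
        missing = [f for f in ("search", "export") if f not in prototype["features"]]
        return {"add": missing, "remove": [], "satisfied": not missing}

    requirements = ["edit", "save"]
    for iteration in range(1, 6):                    # cap the number of iterations
        prototype = build_prototype(requirements)
        feedback = collect_feedback(prototype)
        if feedback["satisfied"]:
            print(f"Users satisfied after iteration {iteration}; build the final system.")
            break
        # Modify the next prototype based on the users' comments.
        requirements = [r for r in requirements if r not in feedback["remove"]]
        requirements += feedback["add"]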


6.4 Incremental Model

This model combines the elements of the waterfall model with the iterative philosophy of prototyping. However, unlike prototyping, the incremental model focuses on the delivery of an operational product at the end of each increment.

An example of this incremental approach is observed in the development of word processing applications where the following services are provided on subsequent builds:

1. Basic file management, editing and document production functions

2. Advanced editing and document production functions

3. Spell and grammar checking

4. Advance page layout

The first increment is usually the core product, which addresses the basic requirements of the system. This may either be used by the client or subjected to detailed review to develop a plan for the next increment. This plan addresses the modification of the core product to better meet the needs of the customer, and the delivery of additional functionality. More specifically, at each stage

· The client assigns a value to each build not yet implemented

· The developer estimates cost of developing each build

· The resulting value-to-cost ratio is the criterion used for selecting which build is delivered next

Essentially the build with the highest value-to-cost ratio is the one that provides the client with the most functionality (value) for the least cost. Using this method the client has a usable product at all of the development stages.
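As a rough, hypothetical illustration of that selection rule, the Python sketch below assigns invented value and cost figures to the word-processing builds mentioned above and repeatedly picks the build with the highest value-to-cost ratio as the next one to deliver.

    # Hypothetical builds: client-assigned value vs. developer-estimated cost.
    builds = {
        "basic file management and editing": {"value": 100, "cost": 20},
        "advanced editing and document production": {"value": 60, "cost": 30},
        "spell and grammar checking": {"value": 40, "cost": 10},
        "advanced page layout": {"value": 30, "cost": 25},
    }

    def next_build(remaining):
        """Return the build with the highest value-to-cost ratio."""
        return max(remaining, key=lambda name: remaining[name]["value"] / remaining[name]["cost"])

    delivery_order = []
    while builds:
        chosen = next_build(builds)
        delivery_order.append(chosen)
        del builds[chosen]          # deliver it, then re-rank what is left

    print(delivery_order)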

Incremental Model

• Iterative: many releases (increments)
  - First increment: core functionality
  - Successive increments: add/fix functionality
  - Final increment: the complete product
• Each iteration: a short mini-project with a separate lifecycle (e.g., waterfall)
• Increments may be built sequentially or in parallel

Iterative & Incremental Model

• Outcome of each iteration: a tested, integrated, executable system
• Iteration length is short and fixed (e.g., 2 weeks, 4 weeks, 6 weeks)
• Takes many iterations (e.g., 10-15)
• Does not try to "freeze" the requirements and design speculatively
  - Rapid feedback, early insight, opportunity to modify requirements and design
  - Later iterations: requirements and design become stable


[Figure: Incremental model - increments #1, #2 and #3 each pass through the A, D, C, T, M phases, delivering successive versions over time with features accumulating at each increment.]

6.5 Spiral Model

The spiral model is a software development model combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts.

The spiral model was defined by Barry Boehm. This model was not the first model to discuss iteration, but it was the first model to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. This persisted until around 2000.

Each phase starts with a design goal (such as a user interface prototype as an early phase) and ends with the client (which may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied to each phase of the project, with an eye toward the end goal of the project.

So, for a typical shrink-wrap application, this might mean that you have a rough-cut of user elements (without the pretty graphics) as an operable application, add features in phases, and, at some point, add the final graphics.

The spiral model is not used today (2004) as such. However, it has influenced the modern-day concept of agile software development. Agile software development tends to be rather more extreme in its approach than the spiral model.


QUALITY


7.1 What is Quality?

Quality is the customer’s perception of how a good or service is fit for their purpose and how it satisfies stated and implicit specifications.

Quality in an organization is best achieved by Management creating a Quality Management System (QMS). A QMS is a formalized system that documents the company structure, management and employee responsibilities, and the procedures required to deliver a quality product or service. Four quality tools should be utilized when creating a QMS: the Quality Manual, Standard Operating Procedures (SOPs), work instructions, and supporting documentation such as flowcharts and quality records. All four tools must be consistent, coherent and work together to increase the perceived value of the good or service.

7.2 How do I manage Quality?

Quality Management is effectively managing your company QMS to achieve maximum customer satisfaction at the lowest overall cost. Quality Management (QM) is a continuous process that requires inputs of time, effort and commitment from all company resources.

Eight QM principles form the foundation for effective quality management:

1. Customer Focus - Understand your customer’s needs. Measure customer satisfaction. Strive to exceed their expectations.

2. Leadership - Management establishes the strategy and leads the company toward achieving its objectives. Management creates an environment that encourages staff to continuously improve and work towards satisfying the customer.

3. People Involvement - Train your staff effectively. Teamwork and full employee involvement makes quality a reality.

4. Continuous Improvement - Continue to make things better.

5. Process Approach - Understand and organize company resources and activities to optimize how the organization operates.

6. Factual Approach to Decision Making - Make decisions based on the facts. Data must be gathered, analyzed and assessed against the objectives.

7. System Approach to Management - Determine sequence and interaction of processes and manage them as a system. Processes must meet customer requirements.

8. Mutually Beneficial Supplier Relationships - Work with your suppliers to produce a win-win outcome.

The quality of a product or service refers to the perception of the degree to which the product or service meets the customer's expectations.

Quality is essentially about learning what you are doing well and doing it better. It also means finding out what you may need to change to make sure you meet the needs of your service users.

Quality is defined by the customer. A quality product or service is one that meets customer requirements. Not all customers have the same requirements, so two contrasting products may both be seen as quality products by their users. For example, one house-owner may be happy with a standard light bulb - they would see this as a quality product. Another customer may want an energy efficient light bulb with a longer life expectancy - this would be their view of quality. Quality can therefore be defined as being fit for the customer's purpose.

There are three main ways in which a business can create quality:

One key distinction to make is that there are two common applications of the term Quality as a form of activity or function within a business. One is Quality Assurance, which is the "prevention of defects", such as the deployment of a Quality Management System and preventative activities like FMEA. The other is Quality Control, which is the "detection of defects", most commonly associated with testing, which takes place within a Quality Management System and is typically referred to as Verification and Validation.

Quality is about:

• knowing what you want to do and how you want to do it

• learning from what you do

• using what you learn to develop your organization and its services

• seeking to achieve continuous improvement

• satisfying your stakeholders - those different people and groups with an interest in your organization.

7.3 Definitions of Quality

1. Customer-Based - Fitness for use; meeting customer expectations.

2. Manufacturing-Based - Conforming to design, specifications, or requirements. Having no defects.

3. Product-Based - The product has something that other similar products do not that adds value.

4. Value-Based - The product is the best combination of price and features.

5. Transcendent - It is not clear what it is, but it is something good...

Typically, these are the stages that organizations implementing a quality system aim to follow:

• Agree on standards. These concern the performance that staff, trustees and users expect from the organization.

• Carry out a self-assessment. This means that you compare how well you are doing against these expectations.

• Draw up an action plan. This will include what needs to be done, who will do it, how it will be done, and when.

• Implement. Do the work.

• Review. At this stage, you check what changes have been made and whether they have made the difference you were hoping to achieve.

7.4 Why does quality matter?

These are some of the demands on voluntary organizations. They need to show that:

• they meet the often conflicting needs and demands of their service users, and that users are satisfied with the quality of services offered

• they provide users with efficient, consistent services

• the organization is making a real difference

• they can work effectively with limited resources or short-term project funding.

7.5 Why is quality important?

The most successful organizations are those that give customers what they want. Satisfied customers are loyal to those suppliers they feel best understand their requirements. As a result they will make repeat purchases and will recommend a business to their friends.

There are two main types of customers for a business:

• end customers - people like you and me, looking to buy an iPod or plasma screen television

• organizational customers - for example, a company recording audio CDs would buy in blank CDs, record music to them and sell them on as a finished product.

Quality, in the eye of the consumer, means that a product must provide the benefits required by the consumer when it was purchased. If all the features and benefits satisfy the consumer, a quality product has been bought. It is consumers, therefore, who define quality.

Quality as defined by the consumer, it is argued, is more important than price in determining demand for most goods and services. Consumers will be prepared to pay for the best quality. Value is thus added by creating those quality standards required by consumers.

Consumer quality standards involve:

• Creating consumer satisfaction

• Exceeding consumer expectations

• Delighting the consumer


7.6 Quality management and software development

A quality management system for software development typically addresses the following elements:

• Management responsibility
• Quality system
• Control of non-conforming products
• Design control
• Handling, storage, packaging and delivery
• Purchasing
• Purchaser-supplied products
• Product identification and traceability
• Process control
• Inspection and testing
• Inspection and test equipment
• Inspection and test status
• Contract review
• Corrective action
• Document control
• Quality records
• Internal quality audits
• Training
• Servicing
• Statistical techniques

7.7 Quality planning

• A quality plan sets out the desired product qualities and how these are assessed, and defines the most significant quality attributes.

• The quality plan should define the quality assessment process.

• It should set out which organizational standards should be applied and, where necessary, define new standards to be used.

Quality plan

• Quality plan structure

• Product introduction

• Product plans

• Process descriptions

• Quality goals

• Risks and risk management

• Quality plans should be short, succinct documents

• If they are too long, no-one will read them


7.8 Quality attributes

Three stages in the development of quality:

1. Quality assurance

2. Quality control

3. Total Quality Management.

The process which is described as Total Quality Management (TQM) involves taking quality to new heights.

7.9 What is a quality assurance system?

Quality assurance is the process of verifying or determining whether products or services meet or exceed customer expectations. Quality assurance is a process-driven approach with specific steps to help define and attain goals. This process considers design, development, production, and service.

When the term 'quality assurance system' is used, it means a formal management system you can use to strengthen your organization. It is intended to raise standards of work and to make sure everything is done consistently. A quality assurance system sets out expectations that a quality organization should meet. Quality assurance is the system set up to monitor the quality and excellence of goods and services.

Typical software quality attributes include: safety, security, reliability, resilience, robustness, understandability, testability, adaptability, modularity, complexity, portability, usability, reusability, efficiency, and learnability.


Quality assurance demands a degree of detail in order to be fully implemented at every step.

• Planning, for example, could include investigation into the quality of the raw materials used in manufacturing, the actual assembly, or the inspection processes used.

• The Checking step could include customer feedback, surveys, or other marketing vehicles to determine if customer needs are being exceeded and why they are or are not.

• Acting could mean a total revision in the manufacturing process in order to correct a technical or cosmetic flaw.

Quality assurance verifies that any customer offering, regardless of whether it is new or evolved, is produced and offered with the best possible materials, in the most comprehensive way, and to the highest standards. Quality assurance provides a measurable and accountable process for pursuing the goal of exceeding customer expectations.

7.10 Quality control

Quality control is a process employed to ensure a certain level of quality in a product or service. It may include whatever actions a business deems necessary to provide for the control and verification of certain characteristics of a product or service. The basic goal of quality control is to ensure that the products, services, or processes provided meet specific requirements and are dependable, satisfactory, and fiscally sound.

Essentially, quality control involves the examination of a product, service, or process for certain minimum levels of quality. The goal of a quality control team is to identify products or services that do not meet a company's specified standards of quality. If a problem is identified, the job of a quality control team or professional may involve stopping production temporarily. Depending on the particular service or product, as well as the type of problem identified, production or implementation may not cease entirely.
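A minimal sketch of that examination step, assuming invented specification limits and sample measurements, might look like the following Python snippet: each unit is compared against the specified tolerance and nonconforming units are flagged so the quality control team can decide whether to hold production.

    # Hypothetical specification limits and sample measurements.
    SPEC_MIN, SPEC_MAX = 9.8, 10.2      # e.g. millimetres

    measurements = {"unit-01": 10.05, "unit-02": 9.70, "unit-03": 10.19, "unit-04": 10.31}

    def nonconforming(samples, low, high):
        """Return the units whose measurement falls outside the specified limits."""
        return {unit: value for unit, value in samples.items() if not (low <= value <= high)}

    rejects = nonconforming(measurements, SPEC_MIN, SPEC_MAX)
    if rejects:
        print("Hold production for review:", rejects)   # QC may pause production
    else:
        print("All sampled units conform to specification.")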

Quality control can cover not just products, services, and processes, but also people. Employees are an important part of any company. If a company has employees that don't have adequate skills or training, have trouble understanding directions, or are misinformed, quality may be severely diminished. When quality control is considered in terms of human beings, it concerns correctable issues.

7.11 Difference between QA & QC

Quality control is concerned with the product, while quality assurance is process-oriented. Basically, quality control involves evaluating a product, activity, process, or service. By contrast, quality assurance is designed to make sure processes are sufficient to meet objectives. Simply put, quality assurance ensures a product or service is manufactured, implemented, created, or produced in the right way, while quality control evaluates whether or not the end result is satisfactory.


(1) Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.

(2) Quality Control: A set of activities designed to evaluate a developed work product.

The difference is that QA is process oriented and QC is product oriented. Testing therefore is product oriented and thus is in the QC domain. Testing for quality isn't assuring quality, it's controlling it.

Quality Assurance makes sure you are doing the right things, the right way. Quality Control makes sure the results of what you've done are what you expected.

7.12 QA Activity

The mission of the QA Activity is fourfold. QA improves the quality of specifications, through guidelines and reviews of specifications at critical stages of their development. QA promotes wide deployment and proper implementation of these specifications through articles, tutorials and validation services. QA communicates the value of test suites and helps Working Groups produce quality test suites. And QA designs effective processes that, if followed, will help groups achieve these goals.

The overall mission of the QA Activity is to improve the quality of specification implementation in the field. In order to achieve this, the QA Activity will work on the quality of the specifications themselves, making sure that each specification has a conformance section and a primer, is clear, unambiguous and testable, and maintains consistency between specifications; it will also promote the development of good validators, test tools, and harnesses for implementors and end users to use.

The QA Activity was initiated to address these demands and improve the quality of specifications as well as their implementation. In particular, the Activity has a dual focus:

(1) To solidify and extend current quality practices regarding the specification publication process, validation tools, test suites, and test frameworks.

(2) To share with the Web community their understanding of issues related to ensuring and promoting quality, including conformance, certification and branding, education, funding models, and relationship with external organizations.

QA activities ensure that the process is defined and appropriate. Methodology and standards development are

examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being

defined at the proper level of detail.

QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right

requirements. Testing is one example of a QC activity, but there are others such as inspections.


Validation and Verification


8.1 V & V

In the process of testing, two terms need particular attention and understanding:

1. Verification

2. Validation

8.1.1 Verification:

"Are we building the product right?" i.e., does the product conform to the specifications? Verification is one aspect of testing a product's fitness for purpose.

The verification process consists of static and dynamic parts. E.g., for a software product one can inspect the source

code (static) and run against specific test cases (dynamic). Validation usually can only be done dynamically, i.e., the

product is tested by putting it through typical usages and atypical usages ("Can we break it?").

8.1.2 Verification Techniques

There are many different verification techniques, and collectively they are referred to as static testing.

Static testing - Testing that does not involve the operation of the system or component. Some of these

techniques are performed manually while others are automated. Static testing can be further divided into

2 categories - techniques that analyze consistency and techniques that measure some program property.

Consistency techniques - Techniques that are used to ensure program properties such as correct syntax, correct parameter matching between procedures, correct typing, and correct translation of requirements and specifications.

Measurement techniques - Techniques that measure properties such as error proneness, understandability, and well-

structuredness.

8.2 Validation:

"Are we building the right product?", i.e., does the product do what the user really requires?

Validation is the complementary aspect. Often one refers to the overall checking process as V & V.

8.2.1 Validation Techniques

There are also numerous validation techniques, including formal methods, fault injection, and dependability

analysis. Validation usually takes place at the end of the development cycle, and looks at the complete system as

opposed to verification, which focuses on smaller sub-systems.


• Formal methods - Formal methods is not only a verification technique but also a validation technique.

Formal methods means the use of mathematical and logical techniques to express, investigate, and

analyze the specification, design, documentation, and behavior of both hardware and software.

• Fault injection - Fault injection is the intentional activation of faults by either hardware or software

means to observe the system operation under fault conditions.

• Hardware fault injection - Can also be called physical fault injection because we are actually

injecting faults into the physical hardware.

• Software fault injection - Errors are injected into the memory of the computer by software

techniques. Software fault injection is basically a simulation of hardware fault injection.

• Dependability analysis - Dependability analysis involves identifying hazards and then proposing methods that reduce the risk of the hazard occurring.


• Hazard analysis - Involves using guidelines to identify hazards, their root causes, and possible

counter measures.

• Risk analysis - Takes hazard analysis further by identifying the possible consequences of each

hazard and their probability of occurring.

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves

reviews and meetings to evaluate documents, plans, code, requirements and specifications;

Validation ensures that functionality, as defined in requirements, is the intended behavior of the product;

validation typically involves actual testing and takes place after verifications are completed.


Testing Lifecycle


9.1 Phases of Testing Life cycle

The testing lifecycle ensures that all the relevant requirements (inputs) are obtained, planning is adequately carried out, and the test cases are designed and executed as per plan. It also ensures that the results are obtained, reviewed and monitored.

The phases of the testing lifecycle are:

• Test Requirements

• Test Planning

• Test Design

• Test Environment

• Test Execution

• Defect Analysis & Tracking

• Final Reporting


Testing Methods


10.1 Methods of Testing

There are two primary methods of testing. They are

1. Functional or Black Box testing

2. Logical or White box Testing.

10.1.1 Functional or Black Box Testing

Black box testing is testing without the knowledge of the internal workings of the system; it checks the functionality of the application. In this method of testing, a set of known inputs is given to the system, and the system is checked for whether it produces the desired outputs, without regard to how the outputs are produced.

10.1.2 Logical or White Box Testing

White box testing checks the logic and structure of the program. This method of testing relies on knowledge of the internal workings of the system, and it is usually performed by developers.


WHITE BOX TESTING


11 White Box Testing

11.1 The purpose of white box testing

• Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.

• Provide a complementary function to black box testing.

• Perform complete coverage at the component level.

• Improve quality by optimizing performance.

White box testing is a test case design approach that uses the control structure of the procedural design to produce test cases. Using white box testing approaches, the software engineer can produce test cases that

(1) guarantee that all independent paths in a module have been exercised at least once;

(2) exercise all logical decisions;

(3) execute all loops at their boundaries and within their operational bounds; and

(4) exercise internal data structures to maintain their validity.

11.2 Types of testing under White/Glass Box Testing Strategy:

11.2.1 Unit Testing:

The developer carries out unit testing in order to check whether the particular module or unit of code is working fine.

The Unit Testing comes at the very basic level as it is carried out as and when the unit of the code is developed or a

particular functionality is built.


11.2.1.1 Statement Coverage:

In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.
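The following is a minimal illustrative sketch, not part of the original course example; the function name and the values are hypothetical. It shows a test input chosen so that every statement is executed at least once:

#include <assert.h>

/* Hypothetical routine used only to illustrate statement coverage. */
int apply_discount(int amount) {
    int discount = 0;
    if (amount > 1000)
        discount = 10;                 /* reached only when amount > 1000 */
    return amount - (amount * discount) / 100;
}

int main(void) {
    /* amount = 1500 drives execution through every statement above,
       including the assignment inside the if, giving 100% statement coverage. */
    assert(apply_discount(1500) == 1350);
    return 0;
}

Note that this single test never exercises the case amount <= 1000, which is why statement coverage on its own can be a weak criterion.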

11.2.2 Branch Coverage:

No software application is written as one continuous sequence of statements; at some point the code must branch in order to perform a particular piece of functionality. Branch coverage testing validates all the branches in the code and makes sure that no branch leads to abnormal behavior of the application.
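Continuing the hypothetical apply_discount() sketch from the previous section, branch coverage additionally requires that both outcomes of the decision be exercised, which the single statement-coverage test does not guarantee:

/* Branch (decision) coverage of apply_discount() needs two test cases:
     apply_discount(1500) -> 1350   exercises the true branch of (amount > 1000)
     apply_discount(800)  -> 800    exercises the implicit false branch
   The single test used for statement coverage above misses the false branch. */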

11.2.3 Security Testing:

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, code damage, and similar attacks that target the code of the application. This type of testing needs sophisticated testing techniques.

11.2.4 Mutation Testing:

A kind of testing in which the application is tested for the code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.

Besides all the testing types given above, there are some more types which fall under both Black box and

White box testing strategies such as: Functional testing (which deals with the code in order to check its functional

performance), Incremental integration testing (which deals with the testing of newly added code in the application),

Performance and Load testing (which helps in finding out how the particular code manages resources and give

performance etc.) etc.

11.2.5 Basis Path Testing

Basis path testing is a white box testing technique that allows the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases are produced to exercise each statement in the program at least one time during testing.

11.2.6 Flow Graphs

The flow graph can be used to represent the logical control flow and therefore all the execution paths that need testing. To illustrate the use of flow graphs, consider the procedural design depicted in the flow chart below. It is mapped into the flow graph below it, where the circles are nodes that represent one or more procedural statements and the arrows, called edges, represent the flow of control. Each node that includes a condition is known as a predicate node, and has two or more edges coming from it.


Flow chart and corresponding flow graph (nodes 1-11; predicate nodes, regions, and edges are labeled in the original figure).


11.2.7 Cyclomatic Complexity

As we have seen before, McCabe's cyclomatic complexity is a software metric that offers an indication of the logical complexity of a program. When used in the context of the basis path testing approach, the value determined for cyclomatic complexity defines the number of independent paths in the basis set of a program and offers an upper bound on the number of tests needed to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new group of processing statements or a new condition. A set of independent paths for the example flow graph is:

Path 1: 1-11

Path 2: 1-2-3-4-5-10-1-11

Path 3: 1-2-3-6-8-9-10-11
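For reference, McCabe's cyclomatic complexity V(G) of a flow graph can be computed in any of three standard ways (these are the general formulas, not values taken from the figure above):

• V(G) = the number of regions of the flow graph;

• V(G) = E - N + 2, where E is the number of edges and N is the number of nodes;

• V(G) = P + 1, where P is the number of predicate nodes.

For example, a hypothetical flow graph with 9 edges and 8 nodes would give V(G) = 9 - 8 + 2 = 3, so three independent paths form its basis set.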

11.2.8 Deriving Test Cases

The basis path testing method can be applied to a detailed procedural design or to source code. Basis path testing can be seen as a set of steps:

• Using the design or code as the basis, draw an appropriate flow graph.

• Determine the cyclomatic complexity of the resultant flow graph.

• Determine a basis set of linearly independent paths.

• Prepare test cases that will force execution of each path in the basis set.

Data should be selected so that the conditions at the predicate nodes are tested. Each test case is executed and compared with the expected result. Once all test cases have been completed, the tester can ensure that all statements in the program have been executed at least once.
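As a minimal sketch of these steps applied to code (the routine and the marks below are hypothetical, not the flow chart from the figure):

/* Hypothetical routine: classify an exam mark. */
int classify(int mark) {
    int grade;
    if (mark < 40)                /* predicate node 1 */
        grade = 0;                /* fail */
    else if (mark < 75)           /* predicate node 2 */
        grade = 1;                /* pass */
    else
        grade = 2;                /* distinction */
    return grade;
}

/* Step 2: cyclomatic complexity = 2 predicate nodes + 1 = 3.
   Step 3: a basis set of three independent paths is therefore enough.
   Step 4: one test case is prepared to force execution of each path:
     mark = 25  -> expected 0   (first condition true)
     mark = 60  -> expected 1   (first false, second true)
     mark = 90  -> expected 2   (both conditions false)                  */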

11.2.9 Graph Matrices

The procedure involved in producing the flow graph and establishing a set of basis paths can be mechanized. To produce a software tool that helps in basis path testing, a data structure called a graph matrix can be quite helpful. A graph matrix is a square matrix whose size (number of rows and columns) equals the number of identified nodes, and whose entries correspond to the edges between nodes. A basic flow graph and its associated graph matrix are shown below.

Flow graph with five nodes (1-5) and edges labeled a through g (see the graph matrix below).


Graph matrix (connections to nodes 1 through 5):

Node 1: edge a

Node 2: edge b

Node 3: edges d, c and f

Node 4: no entries

Node 5: edges e and g

11.2.10 Graph Matrix

In the graph and matrix, each node is represented by a number and each edge by a letter. A letter is entered in the matrix wherever a connection (edge) exists between two nodes. By adding a link weight to each matrix entry, the graph matrix can be used to examine program control structure during testing. In its basic form the link weight is 1 or 0. The link weights can be given more interesting characteristics:

• The probability that a link will be executed.

• The processing time expended during traversal of a link

• The memory required during traversal of a link

Represented in this form the graph matrix is called a connection matrix.

Connection to node

Connection matrix (link weight 1 for each connection):

Node 1: one entry; connections = 1 - 1 = 0

Node 2: one entry; connections = 1 - 1 = 0

Node 3: three entries; connections = 3 - 1 = 2

Node 4: no entries; connections = 0

Node 5: two entries; connections = 2 - 1 = 1

Cyclomatic complexity is 2+1=3
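A minimal sketch of how a connection matrix might be held and processed in code; the small graph below is hypothetical, and the rule used is the standard one (each row with two or more entries marks a predicate node, and V(G) = predicate nodes + 1):

#include <stdio.h>

#define NODES 4

int main(void) {
    /* conn[i][j] = 1 if there is an edge from node i+1 to node j+1 (link weight 1 or 0). */
    int conn[NODES][NODES] = {
        {0, 1, 1, 0},   /* node 1 branches to nodes 2 and 3: a predicate node */
        {0, 0, 0, 1},   /* node 2 -> node 4 */
        {0, 0, 0, 1},   /* node 3 -> node 4 */
        {0, 0, 0, 0}    /* node 4: exit node */
    };
    int predicate_nodes = 0;
    for (int i = 0; i < NODES; i++) {
        int connections = 0;
        for (int j = 0; j < NODES; j++)
            connections += conn[i][j];
        if (connections > 1)               /* two or more entries in this row */
            predicate_nodes++;
    }
    printf("Cyclomatic complexity = %d\n", predicate_nodes + 1);   /* prints 2 */
    return 0;
}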

11.2.11 Control Structure Testing

Although basis path testing is simple and highly effective, it is not enough in itself. Next we consider variations on

control structure testing that broaden testing coverage and improve the quality of white box testing.


11.2.12 Condition Testing

Condition testing is a test case design approach that exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression, possibly with one NOT operator. A relational expression takes the form

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and <relational-operator> is one of <, ≤, =, ≠ (nonequality), >, or ≥. A compound condition is made up of two or more simple conditions, Boolean operators, and parentheses. We assume that the Boolean operators allowed in a compound condition include OR, AND and NOT.

The condition testing method concentrates on testing each condition in a program. The purpose of condition

testing is to determine not only errors in the conditions of a program but also other errors in the program. A number of

condition testing approaches have been identified. Branch testing is the most basic. For a compound condition, C, the

true and false branches of C and each simple condition in C must be executed at least once.

Domain testing requires three or four tests to be produced for a relational expression. For a relational expression of the form

E1 <relational-operator> E2

three tests are required, making the value of E1 greater than, equal to, and less than E2, respectively.
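A small hypothetical sketch of the test cases implied by branch and domain testing of a compound condition (the routine and the values are illustrative only):

/* Hypothetical compound condition: C = (a > b) AND (c == 10). */
int eligible(int a, int b, int c) {
    return (a > b) && (c == 10);
}

/* Branch testing of C requires the true and false outcomes of C and of each
   simple condition to be exercised at least once, for example:
     eligible(5, 3, 10) -> C true    (a > b true,  c == 10 true)
     eligible(2, 3, 10) -> C false   (a > b false)
     eligible(5, 3, 7)  -> C false   (c == 10 false)
   Domain testing of the relational expression a > b adds tests that make a
   greater than, equal to and less than b, e.g. (4,3), (3,3) and (2,3). */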

11.2.13 Loop Testing

Loops are the basis of most algorithms implemented in software. However, we often do not consider them explicitly when conducting testing. Loop testing is a white box testing approach that concentrates on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

11.2.13.1 Simple loops:

The following group of tests should be used on simple loops, where n is the maximum number of allowable passes through the loop:

• Skip the loop entirely.

• Only one pass through the loop.

• Two passes through the loop.

• m passes through the loop, where m < n.

• n-1, n, n+1 passes through the loop.


11.2.13.2 Nested Loops

For nested loops the number of possible tests increases as the level of nesting grows, which would result in an impractical number of tests. The following approach helps to limit the number of tests:

• Start at the innermost loop. Set all other loops to minimum values.

• Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values.

• Work outward, performing tests for the next loop, but keeping all other outer loops at minimum values and

other nested loops to “typical” values.

• Continue until all loops have been tested.

11.2.13.3 Concatenated loops:

Concatenated loops can be tested using the techniques outlined for simple loops, if each of the loops is

independent of the other. When the loops are not independent the approach applied to nested loops is

recommended.


Example program.
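The example program from the original figure is not reproduced here; the sketch below is illustrative only (the routine and data are hypothetical) and shows how the simple-loop tests of 11.2.13.1 map onto a small loop:

#include <assert.h>

/* Hypothetical routine: sum the first n elements of an array (0 <= n <= 5). */
int sum_first(const int a[], int n) {
    int s = 0;
    for (int i = 0; i < n; i++)        /* simple loop; n is the number of passes */
        s += a[i];
    return s;
}

int main(void) {
    int a[5] = {1, 2, 3, 4, 5};        /* maximum allowable passes: n = 5 */
    assert(sum_first(a, 0) == 0);      /* skip the loop entirely */
    assert(sum_first(a, 1) == 1);      /* exactly one pass */
    assert(sum_first(a, 2) == 3);      /* two passes */
    assert(sum_first(a, 3) == 6);      /* m passes, where m < n */
    assert(sum_first(a, 4) == 10);     /* n - 1 passes */
    assert(sum_first(a, 5) == 15);     /* n passes; n + 1 is not attempted here
                                          because it would read past the array */
    return 0;
}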


11.3 Advantages of White box testing:

i) As the knowledge of internal coding structure is prerequisite, it becomes very easy to find out which type of input/data

can help in testing the application effectively.

ii) The other advantage of white box testing is that it helps in optimizing the code.

iii) It helps in removing the extra lines of code, which can bring in hidden defects.

11.4 Disadvantages of white box testing:

i) As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of

testing, which increases the cost.

ii) And it is nearly impossible to look into every bit of code to find out hidden errors, which may create problems,

resulting in failure of the application.


Black Box Testing


12.1 Black Box Testing:

Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example,

when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the

expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black

box testing can be considered testing with respect to the specifications, no other knowledge of the program is

necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer

bias toward his own work. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing", or "Closed Box Testing".

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.

Various testing types that fall under the Black Box Testing strategy are: functional testing, stress testing,

recovery testing, volume testing, User Acceptance Testing (also known as UAT), system testing, Sanity or Smoke

testing, load testing, Usability testing, Exploratory testing, ad-hoc testing, alpha testing, beta testing etc.

These testing types are again divided into two groups:

a) testing in which the user plays the role of tester, and

b) testing in which the user is not required.

12.2 Testing Strategies/Techniques

• Black box testing should make use of randomly generated inputs (only a test range should be specified by the

tester), to eliminate any guess work by the tester as to the methods of the function

• Data outside of the specified input range should be tested to check the robustness of the program

• Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest

allowable inputs produce proper output

• The number zero should be tested when numerical data is to be input

• Stress testing should be performed (try to overload the program with inputs to see where it reaches its

maximum capacity), especially with real time systems

• Crash testing should be performed to see what it takes to bring the system down

• Test monitoring tools should be used whenever possible to track which tests have already been performed and

the outputs of these tests to avoid repetition and to aid in the software maintenance

• Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing,

and state testing.

• Finite state machine models can be used as a guide to design functional tests

• According to Beizer the following is a general order by which tests should be designed:


• Clean tests against requirements.

• Additional structural tests for branch coverage, as needed.

• Additional tests for data-flow coverage as needed.

• Domain tests not covered by the above.

• Special techniques as appropriate--syntax, loop, state, etc.

• Any dirty tests not covered by the above.
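As an illustration of the boundary-case, out-of-range and zero checks listed above, consider a hypothetical specification in which valid input lies between 1 and 100 inclusive (the routine and the values are illustrative only):

/* Hypothetical routine implementing the specification "accept values 1..100". */
int in_valid_range(int value) {
    return value >= 1 && value <= 100;
}

/* Black box test cases derived purely from the specification:
     1    -> accepted    (lower boundary)
     100  -> accepted    (upper boundary)
     0    -> rejected    (just below the lower boundary; also the "zero" check)
     101  -> rejected    (just above the upper boundary)
     -50, 5000 -> rejected (robustness checks with data outside the range)   */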

12.3 Advantages of Black Box Testing

• more effective on larger units of code than glass box testing

• tester needs no knowledge of implementation, including specific programming languages

• tester and programmer are independent of each other

• tests are done from a user's point of view

• will help to expose any ambiguities or inconsistencies in the specifications

• test cases can be designed as soon as the specifications are complete

12.4 Disadvantages of Black Box Testing

• only a small number of possible inputs can actually be tested, to test every possible input stream would take

nearly forever

• without clear and concise specifications, test cases are hard to design

• there may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer

has already tried

• may leave many program paths untested

• cannot be directed toward specific segments of code which may be very complex (and therefore more error

prone)

• most testing related research has been directed toward glass box testing


Levels of testing



13.1 Levels of testing

There are four levels of testing. They are

• Unit Testing.

• Integration Testing.

• System Testing.

• Acceptance testing

13.1.1 Unit Testing

Introduction to Unit Testing

Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very nervous when you mention unit tests. Usually this conjures up a vision of a grand table with every single method listed, along with the expected results and pass/fail dates. Such a table is important in some contexts, but it is not relevant in most programming projects.

The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?"

The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the

objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the

crux of any object oriented design.

Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will

get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

Need for Unit Test

How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is

simple enough that the developer can just look at it and verify its correctness then it is simple enough to not require a

unit test. The developer should know when this is the case.

Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods that

cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that

isolate clusters of objects for testing are doubly useful, because they test for failures, and they also identify those

segments of code that are related. People who revisit the code will use the unit tests to discover which objects are

related, or which objects form a cluster. Hence: Unit tests isolate clusters of objects for future developers.

Another good litmus test is to look at the code and see if it throws an error or catches an error. If error handling

is performed in a method, then that method can break. Generally, any method that can break is a good candidate for

having a unit test, because it may break at some time, and then the unit test will be there to help you fix it.

The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just because we

don't test every method explicitly doesn't mean that methods can get away with not being tested. The programmer


should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of

all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit

tests cover every cluster of objects that form their application.

Life Cycle Approach to Testing

Testing will occur throughout the project lifecycle, i.e., from Requirements until User Acceptance Testing. The main objectives of unit testing are as follows:

• To execute a program with the intent of finding an error;

• To uncover an as-yet undiscovered error; and

• To prepare a test case with a high probability of finding an as-yet undiscovered error.

Concepts in Unit Testing:

• The most 'micro' scale of testing;

• To test particular functions or code modules.

• Typically done by the programmer and not by testers.

• As it requires detailed knowledge of the internal program design and code.

• Not always easily done unless the application has a well-designed architecture with tight code;

Types of Errors Detected

The following are the types of errors that may be caught:

• Error in Data Structures

• Performance Errors

• Logic Errors

• Validity of alternate and exception flows

• Identified at analysis/design stages

Unit Testing – Black Box Approach

• Field Level Check

• Field Level Validation

• User Interface Check

• Functional Level Check

Unit Testing – White Box Approach

Statement coverage

Decision coverage

Condition coverage

Multiple condition coverage (nested conditions)


Condition/decision coverage

Path coverage

Unit Testing – Field level checks

• Null / Not Null Checks

• Uniqueness Checks

• Length Checks

• Date Field Checks

• Numeric Checks

• Negative Checks

Unit Testing – Field Level Validations

• Test all Validations for an Input field

• Date Range Checks (From Date/To Date’s)

• Date Check Validation with System date

Unit Testing – User Interface Checks

• Readability of the Controls

• Tool Tips Validation

• Ease of Use of Interface Across

• Tab related Checks

• User Interface Dialog

• GUI compliance checks

Unit Testing - Functionality Checks

• Screen Functionalities

• Field Dependencies

• Auto Generation

• Algorithms and Computations

• Normal and Abnormal terminations

• Specific Business Rules if any..

Unit Testing - Other measures

Function coverage

Loop coverage


Race coverage

Execution of Unit Tests

• Design a test case for every statement to be executed.

• Select the unique set of test cases.

• This measure reports whether each executable statement is encountered.

• Also known as: line coverage, segment coverage and basic block coverage.

• Basic block coverage is the same as statement coverage except the unit of code measured is each sequence

of non-branching statements.

Example of Unit Testing:

int invoice (int x, int y) {

int d1, d2, s;

if (x<=30) d2=100;

else d2=90;

s=5*x + 10 *y;

if (s<200) d1=100;

else if (s<1000) d1 = 95;

else d1 = 80;

return (s*d1*d2/10000);

}
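A minimal sketch of a test driver for the invoice() routine above; the expected values were worked out by hand from the code, and the plain assert style is illustrative rather than a prescribed framework:

#include <assert.h>

int invoice(int x, int y);    /* the routine under test, shown above */

int main(void) {
    /* x <= 30 and s < 200:         d2 = 100, d1 = 100 */
    assert(invoice(10, 10) == 150);     /* s = 150 */
    /* x <= 30 and 200 <= s < 1000: d2 = 100, d1 = 95 */
    assert(invoice(20, 20) == 285);     /* s = 300 */
    /* x > 30 and s >= 1000:        d2 = 90,  d1 = 80 */
    assert(invoice(40, 100) == 864);    /* s = 1200 */
    /* boundary: s exactly 200 falls in the d1 = 95 band */
    assert(invoice(20, 10) == 190);     /* s = 200 */
    return 0;
}

Together these four cases exercise both outcomes of the x <= 30 decision and all three bands of the s decision, i.e., they achieve decision coverage of invoice().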


Unit Testing Flow:


Advantages of Statement Coverage

• Can be applied directly to object code and does not require processing source code.

• Performance profilers commonly implement this measure.

Disadvantages of Statement Coverage

• Insensitive to some control structures (number of iterations)

• Does not report whether loops reach their termination condition

• Statement coverage is completely insensitive to the logical operators (|| and &&).

Method for Decision Coverage

-Design a test-case for the pass/failure of every decision point

-Select unique set of test cases

• This measure reports whether Boolean expressions tested in control structures (such as the if-statement and

while-statement) evaluated to both true and false.

• The entire Boolean expression is considered one true-or-false predicate regardless of whether it

contains logical-and or logical-or operators.

• Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt

handlers

• Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing

• "Basis path" testing selects paths that achieve decision coverage.

Advantage:

Simplicity without the problems of statement coverage

Disadvantage

• This measure ignores branches within boolean expressions which occur due to short-circuit operators.

Method for Condition Coverage:

-Test if every condition (sub-expression) in decision for true/false

-Select unique set of test cases.

• Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if

they occur.

• Condition coverage measures the sub-expressions independently of each other.

• Multiple condition coverage reports whether every possible combination of boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.

• The test cases required for full multiple condition coverage of a condition are given by the logical operator truth

table for the condition.

Disadvantage:

• Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions


• Number of test cases required could vary substantially among conditions that have similar complexity

• Condition/Decision Coverage is a hybrid measure composed by the union of condition coverage and decision

coverage.

• It has the advantage of simplicity but without the shortcomings of its component measures

• This measure reports whether each of the possible paths in each function have been followed.

• A path is a unique sequence of branches from the function entry to the exit.

• Also known as predicate coverage. Predicate coverage views paths as possible combinations of logical

conditions

• Path coverage has the advantage of requiring very thorough testing

Function coverage:

• This measure reports whether you invoked each function or procedure.

• It is useful during preliminary testing to assure at least some coverage in all areas of the software.

• Broad, shallow testing finds gross deficiencies in a test suite quickly.

Loop coverage

This measure reports whether you executed each loop body zero times, exactly once, twice and more than

twice (consecutively).

For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once.

The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once,

information not reported by other measures.

Race coverage

This measure reports whether multiple threads execute the same code at the same time.

Helps detect failure to synchronize access to resources.

Useful for testing multi-threaded programs such as in an operating system.

13.1.2 Integration testing

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of Software testing

in which individual software modules are combined and tested as a group. It follows unit testing and precedes system

testing.

Testing performed to expose faults in the interfaces and in the interaction between integrated components.

Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates,

applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system

ready for system testing.

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been

tested are combined into a component and the interface between them is tested. A component, in this sense, refers to

an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which

are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually

expand the process to test your modules with those of other groups. Eventually all the modules making up a process

are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs


rather than all at once. Integration testing identifies problems that occur when units are combined. By using a test plan

that requires you to test each unit and ensure the viability of each before combining units, you know that any errors

discovered when combining units are likely related to the interface between units. This method reduces the number of

possibilities to a far simpler level of analysis.

Purpose

The purpose of integration testing is to verify functional, performance and reliability requirements placed on

major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces

using Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated

usage of shared data areas and inter-process communication is tested, individual subsystems are exercised through

their input interface. All test cases are constructed to test that all components within assemblages interact correctly, for

example, across procedure calls or process activations, and this is done after the testing of single modules, i.e., unit testing. The

overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then

used to support the Integration testing of further assemblages.

The different types of integration testing are big bang, top-down and bottom-up.

13.1.2.1 Different Approaches to Integration Testing

• Big Bang approach

• Incremental approach

• Top Down approach

• Bottom Up approach

13.1.2.1.1 Big Bang (project management)

A big bang project is one that has no staged delivery. The customer must wait, sometimes months, before

seeing anything from the development team. At the end of the wait comes a "big bang". A common argument against

big bang projects is that there are no check points during the project where the customer's expectations can be tested,

thus risking that the final delivery is not what the customer had in mind.

All components are tested in isolation, and will be mixed together when we first test the final system.

Disadvantages:

• Requires both stubs and drivers to test the independent components.

• When failure occurs, it is very difficult to locate the faults.

After each modification, we have to go through the cycle of testing, locating faults, and modifying again.


13.1.2.1.2 Top-Down and Bottom-Up Approaches

You can do integration testing in a variety of ways but the following are common strategies:

The top-down approach to integration testing requires the highest-level modules to be tested and integrated first.

This allows high-level logic and data flow to be tested early in the process and it tends to minimize the need for drivers.

However, the need for stubs complicates test management and low-level utilities are tested relatively late in the

development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited

functionality.

The bottom-up approach requires the lowest-level units be tested and integrated first. These units are

frequently referred to as utility modules. By using this approach, utility modules are tested early in the development

process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test

management and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach

also provides poor support for early release of limited functionality.

13.1.2.1.2.a Top-down design

Top-down and bottom-up are strategies of information processing and knowledge ordering, mostly involving

software, and by extension other humanistic and scientific System theories.

In the top-down model an overview of the system is formulated, without going into detail for any part of it. Each

part of the system is then refined by designing it in more detail. Each new part may then be refined again, defining it in

yet more detail until the entire specification is detailed enough to validate the model.


1. The main control module is used as a test driver and stubs are substituted for all components directly

subordinate to the main module.

2. Depending on the integration approach, subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. Stubs are removed and integration moves downward in the program structure.

Advantage: Major control or decision points can be verified early in the testing process.

Disadvantage: Stubs are required to perform the integration testing and, generally, stubs are very difficult to develop.

13.1.2.1.2.b Bottom-up

In bottom-up design, first the individual parts of the system are specified in great detail. The parts are then

linked together to form larger components, which are in turn linked until a complete system is formed. This strategy

often resembles a "seed" model, whereby the beginnings are small, but eventually grow in complexity and

completeness.

Major steps

1. Low-level components will be tested individually first.

2. A driver (a control program for testing) is written to coordinate test case input and output.

3. The driver is removed and integration moves upward in the program structure.

4. Repeat the process until all components are included in the test.

Module hierarchy used in the example: M1 at the top; M2, M3 and M4 below it; M5, M6 and M7 at the next level; M8 at the lowest level.


Advantage: Compared with stubs, drivers are much easier to develop.

Disadvantage: Major control and decision problems will be identified later in the testing process.
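A minimal sketch of what a stub (used in top-down integration) and a driver (used in bottom-up integration) might look like; the module names and values here are hypothetical:

#include <stdio.h>

/* Stub: stands in for a lower-level module that is not integrated yet.
   It is called through the real interface but returns a canned answer. */
int get_tax_rate(int region_code) {
    (void)region_code;                 /* ignored by the stub */
    return 10;                         /* just enough to let the caller run */
}

/* Higher-level unit under test; in the real system it would call the
   genuine tax module instead of the stub above. */
int add_tax(int amount) {
    return amount + (amount * get_tax_rate(1)) / 100;
}

/* Driver: a throw-away control program that feeds test input to the unit
   and reports the output, as used when integrating bottom-up. */
int main(void) {
    printf("add_tax(200) = %d (expected 220)\n", add_tax(200));
    return 0;
}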

Advantages of top-down programming:

• Programming team stays focused on the goal

• Everyone knows his or her job.

• By the time the programming starts there are no questions.

• Code is easier to follow, since it is written methodically and with purpose.

Disadvantages of top-down programming:

• Top-down programming may complicate testing, since nothing executable will even exist until near the end of

the project.

• Bottom-up programming may allow for unit testing, but until more of the system comes together none of the

system can be tested as a whole, often causing complications near the end of the project, "Individually we

stand, combined we fall."

• All decisions depend on the starting goal of the project, and some decisions cannot be made depending on

how specific that description is.


13.1.3 System Testing:

System testing is testing conducted on a complete, integrated system to evaluate the system's compliance

with its specified requirements. System testing falls within the scope of Black box testing and as such, should require no

knowledge of the inner design of the code or logic. System testing should be performed by testers who are trained to

plan, execute, and report on application and system code. They should be aware of scenarios that might not occur to

the end user, like testing for null, negative, and format inconsistent values.

System testing is actually done to the entire system against the Functional Requirement Specifications (FRS)

and/or the System Requirement Specification (SRS). Moreover, the System testing is an investigatory testing phase,

where the focus is to have almost a destructive attitude and test not only the design, but also the behavior and even the

believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the

software/hardware requirements specification.

Types of System Testing:

• Sanity Testing

• Compatibility Testing

• Recovery Testing

• Usability Testing

• Exploratory Testing

• Adhoc Testing

• Stress Testing

• Volume Testing

• Load Testing

• Performance Testing

• Security Testing

13.1.3.1 Sanity Testing

Sanity testing checks the major working functionality of the system, i.e., whether the system is working well enough to justify the major testing effort. This testing is done after coding and before the detailed testing phase.

13.1.3.2 Compatibility Testing

Testing how well software performs in a particular hardware/software/operating system/network/etc.

environment.

13.1.3.3 Recovery Testing

Recovery testing is basically done in order to check how fast and how well the application can recover from

any type of crash or hardware failure etc. Type or extent of recovery is specified in the requirement specifications.


13.1.3. 4 Usability Testing:

This testing is also called 'Testing for User-Friendliness'. This testing is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user.

13.1.3. 5 Exploratory Testing:

This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.

13.1.3. 6 Ad-hoc Testing:

This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc testing helps in deciding the scope and duration of the various other testing efforts, and it also helps testers in learning the application prior to starting any other testing.

13.1.3. 7 Stress Testing:

The application is tested against heavy load such as complex numerical values, large number of inputs, large

number of queries etc. which checks for the stress/load the applications can withstand.

13.1.3. 8 Volume Testing:

Volume testing is done against the efficiency of the application. Huge amount of data is processed through the

application (which is being tested) in order to check the extreme limitations of the system.

13.1.3. 9 Load Testing:

The application is tested against heavy loads or inputs such as testing of web sites in order to find out at what

point the web-site/application fails or at what point its performance degrades.

13.1.3.10 Regression Testing

Regression testing means "repeating a test already run successfully, and comparing the new results with the

earlier valid results”. This process is useful when you run a test on your project and then correct the project code.

Regression testing is based on the idea of reusing a test and acceptance standard, rather than forgetting about them

once the test is successful.

On each iteration of true regression testing, all existing, validated tests are run, and the new results are

compared to the already-achieved standards. And normally, one or more additional tests are run, debugged and rerun

until the project successfully passes the test.


Regression tests begin as soon as there is anything to test at all. The regression test suite grows as the project

moves ahead and acquires new or rewritten code. Soon it may contain thousands of small tests, which can only be run

in sequence with the help of an automated test management tool like Test Complete.

The selective retesting of a software system that has been modified to ensure that any bugs have been fixed

and that no other previously working functions have failed as a result of the reparations and that newly added features

have not created problems with previous versions of the software. Also referred to as verification testing, regression

testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a

program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified

code still complies with its specified requirements and that unmodified code has not been affected by the maintenance

activity.

Quality is usually appraised by a collection of regression tests forming a suite of programs that test one or

more features of the system.

The advantage to this procedure is that if there is a malfunction in one of the regression tests, you know it

resulted from a code edit made since the last run.

Purpose

The standard purpose of regression testing is to avoid getting the same bug twice. When a bug is found, the

programmer fixes the bug and adds a test to the test suite. The test should fail before the fix and pass after the fix.

When a new version is about to be released, all the tests in the regression test suite are run and if an old bug

reappears, this will be seen quickly since the appropriate test will fail.
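A minimal sketch of the idea that a regression test should fail before the fix and pass after it; the routine and the defect described in the comments are hypothetical:

#include <assert.h>

/* Hypothetical routine: count how many of the n values are negative.
   The old, defective version used (v[i] <= 0) and wrongly counted zeros. */
int count_negatives(const int v[], int n) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (v[i] < 0)
            count++;
    return count;
}

int main(void) {
    int v[4] = {3, -1, 0, -7};
    /* Regression test added when the bug was fixed: it failed on the old
       code (which returned 3) and passes now. It stays in the suite so the
       same bug cannot silently reappear in a later build. */
    assert(count_negatives(v, 4) == 2);
    return 0;
}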

How is a regression test performed in Test Complete?

• First, test and debug the application.

• Next, add something to the tested application.

• Then design a test for features added to the new build.

• Run both the old and the new tests over the new build.

• Fix and rerun until everything is clean.

This means that a new test or tests are added to your test project or project suite for each new build or for each

new feature in a build. Then, these tests are added to the test sequence of the project.

Benefits of Regression Testing

There are lots of benefits to doing regression testing.

(1)It increases our chances of detecting bugs caused by changes to an application - either enhancements or bug

fixes. Note that we don't guarantee that there are no side effects. We'll talk later about what you need to guarantee that


you've detected any side effects.

(2) It can also detect undesirable side effects caused by changing the operating environment. For example,

hardware changes, or upgrades to system software such as the operating system or the database management

system.

(3) The Regression Test Set is also useful for a new way of doing integration testing. This new method is much faster

and less confusing than the old way of doing integration testing - but you need a Regression Test Set to do it.

Summary:

• Regression testing means rerunning tests of things that used to work to make sure that a change didn't break something else.

• The set of tests used is called the Regression Test Set, or RTS for short.

• It's enormously helpful when you change an application, change the environment, and during integration of pieces.

• Regression testing is a simple concept, but it needs to be done just right to work in the real world.

13.1.3. 11 System Integration Testing

• System Integration Testing is the integration testing of two or more system components.

• Specifically, system integration testing is the testing of software components that have been distributed across multiple platforms to produce failures.

• E.g., client, web server, application server, and database server.

13.1.4 Acceptance Testing

In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its

acceptance criteria and thus whether the customer should accept the system.

The main types of software testing are:

Component.

Interface.

System.

Acceptance.

Release.


Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that the whole

system is checked but the important difference is the change in focus:

Systems Testing checks that the system that was specified has been delivered.

Acceptance Testing checks that the system delivers what was requested.

• The customer, and not the developer should always do acceptance testing. The customer knows what is required

from the system to achieve value in the business and is the only person qualified to make that judgment.

Hence the goal of acceptance testing should verify the overall quality, correct operation, scalability,

completeness, usability, portability, and robustness of the functional components supplied by the Software system.

Factors influencing Acceptance Testing

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order

to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based

upon the User Requirements Specification to which the system should conform.

13.1.4.1 User Acceptance Testing:

In this type of testing, the software is handed over to the user in order to find out if the software meets the user

expectations and works as it is expected to.

13.1.4.1 a Alpha Testing:

In this type of testing, the users are invited at the development center where they use the application and the

developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is

noted and rectified by the developers.

13.1.4.1 b Beta Testing:

In this type of testing, the software is distributed as a beta version to the users and users test the application at

their sites. As the users explore the software, in case if any exception/defect occurs that is reported to the developers.


TEST PLAN


14.1 TEST PLAN

• The test plan keeps track of possible tests that will be run on the system after coding.

• The test plan is a document that develops as the project is being developed.

• Record tests as they come up

• Test error prone parts of software development.

• The initial test plan is abstract and the final test plan is concrete.

• The initial test plan contains high level ideas about testing the system without getting into the details of exact

test cases.

• The most important test cases come from the requirements of the system.

• When the system is in the design stage, the initial tests can be refined a little.

• During the detailed design or coding phase, exact test cases start to materialize.

• After coding, the test points are all identified and the entire test plan is exercised on the software.

14.2 Purpose of Software Test Plan:

• To achieve 100% CORRECT code. Ensure all Functional and Design Requirements are implemented as

specified in the documentation.

• To provide a procedure for Unit and System Testing.

• To identify the documentation process for Unit and System Testing.

• To identify the test methods for Unit and System Testing.

14.3 Advantages of test plan

• Serves as a guide to testing throughout the development.

• We only need to define test points during the testing phase.

• Serves as a valuable record of what testing was done.

• The entire test plan can be reused if regression testing is done later on.

• The test plan itself could have defects just like software!

In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including

• Scope of testing

• Schedule

• Test Deliverables

• Release Criteria

• Risks and Contingencies


14.4 Process of the Software Test Plan

• Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.

• Identify which particular test(s) you're going to use to test each module.

• Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data

and test cases are adequate to verify proper operation of the unit.

• Identify the expected results for each test.

• Document the test case configuration, test data, and expected results. This information shall be submitted via

the on-line Test Case Design(TCD) and filed in the unit's Software Development File(SDF). A successful Peer

Technical Review baselines the TCD and initiates coding.

• Perform the test(s).

• Document the test data, test cases, and test configuration used during the testing process. This information

shall be submitted via the on-line Unit/System Test Report(STR) and filed in the unit's Software Development

File(SDF).

• Successful unit testing is required before the unit is eligible for component integration/system testing.

• Unsuccessful testing requires a Program Trouble Report to be generated. This document shall describe the

test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It

shall be used as a basis for later technical analysis.

• Test documents and reports shall be submitted on-line. Any specifications to be reviewed, revised, or updated

shall be handled immediately.

Deliverables: Test Case Design, System/Unit Test Report, Problem Trouble Report(if any).

14.5 Test plan template

• Test Plan Identifier

• References

• Introduction of Testing

• Test Items

• Software Risk Issues

• Features to be Tested

• Features not to be Tested

• Approach

• Item Pass/Fail Criteria

• Entry & Exit Criteria

• Suspension Criteria and Resumption Requirements

• Test Deliverables

• Remaining Test Tasks

• Environmental Needs

• Staffing and Training Needs

• Responsibilities

• Schedule

• Planning Risks and Contingencies


• Approvals

• Glossary

14.5.1 Test plan identifier

Master test plan for the Line of Credit Payment System.

14.5.2 References

List all documents that support this test plan.

Documents that are referenced include:

• Project Plan

• System Requirements specifications.

• High Level design document.

• Detail design document.

• Development and Test process standards.

• Methodology guidelines and examples.

• Corporate standards and guidelines.

14.5.3 Objective and scope of the plan

State the objective and scope of the plan in relation to the Software Project Plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities.

14.5.4 Test items (functions)

These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list

of what is to be tested. This can be developed from the software application inventories as well as other sources of

documentation and information.

This can be controlled on a local Configuration Management (CM) process if you have one. This information

includes version numbers, configuration requirements where needed, (especially if multiple versions of the product are

supported). It may also include key delivery schedule issues for critical elements.

Remember, what you are testing is what you intend to deliver to the client.

This section can be oriented to the level of the test plan. For higher levels it may be by application or functional

area, for lower levels it may be by program, unit, module or build.

14.5.5 Software risk issues

Identify what software is to be tested and what the critical areas are, such as:

• Delivery of a third party product.

• New version of interfacing software.

• Ability to use and understand a new package/tool, etc.


• Extremely complex functions.

• Modifications to components with a past history of failure.

• Poorly documented modules or change requests.

There are some inherent software risks such as complexity; these need to be identified.

• Safety.

• Multiple interfaces.

• Impacts on Client.

• Government regulations and rules.

Another key area of risk is a misunderstanding of the original requirements. This can occur at the

management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be

tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the

software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a

particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and

clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.

One good approach to define where the risks are is to have several brainstorming sessions.

14.5.6 Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical

description of the software, but a users view of the functions.

Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low.

These types of levels are understandable to a User. You should be prepared to discuss why a particular level was

chosen.

14.5.7 Features not to be tested

This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a

configuration management/version control view. This is not a technical description of the software, but a user's view of

the functions.

Identify why the feature is not to be tested, there can be any number of reasons.

• Not to be included in this release of the Software.

• Low risk: it has been used before and was considered stable.

• Will be released but not tested or documented as a functional part of the release of this version of the software.

14.5.8 Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master,

acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes

should be identified.


• Are any special tools to be used and what are they?

• Will the tool require special training?

• What metrics will be collected?

• Which level is each metric to be collected at?

• How is Configuration Management to be handled?

• How many different configurations will be tested?

• Hardware

• Software

• Combinations of HW, SW and other vendor packages

• What levels of regression testing will be done and how much at each test level?

• Will regression testing be based on severity of defects detected?

• How will elements in the requirements and design that do not make sense or are untestable be

processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

Specify if there are special requirements for the testing.

• Only the full component will be tested.

• A specified segment of grouping of features/components must be tested together.

Other information that may be useful in setting the approach is:

• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.

• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
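To illustrate how MTBF could be computed from test execution data, here is a minimal sketch; the operating hours, failure count and helper name are hypothetical and not part of the original plan template.

```python
# Minimal sketch: Mean Time Between Failures from hypothetical test-cycle data.
def mean_time_between_failures(total_operating_hours, number_of_failures):
    """MTBF = total operating time / number of failures observed."""
    if number_of_failures == 0:
        return float("inf")  # no failures observed in the period
    return total_operating_hours / number_of_failures

# Example: 400 hours of test operation with 8 failures logged.
print(mean_time_between_failures(400, 8))  # 50.0 hours between failures
```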

How will meetings and other organizational processes be handled?

14.5.9 Item pass/fail criteria

Show stopper issues should be identified here. Specify the criteria to be used to determine whether each test item has passed or failed.

14.5.10 Entry & exit criteria

14.5.10.a Entrance Criteria

The Entrance Criteria specified by the system test controller should be fulfilled before System Test can commence. In the event that any criterion has not been achieved, the System Test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.

• All developed code must be unit tested. Unit and Link Testing must be completed and signed off by

development team.


• System Test plans must be signed off by Business Analyst and Test Controller.

• All human resources must be assigned and in place.

• All test hardware and environments must be in place, and free for System test use.

• The Acceptance Tests must be completed, with a pass rate of not less than 80%.

14.5.10.b Exit Criteria

The Exit Criteria detailed below must be achieved before the Phase 1 software can be recommended for promotion to Operations Acceptance status. Furthermore, it is recommended that there be a minimum of 2 days' effort of Final Integration testing AFTER the final fix/change has been retested.

• All High Priority errors from System Test must be fixed and tested

• If any medium or low-priority errors are outstanding - the implementation risk must be signed off as acceptable

by Business Analyst and Business Expert

14.5.11 Suspension Criteria and Resumption Requirements

This is a particular risk clause to define under what circumstances testing would stop and restart.

Resumption Criteria

In the event that system testing is suspended, resumption criteria will be specified and testing will not re-commence until the software meets these criteria.

• Project Integration Test must be signed off by Test Controller and Business Analyst.

• Business Acceptance Test must be signed off by Business Expert.

14.5.12 Risks and Contingencies

This defines all other risk events, their likelihood, their impact and the countermeasures to overcome them.

Summary

The goal of this exercise is to familiarize students with the process of creating test plans.

This exercise is divided into three tasks as described below.

• Devise a test plan for the group.

• Inspect the plan of another group who will in turn inspect yours.

• Improve the group's own plan based on comments from the review session

Task 1: Creating a Test Plan

The role of a test plan is to guide all testing activities. It defines what is to be tested and what is to be left out, how the testing is to be performed (described on a general level) and by whom. It is therefore a managerial document, not a technical one - in essence, it is a project plan for testing. Therefore, the target audience of the plan should be a manager with a decent grasp of the technical issues involved.

Experience has shown that good planning can save a lot of time, even in an exercise, so do not underestimate

the effort required for this phase.


The goal of all these exercises is to carry out system testing on Word Pad, a simple word processor. Your task

is to write a thorough test plan in English using the above-mentioned sources as guidelines. The plan should be based

on the documentation of Word Pad.

Task 2: Inspecting a Test Plan

The role of a review is to make sure that a document (or code in a code review) is readable and clear and that

it contains all the necessary information and nothing more. Some implementation details should be kept in mind:

• The groups will divide their roles themselves before arriving at the inspection. A failure to follow the roles

correctly will be reflected in the grading. However, one of the assistants will act as the moderator and will not

assume any other roles.

• There will be only one meeting with the other group and the moderator. All planning, overview and preparation

is up to the groups themselves. You should use the suggested checklists in the lecture notes while preparing.

Task 3 deals with the after-meeting activities.

• The meeting is rather short, only 60 minutes for a pair (that is, 30 minutes each). Hence, all comments on the

language used in the other group's test plan are to be given in writing. The meeting itself concentrates on the

form and content of the plan.


Task 3: Improved Test Plan and Inspection Report

After the meeting, each group will prepare a short inspection report on their test plan listing their most typical

and important errors in the first version of the plan together with ideas for correcting them. You should also answer the

following questions in a separate document:

• What is the role of the test plan in designing test cases?

• What were the most difficult parts in your test plan and why?

Furthermore, the test plan is to be revised according to the input from the inspection.


Test Case


Test Case:

A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to

exercise a particular program path or to verify compliance with a specific requirement.

In software engineering, a test case is a set of conditions or variables under which a tester will determine if a

requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a

requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at

least one test case for each requirement unless a requirement has sub requirements. In that situation, each sub

requirement must have at least one test case. Some methodologies recommend creating at least two test cases for

each requirement. One of them should perform positive testing of the requirement and the other should perform negative

testing.

If the application is created without formal requirements, then test cases are written based on the accepted normal

operation of programs of a similar class.

What characterises a formal, written test case is that there is a known input and an expected output, which is worked

out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.

Under special circumstances, there could be a need to run the test, produce results, and then a team of experts would

evaluate if the results can be considered as a pass. This happens often on new products' performance number

determination. The first test is taken as the base line for subsequent test / product release cycles.

Written test cases include a description of the functionality to be tested taken from either the requirements or use cases,

and the preparation required to ensure that the test can be conducted.

Written test cases are usually collected into Test suites.

A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done by a group of end-

users or clients of the system to ensure the developed system meets their requirements. User acceptance testing is

usually differentiated by the inclusion of happy path or positive test cases.


15.1 Test Case Template


Project Name : Name of the project
Project Version : v1.0
Test Case ID : TC_ProjectName_ModuleName_001
Test Case Name : Yahoo Messenger Login
Test Case Version : v1.1
Status : Design / Review / Complete
Designer : Edwin
Creation Date : 05/06/2006
Execution Status : Design

Step 1
Test Description : Enter an invalid User ID and a valid Password in the Yahoo Messenger login screen
Expected Result : An invalid login error message should be displayed
Actual Result :

Step 2
Test Description : Enter a valid User ID and an invalid Password in the Yahoo Messenger login screen
Expected Result : An invalid login error message should be displayed
Actual Result :

Step 3
Test Description : Select the check box in the Yahoo Messenger login screen
Expected Result : Depending on the check box selected, the corresponding functionality should be performed
Actual Result :

Step 4
Test Description : Click the "Forgot your password?" link in the login window
Expected Result : The application should redirect you to the Forgot Password screen
Actual Result :
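The same template steps can also be expressed as automated checks. The sketch below is only an illustration: it assumes a hypothetical login(user_id, password) helper standing in for the application under test, not an API of Yahoo Messenger itself.

```python
# Minimal sketch of the login test case as automated checks, assuming a
# hypothetical login(user_id, password) helper that returns "INVALID_LOGIN"
# for bad credentials and "OK" for good ones.
import unittest

def login(user_id, password):
    # Stand-in for the real application under test.
    valid = {"edwin": "secret123"}
    return "OK" if valid.get(user_id) == password else "INVALID_LOGIN"

class YahooMessengerLoginTests(unittest.TestCase):
    def test_invalid_user_valid_password(self):
        # Step 1: invalid User ID, valid Password -> error expected
        self.assertEqual(login("no_such_user", "secret123"), "INVALID_LOGIN")

    def test_valid_user_invalid_password(self):
        # Step 2: valid User ID, invalid Password -> error expected
        self.assertEqual(login("edwin", "wrong"), "INVALID_LOGIN")

if __name__ == "__main__":
    unittest.main()
```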


15.2 Test Case Design Techniques

■ Black Box testing Techniques

– Equivalence Class Partitioning

– Boundary Value Analysis

– Cause-Effect Diagram

– State-Transition.

15.2.1 Equivalence Class Partitioning

A definition of Equivalence Partitioning from our software testing dictionary:

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value from each class. For example, for a given function there may be several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.

WHAT IS EQUIVALENCE PARTITIONING?

Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions

called equivalence classes are identified such that each member of the class causes the same kind of processing and

output to occur.

In this method, the tester identifies various equivalence classes for partitioning. A class is a set of input conditions that

is likely to be handled the same way by the system. If the system were to handle one case in the class erroneously,

it would handle all cases erroneously.

WHY LEARN EQUIVALENCE PARTITIONING?

Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably.

It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.

DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING

To use equivalence partitioning, you will need to perform two steps:

• Identify the equivalence classes

• Design test cases

STEP 1: IDENTIFY EQUIVALENCE CLASSES


Take each input condition described in the specification and derive at least two equivalence classes for it. One class

represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the

invalid class).

Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. The valid class (QTY is greater than or equal to -9999 and is less than or equal to 9999), also written as (-9999 <= QTY <= 9999)

2. The invalid class (QTY is less than -9999), also written as (QTY < -9999)

3. The invalid class (QTY is greater than 9999), also written as (QTY > 9999)

b) If the requirements state that the number of items input by the system at some point must lie within a certain range,

specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few

inputs and one invalid class where there are too many inputs.

For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are:

1. The valid class (the number of purchase orders is greater than or equal to 1 and less than or equal to 4), also written as (1 <= no. of purchase orders <= 4)

2. The invalid class (no. of purchase orders > 4)

3. The invalid class (no. of purchase orders < 1)

c) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with the

same way, identify a valid class for values in the set and one invalid class representing values outside of the set.

For example, suppose the specification says that the code accepts between 4 and 24 inputs; each is a 3-digit integer:

• One partition: number of inputs

• Classes “x<4”, “4<=x<=24”, “24<x”

• Chosen values: 3,4,5,14,23,24,25
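As a rough illustration of how such classes translate into concrete test data, the sketch below derives one representative value per equivalence class for the inventory quantity example above; the function name and chosen representatives are illustrative, not taken from the course text.

```python
# Minimal sketch: one representative test value per equivalence class for
# the inventory quantity example (-9999 <= QTY <= 9999).
def classify_qty(qty):
    """Return the equivalence class a quantity falls into."""
    if qty < -9999:
        return "invalid: too low"
    if qty > 9999:
        return "invalid: too high"
    return "valid"

# One value chosen from each class is enough for equivalence partitioning.
representatives = {-10000: "invalid: too low", 0: "valid", 10000: "invalid: too high"}
for value, expected_class in representatives.items():
    assert classify_qty(value) == expected_class
```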

15.2.2 Boundary value analysis:

What is boundary value analysis in software testing?

Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning does.

The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes. At those points where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis


broadens the portions of the business requirement document used to generate tests. Unlike equivalence partitioning, it

takes into account the output specifications when deriving test cases.

How do you perform boundary value analysis?

Once again, you'll need to perform two steps:

1. Identify the equivalence classes.

2. Design test cases.

But the details vary. Let's examine each step.

Step 1: identify equivalence classes

Follow the same rules you used in equivalence partitioning. However, consider the output specifications as

well. For example, if the output specifications for the inventory system stated that a report on inventory should indicate

a total quantity for all products no greater than 999,999, then you would add the following classes to the ones you found previously:

6. The valid class (0 <= total quantity on hand <= 999,999)

7. The invalid class (total quantity on hand < 0)

8. The invalid class (total quantity on hand > 999,999)

Step 2: design test cases

In this step, you derive test cases from the equivalence classes. The process is similar to that of

equivalence partitioning but the rules for designing test cases differ. With equivalence partitioning, you may select any test case within a range and any on either side of it; with boundary analysis, you focus your attention on cases close to the edges of the range. The detailed rules for generating test cases follow:

Rules for test cases

1. If the condition is a range of values, create valid test cases for each end of the range and invalid test

cases just beyond each end of the range. For example, if a valid range of quantity on hand is -9,999

through 9,999, write test cases that include:

1. the valid test case quantity on hand is -9,999,

2. the valid test case quantity on hand is 9,999,

3. the invalid test case quantity on hand is -10,000 and

4. the invalid test case quantity on hand is 10,000

You may combine valid classes wherever possible, just as you did with equivalence partitioning, and, once again, you

may not combine invalid classes. Don't forget to consider output conditions as well. In our inventory example the

output conditions generate the following test cases:

1. the valid test case total quantity on hand is 0,

2. the valid test case total quantity on hand is 999,999

3. the invalid test case total quantity on hand is -1 and

4. the invalid test case total quantity on hand is 1,000,000


2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two

valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the

acceptable range.

3. Design tests that highlight the first and last records in an input or output file.

4. Look for any other extreme input or output conditions, and generate a test for each of them.

Definition of Boundary Value Analysis from our Software Testing Dictionary:

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases" or values that are usually just out of the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive these extreme values and is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
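A small sketch of how boundary values could be generated mechanically for a numeric range is given below; the function name and return format are assumptions made for illustration only.

```python
# Minimal sketch: deriving boundary value test inputs for a numeric range.
# For the quantity-on-hand range -9999..9999 this yields the four cases
# listed above (both ends, plus one step beyond each end).
def boundary_values(low, high, step=1):
    return {
        "valid": [low, high],                  # on the boundaries
        "invalid": [low - step, high + step],  # just outside the boundaries
    }

print(boundary_values(-9999, 9999))
# {'valid': [-9999, 9999], 'invalid': [-10000, 10000]}
```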

15.2.3 Cause-Effect Graphing

Cause-Effect Graphing (CEG) is used to derive test cases from a given natural language specification to validate its

corresponding implementation.

The CEG technique is a black-box method, i.e., it considers only the desired external behavior of a system. As well, it is

the only black-box test design technique that considers combinations of causes of system behaviors.

1. A cause represents a distinct input condition or an equivalence class of input conditions. A cause can be interpreted

as an entity which brings about an internal change in the system. In a CEG, a cause is always positive and atomic.

2. An effect represents an output condition or a system transformation which is observable. An effect can be a state or

a message resulting from a combination of causes.

3. Constraints represent external constraints on the system.
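Because CEG considers combinations of causes, an effect can be written as a Boolean expression over its causes. The sketch below is a generic illustration with made-up causes and an effect; it is not an example from the course text.

```python
# Minimal sketch: an effect expressed as a Boolean combination of causes,
# the core idea behind cause-effect graphing. Causes and effect are made up.
def effect_display_welcome(cause_valid_user_id, cause_valid_password):
    # The effect occurs only when both causes are present (an AND node in the graph).
    return cause_valid_user_id and cause_valid_password

# Deriving test cases: each combination of causes is one row of the decision
# table, together with the expected effect.
for user_ok in (True, False):
    for password_ok in (True, False):
        print(user_ok, password_ok, "->", effect_display_welcome(user_ok, password_ok))
```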


15.2.4 State Transition

Changes occurring in the state-based behavior or attributes of an object, or in the various links that the object has with other objects, can be represented by this technique. State models are ideal for describing the behavior of a single object. State transition testing exercises the state-based behavior of the instances of a class.

For example, consider the operation of an Elevator.

The elevator has to go to all 5 floors in a building. Consider each floor as one state. Let the lift initially be at the 0th floor (initial state). Now a request comes from the 5th floor; the lift has to respond to that request and move to the 5th floor (next state). Now a request also comes from the 3rd floor (another state); it has to respond to this request as well. Likewise, requests may come from other floors also.

Each floor is a different state; the lift has to handle the requests from all the states and has to transition to each state in the sequence the requests come.

State Transition Diagram
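A minimal sketch of the elevator as a state machine is given below; the class and method names are illustrative only and stand in for the diagram referenced above.

```python
# Minimal sketch: the elevator example as a simple state machine, where each
# floor (0-5) is a state and a request triggers a transition to that state.
class Elevator:
    def __init__(self, floors=6):
        self.states = range(floors)   # floors 0..5 are the possible states
        self.current = 0              # initial state: the 0th floor

    def request(self, floor):
        if floor not in self.states:
            raise ValueError("no such floor")
        previous, self.current = self.current, floor   # state transition
        return previous, self.current

lift = Elevator()
print(lift.request(5))  # (0, 5): moved from floor 0 to floor 5
print(lift.request(3))  # (5, 3): then responded to the request from floor 3
```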


15.3 Sample Test Cases

1.1.1.1.1.10 Search

Test List :

1.1.1.1.1.10.1
Test Name : CDL_TCD_USH_TCS_001
Subject : Search
Status : Review
Designer : Edwin
Creation Date : 05/09/2003
Type : AUTOMATED
Description : User screen GUI check
Execution Status : Passed

Steps :

Step 1
Description : Login to Citadel using the correct GID Number, password and 'Citadel' server selected from the drop-down menu
Expected Result : The Search Screen should appear if the login is successful

Step 2
Description : Check the GUI items in the User search screen
Expected Result :
1. The User search screen should display the "Reports, Change Password and Log Off" buttons at the top right corner of the screen.
2. An Identifier Drop Down box should be displayed at the top left side of the screen.
3. A Folder check box and text field should be displayed next to the Identifier drop down list.
4. The Major Folder list box should be displayed.
5. The Minor Folder list box should be displayed below the Major Folder.
6. The Document Type Code text field should be displayed along with a "Doc Code List" button.
7. The Document Type Description text field should be displayed.
8. The Document Date field along with a display options list box should be displayed.
9. The Scan Date field along with a display options list box should be displayed.
10. Buttons namely "Import, Search and Reset" should be displayed below the above fields.


1.1.1.1.1.10.2
Test Name : CDL_TCD_USH_TCS_002
Subject : Search
Status : Review
Designer : Edwin
Creation Date : 05/09/2003
Type : AUTOMATED
Description : Identifier Drop Down check and Folder selection
Execution Status : Passed

Steps :

Step 1
Description : Login to Citadel using the correct GID Number, password and 'Citadel' server selected from the drop-down menu
Expected Result : The Main Search Screen should appear if the login is successful

Step 2
Description : Click the Identifier drop down button
Expected Result : Banker Last Name, CAS ID, CAS Last Name, SPN, SPN Name, Polaris Doc Number, Processor Name, Specialists Name, Fiduciary Manager, Investment Manager, Portfolio Manager, Sales Manager, Account Number and Account Title should be displayed in the drop down list

Step 3
Description : Select any of the below options from the drop down list: 1. Polaris Doc Number 2. Processor Name
Expected Result : The Folder check box shouldn't be selected

Step 4
Description : Select any of the below options from the drop down list: 1. Banker Last Name 2. Specialists Name 3. Fiduciary Manager 4. Investment Manager 5. Portfolio Manager 6. Sales Manager
Expected Result : The Folder check box should be selected

Step 5
Description : Select any of the below options from the drop down list: 1. Account Number 2. Account Title 3. CAS ID 4. CAS Name 5. SPN 6. SPN Name
Expected Result : The folder check box should not have any effect on this item selection; it still retains the previous selection of the folder


Test execution


16.1 Test execution

When the test design activity has finished, the test cases are executed. Test execution is the phase that follows everything discussed to this point: with test strategies and test plans in place, test procedures designed and developed, and the test environment operational, it is time to execute the tests created in the preceding phases.

Once development of the system is underway and software builds become ready for testing, the testing team

must have a precisely defined work flow for executing tests, tracking defects found, and providing information, or

metrics, on the progress of the testing effort.

16.2 When to stop testing?

Testing is potentially endless. We cannot test until all the defects are unearthed and removed -- it is simply impossible. At some point, we have to stop testing and ship the software. The question is when.

Realistically, testing is a trade-off between budget, time and quality. It is driven by profit models. There are two types of approach:

• Pessimistic Approach.

• Optimistic Approach.

The pessimistic and unfortunately most often used approach is to stop testing whenever some or any of the

allocated resources -- time, budget, or test cases -- are exhausted.

The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from

continuing testing cannot justify the testing cost. This will usually require the use of reliability models to evaluate and

predict reliability of the software under test. Each evaluation requires repeated running of the following cycle: failure

data gathering -- modeling -- prediction.

16.3 Defect

Defects are commonly defined as "failure to conform to specifications," e.g., incorrectly implemented specifications and specified requirement(s) missing from the software. A bug in a software product is any exception that

can hinder the functionality of either the whole software or part of it.

16.3.1 Defect Fundamentals

A defect is termed as a flaw or deviation from the requirements. Basically, test cases/scripts are run in order to find out any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug.

Discussions within the software development community consistently recognize that most failures in software

products are due to errors in the specifications or requirements—as high as 80 percent of total defect costs.


Defect is termed as variance from a desired attribute. These attributes include complete and correct

requirements and specifications, designs that meet requirements and programs that observe requirements and

business rules.

16.4 Report a Bug

Before you report a bug, please review the following guidelines:

It’s a good practice to take screen shots of execution of every step during software testing. If any test case fails

during execution, it needs to be failed in the bug-reporting tool and a bug has to be reported/logged for the same.

The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or fail a test case

and report a bug. In any case, the Bug ID that is generated for the reported bug should be attached to the test case that

is failed.

At the time of reporting a bug, all the mandatory fields from the contents of bug (such as Project, Summary,

Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version, Closed in Version,

Expected Date of Closure, Actual Date of Closure, Severity, Priority and Bug ID etc.) are filled and detailed description

of the bug is given along with the expected and actual results. The screen-shots taken at the time of execution of test

case are attached to the bug for reference by the developer.

After reporting a bug, a unique Bug ID is generated by the bug-reporting tool, which is then associated with the

failed test case. This Bug ID helps in associating the bug with the failed test case.

After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug fixing process

progresses.

If more than one tester is testing the software application, it is possible that some other tester might already have reported a bug for the same defect found in the application. In such a situation, it becomes very

important for the tester to find out if any bug has been reported for similar type of defect. If yes, then the test case has

to be blocked with the previously raised bug (in this case, the test case has to be executed once the bug is fixed). And if

there is no such bug reported previously, the tester can report a new bug and fail the test case for the newly raised bug.

If no bug-reporting tool is used, the test case is written in a tabular manner in a file with four

columns containing Test Step No, Test Step Description, Expected Result and Actual Result. The expected and actual

results are written for each step and the test case is failed for the step at which the test case fails.
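If that four-column file is kept as plain CSV, it could be produced with a few lines of scripting. The sketch below is only an illustration of the layout; the file name and step contents are made up.

```python
# Minimal sketch: writing a test case as a four-column CSV file when no
# bug-reporting tool is available. File name and step contents are made up.
import csv

rows = [
    ["Test Step No", "Test Step Description", "Expected Result", "Actual Result"],
    ["1", "Enter invalid User ID and valid Password", "Invalid login error", "Invalid login error"],
    ["2", "Enter valid User ID and invalid Password", "Invalid login error", "Application crashed"],
]

with open("test_case_login.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```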

These are the three basic elements of a bug report: what you did, what you expected to happen, and what actually happened. You need to state each of them exactly.

This file containing the test case and the screen shots taken is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep updated information on the bug from the time it is raised until the time it is closed.

Always search the bug database first

The odds are high that if you've found a problem, someone else has found it, too. If you spend a few minutes of

your time making sure that you're not filing a duplicate bug, that's a few more minutes someone can spend helping to fix

that bug rather than sorting out duplicate bug reports.


If you don't understand an error message, ask for help. Don't report an error message you don't understand as

a bug. There are a lot of places you can ask for help in understanding what is going on before you can claim that an

error message you do not understand is a bug. Be brief, but don't leave any important details out.

There are some general guidelines as given below:

• Remember the three basics: what you did, what you expected to happen, and what happened.

• When you provide code that demonstrates the problem, it should almost never be more than ten lines long.

Anything longer probably contains a lot of code that has nothing to do with the problem, which makes it more

difficult to figure out the real problem.

• If the product is crashing, include a trace log or stack dump (be sure to copy and paste all of the cryptic error codes and line numbers included in the results).

Don't report bugs about old versions.

Every time a new version is released, many bugs are fixed. If you're using a version of a product that is more

than two revisions older than the latest version, you should upgrade to the latest version to make sure the bug you are

experiencing still exists.

Only report one problem in each bug report. If you have encountered two bugs that don't appear to be related,

create a new bug report for each one. This makes it easier for different people to help with the different bugs.

16.4.1 Template to add a Bug


16.4.2 Contents of a Bug Report

The following is the complete list of the contents of a bug/error/defect that are needed at the time of raising a bug during software testing. These fields help in identifying a bug uniquely.

When a tester finds a defect, he/she needs to report a bug and enter certain fields, which helps in uniquely

identifying the bug reported by the tester. The contents of a bug are as given below:

Project: Name of the project under which the testing is being carried out.

Subject: Description of the bug in short which will help in identifying the bug. This generally starts with the project

identifier number/string. This string should be clear enough to help the reader anticipate the problem/defect for which

the bug has been reported.

Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the

actual results. At the end of the summary, the step at which the test case fails is described along with the actual result

obtained and expected result.

Summary: This field contains some keyword information about the bug, which can help in minimizing the number of

records to be searched.

Detected By: Name of the tester who detected/reported the bug.

Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of

the development group leader, who then delegates the task to a member of his team and changes the name accordingly.

Test Lead: Name of leader of testing team, under whom the tester reports the bug.

Detected in Version: This field contains the version information of the software application in which the bug was

detected.

Closed in Version: This field contains the version information of the software application in which the bug was fixed.

Date Detected: Date at which the bug was detected and reported.

Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.

Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed

and retested successfully.


Priority: Priority of the bug fixing. This specifically depends upon the functionality that it is hindering. Generally Low, Medium, High and Urgent are the priority levels that are used.

Severity: This is typically a numerical field, which displays the severity of the bug. It can range from 1 to 5, where 1 is

high severity and 5 is the lowest.

Status: This field displays current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first

time reported by the tester, further the status is changed to Assigned, Open, Retest, Pending Retest, Pending Reject,

Rejected, Closed, Postponed, Deferred etc. as per the progress of bug fixing process.

Bug ID: This is a unique ID i.e. number created for the bug at the time of reporting, which identifies the bug uniquely.

Attachment: Sometimes it is necessary to attach screen-shots of the tested functionality; these can help the tester explain the testing that was done and also help developers in re-creating a similar testing condition.

Test Case Failed: This field contains the test case that is failed for the bug.

Any of the above fields can be made mandatory, in which case the tester has to enter valid data at the time of reporting a bug.
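To show how these fields might be captured in a structured form, here is a minimal sketch of a bug record; the field subset, types and defaults are illustrative only and do not prescribe any particular bug-reporting tool.

```python
# Minimal sketch: a bug record carrying a subset of the fields listed above.
# Field names and defaults are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BugReport:
    bug_id: str
    project: str
    subject: str
    description: str
    detected_by: str
    assigned_to: str
    detected_in_version: str
    severity: int = 3            # 1 = highest severity, 5 = lowest
    priority: str = "Medium"     # Low / Medium / High / Urgent
    status: str = "New"          # initial status when first reported
    date_detected: date = field(default_factory=date.today)

bug = BugReport("PRJ-001", "Citadel", "Login fails", "Invalid login not handled",
                "Edwin", "Dev Lead", "v1.0")
print(bug.status)  # New
```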

16.5 Defect Severity

The urgency with which a defect has to be repaired is derived from the severity of the defect, which could be

defined as follows:

• Critical.

• Major.

• Average.

• Minor.

• Exception.

16.5.1 Critical

The defect results in the failure of the complete software system, of a subsystem, or of a software unit

(program or module) within the system. A defect that prevents the user from moving ahead in the application, a "show

stopper" is classified as "Critical," e.g., performing an event causes a general protection fault in the application.

Performance defects may also be classified as "Critical" for certain software that must meet predetermined performance

metrics.

16.5.2 Major

The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are


acceptable processing alternatives which will yield the desired result. An overly long processing time may be classified

as "Major" because although it does not prevent the user from proceeding, it is a performance deficiency.

16.5.3 Average

The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability. If the user is able to formulate a work-around where there are defects,

these defects may be classified as "Average." Defects with severity "Average" will be repaired when the higher-category

defects have been repaired and if time permits.

16.5.4 Minor

The defect does not cause a failure, does not impair usability, and the desired processing results are easily

obtained by working around the defect. Certain graphical user interface defects, such as placement of push buttons on

the window, may be classified as "Minor" since this does not impede the application functionality. Although defect

priority indicates how quickly the defect must be repaired, its severity is determined by the importance of that aspect of

the application in relation to the software requirements.

16.5.5 Exception

The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a

request for an enhancement. Defects at this level may be deferred or even ignored.

16.6 Defects Prioritized

Once a defect is logged and described, appropriate resources must be allocated for its resolution. To do this,

the defect must be analyzed and prioritized according to its severity. Each defect is given a priority based on its

criticality. Usually, it is practical to have five priority levels:

• Urgent.

• High Priority.

• Medium.

• Low Priority.

• Defer.

16.6.1 Urgent

Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected, for example a system crash or an error message forcing the user to close the window. The tester's ability to operate the system is either totally (System Down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.


A misstatement of a requirement or a serious design flaw must be resolved immediately, before the developer

translates it into code that is implemented in the software; it is much cheaper to amend a requirement document

than to make program code changes.

16.6.2 High

The defect must be resolved as soon as possible because it is impairing development and/or testing activities.

System use will be severely affected until the defect is fixed.

The critical path for development is another determinant of defect priority. If one piece of the functionality must

work before the next piece is added, any functional defects of the first piece will be given the "High" priority level.

For example: A query engine retrieved transactions matching user-specified criteria upon which further processing was

performed. If the query engine had been defective, no further development (or testing) would have been practical.

Therefore, all functional defects of the query engine were prioritized as "High".

16.6.3 Medium

The defect should be resolved in the normal course of development activities. It can wait until a new build or

version is created.

16.6.4 Low

The defect is an irritant which should be repaired, but which can be repaired after more serious defects have

been fixed. The wrong font size for a label may be classified as "Low Priority".

16.6.5 Defer

The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved

at all.

16.7 Bug Life Cycle

The duration or time span between the first time a bug is found (status: ‘New’) and the time it is closed successfully (status: ‘Closed’), rejected, postponed or deferred is called the ‘Bug/Error Life Cycle’.

(Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned various

statuses which are New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed).


There are seven different life cycles that a bug can pass through:

Cycle I:

1) A tester finds a bug and reports it to Test Lead.

2) The Test lead verifies if the bug is valid or not.

3) Test lead finds that the bug is not valid and the bug is ‘Rejected’.

Cycle II:


1) A tester finds a bug and reports it to Test Lead.

2) The Test lead verifies if the bug is valid or not.

3) The bug is verified and reported to development team with status as ‘New’.

4) The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of ‘Pending

Reject’ before passing it back to the testing team.

5) After getting a satisfactory reply from the development side, the test leader marks the bug as ‘Rejected’.

Cycle III:

1) A tester finds a bug and reports it to Test Lead.

2) The Test lead verifies if the bug is valid or not.

3) The bug is verified and reported to development team with status as ‘New’.

4) The development leader and team verify if it is a valid bug. The bug is valid and the development leader assigns a

developer to it marking the status as ‘Assigned’.

5) The developer solves the problem and marks the bug as ‘Fixed’ and passes it back to the Development leader.

6) The development leader changes the status of the bug to ‘Pending Retest’ and passes on to the testing team for

retest.

7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.

8) The tester retests the bug and it is working fine, so the tester closes the bug and marks it as ‘Closed’.

Cycle IV:

1) A tester finds a bug and reports it to Test Lead.

2) The Test lead verifies if the bug is valid or not.

3) The bug is verified and reported to development team with status as ‘New’.

4) The development leader and team verify if it is a valid bug. The bug is valid and the development leader assigns a

developer to it marking the status as ‘Assigned’.

5) The developer solves the problem and marks the bug as ‘Fixed’ and passes it back to the Development leader.

6) The development leader changes the status of the bug to ‘Pending Retest’ and passes on to the testing team for

retest.

7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.

8) The tester retests the bug and the same problem persists, so the tester after confirmation from test leader reopens

the bug and marks it with ‘Reopen’ status. And the bug is passed back to the development team for fixing.

Cycle V:

1) A tester finds a bug and reports it to Test Lead.

2) The Test lead verifies if the bug is valid or not.


3) The bug is verified and reported to development team with status as ‘New’.

4) The developer tries to verify if the bug is valid but fails to replicate the scenario that existed at the time of testing, and asks for help from the testing team.

5) The tester also fails to re-generate the scenario in which the bug was found, and the developer rejects the bug, marking it ‘Rejected’.

Cycle VI:

1) After confirmation that the data is unavailable or certain functionality is unavailable, the solution and retest of the bug

is postponed for indefinite time and it is marked as ‘Postponed’.

Cycle VII:

1) If the bug is not of much importance and can be or needs to be postponed indefinitely, then it is given a status of ‘Deferred’.

This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.

16.8 Bug Statuses Used During a Bug Life Cycle

Any software development process is incomplete if the most important phase of Testing of the developed

product is excluded. Software testing is a process carried out in order to find out and fix previously undetected

bugs/errors in the software product. It helps in improving the quality of the software product and makes it more secure for the client to use.

Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned

various statuses which are New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and

Closed.

16.8.1 Statuses associated with a bug

16.8.1.a New:

When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader

(Test Leader) in order to confirm if that is a valid bug. After getting confirmation from the Test Lead, the software tester

logs the bug and the status of ‘New’ is assigned to the bug.


16.8.1.b Assigned:

After the bug is reported as ‘New’, it comes to the Development Team. The development team verifies if the

bug is valid. If the bug is valid, development leader assigns it to a developer to fix it and a status of ‘Assigned’ is

assigned to it.

16.8.1.c Open:

Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to indicate that

he/she is working on it to find a solution.

16.8.1.d Fixed:

Once the developer makes the necessary changes in the code and verifies the code, he/she marks the bug as

‘Fixed’ and passes it over to the Development Lead in order to pass it to the Testing team.

16.8.1.e Pending Retest:

After the bug is fixed, it is passed back to the testing team to get retested and the status of ‘Pending Retest’ is

assigned to it.

16.8.1.f Retest:

The testing team leader changes the status of the bug, which was previously marked with ‘Pending Retest’, to

‘Retest’ and assigns it to a tester for retesting.

16.8.1.g Closed:

After the bug is assigned a status of ‘Retest’, it is again tested. If the problem is solved, the tester closes it and

marks it with ‘Closed’ status.

16.8.1.h Reopen:

If, after retesting the software for the opened bug, the system behaves in the same way or the same bug arises

once again, then the tester reopens the bug and again sends it back to the developer marking its status as ‘Reopen’.

16.8.1.i Pending Reject:

If the developers think that a particular behavior of the system, which the tester reports as a bug, has to remain the

same and the bug is invalid, in that case, the bug is rejected and marked as ‘Pending Reject’.

16.8.1.j Rejected:

If the Testing Leader finds that the system is working according to the specifications or the bug is invalid as per

the explanation from the development, he/she rejects the bug and marks its status as ‘Rejected’.

16.8.1.k Postponed:

Sometimes, testing of a particular bug has to be postponed for an indefinite period. This situation may occur

because of many reasons, such as unavailability of Test data, unavailability of particular functionality etc. That time, the

bug is marked with ‘Postponed’ status.


16.8.1.l Deferred:

In some cases a particular bug is of no immediate importance and needs to be or can be avoided; in that case it is marked with ‘Deferred’ status.
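The allowed movements between these statuses can also be summarized as a transition table. The sketch below encodes one plausible reading of the life cycles described above; it is illustrative, not an exhaustive or authoritative rule set.

```python
# Minimal sketch: bug status transitions implied by the life cycles above.
# The mapping is one plausible reading of the text, not an exhaustive rule set.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
    "Pending Reject": {"Rejected"},
}

def change_status(current, new):
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move a bug from {current} to {new}")
    return new

print(change_status("New", "Assigned"))  # Assigned
```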

16.9 Defect report template


16.10 Defect tracking

Defect tracking is the process of finding defects in a product, (by inspection, testing, or recording feedback

from customers), and making new versions of the product that fix the defects. Defect tracking is important in software

engineering as complex software systems typically have tens or hundreds of thousands of defects: managing,

evaluating and prioritizing these defects is a difficult task: defect tracking systems are computer database systems that

store defects and help people to manage them.

The purpose of Defect Tracking is to help engineering management achieve their goal of producing quality

products on time and to budget.

• Planning and Estimation

• Tracking

• Control

• Process Implementation and Change

• Accessibility

Implementing an Effective Defect Tracking Process

Effective defect tracking begins with a systematic process. A structured tracking process begins with initially

logging the defects, investigating the defects, then providing the structure to resolve them. Defect analysis and

reporting offer a powerful means to manage defects and defect depletion trends, hence quality costs.

16.10.1 Different Phases of Defect Tracking

Successful verification throughout the development cycle requires a clearly defined system specification and

software application business rules.

16.10.1.a Requirements phase

Defect tracking focuses on validating that the defined requirements meet the needs and the user's expectation

about functionality. Sometimes, system-specific constraints would cause the deletion of certain business requirements.

16.10.1.b Design and analysis phase

Efforts should focus on identifying and documenting that the application design meets the business rules or

field requirements as defined by the business or user requirements.

For example, does the design correctly represent the expected user interface? Will it enforce the defined

business rules? Would a simpler design reduce coding time and documentation of user manuals and training? Does the

design have other effects on the reliability of the program?


In one project, for example, coding was well under way when incomplete system specifications caused the transfer of data on the bridge to fail. The failure was not due to coding errors but to specification errors that were translated into program code. Had the

deficiency been discovered before coding began, we could have saved the substantial time and money required to

repair the programs.

16.10.1.c Programming phases

Defect tracking must emphasize ensuring that the programs accomplish the defined application functionality

given by the requirements and design.

For example, has any particular coding caused defects in other parts of the application or in the database? Is a

particular feature visibly wrong?

16.10.1.d Maintenance and enhancement phases

During the maintenance phase, effort is spent tracking ongoing user issues with the software.

During enhancement phases (there could be multiple releases), defect tracking is focused on establishing that

the previous release is still stable when the enhancements have been added. The Figure given below represents the

philosophy of defect tracking throughout the software development process.

"Quality comes not from inspection, but improvement of the development process."


TESTING METRICS


17.1 What is a test metric?

Test metrics is a process for analyzing the current level of maturity of testing and predicting future trends; it is ultimately meant for enhancing the testing process, so that activities which were missed in the current test cycle are added in the next build to improve or enhance the Testing Process.

Metrics are the numerical data which help us to measure test effectiveness.

Metrics are produced in two forms

1. Base Metrics and

2. Derived Metrics.

17.1.a Example of Base Metrics:

# Test Cases

# New Test Cases

# Test Cases Executed

# Test Cases Unexecuted

# Test Cases Re-executed

# Passes

# Fails

# Test Cases Under Investigation

# Test Cases Blocked

# 1st Run Fails

#Test Case Execution Time

# Testers

17.1.b Example of Derived Metrics:

% Test Cases Complete

% Test Cases Passed

% Test Cases Failed

% Test Cases Blocked

% Test Defects Corrected
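Derived metrics are simple ratios of the base metrics. The sketch below computes a few of them from hypothetical base counts; the numbers and the choice of denominator are illustrative only.

```python
# Minimal sketch: deriving percentage metrics from base test metrics.
# The base counts below are hypothetical.
base = {"test_cases": 200, "executed": 180, "passed": 150, "failed": 25, "blocked": 5}

derived = {
    "% Test Cases Complete": 100.0 * base["executed"] / base["test_cases"],
    "% Test Cases Passed":   100.0 * base["passed"]   / base["executed"],
    "% Test Cases Failed":   100.0 * base["failed"]   / base["executed"],
    "% Test Cases Blocked":  100.0 * base["blocked"]  / base["executed"],
}

for name, value in derived.items():
    print(f"{name}: {value:.1f}%")
```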

17.3 Objective of test metrics

The objective of Test Metrics is to capture the planned and actual quantities of the effort, time and resources

required to complete all the phases of Testing of the software Project.

Test metrics usually cover 3 things:


1. Test coverage

2. Time for one test cycle.

3. Convergence of testing

17.4 Why testing metrics?

As we all know, a major percentage of software projects suffer from quality problems. Software testing provides visibility into product and process quality. Test metrics are key "facts" that help project managers understand their current position and prioritize their activities to reduce the risk of schedule over-runs on software releases.

Test metrics are a very powerful management tool. They help you to measure your current performance. Because today's data becomes tomorrow's historical data, it is never too late to start recording key information on your project. This data can be used to improve future work estimates and quality levels. Without historical data, estimates will be guesses.

You cannot track the project status meaningfully unless you know the actual effort and time spent on each task

as compared to your estimates. You cannot sensibly decide whether your product is stable enough to ship unless you

track the rates at which your team is finding and fixing defects. You cannot quantify the performance of your new

development processes without some statistics on your current performance and a baseline to compare it with. Metrics

help you to better control your software projects. They enable you to learn more about the functioning of your

organization by establishing a Process Capability baseline that can be used to better estimate and predict the quality of

your projects in the future.

17.5 The benefits of having testing metrics

1. Test metrics data collection helps predict the long-term direction and scope for an organization and enables a more

holistic view of business and identifies high-level goals

2. Provides a basis for estimation and facilitates planning for closure of the performance gap

3. Provides a means for control / status reporting

4. Identifies risk areas that require more testing

5. Provides meters to flag actions for faster, more informed decision making


6. Quickly identifies and helps resolve potential problems and identifies areas of improvement

7. Test metrics provide an objective measure of the effectiveness and efficiency of testing

Key factors to bear in mind while setting up test metrics

1. Collect only the data that you will actually use/need to make informed decisions to alter your strategies; if you are not

going to change your strategy regardless of the finding, your time is better spent in testing.

2. Do not base decisions solely on data that is variable or can be manipulated. For example, measuring testers on the

number of tests they write per day can reward them for speeding through superficial tests or punish them for tracking

trickier functionality.

3. Use statistical analysis to get a better understanding of the data. Difficult metrics data should be analyzed carefully.

The templates used for presenting data should be self explanatory.

4. One of the key inputs to the metrics program is the defect tracking system in which the reported process and product

defects are logged and tracked to closure. It is therefore very important to carefully decide on the fields that are needed per defect in the defect tracking system and then generate customizable reports.

5. Metrics should be decided on the basis of their importance to stakeholders rather than ease of data collection.

Metrics that are of no interest to the stakeholders should be avoided.

6. Inaccurate data should be avoided and complex data should be collected carefully. Proper benchmarks should be

defined for the entire program.

17.6 Deciding on the Metrics to Collect

There are literally thousands of possible software metrics to collect and possible things to measure about

software development. There are many books and training programs available about software metrics, but which of the

many metrics are appropriate for your situation? One method is to start with one of the many available published suites

of metrics and a vision of your own management problems and goals, and then customize the metrics list based on the

following metrics collection checklist. For each metric, you must consider,

1) What are you trying to manage with this metric? Each metric must relate to a specific management

area of interest in a direct way. The more convoluted the relationship between the measurement and the

management goal, the less likely you are to be collecting the right thing.

2) What does this metric measure? Exactly what does this metric count? High-level attempts to answer

this question (such as "it measures how much we've accomplished") may be misleading. The detailed answers


(such as "it reports how much we had budgeted for design tasks that first-level supervisors are reporting as

greater than 80 percent complete") are much more informative, and can provide greater insight regarding the

accuracy and usefulness of any specific metric.

3) If your organization optimized this metric alone, what other important aspects of your software design,

development, testing, deployment, and maintenance would be affected? Asking this question will provide a list

of areas where you must check to be sure that you have a balancing metric. Otherwise, your metrics program

may have unintended effects and drive your organization to undesirable behavior.

4) How hard/expensive is it to collect this information? This is where you actually get to identify whether

collection of this metric is worth the effort. If it is very expensive or hard to collect, look for automation that can

make the collection easier, or consider alternative metrics that can be substituted.

5) Does the collection of this metric interact with (or interfere with) other business processes? For

example, does the metric attempt to gather financial information on a different periodic basis or with different

granularity than your financial system collects and reports it? If so, how will the two quantitative systems be

synchronized? Who will reconcile differences? Can the two collection efforts be combined into one and provide

sufficient software metrics information?

6) How accurate will the information be after you collect it? Complex or manpower-intensive metrics

collection efforts are often short circuited under time and schedule pressure by the people responsible for the

collection. Metrics involving opinions (e.g., what percentage complete do you think you are?) are notoriously

inaccurate. Exercise caution, and carefully evaluate the validity of metrics with these characteristics.

7) Can this management interest area be measured by other metrics? What alternatives to this metric

exist? Always look for an easier-to-collect, more accurate, more timely metric that will measure relevant

aspects of the management issue of concern.

Use of this checklist will help ensure the collection of an efficient suite of software development metrics that directly

relates to management goals. Periodic review of existing metrics against this checklist is recommended.

Projects that are underestimated or over budget, or that produce unstable products, have the potential to devastate a company. Accurate estimates, competitive productivity, and renewed confidence in product quality are critical to the success of the company.

Hoping to solve these problems as quickly as possible, the management of a small company, Integrated Software, embarks on the 7-Step Metrics Program.

Step 1: Document the Software Development Process

Integrated Software does not have a defined development process. However, the new metrics coordinator

does a quick review of project status reports and finds that the activities of requirements analysis, design, code, review,

recode, test, and debugging describe how the teams spend their time. The inputs, work performed, outputs and


verification criteria for each activity have not been recorded. He decides to skip these details for this "test" exercise. The

recode activity includes only effort spent addressing software action items (defects) identified in reviews.

Step 2: State the Goals

The metrics coordinator sets out to define the goals of the metrics program. The list of goals in Step 2 of the 7-Step Metrics Program is broader than (yet still related to) the immediate concerns of Integrated Software. Discussion with the development staff leads to some good ideas on how to tailor these into specific goals for the company.

1. Estimates

The development staff at Integrated Software considers past estimates to have been unrealistic, as they were established using “finger in the wind” techniques. They suggest that the current plan could benefit from past experience, as the present project is very similar to past projects.

Goal: Use previous project experience to improve estimates of productivity.

2. Productivity

Discussions about the significant effort spent in debugging center on a comment by one of the developers that

defects found early on in reviews have been faster to repair than defects discovered by the test group. It seems that

both reviews and testing are needed, but the amount of effort to put into each is not clear.

Goal: Optimize defect detection and removal.

3. Quality

The test group at the company argues for exhaustive testing. This, however, is prohibitively expensive.

Alternatively, they suggest looking at the trends of defects discovered and repaired over time to better understand the

probable number of defects remaining.

Goal: Ensure that the defect detection rate during testing is converging towards a level that indicates that less than five

defects per KSLOC will be discovered in the next year.
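
As a rough illustration of how such a convergence criterion might be checked, the sketch below (Python, with invented weekly figures) annualizes the most recent average weekly discovery rate and compares the projection against the five-defects-per-KSLOC threshold. The variable names, the sample data and the four-week averaging window are assumptions for illustration, not part of the course material.

    # Hypothetical check of the Goal 3 release criterion: project next year's
    # defect discovery from the recent weekly discovery rate.
    weekly_defects = [42, 35, 28, 19, 14, 11, 9, 7]  # defects found per week (example data)
    ksloc = 120.0                                    # product size in thousands of SLOC (example)

    recent_rate = sum(weekly_defects[-4:]) / 4       # average defects per week over the last 4 weeks
    projected_per_ksloc = recent_rate * 52 / ksloc   # naive one-year projection

    print(f"Projected defects/KSLOC over the next year: {projected_per_ksloc:.1f}")
    if projected_per_ksloc < 5:
        print("Discovery rate has converged below the 5 defects/KSLOC goal.")
    else:
        print("Keep testing: the projected residual defect rate is still too high.")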

Step 3: Define Metrics Required to Reach Goals and Identify Data to Collect

Working from the Step 3 tables, the metrics coordinator chooses the following metrics for the metrics program.

Goal 1: Improve Estimates

• Actual effort for each type of software in PH (person hours)

• Size of each type of software in SLOC

• Software product complexity (type)

• Labor rate (PH/SLOC) for each type

Goal 2: Improve Productivity

• Total number of person hours per activity

• Number of defects discovered in reviews

• Number of defects discovered in testing

• Effort spent repairing defects discovered in reviews

• Effort spent repairing defects discovered in testing

• Number of defects removed per effort spent in reviews and recode

• Number of defects removed per effort spent in testing and debug

Goal 3: Improve Quality

• Total number of defects discovered


• Total number of defects repaired

• Number of defects discovered / schedule date

• Number of defects repaired / schedule date
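
A minimal sketch (Python) of how the data items listed under the three goals might be recorded per project; the field names, units and class layout are assumptions chosen to mirror the bullet lists above, not a prescribed format.

    from dataclasses import dataclass

    @dataclass
    class ProjectMetrics:
        software_type: str        # product type/complexity (Goal 1)
        size_sloc: int            # size in SLOC (Goal 1)
        effort_ph: float          # actual effort in person hours (Goal 1)
        review_defects: int       # defects discovered in reviews (Goal 2)
        test_defects: int         # defects discovered in testing (Goal 2)
        review_repair_ph: float   # effort spent repairing review defects (Goal 2)
        test_repair_ph: float     # effort spent repairing test defects (Goal 2)

        def labor_rate(self) -> float:
            # Labor rate in PH per SLOC (Goal 1).
            return self.effort_ph / self.size_sloc

        def review_removal_rate(self) -> float:
            # Defects removed per person hour spent in reviews and recode (Goal 2).
            return self.review_defects / self.review_repair_ph

        def test_removal_rate(self) -> float:
            # Defects removed per person hour spent in testing and debug (Goal 2).
            return self.test_defects / self.test_repair_ph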

17.7 Types of test metrics

1. Product test metrics

i. Number of remarks

ii. Number of defects

iii. Remark status

iv. Defect severity

v. Defect severity index

vi. Time to find a defect

vii. Time to solve a defect

viii. Test coverage

ix. Test case effectiveness

x. Defects/KLOC

2. Project test metrics

i. Workload capacity ratio

ii. Test planning performance

iii. Test effort percentage

iv. Defect category

3. Process test metrics

i. Should be found in which phase

ii. Residual defect density

iii. Defect remark ratio

iv. Valid remark ratio

v. Phase yield

vi. Backlog development

vii. Backlog testing

viii. Scope changes

ix. Defect removal efficiency

x. Bad fix ratio


17.7.1 Product test metrics

I. Number of remarks

Definition

The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior. It may or may not result in software modification or changes to documentation.

Purpose

One of the earliest indicators to measure once the testing commences; provides initial indications about the

stability of the software

Data to collect

Total number of remarks found.

II. Number of defects

Definition

The total number of remarks found in a given time period/phase/test type that resulted in software or

documentation modifications.

Purpose

Provides an indication of the quality of the product under test; unlike the number of remarks, it counts only those reports that turned out to be real problems requiring modification.

Data to collect

Only remarks that resulted in modifying the software or the documentation are counted.

III. Remark status

Definition

The status of a defect can vary depending upon the defect-tracking tool that is used. Broadly, the following statuses are available:

To be solved: Logged by the test engineer and waiting to be taken over by the software engineer.

To be retested: Solved by the developer and waiting to be retested by the test engineer.

Closed: The issue was retested by the test engineer and approved.

Purpose

Track the progress with respect to entering, solving and retesting the remarks. During this phase, the

information is useful to know the number of remarks logged, solved, waiting to be resolved and retested.

Data to collect

This information can normally be obtained directly from the defect tracking system based on the remark status.
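
For illustration only: assuming the defect tracking tool can export remarks with a status field, a few lines of Python are enough to produce the status breakdown described above. The status names and sample records are assumptions.

    from collections import Counter

    # Example export from a defect tracking tool (status values assumed).
    remarks = [
        {"id": 101, "status": "To be solved"},
        {"id": 102, "status": "To be retested"},
        {"id": 103, "status": "Closed"},
        {"id": 104, "status": "To be solved"},
    ]

    status_counts = Counter(r["status"] for r in remarks)
    for status, count in status_counts.items():
        print(f"{status}: {count}")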

IV. Defect severity

Definition


The severity level of a defect indicates the potential business impact for the end user (business impact = effect

on the end user x frequency of occurrence).

Purpose

Provides indications about the quality of the product under test. A high-severity defect means low product

quality, and vice versa. At the end of this phase, this information is useful to make the release decision based on the

number of defects and their severity levels.

Data to collect

Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium and Low.

V. Defect severity index

Definition

An index representing the average of the severity of the defects.

Purpose

Provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability.

Data to collect

Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of defects at each severity level by that level's number, add the results, and divide the sum by the total number of defects to obtain the defect severity index.
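
The calculation can be illustrated with a short, self-contained sketch; the severity weights follow the text above, while the sample counts are invented.

    # Severity weights as given above: Critical=4, Serious=3, Medium=2, Low=1.
    SEVERITY_WEIGHTS = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

    # Invented example: number of defects logged per severity level.
    defect_counts = {"Critical": 2, "Serious": 5, "Medium": 10, "Low": 3}

    weighted_sum = sum(SEVERITY_WEIGHTS[s] * n for s, n in defect_counts.items())
    total_defects = sum(defect_counts.values())
    severity_index = weighted_sum / total_defects

    print(f"Defect severity index: {severity_index:.2f}")  # (2*4 + 5*3 + 10*2 + 3*1) / 20 = 2.30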

VI. Time to find a defect

Definition

The effort required to find a defect.

Purpose

Shows how fast the defects are being found. This metric indicates the correlation between the test effort and

the number of defects found.

Data to collect

Divide the cumulative hours spent on test execution and logging defects by the number of defects entered

during the same period.


VII. Time to solve a defect

Definition

Effort required to resolve a defect (diagnosis and correction).

Purpose

Provides an indication of the maintainability of the product and can be used to estimate projected maintenance

costs.

Data to collect

Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the

same period.
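
Both effort metrics are simple ratios; a minimal sketch with hypothetical figures:

    # Hypothetical figures for one test phase.
    test_execution_hours = 160.0    # hours spent executing tests and logging defects
    defects_found = 40              # defects entered in the same period

    diagnosis_and_fix_hours = 95.0  # hours spent diagnosing and correcting defects
    defects_resolved = 38           # defects resolved in the same period

    time_to_find = test_execution_hours / defects_found          # hours per defect found
    time_to_solve = diagnosis_and_fix_hours / defects_resolved   # hours per defect resolved

    print(f"Time to find a defect:  {time_to_find:.1f} h")
    print(f"Time to solve a defect: {time_to_solve:.1f} h")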

VIII. Test coverage

Definition

Defined as the extent to which testing covers the product’s complete functionality.

Purpose

This metric is an indication of the completeness of the testing. It does not indicate anything about the

effectiveness of the testing. This can be used as a criterion to stop testing.

Data to collect

Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be

calculated based on the number of items that were covered vs. the total number of items.

IX. Test case effectiveness

Definition

The extent to which test cases are able to find defects

Purpose

This metric provides an indication of the effectiveness of the test cases and the stability of the software.

Data to collect


Ratio of the number of test cases that resulted in logging remarks vs. the total number of test cases.
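
Test coverage and test case effectiveness both reduce to simple ratios; the sketch below assumes the coverage items are requirements and that the test management tool can report which test cases produced remarks. All figures are invented.

    # Hypothetical inputs.
    requirements_total = 220
    requirements_covered = 198      # requirements exercised by at least one test case

    test_cases_total = 540
    test_cases_with_remarks = 81    # test cases whose execution led to logged remarks

    test_coverage = requirements_covered / requirements_total * 100
    test_case_effectiveness = test_cases_with_remarks / test_cases_total * 100

    print(f"Test coverage:           {test_coverage:.1f}%")
    print(f"Test case effectiveness: {test_case_effectiveness:.1f}%")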

X. Defects/KLOC

Definition

The number of defects per 1,000 lines of code.

Purpose

This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to

be addressed in the next phase or the next version.

Data to collect

Ratio of the number of defects found vs. the total number of lines of code (thousands)

Formula used

Defects/KLOC = (total number of defects found / total number of lines of code) x 1,000

Uses of defects/KLOC

Defect density is used to compare the relative number of defects in various software components. This helps identify candidates for additional inspection or testing, or for possible re-engineering or replacement. Identifying defect-prone components allows the concentration of limited resources on areas with the highest potential return on investment.

Another use of defect density is to compare subsequent releases of a product to track the impact of defect reduction and quality improvement activities. Normalizing by size allows releases of various sizes to be compared. Differences between products or product lines can also be compared in this manner.
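
To make the comparison concrete, the sketch below computes defect density per component and flags the apparently defect-prone ones. The component names, figures and the 1.5x-average threshold are invented for illustration.

    # Invented per-component data: (defects found, size in KLOC).
    components = {
        "billing": (48, 12.5),
        "reporting": (9, 8.0),
        "auth": (30, 4.2),
    }

    densities = {name: defects / kloc for name, (defects, kloc) in components.items()}

    # Flag components whose density exceeds 1.5x the average (assumed rule of thumb).
    average = sum(densities.values()) / len(densities)
    for name, density in sorted(densities.items(), key=lambda kv: kv[1], reverse=True):
        flag = "  <-- defect prone" if density > 1.5 * average else ""
        print(f"{name:10s} {density:5.1f} defects/KLOC{flag}")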

17.7.2 Project test metrics

I. Workload capacity ratio

Definition

Ratio of the planned workload to the gross capacity for the total test project or phase.

Purpose

This metric helps in detecting issues related to estimation and planning. It serves as an input for estimating

similar projects as well.

Data to collect

Computation of this metric usually happens at the beginning of the phase or project. The workload is determined by multiplying the number of tasks by their norm times. Gross capacity is the planned working time; the ratio is obtained by dividing the workload by the gross capacity.
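
A minimal sketch of the computation just described, with invented task norms and capacity:

    # Planned test tasks: (number of items, norm hours per item) -- invented figures.
    tasks = {
        "write test cases": (120, 0.5),
        "execute test cases": (120, 0.75),
        "log and retest remarks": (60, 1.0),
    }

    workload = sum(count * norm for count, norm in tasks.values())  # planned workload in PH
    gross_capacity = 2 * 4 * 40.0  # e.g. 2 testers x 4 weeks x 40 h per week (assumption)

    workload_capacity_ratio = workload / gross_capacity
    print(f"Workload: {workload:.0f} PH, capacity: {gross_capacity:.0f} PH, "
          f"ratio: {workload_capacity_ratio:.2f}")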

II. Test planning performance


Definition

The planned value related to the actual value.

Purpose

Shows how well estimation was done.

Data to collect

The ratio of the actual effort spent to the planned effort

III. Test effort percentage

Definition

Test effort is the amount of work spent, in hours or days or weeks. Overall project effort is divided among

multiple phases of the project: requirements, design, coding, testing and such. This metric can be computed by dividing

the overall test effort by the total project effort.

Purpose

The effort spent in testing, in relation to the effort spent in the development activities, will give us an indication

of the level of investment in testing. This information can also be used to estimate similar projects in the future.

Data to collect

This metric can be computed by dividing the overall test effort by the total project effort.
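
Both project metrics are again simple ratios; a sketch with hypothetical effort figures:

    planned_test_effort_ph = 400.0
    actual_test_effort_ph = 470.0
    total_project_effort_ph = 1900.0

    test_planning_performance = actual_test_effort_ph / planned_test_effort_ph
    test_effort_percentage = actual_test_effort_ph / total_project_effort_ph * 100

    print(f"Test planning performance: {test_planning_performance:.2f}")  # > 1 means the estimate was too low
    print(f"Test effort percentage:    {test_effort_percentage:.1f}%")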

IV. Defect category

Definition

An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product

include functionality, usability, documentation, performance, installation and internationalization.

Purpose

This metric can provide insight into the different quality attributes of the product.

Data to collect

This metric can be computed by dividing the defects that belong to a particular category by the total number of

defects.

17.7.3 Process test metrics

I. Should be found in which phase

Definition

An attribute of the defect, indicating in which phase the remark should have been found.

Purpose

Are we able to find the right defects in the right phase as described in the test strategy? Indicates the

percentage of defects that are getting migrated into subsequent test phases.

Data to collect


Computation of this metric is done by calculating the number of defects that should have been found in

previous test phases.

II. Residual defect density

Definition

An estimate of the number of defects that may remain unresolved in the product after a given phase.

Purpose

The goal is to achieve a defect level that is acceptable to the clients. We remove defects in each of the test

phases so that few will remain.

Data to collect

This is a tricky issue. Released products have a basis for estimation. For new versions, industry standards,

coupled with project specifics, form the basis for estimation.

III. Defect remark ratio

Definition

Ratio of the number of remarks that resulted in software modification vs. the total number of remarks.

Purpose

Provides an indication of the level of understanding between the test engineers and the software engineers

about the product, as well as an indirect indication of test effectiveness.

Data to collect

The number of remarks that resulted in software modification vs. the total number of logged remarks. Valid for

each test type, during and at the end of test phases.

IV. Valid remark ratio

Definition

Percentage of valid remarks during a certain period.

Purpose

Indicates the efficiency of the test process.

Data to collect

Ratio of the total number of remarks that are valid to the total number of remarks found.

Formula used

Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next

phase or release.

V. Phase yield

Definition

Defined as the number of defects found during the phase of the development life cycle vs. the estimated

number of defects at the start of the phase.

Purpose


Shows the effectiveness of the defect removal. Provides a direct measurement of product quality; can be used

to determine the estimated number of defects for the next phase.

Data to collect

Ratio of the number of defects found to the total number of estimated defects. This can be used during a

phase and also at the end of the phase.

VI. Backlog development

Definition

The number of remarks that are yet to be resolved by the development team.

Purpose

Indicates how well the software engineers are coping with the testing efforts.

Data to collect

The number of remarks that remain to be resolved.

VII. Backlog testing

Definition

The number of resolved remarks that are yet to be retested by the test team.

Purpose

Indicates how well the test engineers are coping with the development efforts.

Data to collect

The number of remarks that have been resolved but not yet retested.

VIII. Scope changes

Definition

The number of changes that were made to the test scope.

Purpose

Indicates requirements stability or volatility, as well as process stability.

Data to collect

Ratio of the number of changed items in the test scope to the total number of items.

IX. Defect removal efficiency

Definition

The number of defects that are removed per time unit (hours/days/weeks)

Purpose

Indicates the efficiency of defect removal methods, as well as indirect measurement of the quality of the

product.

Data to collect

Computed by dividing the effort required for defect detection, defect resolution time and retesting time by the

number of remarks. This is calculated per test type, during and across test phases.
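
Following the data-to-collect description literally (detection, resolution and retest effort divided by the number of remarks removed), a sketch with invented figures is shown below. Note that this gives effort per removed defect; if your organization prefers the defects-per-time-unit reading of the definition, invert the ratio accordingly.

    # Invented effort figures for one test phase.
    detection_effort_ph = 200.0   # test execution and defect logging
    resolution_effort_ph = 120.0  # diagnosis and correction
    retest_effort_ph = 40.0       # retesting resolved remarks
    remarks_removed = 90

    effort_per_removed_defect = (detection_effort_ph + resolution_effort_ph
                                 + retest_effort_ph) / remarks_removed
    print(f"Effort per removed defect: {effort_per_removed_defect:.1f} PH")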


Defect removal profiles:

Rating: Very Low
• Automated analysis: Simple compiler syntax checking.
• Peer reviews: No peer review.
• Execution testing and tools: No testing.

Rating: Nominal
• Automated analysis: Some compiler extensions for static module and inter-module level code analysis, syntax and type-checking. Basic requirements and design consistency and traceability checking.
• Peer reviews: Well-defined sequence of preparation, review and minimal follow-up. Informal review roles and procedures.
• Execution testing and tools: Basic unit test, integration test and system test process. Basic test data management and problem tracking support. Test criteria based on checklists.

Rating: Extra High
• Automated analysis: Formalized* specification and verification. Advanced distributed processing and temporal analysis, model checking, symbolic execution. (*Consistency-checkable pre-conditions and post-conditions, but not mathematical theorems.)
• Peer reviews: Formal review roles and procedures for fixes and change control. Extensive review checklists, root cause analysis. Continuous review process improvement. User/customer involvement, Statistical Process Control.
• Execution testing and tools: Highly advanced tools for test oracles, distributed monitoring and analysis, assertion checking. Integration of automated analysis and test tools. Model-based test process management.


X. Bad fix ratio

Definition

Percentage of the number of resolved remarks that resulted in creating new defects while resolving existing

ones.

Purpose

Indicates the effectiveness of the defect-resolution process, plus indirect indications as to the maintainability of

the software.

Data to collect

Ratio of the total number of bad fixes to the total number of resolved defects. This can be calculated per test

type, test phase or time period.
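
A one-line computation, shown here with invented counts:

    resolved_defects = 140
    bad_fixes = 7  # fixes that introduced new defects while resolving existing ones

    bad_fix_ratio = bad_fixes / resolved_defects * 100
    print(f"Bad fix ratio: {bad_fix_ratio:.1f}%")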

Step 4: Define data collection procedures


• Defect data includes dates of defect detection and repair, and the number of defects discovered and

repaired per activity. Defect data should be available from the minutes of meetings, test reports, and code

headers. However, as Integrated Software has not previously kept such data, the metrics coordinator must

assume that all defects detected in reviews were repaired in the recode activity.

• Effort data includes total person hours to complete each activity and is available in project status

reports only.

• Implementation data includes the type and size of software for each project. This data is available from

the development staff.

Procedures for data collection

The metrics coordinator decides he will be responsible for collecting the necessary data. He documents the following procedures for his role.

I. Effort data

Collect copies of the project status reports from the project managers. Determine the start and completion

dates for each person for each activity and compute the person hours accordingly. Record the data on an effort form.

Total the hours over all persons per activity and record the total in a metrics database.

II. Implementation data

Collect copies of the source code from the developers. Count source lines of code using the same tool on all

projects to ensure consistency. Determine the type of software for each project. Enter the total size and type of software

for each project in the database.

III. Defect data

Gather defect data from the source code and project status reports mentioned above. Also, collect minutes of

review meetings from the developers, and weekly test reports from the test group.

Defects Detected: From the minutes of meetings and the weekly test reports, count the number of defects detected for

each week and enter the totals on a form.

Defects Repaired: From the comments in the code headers (which include dated references to the defects), tally the

number of defects repaired each week on a form. Enter the totals in a metrics database.

Finally, use the database to compute all metrics and generate the graphs and reports to summarize them.

Step 5: Assemble the Metrics Tools

The coordinator assembles the tools necessary for the metrics program. Integrated Software has a

spreadsheet on the PC that can easily manage the data collection forms. The metrics program also needs a method for counting lines of code. The development team has an editor which supports user-defined macros, so they develop a

macro in the editor to count code consistently. The algorithm implemented ignores blank and commented lines and

includes data and executable statements.
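
The editor macro itself is not reproduced in the course material; a comparable sketch in Python, counting non-blank, non-comment lines, might look like the following. The comment markers are an assumption, and block comments spanning several lines are not handled in this simplified version.

    # Simplified SLOC counter in the spirit of the macro described above:
    # blank lines and whole-line comments are ignored, everything else is counted.
    COMMENT_PREFIXES = ("//", "/*", "*", "--", "#")

    def count_sloc(path: str) -> int:
        sloc = 0
        with open(path, encoding="utf-8", errors="ignore") as source:
            for line in source:
                stripped = line.strip()
                if stripped and not stripped.startswith(COMMENT_PREFIXES):
                    sloc += 1
        return sloc

    if __name__ == "__main__":
        print(count_sloc("example.c"))  # hypothetical source file name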

Step 6: Create a Metrics Database

Integrated Software's metrics database needs to retain the information entered directly from the forms it has

used. It must also permit the analysis of this data and calculation of the metrics specified. It needs to be able to display

metrics graphically for presentations and reports. Since Integrated Software's defect tracking system keeps history data


on defects, this data will be extracted directly into a spreadsheet, where it can be used to compute and present the

defect trend metrics.

Step 7: Define the feedback mechanism

Given the small size of the company, the coordinator decides the results from the metrics analysis should be presented in a meeting, saving the effort of writing a detailed report. The graphs and metrics calculated will be prepared on overhead transparencies for presentation, and handouts of the slides will be provided. The data collected and analyzed in the metrics program will form the company's first baseline, which can be enhanced later as more projects are entered into the metrics program.

