Se 381 - lec 28 -- 34 - 12 jun12 - testing 1 of 2

Date posted: 13-Jan-2015
Uploaded by: babak
Description: Software Engineering, Lectures

SE-381 Software Engineering BEIT-V Lecture # 28 Testing (1 of 2)

Transcript
Page 1: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

SE-381 Software Engineering

BEIT-V Lecture # 28

Testing (1 of 2)

Page 2: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Testing Principles [Sch96] Ch – 5

• Purpose of Testing
  – To detect faults – as many as possible and as early as possible
  – Correction of faults at early stages is cheaper
  – To produce high-quality Software
• Testing as an independent stage!
• Testing integrated into each phase of the SDLC, plus Acceptance Testing of the final product

Page 3: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Testing Intro (Contd.)

• Verification & Validation
  – Verification refers to the process of determining whether a phase has been correctly carried out; it takes place at the end of each phase
  – Validation is the intensive evaluation process that takes place just before the product is delivered to the client, to determine whether the product as a whole satisfies its Specifications
    » Conforms to the IEEE Software Engineering Glossary, IEEE 610.12, 1990

Page 4: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Verification & Validation

• Barry Boehm 1984
  – Verification: Are we building the Product Right?
  – Validation: Are we building the Right Product?
  – The former concentrates on the process, the latter on the product.
  – V&V and Testing are used interchangeably by different texts, but Schach uses Testing with a broader meaning, encompassing the Process as well as the Product

Page 5: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Types of Testing

• Non-Execution-Based Testing
  – Applicable to earlier phases as well as the Coding phase
  – Mostly comprises Walkthroughs and Inspections of Documents, Design and Code

• Execution-Based Testing
  – Corresponds to the phases when code is available
  – Applicable to the Implementation, Integration and Maintenance phases

Page 6: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Software Quality Assurance

• In Engineering Disciplines
  – Quality is "Adherence to Specifications"
• Accordingly, the Quality of Software is the extent to which the product satisfies the Specifications
  – Its achievement needs 'Effort' and 'Mechanism'
  – The SQA group should be amongst the 'Strongest groups' in the Software Development setup, like the Quality Control Department in Industry

Page 7: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

SQA Group

• MUST ensure
  – The correct Product is produced
  – The Product has been produced by following the right process
  – At the end of each phase, the group should verify that the produced deliverables conform to the previous and next stage requirements
  – That the Software Process and the organization's capability to produce quality software are improving

Page 8: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

SQA Group (Contd.)

– Should comprise skilled, technical, senior members with expertise in varied areas of SD
– The number of its members should be proportional to the scope and amount of SD undertaken by the setup
– For small setups of <4 developers a separate group will be an overhead, so the individuals should ensure SQA of the parts developed / authored by others
– A separate SQA Group costs, but brings in more benefits (in terms of repute and more work) resulting from the delivery of High Quality Software

Page 9: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

SQA Group (Contd.)

• Managerial Independence
  – The group should directly report to the CE/CEO of the setup
  – Since it will have to make important, as well as seemingly 'detrimental', decisions – e.g. "Buggy in-time delivery" versus "Bug-free (or less buggy) delayed delivery" – and resolve 'Conflicts of Interest' among different groups, the SQA Group should have Managerial Independence

Page 10: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Non-Execution-Based Testing

– Mostly comprises document, Design and Code reviews
– Should be done by a Team consisting of members, most of whom were not involved in the respective phase's development
– Reviewing should be done independently by the team members, with the intention to find the maximum number of faults
– Can be carried out in two ways:
  • Walkthroughs
  • Inspections or Technical / Formal Reviews

Page 11: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inspections Versus Walkthroughs

Inspections
• More formal
• Checklists guide the reviewers
• 5-step method
• Takes longer than a Walkthrough
• A more powerful and cost-effective tool

Walkthroughs
• Informal
• 2-step method, with
  – Preparation
  – Team analysis of the document

Page 12: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Walkthroughs

• The walkthrough team should be of 4-6 members, including
  • A representative from the developers involved in the respective (to-be-reviewed) phase
  • The manager from the phase under review
  • A representative from the next phase – those who have to use this deliverable
  • A representative from the SQA Group (Chair)
  • A representative from the Client
– The professionally experienced can find important faults
– Senior members are more effective

Page 13: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Managing Walkthroughs

• The documents/material should be circulated in advance
• Each reviewer, after detailed study, should prepare two lists of:
  1. Possible faults
  2. Items that are incomprehensible (unclear)
• The aim is to 'detect and record' the faults, not to 'correct' them
• Walkthrough meetings should be called and chaired by an SQA group member and should not last more than two hours

Page 14: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Managing Walkthroughs (Contd.)

• The Phase (under review) Representative guides the members through the document; this can be either
  – Participant-Driven
    • The prepared lists (1 & 2) are presented
    • The Phase Rep responds to, justifies or clarifies each item
    • After the reply, each item is graded as a 'Fault' or a 'Reviewer's Mistake'
  – Document-Driven
    • The Phase Rep walks the participants through the document
    • Reviewers interrupt, picking items from their own prepared lists, or triggered by others' responses or the presentation

Page 15: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Managing Walkthroughs (Contd.)

• The Document-Driven approach
  – is widely used,
  – is applicable to Specification, Design, Plan and Code walkthroughs,
  – prompts more faults
    » IEEE Standard for Software Reviews, IEEE 1028, 1988
• Performance at walkthroughs should NOT be used for 'Evaluation' of team members
• Walkthrough meetings should not turn into 'point-scoring' sessions – otherwise they will forfeit the aim of fault finding

Page 16: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inspections (or Reviews w.r.t. [Jal05])

• Proposed by Fagan in 1976 for testing of Design and Code
  – A more formal approach
  – Comprises 5 steps
• Overview – of the document/material by the Phase Rep; after the overview, it is distributed among the reviewers
• Preparation – Reviewers list the faults, aided by the provided Checklist, and note their frequency, type etc.
• Inspection – One reviewer walks through the material with the other reviewers, ensuring every item is covered. All identified faults are compiled by the Inspection Team leader within the day
• Rework – The Phase Rep should resolve all the faults and problems and document their solutions

Page 17: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inspections or Reviews (Contd.)

• Follow-up – The Team Leader ensures that every single item raised is satisfactorily resolved, either by fixing the document or by clarifying the item
  – If more than 5% of the material inspected is reworked, the team meets again for a 100% re-inspection
  – The Inspection Team should have at least 4 members – a Moderator/Leader from the SQAG, the Phase Rep, the Next Phase Rep, and the Client's Rep
  – Team members should take the different roles of Reader, Recorder, Moderator etc.

Page 18: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

CHECKLIST FOR HIGH LEVEL OR FUNCTION-ORIENTED DESIGN
(each item is answered Yes/No)

1. Is each of the functional requirements taken into account?
2. Are there analyses to demonstrate that performance requirements can be met?
3. Are all assumptions explicitly stated, and are they acceptable?
4. Are there any limitations or constraints on the design beyond those in the requirements?
5. Are external specifications of each module completely specified?
6. Have exceptional conditions been handled?
7. Are all the data formats consistent with the requirements?
8. Are the operator and user interfaces properly addressed?
9. Is the design modular, and does it conform to local standards?
10. Are the sizes of data structures estimated?
11. Are provisions made to guard against overflow?
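A checklist like the one above can be held as data so that a review's Yes/No answers are recorded and the open items reported mechanically. A minimal sketch in Python; the names (`CHECKLIST`, `open_items`) are illustrative, not from the lecture, and only the first three items are shown:

```python
# Hypothetical sketch: a review checklist held as data, so an inspection
# can record Yes/No answers and report the unresolved items.
CHECKLIST = [
    "Is each of the functional requirements taken into account?",
    "Are there analyses to demonstrate that performance requirements can be met?",
    "Are all assumptions explicitly stated, and are they acceptable?",
]

def open_items(answers):
    """Return the checklist items that were answered 'No' (False)."""
    return [item for item, ok in zip(CHECKLIST, answers) if not ok]

flags = [True, False, True]   # one reviewer's Yes/No answers
print(open_items(flags))      # the single unresolved item
```

Holding the checklist as data also lets fault statistics be aggregated across inspections, which feeds the metrics discussed later.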

Page 19: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

CHECKLIST FOR DETAILED DESIGN
(each item is answered Yes/No)

1. Does each of the modules in the system design exist in detailed design?
2. Are there analyses to demonstrate that the performance requirements can be met?
3. Are all the assumptions explicitly stated, and are they acceptable?
4. Are all relevant aspects of system design reflected in detailed design?
5. Have the exceptional conditions been handled?
6. Are all the data formats consistent with the system design?
7. Is the design structured, and does it conform to local standards?
8. Are the sizes of data structures estimated?

Page 20: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

CHECKLIST FOR DETAILED DESIGN (Cont.)

9. Are provisions made to guard against overflow?
10. Is each statement specified in natural language easily codable?
11. Are the loop termination conditions properly specified?
12. Are the conditions in the loops OK?
13. Are the conditions in the if statements correct?
14. Is the nesting proper?
15. Is the module logic too complex?
16. Are the modules highly cohesive?

Page 21: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inspection Benefits

• Fault statistics recorded add to the historical data and can identify the gray areas in SD and weaknesses in the organization
• 70-80% of all faults can be detected using inspections before Module Testing is started
• 40% fewer faults were detected in the final product when Inspections were used rather than Walkthroughs
  » Fagan 1976
• Programmers' productivity is increased and the resources needed are reduced by 25%, despite the Inspection team's overhead

Page 22: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inspection Metrics

• Metrics are to measure, monitor and ultimately control a process – here, to show the effectiveness of the Inspection process
  – Fault Density – the number of faults per page of Specifications or Design Document, or per KLOC (1000 lines of code) of Code inspected
  – Fault Severity – a further categorization of faults into Major (those which lead to program failure or crash) and Minor (those which are not Major) faults per unit of material (page or KLOC)
  – Fault Detection Efficiency – the number of Major/Minor faults detected per person-hour
  – Fault Detection Rate – the number of Major/Minor faults detected per hour
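These metric definitions are simple ratios, so they can be sketched directly. The figures below (24 faults, 40 pages, a 4-person team meeting for 2 hours) are made-up example numbers, not from the lecture:

```python
# Minimal sketch of the inspection metrics defined above.

def fault_density(faults, pages):
    """Faults per page of inspected material."""
    return faults / pages

def detection_efficiency(faults, person_hours):
    """Faults found per person-hour of inspection effort."""
    return faults / person_hours

def detection_rate(faults, meeting_hours):
    """Faults found per hour of inspection meeting time."""
    return faults / meeting_hours

# Example: 24 faults found in a 40-page design document by a 4-person
# team meeting for 2 hours (8 person-hours in total).
print(fault_density(24, 40))        # 0.6 faults per page
print(detection_efficiency(24, 8))  # 3.0 faults per person-hour
print(detection_rate(24, 2))        # 12.0 faults per hour
```

Tracked over many inspections, such ratios show whether the inspection process itself is improving, which is the stated purpose of the metrics.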

Page 23: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Execution-Based Testing

• Testing is to demonstrate the presence of faults/bugs or errors
• A successful test case is (like a 'test' in medical terminology) one which locates or identifies a bug in the product

"Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence"

[Prof Dijkstra 1972]

Page 24: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

That is, if a product is executed with test data and the output is WRONG, then the product definitely contains a fault; but if the output is CORRECT, the product may still contain a fault.

Faults are the same as 'bugs' and can relate to any phase of the SDLC.

Errors are usually programmer-introduced faults.

A Failure is an unacceptable effect or behavior, under permissible operating conditions, that occurs as a consequence of a fault.
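Dijkstra's point can be seen in a tiny made-up example: the function below contains a fault, yet both of its test cases pass, so correct output does not prove the product is fault-free:

```python
# A contrived function with a fault that two passing tests fail to expose.

def absolute(x):
    if x > 0:
        return x
    return x   # BUG: should be -x for negative inputs

# Two test cases whose expected outputs happen to be met:
assert absolute(5) == 5
assert absolute(0) == 0

# A differently chosen input exposes the fault:
print(absolute(-3))   # prints -3, but the specification demands 3
```

The two passing assertions are "unsuccessful" test cases in the slide's sense; only the third input is a successful one, because it locates the bug.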

Page 25: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

What Should be Tested?

• Execution-Based Testing is a process of inferring certain behavioral properties of a product based, in part, on the results of executing the product in a known environment with selected inputs
  [Goodenough 1979]
  – Three key elements
    • Inferential Process
    • Known Environment(s)
    • Selected Input(s)

Page 26: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Inferential Process

• Testing is an inferential process – the product is run with known inputs, expecting a desired output.
• The tester has to infer what, if anything, is wrong with the product.
• The tester has the test cases, bug reports, code and its executable at hand, and has to identify the fault(s)

Page 27: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Known Environment(s)

• The Environment consists of the Hardware, OSs, Compilers and all other programs with which the Product is to coexist and interact, or in whose presence it is to execute
• The Environment specifically refers to the state of these components when a fault occurred
• The faults (usually attributed) to the Product may not be due to the Product, but produced by some other component of the environment running at that time.

Page 28: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Selected Input(s)

– These may satisfy the input specifications but may not produce the desired output
– In Real-Time (RT) systems, the exact inputs may not be easy to feed to a specific Product, as it will be one of the components of the system, whose input will be the output of some other component and may involve different A-D/D-A conversions.
– For RT systems' testing, Simulators are used; they provide the needed environment in which the system is to run and hence the input data. Yet this will be a 'simulated' environment, different from the one in which the Product has to operate

Page 29: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Testing - Behavioral Properties

• Testing is to confirm whether the Product functions correctly with respect to its behavioral properties, which are
  – Utility
  – Reliability
  – Robustness
  – Performance and
  – Correctness
  [Goodenough 1979]

That is, it is for these behavioral properties that we test the Product.

Page 30: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Utility

Utility is the extent to which users' needs are met when a correct Product is used under the conditions permitted by its Specifications. Here we test for:

• Ease of use, i.e. user-friendliness
• Functionality – whether the Product performs what was desired
• Cost-effectiveness – otherwise it will not be purchased, so it ought to be rightly/affordably priced

Page 31: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Reliability

Reliability is the measure of the frequency and criticality of Product failure. We use different metrics to measure reliability; these are:

• Mean time between failures – how often the product fails
• Failure severity – how 'hard' the effects of that failure can be
• Mean time to repair – how long it takes on average to repair; more important is how long it takes to correct the results of the failure. E.g. a crash (or failure) of an email browser might delete all the emails in the 'Inbox' folder, or can damage the 'Address Book', or a Db front-end might wipe out the database on which it was operating. So the repair of the Product will include the correction of allied problems and would surely take much longer.
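The first and third reliability metrics can be computed directly from a failure log. A minimal sketch; the failure timestamps and repair times below are a hypothetical log, and the function names are illustrative:

```python
# Sketch: MTBF and MTTR computed from a hypothetical failure log.

failure_times = [100, 250, 400, 700]   # hours of operation at each failure
repair_hours  = [2, 5, 1, 8]           # time spent repairing each failure

def mean_time_between_failures(times):
    """Average gap, in hours, between successive failures."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

def mean_time_to_repair(repairs):
    """Average repair time per failure, in hours."""
    return sum(repairs) / len(repairs)

print(mean_time_between_failures(failure_times))  # 200.0 hours
print(mean_time_to_repair(repair_hours))          # 4.0 hours
```

As the slide notes, MTTR as logged here understates the real cost when a failure also corrupts data: correcting the allied damage can take far longer than the repair itself.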

Page 32: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Robustness

Robustness concerns the response of the Product to any input; this response needs to be reasonable and meaningful. It is essentially a function of a number of factors: the range of operating conditions, the possibility of unacceptable results for valid input, and the acceptability of results for invalid input.

• A Robust Product should
  – NOT yield unacceptable results when the input satisfies its specifications
  – NOT crash when the Product is given invalid input
  – Respond to / guide the user on invalid input, e.g. characters such as &$*? entered for a string command, say while reading a filename
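The third requirement can be sketched as input validation that guides the user instead of crashing. The allowed character set and the function name below are assumptions made for illustration:

```python
# Sketch: robust filename input that rejects invalid characters with a
# helpful message rather than crashing.
import re

VALID_NAME = re.compile(r"^[A-Za-z0-9._-]+$")   # assumed allowed characters

def read_filename(raw):
    """Return (ok, message): accept a sane filename, guide the user otherwise."""
    if not raw:
        return False, "Filename must not be empty."
    if not VALID_NAME.match(raw):
        return False, "Filename may only use letters, digits, '.', '_' or '-'."
    return True, raw

print(read_filename("report.txt"))   # accepted
print(read_filename("&$*?"))         # rejected, with guidance for the user
```

The same product is exercised with both valid and invalid input; robustness testing deliberately includes the invalid cases.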

Page 33: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Performance

Performance is the extent to which the Product meets its constraints with respect to response time or space requirements. Embedded systems, used in industry, avionics, PDAs, palmtop devices, mobiles etc., have their own storage, size and compute-power constraints. Here we test the Product according to the application domain:

• RT systems need to perform within a response-time range, otherwise the produced results would be useless.
• Size/display constraints may lead to smaller storage/memory sizes and minimal instruction sets, so the product needs to be developed within those constraints
• Critical information needs to be received and processed in RT.
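A response-time constraint of the kind described above can be checked by timing the operation against a deadline. The 0.1-second budget and the stand-in workload below are illustrative assumptions only:

```python
# Sketch: checking that an operation meets an assumed response-time budget.
import time

def operation():
    return sum(range(100_000))       # stand-in for the real computation

DEADLINE_SECONDS = 0.1               # assumed response-time constraint

start = time.perf_counter()
result = operation()
elapsed = time.perf_counter() - start

print(result)
print(elapsed < DEADLINE_SECONDS)    # True if the constraint was met this run
```

In a real RT product the deadline comes from the specification, and such timing checks are run on the target hardware, since host-machine timings do not transfer.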

Page 34: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Correctness

Correctness is the extent of conformance of the product to the specifications, independent of its use of computing resources, when operated under permitted conditions.

[Goodenough 1979]

• Simply, if the product is provided with all the resources it needs, then for a given valid input it should provide the valid output, i.e. output according to the output specifications
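Correctness testing can be sketched as checking the product's output against an output specification over valid inputs. The product here (integer square root) and its spec are made up for illustration:

```python
# Sketch: checking a product's output against its output specification.
import math

def isqrt_product(n):
    """Product under test: integer square root."""
    return math.isqrt(n)

def meets_spec(n, out):
    """Output specification: out is the largest integer with out*out <= n."""
    return out * out <= n < (out + 1) * (out + 1)

valid_inputs = [0, 1, 2, 15, 16, 99]
print(all(meets_spec(n, isqrt_product(n)) for n in valid_inputs))  # True
```

Note the limit the Dijkstra quote earlier imposes: passing every selected input shows conformance only on those inputs, not correctness over the whole input domain.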

Page 35: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Formal Mathematical Proofs

• Proofs that a program is correct w.r.t. its specification are expensive to develop, so they are used in safety-, security- and life-critical systems
• They use the knowledge of the formal semantics of the programming languages and construct theories that relate the program to its formal specs
• These theories are proven mathematically, often using complex theorem-proving programs
• They need specialist skills and have a very high cost
• A program proven correct also needs to be tested and verified. (Sommerville, SE 6th Ed, 2000; p436)

Page 36: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Fault Handling Techniques

Fault Handling
• Fault Avoidance
  – Design Methodology
  – Reviews
  – Configuration Management
  – Verification
• Fault Detection
  – Testing
    • Component Testing
    • Integration Testing
    • System Testing
  – Debugging
    • Correctness Debugging
    • Performance Debugging
• Fault Tolerance
  – Atomic Transactions
  – Modular Redundancy

Page 37: Se 381 -  lec 28 -- 34 - 12 jun12 - testing 1 of 2

Reference for This Lecture

• Pankaj Jalote (2004/2005), An Integrated Approach to Software Engineering, 2nd/3rd Edition; Narosa Publishing House, New Delhi – Chapter 5/10, Testing, pp. 403-471 / 409-471
• Stephen R. Schach (1996), Classical and Object-Oriented Software Engineering, 3rd Ed.; Irwin-McGraw Hill, Boston – Ch. 5, Testing Principles, pp. 109-138
• Rajib Mall (2005), Fundamentals of Software Engineering, 2nd Ed.; Prentice-Hall of India, New Delhi – Ch. 10, Coding and Testing, pp. 248-279

