
Software Testing Week 8 lectures 1 and 2

Page 1: Software Testing Week 8 lectures 1 and 2

Software Testing

Week 8 lectures 1 and 2

Page 2: Software Testing Week 8 lectures 1 and 2

What is quality?
• The definition of the term quality is an issue.
• On the meaning of Quality, a surprising number of people still think software quality is simply the absence of errors.

• Dictionary definitions are too vague to be of much help

• The only relevant definition offered by the Oxford English Dictionary (Oxford, 1993), for instance, is peculiar excellence or superiority

• Note here that quality cannot be discussed for something in isolation: comparison is intrinsic.

Page 3: Software Testing Week 8 lectures 1 and 2

Quality
• Many software engineering references define software quality as
  – correct implementation of the specification
• Such a definition can be used during product development, but it is inadequate for facilitating comparisons between products.
• Standards organisations have tended to refer to meeting needs or expectations, e.g. the ISO defines quality as the
  – totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs.

Page 4: Software Testing Week 8 lectures 1 and 2

Quality

• IEEE defines quality as
  – The degree to which a system, component, or process meets specified requirements, and
  – The degree to which a system, component, or process meets customer or user needs or expectations.

Page 5: Software Testing Week 8 lectures 1 and 2

Quality definitions
• Quality has been variously defined as:
• Excellence (Socrates, Plato, Aristotle)
• Value (Feigenbaum 1951, Abbot 1955)
• Conformance to specification (Levitt 1972, Gilmore 1974)
• Fit for purpose (Juran 1974)
• Meeting or exceeding customers’ expectations (Gronroos 1983, Parasuraman, Zeithaml & Berry 1985)
• Loss avoidance (Taguchi 1989)

Page 6: Software Testing Week 8 lectures 1 and 2

Quality definitions
• In short, these six definitions show different aspects of quality.
• All can be applied to software development.
• We often find our products marketed for their excellence. We want to delight our customers with our products and to build a long-term business relationship.
• Many countries’ trade laws oblige us to sell the product only when fit for the purpose to which our customer tells us they will put it.

Page 7: Software Testing Week 8 lectures 1 and 2

Quality
• When purchasing managers look at our software, they may judge comparable products on value, knowing that this may stop them buying the excellent product.
• In managing the software development, efficient and effective development processes together help avoid losses through rework and reduce later support and maintenance budgets.

• In testing, we work to see that the product conforms to specification.

Page 8: Software Testing Week 8 lectures 1 and 2

Testing Categories
• Classifications of testing with specific goals. The testing categories are:
• Functional testing
• Procedures testing
• Operations testing
• Documentation testing
• Performance testing

Page 9: Software Testing Week 8 lectures 1 and 2

Software testing
• is the process used to help identify the correctness, completeness, security, and quality of developed software.
• Testing is a process of executing a program or application with the intent of finding errors.
  – With that in mind, testing can never completely establish the correctness of arbitrary computer software.
  – In other words, testing is criticism or comparison, that is comparing the actual value with an expected one.
  – An important point is that software testing should be distinguished from the separate discipline of software quality assurance, which encompasses all business process areas, not just testing.

Page 10: Software Testing Week 8 lectures 1 and 2

Approaches
• There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure.
• One definition of testing is:
  – "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester.

Page 11: Software Testing Week 8 lectures 1 and 2

Testing

• Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing usually connotes the dynamic analysis of the product: putting the product through its paces.

• The quality of the application can, and normally does, vary widely from system to system, but some of the common quality attributes include:

• reliability, efficiency, portability, maintainability and usability

Page 12: Software Testing Week 8 lectures 1 and 2

Faults and Failures
• In general, software engineers distinguish software faults from software failures.
• When software does not operate as it is intended to do, a software failure is said to occur.

• Software failures are caused by one or more sections of the software program being incorrect. Each of these incorrect sections is called a software fault. The fault could be as simple as a wrong value. A fault could also be complete omission of a decision in the program.

Page 13: Software Testing Week 8 lectures 1 and 2

Faults and Failures
• A failure can also be described as an error in the correctness of the semantics of a computer program.

• A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of computer software executes on the CPU

• A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended

Page 14: Software Testing Week 8 lectures 1 and 2

Faults and Failures

• Faults have many causes, including misunderstanding of requirements, overlooking special cases, using the wrong variable, misunderstanding of the algorithm, and even typing mistakes.

• Software that can cause serious problems if it fails is called safety-critical software. Many applications in aircraft, medicine, nuclear power plants, and transportation involve such software.

Page 15: Software Testing Week 8 lectures 1 and 2

Testing is a BIG issue
• The number of potential test cases is huge. For example, in the case of a simple program that multiplies two integer numbers:
• if each integer is a 32-bit number (a common size for the computer representation), then there are 2^32 possible values for each number
• This means the total number of possible input combinations is 2^64, which is more than 10^19
• If a test case can be done each microsecond (10^-6 s), then it will take hundreds of thousands of years to try all of the possible test cases. Trying all possible test cases is called exhaustive testing and is usually not a reasonable approach because of the size of the task.
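
To make the arithmetic concrete, here is a small sketch of the calculation (the one-microsecond-per-test-case figure is the assumption from the bullet above):

```python
# Back-of-the-envelope cost of exhaustively testing a 32-bit x 32-bit multiply.
values_per_operand = 2 ** 32            # possible values of one 32-bit integer
total_cases = values_per_operand ** 2   # all input combinations: 2**64, about 1.8e19

seconds_per_case = 1e-6                 # assume one test case per microsecond
total_seconds = total_cases * seconds_per_case
total_years = total_seconds / (60 * 60 * 24 * 365)

print(f"test cases:   {total_cases:.3e}")    # ~1.845e+19
print(f"years needed: {total_years:,.0f}")   # roughly 585,000 years
```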

Page 16: Software Testing Week 8 lectures 1 and 2

Testing and SQA
• Software testing may be viewed as a sub-field of software quality assurance but typically exists independently (and there may be no SQA areas in some companies).
• In SQA, software process specialists and auditors take a broader view on software and its development.
• They examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver software faster.

Page 17: Software Testing Week 8 lectures 1 and 2

Confidence
• Regardless of the methods used or level of formality involved, the desired result of testing is a level of confidence in the software so that the developers are confident that the software has an acceptable defect rate.
• What constitutes an acceptable defect rate depends on the nature of the software.
  – An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.

Page 18: Software Testing Week 8 lectures 1 and 2

Problems
• A problem with software testing is that the number of defects in a software product can be very large and the number of configurations of the product larger still.
• Bugs that occur infrequently are difficult to find in testing.
  – A rule of thumb (heuristic) is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time.

– This has severe consequences for project developers trying to write long-lived reliable software

Page 19: Software Testing Week 8 lectures 1 and 2

Common Practices
• A common practice of software testing is that it is performed by an independent group of testers after finishing the software product and before it is shipped to the customer.

• This practice often results in the testing phase being used as project buffer to compensate for project delays

• Another practice is to start software testing at the same moment the project starts and to continue it as a continuous process until the project finishes.

Page 20: Software Testing Week 8 lectures 1 and 2

Common Practice

• A further common practice is for test suites to be developed during technical support escalation procedures

• Such tests are then maintained in regression testing suites to ensure that future updates to the software don't repeat any of the known mistakes

Page 21: Software Testing Week 8 lectures 1 and 2

Common belief

It is commonly believed that:

the earlier a defect is found the cheaper it is to fix

Page 22: Software Testing Week 8 lectures 1 and 2

Agile Systems

• Some emerging software disciplines, such as extreme programming (XP) and the agile software development movement, adhere to a “test-driven software development” (TDD) model.

• In this process, unit tests are written first by the programmers, before any application code
  – (often with pair programming in the extreme programming methodology).
• Then the code is run against the unit tests.

Page 23: Software Testing Week 8 lectures 1 and 2

Unit Tests in XP

• These tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites.

• The test suites are continuously updated as new failure conditions and corner cases are discovered and they are integrated with any regression tests that are developed

Page 24: Software Testing Week 8 lectures 1 and 2

Unit Tests

• The testing of the smallest single components of the software. Testing is to determine that the individual program modules perform to specification.
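
As a minimal illustration (a sketch only: the `multiply` function stands in for a real module, and Python's built-in unittest framework is used here just as an example), a unit test exercises one small component against its specification:

```python
import unittest

# Hypothetical unit under test: the smallest testable component.
def multiply(a: int, b: int) -> int:
    return a * b

class MultiplyTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(multiply(3, 4), 12)

    def test_zero(self):
        self.assertEqual(multiply(0, 99), 0)

    def test_negative_numbers(self):
        self.assertEqual(multiply(-2, 5), -10)

if __name__ == "__main__":
    unittest.main()
```

In the TDD style described above, the test class would be written first, fail, and only then would the `multiply` implementation be added until the tests pass.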

Page 25: Software Testing Week 8 lectures 1 and 2

Unit tests

• In Agile development systems, unit tests are maintained along with the rest of the software source code and generally integrated into the build process
  – (with inherently interactive tests being relegated to a partially manual build acceptance process).

Page 26: Software Testing Week 8 lectures 1 and 2

Test Harness

• The software, tools, samples of data input and output and configurations are all referred to collectively as a test harness.

Page 27: Software Testing Week 8 lectures 1 and 2

Code and Fix

• Typical approach to small application development

• Can be successful in the hands of an experienced developer when the “fixes” are few in number.

• Students take this approach most of the time

Page 28: Software Testing Week 8 lectures 1 and 2

Smoke Testing
• A quick-and-dirty test that the major functions of a piece of software work.
• Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Page 29: Software Testing Week 8 lectures 1 and 2

White box testing, clear box testing, glass box testing or structural testing

• is used in software testing to check that the outputs of a program, given certain inputs, conform to the structural specification of the program.

• The term white box (or glass box) indicates that testing is done with a knowledge of the code used to execute certain functionality

• For this reason, a programmer is usually required to perform white box tests

Page 30: Software Testing Week 8 lectures 1 and 2

White box

• Often, multiple programmers will write tests based on certain code, so as to gain varying perspectives on possible outcomes.

• Because such tests are written with extensive knowledge of the internal workings, changing the design often results in breaking the tests. This adds financial resistance to the change process, and thus buggy products may stay buggy.

Page 31: Software Testing Week 8 lectures 1 and 2

Black Box Testing
• An approach to testing used by analysts and users alike where inputs and outputs of functions are known, but internal code structure is irrelevant.

• A form of testing which identifies various inputs and maps them to specific output behaviors, without targeting specific software components or portions of the code.

Page 32: Software Testing Week 8 lectures 1 and 2

Black Box in other words
• In black box testing the test engineer only accesses the software through the same interfaces that the customer or user would, or possibly through remotely controllable, automation interfaces that connect another computer or another process into the target of the test.

• For example a test harness might push virtual keystrokes and mouse or other pointer operations into a program through any inter-process communications mechanism, with the assurance that these events are routed through the same code paths as real keystrokes and mouse clicks.

Page 33: Software Testing Week 8 lectures 1 and 2

Grey Box

• In recent years the term grey box testing has come into common usage.

• The typical grey box tester is permitted to set up or manipulate the testing environment, like seeding a database, and can view the state of the product after their actions, like performing a SQL query on the database to be certain of the values of columns.
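
A minimal sketch of that grey box style, using Python's built-in sqlite3 module (the `users` table and the `register_user` function are hypothetical stand-ins for the product under test):

```python
import sqlite3

# Hypothetical code under test: writes a row into the users table.
def register_user(conn: sqlite3.Connection, name: str, email: str) -> None:
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
    conn.commit()

# Grey box test: seed the database, exercise the product, then query the
# database directly to be certain of the values of the columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")   # set up the environment
register_user(conn, "alice", "alice@example.com")            # drive the product

row = conn.execute("SELECT email FROM users WHERE name = ?", ("alice",)).fetchone()
assert row == ("alice@example.com",), f"unexpected column value: {row}"
print("grey box check passed")
```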

Page 34: Software Testing Week 8 lectures 1 and 2

Grey Box
• It is used almost exclusively by client-server testers or others who use a database as a repository of information.
• It can also apply to a tester who has to manipulate XML files (DTD or an actual XML file) or configuration files directly.
• It can also be used to describe testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results.

Page 35: Software Testing Week 8 lectures 1 and 2

Defect density
• One of the easiest ways to judge whether a program is ready to release is to measure its defect density: the number of defects per line of code.
• Suppose that the first version of your product consisted of 100,000 lines of code.
• You detected 650 defects prior to the software’s release, and 50 more defects were reported after the software was released.
• The software therefore had a lifetime defect count of 700 defects and a defect density of 7 defects per thousand lines of code (KLOC).
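
The arithmetic behind those figures, as a quick sketch:

```python
# Lifetime defect density for the example above.
lines_of_code = 100_000
defects_before_release = 650
defects_after_release = 50

lifetime_defects = defects_before_release + defects_after_release   # 700
defects_per_kloc = lifetime_defects / (lines_of_code / 1000)        # 7.0 per KLOC

print(f"lifetime defects: {lifetime_defects}, density: {defects_per_kloc} defects per KLOC")
```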

Page 36: Software Testing Week 8 lectures 1 and 2

Walkthrough
• A review of requirements, designs or code characterised by the author of the object under review guiding the progression of the review.

Page 37: Software Testing Week 8 lectures 1 and 2

System Test

• The process of testing an integrated system to verify that it meets specified requirements. Testing to determine that the results generated by the enterprise's information systems and their components are accurate and the systems perform to specification.

Page 38: Software Testing Week 8 lectures 1 and 2

Static Test
• An analysis of the form, structure and correctness of a work product without executing the product. The opposite of a dynamic test.

Page 39: Software Testing Week 8 lectures 1 and 2

Acceptance Test
• A formal test usually performed by an end-user or customer to determine whether a system or software component is working according to its requirements and design specifications.

Page 40: Software Testing Week 8 lectures 1 and 2

All-pairs testing or pairwise testing

• is a combinatorial testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.

• Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs.

• The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

Page 41: Software Testing Week 8 lectures 1 and 2

All pairs
• The reasoning behind all-pairs testing is this:

– The simplest bugs in a program are generally triggered by a single input parameter.

– The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing.

– Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing..

– .. which has as its limit the exhaustive testing of all possible inputs.

Page 42: Software Testing Week 8 lectures 1 and 2

All pairs
• Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less exhaustive methods which fail to exercise all possible pairs of parameters.

• Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, fuzz testing, and code review.

Page 43: Software Testing Week 8 lectures 1 and 2

Fuzz testing
• is often used in large software development projects that perform black box testing.
• These usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit-to-cost ratio.

• Fuzz testing is also used as a gross measurement of a large software system's quality.

• The advantage here is that the cost of generating the tests is relatively low. For example, third party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs.

• Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find and even careful human test designers would fail to create tests for.

Page 44: Software Testing Week 8 lectures 1 and 2

Fuzz
• However, fuzz testing is not a substitute for exhaustive testing or formal methods:
• It can only provide a random sample of the system's behavior and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly.

• Thus, fuzz testing can only be regarded as a proxy for program correctness, rather than a direct measure, with fuzz test failures actually being more useful as a bug-finding tool than fuzz test passes as an assurance of quality.

Page 45: Software Testing Week 8 lectures 1 and 2

Fuzz testing methods
• As a practical matter, developers need to reproduce errors in order to fix them. For this reason, almost all fuzz testing makes a record of the data it manufactures, usually before applying it to the software, so that if the computer fails dramatically, the test data is preserved.

• Modern software has several different types of inputs:

• Event driven inputs are usually from a graphical user interface, or possibly from a mechanism in an embedded system.

• Character driven inputs are from files, or data streams.
• Database inputs are from tabular data, such as relational databases.

Page 46: Software Testing Week 8 lectures 1 and 2

Fuzz Testing forms
• There are at least two different forms of fuzz testing:

– Valid fuzz attempts to assure that the random input is reasonable, or conforms to actual production data.

– Simple fuzz usually uses a pseudo random number generator to provide input.

– A combined approach uses valid test data with some proportion of totally random input injected.

• By using all of these techniques in combination, fuzz-generated randomness can test the un-designed behavior surrounding a wider range of designed system states.

• Fuzz testing may use tools to simulate all of these domains.
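
A minimal "simple fuzz" sketch in that spirit: pseudo-random input drives a parser, and every input is logged before it is applied so a failure can be reproduced later. The `parse_record` function is a hypothetical target; a real harness would feed the actual system under test.

```python
import random

# Hypothetical target: a small parser that may mishandle unexpected input.
def parse_record(data: bytes) -> dict:
    name, _, age = data.partition(b",")
    return {"name": name.decode("ascii"), "age": int(age.decode("ascii"))}

random.seed(1234)            # fixed seed so the whole run is reproducible
failures = []

with open("fuzz_inputs.log", "a") as log:
    for _ in range(1000):
        # Simple fuzz: purely pseudo-random bytes of random length.
        data = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
        log.write(data.hex() + "\n")   # record each input *before* applying it
        log.flush()
        try:
            parse_record(data)
        except Exception as exc:       # any unhandled exception is a finding
            failures.append((data, repr(exc)))

print(f"{len(failures)} of 1000 random inputs raised unhandled exceptions")
```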

Page 47: Software Testing Week 8 lectures 1 and 2

Event-driven fuzz
• Normally this is provided as a queue of data structures. The queue is filled with data structures that have random values.
• The most common problem with an event-driven program is that it will often simply use the data in the queue, without even crude validation.

• To succeed in a fuzz-tested environment, software must validate all fields of every queue entry, decode every possible binary value, and then ignore impossible requests.

• One of the more interesting issues with real-time event handling is that if error reporting is too verbose, simply providing error status can cause resource problems or a crash.

• Robust error detection systems will report only the most significant, or most recent error over a period of time.

Page 48: Software Testing Week 8 lectures 1 and 2

Character-driven fuzz
• Normally this is provided as a stream of random data. The classic source in UNIX is the random data generator.

• One common problem with a character driven program is a buffer overrun, when the character data exceeds the available buffer space.

• This problem tends to recur in every instance in which a string or number is parsed from the data stream and placed in a limited-size area.

Page 49: Software Testing Week 8 lectures 1 and 2

Rule of thumb (heuristic)
• A heuristic evaluation is a usability testing method for computer software that helps to identify usability problems in the user interface (UI) design.

• It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics").

• These evaluation methods are now widely taught and practiced in the New Media sector, where UIs are often designed in a short space of time on a budget that may restrict the amount of money available to provide for other types of interface testing.

Page 50: Software Testing Week 8 lectures 1 and 2

Heuristics
• The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces.
• Usability consultant Jakob Nielsen developed this method on the basis of several years of experience in teaching and consulting about usability engineering.

• Heuristic evaluations are one of the most informal methods of usability inspection in the field of human-computer interaction.

• There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the same aspects of interface design.

Page 51: Software Testing Week 8 lectures 1 and 2

Usability Heuristics

• Quite often, usability problems that are discovered are categorized according to their estimated impact on user performance or acceptance

• Often the heuristic evaluation is conducted in the context of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to be compatible with the intended users’ needs and preferences.

Page 52: Software Testing Week 8 lectures 1 and 2

Usability Heuristics

• Most heuristic evaluations can be accomplished in a matter of days.
  – The time required varies with the size of the artefact, its complexity, the purpose of the review, the nature of the usability issues that arise in the review and the competence of the reviewers.

• A criticism that is often levelled at heuristic methods of evaluation is that results are highly influenced by the knowledge of the expert reviewer(s).

Page 53: Software Testing Week 8 lectures 1 and 2

Integration Testing
• An approach to testing that combines individual components into larger assemblies to expose faults in interfaces and in the interaction between integrated components.

• The process of combining components into larger assemblies.

Page 54: Software Testing Week 8 lectures 1 and 2

Test Suite
• The most common term for a collection of test cases is a test suite.
• The test suite often also contains more detailed instructions or goals for each collection of test cases.

• It definitely contains a section where the tester identifies the system configuration used during testing.

• A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Page 55: Software Testing Week 8 lectures 1 and 2

Test Suite
• Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.

• An executable test suite is a test suite that is ready to be executed.

• This usually means that there exists a test harness that is integrated with the suite, such that the test suite and the test harness together can work on a sufficiently detailed level to correctly communicate with the system under test (SUT).

Page 56: Software Testing Week 8 lectures 1 and 2

Monkey test
• In computer science a monkey test is a unit test that runs with no specific test in mind.
• The monkey in this case is the producer of any input data (whether that be file data, or input device data).

• Examples of monkey test unit tests can vary from simple random string entry into text boxes (to ensure handling of all possible user input), to garbage files (for checking against bad loading routines that have blind faith in their data)
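
A tiny monkey test in that style, throwing arbitrary strings at a hypothetical text-box handler with no specific expected values in mind, only broad sanity checks:

```python
import random
import string

# Hypothetical routine under test: handles free-form text typed into a form field.
def handle_text_box(value: str) -> str:
    return value.strip().lower()[:100]

random.seed(7)
for _ in range(500):
    # The "monkey": arbitrary printable characters of arbitrary length.
    junk = "".join(random.choice(string.printable) for _ in range(random.randrange(0, 200)))
    result = handle_text_box(junk)
    assert isinstance(result, str) and len(result) <= 100
print("monkey test completed without crashing")
```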

Page 57: Software Testing Week 8 lectures 1 and 2

Performance test
• The testing conducted to evaluate the compliance of a system or software component with specified performance requirements, such as response times, transaction rates and resource utilization.

Page 58: Software Testing Week 8 lectures 1 and 2

Regression Testing
• The selective retesting to detect faults introduced during modification of a system.

• Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Page 59: Software Testing Week 8 lectures 1 and 2

Regression testing in other words

• A regression test re-runs previous tests against the changed software to ensure that the changes made in the current software do not affect the functionality of the existing software.

• It can be performed either by hand or by software that automates the process.

• It can be performed at unit, module, system or project level.

• It often uses automated test tools to reduce the effort required to repeat a large suite of tests over many versions of the software.

Page 60: Software Testing Week 8 lectures 1 and 2

Scenario Testing
• An intermediate definition is test development in which business conditions are grouped together to represent a single set of business functions to be tested, representing a discrete business case, characterized by a set of test cases grouped under the scenario; also called a test run.

Page 61: Software Testing Week 8 lectures 1 and 2

Scenario testing

• A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test has five key characteristics.

• It is:
  – (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.

Page 62: Software Testing Week 8 lectures 1 and 2

Scenario testing

• They are usually different from test cases in that test cases are single steps and scenarios cover a number of steps.

• Test suites and scenarios can be used in concert for complete system tests.

• Scenario testing is similar to, but not the same as session-based testing, which is more closely related to exploratory testing, but the two concepts can be used in conjunction.

Page 63: Software Testing Week 8 lectures 1 and 2

Test Case/Script
• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective to verify compliance with specified requirements.
• After execution, it will contain actual outputs.
• Test cases constitute checkpoints developed into test scripts at which the behavior of the application is validated by comparing expected results against actual results.
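
One way to picture such a test case as data (field names here are illustrative, not taken from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    objective: str                 # the requirement the case verifies
    preconditions: list            # execution preconditions
    inputs: dict                   # the stimulus applied to the system
    expected_output: object        # the outcome required by the specification
    actual_output: object = None   # filled in after execution

    def passed(self) -> bool:
        return self.actual_output == self.expected_output

case = TestCase(
    objective="Multiplication of two positive integers",
    preconditions=["calculator module is loaded"],
    inputs={"a": 3, "b": 4},
    expected_output=12,
)
case.actual_output = 3 * 4         # normally produced by executing the system under test
print("PASS" if case.passed() else "FAIL")
```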

Page 64: Software Testing Week 8 lectures 1 and 2

Alpha Testing
• Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

Page 65: Software Testing Week 8 lectures 1 and 2

Beta Testing
• Operational testing at a site not otherwise involved with the software developers.

Page 66: Software Testing Week 8 lectures 1 and 2

Testing Cycle
• Although testing varies between organisations, there is a cycle to testing:
• Requirements Analysis: Testing should begin in the requirements phase of the SDLC.
• Design Analysis: During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests work.
• Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
• Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.
• Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
• Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
• Retesting the Defects

Page 67: Software Testing Week 8 lectures 1 and 2

JAD
• Joint Application Development:
  – a process by which an application evolves as a result of a series of instances of behind-closed-doors meetings between the project development team and the potential application users.

Page 68: Software Testing Week 8 lectures 1 and 2

Controversy

• There is considerable controversy among testing writers and consultants about what constitutes responsible software testing.

• Members of the “context-driven” school of testing believe that there are no "best practices" for testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

• This belief directly contradicts standards such as the IEEE 829 test documentation standard, and organisations such as the US FDA which promote them.

Page 69: Software Testing Week 8 lectures 1 and 2

Some of the major controversies include:

Agile vs. Traditional
• Starting around 1990, a new style of writing about testing began to challenge what had come before.
• Instead of assuming that testers have full access to source code and complete specifications, these writers argued that testers must learn to work under conditions of uncertainty and constant change.

• Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model (CMM).

Page 70: Software Testing Week 8 lectures 1 and 2

Agile vs Traditional
• The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) is popular mainly in commercial circles, whereas the CMM was embraced by government and military software providers.

• However, saying that "maturity models" like the CMM gained ground against or in opposition to agile testing may not be right.

• The Agile movement is a 'way of working', while the CMM is a process improvement idea.

Page 71: Software Testing Week 8 lectures 1 and 2

Exploratory vs. Scripted

• Exploratory testing means simultaneous learning, test design, and test execution.

• Scripted testing means that learning and test design happen prior to test execution, and quite often the learning has to be done again during test execution.

• Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood.

Page 72: Software Testing Week 8 lectures 1 and 2

Exploratory vs. Scripted

• Some writers consider it a primary and essential practice.

• Structured exploratory testing is a compromise when the testers are familiar with the software.

• A vague test plan, known as a test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual testers to choose the method and steps of testing.

Page 73: Software Testing Week 8 lectures 1 and 2

Exploratory vs. Scripted
• There are two main disadvantages associated with a primarily exploratory testing approach.
  – The first is that there is no opportunity to prevent defects, which can happen when the designing of tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design.

– The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing approach is difficult.

Page 74: Software Testing Week 8 lectures 1 and 2

Exploratory vs. Scripted
• For this reason, a blended approach of scripted and exploratory testing is often used to reap the benefits of both while mitigating each approach's disadvantages.

Page 75: Software Testing Week 8 lectures 1 and 2

Manual vs. Automated
• Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.

• Others, such as advocates of agile development, recommend automating 100% of all tests

• A challenge with automation is that automated testing requires automated test oracles
  – (an oracle is a mechanism or principle by which a problem in the software can be recognized).
• Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software.

Page 76: Software Testing Week 8 lectures 1 and 2

Manual vs Automated
• The success of automated software testing depends on complete and comprehensive test planning.

• Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing.

• Many large software organizations perform automated testing. Some have developed their own automated testing environments specifically for internal development and not for resale.

Page 77: Software Testing Week 8 lectures 1 and 2

Metrics

• By measuring how many bugs are found and comparing them to predicted numbers (based on past experience with similar projects), certain assumptions regarding the effectiveness of testing can be made.

• While this is not an absolute measurement of quality, if a project is halfway complete and no defects have been found, then changes may be needed to the procedures being employed by QA.

Page 78: Software Testing Week 8 lectures 1 and 2

XP (Extreme Programming)
• Extreme Programming (XP) is a deliberate and disciplined approach to software development.
• XP is successful because it stresses customer satisfaction. The methodology is designed to deliver the software the customer needs when it is needed.
• XP empowers developers to respond to changing customer requirements, even late in the life cycle, with a degree of confidence.
• This methodology also emphasizes teamwork.
  – Managers, customers, and developers are all part of a team dedicated to delivering quality software.
  – XP implements a simple, yet effective way to enable groupware-style development.

Page 79: Software Testing Week 8 lectures 1 and 2

XP contd.
• XP improves a software project in four essential ways:
  – Communication
  – Simplicity
  – Feedback
  – Courage
• XP programmers keep their design simple and clean.
• They get feedback by testing their software starting on day one.
• They deliver the system to the customers as early as possible and implement changes as suggested.
• With this foundation, XP programmers are able to respond to changing requirements and technology.

Page 80: Software Testing Week 8 lectures 1 and 2

The Testers
• Understand fundamentals
• Master software testers should understand software.
  – What can software do?
  – What external resources does it use to do it?
  – What are its major behaviours?
  – How does it interact with its environment?

• The answers to these questions have nothing to do with practice and everything to do with training. One could practice for years and not gain such understanding.

Page 81: Software Testing Week 8 lectures 1 and 2

It’s a complex environment
• There are 4 major categories of software users (entities within an application’s environment that are capable of sending the application input or consuming its output).

• Note that of the four major categories of users, only one is visible to the human tester’s eye: the user interface. The interfaces to the kernel, the file system and other software components happen without scrutiny.

Page 82: Software Testing Week 8 lectures 1 and 2

Hmmm..

• Without understanding these interfaces, testers are taking into account only a very small percentage of the total inputs to their software.

• By paying attention only to the visible user interface, we are limiting what bugs we can find and what behaviors we can force

Page 83: Software Testing Week 8 lectures 1 and 2

For example

• The scenario of a full hard drive. How do we test this situation?

• Inputs through the user interface will never force the code to handle the case of a full hard drive.

• This scenario can only be tested by controlling the file system interface. Specifically, we need to force the file system to indicate to the application that the disk is full.

• Controlling the UI is only one part of the solution.
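
A sketch of what controlling the file system interface can look like in practice, using Python's unittest.mock to make the open call report a full disk (the `save_report` function is a hypothetical piece of application code, not part of any real system):

```python
import errno
from unittest import mock

# Hypothetical application code that writes a report to disk.
def save_report(path: str, text: str) -> bool:
    try:
        with open(path, "w") as fh:
            fh.write(text)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOSPC:   # disk full: degrade gracefully instead of crashing
            return False
        raise

# The disk never needs to be full: the file system interface is faked instead.
with mock.patch("builtins.open", side_effect=OSError(errno.ENOSPC, "No space left on device")):
    assert save_report("report.txt", "hello") is False
print("full-disk behaviour handled gracefully")
```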

Page 84: Software Testing Week 8 lectures 1 and 2

Testing as “Art” or “Craft”

• Understanding the environment in which an application works is a nontrivial endeavor that all the practice in the world will not help you accomplish.

• Understanding the interfaces that your application possesses and establishing the ability to test them requires discipline and training.

• This is not a task for artists and craftspeople.

Page 85: Software Testing Week 8 lectures 1 and 2

Understanding failure

• Master software testers should understand software failure.
  – How and why does software fail?
  – Are there symptoms of software failure that give us clues to the health of an application?
  – Are some features systemically problematic?
  – How does one drive certain features to failure?

Page 86: Software Testing Week 8 lectures 1 and 2

Good books on software testing

• G. J. Myers, The Art of Software Testing (Wiley, New York, 1979).

• J. A. Whittaker, How to Break Software (Addison Wesley, Reading MA, 2002)

Page 87: Software Testing Week 8 lectures 1 and 2

Questions

• Any questions on testing?

