TESTER'S GUIDE

TABLE OF CONTENTS

1. POLICY
2. Terms to understand
   2.1. What is software 'quality'?
   2.2. What is verification? validation?
   2.3. What's an 'inspection'?
   2.4. QA & Testing? Differences
3. Life Cycle of Testing Process
4. Levels of Testing
   4.1. Unit Testing
   4.2. Integration testing
   4.3. System testing
   4.4. Acceptance testing
5. Types of Testing
   5.1. Incremental integration testing
   5.2. Sanity testing
   5.3. Compatibility testing
   5.4. Exploratory testing
   5.5. Ad-hoc testing
   5.6. Comparison testing
   5.7. Load testing
   5.8. System testing
   5.9. Functional testing
   5.10. Volume testing
   5.11. Stress testing
   5.12. Sociability Testing
   5.13. Usability testing
   5.14. Recovery testing
   5.15. Security testing
   5.16. Performance Testing
   5.17. End-to-end testing
   5.18. Regression testing
   5.19. Parallel testing
   5.20. Install/uninstall testing
   5.21. Mutation testing
   5.22. Alpha testing
   5.23. Beta testing
6. Testing Techniques
   6.1. Black Box testing
      6.1.1. Equivalence Testing
      6.1.2. Boundary testing
      6.1.3. Cause-Effect Graphing Techniques
      6.1.4. Error Guessing
   6.2. White Box testing
      6.2.1. Path Testing
      6.2.2. Condition testing
      6.2.3. Loop Testing
      6.2.4. Data Flow Testing
7. Web Testing Specifics
   7.1. Internet Software - Quality Characteristics
   7.2. WWW Project Peculiarities
   7.3. Basic HTML Testing
   7.4. Suggestions for fast loading
   7.5. Link Testing
   7.6. Compatibility Testing
   7.7. Usability Testing
      7.7.1. Usability Tips
   7.8. Portability Testing
   7.9. Cookies Testing
8. Testing - When is a program correct?
9. Test Plan
10. Test cases
   10.1. What's a 'test case'?
11. Testing Coverage
12. What if there isn't enough time for thorough testing?
13. Defect reporting
14. Types of Automated Tools
15. Top Tips for Testers

1. POLICY

We are committed to Continuous Improvement of Quality of Products and Customer Services by adhering to International Standards.

2. TERMS TO UNDERSTAND

2.1. What is software 'quality'?

Quality software is reasonably bug/defect-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

2.2. What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. Both verification and validation continue in a cycle until the software becomes defect-free (Validation --> Reporting --> Fixing (and enhancements) --> Verification).

2.3. What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, a reader, the author of whatever is being reviewed, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

2.4. QA & Testing? Differences

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

3. LIFE CYCLE OF TESTING PROCESS

1. Planning
2. Analysis
3. Design
4. Execution
5. Cycles
6. Final Testing and implementation
7. Post Implementation

The following are some of the steps to consider:

- Obtain requirements, functional design, and internal design specifications and other necessary documents
- Obtain schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through life cycle

4. LEVELS OF TESTING

4.1. Unit Testing

The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and sometimes by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
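
As an illustration, here is a minimal unit test sketch in Python using the standard unittest module; the discount() function and its rounding rule are invented purely for the example:

import unittest

def discount(price, percent):
    # Invented function under test: reduce a price by a percentage.
    return round(price * (1 - percent / 100.0), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()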

4.2. Integration testing

Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. It is not truly system testing because the components are not implemented in the operating environment. This type of testing is especially relevant to client/server and distributed systems.

During integration testing, the units or components can be assembled using two approaches:

Nonincremental integration - "big bang" approach

Disadvantage - When a failure occurs, it is very difficult to locate the faults. After each modification, we have to go through the cycle of testing, locating faults, and fixing them again.

Incremental integration

Incremental integration can be top-down or bottom-up:

Top-down testing starts with main and successively replaces stubs with the real modules.

Major steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main module.
2. Depending on the integration approach, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. When failures are encountered, locate the faults and perform regression testing to guarantee that the modification won't adversely affect other pieces. Then go through steps 2, 3, and 4 again.

Advantages: can verify major control or decision points early in the testing process.

Disadvantages: stubs are required to perform the integration testing, and stubs are generally difficult to develop.

Bottom-up testing builds larger module assemblies from primitive modules.

Major steps:
1. Low-level components are combined into clusters that perform a specific subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.

Advantages: stubs are replaced by drivers, which are much easier to develop.

Disadvantages: major control and decision problems are identified later in the testing process.

Sandwich testing is mainly top-down, with bottom-up integration and testing applied to certain widely used components.

4.3. System testing

The system test phase begins once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

4.4. Acceptance testing

Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

5. TYPES OF TESTING

5.1. Incremental integration testing

Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

5.2. Sanity testing

Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

5.3. Compatibility testing

Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

5.4. Exploratory testing

Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

5.5. Ad-hoc testing

Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

5.6. Comparison testing

Comparing software weaknesses and strengths to competing products.

5.7. Load testing

Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
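
A minimal load-test sketch using only the Python standard library; the URL is a placeholder and a real load test would normally use a dedicated tool and a much larger request mix:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"  # placeholder target, not a real system under test

def timed_request(_):
    # Issue one request and return its response time in seconds.
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.time() - start

# Fire 50 concurrent requests and report the slowest and average response times.
with ThreadPoolExecutor(max_workers=50) as pool:
    times = list(pool.map(timed_request, range(50)))
print("slowest response: %.2f s, average: %.2f s" % (max(times), sum(times) / len(times)))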

5.8. System testing

Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

5.9. Functional testing

Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

5.10. Volume testing

Volume testing involves testing the software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms, and the technologies used to build the application. Sample volume testing considerations include, but are not limited to:

- If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file
- If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data
- If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request
- If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form

5.11. Stress testing

Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

5.12. Sociability Testing

This means that you test an application in its normal environment, along with other standard applications, to make sure they all get along together; that is, that they don't corrupt each other's files, they don't crash, they don't consume excessive system resources, they don't lock up the system, they can share the printer peacefully, etc.

5.13. Usability testing

Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

5.14. Recovery testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

5.15. Security testing

Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

5.16. Performance Testing

Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

5.17. End-to-end testing

Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

5.18. Regression testing

Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

5.19. Parallel testing

With parallel testing, users can easily choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers the manufacturer's cost.

5.20. Install/uninstall testing

Testing of full, partial, or upgrade install/uninstall processes.

5.21. Mutation testing

A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
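
A tiny sketch of the idea: is_adult() is an invented function, the 'mutant' deliberately changes >= to >, and a useful test data set is one that makes the original and the mutant disagree:

def is_adult(age):
    return age >= 18      # original code

def is_adult_mutant(age):
    return age > 18       # deliberately introduced 'bug' (the mutant)

# A test data set that includes the boundary value 18 'kills' this mutant,
# i.e. at least one test outcome differs between original and mutant.
test_data = [17, 18, 19]
for age in test_data:
    if is_adult(age) != is_adult_mutant(age):
        print("mutant detected by test input", age)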

5.22. Alpha testing

Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

5.23. Beta testing

Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

6. TESTING TECHNIQUES

6.1. Black Box testing

Black box testing (data driven or input/output driven) is not based on any knowledge of internal design or code. Tests are based on requirements and functionality. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:

1. Incorrect or missing functions,
2. Interface errors,
3. Errors in data structures or external database access,
4. Performance errors, and
5. Initialization and termination errors.

White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages.

6.1.1. Equivalence Testing

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.

Test case design for equivalence partitioning:

1. A good test case reduces by more than one the number of other test cases which must be developed
2. A good test case covers a large set of other possible cases
3. Classes of valid inputs
4. Classes of invalid inputs

Consider the following data variable:


Name of data variable: Age

Description of data variable: the length of time that one has existed

Data variable type: Integer

Acceptable range: 25 to 60

Equivalence partitioning essentially takes the input range and attempts to clearly compartmentalize this range into sub-ranges, and then presents test cases that will specifically test each of these sub-ranges.

Now, for the Age data variable, the acceptable range is from 25 to 60, inclusive. According to the rules of the technique, since the data variable is a range, one valid class and two invalid classes will be specified. The valid class refers to the valid sub-range of input values; the invalid classes refer to the invalid sub-ranges.

Hence, for equivalence partitioning:

Valid equivalence class: 25 to 60
First invalid equivalence class: less than 25
Second invalid equivalence class: more than 60

Hence, examples of test cases could be as follows:

1. Valid equivalence class: Age = 30. Expected result – accepted
2. First invalid equivalence class: Age = 20. Expected result – rejected
3. Second invalid equivalence class: Age = 70. Expected result – rejected
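
The three classes above translate directly into executable checks; validate_age() is an invented stand-in for the application's real input validation:

def validate_age(age):
    # Invented stand-in for the application's Age validation rule.
    return 25 <= age <= 60

# One representative test case per equivalence class.
equivalence_cases = [
    (30, True),    # valid class: 25 to 60
    (20, False),   # first invalid class: less than 25
    (70, False),   # second invalid class: more than 60
]
for age, expected in equivalence_cases:
    assert validate_age(age) == expected, "unexpected result for age %d" % age
print("all equivalence class cases passed")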

6.1.2. Boundary testing

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

Test case design for boundary value analysis:

Situations on, above, or below the edges of input, output, and condition classes have a high probability of success in finding errors.

For boundary value analysis, the idea is to identify the extreme values of the data range, and then to choose test cases around these extreme values.

Hence, for the above example, boundary value analysis gives:

Minimum value: 25
Maximum value: 60

Examples of test cases would be:

1. Test cases for the minimum value: Age = 24 (rejected), Age = 25 (accepted), Age = 26 (accepted)
2. Test cases for the maximum value: Age = 59 (accepted), Age = 60 (accepted), Age = 61 (rejected)
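
The six boundary cases above can be scripted the same way, reusing the invented validate_age() stand-in from the equivalence partitioning sketch:

def validate_age(age):
    # Invented stand-in for the application's Age validation rule.
    return 25 <= age <= 60

boundary_cases = [
    (24, False), (25, True), (26, True),   # around the minimum value 25
    (59, True), (60, True), (61, False),   # around the maximum value 60
]
for age, expected in boundary_cases:
    assert validate_age(age) == expected, "unexpected result for age %d" % age
print("all boundary value cases passed")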

6.1.3. Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
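
A small sketch of step 4, assuming an invented rule with two causes (valid user id, valid password) and one effect (access granted); each decision table rule becomes one test case:

# Decision table rules: (valid_user, valid_password) -> expected effect
decision_table = [
    (True,  True,  True),    # both causes present -> access granted
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

def grant_access(valid_user, valid_password):
    # Invented implementation of the rule under test.
    return valid_user and valid_password

for valid_user, valid_password, expected in decision_table:
    assert grant_access(valid_user, valid_password) == expected
print("one test case executed per decision table rule")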

6.1.4. Error Guessing

Error Guessing is the process of using intuition and past experience to fill in gaps in the test data set. There are no rules to follow. The tester must review the test records with an eye towards recognizing missing conditions. Two familiar examples of error-prone situations are division by zero and calculating the square root of a negative number. Either of these will result in system errors and garbled output.

Other cases where experience has demonstrated error proneness are the processing of variable-length tables, calculation of median values for odd and even numbered populations, cyclic master file/database updates (improper handling of duplicate keys, unmatched keys, etc.), overlapping storage areas, overwriting of buffers, forgetting to initialize buffer areas, and so forth. I am sure you can think of plenty of circumstances unique to your hardware/software environments and use of specific programming languages.

Error Guessing is as important as Equivalence Partitioning and Boundary Analysis because it is intended to compensate for their inherent incompleteness. As Equivalence Partitioning and Boundary Analysis complement one another, Error Guessing complements both of these techniques.
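
The two familiar examples above can be turned straight into guessed test cases; safe_divide() and safe_sqrt() are invented wrappers that are expected to reject the bad inputs rather than crash:

import math

def safe_divide(a, b):
    # Invented wrapper: the guessed error case is b == 0.
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

def safe_sqrt(x):
    # Invented wrapper: the guessed error case is a negative input.
    if x < 0:
        raise ValueError("square root of a negative number is not allowed")
    return math.sqrt(x)

for func, bad_input in [(safe_divide, (1, 0)), (safe_sqrt, (-4,))]:
    try:
        func(*bad_input)
        print(func.__name__, "accepted a bad input - defect")
    except ValueError:
        print(func.__name__, "rejected the guessed error case")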

6.2. White Box testing

White box testing (logic driven) is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. A failure of a white box test may result in a change which requires all black box testing to be repeated, and the re-determination of the white box paths. White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that

1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

6.2.1. Path Testing

A path-coverage test allows us to exercise every transition between the program statements (and so every statement and branch as well).

1. First we construct a program graph.
2. Then we enumerate all paths.
3. Finally we devise the test cases.

Possible criteria:

1. exercise every path from entry to exit;
2. exercise each statement at least once;
3. exercise each case in every branch/case.
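
A minimal sketch: classify() is an invented function with two decisions and three entry-to-exit paths, so path coverage needs at least three test cases:

def classify(x):
    # Invented function under test: two decisions, three entry-to-exit paths.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# One test case per path through the program graph.
path_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for value, expected in path_cases:
    assert classify(value) == expected
print("every entry-to-exit path exercised")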

6.2.2. Condition testing

A condition test can use a combination of comparison operators and logical operators. The comparison operators compare the values of variables, and this comparison produces a boolean result. The logical operators combine booleans to produce a single boolean result that is the result of the condition test.

e.g. (a == b) - the result is true if the value of a is the same as the value of b.

Myers: take each branch out of a condition at least once.

White and Cohen: for each relational operator e1 < e2, test all combinations of e1, e2 orderings. For a Boolean condition, test all possible inputs (!).

Branch and relational operator testing - enumerate categories of operator values:

B1 || B2: test {B1=t,B2=t}, {t,f}, {f,t}
B1 || (e2 = e3): test {t,=}, {f,=}, {t,<}, {t,>}.
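
The B1 || B2 case above can be driven directly; the condition under test is an invented simple OR, and the three operand pairs are exactly the combinations listed:

def condition(b1, b2):
    # Invented condition under test: B1 || B2.
    return b1 or b2

# Branch and relational operator testing for B1 || B2: {t,t}, {t,f}, {f,t}.
operand_combinations = [
    (True, True, True),
    (True, False, True),
    (False, True, True),
]
for b1, b2, expected in operand_combinations:
    assert condition(b1, b2) == expected
print("all listed operand combinations exercised")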

6.2.3. Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

6.2.3.1 Simple Loops

The following tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop:

1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop where m < n,
4. n-1, n, and n+1 passes through the loop.
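
A sketch of the simple-loop tests, assuming an invented sum_first() loop whose maximum number of allowable passes is n = 5:

def sum_first(values, passes):
    # Invented loop under test: sum at most 'passes' leading values.
    total = 0
    for index, value in enumerate(values):
        if index >= passes:
            break
        total += value
    return total

n = 5                                  # maximum number of allowable passes
data = [1, 2, 3, 4, 5]
# Skip the loop, one pass, m < n passes, and n-1, n, n+1 passes.
for passes in [0, 1, 3, n - 1, n, n + 1]:
    print("passes requested:", passes, "-> sum:", sum_first(data, passes))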

6.2.3.2 Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops at typical values.
4. Continue until all loops have been tested.

6.2.3.3 Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

6.2.3.4 Unstructured Loops

This type of loop should be redesigned, not tested!

6.2.4. Data Flow Testing

Data flow testing selects test paths according to the locations of definitions and uses of variables in the program.

Def-use chains:

- def = definition of a variable
- use = use of that variable
- def-use chains go across control boundaries
- Testing - test every def-use chain at least once

Stubs for Testing

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system. Stubs are mostly used for Top-Down Testing.

Stubs can be used for the following when standing in for unfinished code:
- Display a trace message
- Display parameter value(s)
- Return a value from a table
- Return a table value selected by a parameter

Drivers for Testing

A Test Harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation.
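
A minimal sketch of a stub and a driver in Python; process_order() and the unfinished tax module are invented for illustration:

def tax_service_stub(amount):
    # Stub standing in for an unfinished tax module: it displays a trace
    # message and returns a value from a fixed table.
    print("tax_service_stub called with", amount)
    return {100: 10, 200: 20}.get(amount, 0)

def process_order(amount, tax_service):
    # Unit under test: depends on the (still unfinished) tax module.
    return amount + tax_service(amount)

def driver():
    # Test driver/harness: supplies inputs and checks outputs in isolation.
    assert process_order(100, tax_service_stub) == 110
    assert process_order(200, tax_service_stub) == 220
    print("driver: all checks passed")

driver()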

7. WEB TESTING SPECIFICS

7.1. Internet Software - Quality Characteristics

1. Functionality - Verified content
2. Reliability - Security and availability
3. Efficiency - Response Times
4. Usability - High user satisfaction
5. Portability - Platform Independence

7.2. WWW Project Peculiarities

- Software consists of a large number of components
- User Interface is more complex than many GUI-based Client-Server applications
- User may be unknown (no training / user manuals)
- Security threats come from anywhere
- User load is unpredictable

7.3. Basic HTML Testing

- Check for illegal elements present
- Check for illegal attributes present
- Check that all tags are closed
- Check for the tags <HEAD>, <TITLE>, <BODY>, <DOCTYPE>
- Check that all IMG tags have an ALT attribute (ALT text must be suggestive)
- Check for consistency of fonts - colors and font size
- Check for spelling errors in text and images
- Check for "nonsense" markup

Example of nonsense markup: <B>Hello</B> may be written as <B>H</B><B>ell</B><B>o</B>

7.4. Suggestions for fast loading

- Web page weight should be reduced to as small a size as possible
- Don't hit the database on every request; go for an alternative where possible.
  Example: if your web application has Reports, generate the content of the report as a static HTML file at periodic intervals. When the user views the report, show the static HTML content; there is no need to go to the database and retrieve the data every time the user hits the report link.
- Cached query - if the data fetched by a query only changes periodically, then the query result can be cached for that period. This avoids unnecessary database access.
- Every IMG tag should have WIDTH and HEIGHT attributes.

IMG - Bad Example

<B> Hello </B>
<IMG SRC = "FAT.GIF">
<B> World </B>

IMG - Good Example

<B> Hello </B>
<IMG SRC = "FAT.GIF" WIDTH ="120" HEIGHT="150" >
<B> World </B>

- All photographic images should be in "jpg" format
- Computer-created images should be in "gif" format
- The background image should be less than 3.5k (and the background image should be the same for all pages, except for functional reasons)
- Avoid nested tables
- Keep table text size to a minimum (e.g. less than 5000 characters)

7.5. Link Testing

- You must ensure that all the hyperlinks are valid. This applies to both internal and external links.
- Internal links shall be relative, to minimize the overhead and faults when the web site is moved to the production environment
- External links shall be referenced as absolute URLs
- External links can change without control - so automate regression testing
- Remember that external non-home-page links are more likely to break
- Be careful with links in "What's New" sections; they are likely to become obsolete
- Check that content can be accessed by means of: Search engine, Site Map
- Check the accuracy of Search Engine results
- Check that the web site's Error 404 ("Not Found") is handled by means of a user-friendly page
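
A minimal link-check sketch using only the Python standard library; the starting URL is a placeholder, and a real link test would also need to handle redirects, logins, and the relative-versus-absolute link rules above:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://example.com/"   # placeholder starting page

class LinkCollector(HTMLParser):
    # Collects the href target of every <a> tag on the page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START_URL, value))

html = urllib.request.urlopen(START_URL, timeout=10).read().decode("utf-8", "replace")
collector = LinkCollector()
collector.feed(html)

for link in collector.links:
    try:
        status = urllib.request.urlopen(link, timeout=10).status
    except Exception as error:
        status = error
    print(link, "->", status)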

7.6. Compatibility Testing

Check the site behavior across the industry-standard browsers. The main issues involve how differently the browsers handle tables, images, caching and scripting languages.

In cross-browser testing, check for:

- Behavior of buttons
- Support of JavaScript
- Support of tables
- Acrobat, Real, Flash behavior
- ActiveX control support
- Java compatibility
- Text size

Browser support matrix (each row lists ActiveX controls / VB Script / JavaScript / Java applets / Dynamic HTML / Frames / CSS 1.0 / CSS 2.0):

- Internet Explorer 4.0 and later: Enabled / Enabled / Enabled / Enabled / Enabled / Enabled / Enabled / Enabled
- Internet Explorer 3.0 and later: Enabled / Enabled / Enabled / Enabled / Disabled / Enabled / Enabled / Disabled
- Netscape Navigator 4.0 and later: Disabled / Disabled / Enabled / Enabled / Enabled / Enabled / Enabled / Enabled
- Netscape Navigator 3.0 and later: Disabled / Disabled / Enabled / Enabled / Disabled / Enabled / Disabled / Disabled
- Both Internet Explorer and Navigator, 4.0 and later: Disabled / Disabled / Enabled / Enabled / Enabled / Enabled / Enabled / Enabled
- Both Internet Explorer and Navigator, 3.0 and later: Disabled / Disabled / Enabled / Enabled / Disabled / Enabled / Disabled / Disabled
- Microsoft Web TV (version unavailable): Disabled / Disabled / Disabled / Disabled / Disabled / Disabled / Disabled / Disabled

7.7. Usability Testing

Aspects to be tested with care:

- Coherence of look and feel
- Navigational aids
- User Interactions
- Printing

with respect to:

- Normal behavior
- Destructive behavior
- Inexperienced users

7.7.1. Usability Tips

1. Define categories in terms of user goals

2. Name sections carefully

3. Think internationally

4. Identify the homepage link on every page

5. Make sure search is always available

6. Test all the browsers your audience will use

7. Differentiate visited links from unvisited links

8. Never use graphics where HTML text will do

9. Make GUI design predictable and consistent

10. Check that printed pages fit appropriately on paper pages [consider that many people just surf and print; check especially the pages for which format is important, e.g. an application form that can either be filled in on-line or printed/filled/faxed]

7.8. Portability Testing

1. Check that links to URLs outside the web site are in canonical form (e.g. http://www.arkinsys.com/)
2. Check that links to URLs within the web site are in relative form (e.g. …./arkinsys/images/images.gif)

7.9. Cookies Testing

What are cookies?

A "Cookie" is a small piece of information sent by the web server to be stored by the web browser, so that it can later be read back from the browser. This is useful for having the browser remember some specific information.

Why must you test cookies?

1. Cookies can expire

2. Users can disable them in Browser

How to perform Cookies testing?

1. Check the behavior after cookies expiration

2. Work with cookies disabled

3. Disable cookies mid-way

4. Delete web cookies mid-way

5. Clear memory and disk cache mid-way
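
A small sketch of the 'cookies disabled' check using only the standard library; the URL is a placeholder, and the point is simply to contrast a cookie-aware client with one that never sends cookies back:

import urllib.request
from http.cookiejar import CookieJar

URL = "http://example.com/"   # placeholder application URL

# Normal behaviour: a cookie-aware opener stores and resends cookies.
jar = CookieJar()
cookie_opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
cookie_opener.open(URL, timeout=10)
print("cookies stored after first request:", len(jar))

# 'Cookies disabled': a plain opener never sends cookies back, so the
# application must still respond sensibly on every request.
plain_opener = urllib.request.build_opener()
response = plain_opener.open(URL, timeout=10)
print("status with cookies disabled:", response.status)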

8. TESTING - WHEN IS A PROGRAM CORRECT?

There are levels of correctness. We must determine the appropriate level of correctness for each system, because it costs more and more to reach higher levels.

1. No syntactic errors

2. Compiles with no error messages

3. Runs with no error messages

4. There exists data which gives correct output

5. Gives correct output for required input

6. Correct for typical test data

7. Correct for difficult test data

8. Proven correct using mathematical logic

9. Obeys specifications for all valid data

10. Obeys specifications for likely erroneous input

11. Obeys specifications for all possible input

9. TEST PLAN

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

1. Title of the Project

2. Identification of document including version numbers

3. Revision history of document including authors, dates, approvals

4. Table of Contents

5. Purpose of document, intended audience

6. Objective of testing effort

7. Software product overview

8. Relevant related document list, such as requirements, design documents, other test plans, etc.

9. Relevant standards or legal requirements

10. Traceability requirements

11. Relevant naming conventions and identifier conventions

12. Test organization and personnel/contact-info/responsibilities

13. Assumptions and dependencies

14. Project risk analysis


15. Testing priorities and focus

16. Scope and limitations of testing

17. Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable

18. Outline of data input equivalence classes, boundary value analysis, error classes

19. Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems

20. Test environment setup and configuration issues

21. Test data setup requirements

22. Database setup requirements

23. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs

24. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs

25. Test automation - justification and overview

26. Test tools to be used, including versions, patches, etc.

27. Test script/test code maintenance processes and version control


28. Problem tracking and resolution - tools and processes

29. Project test metrics to be used

30. Reporting requirements and testing deliverables

31. Software entrance and exit criteria

32. Initial sanity testing period and criteria

33. Test suspension and restart criteria

34. Personnel pre-training needs

35. Test site/location

36. Relevant proprietary, classified, security, and licensing issues.

37. Appendix - glossary, acronyms, etc.

10. TEST CASES

10.1. What's a 'test case'?

A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
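
As an illustration, the particulars listed above can be captured in a simple structure; the identifier, values, and wording below are invented, reusing the Age example from section 6.1.1:

test_case = {
    "id": "TC_AGE_001",                    # illustrative identifier only
    "name": "Reject age below the valid range",
    "objective": "Verify that the Age field rejects values under 25",
    "conditions_setup": "Registration form open, test database available",
    "input_data": {"Age": 20},
    "steps": [
        "Enter 20 in the Age field",
        "Submit the form",
    ],
    "expected_result": "Input is rejected with a range error message",
}
print(test_case["id"], "-", test_case["name"])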

11. TESTING COVERAGE

1. Line coverage. Test every line of code (or statement coverage: test every statement).

2. Branch coverage. Test every line, and every branch on multi-branch lines.

3. N-length sub-path coverage. Test every sub-path through the program of length N. For example, in a 10,000-line program, test every possible 10-line sequence of execution.

4. Path coverage. Test every path through the program, from entry to exit. The number of paths is impossibly large

to test.

5. Multicondition or predicate coverage. Force every logical operand to take every possible value. Two different

conditions within the same test may result in the same branch, and so branch coverage would only require the

testing of one of them.

6. Trigger every assertion check in the program. Use impossible data if necessary.

7. Loop coverage. "Detect bugs that exhibit themselves only when a loop is executed more than once."

8. Low Memory Conditions - In the event the application fails due to low memory conditions, it should fail in a graceful manner. This means that error logs should be written detailing what the conditions were that caused the failure to occur, orders the user may have open are closed, an error message is given informing the user of the condition and the actions that will be taken, etc.

9. Every module, object, component, tool, subsystem, etc. This seems obvious until you realize that many

programs rely on off-the-shelf components. The programming staff doesn't have the source code to these

components, so measuring line coverage is impossible. At a minimum (which is what is measured here), you need

a list of all these components and test cases that exercise each one at least once.

10. Fuzzy decision coverage. If the program makes heuristically-based or similarity-based decisions, and uses

comparison rules or data sets that evolve over time, check every rule several times over the course of training.

11. Relational coverage. "Checks whether the subsystem has been exercised in a way that tends to detect off-by-one

errors" such as errors caused by using < instead of <=. This coverage includes:

12. Every boundary on every input variable.

13. Every boundary on every output variable.

14. Every boundary on every variable used in intermediate calculations.

15. Data coverage. At least one test case for each data item / variable / field in the program.

16. Constraints among variables: Let X and Y be two variables in the program. X and Y constrain each other if the

value of one restricts the values the other can take. For example, if X is a transaction date and Y is the

transaction's confirmation date, Y can't occur before X.


17. Each appearance of a variable. Suppose that you can enter a value for X on three different data entry screens,

the value of X is displayed on another two screens, and it is printed in five reports. Change X at each data entry

screen and check the effect everywhere else X appears.

18. Every type of data sent to every object. A key characteristic of object-oriented programming is that each object

can handle any type of data (integer, real, string, etc.) that you pass to it. So, pass every conceivable type of data

to every object.

19. Handling of every potential data conflict. For example, in an appointment-calendaring program, what happens

if the user tries to schedule two appointments at the same date and time?

20. Handling of every error state. Put the program into the error state, check for effects on the stack, available

memory, handling of keyboard input. Failure to handle user errors well is an important problem, partially because

about 90% of industrial accidents are blamed on human error or risk-taking. Under the legal doctrine of

foreseeable misuse, the manufacturer is liable in negligence if it fails to protect the customer from the

consequences of a reasonably foreseeable misuse of the product.

21. Every complexity / maintainability metric against every module, object, subsystem, etc. There are many such

measures. Jones lists 20 of them. People sometimes ask whether any of these statistics are grounded in a theory of

measurement or have practical value. But it is clear that, in practice, some organizations find them an effective

tool for highlighting code that needs further investigation and might need redesign.


22. Conformity of every module, subsystem, etc. against every corporate coding standard. Several companies

believe that it is useful to measure characteristics of the code, such as total lines per module, ratio of lines of

comments to lines of code, frequency of occurrence of certain types of statements, etc. A module that doesn't fall

within the "normal" range might be summarily rejected (bad idea) or re-examined to see if there's a better way to

design this part of the program.

23. Table-driven code. The table is a list of addresses or pointers or names of modules. In a traditional CASE

statement, the program branches to one of several places depending on the value of an expression. In the table-

driven equivalent, the program would branch to the place specified in, say, location 23 of the table. The table is

probably in a separate data file that can vary from day to day or from installation to installation. By modifying the

table, you can radically change the control flow of the program without recompiling or relinking the code. Some

programs drive a great deal of their control flow this way, using several tables. Coverage measures? Some

examples:

24. check that every expression selects the correct table element

25. check that the program correctly jumps or calls through every table element

26. check that every address or pointer that is available to be loaded into these tables is valid (no jumps to impossible

places in memory, or to a routine whose starting address has changed)

27. check the validity of every table that is loaded at any customer site.


28. Every interrupt. An interrupt is a special signal that causes the computer to stop the program in progress and

branch to an interrupt handling routine. Later, the program restarts from where it was interrupted. Interrupts might

be triggered by hardware events (I/O or signals from the clock that a specified interval has elapsed) or software

(such as error traps). Generate every type of interrupt in every way possible to trigger that interrupt.

29. Every interrupt at every task, module, object, or even every line. The interrupt handling routine might change

state variables, load data, use or shut down a peripheral device, or affect memory in ways that could be visible to

the rest of the program. The interrupt can happen at any time-between any two lines, or when any module is being

executed. The program may fail if the interrupt is handled at a specific time. (Example: what if the program

branches to handle an interrupt while it's in the middle of writing to the disk drive?)

The number of test cases here is huge, but that doesn't mean you don't have to think about this type of testing.

This is path testing through the eyes of the processor (which asks, "What instruction do I execute next?" and

doesn't care whether the instruction comes from the mainline code or from an interrupt handler) rather than path

testing through the eyes of the reader of the mainline code. Especially in programs that have global state

variables, interrupts at unexpected times can lead to very odd results.

30. Every anticipated or potential race. Imagine two events, A and B. Both will occur, but the program is designed

under the assumption that A will always precede B. This sets up a race between A and B -if B ever precedes A, the


program will probably fail. To achieve race coverage, you must identify every potential race condition and then

find ways, using random data or systematic test case selection, to attempt to drive B to precede A in each case.

Races can be subtle. Suppose that you can enter a value for a data item on two different data entry screens. User 1

begins to edit a record, through the first screen. In the process, the program locks the record in Table 1. User 2

opens the second screen, which calls up a record in a different table, Table 2. The program is written to

automatically update the corresponding record in the Table 1 when User 2 finishes data entry. Now, suppose that

User 2 finishes before User 1. Table 2 has been updated, but the attempt to synchronize Table 1 and Table 2 fails.

What happens at the time of failure, or later if the corresponding records in Table 1 and 2 stay out of synch?

31. Every time-slice setting. In some systems, you can control the grain of switching between tasks or processes.

The size of the time quantum that you choose can make race bugs, time-outs, interrupt-related problems, and

other time-related problems more or less likely. Of course, coverage is a difficult problem here because you aren't

just varying time-slice settings through every possible value. You also have to decide which tests to run under

each setting. Given a planned set of test cases per setting, the coverage measure looks at the number of settings

you've covered.

32. Varied levels of background activity. In a multiprocessing system, tie up the processor with competing,

irrelevant background tasks. Look for effects on races and interrupt handling. Similar to time-slices, your

coverage analysis must specify


33. categories of levels of background activity (figure out something that makes sense) and

34. all timing-sensitive testing opportunities (races, interrupts, etc.).

35. Each processor type and speed. Which processor chips do you test under? What tests do you run under each

processor? You are looking for:

36. speed effects, like the ones you look for with background activity testing, and

37. consequences of processors' different memory management rules, and

38. floating point operations, and

39. any processor-version-dependent problems that you can learn about.

40. Every opportunity for file / record / field locking.

41. Every dependency on the locked (or unlocked) state of a file, record or field.

42. Every opportunity for contention for devices or resources.

43. Performance of every module / task / object. Test the performance of a module then retest it during the next

cycle of testing. If the performance has changed significantly, you are either looking at the effect of a

performance-significant redesign or at a symptom of a new bug.

44. Free memory / available resources / available stack space at every line or on entry into and exit out of every

module or object.


45. Execute every line (branch, etc.) under the debug version of the operating system. This shows illegal or

problematic calls to the operating system.

46. Vary the location of every file. What happens if you install or move one of the program's component, control,

initialization or data files to a different directory or drive or to another computer on the network?

47. Check the release disks for the presence of every file. It's amazing how often a file vanishes. If you ship the

product on different media, check for all files on all media.

48. Every embedded string in the program. Use a utility to locate embedded strings. Then find a way to make the

program display each string.

49. Operation of every function / feature / data handling operation under:

50. Every program preference setting.

51. Every character set, code page setting, or country code setting.

52. The presence of every memory resident utility (inits, TSRs).

53. Each operating system version.

54. Each distinct level of multi-user operation.

55. Each network type and version.

56. Each level of available RAM.

57. Each type / setting of virtual memory management.


58. Compatibility with every previous version of the program.

59. Ability to read every type of data available in every readable input file format. If a file format is subject to

subtle variations (e.g. CGM) or has several sub-types (e.g. TIFF) or versions (e.g. dBASE), test each one.

60. Write every type of data to every available output file format. Again, beware of subtle variations in file

formats-if you're writing a CGM file, full coverage would require you to test your program's output's readability

by every one of the main programs that read CGM files.

61. Every typeface supplied with the product. Check all characters in all sizes and styles. If your program adds

typefaces to a collection of fonts that are available to several other programs, check compatibility with the other

programs (nonstandard typefaces will crash some programs).

62. Every type of typeface compatible with the program. For example, you might test the program with (many

different) TrueType and Postscript typefaces, and fixed-sized bitmap fonts.

63. Every piece of clip art in the product. Test each with this program. Test each with other programs that should

be able to read this type of art.

64. Every sound / animation provided with the product. Play them all under different device (e.g. sound) drivers /

devices. Check compatibility with other programs that should be able to play this clip-content.

65. Every supplied (or constructible) script to drive other machines / software (e.g. macros) / BBS's and

information services (communications scripts).


66. All commands available in a supplied communications protocol.

67. Recognized characteristics. For example, every speaker's voice characteristics (for voice recognition software), every writer's handwriting characteristics (handwriting recognition software), or every typeface (OCR software).

68. Every type of keyboard and keyboard driver.

69. Every type of pointing device and driver at every resolution level and ballistic setting.

70. Every output feature with every sound card and associated drivers.

71. Every output feature with every type of printer and associated drivers at every resolution level.

72. Every output feature with every type of video card and associated drivers at every resolution level.

73. Every output feature with every type of terminal and associated protocols.

74. Every output feature with every type of video monitor and monitor-specific drivers at every resolution level.

75. Every color shade displayed or printed to every color output device (video card / monitor / printer / etc.) and associated drivers at every resolution level. And check the conversion to grey scale or black and white.

76. Every color shade readable or scannable from each type of color input device at every resolution level.

77. Every possible feature interaction between video card type and resolution, pointing device type and resolution, printer type and resolution, and memory level. This may seem excessively complex, but sometimes crash bugs occur only under the pairing of specific printer and video drivers at a high resolution setting. Other crashes have required the pairing of a specific mouse and printer driver, the pairing of mouse and video driver, or a combination of mouse driver plus video driver plus ballistic setting.

78. Every type of CD-ROM drive, connected to every type of port (serial / parallel / SCSI) and associated drivers.

79. Every type of writable disk drive / port / associated driver. Don't forget the fun you can have with removable drives or disks.

80. Compatibility with every type of disk compression software. Check error handling for every type of disk error, such as a full disk.

81. Every voltage level from analog input devices.

82. Every voltage level to analog output devices.

83. Every type of modem and associated drivers.

84. Every FAX command (send and receive operations) for every type of FAX card under every protocol and driver.

85. Every type of connection of the computer to the telephone line (direct, via PBX, etc.; digital vs. analog connection and signaling); test every phone control command under every telephone control driver.

86. Tolerance of every type of telephone line noise and regional variation (including variations that are out of spec) in telephone signaling (intensity, frequency, timing, and other characteristics of ring / busy / etc. tones).

87. Every variation in telephone dialing plans.

88. Every possible keyboard combination. Sometimes you'll find trap doors that the programmer used as hotkeys to call up debugging tools; these hotkeys may crash a debuggerless program. Other times, you'll discover an Easter Egg (an undocumented, probably unauthorized, and possibly embarrassing feature). The broader coverage measure is every possible keyboard combination at every error message and every data entry point. You'll often find different bugs when checking different keys in response to different error messages.

89. Recovery from every potential type of equipment failure. Full coverage includes each type of equipment, each driver, and each error state. For example, test the program's ability to recover from full disk errors on writable disks. Include floppies, hard drives, cartridge drives, optical drives, etc. Include the various connections to the drive, such as IDE, SCSI, MFM, parallel port, and serial connections, because these will probably involve different drivers.

90. Function equivalence. For each mathematical function, check the output against a known good implementation of the function in a different program. Complete coverage involves equivalence testing of all testable functions across all possible input values. (A small automated sketch of this idea, together with item 91, appears after this checklist.)

91. Zero handling. For each mathematical function, test when every input value, intermediate variable, or output variable is zero or near-zero. Look for severe rounding errors or divide-by-zero errors.

92. Accuracy of every graph, across the full range of graphable values. Include values that force shifts in the scale.

93. Accuracy of every report. Look at the correctness of every value, the formatting of every page, and the correctness of the selection of records used in each report.

94. Accuracy of every message.

95. Accuracy of every screen.

96. Accuracy of every word and illustration in the manual.

97. Accuracy of every fact or statement in every data file provided with the product.

98. Accuracy of every word and illustration in the on-line help.

99. Every jump, search term, or other means of navigation through the on-line help.

100. Check for every type of virus / worm that could ship with the program.

101. Every possible kind of security violation of the program, or of the system while using the program.

102. Check for copyright permissions for every statement, picture, sound clip, or other creation provided with the program.

103. Verification of the program against every program requirement and published specification.

104. Verification of the program against user scenarios. Use the program to do real tasks that are challenging and well-specified. For example, create key reports, pictures, page layouts, or other documents to match ones that have been featured by competitive programs as interesting output or applications.

105. Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the data or procedures of the program.

106. Usability tests of:

107. Every feature / function of the program.

108. Every part of the manual.

109. Every error message.

110. Every on-line help topic.

111. Every graph or report provided by the program.

112. Localizability / localization tests:

113. Every string. Check the program's ability to display and use this string if it is modified by changing the length, using high or low ASCII characters, different capitalization rules, etc.

114. Compatibility with text handling algorithms under other languages (sorting, spell checking, hyphenating, etc.).

115. Every date, number and measure in the program.

116. Hardware and drivers, operating system versions, and memory-resident programs that are popular in other countries.

117. Every input format, import format, output format, or export format that would be commonly used in programs that are popular in other countries.

118. Cross-cultural appraisal of the meaning and propriety of every string and graphic shipped with the program.
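
Most of the checklist above is technology-neutral, but some items translate directly into automated checks. As a rough illustration of items 90 and 91 (function equivalence and zero handling), the Python sketch below compares a hypothetical application function, app_sqrt, against a known good reference (math.sqrt) over inputs that include zero and near-zero values. The function names and input values are invented for illustration and are not part of any particular product.

    import math

    def app_sqrt(x):
        # Stand-in for the application's own implementation of a mathematical
        # function; in a real project it would be imported from the product code.
        return x ** 0.5

    def check_equivalence(candidate, reference, values, tolerance=1e-9):
        # Item 90: compare the function under test against a known good
        # implementation over a set of representative input values.
        failures = []
        for v in values:
            got, expected = candidate(v), reference(v)
            if not math.isclose(got, expected, rel_tol=tolerance, abs_tol=tolerance):
                failures.append((v, got, expected))
        return failures

    # Item 91: include zero and near-zero inputs to expose rounding and
    # divide-by-zero problems, alongside a few ordinary values.
    sample_inputs = [0.0, 1e-308, 1e-9, 0.5, 1.0, 2.0, 1e9]

    for value, got, expected in check_equivalence(app_sqrt, math.sqrt, sample_inputs):
        print("mismatch for", value, ":", got, "vs", expected)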

12. WHAT IF THERE ISN'T ENOUGH TIME FOR THOROUGH TESTING?

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include the following; a small prioritization sketch appears after the list:

119. Which functionality is most important to the project's intended purpose?

120. Which functionality is most visible to the user?

121. Which functionality has the largest safety impact?

122. Which functionality has the largest financial impact on users?

123. Which aspects of the application are most important to the customer?

124. Which aspects of the application can be tested early in the development cycle?

125. Which parts of the code are most complex, and thus most subject to errors?

126. Which parts of the application were developed in rush or panic mode?

127. Which aspects of similar/related previous projects caused problems?

128. Which aspects of similar/related previous projects had large maintenance expenses?

129. Which parts of the requirements and design are unclear or poorly thought out?

130. What do the developers think are the highest-risk aspects of the application?

131. What kinds of problems would cause the worst publicity?

132. What kinds of problems would cause the most customer service complaints?

133. What kinds of tests could easily cover multiple functionalities?

134. Which tests will have the best high-risk-coverage to time-required ratio?
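
These questions are judgment calls, but their answers can feed a very simple scoring model that ranks areas of the application when testing time is short. The sketch below is one possible approach, not a formal method; the feature areas and the impact / likelihood ratings are hypothetical examples.

    # Minimal risk-based prioritization sketch: rate each area for impact
    # (how bad a failure would be) and likelihood (how probable a failure is),
    # then test the highest-scoring areas first. All values are illustrative.
    areas = {
        "login": {"impact": 5, "likelihood": 3},
        "report export": {"impact": 3, "likelihood": 4},
        "help screens": {"impact": 1, "likelihood": 2},
    }

    def risk_score(ratings):
        return ratings["impact"] * ratings["likelihood"]

    for name, ratings in sorted(areas.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
        print(name, "- risk score", risk_score(ratings))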

13. DEFECT REPORTING

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. The following are items to consider in the tracking process; a sketch of one possible report record appears after this section:

135. Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

136. Bug identifier (number, ID, etc.)

137. Current bug status (e.g., 'Open', 'Closed', etc.)

138. The application name and version

139. The function, module, feature, object, screen, etc. where the bug occurred

140. Environment specifics, system, platform, relevant hardware specifics

141. Test case name/number/identifier

142. File excerpts / error messages / log file excerpts / screen shots / test tool logs that would be helpful in finding the cause of the problem

143. Severity Level

144. Tester name

145. Bug reporting date

146. Name of developer/group/organization the problem is assigned to

147. Description of fix

148. Date of fix

149. Application version that contains the fix

150. Verification Details

A reporting or tracking process should enable notification of appropriate personnel at various stages.
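
As a sketch of how these items might be captured by a tracking tool, the Python record below mirrors the list above. The field names and types are illustrative only; any real bug-tracking system will define its own schema.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class BugReport:
        # Identification and status
        bug_id: str
        status: str                      # e.g. 'Open', 'Closed'
        application: str
        version: str
        # Where and how the problem was found
        module: str                      # function, feature, screen, etc.
        environment: str                 # system, platform, hardware specifics
        test_case_id: str
        description: str                 # enough detail to reproduce the bug
        attachments: list = field(default_factory=list)   # logs, screen shots
        severity: str = "Medium"
        reported_by: str = ""
        reported_on: str = ""
        # Resolution details, filled in as the bug is assigned and fixed
        assigned_to: Optional[str] = None
        fix_description: Optional[str] = None
        fix_date: Optional[str] = None
        fixed_in_version: Optional[str] = None
        verification_notes: Optional[str] = None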

14. TYPES OF AUTOMATED TOOLS

151. code analyzers - monitor code complexity, adherence to standards, etc.

152. coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.

153. memory analyzers - such as bounds-checkers and leak detectors.

154. load/performance test tools - for testing client/server and web applications under various load levels.

155. web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure (a minimal link-check sketch follows this list).

156. other tools - for test case management, documentation management, bug reporting, and configuration management.
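
As one example of what a simple, home-grown web test tool can do, the sketch below fetches a page, collects its links, and reports any that fail to load. It uses only the Python standard library; the starting URL is a placeholder, and a production checker would also need HTML validation, politeness delays, and better error handling.

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        # Collect href targets from anchor tags on a page.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        # Fetch the page, extract its links, and report any that fail to load.
        html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            target = urljoin(page_url, link)
            try:
                response = urllib.request.urlopen(target)
                print(target, "->", response.status)
            except Exception as err:
                print(target, "-> FAILED:", err)

    check_links("http://example.com/")   # placeholder starting page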

15. TOP TIPS FOR TESTERS

Tip 1: Play nicely. Be part of the Web team. This keeps you in the loop, and gives you a voice in debates of functionality versus design. Remember, a tester is really a user advocate. Protect that position by making friends.

Tip 2: A good spec is a tester's best friend. Get design and functional specifications -- even if you have to whine or threaten. Specs help you determine what is a "real" bug and what is "by design."

Tip 3: KISS (Keep It Simple, Stupid). If you have a form that's supposed to accept only integers 1 to 10, you can start your tests by entering NULL, 0, 1, 10, 11, 1000000000, -1, and 'a', along with a few of the acceptable values. You needn't enter every possible keystroke to find a code hole.
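
A rough sketch of Tip 3 in Python, assuming a hypothetical validate_quantity routine that is supposed to accept only the integers 1 to 10; the point is the small, deliberate set of boundary and invalid inputs rather than exhaustive keystrokes.

    def validate_quantity(raw):
        # Stand-in for the form's validation routine: accept only integers 1-10.
        try:
            value = int(raw)
        except (TypeError, ValueError):
            return False
        return 1 <= value <= 10

    # Boundary and invalid values from Tip 3, with the expected outcome of each.
    cases = {
        None: False, "": False, "0": False, "1": True, "5": True,
        "10": True, "11": False, "1000000000": False, "-1": False, "a": False,
    }

    for raw, expected in cases.items():
        result = validate_quantity(raw)
        print("BUG" if result != expected else "ok", repr(raw), "->", result)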

Tip 4: Expect the unexpected. Before you think the previous tip allows you to cut back your testing, remember the user. If your audience is non-technical, the possible keystrokes, browser settings, and other "mystery" influences are unlimited. Think like a user, and stress the Web site the way a user would.

Tip 5: Use all the automated testing tools you can get your hands on. Many useful tools are on the Web already. With a little searching, you may be able to find the right ones for you.

