  • Quality Assurance Consulting

    Software testing is easy...and other myths

  • About us

    In Ireland, we have been providing quality assurance and testing services to public and private sector clients for over 16 years and have built up a highly experienced quality assurance team and centre of excellence.

    Our consulting services cover the whole range of quality assurance and testing, and our key focus is on providing our clients QA managers and development teams with the support and expertise they require to deliver the highest quality software products, while reducing risk and keeping their test phases on time and within budget.

    Our services are grouped under three key headings: Test Strategy and Advisory, Test Execution and Quality Management, and Test Enabling Services.

    Test Strategy and Advisory
    - Test Advisory
    - Test Strategy Development
    - Total Cost of Test Review

    Test Execution and Quality Management
    - Fully Managed End to End Test Cycles
    - Test Automation
    - Test Programme Review
    - Test Quality Assurance
    - Functional Testing (SIT/UAT/End to End)
    - Security Testing and Penetration Testing
    - Non Functional: Performance & Load Testing

    Test Enabling Services
    - Experienced Resource Secondment
    - Test Outsourcing
    - Offshore & Nearshore

    Deloitte has helped hundreds of companies around the world implement a more rigorous and comprehensive approach to software quality assurance.

  • Quality Assurance, including testing, is a critical component of the software development process. Yet the complexity and effort required to ensure quality are often underestimated.

    For those not directly involved in QA, the activities appear straightforward. From their perspective, the hard part is design and development; as a result, QA is often not given the time or resources to complete the work properly.

    This booklet examines the top 10 myths of software quality assurance, and offers practical insights to help organisations get more return on their investments from quality assurance. QA professionals will recognise many of these myths from their own personal experience.

    The top 10 myths of software testing

    1. Quality assurance = testing

    2. The goal of software testing is 100 percent defect-free code

    3. Testing is easy. Anyone can test

    4. Automation eliminates the need for manual testing

    5. Performance testing can only be performed during the last stages of development

    6. Overlapping test phases saves time

    7. Security is a separate activity to testing and development

    8. Bigger is better - more scripts mean better testing

    9. Analysts document. Developers code. Testers test

    10. Offshoring the QA function is an easy way to reduce the cost of testing

    Dispelling the myths around software Quality Assurance (QA), with practical suggestions on how to improve your overall QA process.

    Myth busting

    ...help organisations get more return on their investments from quality assurance.

  • Reality: Testing is just one component of software quality assurance.

    When developing new software applications, many organisations treat quality assurance and testing as one and the same. In reality, effective QA spans the entire software lifecycle, starting with requirements definitions and continuing long after a system or application goes live. Final stage software testing is just one key part.

    The sooner problems are detected, the less time and money it takes to fix them. Industry studies have shown that the cost of fixing software defects increases exponentially as you move through the development lifecycle. Some estimates suggest that a defect introduced in the requirements phase is 50 to 200 times more costly to fix if not discovered until the testing phase. That's why quality assurance must be an integral part of the software development process - starting from day one.

    Deloitte software quality assurance approach
    (Development phases: Requirements and analysis; Design and construction; Testing; Production)

    Objectives
    - Identify variances in requirements and design documentation
    - Ensure requirements and design are well understood by all parties
    - Ensure design meets the stated requirements
    - Ensure developed code adheres to the design
    - Monitor and enforce adherence to project standards and best practices
    - Identify unit-level and, where applicable, functional defects as soon as possible
    - Ensure that the product delivered meets the requirements
    - Identify remaining product variances
    - Capture post-project lessons learned for incorporation into subsequent releases

    Activities
    - Assess, align and formalise inspections, walkthroughs and sign-off processes
    - Establish coding standards and gating criteria
    - Review and assess code inspection and peer review processes
    - Execute unit testing
    - Recommend improvements to the build process
    - Execute manual testing (by phase)
    - Execute automated regression testing (if applicable)
    - Coordinate and facilitate test management activities
    - Schedule and prioritise defect fixes
    - Conduct root cause analysis and remediation
    - Conduct post-deployment review
    - Establish and align production variance reporting process
    - Transition knowledge to production support teams

    Checklist
    - Do people understand the difference between QA and testing, or do they use the terms interchangeably?
    - Does your development methodology have specific quality assurance activities during the early phases, or does it largely focus on testing?
    - Do you have a dedicated quality assurance stream that runs in parallel with your software development lifecycle?
    - Are objective quality statistics captured throughout the development lifecycle, or is quality only measured during the testing phase?

    Quality assurance = testing.

    Myth 1

  • Reality: Complex applications will never be entirely free of defects.

    Instead of seeking perfection, companies should strive for the level of quality that is right for the business. That means balancing the trade-offs between time, cost, quality and risk.

    In general, our experience has shown that optimal software testing will catch about 90 percent of the problems, including the most serious ones. The remaining 10 percent will either be found after the system goes live or are so minor they may never show up.

    Checklist
    - Is 100% defect-free the expectation from the user community?
    - Does testing require a reasonable time and effort, or does it always seem to take too long?
    - Are the business users represented in decisions on which defects to resolve, and which ones can be released into production?

    Myth 2

    The goal of software testing is 100% defect-free code.

  • Reality: QA requires dedicated resources with a unique understanding of application development, software testing, and business requirements.

    QA testers require a deep knowledge of testing methods, but they must also have a thorough understanding of business requirements and the software development process. This broad perspective enables the QA group to design tests that make sense, and also helps them serve as a liaison between IT and the business units.

    It takes a special kind of person to succeed at quality assurance, yet many organisations do not provide sufficient support for their QA staff. The result: low morale and high turnover. Quality assurance should be treated as a legitimate career choice critical to application deployment, not just a stepping stone to something else.

    Moreover, QA is not just the responsibility of technology-focused staff. Business users need to get more involved in the QA process, investing time and business knowledge to improve the organisation's overall QA procedures.

    QA staff provide skills and expertise that complement those of software developers and business analysts. Making them an integral and valued part of the software delivery organisation maximises their effectiveness, and can significantly improve the overall quality and efficiency of software development.

    Software certification programmes

    Benefits of software certification:
    - Common code of knowledge
    - Code of ethics
    - Recognition by IT management for professional achievement
    - Professional competencies maintained through recertification
    - Competency resources to IT staff

    These benefits create value for the software industry, the individual, the employer and professional colleagues.

    Certifications span professional, advanced and master levels, for example:
    - ISTQB Foundation Certificate in Software Testing
    - ISTQB Intermediate Certificate in Software Testing
    - ISTQB Certified Tester Advanced Level (Systems Testing)
    - ISTQB Practitioner in Test Analysis/Test Management
    - Managing User Acceptance Testing

    Checklist
    - Who is really doing your testing: IT or the business?
    - Are your testers primarily outside contractors, or full-time employees?
    - Is QA recognised as a career option, with defined training and certification paths?
    - Do testers understand the business requirements sufficiently to prioritise and categorise the defects?
    - Do testers have sufficient technical knowledge of the application?
    - Are you retaining the best talent?

    Myth 3

    Testing is easy. Anyone can test.

  • Reality: Manual testing is always required.

    Automated software testing accelerates the overall testing effort, but it doesn't eliminate the need for manual testing. A certain amount of manual testing will always be required to ensure that new and modified features are functioning properly.

    Effective testing is best achieved through a combination of manual and automated tests. Automation makes sense for validation of existing system functionality where few defects are expected and speed is a top priority. But new and complex systems generally require manual testing by a knowledgeable tester.

    The business case for automated testing varies from one project to the next, and isn't always clear. Automation requires a significant upfront investment in tools and script development, and usually only pays back after 4 to 6 iterations of testing. For systems with limited testing needs, or involving only minor changes, this initial investment may not be justified.

    Regression automation effort analysis (effort days vs. number of test cycles):
    - Manual: each test cycle takes 26 effort days for 500 test cases
    - Automated: a one-time fixed cost for initial automation; after that, each test cycle requires minimal additional effort
    - The payback point is reached at 6 test runs
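    The payback arithmetic behind the chart can be sketched in a few lines of Python. The figures are illustrative assumptions chosen to be consistent with the chart: 26 effort days per manual cycle of 500 cases, an assumed one-off automation cost of 150 effort days, and roughly one effort day per automated run, which gives payback at the sixth cycle.

```python
def payback_cycle(manual_per_cycle, automation_fixed, automated_per_cycle):
    """Return the first test cycle at which cumulative automated effort
    is no longer greater than cumulative manual effort."""
    n = 1
    while automation_fixed + n * automated_per_cycle > n * manual_per_cycle:
        n += 1
    return n

# Illustrative figures only: 26 effort days per manual cycle of 500 cases,
# an assumed 150-day one-off automation cost, ~1 effort day per automated run.
print(payback_cycle(26, 150, 1))  # -> 6
```

    Running the same function with your own project's figures is a quick sanity check on whether automation will pay back within the number of regression cycles you actually expect to run.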

    Checklist
    - Do you use automated testing in the right situations, such as:
      - Applications driven by a graphical user interface?
      - Regression testing of existing functions?
      - Tests that are highly repetitive or data-driven, with predictable results?
    - Do you use manual testing in the right situations, such as:
      - Testing new or extensively modified functions?
      - Functions that are highly complex and would require a massive effort to develop automated tests?
      - Results that are difficult to predict?

    Myth 4

    Automation eliminates the need for manual testing

  • Reality: Performance problems can and should be identified and fixed throughout the development lifecycle.

    Many people think the sole purpose of testing is to find defects that could crash the system or produce incorrect results. But performance problems often have an even greater impact on the organisation: frustrating users, eroding productivity, and in some cases bringing the entire operation to a grinding halt.

    To avoid costly delays and rework, start looking for performance issues early in the development cycle. Bottleneck analysis, code profiling and performance modelling can help identify and resolve defects as early as the design phase, preventing performance issues from becoming performance problems.
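    Code profiling in particular needs no production-like environment, which is why it can start so early. As a minimal sketch, Python's built-in cProfile and pstats modules can surface a hotspot long before load testing; the function below is a hypothetical example of the kind of inefficiency profiling exposes.

```python
import cProfile
import pstats

def slow_lookup(items, targets):
    """Hypothetical hotspot: O(n*m) membership test against a plain list."""
    return [t for t in targets if t in items]

# Profile a representative call
profiler = cProfile.Profile()
profiler.enable()
result = slow_lookup(list(range(5000)), list(range(0, 5000, 7)))
profiler.disable()

# Report the most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
print(len(result))  # -> 715
```

    The profile immediately shows where the time goes; the fix (here, using a set for membership tests) can then be made in the design and construction phase rather than after a failed load test.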

    Software development lifecycle and main performance management activities and outcomes

    Most organisations test performance by trying to replicate the production environment; however, this kind of test environment is expensive to procure and maintain. Moreover, demand for such an environment typically exceeds supply particularly near the end of a project creating a major bottleneck in the schedule.

    (Development phases: Requirements and analysis; Design and construction; Testing; Production)

    Performance management outcomes by phase: performance acceptance criteria; modelling and engineering; code optimisation; performance testing; monitoring and capacity planning.

    Activities
    - Establish performance acceptance criteria
    - Establish performance risk management plan
    - Establish performance test scenarios and models
    - Establish architectural guidelines for performance
    - Gather past workload distribution statistics
    - Gather the client's architectural standards, regional impacts and security model
    - Identify architectural constraints
    - Capture architectural views
    - Define key metrics to monitor
    - For each sub-system, build a prototype, instrument it for performance metrics and evaluate the results
    - Instrument critical code paths with performance monitors
    - Verify that unit testing and code profiling activities have taken place
    - Identify and optimise inefficient database queries
    - Identify and optimise inefficient code
    - Establish baseline and benchmark for tests executed
    - Identify bottleneck suspects
    - Tune application, database, and middleware
    - Monitor application and system performance continually
    - Monitor user activity and system availability
    - Gather data for capacity management

    Checklist
    - Are you making a good first impression with your users?
    - Are performance requirements objectively defined, or are they inferred based on current system performance?
    - Are performance defects identified early, or do they often cause last-minute surprises and delays?
    - Is performance testing only performed in a full replica of the production environment?

    Myth 5

    Performance testing can only be performed during the last stages of development.

  • Reality: Shortcuts in testing often increase the long-term costs of a system implementation.

    When schedules start to slip, testing is usually one of the first activities to get squeezed. But more often than not, shortcuts in testing end up costing more time and money than they save.

    Organisations should avoid sacrificing testing plans and timelines to compensate for development delays, budget overruns, and accelerated release deadlines. Before making the decision to eliminate or overlap test phases, carefully consider the potential impact on quality and risk.

    - Releasing a system that has not been fully validated can cause major problems and endanger the operation
    - Users may lose confidence in the finished product, creating a significant barrier to change
    - Overlapping test phases can lead to duplication of effort and inefficient use of resources

    These risks may jeopardise the project and increase the overall cost and timeline. Our experience suggests that effective testing should account for 20 to 25 percent of the total development effort. Anything less and you may be exposing yourself to inordinate risk.

    If the decision is made to eliminate or overlap certain testing activities, be sure there are clear requirements for moving from one phase to the next. When scaling back on testing, this rigorous approach is more important than ever.

    "The problem with quick and dirty projects... is that the dirty remains long after the quick is forgotten."
    Steve McConnell, Software Project Survival Guide

    Checklist
    - Are you allocating enough time for testing, or does testing often seem rushed and out of control?
    - Are you doing enough testing to mitigate risk, or is testing often sacrificed due to time and budget pressure?
    - Are 90 percent of your software defects caught prior to going live, or do many not show up until the system is in production?

    Myth 6

    Overlapping test phases saves time.

  • Reality: Security needs to be an integrated process throughout the development and testing lifecycles.

    In the past, security was often treated as an afterthought of the development and testing process. As a result, organisations faced much greater costs to address issues identified so late in the process.

    Integrating security activities throughout the systems development process enables timely, risk-based identification and remediation of security vulnerabilities across the lifecycle. In our experience, the critical security assessment activities prior to going live, such as penetration testing, then act as a final assurance step rather than the first time security vulnerabilities are identified.

    Key security activities that should typically be incorporated into the lifecycle include security requirements definition and analysis, security architecture review, secure code review, and application penetration testing. Integrating these activities into an organisation's standard software development lifecycle (SDLC) processes enables the organisation to understand the application's risk posture while also identifying and mitigating risks.
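    As a minimal sketch of what secure code review catches, consider the classic SQL injection pattern, shown here with Python's built-in sqlite3 module. The table, data and function names are hypothetical; the point is the difference between string interpolation and a parameterised query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Flagged in secure code review: string interpolation lets user
    # input become part of the SQL statement itself
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised query: the driver treats the input strictly as data
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A malicious input the unsafe version executes as SQL
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -> [('admin',)]
print(find_user_safe(payload))    # returns nothing   -> []
```

    Finding this in a code review during the coding phase costs a few minutes; finding it in a pre-go-live penetration test, or in production, costs far more.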

    The result of adopting such an integrated approach is that vulnerability management costs are reduced, along with the likelihood of successful attacks. Furthermore, application quality/productivity and regulatory compliance efforts are improved.

    When security is built into the SDLC, security activities map onto each phase of the lifecycle:

    - Requirements Definition: security requirements
    - Requirements Analysis: security requirements
    - System Design: security requirements
    - Unit Design: detailed access & data protection requirements
    - Coding/Debugging: secure code review
    - Unit Testing: unit security review
    - System & Integration Testing: security architecture review
    - UAT: security requirements reviewed
    - Move to production: pre and post application testing

    (In the V-model layout, the Analysis phase drives the User Acceptance Test Specifications, the Design phase drives the System Integration Test Specifications, and the Development phase drives the Unit Test Specifications.)

    When security is built into the SDLC, the results will be lowered cost of operations, increased resource productivity, improved application quality and increased customer satisfaction.

    Myth 7

    Security is a separate activity to testing and development, comprising a security assessment before going live.

    Checklist
    - Do you have a clear view of the security of your systems before the penetration testing phase?
    - Do you minimise the cost of addressing security vulnerabilities by identifying them as early in the lifecycle as possible?
    - Do you have a secure development lifecycle with embedded security activities?
    - Do you conduct key security activities such as:
      - Threat modelling or risk assessment?
      - Security requirements definition and analysis?
      - Security architecture review?
      - Secure code review?
      - Penetration tests and vulnerability assessments?

  • Reality: Less is more. Carefully allocating testing effort according to business and technology risk can maximise quality without wasted effort.

    100% test coverage is a target that is often set, but rarely, if ever, achieved. Many projects start off with this intention, but as the timeline becomes compressed and quality issues are identified, testing coverage is reduced to accommodate the new schedule. Functionality that is perceived as less important is reduced or dropped off the list entirely. This on-the-fly coverage adjustment wastes test preparation effort and, in its haste, often lets quality issues slip through QA.

    A more systematic approach, carefully allocating testing effort up-front based on technology risk and business criticality, generally delivers the same or better results for less time and money. Risk-based testing is a technique that prioritises each business and system requirement and helps determine the appropriate level of testing for each area, based on system complexity and risk. High-risk items, those that are subject to regulatory oversight for example, receive proportionally more testing effort than low-risk items.

    Risk-based testing can also be used to continuously improve the testing process. Once underway, you can use the results of testing to decide if the coverage should be reduced or expanded. If a significant number of defects are slipping into production, more comprehensive testing is probably needed. But if quality is good, you might be able to safely scale back your efforts further.

    Sample risk-based testing priority chart

    High risk (100% coverage):
    - Supports regulatory requirements
    - External or client-facing
    - Supports financial transactions or their reference
    - Functionality cannot be supported by a manual/workaround process
    - Supports external system connectivity (real-time transactional)
    - Subjected to high volume of transactions

    Medium risk (75% coverage):
    - Internal employee-facing
    - Supports external system connectivity (batch)
    - Functionality supports reference data (e.g. product lists)

    Low risk (50% coverage):
    - Functionality is not essential to financial management
    - Functionality could be manually duplicated
    - Supports small number of transactions
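    The allocation in the chart above can be sketched mechanically. The coverage percentages come from the sample chart; the requirement inventory and script counts below are hypothetical.

```python
# Coverage targets taken from the sample risk-based priority chart
COVERAGE = {"high": 1.00, "medium": 0.75, "low": 0.50}

def scripts_to_run(requirements):
    """Given (name, risk, candidate_script_count) tuples, return how many
    scripts to execute per requirement under risk-based coverage."""
    return {name: round(count * COVERAGE[risk])
            for name, risk, count in requirements}

# Hypothetical requirement inventory
reqs = [
    ("payments", "high", 120),       # regulatory, client-facing
    ("product-list", "medium", 40),  # reference data
    ("audit-export", "low", 30),     # manual workaround exists
]
print(scripts_to_run(reqs))
# -> {'payments': 120, 'product-list': 30, 'audit-export': 15}
```

    Because the coverage table is explicit, the trade-off is visible and auditable up-front, rather than being made on the fly when the schedule compresses.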

    Checklist
    - Can you accurately estimate the required number of test scripts based on your requirements, or does the number grow and grow during test planning?
    - Does your process for requesting changes to a project reflect the effort required to update test plans and test scripts?
    - Has a testing strategy been defined to determine the right breadth and depth, or does testing always seem to require too much time and money?

    Myth 8

    Bigger is better - more scripts mean better testing.

  • Reality: Business analysts and developers are central to an effective overall quality assurance approach.

    Software development traditionally has followed a sequential model. Business analysts define and document business requirements, then pass those requirements along to the software developers. Developers write programs to satisfy the requirements, and then throw the code over the wall to the testing group. Testers develop and run tests based on the business requirements, then send defective code back upstream for rework.

    That sequential approach might have been good enough in the past, but in today's agile world, there are much better ways to manage and test software quality.

    Continuous integration makes QA an integral part of the software development process, with involvement from QA staff throughout the development lifecycle. This requires an up-front investment in tools and training, but quickly pays for itself through improved quality and efficiency.

    Quality should be a top priority from day one, starting with scoping, estimating and planning. This enables you to identify potential defects early, while they are still relatively cheap to fix. Continuous integration also brings integrated unit testing and source code management.

    A continuous integration environment typically combines:

    - Common code repository: a safe, central location where developers can retrieve and check in source code
    - Automated build scripts: automation scripts that follow procedures to compile and deploy applications
    - Automated unit tests: comprehensive automated unit tests to validate the codebase
    - Integration scheduler: a scheduler that calls the scripts to continuously produce builds and generate reports
    - A frequent cycle of check-in, build, unit test and reporting
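    The "automated unit tests" element above can be as simple as a standard test module the scheduler runs on every build. A minimal sketch using Python's built-in unittest framework; the function under test is hypothetical.

```python
import unittest

def categorise_defect(severity):
    """Hypothetical helper: map a numeric severity (1-5) to a triage bucket."""
    if not 1 <= severity <= 5:
        raise ValueError("severity must be between 1 and 5")
    return "blocker" if severity >= 4 else "routine"

class CategoriseDefectTest(unittest.TestCase):
    def test_high_severity_is_blocker(self):
        self.assertEqual(categorise_defect(5), "blocker")

    def test_low_severity_is_routine(self):
        self.assertEqual(categorise_defect(2), "routine")

    def test_out_of_range_is_rejected(self):
        with self.assertRaises(ValueError):
            categorise_defect(0)

if __name__ == "__main__":
    # The integration scheduler would invoke this on every check-in/build
    unittest.main(exit=False)
```

    A failing test breaks the build immediately, so a defect is surfaced within minutes of check-in rather than weeks later in a formal test phase.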

    Checklist
    - Are a significant percentage of your defects classified as unknown requirements that result in change requests and rework?
    - Is testing often held up in the early phases due to system stability issues?
    - Are developers required to successfully unit test their code before completion of the development phase?
    - Are tools being used to automate and validate the process for code check-in and build?

    Myth 9

    Analysts document. Developers code. Testers test.

  • Reality: Offshore testing presents many of the same challenges as other offshore development activities and should be evaluated as part of the overall delivery approach.

    As organisations look for ways to reduce the overall cost of system development, more and more are moving key development functions offshore.

    Offshoring can provide significant cost savings, as well as other valuable benefits such as overnight turnaround on testing and defect logging. However, it is ultimately a delivery model, not just a staffing model, and must be evaluated as part of the overall delivery approach.

    Time zone differences between onshore and offshore resources can be a real problem, particularly for business analysts and QA analysts who often require heavy interaction. Offshoring also bucks the trend of co-locating development and testing resources to shorten the cycle for validating and resolving defects.

    The decision about which QA activities to move offshore, and how, should be made at the project level, and depends on what other project activities will be performed offshore.

    One proven approach is to offshore the activities associated with selected system components, while keeping development and testing together.

    Checklist
    - Does your organisation have experience with outsourcing or offshoring other development functions?
    - Is QA/testing currently a standalone unit or function within your organisation?
    - Are your business users or business analysts heavily involved in the testing cycle?
    - Are your requirements documented well enough that detailed testing specifications could be effectively transferred to an offshore group?

    Myth 10

    Offshoring the QA function is an easy way to reduce the cost of testing.


  • Facing reality

    The myths that surround testing and quality assurance make it hard for the QA function to do its job. Even worse, they silently undermine software quality and drive development and support costs through the roof.

    Studies show that one hour invested in quality assurance generally saves 3 to 10 hours in downstream costs. Moreover, defects introduced during the requirements phase, if not found until final testing, cost 50 to 200 times more to correct than detecting and resolving them immediately.

    Testing - by itself - cannot deliver a high level of quality. To be effective, quality assurance must be integrated into the entire software delivery process.

    Conclusion


  • Contacts

    Dublin
    Deloitte & Touche
    Deloitte & Touche House
    Earlsfort Terrace, Dublin 2
    T: +353 1 417 2200 F: +353 1 417 2300

    Cork
    Deloitte & Touche
    No. 6 Lapps Quay, Cork
    T: +353 21 490 7000 F: +353 21 490 7001

    Limerick
    Deloitte & Touche
    Deloitte & Touche House
    Charlotte Quay, Limerick
    T: +353 61 435500 F: +353 61 418310

    www.deloitte.com/ie

    For further information on quality assurance please contact:

    Shane Mohan, Partner
    T: +353 1 4172543 E: [email protected]

    Harry Goddard, Partner
    T: +353 1 4172589 E: [email protected]

    David Cass, Senior Manager
    T: +353 1 4172629 E: [email protected]

    Leonard Mc Auliffe, Senior Manager
    T: +353 1 4175704 E: [email protected]


    Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/ie/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms.

    Deloitte's 1,100 people in Dublin, Cork and Limerick provide audit, tax, consulting, and corporate finance to public and private clients spanning multiple industries. With a globally connected network of member firms in more than 150 countries, Deloitte brings world class capabilities and deep local expertise to help clients succeed wherever they operate. Deloitte's approximately 170,000 professionals are committed to becoming the standard of excellence.

    This publication contains general information only, and none of Deloitte Touche Tohmatsu Limited, Deloitte Global Services Limited, Deloitte Global Services Holdings Limited, the Deloitte Touche Tohmatsu Verein, any of their member firms, or any of the foregoing's affiliates (collectively, the "Deloitte Network") are, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your finances or your business. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No entity in the Deloitte Network shall be responsible for any loss whatsoever sustained by any person who relies on this publication.

    © 2010 Deloitte & Touche. All rights reserved.

