
stqa unit 3 notes


    UNIT III

    SOFTWARE QUALITY METRICS

    Software Measurement and Metrics

Measurement Theory

Measurement is defined as the process of assigning symbols, usually numbers, to represent an attribute of the entity of interest, by rule. Characteristics of a good measurement are:

Reliability – The outcome of the measurement process is reproducible. (Similar results are obtained over time and across situations.)

Validity – The measurement process actually measures what it purports to measure.

Sensitivity – The measurement process shows variability in responses when it exists in the stimulus or situation.

With respect to software, measurement is the continuous process of defining, collecting, and analyzing data on the software development process and its products in order to understand and control the process and its products, and to supply meaningful information to improve that process and its products.

Measurement is essential to achieving the basic management objectives of prediction, progress, and process improvement. Software is usually measured to serve the following purposes:

Characterization, i.e., the gathering of information about some characteristic of software processes and products, with the goal of acquiring a better idea of what's going on.

Evaluation, i.e., judging some characteristic of a software process or product, for instance based on historical data in the same development environment or data available from external sources.

Improvement, i.e., using a cause-effect relationship to identify parts of the process or product that can be changed to obtain positive effects on some characteristic of interest, and collecting data after the changes have been made to confirm or disconfirm whether the effect was positive and assess its extent.

Prediction, i.e., identifying a cause-effect relationship among product and process characteristics.

Tracking, i.e., the (possibly constant and regular) acquisition of information on some characteristic of software processes and products over time, to understand if those characteristics are under control in on-going projects.

Validation, i.e., validating the identified best practices experimentally.

Software Measurement Process

The software measurement process must be an objective, orderly method for quantifying, assessing, adjusting, and ultimately improving the development process. Data are collected based on known, or anticipated, development issues, concerns, and questions. They are then analyzed with respect to the software development process and products.

The measurement process is used to assess quality, progress, and performance throughout all life cycle phases. The key components of an effective measurement process are:

Clearly defined software development issues and the measures (data elements) needed to provide insight into those issues;

Processing of collected data into graphical or tabular reports (indicators) to aid in issue analysis;

Analysis of indicators to provide insight into development issues; and

Use of analysis results to implement process improvements and identify new issues and problems.


The figure shows the software measurement process. The activities required to design a measurement process using this architecture are:

Developing a measurement process to be made available as part of the organization's standard software process;

Planning the process on projects and documenting procedures by tailoring and adapting the process asset;

Implementing the process on projects by executing the plans and procedures; and

Improving the process by evolving plans and procedures as the projects mature and their measurement needs change.

Software Measurement Scales

Five measurement levels are generally used in Measurement Theory, and they are applicable to software as well. Usually, the narrower the set of acceptable transformations, the more informative the scale. Measurement scales are hierarchical, and each scale level possesses all the properties of the lower scales, as shown below. We can convert higher-level scales into lower-level ones (i.e., ratio to interval, ordinal, or nominal; interval to ordinal or nominal; ordinal to nominal). More powerful analyses can be applied to data measured on more informative scales.

Nominal Scale

This is the most primitive form of measurement. A scale is a nominal one if it divides the set of entities into categories, with no particular ordering among them.
– E.g., IEEE 802.1, 802.2, 802.3 … 802.11.

Ordinal Scale

A scale is an ordinal one if it divides the set of entities into categories that are ordered according to some order. There is no quantitative comparison.
– E.g., programmer skill (low, medium, high).

Interval Scale

This scale captures information about the size of the intervals that separate classes. In an interval scale, the exact difference between two values is the basis for meaningful statements.
– E.g., programmer capability: between the 60th and 80th percentile of the population.

     

Ratio Scale


A ratio scale is a measurement mapping that preserves ordering, the size of intervals between entities, and ratios between entities. In a ratio scale, contrary to an interval scale, there is an absolute and non-arbitrary zero point.
– E.g., project A took twice as long as project B.

Absolute Scale

The absolute scale is the most informative level in the measurement scale hierarchy. Measurement references offer a point of reference for software measurers to verify their measurement results and their ability to measure the same reference material, and allow measurers to use the related reference concept, and thus to speak at the same level.
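The practical consequence of the hierarchy is that each scale level licenses more operations than the one below it. The sketch below is illustrative only (it is not part of the original notes, and the operation names are ours):

```python
# Illustrative sketch (not from the notes): which statistical operations
# are meaningful at each scale level. Each level permits everything the
# levels below it permit.
SCALE_OPS = {
    "nominal":  {"equality"},                                      # categories only
    "ordinal":  {"equality", "ordering"},                          # + rank, median
    "interval": {"equality", "ordering", "difference"},            # + mean, std dev
    "ratio":    {"equality", "ordering", "difference", "ratio"},   # + true zero point
    "absolute": {"equality", "ordering", "difference", "ratio", "count"},
}

def is_meaningful(scale: str, operation: str) -> bool:
    """True if `operation` yields meaningful statements on data of this scale."""
    return operation in SCALE_OPS[scale]

print(is_meaningful("ordinal", "ordering"))  # True:  skill can be low < medium < high
print(is_meaningful("ordinal", "ratio"))     # False: "twice as skilled" is meaningless
print(is_meaningful("ratio", "ratio"))       # True:  "project A took twice as long as B"
```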

However, measurement of software is challenging in nature because of the following reasons:

Software is an intangible product. It is an atypical product when compared to other industrial products, in that it varies greatly in terms of size, complexity, design techniques, test methods, applicability, etc.

There is little consensus on specific measures of software attributes, as illustrated by the scarcity of international standard measures for software attributes, such as software complexity and quality.

Software development is so complex that all models are weak approximations, as they are hard to validate.

Measurements are not common to all projects and organizations. Measures that work for one project may not be applicable to another one.

    Software Metrics

[Figure: classification of software metrics – project, process, and product metrics; productivity; attributes of effective software metrics.]

Software Quality Metrics

Measurement enables the organization to improve the software process; assists in planning, tracking, and controlling the software project; and assesses the quality of the software thus produced. It is the measures of such specific attributes of the process, project, and product that are used to compute the software metrics. Metrics are analyzed, and they provide a dashboard to the management on the overall health of the process, project, and product. Generally, the validation of the metrics is a continuous process spanning multiple projects. The kind of metrics employed generally account for whether the quality requirements have been achieved or are likely to be achieved during the software development process. As a quality assurance process, a metric needs to be revalidated every time it is used. Two leading firms, namely IBM and Hewlett-Packard, have placed a great deal of importance on software quality. IBM measures user satisfaction and software acceptability in eight dimensions: capability or functionality, usability, performance, reliability, ability to be installed, maintainability, documentation, and availability. For its software quality metrics, Hewlett-Packard normally follows the five Juran quality parameters, namely functionality, usability, reliability, performance, and serviceability. In general, for most software quality assurance systems, the common software metrics that are checked for improvement are source lines of code, cyclomatic complexity of the code, function point analysis, bugs per line of code, code coverage, number of classes and interfaces, and cohesion and coupling between the modules, etc.

Common software metrics include:

Bugs per line of code
Code coverage
Cohesion
Coupling
Cyclomatic complexity
Function point analysis
Number of classes and interfaces
Number of lines of customer requirements
Order of growth
Source lines of code
Robert Cecil Martin's software package metrics

Software quality metrics focus on the process, project, and product. By analyzing the metrics, the organization can take corrective action to fix those areas in the process, project, or product which are the cause of the software defects.

The de facto definition of software quality consists of two major attributes, based on intrinsic product quality and user acceptability. The software quality metric encapsulates these two attributes, addressing the mean time to failure and the defect density within the software components; finally, it assesses user requirements and acceptability of the software. The intrinsic quality of a software product is generally measured by the number of functional defects in the software, often referred to as bugs, or by testing the software in run-time mode for inherent vulnerability to determine the software crash scenarios. In operational terms, the two metrics are often described by the terms defect density (rate) and mean time to failure (MTTF). Although there are many measures of software quality, correctness, maintainability, integrity, and usability provide useful insight.

Correctness

A program must operate correctly. Correctness is the degree to which the software performs the required functions accurately. One of the most common measures is defects per KLOC, where KLOC means thousands (kilo) of lines of code. KLOC is a way of measuring the size of a computer program by counting the number of lines of source code the program has.

Maintainability

Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct way of measuring this, an indirect measure is used: MTTC (mean time to change). It measures, once an error is found, how much time it takes to analyze the change, design the modification, implement it, and test it.
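A small worked sketch of the two measures just described, defects per KLOC and MTTC; the figures used are hypothetical:

```python
# Illustrative sketch: defects per KLOC and MTTC (mean time to change).
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Correctness measure: defects per thousand lines of source code."""
    return defects / (lines_of_code / 1000)

def mttc(change_times_hours: list[float]) -> float:
    """Maintainability measure: mean time to analyze, design, implement,
    and test a change once an error is found."""
    return sum(change_times_hours) / len(change_times_hours)

# Hypothetical figures: 120 defects in a 40,000-line product, and four
# changes that took 8, 12, 6, and 10 hours end to end.
print(defects_per_kloc(120, 40_000))   # 3.0 defects per KLOC
print(mttc([8, 12, 6, 10]))            # 9.0 hours
```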


  • 8/18/2019 stqa unit 3 notes

    6/22

Integrity

This measures the system's ability to withstand attacks on its security. In order to measure integrity, two additional parameters, threat and security, need to be defined. Threat = probability that an attack of a certain type will happen over a period of time. Security = probability that an attack of a certain type will be repelled. Integrity = Σ [1 − (threat × (1 − security))], summed over the attack types.
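The formula above is reconstructed from the standard textbook form of the integrity measure (the original line is garbled); a sketch under that assumption, with hypothetical threat/security values:

```python
# Illustrative sketch, assuming the common formulation of the integrity
# measure: per attack type, integrity = 1 - threat * (1 - security).
def integrity(threats_and_security: list[tuple[float, float]]) -> float:
    """Each pair is (threat, security) for one attack type, both in [0, 1]:
    threat   = probability an attack of that type occurs in a given period
    security = probability an attack of that type is repelled."""
    return sum(1 - threat * (1 - security) for threat, security in threats_and_security)

# Hypothetical single attack type: threat 0.25, security 0.95.
print(integrity([(0.25, 0.95)]))   # 0.9875 -> high integrity
```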

Defect Removal Efficiency

Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the DRE is low during analysis and design, it means you should spend time improving the way you conduct formal technical reviews.

DRE = E / (E + D)

where E = number of errors found before delivery of the software and D = number of defects found after delivery of the software.

The ideal value of DRE is 1, which means no defects were found after delivery. If you score low on DRE, you need to re-look at your existing process. In essence, DRE is an indicator of the filtering ability of quality control and quality assurance activities. It encourages the team to find as many defects as possible before they are passed to the next activity stage.
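A minimal sketch of the DRE computation (the function name and sample counts are ours):

```python
# Illustrative sketch: Defect Removal Efficiency.
def dre(errors_before_delivery: int, defects_after_delivery: int) -> float:
    """DRE = E / (E + D), where E = errors found before delivery and
    D = defects found after delivery. 1.0 means nothing escaped to the field."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical release: 450 errors caught in reviews/testing, 50 found by customers.
print(dre(450, 50))   # 0.9 -> 90% of defects were filtered out before delivery
```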

Some of the metrics are listed out here:

Test coverage = Number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = Number of test cases per KLOC/FP
Defects per size = Defects detected / system size
Cost to locate defect = Cost of testing / the number of defects located
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production / system size
Quality of testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) × 100
System complaints = Number of third party complaints / number of transactions processed
Effort productivity:
Test planning productivity = No. of test cases designed / Actual effort for design and documentation
Test execution productivity = No. of test cycles executed / Actual effort for testing
Test efficiency = Number of tests required / the number of system errors
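To make the arithmetic concrete, here is a sketch computing a few of the metrics above; all counts and costs are hypothetical:

```python
# Illustrative sketch of a few testing metrics from the list above.
def test_coverage(units_tested_kloc: float, system_size_kloc: float) -> float:
    return units_tested_kloc / system_size_kloc

def cost_to_locate_defect(testing_cost: float, defects_located: int) -> float:
    return testing_cost / defects_located

def quality_of_testing(found_in_testing: int, acceptance_defects: int) -> float:
    """Percentage of all defects that testing (rather than the customer) found."""
    return 100 * found_in_testing / (found_in_testing + acceptance_defects)

# Hypothetical project: 36 of 40 KLOC tested, $24,000 spent on testing,
# 120 defects found in testing, 6 more found by the customer after delivery.
print(test_coverage(36, 40))               # 0.9
print(cost_to_locate_defect(24_000, 120))  # $200 per defect
print(quality_of_testing(120, 6))          # ~95.2%
```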

Measures and metrics:

1. Customer satisfaction index
   (Quality ultimately is measured in terms of customer satisfaction. Surveyed before and after product delivery, and on-going on a periodic basis, using standard questionnaires.)
   Number of system enhancement requests per year
   Number of maintenance fix requests per year
   User friendliness: call volume to customer service hotline
   User friendliness: training time per new user
   Number of product recalls or fix releases (software vendors)
   Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
   Normalized per function point (or per LOC)
   At product delivery (first 3 months or first year of operation)
   Ongoing (per year of operation)
   By level of severity
   By category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
   Turnaround time for defect fixes, by level of severity
   Time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery

4. Product volatility
   Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios
   Pre-delivery defects: annual post-delivery defects
   Defects per function point of the system modifications

6. Defect removal efficiency
   Number of post-release defects (found by clients in field operation), categorized by level of severity
   Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
   All defects include defects found internally plus externally (by customers) in the first year after product delivery

7. Complexity of delivered product
   McCabe's cyclomatic complexity counts across the system
   Halstead's measure
   Card's design complexity measures
   Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
   Breadth of functional coverage
   Percentage of paths, branches or conditions that were actually tested
   Percentage by criticality level: perceived level of risk of paths
   The ratio of the number of detected faults to the number of predicted faults

9. Cost of defects
   Business losses per defect that occurs during operation
   Business interruption costs; costs of work-arounds
   Lost sales and lost goodwill
   Litigation costs resulting from defects
   Annual maintenance cost (per function point)
   Annual operating cost (per function point)
   Measurable damage to your boss's career

10. Costs of quality activities
   Costs of reviews, inspections and preventive measures
   Costs of test planning and preparation
   Costs of test execution, defect tracking, version and change control
   Costs of diagnostics, debugging and fixing
   Costs of tools and tool support
   Costs of test case library maintenance
   Costs of testing and QA education associated with the product
   Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work
   Re-work effort (hours, as a percentage of the original coding hours)
   Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
   Re-worked software components (as a percentage of the total delivered components)

12. Reliability
   Availability (percentage of time a system is available, versus the time the system is needed to be available)
   Mean time between failures (MTBF)
   Mean time to repair (MTTR)
   Reliability ratio (MTBF / MTTR)
   Number of product recalls or fix releases
   Number of production re-runs as a ratio of production runs
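The reliability measures in the last row combine as shown in this sketch (the hour figures are hypothetical):

```python
# Illustrative sketch: availability and reliability ratio from MTBF and MTTR.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of needed time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability_ratio(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / mttr_hours

# Hypothetical system: fails every 500 hours on average, takes 2 hours to repair.
print(availability(500, 2))        # ~0.996 (99.6% available)
print(reliability_ratio(500, 2))   # 250.0
```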

Product Quality Metrics

The de facto definition of software quality consists of two levels: intrinsic product quality and customer satisfaction. The metrics we discuss here cover both levels:

Mean time to failure
Defect density
Customer problems
Customer satisfaction

1. MTTF metric: The MTTF metric is most often used with safety-critical systems such as airline traffic control systems, avionics, and weapons.

1. An error is a human mistake that results in incorrect software.
2. The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.
3. A defect is an anomaly in a product.
4. A failure occurs when a functional unit of a software-related system can no longer perform its required function or cannot perform it within specified limits.

From the data gathering perspective, time-between-failures data is much more expensive. It requires the failure occurrence time for each software failure to be recorded. To be useful, time-between-failures data also requires a high degree of accuracy. This is perhaps the reason the MTTF metric is not widely used by commercial developers.

2. Defect density metric: To compare the defect rates of a software product, various issues are to be addressed. To calculate the defect rate for the new and changed code, the following must be done:

1. LOC count: The entire software product, as well as the new and changed code, must be counted for each release of the product.

2. Defect tracking: Defects must be tracked to the release origin – the portion of the code that contains the defects and the release at which that portion was added, changed, or enhanced. When calculating the defect rate of the entire product, all defects are used; when calculating the defect rate for the new and changed code, only defects of the release origin of the new and changed code are included.

3. Customer's perspective: The defect rate metric measures code quality on a per-unit basis. Good practice in software quality engineering, however, also needs to consider the customer's perspective. From the customer's point of view, the defect rate is not as relevant as the total number of defects he or she is going to encounter. Therefore, it becomes necessary to reduce the number of defects from release to release, irrespective of the change in size.
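A sketch of the two defect-rate calculations just described; the KLOC and defect counts are hypothetical:

```python
# Illustrative sketch: defect rate for the whole product vs. new/changed code.
def defect_rate(defects: int, kloc: float) -> float:
    return defects / kloc   # defects per KLOC

# Hypothetical release: 480 total KLOC, of which 60 KLOC is new or changed;
# 300 defects in all, 210 of which trace to the new/changed code (release origin).
print(defect_rate(300, 480))   # entire product: 0.625 defects/KLOC
print(defect_rate(210, 60))    # new and changed code: 3.5 defects/KLOC
```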

3. Customer problems metric: Another product metric that is used by major developers in the software industry measures the problems customers encounter when using the product.

Problems per user month (PUM) = Total problems that customers reported for a time period / Total number of license-months of the software during the period

where

Number of license-months = Number of installed licenses of the software × Number of months in the calculation period.

To achieve a low PUM, the approaches that can be taken are:
1. Improve the development process and reduce the product defects.
2. Reduce the non-defect-oriented problems by improving all aspects of the products (such as usability and documentation), customer education, and support.
3. Increase the sales (the number of installed licenses) of the product.
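A sketch of the PUM calculation with hypothetical problem and license counts:

```python
# Illustrative sketch: Problems per User-Month (PUM).
def pum(problems_reported: int, installed_licenses: int, months: int) -> float:
    """PUM = problems reported in the period / (licenses x months)."""
    license_months = installed_licenses * months
    return problems_reported / license_months

# Hypothetical: 90 problems reported over 3 months across 1,500 installed licenses.
print(pum(90, 1_500, 3))   # 0.02 problems per user-month
```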


Collecting Software Engineering Data

The challenge of collecting software engineering data is to make sure that the collected data can provide useful information for project, process, and quality management and, at the same time, that the data collection process will not be a burden on development teams. Therefore, it is important to consider carefully what data to collect. The data must be based on well-defined metrics and models, which are used to drive improvements. Therefore, the goals of the data collection should be established and the questions of interest should be defined before any data is collected. Data classification schemes to be used and the level of precision must be carefully specified. The collection form or template and data fields should be pretested. The amount of data to be collected and the number of metrics to be used need not be overwhelming. It is more important that the information extracted from the data be focused, accurate, and useful than that it be plentiful. Without being metrics-driven, over-collection of data could be wasteful. Over-collection of data is quite common when people start to measure software without an a priori specification of purpose, objectives, profound versus trivial issues, and metrics and models.

Gathering software engineering data can be expensive, especially if it is done as part of a research program. For example, the NASA Software Engineering Laboratory spent about 15% of their development costs on gathering and processing data on hundreds of metrics for a number of projects (Shooman, 1983). For large commercial development organizations, the relative cost of data gathering and processing should be much lower because of economy of scale and fewer metrics. However, the cost of data collection will never be insignificant. Nonetheless, data collection and analysis, which yields intelligence about the project and the development process, is vital for business success. Indeed, in many organizations, a tracking and data collection system is often an integral part of the software configuration or the project management system, without which the chance of success of large and complex projects will be reduced.

Basili and Weiss (1984) propose a data collection methodology that could be applicable anywhere. The schema consists of six steps, with considerable feedback and iteration occurring at several places:

1. Establish the goal of the data collection.
2. Develop a list of questions of interest.
3. Establish data categories.
4. Design and test data collection forms.
5. Collect and validate data.
6. Analyze data.

The importance of the validation element of a data collection system or a development tracking system cannot be overemphasized. In their study of NASA's Software Engineering Laboratory projects, Basili and Weiss (1984) found that software data are error-prone and that special validation provisions are generally needed. Validation should be performed concurrently with software development and data collection, based on interviews with those people supplying the data. In cases where data collection is part of the configuration control process and automated tools are available, data validation routines (e.g., consistency checks, range limits, conditional entries, etc.) should be an integral part of the tools. Furthermore, training, clear guidelines and instructions, and an understanding of how the data are used by people who enter or collect the data enhance data accuracy significantly.
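Validation routines of the kind mentioned (consistency checks, range limits, conditional entries) might look like this in a collection tool; the field names and limits are invented for illustration:

```python
# Illustrative sketch: simple data-validation routines for collected defect records.
def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors (empty list = record accepted)."""
    errors = []
    # Range limit: severity must be 1-4.
    if not 1 <= record.get("severity", 0) <= 4:
        errors.append("severity out of range 1-4")
    # Consistency check: a fixed defect must carry a fix date.
    if record.get("status") == "fixed" and not record.get("fix_date"):
        errors.append("fixed defect missing fix_date")
    # Conditional entry: field defects must name the reporting customer.
    if record.get("origin") == "field" and not record.get("customer"):
        errors.append("field defect missing customer")
    return errors

print(validate_record({"severity": 2, "status": "fixed"}))
# ['fixed defect missing fix_date']
```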

The actual collection process can take several basic formats such as reporting forms, interviews, and automatic collection using the computer system. For data collection to be efficient and effective, it should be merged with the configuration management or change control system. This is the case in most large development organizations. For example, at IBM Rochester the change control system covers the entire development process, and online tools are used for plan change control, development items and changes, integration, and change control after integration (defect fixes). The tools capture data pertinent to schedule, resource, and project status, as well as quality indicators. In general, change control is more prevalent after the code is integrated. This is one of the reasons that in many organizations defect data are usually available for the testing phases but not for the design and coding phases.

With regard to defect data, testing defects are generally more reliable than inspection defects. During testing, a bug exists when a test case cannot execute or when the test results deviate from the expected outcome. During inspections, the determination of a defect is based on the judgment of the inspectors. Therefore, it is important to have a clear definition of an inspection defect. The following is an example of such a definition:

Inspection defect: A problem found during the inspection process which, if not fixed, would cause one or more of the following to occur:

A defect condition in a later inspection phase
A defect condition during testing
A field defect
Nonconformance to requirements and specifications
Nonconformance to established standards such as performance, national language translation, and usability

For example, misspelled words are not counted as defects, but would be if they were found on a screen that customers use. Using nested IF-THEN-ELSE structures instead of a SELECT statement would not be counted as a defect unless some standard or performance reason dictated otherwise.

Figure 4.7 is an example of an inspection summary form. The form records the total number of inspection defects and the LOC estimate for each part (module), as well as defect data classified by defect origin and defect type. The following guideline pertains to the defect type classification by development phase:

Interface defect: An interface defect is a defect in the way two separate pieces of logic communicate. These are errors in communication between:

Components
Products
Modules and subroutines of a component
User interface (e.g., messages, panels)

Examples of interface defects per development phase follow.

FIGURE 4.7 An Inspection Summary Form

     

     High-Level Design (I0)

Use of wrong parameter
Inconsistent use of function keys on user interface (e.g., screen)
Incorrect message used
Presentation of information on screen not usable

Low-Level Design (I1)

Missing required parameters (e.g., missing parameter on module)
Wrong parameters (e.g., specified incorrect parameter on module)
Intermodule interfaces: input not there, input in wrong order
Intramodule interfaces: passing values/data to subroutines
Incorrect use of common data structures
Misusing data passed to code

Code (I2)

Passing wrong values for parameters on macros, application program interfaces (APIs), modules
Setting up a common control block/area used by another piece of code incorrectly
Not issuing correct exception to caller of code

     

Logic defect: A logic defect is one that would cause incorrect results in the function to be performed by the logic. High-level categories of this type of defect are as follows:

Function: capability not implemented or implemented incorrectly
Assignment: initialization
Checking: validate data/values before use
Timing: management of shared/real-time resources
Data structures: static and dynamic definition of data

Examples of logic defects per development phase follow.

High-Level Design (I0)

Invalid or incorrect screen flow
High-level flow through component missing or incorrect in the review package
Function missing from macros you are implementing
Using a wrong macro to do a function that will not work (e.g., using the wrong macro to receive a message from a program message queue)
Missing requirements

Missing parameter/field on command/in database structure/on screen you are implementing
Wrong value on keyword (e.g., macro, command)
Wrong keyword (e.g., macro, command)

     

     Low-Level Design (I1)

Logic does not implement I0 design
Missing or excessive function
Values in common structure not set
Propagation of authority and adoption of authority (lack of, or too much)
Lack of code page conversion
Incorrect initialization
Not handling abnormal termination (conditions, cleanup, exit routines)
Lack of normal termination cleanup
Performance: too much processing in loop that could be moved outside of loop

     

Code (I2)

Code does not implement I1 design
Lack of initialization
Variables initialized incorrectly
Missing exception monitors
Exception monitors in wrong order
Exception monitors not active
Exception monitors active at the wrong time
Exception monitors set up wrong
Truncating of double-byte character set data incorrectly (e.g., truncating before shift-in character)
Incorrect code page conversion
Lack of code page conversion
Not handling exceptions/return codes correctly

     

Documentation defect: A documentation defect is a defect in the description of the function (e.g., prologue of a macro) that causes someone to do something wrong based on this information. For example, if a macro prologue contained an incorrect description of a parameter that caused the user of this macro to use the parameter incorrectly, this would be a documentation defect against the macro.

Examples of documentation defects per development phase follow.

High-Level Design (I0)

Incorrect information in prologue (e.g., macro)
Misspelling on user interface (e.g., screen)
Wrong wording (e.g., messages, command prompt text)
Using restricted words on user interface
Wording in messages, definition of command parameters is technically incorrect

Low-Level Design (I1)

Wrong information in prologue (e.g., macros, program, etc.)
Missing definition of inputs and outputs of module, subroutines, etc.
Insufficient documentation of logic (comments tell what but not why)

Code (I2)

Information in prologue not correct or missing
Wrong wording in messages
Second-level text of message technically incorrect
Insufficient documentation of logic (comments tell what but not why)
Incorrect documentation of logic
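An inspection summary of the kind shown in Figure 4.7 can be tallied from individual defect records. A sketch, using the phase (I0/I1/I2) and defect type names from the classification above; the records themselves are invented:

```python
# Illustrative sketch: tallying inspection defects by phase and type,
# the way an inspection summary form would.
from collections import Counter

# Each record: (inspection phase, defect type), per the classification above.
defects = [
    ("I0", "interface"), ("I0", "logic"), ("I1", "logic"),
    ("I1", "documentation"), ("I2", "logic"), ("I2", "interface"),
]

summary = Counter(defects)
for (phase, dtype), count in sorted(summary.items()):
    print(f"{phase}  {dtype:<13} {count}")
# I0  interface     1
# I0  logic         1
# ... one line per (phase, type) pair
```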


UNIT IV

    SOFTWARE QUALITY ASSURANCE

Test Maturity Model

The Test Maturity Model (TMM) is based on the Capability Maturity Model (CMM), and it was first developed by the Illinois Institute of Technology. It is a detailed model for test process improvement. It can be complemented with any process improvement model or can be used as a stand-alone model.

TMM has two major components:

1. A set of 5 levels that define testing capability
2. An assessment model

Different Levels of the Maturity Model

The five levels of the TMM help the organization to determine the maturity of its process and to identify the next improvement steps that are essential to achieving a higher level of test maturity.

Level 1: Initial. Goal: software should run successfully.
• At this level, no process areas are identified
• The objective of testing is to ensure that the software works fine
• This level lacks resources, tools, and trained staff
• No quality assurance checks before software delivery

Level 2: Defined. Goal: develop testing and debugging goals and policies.
• This level distinguishes testing from debugging; they are considered distinct activities
• The testing phase comes after coding
• The primary goal of testing is to show that the software meets its specification
• Basic testing methods and techniques are in place

Level 3: Integrated. Goal: integrate testing into the software life cycle.
• Testing gets integrated into the entire life cycle
• Test objectives are defined based on requirements
• A test organization exists
• Testing is recognized as a professional activity

Level 4: Management and Measurement. Goal: establish a test measurement program.
• Testing is a measured and quantified process
• Reviews at all development phases are recognized as tests
• For reuse and regression testing, test cases are gathered and recorded in a test database
• Defects are logged and given severity levels

Level 5: Optimized. Goal: test process optimization.
• Testing is managed and defined
• Testing effectiveness and costs can be monitored
• Testing can be fine-tuned and continuously improved
• Quality control and defect prevention are practiced
• Process reuse is practiced
• Test-related metrics also have tool support
• Tools provide support for test case design and defect collection

Difference between CMM and TMM

• CMM, or the Capability Maturity Model, is for judging the maturity of the software processes of an organization.
• TMM, or the Test Maturity Model, describes the process of testing and is related to monitoring the quality of the software testing process.

