Design Verification and Validation


    Principles of Software Testing

    What is not Software Testing? Testing is not debugging. Debugging has the goal of removing errors: the existence and the approximate location of the error are already known. Debugging is not documented; there is no specification, and there will be no record (log) or report. Debugging is a consequence of testing, but never a substitute for it.

    Testing can never find 100% of the errors a program contains. There will always be a residue of remaining errors which cannot be found, and each kind of test finds a different kind of error.

    Testing has the goal of finding errors, not their causes. The activity of testing therefore does not include any bug fixing or implementation of functionality. The result of testing is a test report. A tester must by no means modify the code he is testing; that has to be done by the developer, based on the test report he receives from the tester.

    What is Software Testing?

    Testing is a formal activity. It involves a strategy and a systematic approach, and the different stages of testing supplement each other. Tests are always specified and recorded.

    Testing is a planned activity. The workflow and the expected results are specified, so the duration of the activities can be estimated, and the point in time at which tests are executed is defined.

    Testing is the formal proof of software quality.

    Overview of Test Methods

    Static tests: The software is not executed but analyzed offline. In this category are code inspections (e.g. Fagan inspections), Lint checks, cross-reference checks, etc.

    Dynamic tests: These require the execution of the software or of parts of the software (using stubs). They can be executed in the target system, an emulator or a simulator. Within dynamic tests, the state of the art distinguishes between structural and functional tests.


    Structural tests: These are the so-called "white-box" tests, because they are performed with knowledge of the source code details. Input interfaces are stimulated with the aim of running through certain predefined branches or paths in the software. The software is stressed with critical values at the boundaries of the input range, or even with illegal input values. The behavior of the output interface is recorded and compared with the expected (predefined) values.

    Functional tests: These are the so-called "black-box" tests. The software is regarded as a unit with unknown content. Inputs are stimulated, and the values at the outputs are recorded and compared with the expected, specified values.
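    To make the structural idea concrete, here is a minimal Python sketch (not from the original text): a hypothetical clamp() function is stimulated with boundary and illegal input values, and the recorded outputs are compared with predefined expected values.

    # Minimal white-box boundary-test sketch (hypothetical clamp() under test).
    def clamp(value: int, low: int = 0, high: int = 100) -> int:
        """Unit under test: limit value to the range [low, high]."""
        if value < low:
            return low
        if value > high:
            return high
        return value

    # Stimulate the input interface with boundary and illegal values,
    # then compare the recorded outputs with the expected (predefined) values.
    cases = [
        (-1, 0),     # just below the lower boundary
        (0, 0),      # the lower boundary itself
        (100, 100),  # the upper boundary itself
        (101, 100),  # just above the upper boundary
    ]
    for given, expected in cases:
        actual = clamp(given)
        assert actual == expected, f"clamp({given}) = {actual}, expected {expected}"
    print("all boundary cases passed")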

    Test by progressive Stages

    The various tests are able to find different kinds of errors. It is therefore not enough to rely on one kind of test and completely neglect the others. For example, white-box tests are able to find coding errors, while detecting the same coding error in a system test is very difficult: the system malfunction that results from the coding error will not necessarily allow conclusions about the location of that error. Tests should therefore be progressive and supplement each other in stages, so that each kind of error is found with the appropriate method.

    Module test: A module is the smallest compilable unit of source code. Often it is too small to allow functional (black-box) tests, but it is the ideal candidate for white-box tests. These should be first of all static tests (e.g. Lint and inspections), followed by dynamic tests that check boundaries, branches and paths. This will usually require the employment of stubs and special test tools.

    Component test: This is the black-box test of a module or group of modules which represents certain functionality. There are no rules about what can be called a component; it is simply what the tester defines as a component, though it should make sense and be a testable unit. Components can be integrated step by step into bigger components and tested as such.

    Integration test: The software is completed step by step and tested with tests covering the collaboration of modules or classes. The integration procedure depends on the kind of system: e.g., one could run the operating system first and gradually add one component after another, checking whether the black-box tests still pass (the test cases will of course increase with every added component). Integration testing is still done in the laboratory. It may use simulators or emulators, and input signals may be stimulated.


    System test: This is a black-box test of the complete software in the target system. The environmental conditions have to be realistic (complete original hardware in the destination environment).

    Which Test finds which Error?

    Possible error: Syntax errors
    Best found by: Compiler, Lint
    Example: Missing semicolons; values defined but not initialized or used; order of evaluation disregarded.

    Possible error: Data errors
    Best found by: Software inspection, module tests
    Example: Overflow of variables in calculations; use of inappropriate data types; values not initialized; values loaded with wrong data or at a wrong point in time; lifetime of pointers.

    Possible error: Algorithm and logical errors
    Best found by: Software inspection, module tests
    Example: Wrong program flow; use of wrong formulas and calculations.

    Possible error: Interface errors
    Best found by: Software inspection, module tests, component tests
    Example: Overlapping ranges; range violations (min. and max. values not observed or limited); unexpected inputs; wrong sequence of input parameters.

    Possible error: Operating system errors, architecture and design errors
    Best found by: Design inspection, integration tests
    Example: Disturbances by OS interruptions or hardware interrupts; timing problems; lifetime and duration problems.

    Possible error: Integration errors
    Best found by: Integration tests, system tests
    Example: Resource problems (runtime, stack, registers, memory, etc.).

    Possible error: System errors
    Best found by: System tests
    Example: Wrong system behaviour; specification errors.

    /======================================/

    Metrics Used In Testing

    In this tutorial you will learn about the metrics used in testing. The product quality measures are: 1. customer satisfaction index, 2. delivered defect quantities, 3. responsiveness (turnaround time) to users, 4. product volatility, 5. defect ratios, 6. defect removal efficiency, 7. complexity of delivered product, 8. test coverage, 9. cost of defects, 10. costs of quality activities, 11. re-work, and 12. reliability, plus metrics for evaluating application system testing.


    The Product Quality Measures:

    1. Customer satisfaction index

    This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:


    Number of system enhancement requests per year

    Number of maintenance fix requests per year

    User friendliness: call volume to customer service hotline

    User friendliness: training time per new user

    Number of product recalls or fix releases (software vendors)

    Number of production re-runs (in-house information systems groups)

    2. Delivered defect quantities

    They are normalized per function point (or per LOC), either at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity and by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

    3. Responsiveness (turnaround time) to users

    Turnaround time for defect fixes, by level of severity

    Time for minor vs. major enhancements; actual vs. planned elapsed time

    4. Product volatility

    Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) vs. enhancement requests (requests by users to enhance or change functionality)

    5. Defect ratios

    Defects found after product delivery per function point.

    Defects found after product delivery per LOC


    Ratio of pre-delivery defects to annual post-delivery defects

    Defects per function point of the system modifications

    6. Defect removal efficiency

    Number of post-release defects (found by clients in field operation), categorized by level of severity

    Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects

    "All defects" includes defects found internally plus those found externally (by customers) in the first year after product delivery

    7. Complexity of delivered product

    McCabe's cyclomatic complexity counts across the system

    Halstead's measures

    Card's design complexity measures

    Predicted defects and maintenance costs, based on complexity measures
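    As a rough illustration of how a cyclomatic complexity count can be produced, the following Python sketch approximates McCabe's metric for a piece of source code by counting decision points in the syntax tree and adding one. The grade() snippet is a made-up example, and real analyzers apply more complete rules; this is only a sketch of the idea.

    import ast

    def cyclomatic_complexity(source: str) -> int:
        """Approximate McCabe complexity: number of decision points + 1."""
        tree = ast.parse(source)
        decisions = 0
        for node in ast.walk(tree):
            if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
                decisions += 1
            elif isinstance(node, ast.BoolOp):
                # 'a and b and c' adds two extra decision points
                decisions += len(node.values) - 1
        return decisions + 1

    code = """
    def grade(score):
        if score >= 90:
            return 'A'
        elif score >= 80:
            return 'B'
        return 'C'
    """
    print(cyclomatic_complexity(code))  # 3: two branch points + 1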

    8. Test coverage

    Breadth of functional coverage

    Percentage of paths, branches or conditions that were actually tested

    Percentage by criticality level: perceived level of risk of paths

    The ratio of the number of detected faults to the number of predicted faults.

    9. Cost of defects

    Business losses per defect that occurs during operation

    Business interruption costs; costs of work-arounds

    Lost sales and lost goodwill


    Number of product recalls or fix releases

    Number of production re-runs as a ratio of production runs

    Metrics for Evaluating Application System Testing:

    Metric = Formula

    Test coverage = number of units (KLOC/FP) tested / total size of the system (LOC represents lines of code, FP function points)

    Number of tests per unit size = number of test cases per KLOC/FP

    Acceptance criteria tested = acceptance criteria tested / total acceptance criteria

    Defects per size = defects detected / system size

    Test cost (in %) = cost of testing / total cost * 100

    Cost to locate defect = cost of testing / number of defects located

    Achieving budget = actual cost of testing / budgeted cost of testing

    Defects detected in testing = defects detected in testing / total system defects

    Defects detected in production = defects detected in production / system size


    Quality of testing = number of defects found during testing / (number of defects found during testing + number of acceptance defects found after delivery) * 100

    Effectiveness of testing to business = loss due to problems / total resources processed by the system

    System complaints = number of third-party complaints / number of transactions processed

    Scale of ten = assessment of testing by giving a rating on a scale of 1 to 10

    Source code analysis = number of source code statements changed / total number of tests

    Effort productivity: test planning productivity = number of test cases designed / actual effort for design and documentation; test execution productivity = number of test cycles executed / actual effort for testing
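    A small Python sketch of how a few of these formulas combine in practice; all the input figures below are invented purely for illustration.

    # A minimal sketch computing a few of the metrics above.
    defects_in_testing = 45          # defects found during testing
    acceptance_defects = 5           # defects found by the customer after delivery
    cost_of_testing = 120_000.0
    total_cost = 800_000.0
    kloc_tested = 38.0
    kloc_total = 40.0

    test_coverage = kloc_tested / kloc_total
    test_cost_pct = cost_of_testing / total_cost * 100
    cost_to_locate_defect = cost_of_testing / defects_in_testing
    quality_of_testing = (defects_in_testing
                          / (defects_in_testing + acceptance_defects) * 100)

    print(f"Test coverage:      {test_coverage:.0%}")           # 95%
    print(f"Test cost:          {test_cost_pct:.1f}% of total") # 15.0%
    print(f"Cost per defect:    {cost_to_locate_defect:,.0f}")  # 2,667
    print(f"Quality of testing: {quality_of_testing:.1f}%")     # 90.0%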

    /==============S/W Design Validation/Verifications=============/

    These are the FDA's (an agency notorious for its strict definitions...) definitions of design verification and design validation:

    Design verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.

    Design validation means establishing by objective evidence that device (product) specifications conform with user needs and intended use(s).

    In other words:


    You verify a design by checking drawings and specs, running simulations, checking that all design requirements have been addressed, that calculations are correct, etc. It's mainly a "paper" exercise, and when you are through with it you should be quite confident that the design is complete and that a product eventually built according to those drawings and specs stands a good chance of conforming to the requirements in the real world.

    You validate a design by trying out actual products (an initial run or batch of products), built as above by real workers, installed and operated in the real environment of use by real operators, etc. It's the proverbial proof of the pudding, where you can catch errors and other problems that escaped all earlier verification efforts... and other bugs that might have crept in later on.

    /=========================/

    After reading many of the responses to the verification/validation problem, I keep wondering why, after all these years, this is still debated. I am also surprised at how unhelpful most of the examples posted on this list appear to be, because they often do not accurately differentiate between the two concepts. I hope the description below will help.

    Verification means examining all of the things you have anticipated the customer will want the product to do, and checking to see if the product does those things the way you have anticipated. Verification is about checking the things you have intentionally designed into the product and making sure the product meets those design requirements. It is about whether you were smart enough to _meet_ your specified design goals.

    Validation means checking to see if the product does what the customer (or user) actually wants the product to do under real-world conditions, or conditions as close to real-world as you can possibly simulate. Validation encompasses all of the things that can be verified, as well as all of the things that cannot, i.e., all of the things that the product designers might never have anticipated the customer might want or expect the product to do. It is about whether you were smart enough to specify the _right_ design goals.

    Of course, in the ideal world, product designers would correctly anticipate all of the things the customer would ever want the product to do, they would specify all the right design goals, and they would meet them. As a result, the validation would reveal nothing beyond what the verification covered.

    In the real world, product designers sometimes don't correctly anticipate all of the things the customer will want the product to do, and they discover that the product exhibits certain unanticipated or undesirable characteristics, from the user's perspective.

    Thus, the difference between verification and validation is that verification checks the adequacy of the design with respect to _identified_ design goals, whereas validation checks the adequacy of the design with respect to both identified and _unidentified_ customer expectations.

    If you want an example of the difference between verification and validation, consider the difference between the foot-operated controls and the hand-operated controls on most cars. If you rent a car, even a model you have never driven before, you always know which foot pedal makes the car go and which foot pedal makes the car stop. In fact, virtually any driver can easily get into any rental car, at night or in the rain, and operate the foot pedals to make the car go and stop. This is an example of a design that passes both verification and validation, because the pedals work as the designers intended (verification) and they also work as the user expects, even under foreseeable adverse use conditions such as darkness or rain (validation).

    Now consider the hand-operated controls. I don't know about you, but there have been times when I have rented a car and simply not known how to operate some of the critical controls. For example, the stalk to the left of the steering wheel might operate the headlights, it might operate the windshield wipers, it might operate both, or it might operate neither. If the controls differ from the car I own and drive daily, I might reach for the stalk in an effort to turn on the windshield wipers, and instead turn on the headlights. This is an example of a design that passes verification, because the controls perform their stated function; but it fails validation, because the controls do not work as the user expects under foreseeable conditions of use. Indeed, in this example, a user's inability to correctly operate the vehicle controls could, at a critical moment, cause the user to be unable to see oncoming hazards on the road, possibly with fatal consequences.

    This illustrates one more thing about verification and validation: instead of "intended use," which many people think is relevant to a design validation, I find it far more helpful to think in terms of "reasonably foreseeable use," "reasonably foreseeable user" and "reasonably foreseeable use environment." The important distinction is that what is "intended" is never, ever enough: one must also consider things that are "unintended." The phrase "reasonably foreseeable" includes both, and is, in fact, the standard that U.S. tort law applies in cases of product liability. It should therefore be the minimum standard used by product designers.

    Another very simple example is the common screwdriver. Intended use: turn screws. Intended use environment: home and shop. Intended user: Joe Sixpack. Simple, right? Now consider what is "reasonably foreseeable." Reasonably foreseeable uses might include: turn screws, pry open paint cans, chisel holes in wood, remove dried spackling compound, chisel off rusted bolts.... Reasonably foreseeable use environments: home and shop, chemistry lab, salt-water pier, high-tension electrical tower, off-shore oil rig.... Reasonably foreseeable users: Joe Sixpack; his neighbor Martha, with arthritis in her hands; her 10-year-old grandson Johnny, with small hands; his sister Sue, who is left-handed; their mother Jan, who works in that chemistry lab with all sorts of corrosive materials; her husband Jim, who works on those high-tension electrical towers wearing heavily insulated gloves; and Jim's brother Dave, who works on that off-shore oil rig and whose hands are always covered with oil....

    Get the idea? If you only think in terms of what is "intended," you might not think hard enough about all the ways your product might be used. If so, your product might actually be hazardous to use or, at the very least, you might miss an important market segment that your competitor will be very happy to fill at your expense, and, by doing so, reduce your market share, eat your lunch, and take away the income you were planning to save for your kid's college education or your early retirement.

    And that, as I see it, is the important difference between verification and validation.

    /============================/


    Software Testing Life Cycle Models

    The various activities undertaken when developing software are commonly modeled as a software development lifecycle. The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement. The software development lifecycle does not exist by itself; it is in fact part of an overall product lifecycle. Within the product lifecycle, software will undergo maintenance to correct errors and to comply with changes to requirements. The simplest overall form is where the product is just software, but it can become much more complicated, with multiple software developments each forming part of an overall system that comprises a product. There are a number of different models for software development lifecycles. One thing which all models have in common is that, at some point in the lifecycle, the software has to be tested. This paper outlines some of the more commonly used software development lifecycles, with particular emphasis on the testing activities in each model.

    A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationships between project phases, including transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: requirements, design, implementation, integration, testing, and operations and maintenance. Much of the motivation behind utilizing a life cycle model is to provide structure: life cycle models describe the interrelationships between software development phases. The common life cycle models are:

    V-model of SDLC

    V & V PROCESS MODEL: The V&V model is the Verification & Validation model. In this model, development and testing proceed simultaneously: one 'V' stands for verification and the other for validation. Along the first 'V' we follow the SDLC (Software Development Life Cycle), and along the second 'V' we follow the STLC (Software Testing Life Cycle).

    Testing of a large system is normally done in two parts: functional verification and validation against the requirement specification, and performance evaluation against the indicated requirements.

    Testing activity is involved right from the beginning of the project.

    Use of the V&V process model increases a development company's rate of success in delivering the application on time and increases cost effectiveness.


    Testing-Related Activities During the Requirement Phase

    Creation and finalization of the testing template.

    Creation of the test plan and test strategy.

    Capturing Acceptance criteria and preparation of acceptance test plan.

    Capturing Performance Criteria of the software requirements.

    Testing activities in Design Phase

    Develop test cases to ensure that the product is on par with the Requirement Specification document.

    Verify Test Cases & Test Scripts by peer reviews.

    Preparation of traceability matrix from system requirements.

    Testing activities in Unit Testing Phase

    Unit testing is done to validate the product with respect to client requirements.

    Testing can be in multiple rounds.

    Defects found during system test should be logged into the defect tracking system for the purpose of resolution and tracking.

    Test logs and defects are captured and maintained.

    Review of all test documents.

    Testing activities in Integration Testing Phase

    This testing is done in parallel with the integration of the various applications or components.

    Testing the product with its external and internal interfaces, without using drivers and stubs.


    Incremental approach while integrating the interfaces.

    Performance testing

    This is done to validate the performance criteria of the product/ application.

    This is non-functional testing.

    Business Cycle testing

    This refers to end-to-end testing of real-life business scenarios.

    Testing activities during Release phase

    Acceptance testing is conducted at the customer location.

    Resolve all defects reported by the customer during acceptance testing.

    Conduct Root Cause Analysis (RCA) for the defects reported by the customer during acceptance testing.

    Waterfall Model

    The waterfall model is an engineering model designed to be applied to the development of software. The idea is the following: there are different stages of development, the outputs of the first stage "flow" into the second stage, those outputs "flow" into the third stage, and so on. There are usually five stages in this model of software development.


    Stages of the Waterfall Model

    Requirement analysis and planning: In this stage the requirements of the software to be developed are established. These are usually the services it will provide, its constraints, and the goals of the software. Once these are established, they have to be defined in such a way that they are usable in the next stage. This stage is often preceded by a feasibility study, or a feasibility study is included in the stage itself. The feasibility study addresses questions like: should we develop the software, and what are the alternatives? It could be called the conception of a software project and might be seen as the very beginning of the life cycle.

    Prototype Model

    Spiral Model


    Iteration Model

    /================================/

    Coupling (computer programming)

    From Wikipedia, the free encyclopedia



    In computer science, coupling or dependency is the degree to which each program module relies on each of the other modules.

    Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. The software quality metrics of coupling and cohesion were invented by Larry Constantine, an original developer of Structured Design[1] who was also an early proponent of these concepts (see also SSADM). Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.


    Types of coupling


    Conceptual model of coupling

    Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some

    types of coupling, in order of highest to lowest coupling, are as follows:

    Content coupling (high)

    Content coupling is when one module modifies or relies on the internal workings of another module (e.g., accessing local data of another module). Therefore, changing the way the second module produces data (location, type, timing) will lead to changing the dependent module.

    Common coupling

    Common coupling is when two modules share the same global data (e.g., a global variable). Changing the shared resource implies changing all the modules using it.

    External coupling

    External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface.

    Control coupling

    Control coupling is one module controlling the flow of another, by passing it information on what to do (e.g., passing a what-to-do flag).
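    A tiny, hypothetical Python sketch of control coupling: the mode flag lets the caller steer the callee's internal flow, and splitting the function is one common way to loosen that coupling.

    # Control coupling: the caller passes a what-to-do flag that
    # steers the callee's internal control flow.
    def process(record: dict, mode: str) -> None:
        if mode == "validate":      # 'mode' is a control parameter
            print("validating", record)
        elif mode == "persist":
            print("persisting", record)

    # A looser alternative: one function per behaviour, so callers
    # no longer drive the callee's internal branching.
    def validate(record: dict) -> None:
        print("validating", record)

    def persist(record: dict) -> None:
        print("persisting", record)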

    Stamp coupling (Data-structured coupling)


    Stamp coupling is when modules share a composite data structure and use only a part of it, possibly a different part (e.g., passing a whole record to a function that only needs one field of it). This may lead to changing the way a module reads a record because a field that the module doesn't need has been modified.

    Data coupling

    Data coupling is when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data shared (e.g., passing an integer to a function that computes a square root).
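    The difference between stamp and data coupling fits in a few lines of Python; the Employee record and pay functions below are hypothetical examples, not from the original text.

    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        salary: float
        department: str

    # Stamp coupling: the whole record is passed, though only
    # one field of it is actually needed.
    def monthly_pay_stamped(emp: Employee) -> float:
        return emp.salary / 12

    # Data coupling: only the elementary datum needed is passed.
    def monthly_pay(salary: float) -> float:
        return salary / 12

    e = Employee("Martha", 54_000.0, "QA")
    assert monthly_pay_stamped(e) == monthly_pay(e.salary) == 4_500.0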

    Message coupling (low)

    This is the loosest type of coupling. It can be achieved by state decentralization (as in objects), with component communication done via parameters or message passing (see Message passing).

    No coupling

    Modules do not communicate at all with one another.

    Object-oriented programming

    Subclass Coupling

    Describes the relationship between a child class and its parent. The child is connected to its parent, but the parent isn't connected to the child.

    Temporal coupling

    When two actions are bundled together into one module just because they happen to occur at the same time.

    Disadvantages

    Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:

    1. A change in one module usually forces a ripple effect of changes in other modules.

    2. Assembly of modules might require more effort and/or time due to the increased inter-module dependency.

    3. A particular module might be harder to reuse and/or test because dependent modules must be included.

    Performance issues


    Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation and interpretation overhead. See event-driven programming.

    Message Creation Overhead and Performance

    Since all messages and parameters must possess particular meanings to be consumed (i.e., to result in the intended logical flow within the receiver), they must be created with a particular meaning. Creating any sort of message requires overhead in either CPU or memory usage. Creating a single integer value message (which might be a reference to a string, array or data structure) requires less overhead than creating a complicated message such as a SOAP message. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized.

    Message Transmission Overhead and Performance

    Since a message must be transmitted in full to retain its complete meaning, message transmission must be optimized. Longer messages require more CPU and memory to transmit and receive. Also, when necessary, receivers must reassemble a message into its original state to completely receive it. Hence, to optimize runtime performance, message length must be minimized and message meaning must be maximized.

    Message Translation Overhead and Performance

    Message protocols and messages themselves often contain extra information (i.e., packet, structure, definition and language information). Hence, the receiver often needs to translate a message into a more refined form by removing extra characters and structure information and/or by converting values from one type to another. Any sort of translation increases CPU and/or memory overhead. To optimize runtime performance, message form and content must be reduced and refined to maximize its meaning and reduce translation.

    Message Interpretation Overhead and Performance

    All messages must be interpreted by the receiver. Simple messages such as integers might not require additional processing to be interpreted. However, complex messages such as SOAP messages require a parser and a string transformer in order to exhibit their intended meanings. To optimize runtime performance, messages must be refined and reduced to minimize interpretation overhead.

    Solutions

    One approach to decreasing coupling is functional design, which seeks to limit the responsibilities of modules along functionality. Coupling increases between two classes A and B if (a short sketch follows the list below):


    A has an attribute that refers to (is of type) B.

    A calls on services of an object B.

    A has a method that references B (via return type or parameter).

    A is a subclass of (or implements) class B.
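    A minimal, hypothetical Python sketch showing all four of the listed dependencies of A on B:

    class B:
        def service(self) -> "B":
            return self

    class A(B):                      # 4. A is a subclass of B
        def __init__(self) -> None:
            self.b: B = B()          # 1. A has an attribute of type B

        def use(self) -> B:          # 3. a method of A references B (return type)
            return self.b.service()  # 2. A calls on services of a B object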

    Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (see Information Hiding).

    Systems such as CORBA or COM allow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow objects to communicate with objects written in other languages.

    Coupling versus Cohesion

    Coupling and cohesion are two terms which very frequently occur together. Together they describe the qualities a module should have. Coupling describes the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a module performs tasks which are not very related to each other, and hence can create problems as the module becomes large.

    Module coupling

    Coupling in Software Engineering[2] describes a version of metrics associated with this concept.

    For data and control flow coupling:

    di: number of input data parameters

    ci: number of input control parameters

    do: number of output data parameters

    co: number of output control parameters

    For global coupling:

    gd: number of global variables used as data

    gc: number of global variables used as control

    For environmental coupling:

    w: number of modules called (fan-out)

    r: number of modules calling the module under consideration (fan-in)


    Coupling (C) makes the value larger the more coupled the module is. This number ranges from approximately 0.67 (low coupling) to 1.0 (highly coupled):

    C = 1 - 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)

    For example, if a module has only a single input and a single output data parameter and is called by one other module (di = do = r = 1, all other counts 0):

    C = 1 - 1/(1 + 0 + 1 + 0 + 0 + 0 + 0 + 1) = 1 - 1/3 = 0.67

    If a module has 5 input and output data parameters, an equal number of control parameters, and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4:

    C = 1 - 1/(5 + 2*5 + 5 + 2*5 + 10 + 0 + 4 + 3) = 1 - 1/47 = 0.98
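    A sketch of the metric as a Python function, reproducing the two worked examples above; the parameter names follow the definitions given earlier, and the example inputs are the ones stated in the text.

    def module_coupling(di=0, ci=0, do=0, co=0, gd=0, gc=0, w=0, r=0) -> float:
        """Module coupling per the formula above: larger = more tightly coupled."""
        return 1 - 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)

    # Single input and output data parameter, called by one other module:
    print(round(module_coupling(di=1, do=1, r=1), 2))    # 0.67

    # 5 data parameters each way, 5 control parameters each way,
    # 10 globals used as data, fan-out 4, fan-in 3:
    print(round(module_coupling(di=5, ci=5, do=5, co=5, gd=10, w=4, r=3), 2))  # 0.98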

