Software quality factors and software quality metrics to enhance software quality assurance
ABSTRACT
Aims: Software quality assurance (SQA) is a formal process for evaluating and documenting the quality of the work products produced during each stage of the software development life cycle. Applying software metrics to operational and maintenance factors is a complex task, and successful software quality assurance depends heavily on software metrics. The software quality model needs to be linked to software metrics through quality factors in order to offer a measurement method for software quality assurance. The contribution of this paper is an appropriate method for applying software quality metrics across the quality life cycle with software quality assurance.
Design: The proposed approach defines software metrics for the quality factors and discusses several software quality assurance models and measurement methods for selected quality factors.
Methodology: This paper proceeds in three steps: build a framework combining software quality criteria; describe software metrics; and build an application of software quality metrics in the quality life cycle with software quality assurance.
Results: Under the proposed method, for each activity in the software life cycle there are one or more QA quality measurement metrics focused on ensuring the quality of the process and the resulting product. Future research is needed to extend and improve the methodology so that metrics validated on one project, using our criteria, become valid measures of quality on future software projects.
Keywords: Software quality assurance; software metric; software quality factor; software life cycle
1. INTRODUCTION
Software quality assurance (SQA) is a technique to help achieve quality, and it is becoming a critical issue in software development and maintenance (Vennila et al. 2011). SQA monitors the software engineering processes and methods used to ensure quality. Software metrics deal with the measurement of the software product and of the software development process, and they guide and evaluate software development (Ma et al. 2006). A software metric is a quantitative measure of the extent to which a system, component, or process possesses a given attribute. Software quality factors are gaining importance and acceptance in corporate sectors as organizations grow and strive to improve enterprise quality. The metrics are the quantitative measures of the degree to which software possesses a given attribute that affects its quality. SQA is a formal process for evaluating and documenting the quality of the products produced during each stage of the software development life cycle.
There are four well-known quality models: McCall's quality model (McCall et al. 1977), Boehm's quality model (Boehm et al. 1978), the FURPS model (Grady, 1992), and Dromey's quality model (Dromey, 1996), in addition to the ISO/IEC 25000 standard. Each model contains different quality factors and quality criteria (Drown et al. 2009). They are indicators of process and product and are useful for software quality assurance (Tomar and Thakare, 2011). The aim of this paper is to present the software quality factors, their criteria, and their impact on the SQA function.
The remainder of this paper is organized as follows. Section 2 is a literature review discussing the relation of quality factors to quality criteria. Section 3 builds the relationship between software quality criteria and metrics. Section 4 describes software metrics found in the software engineering literature. Section 5 builds an appropriate method for applying software quality metrics in the quality life cycle with software quality assurance. Figure 1 shows the research framework.
Figure 1: A research framework (software quality models and the ISO/IEC 25000 standard → quality factors and quality criteria → quality criteria and quality metrics → criteria of software quality factors → quality assurance in the software life cycle → an appropriate method for software quality assurance with quality measure metrics in the quality life cycle)
2. LITERATURE REVIEW
2.1 Software quality assurance models
This section discusses the contents of the following quality assurance models: the McCall quality model, the Boehm quality model, the FURPS model, and the Dromey model.
The McCall quality model (McCall et al. 1977) groups software product quality into three categories: product transition (adaptability to new environments), product revision (ability to undergo changes), and product operations (operational characteristics). Product revision includes maintainability, flexibility, and testability. Product transition includes portability, reusability, and interoperability. The model contains 11 quality factors and 23 quality criteria. The quality factors describe different types of system characteristics, and the quality criteria are attributes of one or more of the quality factors. Table 1 presents the factors and criteria of the McCall quality model.
Table 1: The factors and criteria of the McCall quality model

Product operation
  Correctness: completeness, consistency, operability
  Reliability: accuracy, complexity, consistency, error tolerance, modularity, simplicity
  Efficiency: conciseness, execution efficiency, operability
  Integrity: auditability, instrumentation, security
  Usability: operability, training
Product revision
  Maintainability: conciseness, consistency, modularity, instrumentation, self-documentation, software independence
  Flexibility: generality, hardware independence, modularity, self-documentation, software independence
  Testability: auditability, complexity, instrumentation, modularity, self-documentation, simplicity
Product transition
  Portability: complexity, conciseness, consistency, expandability, generality, modularity, self-documentation, simplicity
  Reusability: generality, hardware independence, modularity, self-documentation, software independence
  Interoperability: communications commonality, data commonality
The Boehm quality model attempts to automatically and quantitatively evaluate the quality of software. At the highest level it addresses three classifications, dividing general utility into as-is utility, maintainability, and portability. At the intermediate level, the Boehm quality model has seven quality factors: portability, reliability, efficiency, usability, human engineering, understandability, and flexibility (Boehm, 1976, 1978). Table 2 presents the quality factors and quality criteria of the Boehm quality model.
Table 2: The factors and criteria of the Boehm quality model

Portability: self-containedness, device independence
Reliability: self-containedness, accuracy, completeness, robustness/integrity, consistency
Efficiency: accountability, device efficiency, accessibility
Usability: completeness
Human engineering (testability): accountability, communicativeness, self-descriptiveness, structuredness
Understandability: consistency, structuredness, conciseness
Modifiability (flexibility): structuredness, augmentability
The Dromey quality model proposes a framework for evaluating the requirements, design, and implementation phases. The high-level product properties of the implementation quality model are correctness, internal, contextual, and descriptive (Dromey, 1995, 1996). Table 3 presents the factors and criteria of the Dromey quality model.
Table 3: The factors and criteria of the Dromey quality model

Correctness: functionality, reliability
Internal: maintainability, efficiency, reliability
Contextual: maintainability, reusability, portability, reliability
Descriptive: maintainability, efficiency, reliability, usability
The FURPS model was originally presented by Grady (1992) and later extended by IBM Rational Software (Jacobson et al. 1999; Kruchten, 2000) into FURPS+. The "+" indicates such requirements as design constraints, implementation requirements, interface requirements, and physical requirements (Jacobson et al. 1999). There are five characteristics in the FURPS model. Table 4 presents the factors and criteria of the FURPS quality model.
Table 4: The quality factors and quality criteria of the FURPS quality model

Functionality: capabilities, security
Usability: consistency, user documentation, training materials
Reliability: frequency and severity of failure, recoverability, predictability, accuracy, mean time between failures
Performance: speed, efficiency, availability, accuracy, throughput, response time, recovery time, resource usage
Supportability: testability, extensibility, adaptability, maintainability, compatibility, configurability, serviceability, installability, localizability
ISO 9000 provides guidelines for quality assurance (Tomar and Thakare, 2011). It is a process-oriented approach to quality management (ISO 9001:2005), covering the designing, documenting, implementing, supporting, monitoring, controlling, and improving of processes (ISO 9001:2001). Recently, the ISO/IEC 9126-1:2001 software product quality model, which defined six quality characteristics, has been replaced by the ISO/IEC 25010:2011 system and software product quality model (ISO/IEC 25010). ISO 25010 is the most commonly used quality standard model. It contains eight quality factors: functional suitability, reliability, operability, security, performance efficiency, compatibility, maintainability, and portability. These factors are refined into 28 quality criteria. Table 5 presents the factors and criteria of the ISO/IEC 25010 quality model (ISO/IEC 25010; Esaki, 2013).
Table 5: The factors and criteria of the ISO/IEC 25010 quality model

Functional suitability: functional appropriateness, accuracy
Performance efficiency: time behavior, resource utilization
Reliability: maturity, fault tolerance, recoverability, availability
Operability: appropriateness recognizability, ease of use, user error protection, user interface aesthetics, technical learnability, technical accessibility
Security: confidentiality, integrity, non-repudiation, accountability, authenticity
Compatibility: co-existence, interoperability
Maintainability: modularity, reusability, analyzability, modifiability, testability
Portability: adaptability, installability, replaceability
The quality models described above contain several factors in common, such as maintainability, efficiency, and reliability. However, some factors, such as correctness, understandability, modifiability, and supportability, are less common and appear in only one or two models. Table 6 presents a comparison of the factors of the four quality models and ISO/IEC 25010.
Table 6: A comparison of the factors of the four quality models and ISO/IEC 25010

Factor              McCall  Boehm  Dromey  FURPS  ISO/IEC 25010
Correctness           *
Integrity             *
Usability             *       *      *       *        *
Efficiency            *       *      *
Flexibility           *
Testability           *
Maintainability       *              *                *
Reliability           *       *      *       *        *
Portability           *       *      *                *
Reusability           *              *
Interoperability      *                               *
Human engineering             *
Understandability             *
Modifiability                 *
Functionality                        *       *        *
Performance                                  *        *
Supportability                               *
Security                                              *
Total                11       7      7       5        8
Source: extended from Al-Qutaish (2010).
3. COMBINATION OF SOFTWARE QUALITY CRITERIA AND SOFTWARE METRICS
Based on the software quality assurance models and the ISO/IEC 25000 standard above, a combination of software quality factors and criteria can be established. This section describes the relationship between software quality criteria and software quality metrics.
There are four reasons for developing a list of criteria for each factor:
1. Criteria offer a more complete, concrete definition of factors.
2. Criteria common among factors help to illustrate the interrelations between factors.
3. Criteria allow audit and review metrics to be developed with greater ease.
4. Criteria allow us to pinpoint the areas of quality factors that may not be up to a predefined acceptable standard.
3.1 Software quality factors and quality criteria
Criteria are the characteristics that define the quality factors. The criteria for a factor are the attributes of the software product or software production process by which the factor can be judged or defined. The relationships between the factors and the criteria are given in Table 7.
Table 7: The relationships of factors with criteria of software quality

Correctness: completeness, consistency, operability
Efficiency: conciseness, execution efficiency, operability
Flexibility: complexity, conciseness, consistency, expandability, generality, modularity, self-documentation, simplicity
Integrity: auditability, instrumentation, security
Interoperability: communications commonality, data commonality
Maintainability: conciseness, consistency, modularity, instrumentation, self-documentation, software independence
Portability: generality, hardware independence, modularity, self-documentation, software independence
Reliability: accuracy, complexity, consistency, error tolerance, modularity, simplicity
Reusability: generality, hardware independence, modularity, self-documentation, software independence
Testability: auditability, complexity, instrumentation, modularity, self-documentation, simplicity
Usability: operability, training
Modifiability: structure, augmentability
Understandability: consistency, structure, conciseness, legibility
Documentation: completeness
Functionality: capability, security
Performance: flexibility, efficiency, reusability
Supportability: testability, extensibility, maintainability, compatibility
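The interrelation noted above, where one criterion serves several factors, can be made explicit by inverting the factor-to-criteria mapping. The following Python sketch, using a small subset of Table 7, shows how to derive which factors share each criterion:

```python
# Invert a factor -> criteria mapping (subset of Table 7) to see which
# factors share each criterion.
from collections import defaultdict

factor_criteria = {
    "correctness": ["completeness", "consistency", "operability"],
    "efficiency": ["conciseness", "execution efficiency", "operability"],
    "reliability": ["accuracy", "complexity", "consistency",
                    "error tolerance", "modularity", "simplicity"],
    "maintainability": ["conciseness", "consistency", "modularity",
                        "instrumentation", "self-documentation",
                        "software independence"],
}

criterion_factors = defaultdict(list)
for factor, criteria in factor_criteria.items():
    for criterion in criteria:
        criterion_factors[criterion].append(factor)

# Criteria shared by more than one factor illustrate the interrelation
# between factors (reason 2 above).
shared = {c: fs for c, fs in criterion_factors.items() if len(fs) > 1}
print(shared)
```

For instance, "consistency" relates correctness, reliability, and maintainability, which is why a single consistency review can serve metrics for all three factors.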
3.2 Quality criteria and related factors
Table 8 lists the criteria of software quality factors and illustrates the relationship between these criteria and the factors (McCall et al. 1977).
Table 8: Criteria of software quality factors

Traceability: attributes of the software that provide a thread from the requirements to the implementation with respect to the specific development and operational environment. Related factors: correctness.
Completeness: attributes that provide full implementation of the functions required. Related factors: correctness.
Consistency: attributes that provide uniform design and implementation techniques and notation. Related factors: correctness, reliability, maintainability.
Accuracy: attributes that provide the required precision in calculations and outputs. Related factors: reliability.
Error tolerance: attributes that provide continuity of operation under nonnominal conditions. Related factors: reliability.
Simplicity: attributes that provide implementation of functions in the most understandable manner (usually avoidance of practices that increase complexity). Related factors: reliability, maintainability, testability.
Modularity: attributes that provide a structure of highly independent modules. Related factors: maintainability, flexibility, testability, portability, reusability, interoperability.
Generality: attributes that provide breadth to the functions performed. Related factors: flexibility, reusability.
Expandability: attributes that provide for expansion of data storage requirements or computational functions. Related factors: flexibility.
Instrumentation: attributes that provide for the measurement of usage or identification of errors. Related factors: testability.
Self-descriptiveness: attributes that provide explanation of the implementation of a function. Related factors: flexibility, testability, portability, reusability.
Execution efficiency: attributes that provide for minimum processing time. Related factors: efficiency.
Storage efficiency: attributes that provide for minimum storage requirements during operation. Related factors: efficiency.
Access control: attributes that provide for control of the access to software and data. Related factors: integrity.
Access audit: attributes that provide for audit of the access to software and data. Related factors: integrity.
Operability: attributes that determine operations and procedures concerned with the operation of the software. Related factors: usability.
Training: attributes that provide transition from the current operation or initial familiarization. Related factors: usability.
Communicativeness: attributes that provide useful inputs and outputs that can be assimilated. Related factors: usability.
Software system independence: attributes that determine the software's dependency on its software environment (operating systems, utilities, input/output routines, etc.). Related factors: portability, reusability.
Machine independence: attributes that determine the software's dependency on the hardware system. Related factors: portability, reusability.
Communications commonality: attributes that provide the use of standard protocols and interface routines. Related factors: interoperability.
Data commonality: attributes that provide the use of standard data representations. Related factors: interoperability.
Conciseness: attributes that provide for implementation of a function with the minimum amount of code. Related factors: maintainability.
3.3 Software quality criteria and software quality metrics
The following table lists software metrics taken from Volume II of the Specification of Software Quality Attributes: Software Quality Evaluation Guidebook (Bowen et al. 1985). Table 9 shows the relationship between quality criteria and software quality metrics.
Table 9: The relationship between criteria and software quality metrics

Accuracy: accuracy checklist
Self-descriptiveness: quality of comments, effectiveness of comments, descriptiveness of language
Simplicity: design structure, structured language, data and control flow complexity, coding simplicity, Halstead's level of difficulty measure
System accessibility: access control, access audit
System clarity: interface complexity, program flow complexity, application functional complexity, communication complexity, structure clarity
System compatibility: communication compatibility, data compatibility, hardware compatibility, software compatibility
Traceability: documentation for other systems, cross reference
Document accessibility: access to documentation, well-structured documentation
Efficiency (processing): processing effectiveness measure, data usage effectiveness measure
Efficiency (communication): communication effectiveness measure
Efficiency (storage): storage effectiveness measure
Functionality: function specificity, function commonality, function selective usability
Generality: unit referencing, unit implementation
Independence: software independence from system, machine independence
Modularity: modular design
Operability: operability checklist, user output communicativeness, user input communicativeness
Training: training checklist
Virtuality: system/data independence
Visibility: unit testing, integration testing, case testing
Application independence: database management, database implementation, database independence, data structure, architecture standardization, microcode independence, function independence
Augmentability: data storage expansion, computation extensibility, channel extensibility, design extensibility
Completeness: completeness checklist
Consistency: procedure consistency, data consistency
Autonomy: interface complexity, self-sufficiency
Reconfigurability: restructure checklist
Anomaly management: error tolerance, improper input data, communications faults, hardware faults, device errors, computation failures
3.4 Quality assurance in the software life cycle
Product metrics, process metrics, and project metrics are three important types of software metrics. Product metrics measure product attributes such as size, complexity, design features, performance, and quality level. Process metrics measure the efficiency of performing the product development process, for instance turnover rate. Project metrics measure the execution of the development project, for instance schedule performance, cost performance, and team performance (Shanthi and Duraiswamy, 2011).
In order to be efficient, quality assurance activities should follow each stage of the software life cycle. For each activity in the software life cycle, there are one or more QA support activities focused on ensuring the quality of the process and the resulting product. A conceptual framework of QA support across the software quality life cycle is shown in Figure 2.
Figure 2: A conceptual framework of QA support across the software quality life cycle. (Life-cycle stages: project planning, requirements, analysis and design, construction, test, deployment/support, and changes. Corresponding QA activities: review project plan, review requirements, analyze design, inspect code, assess tests, evaluate quality status, ensure project deployment, and track support and change management.)
4. SOFTWARE QUALITY METRICS
This section concentrates on different metrics found in the software engineering literature. A classical classification of software quality metrics includes Halstead's software metrics, McCabe's cyclomatic complexity metric, RADC's methodology, Albrecht's function points metric, Ejiogu's software metrics, and Henry and Kafura's information flow metric.
4.1 Halstead's software metrics
Halstead's measure for calculating module conciseness is essentially based on the assumption that a well-structured program is a function of only its unique operators and operands. The best predictor of the time required to develop and run a program successfully was Halstead's metric for program volume. Halstead (1978) defined the following formulas of software characterization.
The measure of vocabulary: n = n1 + n2
Program length: N = N1 + N2
Program volume: V = N log2(n)
Program level: L = V*/V

where
n1 = the number of unique operators
n2 = the number of unique operands
N1 = the total number of operators
N2 = the total number of operands
Christensen et al. (1988) took the idea further and produced a metric called difficulty. V* is the minimal program volume, assuming the minimal set of operands and operators for the implementation of a given algorithm:

Program effort: E = V/L
Difficulty of implementation: D = (n1/2) × (N2/n2)
Programming time in seconds: T = E/S

with S as the Stroud number (5 ≤ S ≤ 20), which is introduced from psychological science. n2* is the size of the minimal set of operands, and E0 is determined from the programmer's previous work.
Based on difficulty and volume, Halstead proposed an estimator for actual programming effort, namely

Effort = difficulty × volume

Table 10 presents the formulas of Halstead's software metrics together with the related software quality factors.
Table 10: The formulas of Halstead's software metrics with software quality factors

Implementation length N (maintainability, number of bugs, modularity, performance, reliability): N = N1 + N2, estimated by N = n1 log2(n1) + n2 log2(n2)
Volume V (complexity, maintainability, number of bugs, reliability, simplicity): V = N log2(n)
Potential volume V* (conciseness, efficiency): V* = (2 + n2*) log2(2 + n2*)
Program level L (conciseness, simplicity): L = V*/V
Program effort E (clarity, complexity, maintainability, modifiability, modularity, number of bugs, performance, reliability, simplicity, understandability): E = V/L
Number of bugs B (maintainability, number of bugs, testability): B = V/E0
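As a concrete illustration, the Halstead formulas above can be evaluated directly from operator and operand counts. The following Python sketch uses hypothetical counts; the function name and argument names are our own:

```python
import math

def halstead(n1, n2, N1, N2, n2_star=None, stroud=18):
    """Compute Halstead's measures.
    n1, n2: unique operators/operands; N1, N2: total operators/operands.
    n2_star: size of the minimal set of operands (for potential volume).
    stroud: Stroud number S (5 <= S <= 20)."""
    n = n1 + n2                      # vocabulary
    N = N1 + N2                      # program length
    V = N * math.log2(n)             # program volume
    D = (n1 / 2) * (N2 / n2)         # difficulty of implementation
    E = D * V                        # effort = difficulty * volume
    T = E / stroud                   # programming time in seconds
    result = {"vocabulary": n, "length": N, "volume": V,
              "difficulty": D, "effort": E, "time_s": T}
    if n2_star is not None:
        # potential (minimal) volume and program level L = V*/V
        V_star = (2 + n2_star) * math.log2(2 + n2_star)
        result["potential_volume"] = V_star
        result["level"] = V_star / V
    return result

# Hypothetical counts for a small module:
m = halstead(n1=10, n2=15, N1=40, N2=35, n2_star=5)
print({k: round(v, 2) for k, v in m.items()})
```

Note that effort is computed as D × V, which matches Halstead's estimator above since the program level is approximated by the reciprocal of difficulty.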
4.2 McCabe's cyclomatic complexity metrics
McCabe (1976) proposed a complexity metric based on mathematical graph theory. The complexity of a program is defined in terms of its control structure and is represented by the maximum number of linearly independent paths through the program. Software developers can use this measure to determine which modules of a program are over-complex and need to be re-coded. The formula for cyclomatic complexity proposed by McCabe (1976) is:

V(G) = e − n + 2p

where e = the number of edges in the graph, n = the number of nodes in the graph, and p = the number of connected components in the graph.
The cyclomatic complexity metric is based on the number of decision elements (IF-THEN-ELSE, DO WHILE, DO UNTIL, CASE) in the language and the number of AND, OR, and NOT phrases in each decision. The formula of the metric is: cyclomatic complexity = number of decisions + number of conditions + 1 (Arthur, 1985).
The essential complexity metric is based on the amount of unstructured code in a program. Modules containing unstructured code may be more difficult to understand and maintain. The essential complexity proposed by McCabe (1976) is:

EV(G) = V(G) − m

where V(G) = the cyclomatic complexity and m = the number of proper subgraphs.
McCabe's cyclomatic complexity measure has been correlated with several quality factors. These relationships are listed in Table 11.
Table 11: The formulas of McCabe's cyclomatic complexity metrics

Cyclomatic complexity V(G) (complexity, maintainability, number of bugs, modularity, simplicity, reliability, testability, understandability): V(G) = e − n + 2p
Essential complexity EV(G) (complexity, conciseness, efficiency, simplicity): EV(G) = V(G) − m
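The two McCabe measures, and the equivalent decision-count form, can be sketched as simple functions; the graph counts used below are hypothetical:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = e - n + 2p for a control-flow graph."""
    return edges - nodes + 2 * components

def decision_based_complexity(decisions, conditions):
    """Equivalent decision-count form: decisions + conditions + 1."""
    return decisions + conditions + 1

def essential_complexity(vg, proper_subgraphs):
    """EV(G) = V(G) - m, where m is the number of proper subgraphs."""
    return vg - proper_subgraphs

# Hypothetical module: 9 edges, 8 nodes, one connected component.
vg = cyclomatic_complexity(edges=9, nodes=8)       # -> 3
ev = essential_complexity(vg, proper_subgraphs=2)  # -> 1
print(vg, ev)  # prints: 3 1
```

A module whose V(G) is well above its peers is a candidate for re-coding, as the text suggests.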
4.3 RADC's methodology
RADC expanded the Boehm model. The metrics discussed in this section are based on a continuing development effort (Bowen et al. 1985). The requirements present a ratio of actual occurrences to the possible number of occurrences for each situation; this results in a clear correlation between the quality criteria and their associated factors. Table 12 presents the formulas of RADC's methodology.
Table 12: The formulas of RADC's methodology

Software quality metrics: traceability, completeness, consistency, accuracy, error tolerance, simplicity, structured programming, modularity, generality, expandability, computation extensibility, instrumentation, self-descriptiveness, execution efficiency, storage efficiency, access control, access audit, operability, training, communicativeness, software system independence, machine independence, communication commonality, data commonality, conciseness.

Software quality factors: completeness, consistency, correctness, efficiency, expandability, flexibility, integrity, interoperability, maintainability, modularity, portability, reliability, survivability, usability, verifiability.

Example formulas:

Traceability (cross-reference relative modules to requirements):
Traceability = (number of itemized requirements traced) / (total number of requirements)

Completeness (average score over the checklist elements):
Completeness = (1/9) Σ (score for element i), where the checklist includes:
1. Unambiguous references (input, function, output)
2. All external data references defined, computed, or obtained from an external source
3. All detailed functions defined
4. All conditions and processing defined for each decision point
5. All defined and referenced calling sequence parameters agree
6. All problem reports resolved
7. Design agrees with requirements
8. Code agrees with design

Source: Bowen et al. (1985)
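These ratio-style RADC metrics are straightforward to compute; a minimal Python sketch, with hypothetical counts and scores:

```python
def traceability(itemized_requirements_traced, total_requirements):
    """Traceability = traced itemized requirements / total requirements."""
    return itemized_requirements_traced / total_requirements

def completeness(element_scores):
    """Average of per-element scores (each in [0, 1]) from the
    completeness checklist."""
    return sum(element_scores) / len(element_scores)

# Hypothetical project: 45 of 50 requirements traced to modules.
print(traceability(45, 50))  # prints: 0.9

# Hypothetical scores for nine checklist elements:
print(completeness([1, 1, 0.5, 1, 0.75, 1, 1, 1, 0.5]))
```

Because both metrics are ratios of actual to possible occurrences, a value near 1 indicates the criterion is well satisfied.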
4.4 Albrecht's function points metric
Albrecht developed a metric to determine the number of elementary functions, and hence the value, of source code. This metric was developed to estimate the amount of effort needed to design and develop custom application software (Albrecht and Gaffney, 1983).

1. Calculate the function counts (FC) based on the following formula:

FC = Σ(i=1..5) Σ(j=1..3) w_ij x_ij

where w_ij are the weighting factors of the five components by complexity level (low, average, high) and x_ij are the numbers of each component in the application. It is a weighted sum of five major components (Kemerer and Porter, 1992):
・External input: low complexity, 3; average complexity, 4; high complexity, 6
・External output: low complexity, 4; average complexity, 5; high complexity, 7
・Logical internal file: low complexity, 5; average complexity, 7; high complexity, 10
・External interface file: low complexity, 7; average complexity, 10; high complexity, 15
・External inquiry: low complexity, 3; average complexity, 4; high complexity, 6
2. Calculate the value adjustment factor (VAF). This involves a scale from 0 to 5 to assess the impact of 14 general system characteristics in terms of their likely effect on the application: data communications, distributed functions, performance, heavily used configuration, transaction rate, online data entry, end-user efficiency, online update, complex processing, reusability, installation ease, operational ease, multiple sites, and facilitation of change. The scores (ranging from 0 to 5) for these characteristics are then summed, based on the following formula, to arrive at the value adjustment factor:
VAF = 0.65 + 0.01 Σ(i=1..14) c_i

where c_i is the score of the i-th general system characteristic.
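The calculation can be sketched in Python; the component counts and characteristic scores below are hypothetical, and the final line multiplies the function counts by the VAF to obtain the function points:

```python
# Albrecht function points: FC = weighted sum of component counts,
# VAF = 0.65 + 0.01 * sum of the 14 general-system-characteristic scores.
WEIGHTS = {  # (low, average, high) weights per component type
    "external_input":          (3, 4, 6),
    "external_output":         (4, 5, 7),
    "logical_internal_file":   (5, 7, 10),
    "external_interface_file": (7, 10, 15),
    "external_inquiry":        (3, 4, 6),
}

def function_count(counts):
    """counts maps component type -> (n_low, n_avg, n_high)."""
    return sum(w * x
               for comp, ws in WEIGHTS.items()
               for w, x in zip(ws, counts.get(comp, (0, 0, 0))))

def value_adjustment_factor(gsc_scores):
    """gsc_scores: the 14 general system characteristic scores, each 0..5."""
    assert len(gsc_scores) == 14
    return 0.65 + 0.01 * sum(gsc_scores)

# Hypothetical application:
counts = {"external_input": (2, 1, 0), "external_output": (1, 2, 0),
          "logical_internal_file": (1, 0, 0)}
fc = function_count(counts)              # 2*3 + 1*4 + 1*4 + 2*5 + 1*5 = 29
vaf = value_adjustment_factor([3] * 14)  # 0.65 + 0.01*42 = 1.07
fp = fc * vaf                            # function points FP = FC * VAF
print(fc, round(vaf, 2), round(fp, 2))
```

The adjustment can move the raw count by up to ±35%, since the VAF ranges from 0.65 (all scores 0) to 1.35 (all scores 5).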
3. The number of function points is obtained by multiplying the function counts by the value adjustment factor:

FP = FC × VAF

4.5 Ejiogu's software metrics
Ejiogu's software metrics use language constructs to determine the structural complexity of a program; the syntactical constructs are nodes. These metrics are related to the structural complexity of a program and also to other quality factors, such as usability, readability, and modifiability. The structural complexity metric gives a numerical notion of the distribution and connectedness of a system's components (Ejiogu, 1988).
Sc = H × Rt × M

where
H = the height of the deepest nested node,
Rt = the twin number of the root,
M = the monadicity (Ejiogu, 1990).
The height of an individual node is the number of levels that the node is nested below the root node. The twin number is the number of nodes that branch out from a higher-level node. Monads are nodes that do not have branches emanating from them; they are also referred to as "leaf nodes". Software size is the size of the set of nodes of the source code, calculated from the number of nodes in the tree:
S = 1 + total number of subnodes

where 1 represents the root node.
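Given the definitions above (height of the deepest nested node, twin number of the root, monads as leaf nodes), the components of the metric can be computed over a syntax tree. The following Python sketch uses a hypothetical tree; the tuple representation and the product combination of H, Rt, and M follow our reading of the formula above:

```python
# A node is (name, [children]); the metric components per the text:
#   H  = height of the deepest nested node (levels below the root)
#   Rt = twin number of the root (nodes branching out from it)
#   M  = monadicity (count of leaf nodes, i.e. nodes with no branches)
def height(node):
    _, children = node
    return 0 if not children else 1 + max(height(c) for c in children)

def monads(node):
    _, children = node
    if not children:
        return 1
    return sum(monads(c) for c in children)

def total_nodes(node):
    _, children = node
    return 1 + sum(total_nodes(c) for c in children)

# Hypothetical syntax tree:
tree = ("root", [("a", [("a1", []), ("a2", [])]),
                 ("b", [("b1", [])]),
                 ("c", [])])

H = height(tree)       # 2: a1/a2/b1 sit two levels below the root
Rt = len(tree[1])      # 3: twin number of the root
M = monads(tree)       # 4 leaf nodes: a1, a2, b1, c
S = total_nodes(tree)  # software size: root plus its subnodes
print(H, Rt, M, S)     # prints: 2 3 4 7
```

Deeper nesting, wider branching, and more leaves each raise the components, which matches the intuition that the metric captures the distribution and connectedness of a system's parts.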
4.6 Henry and Kafura's information flow metric

Information flow complexity (IFC) (Henry and Kafura, 1984) describes the amount of information that flows into and out of a procedure. This metric uses the flows between procedures to show the data flow complexity of a program. The formula is:
IFC = length × (fan-in × fan-out)²
where fan-in is the number of local flows into a procedure plus the number of global data structures from which the procedure retrieves information, and fan-out is the number of local flows out of a procedure plus the number of global data structures that the procedure updates. Length is the number of lines of source code in the procedure; in implementing this count, embedded comments are also counted, but not comments preceding the beginning of the executable code.
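The formula can be sketched directly; the procedure data below are hypothetical:

```python
def information_flow_complexity(length, fan_in, fan_out):
    """Henry and Kafura: IFC = length * (fan_in * fan_out)^2,
    where length is the lines of source code in the procedure."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical procedure: 40 LOC, 3 flows in, 2 flows out.
print(information_flow_complexity(40, 3, 2))  # prints: 1440
```

Because the coupling term is squared, a procedure with many flows in and out dominates the score even when its length is modest, which is what makes the metric useful for spotting overloaded interfaces.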
4.7 Project metrics
PMI's PMBOK (Project Management Institute's Project Management Body of Knowledge) describes project management processes, tools, and techniques, and provides one set of high-level practices for all industries. The PMBOK includes nine knowledge areas and the tools and techniques associated with them: integration management, scope management, time management, cost management, quality management, human resource management, communications management, risk management, and procurement management (PMBOK).
Some of those processes are often not applicable or even irrelevant to the software development industry. The CMM (Capability Maturity Model) speaks about software project planning processes without mentioning the specific methodologies for project estimating described in the PMBOK. The basic key process areas (KPAs) of the SEI CMM are requirements management, project planning, project tracking and oversight, subcontract management, quality assurance, and configuration management. Table 13 maps some of the CMM-relevant activities, tools, and techniques:
Table 13: Mapping of project management processes to process groups and knowledge areas

Project integration management — Planning: project plan development. Executing: project plan execution. Controlling: integrated change control.
Project scope management — Initiating: initiation, scope definition. Planning: scope planning. Controlling: scope verification, scope change control.
Project time management — Planning: activity definition, activity sequencing, activity duration estimating, schedule development. Controlling: schedule control.
Project cost management — Planning: resource planning, cost estimating, cost budgeting. Controlling: cost control.
Project quality management — Planning: quality planning. Executing: quality assurance. Controlling: quality control.
Project human resource management — Planning: organization planning, staff acquisition. Executing: team development.
Project communications management — Planning: communications planning. Executing: information distribution. Controlling: performance reporting. Closing: administrative closure.
Project risk management — Planning: risk management planning, risk identification, qualitative risk analysis, risk response planning. Controlling: risk monitoring and control.
Project procurement management — Planning: procurement planning, solicitation planning. Executing: solicitation, source selection, contract administration. Closing: contract closeout.
4.8 Reliability metrics
A frequently used measure of reliability and availability in computer-based systems is the mean time between failures (MTBF) (Cavano, 1984). The sum of the mean time to failure (MTTF) and the mean time to repair (MTTR) gives the measure, i.e.

MTBF = MTTF + MTTR

The availability of software is the percentage of time that a program is operating according to requirements at a given point in time, and is given by the formula:

Availability = MTTF / (MTTF + MTTR) × 100%
The reliability growth models assume in general that all defects found during the development and testing phases are corrected, and that no new errors are introduced during these phases. All models include some constraints on the distribution of defects or on the hazard rate, i.e. the defects remaining in the system.
Software reliability growth is tracked with the metric:

Failure rate (FR) = Number of failures / Execution time
4.9 Readability metrics
Walston and Felix (1977) defined a ratio of documentation pages to LOC as:

D = 49 L^1.01

Where D is the number of pages of documentation and L is the number of thousands of lines of code.
4.10 Metrics-based estimation models
1. COCOMO model
Most of the models presented in this subsection are estimators of the effort needed to produce a software product. Probably the best-known estimation model is Boehm's COCOMO model (Boehm, 1981). The first is the basic model, a single-value model that computes software development effort and cost as a function of program size expressed in estimated lines of code (LOC). The second COCOMO model computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel, and project attributes.
The basic COCOMO equations are:

E = a_i × (KLOC)^b_i,  D = c_i × E^d_i

Where E is the effort applied in person-months and D is the development time in chronological months. The coefficients a_i and c_i and the exponents b_i and d_i are given in Table 14.
Table 14: Basic COCOMO

Software project    a_i    b_i    c_i    d_i
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
The second COCOMO model has some special features that distinguish it from other models; it is very widely used and its results are comparatively accurate. The equations used to estimate effort and schedule are given in Khatibi and Jawawi (2011).
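The basic COCOMO equations can be sketched as follows, using the coefficients of Table 14 (the 32 KLOC organic project is a hypothetical example, not a case study from the paper):

```python
# (a_i, b_i, c_i, d_i) per project mode, as in Boehm's basic COCOMO (Table 14)
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, duration in chronological months)."""
    a, b, c, d = COCOMO_COEFFS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

# Hypothetical 32 KLOC organic project: roughly 91 person-months over 14 months
effort, duration = basic_cocomo(32.0)
```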
2. Putnam estimation model
The Putnam estimation model (Putnam, 1978; Kemerer, 2008) assumes a specific distribution of effort over the software development project, described by the Rayleigh-Norden curve. The equation is:

L = c_k × K^(1/3) × t_d^(4/3)

Where c_k is the state-of-technology constant (the environment indicator), K is the effort expended (in person-years) over the whole life cycle, and t_d is the development time in years. A c_k value ranging from 2,000 for a poor environment to 11,000 for an excellent one is used (Pressman, 1988).
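In practice the software equation is usually inverted: given a target size and schedule, solve for the life-cycle effort K. A sketch, with hypothetical size, environment constant, and schedule:

```python
def putnam_effort(loc: float, c_k: float, t_d: float) -> float:
    """Invert L = c_k * K**(1/3) * t_d**(4/3) for K (effort in person-years)."""
    return (loc / (c_k * t_d ** (4.0 / 3.0))) ** 3

# Hypothetical: 50,000 LOC, average environment (c_k = 5000), 2-year schedule
k = putnam_effort(50_000, 5000, 2.0)  # roughly 62 person-years
```

Because K varies with the inverse fourth power of t_d in Putnam's model, even a modest schedule compression inflates the effort estimate sharply.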
3. Source lines of code
SLOC is an estimation parameter that counts all program instructions and data definitions but excludes comments, blanks, and continuation lines. Since SLOC is computed from language instructions, comparing the sizes of software written in different languages is difficult. SLOC is usually estimated by considering LS as the lowest, HS as the highest, and MS as the most probable size (Pressman, 2005):

S = (LS + 4 MS + HS) / 6
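This is the familiar three-point (PERT-style) estimate; a sketch with hypothetical low, likely, and high size guesses:

```python
def expected_sloc(ls: float, ms: float, hs: float) -> float:
    """Expected size S = (LS + 4*MS + HS) / 6 from three-point estimates."""
    return (ls + 4 * ms + hs) / 6

# Hypothetical estimates: 4,600 (lowest), 6,900 (most probable), 8,600 (highest)
s = expected_sloc(4600, 6900, 8600)  # 6800.0 expected lines
```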
4. Productivity estimation model
Walston and Felix (1977) give a productivity estimator of a form similar to their documentation metric. Programming productivity is defined as the ratio of the delivered source lines of code to the total effort in person-months required to produce the delivered product:

E = 5.2 L^0.91

Where E is the total effort in person-months and L is the number of thousands of lines of code.
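The two Walston-Felix estimators (documentation pages from section 4.9 and effort above) can be sketched together; the 10 KLOC input is a hypothetical program size:

```python
def wf_documentation_pages(kloc: float) -> float:
    """Walston-Felix documentation estimate: D = 49 * L**1.01 pages."""
    return 49 * kloc ** 1.01

def wf_effort_pm(kloc: float) -> float:
    """Walston-Felix effort estimate: E = 5.2 * L**0.91 person-months."""
    return 5.2 * kloc ** 0.91

# Hypothetical 10 KLOC program: about 500 pages of documentation, ~42 person-months
pages = wf_documentation_pages(10)
effort = wf_effort_pm(10)
```

The sub-linear exponent 0.91 reflects the economy of scale Walston and Felix observed in their IBM project data; the documentation exponent 1.01 is nearly linear in size.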
4.11 Metrics for software maintenance
During the maintenance phase, the following metrics are very important (Kan, 2002):
・Fix backlog and backlog management index
・Fix response time and fix responsiveness
・Percent delinquent fixes
・Fix quality

The fix backlog is a workload statement for software maintenance. The backlog of open, unresolved problems is managed with the backlog management index (BMI). If the BMI is larger than 100, the backlog is being reduced; if the BMI is less than 100, the backlog has increased.

BMI = (Number of problems closed during the month / Number of problem arrivals during the month) × 100%
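A sketch of the BMI calculation, with hypothetical monthly counts:

```python
def backlog_management_index(closed: int, arrivals: int) -> float:
    """BMI = problems closed during the month / problem arrivals, as a percentage.
    Above 100: the backlog is shrinking. Below 100: the backlog is growing."""
    return closed / arrivals * 100

# Hypothetical month: 130 problems closed against 100 new arrivals
bmi = backlog_management_index(130, 100)  # 130.0, backlog shrinking
```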
4.12 Customer problem metrics
The customer problems metric can be regarded as an intermediate measurement between defect measures and customer satisfaction. The problems metric is usually expressed in terms of problems per user month (PUM). PUM is usually calculated for each month after the software is released to market, and also as monthly averages by user.
Several metrics with slight variations can be constructed and used, depending on the purpose of analysis, for example (Kan, 2002; Basili and Weiss, 1984; Daskalantonakis, 1992):
・Percent of completely satisfied customers.
・Percent of satisfied customers (satisfied and completely satisfied).
・Percent of dissatisfied customers (dissatisfied and completely dissatisfied).
・Percent of non-satisfied customers (neutral, dissatisfied, and completely dissatisfied).
・Customer-found defects (CFD) total:

CFD total = Number of customer-found defects / Assembly-equivalent total source size

・Customer-found defects (CFD) delta:

CFD delta = Number of customer-found defects caused by incremental software development / Assembly-equivalent total source size

PUM = Total problems that customers reported (true defects and non-defect-oriented problems) for a time period / Total number of license-months of the software during the period

Where Number of license-months = Number of installed licenses of the software × Number of months in the calculation period.
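A sketch of the PUM calculation from its two definitions above; the license and problem counts are hypothetical:

```python
def problems_per_user_month(total_problems: int,
                            installed_licenses: int,
                            months: int) -> float:
    """PUM = reported problems / license-months, where
    license-months = installed licenses * months in the period."""
    license_months = installed_licenses * months
    return total_problems / license_months

# Hypothetical quarter: 90 reported problems, 1,500 installed licenses, 3 months
pum = problems_per_user_month(90, 1500, 3)  # 0.02 problems per license-month
```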
4.13 Test product and process metrics
Test process metrics provide information about preparation for testing, test execution, and test progress (Farooq et al. 2011). Some testing metrics (Premal and Kale, 2011; Kuar et al. 2007; Farooq et al. 2011) are the following:
1. Number of test cases designed
2. Number of test cases executed
3. Defect acceptance DA = (Number of defects rejected / Total number of defects) × 100%
4. Bad fix defect = (Number of bad fix defects / Total number of valid defects) × 100%
5. Test case defect density = (Number of failed tests / Number of executed test cases) × 100
6. Total actual execution time / total estimated execution time
7. Average execution time of a test case
Test product metrics provide information about the test state and testing status of a software product. Using these metrics we can measure a product's test state and indicative quality level, which is useful for the product release decision (Farooq et al. 2011).
1. Test efficiency = (DT / (DT + DU)) × 100
2. Test effectiveness = (DT / (DF + DU)) × 100
3. Test improvement (TI) = number of defects detected by the test team / source lines of code in thousands
4. Test time over development time (TD) = number of business days used for product testing / number of business days used for product development
5. Test cost normalized to product size (TCS) = total cost of testing the product in dollars / source lines of code in thousands
6. Test cost as a ratio of development cost (TCD) = total cost of testing the product in dollars / total cost of developing the product in dollars
7. Test improvement in product quality = number of defects found in the product after release / source lines of code in thousands
8. Cost per defect unit = total cost of a specific test phase in dollars / number of defects found in the product after release
9. Test effectiveness for driving out defects in each test phase = (DD / (DD + DN)) × 100
10. Performance test efficiency (PTE) = requirements met during the performance test / (requirements met during the performance test + requirements met after signoff of the performance test) × 100%
11. Estimated time for testing
12. Actual testing time
13. % of time spent = (actual time spent / estimated time) × 100

Where
DD: number of defects of this defect type (any particular type) detected during the test phase.
DT: number of defects found by the test team during the product cycle.
DU: number of defects found in the product under test (before official release).
DF: number of defects found in the product after release.
DN: number of defects of this defect type that remain uncovered after the test phase.
4.14 Methods of statistical analysis
Revisions of software measurement methods, developed with the purpose of improving their consistency, must be empirically evaluated to determine to what extent the pursued goal is fulfilled. The most widely used statistical methods are listed below (Lei and Smith, 2003; Pandian, 2004; Juristo and Moreno, 2003; Dumake et al. 2002; Dao et al. 2002; Fenton et al. 2002). Some commonly used statistical methods (including nonparametric tests) are discussed as follows:
1. Ordinary least squares regression models: the ordinary least squares (OLS) regression model is used to predict subsystem defects or defect densities.
2. Poisson models: Poisson analysis is applied to defect analysis for library unit aggregation.
3. Binomial analysis: calculating the probability of defect injection.
4. Ordered response models: defect proneness.
5. Proportional hazards models: failure analysis incorporating software characteristics.
6. Factor analysis: evaluation of design languages based on code measurement.
7. Bayesian networks: analysis of the relationship between defects detected during test and residual defects delivered.
8. Spearman rank correlation coefficient: Spearman's coefficient can be used when both the dependent (outcome; response) variable and the independent (predictor) variable are ordinal numeric, or when one variable is ordinal numeric and the other is continuous.
9. Pearson or multiple correlation: Pearson correlation is widely used in statistics to measure the degree of relationship between linearly related variables. For the Pearson correlation, both variables should be normally distributed.
10. Mann-Whitney U test: a non-parametric statistical hypothesis test for assessing whether one of two samples of independent observations tends to have larger values than the other.
11. Wald-Wolfowitz two-sample run test: used to examine whether two samples come from populations having the same distribution.
12. Median test for two samples: to test whether or not two samples come from the same population, the median test is used. It is more efficient than the run test, but each sample should be of size 10 at least.
13. Sign test for matched pairs: when one member of the pair is associated with treatment A and the other with treatment B, the sign test has wide applicability.
14. Run test for randomness: used to examine whether or not a set of observations constitutes a random sample from an infinite population. Testing randomness is of major importance because the assumption of randomness underlies statistical inference.
15. Wilcoxon signed rank test for matched pairs: where there is some kind of pairing between observations in two samples, ordinary two-sample tests are not appropriate.
16. Kolmogorov-Smirnov test: appropriate where there are unequal numbers of observations in two samples. This test is used to test whether there is any significant difference between two treatments A and B.
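As one concrete illustration of the list above, Spearman's rank correlation (item 8) can be computed in pure Python by ranking both variables (averaging ranks over ties) and taking the Pearson correlation of the ranks. The module-complexity and defect-count data below are hypothetical:

```python
def _average_ranks(xs):
    """Rank values 1..n, assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with xs[order[i]]
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical: module complexity rank vs. defect count per module
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # 0.8
```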
5. SOFTWARE QUALITY METRICS IN THE QUALITY LIFE CYCLE WITH SOFTWARE QUALITY ASSURANCE

Software quality metrics focus on the quality aspects of product metrics, process metrics, maintenance metrics, customer metrics, and project metrics.
Product metrics are measures of the software product at any stage of its development, from requirements to installed system. Product metrics may measure the complexity of the software design, the size of the final program, or the number of pages of documentation produced. Process metrics are measures of the software development process, such as overall development time, the average level of experience of the programming staff, or the type of methodology used. Test process metrics provide information about preparation for testing, test execution, and test progress; examples are the number of test cases designed, the percentage of test cases executed, or the percentage of test cases that failed. Test product metrics provide information about the test state and testing status of a software product and are generated by execution and code fixes or deferment. Some test product metrics are the estimated time for testing, the average time interval between failures, or the time remaining to complete the testing.
The software maintenance phase tracks defect arrivals by time interval and customer problem calls. The following metrics are therefore very important: fix backlog and backlog management index, fix response time and fix responsiveness, percent delinquent fixes, and fix quality.
Subjective metrics may yield different values for a given measure from different measurers, since subjective judgment is involved in arriving at the measured value. An example of a subjective product metric is the classification of the software as "organic", "semi-detached", or "embedded", as required in the COCOMO cost estimation model (Boehm, 1981).
From the customer's perspective, it is bad enough to encounter functional defects when running a business on the software. The problems metric is usually expressed in terms of problems per user month (PUM). PUM is usually calculated for each month after the software is released to market, and also as monthly averages by user.
The customer problems metric can be regarded as an intermediate measurement between defect measures and customer satisfaction. To reduce customer problems, one has to reduce the functional defects in the products and improve other factors (usability, documentation, problem rediscovery, etc.). Table 15 presents software quality assurance with quality measure metrics in the quality life cycle.
Table 15: Software quality assurance with quality measure metrics in quality life cycle

Category: Project metrics. Description: describe the project's characteristics and execution. Software quality factors: resource allocation, review effectiveness, schedule performance, cost performance, team performance. Software quality measure metrics: product estimation model, project metrics, software process timetable metrics.

Category: Requirements gathering. Description: examine requirements. Software quality factors: completeness, correctness, testability. Software quality measure metrics: requirement specification.

Category: Product metrics. Description: describe the characteristics of the product.
  Product operation. Factors: correctness, reliability, efficiency, integrity, usability. Metrics: productivity metrics, execution efficiency.
  Product revision. Factors: maintainability, flexibility, testability. Metrics: software system independence, machine independence.
  Product transition. Factors: portability, reusability, interoperability. Metrics: software system independence.

Category: Process metrics. Description: describe the effectiveness and quality of the process that produces the software product.
  Requirements. Factors: understandability, volatility, traceability, model clarity. Metrics: function point metrics, requirement specification.
  Analysis and design. Factors: structure, component completeness, interface complexity, patterns, reliability. Metrics: complexity metrics, structural design, Kafura's information flow, MTBF.
  Code. Factors: complexity, maintainability, understandability, reusability, documentation. Metrics: Halstead's measure, cyclomatic measure, structured programming, Ejiogu's metrics.
  Testing. Factors: error removal effectiveness, correctness. Metrics: test effectiveness, test efficiency, test process metrics, test product metrics, error rate.
  Implementation. Factors: resource usage, completion rates, reliability. Metrics: reliability, software corrective maintenance productivity, process quality metrics.

Category: Ensure project deployment. Description: describe the customer satisfaction metrics. Software quality factors: usability, documentation, problem rediscovery. Software quality measure metrics: customer problem metrics, failure density metrics, productivity metrics, effectiveness metrics.

Category: Track support and change management. Description: describe the maintenance metrics.
  Changes. Factors: correctness, documentation, defect removal. Metrics: backlog management index, fix backlog.
  Support. Factors: completion rates, maintainability. Metrics: software maturity index, statistical metrics, readability metrics.

Source: this study
6. CONCLUSION

Software quality metrics focus on the quality aspects of product, process, and project. They are grouped into six categories in accordance with the software life cycle: project metrics, requirements gathering, product metrics, process metrics, ensuring project deployment (customer satisfaction metrics), and tracking support and change management (maintenance metrics). In order to understand the relationships among the criteria of software quality factors, we have discussed software quality models and standards, quality factors and quality criteria, and quality criteria and quality metrics. We discussed software quality metrics in detail, including Halstead's software metrics, McCabe's cyclomatic complexity metric, RADC's methodology, Albrecht's function point metric, Ejiogu's software metrics, Henry and Kafura's information flow metric, project metrics, reliability metrics, readability metrics, metrics-based estimation models, metrics for software maintenance, in-process quality metrics, customer problem metrics, test product and process metrics, and methods of statistical analysis. Based on these 15 software quality metrics, we give a table of software quality assurance with quality measure metrics in the quality life cycle, containing the software quality factors and the software quality measure metrics for each software development phase.
In order to continue improving software products, processes, and customer services, future research is needed to extend and improve the methodology so that metrics that have been validated on one project, using our criteria, become valid measures of quality on future software projects.
REFERENCES
1. Albrecht, AJ. and Gaffney, JE. Software function, source lines of code and development effort prediction: a software science validation. IEEE Transactions on Software Engineering. 1983; SE-9(6): 639-648.
2. Al-Qutaish, RE. Quality models in software engineering literature: An analytical and comparative study. Journal of American Science. 2010; 6(3): 166-175.
3. Arthur, LJ. Measuring Programmer Productivity and Software Quality. John Wiley & Sons, New York, 1985.
4. Basili, VR. and Weiss, DM. A methodology for collecting valid software engineering data. IEEE Transactions on Software Engineering. 1984; SE-10: 728-738.
5. Boehm, BW., Brown, JR. and Lipow, M. Quantitative evaluation of software quality. In Proceedings of the 2nd International Conference on Software Engineering. 1976; 592-605.
6. Boehm, BW., Brown, JR., Lipow, M., McLeod, G. and Merritt, M. Characteristics of Software Quality. North-Holland Publishing, Amsterdam, The Netherlands, 1978.
7. Boehm, BW. Software Engineering Economics. Prentice Hall, Englewood Cliffs, NJ, 1981.
8. Bowen, TP., Gray, BW. and Jay, TT. RADC-TR-85-37, RADC, Griffiss Air Force Base, NY, Volumes I, II, III, February 1985.
9. Cavano, JP. Software reliability measurement: Prediction, estimation, and assessment. Journal of Systems and Software. 1984; 4: 269-275.
10. Christensen, K., Fitsos, P. and Smith, CP. A perspective on software science. IBM Systems Journal. 1988; 29(4): 372-387.
11. Dao, M., Huchard, M., Libourel, T. and Leblanc, H. A new approach to factorization: introducing metrics. In Proceedings of the IEEE Symposium on Software Metrics (METRICS). 2002; June 4-7: 27-236.
12. Daskalantonakis, MK. A practical view of software measurement and implementation experience within Motorola. IEEE Transactions on Software Engineering. 1992; SE-18: 998-1010.
13. Dromey, RG. A model for software product quality. IEEE Transactions on Software Engineering. 1995; 21: 146-162.
14. Dromey, RG. Cornering the chimera (software quality). IEEE Software. 1996; 1: 33-43.
15. Drown, DJ., Khoshgoftaar, TM. and Seliya, N. Evolutionary sampling and software quality modeling of high-assurance systems. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans. 2009; 39(5): 1097-1107.
16. Dumake, R., Lother, M. and Wille, C. Situation and trends in software measurement: a statistical analysis of the SML metrics bibliography. In Dumke/Abran: Software Measurement and Estimation, Shaker Publisher. 2002; 298-514.
17. Esaki, K. System quality requirement and evaluation, importance of application of the ISO/IEC 25000 series. Global Perspectives of Engineering Management. 2013; 2(2): 52-59.
18. Farooq, SU., Quadri, SMK. and Ahmad, N. Software measurements and metrics: role in effective software testing. International Journal of Engineering Science and Technology. 2011; 3(1): 671-680.
19. Fenton, N., Krause, P. and Neil, M. Software measurement: Uncertainty and causal modeling. IEEE Software. July/August 2002; 116-122.
20. Ejiogu, LO. A Unified Theory of Software Metrics. Softmetrix, Inc., Chicago, IL, 1988; 232-238.
21. Ejiogu, LO. Beyond Structured Programming: An Introduction to the Principles of Applied Software Metrics. Structured Programming, Springer-Verlag, NY, 1990.
22. Grady, RB. Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, 1992.
23. Halstead, MH. Elements of Software Science. North-Holland, New York, 1978.
24. Henry, S. and Kafura, D. The evaluation of software systems' structure using quantitative software metrics. Software: Practice and Experience. 1984; 14(6): 561-573.
25. IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std 982.1 (1988). http://www.standards.ieee.org/reading/ieee/std_public/description/se/982.1-1988_desc.html
26. ISO/IEC 25010: Software engineering: system and software quality requirements and evaluation (SQuaRE): system and software quality models, 2011.
27. ISO 9001:2001, Quality management systems: Requirements, 2001.
28. ISO 9001:2005, Quality management systems: Fundamentals and vocabulary, 2005.
29. Jacobson, I., Booch, G. and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley, 1999.
30. Juristo, N. and Moreno, AM. Basics of Software Engineering Experimentation. Kluwer Academic Publishers, Boston, 2003.
31. Kan, SH. Metrics and Models in Software Quality Engineering, chapter 4: Software quality metrics overview. Addison-Wesley Professional, 2002.
32. Kemerer, CF. An empirical validation of software cost estimation models. Communications of the ACM. 2008; 30(5): 416-429.
33. Kemerer, CF. and Porter, BS. Improving the reliability of function point measurement: an empirical study. IEEE Transactions on Software Engineering. 1992; 18(11): 1011-1024.
34. Khatibi, V. and Jawawi, DNA. Software cost estimation methods: a review. Journal of Emerging Trends in Computing and Information Sciences. 2011; 2(1): 21-29.
35. Kruchten, P. The Rational Unified Process: An Introduction. Addison-Wesley, 2000.
36. Kuar, A., Suri, B. and Sharma, A. Software testing product metrics: a survey. In Proceedings of the National Conference on Challenges & Opportunities in Information Technology, RIMT-IET, Mandi Gobindgarh, March 23, 2007.
37. Lei, S. and Smith, MR. Evaluation of several non-parametric bootstrap methods to estimate confidence intervals for software metrics. IEEE Transactions on Software Engineering. 2003; 29(1): 996-1004.
38. Ma, Y., He, K., Du, D., Liu, J. and Yan, Y. A complexity metrics set for large-scale object-oriented software systems. In Proceedings of the Sixth IEEE International Conference on Computer and Information Technology, Washington, DC, USA. 2006; 189-189.
39. McCabe, TJ. A complexity measure. IEEE Transactions on Software Engineering. 1976; SE-2(4): 308-320.
40. McCall, JA., Richards, PK. and Walters, GF. Factors in software quality. RADC TR-77-369 (Rome: Rome Air Development Center), 1, November 1977.
41. PMBOK, A Guide to the Project Management Body of Knowledge. Project Management Institute Standards Committee, 2002.
42. Pandian, CR. Software Metrics: A Guide to Planning, Analysis, and Application. CRC Press Company, 2004.
43. Premal, BN. and Kale, KV. A brief overview of software testing metrics. International Journal of Computer Science and Engineering. 2011; 1(3/1): 204-211.
44. Pressman, RS. Making Software Engineering Happen: A Guide for Instituting the Technology. Prentice Hall, New Jersey, 1988.
45. Putnam, LH. A general empirical solution to the macro software sizing and estimating problem. IEEE Transactions on Software Engineering. 1978; SE-4(4): 345-361.
46. Shanthi, PM. and Duraiswamy, K. An empirical validation of software quality metric suites on open source software for fault-proneness prediction in object oriented systems. European Journal of Scientific Research. 2011; 5(2): 168-181.
47. SQM, Software quality metrics. http://www.cs.nott.ac.uk/~cah'G53QAT11pdf6up.pdf
48. Tomar, AB. and Thakare, VM. A systematic study of software quality models. International Journal of Software Engineering & Applications. 2011; 12(4): 61-70.
49. Vennila, G., Anitha, P., Karthik, R. and Krishnamoorthy, P. A study of evaluation information to determine the software quality assurance. International Journal of Research and Reviews in Software Engineering. 2011; 1(1): 1-8.
50. Walston, CE. and Felix, CP. A method of programming measurement and estimation. IBM Systems Journal. 1977; 16: 54-73.