Page 1

Proposed Metrics Definition Highlights

Raymond Madachy Naval Postgraduate School

rjmadach@nps.edu

CSSE Annual Research Review

March 8, 2010

Page 2

Agenda

• Data Analysis Issues

• Software Sizing Definitions

• Recent Workshop Results

• Conclusions

Page 3

DoD Empirical Data

• Data quality and standardization issues
– No reporting of Equivalent Size Inputs – CM, DM, IM, SU, AA, UNFM, Type

– No common SLOC reporting – logical, physical, etc.

– No standard definitions – Application Domain, Build, Increment, Spiral,…

– No common effort reporting – analysis, design, code, test, CM, QA,…

– No common code counting tool

– Product size only reported in lines of code

– No reporting of quality measures – defect density, defect containment, etc.

• Limited empirical research within DoD on other contributors to productivity besides effort and size:

– Operating Environment, Application Domain, and Product Complexity

– Personnel Capability

– Required Reliability

– Quality – Defect Density, Defect Containment

– Integrating code from previous deliveries – Builds, Spirals, Increments, etc.

– Converting to Equivalent SLOC

• Categories like Modified, Reused, Adopted, Managed, and Used add no value unless they translate into single or unique narrow ranges of DM, CM, and IM parameter values. We have seen no empirical evidence that they do…

Page 4

SRDR Data Source

Software Resources Data Report: Final Developer Report – Sample (DD Form 2630-3, Page 1 of 2)
Report Context, Project Description and Size

Report Context
1. System/Element Name (version/release)
2. Report As Of
3. Authorizing Vehicle (MOU, contract/amendment, etc.)
4. Reporting Event: Contract/Release End; Submission # ________ (Supersedes # _______, if applicable)

Part 1 – Description of Actual Development Organization
5. Development Organization
6. Certified CMM Level (or equivalent)
7. Certification Date
8. Lead Evaluator
9. Affiliation
10. Precedents (list up to five similar systems by the same organization or team)
Comments on Part 1 responses

Part 2 – Product and Development Description (percent of product size; upgrade or new?; actual development process)
1. Primary Application Type and its percent of product size
17. Primary Language Used and its percent of product size
21. List COTS/GOTS Applications Used
22. Peak staff (maximum team size in FTE) that worked on and charged to this project: __________
23. Percent of personnel that was: Highly experienced in domain ___%; Nominally experienced ___%; Entry level, no experience ___%
Comments on Part 2 responses

Part 3 – Product Size Reporting (provide actuals at final delivery)
1. Number of Software Requirements, not including External Interface Requirements (unless noted in associated Data Dictionary)
2. Number of External Interface Requirements (i.e., not under project control)
3. Amount of Requirements Volatility encountered during development (1 = Very Low .. 5 = Very High)
4. Amount of New Code developed and delivered (Size in __________)
5. Amount of Modified Code developed and delivered (Size in __________)
6. Amount of Unmodified, Reused Code developed and delivered (Size in __________)
Code Size Measures for items 4 through 6 – for each, indicate S for physical SLOC (carriage returns); Snc for noncomment SLOC only; LS for logical statements; or provide abbreviation _________ and explain in associated Data Dictionary
Comments on Part 3 responses

Page 5

Data Collection and Analysis

• Approach

– Be sensitive to the application domain
– Embrace the full life cycle and Incremental Commitment Model

• Be able to collect data by phase, project and/or build or increment

• Items to collect
– SLOC reporting – logical, physical, NCSS, etc.
– Requirements Volatility and Reuse

• Modified or Adopted using DM, CM, IM; SU, UNFM as appropriate

– Definitions for Application Types, Development Phase, Lifecycle Model,…

– Effort reporting – phase and activity
– Quality measures – defects, MTBF, etc.

Page 6

Data Normalization Strategy

• Interview program offices and developers to obtain additional information not captured in SRDRs (see the record sketch after this list):

– Modification Type – auto generated, re-hosted, translated, modified

– Source – in-house, third party, Prior Build, Prior Spiral, etc.

– Degree-of-Modification – %DM, %CM, %IM; SU, UNFM as appropriate

– Requirements Volatility – % of ESLOC reworked or deleted due to requirements volatility

– Method – Model Driven Architecture, Object-Oriented, Traditional

– Cost Model Parameters – True S, SEER, COCOMO, SLIM
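The interview fields above could be collected in a simple record per SRDR entry. The sketch below is illustrative only; the field names and value sets are assumptions for this example, not SRDR or metrics-manual definitions.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record for one normalized SRDR entry with the interview fields
# listed above. Field names and value sets are assumptions for this sketch,
# not official SRDR or AFCAA manual definitions.
@dataclass
class NormalizedSizeRecord:
    modification_type: str           # e.g. "auto-generated", "re-hosted", "translated", "modified"
    source: str                      # e.g. "in-house", "third party", "prior build", "prior spiral"
    dm_pct: float                    # %DM: percent of design modified
    cm_pct: float                    # %CM: percent of code modified
    im_pct: float                    # %IM: percent of integration required vs. new code
    su: Optional[float] = None       # Software Understandability, when applicable
    unfm: Optional[float] = None     # Unfamiliarity, when applicable
    req_volatility_pct: float = 0.0  # % of ESLOC reworked or deleted due to requirements volatility
    method: str = "Traditional"      # "Model Driven Architecture", "Object-Oriented", "Traditional"
    cost_model: str = "COCOMO"       # "True S", "SEER", "COCOMO", "SLIM"

# Example: a modified, in-house component carried over from a prior build.
record = NormalizedSizeRecord(
    modification_type="modified", source="prior build",
    dm_pct=20, cm_pct=35, im_pct=50, su=30, unfm=0.4,
    req_volatility_pct=10, method="Object-Oriented")
```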

Page 7

Agenda

• Data Analysis Issues

• Software Sizing Definitions

• Recent Workshop Results

• Conclusions

Page 8

Size Issues and Definitions

• An accurate size estimate is the most important input to parametric cost models.

• Desire consistent size definitions and measurements across different models and programming languages

• The sizing chapters address these:

– Common size measures defined and interpreted for all the models

– Guidelines for estimating software size

– Guidelines to convert size inputs between models so projects can be represented in a consistent manner

• Using Source Lines of Code (SLOC) as common measure
– Logical source statements consisting of data declarations and executables (see the counting sketch after this list)

– Rules for considering statement type, how produced, origin, build, etc.

– Providing automated code counting tools adhering to definition

– Providing conversion guidelines for physical statements

• Addressing other size units such as requirements, use cases, etc.
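To illustrate the difference between physical and logical counts, the sketch below computes rough physical, noncomment, and logical statement counts for C-like source. It is a simplification for illustration, not the CodeCount tool or the manual's counting rules.

```python
import re

def sloc_counts(source: str) -> dict:
    """Rough physical/noncomment/logical counts for C-like source.

    A simplified illustration only: real counting tools (e.g. CodeCount)
    apply much more detailed inclusion rules.
    """
    physical = len(source.splitlines())  # physical SLOC: carriage-return count
    no_block_comments = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    noncomment = sum(
        1 for line in no_block_comments.splitlines()
        if line.strip() and not line.strip().startswith("//"))
    # Approximate logical statements: semicolon-terminated statements plus
    # block-opening constructs such as if/for/while/switch.
    logical = no_block_comments.count(";") + len(
        re.findall(r"\b(if|for|while|switch)\s*\(", no_block_comments))
    return {"physical": physical, "noncomment": noncomment, "logical": logical}

example = """/* demo */
int main(void) {
    int total = 0;              // running sum
    for (int i = 0; i < 10; i++) { total += i; }
    return total;
}
"""
print(sloc_counts(example))  # three different sizes for the same code
```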

Page 9

Sizing Framework Elements

• Core software size type definitions
– Standardized data collection definitions
• Measurements will be invariant across cost models and data collection venues
– Project data normalized to these definitions
• Translation tables for non-compliant data sources

• SLOC definition and inclusion rules
• Equivalent SLOC parameters
• Cost model Rosetta Stone size translations
• Other size unit conversions (e.g. function points, use cases, requirements)

Page 10

Core Software Size Types

Page 11

Equivalent SLOC – A User Perspective *

• “Equivalent” – A way of accounting for the work done to produce the software, expressed relative to the code-counted size of the delivered software

• “Source” lines of code: The number of logical statements prepared by the developer and used to generate the executing code

– Usual Third Generation Language (C, Java): count logical 3GL statements
– For Model-driven, Very High Level Language, or Macro-based development: count statements that generate customary 3GL code
– For maintenance above the 3GL level: count the generator statements
– For maintenance at the 3GL level: count the generated 3GL statements (these cases are summarized in the sketch below)

• Two primary effects: Volatility and Reuse
– Volatility: % of ESLOC reworked or deleted due to requirements volatility
– Reuse: either with modification (modified) or without modification (adopted)

* Stutzke, Richard D., Estimating Software-Intensive Systems, Upper Saddle River, NJ: Addison Wesley, 2005
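The counting choices above reduce to four cases. The helper below simply encodes them as stated; the function and argument names are chosen for this sketch.

```python
def statements_to_count(activity: str, level: str) -> str:
    """Return which statements to count, per the rules above.

    activity: "development" or "maintenance"
    level: "3GL" for conventional 3GL code (C, Java, ...);
           "above-3GL" for model-driven, VHLL, or macro-based sources.
    """
    if activity == "development":
        # 3GL development counts logical 3GL statements; generator-level
        # development counts the statements that generate the 3GL code.
        return "logical 3GL statements" if level == "3GL" else "generator statements"
    if activity == "maintenance":
        # Maintenance counts statements at the level actually being maintained.
        return "generated 3GL statements" if level == "3GL" else "generator statements"
    raise ValueError(f"unknown activity: {activity!r}")

print(statements_to_count("development", "above-3GL"))  # -> generator statements
print(statements_to_count("maintenance", "3GL"))         # -> generated 3GL statements
```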

Page 12

Adapted Software Parameters

• For adapted software, apply the parameters:
– DM: % of design modified
– CM: % of code modified
– IM: % of integration required compared to integrating new code
– Normal Reuse Adjustment Factor RAF = 0.4*DM + 0.3*CM + 0.3*IM (see the sketch after this list)

• Reused software has DM = CM = 0.

• Modified software has CM > 0. Since data indicates that the RAF factor tends to underestimate modification effort due to added software understanding effects, two other factors are used:
– Software Understandability (SU): How understandable is the software to be modified?
– Unfamiliarity (UNFM): How unfamiliar with the software to be modified is the person modifying it?
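A minimal sketch of how these parameters combine into equivalent size, assuming the COCOMO II reuse formulation from the referenced Boehm et al. book; the slide itself defines only the RAF weights and the SU/UNFM effects.

```python
def equivalent_sloc(adapted_sloc: float, dm: float, cm: float, im: float,
                    su: float = 0.0, unfm: float = 0.0, aa: float = 0.0) -> float:
    """Equivalent SLOC for adapted code.

    dm, cm, im, su, aa are percentages (0-100); unfm is a 0-1 rating.
    A sketch assuming the COCOMO II reuse model from the cited references;
    not the only possible formulation of the slide's RAF weighting.
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im   # the RAF/AAF weighting from the slide
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return adapted_sloc * aam

# Example: 10,000 adapted SLOC with 20% design, 30% code, and 50% integration
# modification, plus moderate understanding/unfamiliarity effects.
print(round(equivalent_sloc(10_000, dm=20, cm=30, im=50, su=30, unfm=0.4)))
```

With these illustrative values the 10,000 adapted SLOC count as roughly 3,970 equivalent SLOC.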

Page 13

SLOC Inclusion Rules

Page 14

Equivalent SLOC Rules

Two tables of inclusion rules, organized by how the code was produced in development or by its source (New, Reused, Modified, Generated, Converted, COTS, Volatility), each with Includes and Excludes columns:

• Equivalent SLOC Rules for Development – for generated code, generator statements are included and the 3GL generated statements are excluded.

• Equivalent SLOC Rules for Maintenance – generator statements are included if the 3GL generated statements were not modified in development (excluded if they were); the 3GL generated statements are included if they were modified in development (excluded if not).

Page 15

Cost Model Size Inputs

Page 16

Agenda

• Data Analysis Issues

• Software Sizing Definitions

• Recent Workshop Results

• Conclusions

Page 17

Software Size Type Results

• Discussions forced clarification of categories and crisper definitions

• Practical sizing guidance captured in adaptation parameter ranges
– E.g., the maximum values at which adapted code would instead be replaced with new software identify the range tops

• Created model-agnostic AAF weight ranges

• Added sub-categories for generated, converted and translated code to distinguish what is handled when applying equivalent size
– Generator statements vs. generated statements

– Translated as-is vs. optimized

– Converted as-is vs. optimized


Page 18

Software Size Type Results (cont.)

• Category additions affected SLOC inclusion rules

• Practical guidance and updated adaptation parameter ranges included in the AFCAA Software Cost Estimation Metrics Manual

• Change request for CodeCount to flag and count moved code


Page 19

Modified Code Exercise Results


* If DM or CM is greater than 50%, start over with new code
** IM could be driven by safety-critical applications or environments with high reliability requirements

Page 20

Agenda

• Data Analysis Issues

• Software Sizing Definitions

• Recent Workshop Results

• Conclusions

Page 21

Next Steps

• Create worked-out exercises for different cases exhibited in sizing rules

• Analyze existing data to find empirical value ranges for the reuse parameters for each size type across application domains


Page 22

Concluding Remarks

• Goal is to publish a manual to help analysts develop quick software estimates using empirical metrics from recent programs

• Additional information is crucial for improving data quality across DoD

• We want your input on Productivity Domains and Data Definitions

• Looking for collaborators
• Looking for peer reviewers
• Need more data

Page 23

References

• United States Department of Defense (DoD), “Instruction 5000.2, Operation of the Defense Acquisition System”, December 2008.

• W. Rosa, B. Clark, R. Madachy, D. Reifer, and B. Boehm, “Software Cost Metrics Manual”, Proceedings of the 42nd Department of Defense Cost Analysis Symposium, February 2009.

• B. Boehm, “Future Challenges for Systems and Software Cost Estimation”, Proceedings of the 13th Annual Practical Software and Systems Measurement Users’ Group Conference, June 2009.

• B. Boehm, C. Abts, W. Brown, S. Chulani, B. Clark, E. Horowitz, R. Madachy, D. Reifer, and B. Steece, Software Cost Estimation with COCOMO II, Upper Saddle River, NJ: Prentice-Hall, 2000.

• R. Stutzke, Estimating Software-Intensive Systems, Upper Saddle River, NJ: Addison Wesley, 2005.

• R. Madachy and B. Boehm, “Comparative Analysis of COCOMO II, SEER-SEM and True-S Software Cost Models”, USC-CSSE-2008-816, University of Southern California Center for Systems and Software Engineering, 2008.

