Software Process Dynamics

USC CSCI 510: Software Management and Economics
November 18, 2009
Dr. Raymond Madachy, [email protected]
Agenda

• Introduction
• Example applications
  – Inspection model
  – Spiral hybrid process model for Software-Intensive System of Systems (SISOS)
  – Value-based product model with DoD analogs
• Backup slides
  – Introduction, background and examples
Research Background
• The evaluation of process strategies for the architecting and engineering of complex systems involves many interrelated factors.
• Effective systems and software engineering requires a balanced view of technology, mission or business goals, and people.
• System dynamics is a rich and integrative simulation framework used to quantify the complex interactions and the strategy tradeoffs between cost, schedule, quality and risk.
Systems and Software Engineering Challenges

• What to build? Why? How well?
  – Stakeholder needs balancing, mission/business case
• Who builds it? Where?
  – Staffing, organizing, outsourcing
• How to build it? When; in what order?
  – Construction processes, methods, tools, components, increments
• How to adapt to change?
  – In user needs, technology, marketplace
• How much is enough?
  – Functionality, quality, specifying, prototyping, test
The Software Process Dynamics Field

• 1991: Software Project Dynamics book, Abdel-Hamid and Madnick, MIT
  – A single model
• 1990s: Growing number of applications and commercial modeling tools
• 1998: Annual ProSim Workshop starts
• 1999: Refereed journal articles start
• 2000s: Many applications, few modeling principles, challenges of SISOS scalability and adaptability
• 2008: Software Process Dynamics book, Madachy, USC
  – Modeling techniques and principles, model building blocks, entire models, review of extensive model applications
Software Process Dynamics: Table of Contents

Part 1 – Fundamentals
  Chapter 1 – Introduction and Background
  Chapter 2 – The Modeling Process with System Dynamics
  Chapter 3 – Model Structures and Behaviors for Software Processes
Part 2 – Applications and Future Directions
  Chapter 4 – People Applications
  Chapter 5 – Process and Product Applications
  Chapter 6 – Project and Organization Applications
  Chapter 7 – Current and Future Directions
Appendices and References
  Appendix A – Introduction to Statistics of Simulation
  Appendix B – Annotated Bibliography
  Appendix C – Provided Models
System Dynamics Principles

• Major concepts
  – Defining problems dynamically, in terms of graphs over time
  – Striving for an endogenous, behavioral view of the significant dynamics of a system
  – Thinking of all real systems concepts as continuous quantities interconnected in information feedback loops and circular causality
  – Identifying independent levels in the system and their inflow and outflow rates
  – Formulating a model capable of reproducing the dynamic problem of concern by itself
  – Deriving understandings and applicable policy insights from the resulting model
  – Implementing changes resulting from model-based understandings and insights
• The continuous view
  – Individual events are not tracked
  – Entities are treated as aggregate quantities that flow through a system
System Dynamics Notation

• System represented by x'(t) = f(x, p)
  – x: vector of levels (state variables); p: set of parameters
• Legend: level, rate, auxiliary variable, source/sink, information link
• Example system: a defect generation rate feeds a level of undetected defects, which drains through a defect detection rate (governed by a defect detection efficiency) into a level of detected defects, and through a defect escape rate for defects that evade detection
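The level/rate notation above maps directly onto numerical integration. Below is a minimal sketch, not from the slides, that simulates the example defect system with Euler integration; the rate constants and the detection-efficiency value are illustrative assumptions.

```python
DT = 1.0                      # time step (days)
DETECTION_EFFICIENCY = 0.6    # fraction of outflowing defects detected (assumption)

undetected, detected, escaped = 0.0, 0.0, 0.0

for t in range(100):
    generation_rate = 2.0 if t < 50 else 0.0     # defects/day while developing
    outflow = 0.1 * undetected                   # first-order drain on the level
    detection_rate = DETECTION_EFFICIENCY * outflow
    escape_rate = (1.0 - DETECTION_EFFICIENCY) * outflow
    undetected += DT * (generation_rate - detection_rate - escape_rate)
    detected += DT * detection_rate
    escaped += DT * escape_rate

print(undetected + detected + escaped)   # conserves the 100 defects generated
```

Because every outflow from one level is an inflow to another (or to a sink), the levels always sum to the total generated, which is a useful sanity check on any stock-and-flow model.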
Model Elements
Model Elements (continued)
Agenda

• Introduction
• Example applications
  – Inspection model
  – Spiral hybrid process model for Software-Intensive System of Systems (SISOS)
  – Value-based product model with DoD analogs
• Backup slides
  – Introduction, background and examples
Inspection Model Example

• Research problem addressed
  – What are the dynamic effects on the process of performing inspections?
• Model used to evaluate the process quantitatively
  – Demonstrates effects of inspection practices on cost, schedule and quality throughout the lifecycle
  – Can experiment with changed processes before committing project resources
  – Benchmark process improvement
  – Support project planning and management
• Model parameters calibrated to Litton and JPL data
  – Error generation rates, inspection effort, efficiency, productivity, others
• Model validated against industrial data
System Diagram

System Diagram (continued)
Effects of Inspections

[Figure: total manpower rate over 300 days, comparing (1) with inspections and (2) without inspections]

• Qualitatively matches the generalized effort curves for both cases from Michael Fagan, "Advances in Software Inspections", IEEE Transactions on Software Engineering, July 1986
Inspection Policy Tradeoff Analysis

• Varying error generation rates shows diminishing returns from inspections:

[Figure: total effort (person-days, 3000 to 5500) vs. error generation rate (10 to 80 defects/KSLOC), with and without inspections]
Derivation of Phase-Specific Cost Driver

[Figure: effort multiplier (0.3 to 1.2) by phase (Rqts. and Product Design, Detailed Design, Code and Unit Test, Integration and Test) for Nominal, High, and Very High ratings]

Simulation parameters vs. COCOMO rating:

  design inspection practice   code inspection practice   Use of Inspections rating
  0                            0                          Nominal
  .5                           .5                         High
  1                            1                          Very High
Agenda

• Introduction
• Example applications
  – Inspection model
  – Spiral hybrid process model for Software-Intensive System of Systems (SISOS)
  – Value-based product model with DoD analogs
• Backup slides
  – Introduction, background and examples
Spiral Hybrid Process Introduction
• The spiral lifecycle is being extended to address new challenges for Software-Intensive Systems of Systems (SISOS), such as coping with rapid change while simultaneously assuring high dependability
• A hybrid plan-driven and agile process has been outlined to address these conflicting challenges with the need to rapidly field incremental capabilities
• A system-of-systems (SOS) integrates multiple independently-developed systems and is very large, dynamically evolving, unprecedented, with emergent requirements and behaviors
– However, traditional static approaches cannot capture dynamic feedback loops and interacting phenomena that cause real-world complexity (e.g. hybrid processes, project volatility, increment overlap and resource contention, schedule pressure, slippages, communication overhead, slack, etc.)
• A system dynamics model is being developed to assess the incremental hybrid process and support project decision-making
• Both the hybrid process and simulation model are being evolved on a very large scale incremental project for a SISOS (U.S. Army Future Combat Systems)
Future Combat Systems (FCS) Network
Scalable Spiral Model Increment Activities

[Figure: three-team increment cycle. Agile rebaselining for future increments handles rapid change, planning for foreseeable change and adapting to unforeseeable change, with concerns, artifacts and deferrals negotiated against resources; short, stabilized development of Increment N produces the Increment N baseline; V&V of Increment N provides continuous, high-assurance verification; Increment N then moves to transition/O&M]

• Organize development into plan-driven increments with stable specs
• Agile team watches for and assesses changes, then negotiates changes so the next increment hits the ground running
• Try to prevent usage feedback from destabilizing the current increment
• Three-team cycle plays out from one increment to the next
Spiral Hybrid Model Features
• Estimates cost and schedule for multiple increments of a hybrid process that uses three specialized teams (agile re-baseliners, developers, V&V’ers) per the scalable spiral model
• Considers changes due to external volatility and feedback from user-driven change requests
• Deferral policies and team sizes can be experimented with
• Includes tradeoffs between cost and the timing of changes within and across increments, length of deferral delays, and others
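As a toy illustration of the deferral-policy experimentation mentioned above, the sketch below routes each analyzed capability change either into the current increment or to successive ones. The deferral percentage, change sizes, and random routing are all assumptions for illustration, not the model's actual policy logic.

```python
import random

random.seed(7)                 # deterministic demo
CHANGE_DEFERRAL_PCT = 0.6      # fraction of changes deferred (assumption)

current_increment, deferred = [], []
for change_size in [3, 1, 4, 2, 5, 2, 1]:         # incoming capability changes
    if random.random() < CHANGE_DEFERRAL_PCT:
        deferred.append(change_size)              # to successive increments
    else:
        current_increment.append(change_size)     # absorbed by current increment

print(sum(current_increment), sum(deferred))      # scope routed each way
```

Varying the deferral percentage in such an experiment shifts scope between the current increment (more midstream cost) and future increments (more deferral delay), which is the tradeoff the slide describes.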
Model Input Control Panel
Model Overview

[Figure: model flow diagram. Required capabilities flow into developed capabilities and then V&V'ed capabilities; field issues feed back into capability changes, which are split by the change deferral % into non-deferrable changes to the current increment and deferred changes to successive increments. The agile rebaselining, development and V&V teams each have a level and an allocation rate; inputs include volatility trends, average change analysis effort, and development/V&V productivities; outputs include baseline and EAC construction effort and schedule]

• Built around a cyclic flow chain for capabilities
  – Arrayed for multiple increments
• Each team is represented with a level and a corresponding staff allocation rate
• Changes arrive aperiodically via the volatility trends time function and flow into the level for capability changes
• Changes are processed by the agile team and allocated to increments per the deferral policies
  – Constant or variable staffing for the agile team
• For each increment the required capabilities are developed into developed capabilities and then V&V'ed into V&V'ed capabilities
  – Productivities and team sizes for development and V&V are calculated with a Dynamic COCOMO variant and continuously updated for scope changes
  – Flow rates between capability changes and V&V'ed capabilities are bi-directional for capability "kickbacks" sent back up the chain
• User-driven changes from the field are identified as field issues that flow back into capability changes
Volatility Cost Functions

[Figures: volatility effort multiplier; lifecycle timing effort multiplier]

• The volatility effort multiplier for construction effort and schedule is an aggregate multiplier for volatility from different sources (e.g. COTS, mission) relative to the original baseline for the increment
• The lifecycle timing effort multiplier models the increased development cost the later a change arrives midstream during an increment
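A lifecycle timing effort multiplier can be sketched as a function of how far through the increment the change arrives. The linear 1.0-to-2.0 ramp below is an assumed shape for illustration, not the calibrated curve from the model.

```python
def timing_effort_multiplier(fraction_elapsed: float) -> float:
    """Effort multiplier for a change arriving at the given fraction (0..1)
    of the way through an increment; later changes cost more."""
    fraction_elapsed = min(max(fraction_elapsed, 0.0), 1.0)  # clamp to [0, 1]
    return 1.0 + 1.0 * fraction_elapsed   # assumed linear 1.0 -> 2.0 ramp

BASE_EFFORT = 10.0   # person-months for the change if made at increment start
print(timing_effort_multiplier(0.0) * BASE_EFFORT)   # change at the start
print(timing_effort_multiplier(0.8) * BASE_EFFORT)   # same change arriving late
```

Any monotone increasing shape works here; the key property is that the cost of the same change grows with its arrival time within the increment.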
Sample Response to Volatility

• An unanticipated change occurs at month 8, shown as a volatility trends [1] pulse
• It flows into capability changes [1], which declines to zero as the agile team processes the change
• The change is non-deferrable for increment 1, so total capabilities [1] increases
• The development team staff size dynamically responds to the increased scope

* [1] refers to increment #1

[Figure: 24-month run for increment 1 plotting volatility trends[1], capability changes[1], development team, and total capabilities[1]]
Sample Test Results

• Test case for two increments of 15 baseline capabilities each
• A non-deferrable change comes at month 8 (per previous slide)
• The agile team size is varied from 2 to 10 people
• Increment 1 mission value is also lost for agile team sizes of 2 and 4

[Figures: total effort (PM) and total schedule (months) vs. number of agile people (2-10); effort breakdown vs. number of agile people, showing Dev+V&V effort for increments 1 and 2 and agile effort]
Sample Test Results (cont.)

[Figure: cost (millions of dollars, $0-$90M) vs. number of agile people (2-20), showing labor cost, Increment 1 mission value loss, and total cost]
Spiral Hybrid Model Conclusions and Future Work

• System dynamics is a convenient modeling framework to deal with the complexities of a SISOS
• A hybrid process appears attractive to handle SISOS dynamic evolution, emergent requirements and behaviors
• Initial results indicate that having an agile team can help decrease overall cost and schedule
  – Model can help find the optimum balance
• Will obtain more empirical data to calibrate and parameterize the model, including volatility and change trends, change analysis effort, effort multipliers and field issue rates
• Model improvements
  – Additional staffing options
    • Rayleigh curve staffing profiles
    • Constraints on development and V&V staffing levels
  – More flexible change deferral options across increments
  – Increment volatility balancing policies
  – Provisions to account for (timed) business/mission value of capabilities
• Additional model experimentation
  – Include capabilities flowing back from developers and V&V'ers
  – Vary deferral policies and volatility patterns across increments
  – Compare different agile team staffing policies
• Continue applying the model on a current SISOS and seek other potential pilots
References

• Abdel-Hamid T, Madnick S, Software Project Dynamics, Englewood Cliffs, NJ, Prentice-Hall, 1991
• Boehm B, Huang L, Jain A, Madachy R, "The ROI of Software Dependability: The iDAVE Model", IEEE Software Special Issue on Return on Investment, May/June 2004
• Boehm B, Software Engineering Economics, Englewood Cliffs, NJ, Prentice-Hall, 1981
• Boehm B, Huang L, "Value-Based Software Engineering: A Case Study", IEEE Computer, March 2003
• Boehm B, Abts C, Brown AW, Chulani S, Clark B, Horowitz E, Madachy R, Reifer D, Steece B, Software Cost Estimation with COCOMO II, Prentice-Hall, 2000
• Boehm B, Turner R, Balancing Agility and Discipline, Addison-Wesley, 2003
• Boehm B, Brown AW, Basili V, Turner R, "Spiral Acquisition of Software-Intensive Systems of Systems", CrossTalk, May 2004
• Boehm B, "Some Future Trends and Implications for Systems and Software Engineering Processes", USC-CSE-TR-2005-507, 2005
• Brooks F, The Mythical Man-Month, Reading, MA, Addison-Wesley, 1975
• Chulani S, Boehm B, "Modeling Software Defect Introduction and Removal: COQUALMO (COnstructive QUALity MOdel)", USC-CSE Technical Report 99-510, 1999
• Forrester JW, Industrial Dynamics, Cambridge, MA, MIT Press, 1961
• Kellner M, Madachy R, Raffo D, "Software Process Simulation Modeling: Why? What? How?", Journal of Systems and Software, Spring 1999

References (cont.)

• Madachy R, A software project dynamics model for process cost, schedule and risk assessment, Ph.D. dissertation, Department of Industrial and Systems Engineering, USC, December 1994
• Madachy R, "System Dynamics and COCOMO: Complementary Modeling Paradigms", Proceedings of the Tenth International Forum on COCOMO and Software Cost Modeling, SEI, Pittsburgh, PA, 1995
• Madachy R, "System Dynamics Modeling of an Inspection-Based Process", Proceedings of the Eighteenth International Conference on Software Engineering, IEEE Computer Society Press, Berlin, Germany, March 1996
• Madachy R, Tarbet D, "Case Studies in Software Process Modeling with System Dynamics", Software Process Improvement and Practice, Spring 2000
• Madachy R, "Simulation in Software Engineering", Encyclopedia of Software Engineering, Second Edition, Wiley and Sons, Inc., New York, NY, 2001
• Madachy R, "Integrating Business Value and Software Process Modeling", Proceedings of SPW/ProSim 2005, Springer-Verlag, May 2005
• Madachy R, Boehm B, Lane J, "Spiral Lifecycle Increment Modeling for New Hybrid Processes", Journal of Systems and Software, 2007 (to be published)
• Madachy R, Software Process Dynamics, Wiley-IEEE Computer Society, 2008
• Reifer D, Making the Software Business Case, Addison-Wesley, 2002
• Richardson GP, Pugh A, Introduction to System Dynamics Modeling with DYNAMO, MIT Press, Cambridge, MA, 1981

USC Web Sites
• http://rcf.usc.edu/~madachy/spd
• http://csse.usc.edu/softwareprocessdynamics
• http://sunset.usc.edu/classes/cs599_99
Agenda

• Introduction
• Example applications
  – Inspection model
  – Spiral hybrid process model for Software-Intensive System of Systems (SISOS)
  – Value-based product model with DoD analogs
• Backup slides
  – Introduction, background and examples
Value-Based Model Background
• Purpose: Support software business decision-making by experimenting with product strategies and development practices to assess real earned value
• Description: System dynamics model relates the interactions between product specifications and investments, software processes including quality practices, market share, license retention, pricing and revenue generation for a commercial software enterprise
Model Features

• A Value-Based Software Engineering (VBSE) model covering the following VBSE elements:
  – Stakeholders' value proposition elicitation and reconciliation
  – Business case analysis
  – Value-based monitoring and control
• Integrated modeling of business value, software products and processes to help make difficult tradeoffs between perspectives
  – Value-based production functions used to relate different attributes
• Addresses the planning and control aspect of VBSE to manage the value delivered to stakeholders
  – Experiment with different strategies and track financial measures over time
  – Allows easy investigation of different strategy combinations
• Can be used dynamically before or during a project
  – User inputs and model factors can vary over the project duration, as opposed to a static model
  – Suitable for actual project usage or "flight simulation" training where simulations are interrupted to make midstream decisions
Model Sectors and Major Interfaces

• Software process and product sector computes the staffing and quality over time
• Market and sales sector accounts for market dynamics including the effect of quality reputation
• Finance sector computes financial measures from investments and revenues

[Figure: three sectors (Software Process and Product, Market and Sales, Finances) linked by staffing rate, product quality, sales revenue, and product specifications]
Software Process and Product

[Figure: sector diagram showing product defect flows (defect generation and removal rates, defect density, actual quality, Reliability Setting) and effort and schedule calculation with a Dynamic COCOMO variant (Function Points, effort multiplier, cumulative effort, estimated total effort, staffing rate, start staff, learning function, manpower buildup parameter)]
Finances, Market and Sales

[Figure: sector diagram showing investment and revenue flows (investment rate, cumulative investment, revenue generation rate, cumulative revenue, desired cash flow, ROI and desired ROI); software license sales (active licenses, new license selling rate, license expiration rate and fraction, average license price, potential CX percent of market); and market share dynamics including quality reputation (perceived quality adjusting to the current indicator of quality through a perception delay, market size multiplier, potential market share and its increase due to a new product, market share delay). Sample 5-year runs plot active licenses, new license selling rate, license expiration rate and potential market share, plus perceived quality vs. current indicator of quality]
Quality Assumptions

• COCOMO cost driver Required Software Reliability is a proxy for all quality practices
• Resulting quality modulates the actual sales relative to the highest potential
• Perception of quality in the market matters
  – Quality reputation is quickly lost and takes much longer to regain (bad news travels fast)
  – Modeled as asymmetrical information smoothing via a negative feedback loop
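The asymmetric smoothing described above can be sketched as a first-order delay whose time constant depends on the direction of change: short when quality drops, long when it recovers. The time constants and the quality-dip scenario below are illustrative assumptions, not the model's calibrated values.

```python
DT = 0.05             # years per step
FAST_ADJUST = 0.25    # years to absorb a quality drop (assumption)
SLOW_ADJUST = 2.0     # years to rebuild reputation (assumption)

perceived = 100.0
history = []
for step in range(100):                               # 5 years at DT = 0.05
    t = step * DT
    indicator = 40.0 if 1.0 <= t < 2.0 else 100.0     # quality dip in year two
    delay = FAST_ADJUST if indicator < perceived else SLOW_ADJUST
    perceived += DT * (indicator - perceived) / delay  # first-order smoothing
    history.append(perceived)

print(min(history), history[-1])   # reputation drops fast, recovers slowly
```

With a one-year dip, perceived quality collapses almost to the lower indicator but is still noticeably depressed three years after quality is restored, which reproduces the "bad news travels fast" behavior on the slide.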
[Figure: 5-year run of (1) perceived quality vs. (2) current indicator of quality]
Market Share Production Function and Feature Sets

[Figure: added market share percent (0%-25%) vs. cost ($0M-$8M), segmented into core features, high-payoff features, and features with diminishing returns; reference case (700 function points) vs. Cases 1 and 2 (550 function points); cases from Example 1]
DoD Analog: Mission Effectiveness Production Function and Feature Sets

[Figure: the same production function with the y-axis relabeled as added mission effectiveness percent]
Sales Production Function and Reliability

[Figure: percent of potential sales (30%-100%) vs. relative effort to achieve reliability (0.9-1.3) for required reliability settings Low, Nominal, High and Very High; reference case and Case 1 vs. Case 2; cases from Example 1]
DoD Analog: Product Illity Production Function

[Figure: the same production function with the x-axis relabeled as reliability or other product illity rating]
Example 1: Dynamically Changing Scope and Reliability

• Shows how the model can assess the effects of combined strategies by varying the scope and required reliability independently or simultaneously
• Simulates midstream descoping, a frequent strategy to meet time constraints by shedding features
• Three cases are demonstrated:
  – Unperturbed reference case
  – Midstream descoping of the reference case after ½ year
  – Simultaneous midstream descoping and lowered required reliability at ½ year
Control Panel and Simulation Results

[Figures: 5-year runs of (1) staffing rate, (2) market share and (3) ROI for the unperturbed reference case; Case 1 (descope); and Case 2 (descope + lower reliability)]
Case Summaries

Case                                              | Delivered Size (FP) | Delivered Reliability Setting | Cost ($M) | Delivery Time (Years) | Final Market Share | ROI
Reference Case: Unperturbed                       | 700                 | 1.0                           | 4.78      | 2.1                   | 28%                | 1.3
Case 1: Descope at Time = ½ year                  | 550                 | 1.0                           | 3.70      | 1.7                   | 28%                | 2.2
Case 2: Descope and Lower Reliability at ½ year   | 550                 | .92                           | 3.30      | 1.5                   | 12%                | 1.0
Example 2: Determining the Reliability Sweet Spot

• Analysis process
  – Vary reliability across runs
  – Use risk exposure framework to find process optimum
  – Assess risk consequences of opposing trends: market delays and bad quality losses
  – Sum market losses and development costs
  – Calculate resulting net revenue
• Simulation parameters
  – A new 80 KSLOC product release can potentially increase market share by 15%-30% (varied in model runs)
  – 75% schedule acceleration
  – Initial total market size = $64M annual revenue
    • Vendor has 15% of market
    • Overall market doubles in 5 years
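The risk exposure framework above amounts to summing opposing cost components per reliability setting and picking the minimum. The dollar figures below are illustrative assumptions chosen to show the shape of the analysis, not the slide's calibrated results.

```python
cases = {
    # setting: (development cost, market delay loss, bad quality loss), $M
    # all values are illustrative assumptions
    "Low":       (4.0,  1.0, 20.0),
    "Nominal":   (5.0,  3.0,  8.0),
    "High":      (6.5,  6.0,  3.0),
    "Very High": (9.0, 12.0,  1.0),
}

# Total risk exposure per setting; the sweet spot minimizes the total
totals = {setting: sum(costs) for setting, costs in cases.items()}
sweet_spot = min(totals, key=totals.get)
print(sweet_spot, totals[sweet_spot])
```

The structure makes the opposing trends explicit: higher reliability raises development cost and market delay loss while cutting bad quality loss, so the optimum sits where the curves cross rather than at either extreme.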
Cost Components

[Figure: cost ($0M-$35M) by software reliability setting (Low, Nominal, High, Very High) over a 3-year time horizon, showing development cost, market delay loss, bad quality loss, and total cost]
Value-Based Model Conclusions

• To achieve real earned value, business value attainment must be a key consideration when designing software products and processes
• Software enterprise decision-making can improve with information from simulation models that integrate business and technical perspectives
• Optimal policies operate within a multi-attribute decision space including various stakeholder value functions, opposing market factors and business constraints
• Risk exposure is a convenient framework for software decision analysis
• Commercial process sweet spots with respect to reliability are a balance between market delay losses and quality losses
• The model demonstrates a stakeholder value chain whereby the value of software to end-users ultimately translates into value for the software development organization
Value-Based Model Future Work

• Enhance the product defect model with a dynamic version of COQUALMO to enable more constructive insight into quality practices
• Add maintenance and operational support activities in the workflows
• Elaborate market and sales for other considerations including pricing scheme impacts, varying market assumptions and periodic upgrades of varying quality
• Account for feedback loops to generate product specifications (closed-loop control)
  – External feedback from users to incorporate new features
  – Internal feedback on product initiatives from the organizational planning and control entity to the software process
• More empirical data on attribute relationships in the model will help identify areas of improvement
• Assessment of overall dynamics includes more collection and analysis of field data on business value and quality measures from actual software product rollouts
Agenda

• Introduction
• Example applications
  – Inspection model
  – Spiral hybrid process model for Software-Intensive System of Systems (SISOS)
  – Value-based product model with DoD analogs
• Backup slides
  – Introduction, background and examples
Backup Slide Outline

• Research introduction
  – Processes and system dynamics
  – Example model structures and system behaviors
  – Brooks's Law model demonstration
  – Software Process Dynamics book chapters
• Examples
  – Inspection model supplement
  – Software cost and quality tradeoff simulation tool (NASA)
  – Process concurrence modeling
Terminology

• System: a grouping of parts that operate together for a common purpose; a subset of reality that is a focus of analysis
  – Open, closed
• Software process: a set of activities, methods, practices and transformations used by people to develop software
• Model: an abstract representation of reality
  – Static, dynamic; continuous, discrete
• Simulation: the numerical evaluation of a mathematical model
• System dynamics: a simulation methodology for modeling continuous systems; quantities are expressed as levels, rates and information links representing feedback loops
Why Model Systems?
• A system must be represented in some form in order to analyze it and communicate about it.
• The models are abstractions of real or conceptual systems used as surrogates for low cost experimentation and study.
• Models allow us to understand systems/processes by dividing them into parts and looking at how the parts are related.
• We resort to modeling and simulation because there are too many interdependent factors to be computed, and truly complex systems cannot be solved by analytical methods.
Software Process Models
• Used to quantitatively reason about, evaluate and optimize the software process.
• Demonstrate effects of process strategies on cost, schedule and quality throughout lifecycle and enable tradeoff analyses.
• Can experiment with changed processes via simulation before committing project resources.
• Provide interactive training for software managers; “process flight simulation”.
• Encapsulate our understanding of development processes (and support organizational learning).
• Benchmark process improvement when model parameters are calibrated to organizational data.
• Process modeling techniques can be used to evaluate other existing descriptive theories/models.
– Force clarifications, reveal discrepancies, unify fields
Process Modeling Characterization Matrix and Examples

[Table: model purpose (strategic management; planning; control and operational management; process improvement and technology adoption; understanding; training and learning) crossed with model scope (portion of lifecycle; development project; multiple concurrent projects; long-term product evolution; long-term organization). Example entries include product-line reuse strategies; stage-based cost/schedule estimation; staffing; project cost/schedule/quality estimation; reuse costs; stage tracking; earned value tracking; peer review optimization; peer review effects on project; inter-project reuse processes; product-line reuse processes; requirements volatility; core reuse dynamics; and managerial metrics training]

Example: Litton studies in [Madachy et al. 2000]
System Dynamics Approach

• Involves the following concepts [Richardson 91]
  – Defining problems dynamically, in terms of graphs over time
  – Striving for an endogenous, behavioral view of the significant dynamics of a system
  – Thinking of all real systems concepts as continuous quantities interconnected in information feedback loops and circular causality
  – Identifying independent levels in the system and their inflow and outflow rates
  – Formulating a model capable of reproducing the dynamic problem of concern by itself
  – Deriving understandings and applicable policy insights from the resulting model
  – Implementing changes resulting from model-based understandings and insights
• Dynamic behavior is a consequence of system structure
Systems Thinking

• A way to realize the structure of a system that leads to its behavior
• Systems thinking involves:
  – Thinking in circles and considering interdependencies
    • Closed-loop causality vs. straight-line thinking
  – Seeing the system as a cause rather than an effect
    • Internal vs. external orientation
  – Thinking dynamically rather than statically
    • Operational vs. correlational orientation
• Improvement through organizational learning takes place via shared mental models
• The power of models increases as they become more explicit and commonly understood by people
  – A context for interpreting and acting on data
• System dynamics is a methodology to implement systems thinking and leverage learning efforts
Software Processes and System Dynamics

• Software development and evolution are dynamic and complex processes
  – Interrelated technology, business, and people factors that keep changing
    • E.g. development methods and standards, reuse/COTS/open-source, product lines, distributed development, improvement initiatives, increasing product demands, operating environment, volatility, resource contention, schedule pressure, communication overhead, motivation, etc.
• System dynamics features
  – Provides a rich and integrative framework for capturing process phenomena and their relationships
  – Complex and interacting process effects are modeled using continuous flows interconnected in loops of information feedback and circular causality
  – Provides a global system perspective and the ability to analyze combined strategies
  – Can model inherent tradeoffs between schedule, cost and quality
  – Attractive for schedule analysis, accounting for critical path flows, task interdependencies and bottlenecks not available with static models or PERT/CPM methods
  – Enables low-cost process experimentation
• System dynamics is well-suited to deal with the complexities of software processes and their improvement strategies
Software Process Control System

[Figure: a software development or evolution project. Requirements, resources, etc. feed a software process that produces software artifacts, with internal project feedback and external feedback from the operational environment]
A Software Process
Modeling Process Overview

• Iterative and cyclic: problem definition, model conceptualization, model formulation, simulation, and policy analysis, leading to policy implementation and system understandings that feed back into problem definition
Modeling Stages and Concerns

[Figure: stages (problem definition, model conceptualization, model formulation, simulation, evaluation) paired with concerns such as context and symptoms, reference behavior modes, model purpose, system boundary, feedback structure, model representation, and model behavior]
63
The Continuous View
• Individual events are not tracked
• Entities are treated as aggregate quantities that flow through a system– can be described through differential equations
• Discrete approaches usually lack feedback, internal dynamics
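The continuous view above can be illustrated with a minimal sketch (not from the slides): a level (stock) integrated from a flow rate with Euler's method, the numerical scheme most system dynamics tools use. The rate function and values are illustrative.

```python
# A minimal sketch of the continuous view: a level (stock) accumulates
# a flow rate, integrated with Euler's method over small time steps.

def simulate_level(rate_fn, initial_level=0.0, dt=0.25, days=100):
    """Integrate d(level)/dt = rate_fn(level, t) and record the trajectory."""
    level, t = initial_level, 0.0
    history = []
    while t <= days:
        history.append((t, level))
        level += rate_fn(level, t) * dt  # Euler step: the level accumulates the flow
        t += dt
    return history

# A constant development rate of 2 tasks/day flowing into "completed tasks"
history = simulate_level(lambda level, t: 2.0, days=10)
print(history[-1])  # final (time, level)
```

Aggregate task flows, defect flows, and personnel flows in the models that follow are all instances of this same level/rate structure.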
64
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
65
Error Co-flows
tasks designed
design ratedesign errors
design error generation rate
design error density
66
Learning Curve
tasks completed
development rate
manpower ratelearning
productivity
job size
percentage complete
n
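The learning-curve structure above can be sketched as follows; the names (job size, learning exponent n) follow the slide, but the specific curve shape and parameter values are assumptions for illustration.

```python
# Hypothetical learning-curve sketch: productivity improves as a function
# of the percentage complete raised to a learning exponent n.

def run_project(job_size=100.0, base_productivity=1.0, n=0.3,
                manpower=5.0, dt=0.5):
    """Return the time to finish job_size tasks with progress-based learning."""
    completed, t = 0.0, 0.0
    while completed < job_size:
        pct_complete = completed / job_size
        productivity = base_productivity * (1.0 + pct_complete) ** n  # learning effect
        completed += manpower * productivity * dt                     # development rate
        t += dt
    return t

# With learning (n > 0) the project finishes sooner than without (n = 0)
print(run_project(n=0.3), run_project(n=0.0))
```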
67
Example Levels and Rates
developed software
software development rate
personnel
hiring rate attrition rate
use cases ready for review
use case generation rate use case review rate
cumulative cost
financial burn rate
defects
defect generation rate
defect density
requirements tasksdesign tasks
design rate
design productivity
design personnel
personnel
desired personnel
hiring rate
Levels
Rates
68
Example Auxiliaries
tasks developed
software development rate
productivity
productivity factor 1
personnel
progress variance
planned tasks to develop
actual tasks developed
69
Software Product Chain
tasks required
tasks designed
rqts generation rate design rate
tasks coded
coding rate
tasks tested
testing rate
Cycle time per task = transit time through relevant phase(s)
Cycle time per phase = completion time of last flowed entity - start time of first flowed entity
70
Error Detection and Rework Chain
• Cost/schedule/quality tradeoffs become available when defects are represented as levels, with the rework and testing effort and cycle time modeled as functions of those levels.
error generation rate
errors
undetected errors
detected errors
reworked errors
error detection rate
error escape rate
rework rate
cum rework effort
rework manpower rate
rework effort per error
x
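The chain above can be sketched as a small simulation; the structure names follow the slide, while the generation rate, detection fraction, and delay constants are illustrative assumptions. The co-flow invariant is that cumulative rework effort tracks reworked errors times the effort per error.

```python
# Sketch of the error detection and rework chain with a rework-effort co-flow.
# Parameter values (rates, delays) are assumed for illustration.

def rework_chain(error_gen_rate=4.0, detection_fraction=0.8,
                 rework_effort_per_error=0.5, dt=1.0, days=30):
    undetected = detected = reworked = escaped = 0.0
    cum_rework_effort = 0.0
    for _ in range(int(days / dt)):
        undetected += error_gen_rate * dt
        detect = detection_fraction * undetected / 5.0 * dt        # ~5-day detection delay
        escape = (1 - detection_fraction) * undetected / 5.0 * dt  # errors slip through
        undetected -= detect + escape
        detected += detect
        escaped += escape
        rework = detected / 2.0 * dt                               # ~2-day rework delay
        detected -= rework
        reworked += rework
        cum_rework_effort += rework * rework_effort_per_error      # effort co-flow
    return reworked, escaped, cum_rework_effort

reworked, escaped, effort = rework_chain()
print(reworked, escaped, effort)
```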
71
Personnel Chain
new workforce experienced workforce
hiring rate assimilation rate quit rate
new hire transfer rate experienced transfer rate
72
Feedback Loops
• A feedback loop is a closed path: a decision affects a level, and information about the level returns to the decision-making point to drive further action.
level
decision
73
Software Production Structure
• Combines task development and personnel chains.
• Production constrained by productivity and applied personnel resources.
personnel
hiring rate attrition rate
completed software
software development rate
productivity
74
Example Delay Structure and Behavior
• Delays are ubiquitous in processes and important components of feedback systems
outflow rate = level / delay time

[Structure: a level drained by an outflow rate governed by the delay time]
[Graph: the level decays exponentially from 10 toward 0 over 100 days]
75
Typical Behavior Patterns
[Graphs of a performance measure vs. time:]
– Goal-seeking behavior (approaches a goal asymptotically)
– Exponential growth
– S-shaped growth
– Oscillating
76
General System Behaviors
• Behaviors are representative of many known types of systems.
• Knowing how systems respond to given inputs is valuable intuition for the modeler
• Can be used during model assessment
– use test inputs to stimulate the system behavioral modes
77
System Order
• The order of a system refers to the number of levels contained.
• A single level system cannot oscillate, but a system with at least two levels can oscillate because one part of the system can be in disequilibrium.
78
Example System Behaviors
• Delays
• Goal-seeking Negative Feedback
– First-order Negative Feedback
– Second-order Negative Feedback
• Positive Feedback Growth or Decline
• S-curves
79
Delays
• Time delays are ubiquitous in processes
• They are important structural components of feedback systems.
• Example: hiring delays in software development
– the average hiring delay represents the time that a personnel requisition remains open before a new hire comes on board
personnel requisitions new hires
hiring rate
average hiring delay
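The hiring structure above is a first-order delay and can be sketched directly; the requisition count and delay value here are assumptions for illustration.

```python
# First-order delay sketch of the hiring structure: open requisitions drain
# into new hires at rate = level / average delay. Parameter values assumed.

def hiring_delay(requisitions=10.0, avg_hiring_delay=40.0, dt=1.0, days=120):
    on_board = 0.0
    for _ in range(int(days / dt)):
        hiring_rate = requisitions / avg_hiring_delay  # outflow = level / delay time
        requisitions -= hiring_rate * dt
        on_board += hiring_rate * dt
    return requisitions, on_board

remaining, hired = hiring_delay()
# After about three delay constants (~120 days), most requisitions are filled
print(remaining, hired)
```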
80
Third Order Delay
• A series of 1st-order delays
• Graphs show water levels over time in each tank (tank 1 starts full)
81
Delay Summary
[Table: output responses to pulse and step inputs for delay orders 1, 2, 3, and infinite (pipeline)]
82
Negative Feedback
• Negative feedback exhibits goal seeking behavior, or sometimes instability
• May represent hiring toward a staffing goal: the change is rapid at first and slows as the discrepancy between desired and perceived levels decreases. Also a good trend for residual defect levels.
• rate = (goal - present level) / time constant
• Analytically: Level = Goal + (Level0 - Goal) * e^(-t/tc)
[Graphs: goal-seeking curves approaching a zero goal and a positive goal]
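The goal-seeking rate equation can be integrated numerically and checked against the analytic solution on the slide; the initial level, goal, and time constant here are illustrative values.

```python
# Goal-seeking negative feedback: rate = (goal - level) / time constant.
# We compare the numerical integration with the analytic solution
# Level = Goal + (Level0 - Goal) * exp(-t/tc).

import math

def goal_seek(level0=10.0, goal=30.0, tc=8.0, dt=0.01, t_end=20.0):
    level, t = level0, 0.0
    while t < t_end:
        level += (goal - level) / tc * dt  # negative feedback decision rule
        t += dt
    return level

numeric = goal_seek()
analytic = 30.0 + (10.0 - 30.0) * math.exp(-20.0 / 8.0)
print(numeric, analytic)  # close agreement with a small dt
```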
83
Orders of Negative Feedback
• First-order Negative Feedback
• Second-order Negative Feedback
– Oscillating behavior may start out with exponential growth and level out. It could represent the early sales growth of a software product that stagnates due to satisfied market demand, competition or declining product quality.
84
Positive Feedback
• Positive feedback produces a growth process
• Exponential growth may represent sales growth (up to a point), Internet traffic, or defect-fixing costs over time
• rate = present level * constant
• Analytically:
– exponential growth: Level = Level0 * e^(at)
– exponential decay: Level = Level0 * e^(-t/TC)
85
S-Curves
• S-curve: a plot of a quantity such as progress or cumulative effort against time that exhibits an S shape: flatter at the beginning and end, steeper in the middle. It is produced by a project that starts slowly, accelerates, and then tails off as work tapers.
• S-curves are also observed in the ROI curve of technology adoption, either time-based return or in production functions that relate ROI to investment.
86
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
87
Brooks’s Law Modeling Example
• “Adding manpower to a late software project makes it later” [Brooks 75].
• We will test the law using a simple model based on the following assumptions: – New personnel require training by experienced personnel to
come up to speed
– More people on a project entail more communication overhead
– Experienced personnel are more productive than new personnel, on average.
• An effective teaching tool
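The three assumptions above can be captured in a small simulation sketch. The structure follows the slide's description (training drain, communication overhead, lower new-hire productivity), but the equations and every parameter value are illustrative assumptions, not the model on the following slides.

```python
# Brooks's Law sketch: adding people late can make a project later.
# All parameter values are illustrative assumptions.

def brooks(job_size=500.0, add_people=0, add_day=20, dt=1.0, horizon=400):
    """Return completion time (days) with an optional late staffing pulse."""
    exp_staff, new_staff = 20.0, 0.0
    done, t = 0.0, 0.0
    ASSIMILATION = 20.0   # days for a new hire to become experienced (assumed)
    TRAINING = 0.25       # experienced-person fraction consumed per trainee (assumed)
    while done < job_size and t < horizon:
        if add_people and abs(t - add_day) < dt / 2:
            new_staff += add_people                              # late staffing pulse
        team = exp_staff + new_staff
        overhead = max(1.0 - 0.0006 * team * (team - 1), 0.0)    # communication loss
        trainers = min(TRAINING * new_staff, exp_staff)          # pulled off production
        rate = ((exp_staff - trainers) * 1.0 + 0.4 * new_staff) * overhead
        done += rate * dt
        assimilated = new_staff / ASSIMILATION * dt              # trainees come up to speed
        new_staff -= assimilated
        exp_staff += assimilated
        t += dt
    return t

print(brooks(), brooks(add_people=10))  # the staffed-up run finishes later
```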
88
Model Diagram and Equations
89
Model Output for Varying Additions
Sensitivity of Software Development Rate to Varying Personnel Allocation Pulses (1: no extra hiring, 2: add 5 people on 100th day, 3: add 10 people on 100th day)
Days
Function points/day
90
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
91
1. INTRODUCTION AND BACKGROUND
Foreword by Barry Boehm
Preface
1.1 Systems, Processes, Models and Simulation
1.2 Systems Thinking
1.3 Basic Feedback Systems Concepts Applied to the Software Process
1.4 Brooks’s Law Example
1.5 Software Process Technology Overview
1.6 Challenges for the Software Industry
1.7 Major References
1.8 Chapter 1 Summary
1.9 Exercises
92
2. THE MODELING PROCESS WITH SYSTEM DYNAMICS
2.1 System Dynamics Background
2.2 General System Behaviors
2.3 Modeling Overview
2.4 Problem Definition
2.5 Model Conceptualization
2.6 Model Formulation and Construction
2.7 Simulation
2.8 Model Assessment
2.9 Policy Analysis
2.10 Continuous Model Improvement
2.11 Software Metrics Considerations
2.12 Project Management Considerations
2.13 Modeling Tools
2.14 Major References
2.15 Chapter 2 Summary
2.16 Exercises
93
3. MODEL STRUCTURES AND BEHAVIOR FOR SOFTWARE PROCESSES
3.1 Introduction
3.2 Model Elements
3.3 Generic Flow Processes
3.4 Infrastructures and Behaviors
3.5 Software Process Chain Infrastructures
3.6 Major References
3.7 Chapter 3 Summary
3.8 Exercises
94
4. PEOPLE APPLICATIONS
4.1 INTRODUCTION
4.2 OVERVIEW OF APPLICATIONS
4.3 PROJECT WORKFORCE MODELING
4.4 EXHAUSTION AND BURNOUT
4.5 LEARNING
4.6 TEAM COMPOSITION
4.7 OTHER APPLICATION AREAS
4.8 MAJOR REFERENCES
4.9 CHAPTER 4 SUMMARY
4.10 EXERCISES
95
5. PROCESS AND PRODUCT APPLICATIONS
5.1 INTRODUCTION
5.2 OVERVIEW OF APPLICATIONS
5.3 PEER REVIEWS
5.4 GLOBAL PROCESS FEEDBACK (SOFTWARE EVOLUTION)
5.5 SOFTWARE REUSE
5.6 COMMERCIAL OFF-THE-SHELF SOFTWARE (COTS) - BASED SYSTEMS
5.7 SOFTWARE ARCHITECTING
5.8 QUALITY AND DEFECTS
5.9 REQUIREMENTS VOLATILITY
5.10 SOFTWARE PROCESS IMPROVEMENT
5.11 MAJOR REFERENCES
5.12 PROVIDED MODELS
5.13 CHAPTER 5 SUMMARY
5.14 EXERCISES
96
6. PROJECT AND ORGANIZATION APPLICATIONS
6.1 INTRODUCTION
6.2 OVERVIEW OF APPLICATIONS
6.3 INTEGRATED PROJECT MODELING
6.4 SOFTWARE BUSINESS CASE ANALYSIS
6.5 PERSONNEL RESOURCE ALLOCATION
6.6 STAFFING
6.7 EARNED VALUE
6.8 MAJOR REFERENCES
6.9 PROVIDED MODELS
6.10 CHAPTER 6 SUMMARY
6.11 EXERCISES
97
7. CURRENT AND FUTURE DIRECTIONS
7.1 Introduction
7.2 Simulation Environments and Tools
7.3 Model Structures and Component-Based Model Development
7.4 New and Emerging Trends for Applications
7.5 Model Integration
7.6 Empirical Research and Theory Building
7.7 Mission Control Centers and Training Facilities
7.8 Chapter 7 Summary
7.9 Exercises
98
Appendices
Appendix A: Introduction to Statistics of Simulation
A.1 RISK ANALYSIS AND PROBABILITY
A.2 PROBABILITY DISTRIBUTIONS
A.4 ANALYSIS OF SIMULATION INPUT
A.5 EXPERIMENTAL DESIGN
A.6 ANALYSIS OF SIMULATION OUTPUT
A.7 MAJOR REFERENCES
A.8 APPENDIX A SUMMARY
A.9 EXERCISES
Appendix B: Annotated System Dynamics Bibliography
Appendix C: Provided Models
99
Examples of Provided Models (Ch. 6 Only)
Application Area | Model Filename | Description and External Source

Peer Reviews:
– inspections.itm: dynamic project effects of incorporating formal inspections
– insp.itm: version of Abdel-Hamid's integrated software project dynamics model (includes switch for inspections; also see base.itm for incremental processes and inspections). Provided by John Tvedt
– wrkchain.itm: simple work-flow chain where tasks undergo inspections, using a conveyor model modified from High Performance Systems

Software Reuse:
– reuse and language level.itm: impact of reuse and language levels. Provided by Kam Wing Lo

Global Process Feedback:
– global feedback.itm: illustration of global feedback to the software process (simplified version of the Wernick-Lehman 98 model). Provided by Paul Wernick

COTS-based Systems:
– COTS glue code integration.itm: dynamics of glue code development in COTS-based systems. Provided by Jongmoon Baik

Incremental and Iterative Processes:
– base.itm: dissertation model for incremental development and inspections. Provided by John Tvedt
– project increments.itm: three-increment project model. Provided by Doug Sycamore
– simple iterative process.itm: highly simplified software development structure using arrays to model iterations

Software Architecting:
– MBASE architecting.itm: software architecting using the MBASE framework; also includes iterations. Provided by Nikunj Mehta

Quality:
– COQUALMO.xls: spreadsheet version of the Constructive Quality Model (COQUALMO). Provided by the USC Center for Software Engineering

Software Process Improvement:
– SEASauth.itm: organizational process improvement. Provided by Steven Burke
– Xerox SPI.itm: Xerox adaptation of Burke’s process improvement model. Provided by Jason Ho
100
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
101
Validation to Empirical Data
• Using 329 Litton inspections and 203 JPL inspections
Project/Test Case | Test Effort Reduction | Test Schedule Reduction
– Litton Project A compared to previous project: 50% | 25%
– Test case 1.1 with Litton productivity constant and job size compared to test case 1.3 with Litton parameters: 48% | 19%
– Test case 1.1 compared to test case 1.3: 48% | 21%

Project/Test Case | Effort Ratio of Rework to Preparation and Meeting
– Litton project: .47
– JPL projects: .45
– Test case 1.1: .49

Simulated ROI within 15% of actual ROI
102
Sample Project Progress Trends
• From [Madachy 94]
[Graph: cumulative tasks designed, cumulative tasks coded, tasks tested, fraction done, and actual completion plotted over 300 days]
103
Error Multiplication Effects
[Graph: project effort vs. design error multiplication factor (1 to 10), with and without inspections]
104
Risk Analysis
• A deterministic point estimate from a simulation run is only one of many actual possibilities
• Simulation models are ideal for exploring risk
– test the impact of input parameters
– test the impact of different policies
• Monte-Carlo analysis takes random samples from an input probability distribution
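Monte-Carlo sampling can be sketched in a few lines; the effort model, the triangular distribution, and all its parameters below are illustrative assumptions, not the inspection model's actual relationships.

```python
# Hedged Monte-Carlo sketch: sample an uncertain input (inspection
# efficiency, distribution assumed) and collect effort outcomes.

import random

def project_effort(inspection_efficiency):
    """Toy effort model: better inspections cut rework (values illustrative)."""
    base_effort = 4000.0                               # person-days of development
    rework = 1200.0 * (1.0 - inspection_efficiency)    # escaped-defect rework
    inspection_cost = 300.0 * inspection_efficiency    # cost of inspecting
    return base_effort + rework + inspection_cost

random.seed(42)
samples = [project_effort(random.triangular(0.4, 0.9, 0.6)) for _ in range(1000)]
print(min(samples), max(samples))  # range of simulated effort outcomes
```

Binning these samples gives exactly the kind of effort histogram shown on the next slide.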
105
Monte-Carlo Example
• Results of varying inspection efficiency:
[Histogram: frequency of simulated project effort outcomes across bins from 3991 to 4395 person-days]
106
Contributions of Inspection Model
• Demonstrated dynamic effects of performing inspections.– Validated against empirical industry data
• New knowledge regarding interrelated factors of inspection effectiveness.
• Demonstrated complementary features of static and dynamic models.
• Techniques adopted in industry.
107
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
108
Software Cost/Quality Tradeoff Tool (NASA)
• Orthogonal Defect Classification (ODC) COQUALMO system dynamics model working prototype
• ODC defect distribution pattern per JPL studies [Lutz and Mikulski 2003]
• Includes effort estimation
• Includes tradeoffs of different detection efficiencies for the removal practices per type of defect
109
Software Cost/Quality Simulation Tradeoff Tool Demo
[Tool interface: sliders for peer reviews, automated analysis, and execution testing and tools; size input of 10,000 SLOC]
[Graph 1 - Defect Types: timing defects, interface defects, and data value defects over 15 months]
[Graph 2 - Total Defects and Effort: defect generation rate, detection rate, defects, and effort at completion over 15 months]
(1=very low, 2=low, 3=nominal, 4=high, 5=very high, 6=extra high)
110
Backup Slide Outline
• Research introduction– Processes and system dynamics– Example model structures and system behaviors – Brooks’s Law model demonstration– Software Process Dynamics book chapters
• Examples– Inspection model supplement– Software cost and quality tradeoff simulation tool
(NASA)– Process concurrence modeling
111
Process Concurrence Introduction
• Process concurrence: the degree to which work becomes available based on work already accomplished
– represents an opportunity for parallel work
– a framework for modeling constraint mechanics
• Increasing task parallelism is a primary RAD opportunity to decrease cycle time
• System dynamics is attractive to analyze schedule
– can model task interdependencies on the critical path
112
Trying to Accelerate Software Development
development rate
software tasks
restricted channel flow
tasks to develop
completed tasks
personnel
(partially adapted from Putnam 80)
113
Limited Parallelism of Software Activities
• There are always sequential constraints, independent of phase:
– analysis and specification: figure out what you're supposed to do
– development of something (architecture, design, code, test plan, etc.)
– assessment: verify/validate/review/debug
– possible rework
– recycle of previous activities
• These can't be done totally in parallel with more applied people
– different people can perform the different activities with limited parallelism, but downstream activities will always have to follow some of the upstream
114
Lessons from Brooks in The Mythical Man-Month
• Sequential constraints imply tasks cannot be partitioned
– applying more people has no effect on schedule
• Men and months are interchangeable only when tasks can be partitioned with no communication among them.
115
Process Concurrence Basics
• Process concurrence describes interdependency constraints between tasks
– can be an internal constraint within a development stage or an external constraint between stages
• Describes how much work becomes available for completion based on previous work accomplished
• Accounts for realistic bottlenecks on work availability
– vs. a model driven solely by resources and productivity, which can finish in almost zero time with infinite resources
• Concurrence relations can be sequential, parallel, partially concurrent, or other dependent relationships
116
Internal Process Concurrence
• Internal process concurrence relationship shows how much work can be done based on the percent of work already done.
• The relationships represent the degree of sequentiality or concurrence of the tasks aggregated within a phase.
117
Internal Concurrence Examples
[Graphs: percent of tasks complete or available to complete vs. percent of tasks completed and released]
– Simple conversion task: tasks can be partitioned with no communication (linear relationship)
– Complex system development: tasks are dependent due to required inter-task communication (initial work on important segments while other segments wait, then a region of parallel work, then less parallel integration)
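An internal concurrence constraint can be sketched as a simulation in which the work available to perform depends on the fraction already done; the piecewise curve and all values below are assumptions for illustration.

```python
# Sketch of an internal concurrence constraint: work availability, not just
# resources, limits progress, so infinite capacity cannot collapse the
# schedule to zero. Curve shape and values are assumed.

def available_fraction(done_fraction):
    """Piecewise concurrence curve: a near-sequential 20% core, then parallel."""
    if done_fraction < 0.2:
        return done_fraction + 0.05   # near-sequential start
    return 1.0                        # remaining tasks fully parallel

def concurrent_project(job_size=100.0, capacity=50.0, dt=0.1):
    done, t = 0.0, 0.0
    while done < job_size - 1e-9:
        avail = available_fraction(done / job_size) * job_size - done
        rate = min(capacity, max(avail, 0.0) / dt)  # work cannot outrun availability
        done += rate * dt
        t += dt
    return t

# Doubling capacity helps less than 2x because early work is availability-bound
print(concurrent_project(capacity=50.0), concurrent_project(capacity=100.0))
```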
118
External Process Concurrence
• External process concurrence relationships describe constraints on the amount of work that can be done in a downstream phase based on the percent of work released by an upstream phase.
• See examples on the following slide
– More concurrent processes have curves near the upper-left axes; less concurrent processes have curves near the lower and right axes.
• Tasks can be considered to be the number of function points demonstrable in their phase-native form
119
External Concurrence Examples

[Graph: percent of elaboration tasks available to complete vs. percent of inception tasks released, with four curves:]
1 - a linear lockstep concurrence in the development of totally independent modules
2 - S-shaped concurrence for new, complex development where an architecture core is needed first
3 - highly leveraged instantiation like COTS with some glue code development
4 - a slow design buildup between phases
120
Roles Have Different Mental Models
• Differing perceptions upstream and downstream (Ford-Sterman 97)
• Group visualization helps identify disparities to improve communication and reduce conflict.
121
RAD Modeling Example
• One way to achieve RAD is by having base software architectures tuned to application domains available for instantiation, standard database connectors and reuse.
• The next two slides contrast the concurrence of an HR portal development using two different development approaches 1) from scratch and 2) with an existing HR base architecture.
122
Example: Development from Scratch
Inception (System Definition) milestone | % of Inception Tasks Released | % of Components Ready to Elaborate | Overall % of Tasks Ready to Elaborate

– About 25% of the core functionality for the self-service interface supported by prototype; only general database interface goals defined: 30% | 20% UI, 10% core, 5% DB | 11%
– About half of the basic functionality for the self-service interface supported by prototype: 55% | 40% UI, 20% core, 20% DB | 26%
– Interface specifications to Peoplesoft defined for internal personnel information: 60% | 40% UI, 30% core, 40% DB | 37%
– More functionality for benefits capabilities defined (80% of total front-end): 75% | 75% UI, 60% core, 40% DB | 57%
– Interface specification to JD Edwards and SAP systems for life insurance and retirement information: 85% | 75% UI, 80% core, 80% DB | 79%
– Rest of user interface defined (95% of total), except for final UI refinements after more prototype testing: 95% | 95% UI, 95% core, 80% DB | 89%
– Timecard interface to Oracle system defined: 98% | 95% UI, 95% core, 100% DB | 97%
– Last of UI refinements released: 100% | 100% UI, 100% core, 100% DB | 100%
123
Architecture Approach Comparison
[Graph: elaboration tasks available to complete (%) vs. inception tasks released (%), comparing "no architecture base" with "with architecture base"; the architecture base creates an opportunity for increased task parallelism and quicker elaboration]
124
Rayleigh Curve Applicability
• Rayleigh curve was based on initial studies of hardware research and development
– projects resemble traditional waterfall development for unprecedented systems
• Rayleigh staffing assumptions don’t hold well for COTS, reuse, architecture-first design patterns, 4th generation languages or staff-constrained situations
• However an “ideal” staffing curve is proportional to the number of problems ready for solution (from a product perspective).

[Time plots: problems ready for solution; product elaboration; remaining work gap]
[Relation: the Rayleigh staffing profile follows the problems ready for solution, which are determined by the remaining work gap and the product elaboration tasks complete or available to complete (from process concurrence)]
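For comparison, the Rayleigh staffing profile itself has the standard form m(t) = 2·K·a·t·exp(-a·t²), which rises, peaks at t = 1/sqrt(2a), and tails off; the effort and shape parameters below are illustrative.

```python
# Rayleigh staffing profile sketch (standard form, parameters illustrative).

import math

def rayleigh_staff(t, total_effort=200.0, a=0.02):
    """Staff level at time t for a project of total_effort person-months."""
    return 2.0 * total_effort * a * t * math.exp(-a * t * t)

peak_time = 1.0 / math.sqrt(2 * 0.02)  # curve peaks at t = 1/sqrt(2a)
print(peak_time, rayleigh_staff(peak_time))
```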
125
Process Concurrence Advantages
• Process concurrence can model more realistic situations than the Rayleigh curve and produce varying dynamic profiles
• Can be used to show when and why Rayleigh curve modeling doesn’t apply
• Process concurrence provides a way of modeling constraints on making work available, and the work available to perform is the same dynamic that drives the Rayleigh curve– since the staff level is proportional to the problems (or
specifications) ready to implement
126
External Concurrence Model
the time profile of tasks ready to elaborate ~ “ideal” staffing curve shape
127
Simulation Results and Sample Lessons
[Matrix of simulation runs: task specification input profiles (flat, Rayleigh, COTS pulse at front) crossed with concurrence types (linear, slow design buildup, leveraged instantiation, S-shaped), each cell showing the resulting elaboration availability rate over 20 months; one combination is N/A]

• Critical customer decision delays slow progress - e.g. can’t design until timing performance specs are known
• Early stakeholder concurrence enables RAD - e.g. decision on architectural framework or COTS package
128
Additional Considerations
• Process concurrence curves can be more precisely matched to the software system types
• COTS by definition should exhibit very high concurrence
• Mixed strategies produce combined concurrence relationships
• E.g. COTS first, then new development:
129
Process Concurrence Conclusions
• Process concurrence provides a robust modeling framework
– a method to characterize different approaches in terms of their ability to parallelize or accelerate activities
• Gives a detailed view of project dynamics and is relevant for planning and improvement purposes
– a means to collaborate between stakeholders to achieve a shared planning vision
• Can be used to derive optimal staffing profiles for different project situations
• More generally applicable than the Rayleigh curve
• More empirical data needed on concurrence relationships from the field for a variety of projects