AIAA Space 2003 Conference & Exposition - Long Beach, California


AIAA-2003-6523

Developing Hedging Strategies for Managing Space Systems Program Costs

Mel Eisman, Sr. Cost Analyst

RAND Corporation, Santa Monica, California

Abstract

Most of the major NASA and US Air Force space systems completed or underway today take upwards of three and potentially five or more years from the start of development to first launch of the objective system. The annual expenditures incurred or projected for these programs over this rather long timeframe can exceed several tens to hundreds of millions of dollars each year. The government and contractor team’s program offices face a considerable challenge in ensuring that the program stays within the proposed contract cost and the authorized annual program budget, can still meet the expected launch window, and achieves a relative measure of operational “success”. For the Air Force, one way of expressing this relative measure of “success” is how well the space system(s) can meet the intended capability that the “end user” expects, represented by a collective set of threshold and/or objective values of key technical performance parameters (KTPPs).

When these major program investments by NASA, the Air Force and other stakeholders are underway, it is essential to monitor the space system’s progress toward meeting KTPPs, while keeping within annual budget commitments and total contract costs and staying on pace to deliver assets within the expected launch window. To ensure a reasonable return on this investment, hedging strategies should be developed and in place at both the government and contractor team program offices. This presentation explains what hedging strategies are, why they are needed, and the actions for developing these strategies, and discusses what decision support tools are available and the features needed to develop and implement them.

Copyright © 2003 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Introduction

Most of the major NASA and US Air Force space systems completed or underway today take upwards of three and potentially five or more years from the start of development to first launch of the objective system. The annual expenditures incurred or projected for these programs over this rather long timeframe can exceed several tens to hundreds of millions of dollars each year.

In June 2003, GAO released a report to the US House of Representatives Subcommittee on Defense Appropriations that provided an assessment of common problems affecting eight military satellites and other space-related programs over the past two decades. The majority of these programs ended up costing more than expected and taking longer to develop and launch than planned.

According to the GAO, several factors contributed to these problems. DoD took a “schedule-driven” instead of a “knowledge-driven” approach to acquiring space systems.1 Program management activities to control costs, maximize competition among contractors or subcontractors, and test technologies early in the program were compressed or not done. Since some of the same systems had a diverse group of stakeholders with competing interests involved in overall satellite development, there were challenges in making the most cost-effective tradeoff design decisions to meet their expectations while still being able to meet the first launch window. In some instances, DoD program offices did not adequately oversee contractors, especially in measuring the technical progress on systems’ requirements, since it was difficult to test satellite subsystem functionality in a realistic environment prior to first launch.

1 See the Reference (1) GAO report for more details.

Space 2003, 23 - 25 September 2003, Long Beach, California

AIAA 2003-6253



Consequently, the government and contractor team’s program offices face a considerable challenge in ensuring that the program stays within the proposed contract cost and the authorized annual program budget, can still meet the expected launch window, and achieves a relative measure of operational “success”.

This briefing and the discussion that follows focus on the challenges of managing space systems programs and, specifically, identify the “gaps” in the existing process of identifying, assessing and managing risks. We then describe what a hedging strategy is and some of the information and actions needed for developing one. We also discuss why hedging strategies should be considered as part of the key milestone decisions within each program’s planning process. Finally, the presentation concludes with a description of the features of an integrated set of program risk management “tools” needed to monitor when the program’s costs, schedule and system performance exceed “acceptable” levels.

Developing a “Workable” Risk Management Process

Figure 1 below lists the six steps involved in developing a “workable” program risk management process, along with two feedback loops.

Figure 1. Representative Risk Management Process (Identify, Assess, Manage, Target, Respond, Review)

1. The program offices have to identify known risks and risk sources, both external & internal.

2. Since risks are events that have not yet happened, management has to assess each risk by evaluating the chance (or likelihood) the event will occur. If no remedial actions are taken, management has to determine the potential impacts or consequences that the “total” risk has on the program, especially on achieving operational “success”. A “total” risk assessment identifies the potential impacts in terms of –

• What additional costs will be incurred?

• What schedule delays will result?

• What aspects of the system’s operational performance and specific functions will be compromised or degraded?

One representative way to assess risks is illustrated in the “stoplight” matrix in Figure 2, where the intersection of the probability of occurrence and the program impact results in a risk value between zero and one. The probability or likelihood of the risk event occurring can vary from a remote possibility (VLO) to a near certainty (VHI). Given that the risk is realized, the magnitude of the impact on the program can be considered in terms of system performance, program schedule and/or cost. If there is a consensus that the risk event has a minimal impact in all three terms, then the event is assessed as VLO. On the other hand, if the system performance is unacceptable, or the team can’t achieve major program milestones, or the program cost will grow by 10 percent or more, then the impact of the risk event occurring is VHI and a major disruption in the program is likely.

Each assessed value (or cell in the matrix) can be mapped into one of three assessment management categories – from low risks that need to be monitored to high risks requiring urgent attention. As displayed in Figure 2, the significance of the risk increases as the assessed probability and impact increase from the lower left to the upper right portion of the matrix.
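The assess-and-categorize step can be sketched as follows. The numeric scores assigned to the VLO-VHI ordinal levels below are illustrative assumptions (they produce different cell values than the matrix in Figure 2); the three category thresholds follow the figure.

```python
# Sketch of the "stoplight" risk assessment: combine an ordinal probability
# and an ordinal impact into a risk value, then map it to a management
# category. The ordinal scores here are assumptions for illustration.

# Assumed score for each probability level (chance the event occurs).
PROBABILITY = {"VLO": 0.1, "LO": 0.3, "MED": 0.5, "HI": 0.7, "VHI": 0.9}
# Assumed score for each impact level on performance, schedule and/or cost.
IMPACT = {"VLO": 0.1, "LO": 0.3, "MED": 0.5, "HI": 0.7, "VHI": 0.9}

def risk_value(prob_level: str, impact_level: str) -> float:
    """Combine probability and impact into a single value between 0 and 1."""
    return round(PROBABILITY[prob_level] * IMPACT[impact_level], 2)

def risk_category(value: float) -> str:
    """Map a risk value onto the three categories using Figure 2 thresholds."""
    if value < 0.1:
        return "LOW RISK - monitor"
    if value <= 0.20:
        return "MEDIUM RISK - regular review"
    return "HIGH RISK - urgent attention"
```

For example, a near-certain event (VHI) with a severe impact (VHI) scores 0.81 under these assumed scales and falls in the urgent-attention category.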


Figure 2. Representative Risk Assessment Matrix2

                          IMPACT
  PROBABILITY   VLO    LO     MED    HI     VHI
  VHI           0.05   0.09   0.18   0.36   0.72
  HI            0.04   0.07   0.14   0.28   0.56
  MED           0.03   0.05   0.10   0.20   0.40
  LO            0.02   0.03   0.06   0.12   0.24
  VLO           0.01   0.01   0.02   0.04   0.10

Set thresholds: < 0.1 = LOW RISK (monitor); 0.1-0.20 = MEDIUM RISK (regular review); > 0.20 = HIGH RISK (urgent attention). Significance increases from the lower left to the upper right of the matrix.

One of the most important aspects of this step in the process is that each risk should be assessed using a consistent set of guidelines agreed upon by the government and contractor team, in order to provide relative, not absolute, objective assessments.

3. After assessing each risk consistently, the program offices have to manage risks by monitoring and updating each risk event as the development activities progress, and add new risks as needed, represented by the first feedback loop displayed in Figure 1.

4. To manage risks effectively, program management has to target (or prioritize) the real risk issues that require immediate or more frequent attention, separating them from the rest.

5. Program management also has to know when to respond, as necessary, by implementing a fallback plan that is both achievable (technically doable) and tangible (won’t create huge cost overruns or major schedule delays).

6. Finally, program management (as part of the feedback loop displayed in Figure 1) has to periodically review the progress of its fallback plans, and update the status of targeted risks, tasks on the program’s master schedule, estimates to completion (or ETCs), and progress toward meeting the system’s operational performance requirements.

2 The format for this matrix is cited as the risk rating approach displayed as the Figure 2-6 example that is part of Reference (2).
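The six steps above can be sketched as a minimal risk register. The class and field names below are assumptions for illustration, not a prescribed schema:

```python
# Minimal risk-register sketch of the identify/assess/manage/target steps.
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str            # unique identifier, linkable to schedule tasks
    description: str
    probability: float      # assessed likelihood the event occurs, 0..1
    impact: float           # assessed program impact if it occurs, 0..1
    fallback_plan: str = "" # response if the risk is realized

    @property
    def value(self) -> float:
        return self.probability * self.impact

@dataclass
class RiskRegister:
    risks: dict = field(default_factory=dict)

    def identify(self, risk: Risk) -> None:
        """Step 1: add a known risk (internal or external) to the register."""
        self.risks[risk.risk_id] = risk

    def assess(self, risk_id: str, probability: float, impact: float) -> None:
        """Steps 2-3: (re)assess a risk as development activities progress."""
        r = self.risks[risk_id]
        r.probability, r.impact = probability, impact

    def target(self, threshold: float = 0.20):
        """Step 4: prioritize the risks needing urgent attention, worst first."""
        return sorted((r for r in self.risks.values() if r.value > threshold),
                      key=lambda r: r.value, reverse=True)
```

Re-running `assess` and `target` as the program progresses corresponds to the first feedback loop in Figure 1.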

Measuring Program “Success” & Gaps in the Process

As early as possible and preferably as part of the requirements or concept definition set of activities, the government and contractor team program offices should document answers to the following questions:

• What are the program goals we want to achieve?

• What is the prioritized set of specific end user’s system-level requirements we will need to meet in order to achieve these goals?

• Do we understand these specific requirements?

• What are the end user’s expectations of the system being developed relative to meeting these specific requirements?

• What are the major assumptions for aligning both program offices’ priorities of meeting specific requirements with the end user’s expectations?

• How do risk management decisions affect the program offices’ goals?

Specific “gaps” and uncertainties in the risk management process occur when government and contractor team’s program offices do not understand–

• All the end user’s expectations;

• The pace of how quickly each relevant technology will mature in progressing from conceptual design to laboratory functional units to operationally qualified units;

For example, one of the issues facing NASA today is understanding how quickly each of the critical technologies will mature in going from today’s vertical take-off, horizontal landing (VTHL) space shuttle systems to developing future horizontal take-off, horizontal landing (HTHL) reusable launch vehicles (as illustrated in Figure 3).


Figure 3. Impact on Implementing Changing Technologies

Space Shuttle (VTHL) Reusable Launch Vehicle (HTHL)

The risk analysis community has assessed the current maturity levels of technology at the subsystem and lower levels on a consistent basis through the use of Technology Readiness Levels (TRLs). Table 1 provides a representative list of nine TRLs and definitions relevant for space systems.

TRL 1. Basic principles observed and reported.
Scientific research begins to be translated into applied R&D. Examples might include paper studies of a technology’s basic properties.

TRL 2. Technology concept and/or application formulated.
Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.

TRL 3. Analytical and experimental critical function and/or characteristic proof of concept.
Active R&D is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

TRL 4. Component and/or breadboard validation in laboratory environment.
Basic technology components are integrated to establish that the pieces will work together. This is relatively “low fidelity” compared to the eventual system. Examples include integration of “ad hoc” hardware in a laboratory.

TRL 5. Component and/or breadboard validation in relevant environment.
Fidelity of the breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include “high fidelity” laboratory integration of components.

TRL 6. System/subsystem model or prototype demonstration in a relevant environment.
Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

TRL 7. System prototype demonstration in a space environment.
Prototype should be near or at the scale of the planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment. Examples include assembly, testing and possible launch of the first prototype space system.

TRL 8. Actual system completed and “flight qualified” through test and demonstration.
Technology has been proven to work in its final form and under mission conditions. In all cases, this TRL represents the end of true system development. Examples include developmental test, launch and evaluation of the proto-flight space system to determine if it meets design specifications.

TRL 9. Actual space system “flight proven” through successful mission operations.
Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last “bug fixing” aspects of true system development. Examples include launch of the first “objective” space system under operational mission conditions.

Table 1. Technology Readiness Levels & Definitions

• The potential impact that updates to supplier quotes can have on keeping within the allocated budget;

• The potential impact that the stability (or instability) of each of the contractor teams’ organizations and/or workforces has on increasing the program’s cost and/or slipping the schedule; and

• The extent to which external organizations (i.e., end users, stakeholders) interface with and influence the day-to-day operations of the government and contractor team’s program offices.
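As a simple illustration of how a program office might use the TRL scale of Table 1 to surface the technology-maturity gap described above, the following sketch flags critical technologies that sit below an assumed minimum TRL gate; the gate value and technology names are hypothetical.

```python
# Hypothetical maturity gate using the TRL scale of Table 1: flag any
# critical technology whose assessed TRL is below an assumed minimum
# before a program milestone.
def immature_technologies(tech_trls: dict[str, int], min_trl: int = 6) -> list[str]:
    """Return names of technologies assessed below min_trl, sorted by name."""
    return sorted(name for name, trl in tech_trls.items() if trl < min_trl)
```

A program office could run such a check at each milestone review to decide where risk mitigation (or a hedging strategy) is warranted.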

The paragraphs that follow discuss approaches for how the government and contractor team’s program offices can address both internal and external risks.

Addressing Internal Risks

In most circumstances, internal risks will be introduced by either government or contractor team’s program office decisions on what actions to take and the processes needed to implement the system-level design requirements needed for achieving the program’s goals.

As stated above, the program office staff has to identify the potential cost, schedule, and operational performance (or technical) risks, given the end user’s priority list of specific requirements that need to be satisfied. Systems engineering has to assess the technical maturity of the equipment needed and the resources available to finalize the design. Using established techniques, the program office’s level of risk exposure should be assessed, as a minimum, across each of these requirements.

The program office should also consider implementing “what if” scenarios to compare the potential risk exposure of designing different system alternatives. In other words, can the program office do something different (i.e., change the design baseline) to reduce its exposure to potential risks?

Addressing External Risks

Concurrently, external risks may arise that prevent both program offices from achieving their requirements and goals. They have to continually assess what external threats exist that could prevent the program offices from achieving their goals or requirements. The program office staff has to be able to identify all the known risks or potential problems that are outside program office control. Using established techniques, the program office can then assess its level of exposure to risks for each of the external issues.

Just like internal risks, the program office staff can, if applicable, consider implementing “what if” scenarios and compare alternatives that consider how changing operational performance requirements can potentially mitigate external risk issues. Are the operational capabilities of the system compromised when requirements are changed? Is it “acceptable” and worth compromising the system at the current stage in the development process, given the amount of cost incurred and time invested to date?

Hedging Strategies

Regardless of whether the risks are internal or external, the compounding issues, whether related or unrelated, may invoke the need for the government program office to implement hedging strategies.

What hedging strategies can be invoked? Can the government program office redirect the scope of the current contract by reducing the number of deliverables and/or relaxing the original delivery schedule? Should the government program office even consider terminating the current contract and restructuring the program to salvage the mature technologies and/or implement a new concurrent alternate plan or other acquisition strategy options?

Identifying “Early Warning Indicators”

One of the most important first steps in determining the need to plan and implement hedging strategies is to identify the specific risks that may serve as “early warning indicators” or EWIs. These EWI risks will drive the majority of the program’s cost growth or schedule slips and end up compromising operational “success”. Next, it is imperative to get to the “root” cause(s) of each risk event and to assess the possible alternative paths toward “fixing” each. EWIs can also be viewed as decision points for assessing whether hedging strategies are needed and for determining the “root” causes of these risks. Once a military space system program is underway, there are several approaches to identifying risks as EWIs. One of these approaches is described below.

A periodic government program-level assessment of the contractor team’s cumulative cost performance reports of actual expenditures against the budget, from the Authority to Proceed (ATP) date through the end of the most recent fiscal year reported, is advisable. Before cost and/or schedule overruns require the contractor team to go through an extensive re-baselining activity, the contractor team should report updated annual estimates to completion (ETC) projected through the end of the contract. The government and contractor team should also hold “shoulder-to-shoulder” program reviews to identify any additional total program cost for mitigating risks that may have surfaced since the last review.

The government program office should assess the magnitude of the estimated cost growth as a percentage of the updated ETC remaining through the end of the contract. If the ETC is more than 60 percent of the total revised program estimate (the actual expenditures to date plus the ETC) and the projected cost growth is less than 10 percent of the ETC, this may be an EWI that more risks and additional cost growth can be expected. In general, it also raises a more compelling question: how far out is it reasonable, with certainty, to identify known risks and projected cost growth through the end of the contract?
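The EWI test just described can be sketched as a small check; the 60 percent and 10 percent thresholds come from the text, while the function name and dollar figures are assumptions for illustration.

```python
# Early-warning-indicator (EWI) sketch: if most of the program cost is still
# ahead (ETC > 60% of the revised total estimate) yet projected cost growth
# looks small (< 10% of the ETC), flag that more growth is likely.
def ewi_flag(actuals_to_date: float, etc_remaining: float,
             projected_growth: float) -> bool:
    total_revised = actuals_to_date + etc_remaining   # revised program estimate
    etc_share = etc_remaining / total_revised          # share of work remaining
    growth_share = projected_growth / etc_remaining    # growth relative to ETC
    return etc_share > 0.60 and growth_share < 0.10
```

For instance, with $30M spent, a $70M ETC and only $5M of projected growth, 70 percent of the program is still ahead while growth looks implausibly small, so the flag is raised.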

Major Areas of Cost Growth & Program-Level Assessments

Besides the issue of being able to project the cost growth of mitigating potential risks over the remaining four or more years of the contract, it is also important to identify the major areas that contribute to the majority of the current cost growth.

Extending the principles of Pareto’s Law3, the majority of the estimated cost growth should be concentrated within a few (five or fewer) major areas of the system’s work and/or cost breakdown structure’s product- or functionally-related areas of the program.
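This Pareto-style screen can be sketched as follows; the WBS area names, growth figures and 80 percent cutoff are illustrative assumptions.

```python
# Pareto-style screen of estimated cost growth by WBS/CBS area: rank areas
# by growth and keep the few areas that account for a target share of the
# total, per the "80:20" discussion above.
def major_growth_areas(growth_by_area: dict[str, float],
                       share: float = 0.80) -> list[str]:
    """Return the top areas covering `share` of total estimated cost growth."""
    total = sum(growth_by_area.values())
    ranked = sorted(growth_by_area.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for area, growth in ranked:
        if cumulative >= share * total:
            break
        selected.append(area)
        cumulative += growth
    return selected
```

With hypothetical growth estimates, a handful of areas (such as SEIT in the example that follows) typically emerge as the focus for root-cause analysis.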

For example, if the government program office identifies the Systems Engineering, Integration and Test (SEIT) set of tasks as one of the major areas of program cost growth, they should identify whether some or all of these tasks are also on the critical path.

3 Pareto’s Law or principle is cited in many sources as "the 80:20 rule", where "A minority of input produces the majority of results." One of the sources citing this quote, Reference (3), also provides further details on its specific origin.


If tasks are on the critical path, any additional cost growth and/or schedule slips could delay and increase the cost of completing the design of the space system, payload, and/or ground segments. This “ripple effect” could be identified as another EWI where added cost growth is likely to occur and, therefore, should be quantified by the contractor team.

If controlling cost growth and risks in the SEIT area is pivotal to minimizing the potential “ripple effect” impacts, a top-down decomposition of the high cost contributors within this cost growth area may provide some additional insights into the “root” cause(s) of relevant technical and programmatic risks.

The government and contractor team program offices can work together to determine approaches to minimizing known risks. As an example, one approach to minimizing or eliminating potential risks associated with integrating and testing a space system is to successfully model and/or simulate the space segment’s systems integration process early in the development phase. Similarly, the success of completing the design of the other system segments could depend to a significant extent on the ability to successfully verify and validate all the new and modified hardware and/or software code needed.

What Actions are Needed?

First, it is critical for the government and contractor team’s program offices to understand how the progress on one set of tasks (e.g., SEIT) affects others, especially those dependent tasks that are part of the critical path. Until an integrated risk management system is in place and being implemented, there will be limited visibility into identifying all the known risks and potential issues that could possibly occur as the program moves forward.

What hedging strategies should be considered in order to preclude gaps in key operational capabilities? If the military space system’s operational “success” is essential to national security, then some or all of the capabilities being developed for the system are essential. For each of these critical programs, the Air Force and DoD should consider developing and implementing hedging strategies as needed to ensure successful implementation of these essential capabilities. Building in hedging strategies guarantees continuity in achieving the space system’s operational capabilities in the face of plausible risks.

EWIs can be established as test milestone events inserted into the integrated master program schedule (IPMS) to ensure that mission needs (listed in the Mission Needs Statement, or MNS) and operational requirements (in the Operational Requirements Document, or ORD) are both being satisfied. The government program office should ask the contractor team several key questions. For example: as a minimum, are high-risk performance specifications being tested sufficiently to demonstrate that risks are being reduced as planned?

Implementing Hedging Strategies

Below are a few of the key steps for evaluating the need for developing and implementing hedging strategies:

1. Identify which of the highest priority mission capabilities to protect with hedging strategies, since annual budget constraints may prohibit implementing hedges for all missions.

2. Assess the extent that hedging strategies are needed or desired.

• It may not be necessary to develop hedging strategies that address the full range of values for all the program’s KTPPs.

• A minimum operational capability, perhaps less than an operational requirements document’s stated “threshold” value, may be sufficient in extreme circumstances for addressing essential missions.

• Hedging strategy requirements could be satisfied after evaluating the operational performance experienced with systems currently providing that operational capability.

• Even the least capable active satellites might still provide the minimum necessary operational capability that is essential to meeting National Security needs.
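Step 1 above, choosing which mission capabilities to protect under an annual budget constraint, can be sketched as a simple priority-ordered selection; the mission names, priorities, hedge costs and budget figure are all hypothetical.

```python
# Illustrative selection of which mission capabilities to protect with
# hedging strategies when the budget cannot cover hedges for all missions:
# fund hedges in priority order, skipping any that no longer fit.
def select_hedges(candidates: list[tuple[str, int, float]],
                  budget: float) -> list[str]:
    """candidates: (mission, priority where 1 is highest, hedge cost)."""
    funded, remaining = [], budget
    for mission, _priority, cost in sorted(candidates, key=lambda c: c[1]):
        if cost <= remaining:
            funded.append(mission)
            remaining -= cost
    return funded
```

A greedy priority-ordered pass is only one possible policy; a program office might instead optimize across capability value per dollar, but the budget-constrained tradeoff is the same.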


Integrating Triggering Events Into Program Schedules & Budgets

First, pace the start of hedging strategies in parallel with the latest updated version of the contractor team’s IPMS.

Decisions on when to initiate use of funds for hedging strategies should be “triggered” by either –

• Explicit, pre-planned program milestone events or “signposts” that are clearly identified on the program’s IPMS.

• Unfavorable events that sufficiently compromise or degrade the current program’s expected operational performance (e.g., a failed launch, a serious projected delay in achieving initial operational capability (IOC), etc.).

• Known changes in the operational availability of the current constellation of on-orbit satellites, or launches falling below acceptable levels. Such known changes serve as a forcing function to accelerate the development of next-generation satellites.

The government program office or acquisition staff should acknowledge and identify up-front the costs to develop and implement hedging strategy initiatives, and set aside budgets for use if and when unfavorable events sufficiently affect the on-going space systems program.

Hedging strategy costs should be estimated with sufficient lead-time to allow a smooth transition with a minimal gap in capability (i.e., low-level funding with the ability for rapid ramp-up).

Recommendation: Even though the potential cost of hedging strategies may not be part of the current program’s contract, they should be part of the total government program budget.

What Tools Should be Used?

At the start of a new program, the government program office should direct, or as a minimum insist, that the contractor team, including subcontractors and major suppliers, all monitor and track program risks using the same set of management “tools”.

Each risk item should be assigned a unique identifier that can be linked back to relevant tasks on the IPMS. In addition, an ordinal value for both the probability of occurrence and the program impact should be assessed for each risk item, along with an overall qualitative and quantitative risk assessment value.
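A minimal sketch of such a risk register entry, assuming a common 1-5 ordinal scale and a probability-times-impact score; the field names, scales, and rating thresholds are hypothetical, not taken from the paper.

```python
# Sketch of a risk register entry: a unique identifier traceable to IPMS
# tasks, ordinal probability and impact values, and derived quantitative
# and qualitative risk assessments.
from dataclasses import dataclass

@dataclass
class RiskItem:
    risk_id: str          # unique identifier, linkable to IPMS task IDs
    ipms_task_ids: list   # relevant tasks on the master schedule
    probability: int      # ordinal 1 (remote) .. 5 (near certain)
    impact: int           # ordinal 1 (minimal) .. 5 (severe)

    @property
    def score(self) -> int:
        """Quantitative assessment: probability x impact (max 25)."""
        return self.probability * self.impact

    @property
    def rating(self) -> str:
        """Qualitative assessment derived from the quantitative score."""
        return "high" if self.score >= 15 else "moderate" if self.score >= 8 else "low"

r = RiskItem("R-042", ["T-1100", "T-1180"], probability=4, impact=4)
print(r.score, r.rating)  # prints: 16 high
```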

Besides directly linking each risk item identified to all the relevant tasks on the IPMS, the integrated risk management tool should be:

• directly linked to an established requirements breakdown structure and set of KTPPs; and

• assessed not only against the impact on the program's cost and schedule, but also aggregated to gauge the ability to meet each specific system-, subsystem-, and lower-level set of top-down requirements.
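The aggregation described in the second bullet might look like the following roll-up, where the worst-scoring child requirement drives its parent's assessment; the requirement IDs, hierarchy, and scores are invented for illustration.

```python
# Roll up risk scores from lower-level requirements to their parent
# system-level requirement, so the program can see where the ability to
# meet top-down requirements is most threatened.
from collections import defaultdict

# (requirement_id, parent_requirement_id, risk_score)
risk_by_requirement = [
    ("SYS-1.1", "SYS-1", 12),
    ("SYS-1.2", "SYS-1", 20),
    ("SYS-2.1", "SYS-2", 6),
]

rollup = defaultdict(int)
for req_id, parent, score in risk_by_requirement:
    rollup[parent] = max(rollup[parent], score)  # worst-case child drives parent

print(dict(rollup))  # prints {'SYS-1': 20, 'SYS-2': 6}
```

Other roll-up rules (summing, weighting by criticality) are equally plausible; the worst-case rule shown is simply one conservative choice.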

Given today’s advances in information technology, the integrated risk management tools can be resident on government and/or contractor servers and made web-based for easy access by the key members of the integrated product teams assigned across the major areas of a space system’s program. Several commercial computer programs can be integrated together; functionally, they comprise a scheduling system, a requirements tracking system, a quantitative cost and/or schedule risk uncertainty tool, and a management report generator.

Figure 4 represents a notional view of an integrated set of risk management tools.4 In addition to the four programs displayed, an integrated risk management system (in the middle of Figure 4) is needed to link the information from these programs together by cross-referencing key fields (e.g., cost breakdown structure, task identifier, requirements breakdown structure, hardware work breakdown structure, etc.).
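The cross-referencing of key fields can be sketched as a join over shared identifiers across the tools' exported records. The record contents and field names below are illustrative only.

```python
# Records notionally exported from the scheduling, requirements-tracking,
# and risk tools, joined on a shared task identifier key field.
schedule = {"T-1100": {"task": "Payload integration", "span_days": 45}}
reqs = {"KTPP-3": {"text": "Downlink rate requirement", "task_id": "T-1100"}}
risks = {"R-042": {"task_id": "T-1100", "score": 16}}

# Join risk -> task -> requirement by cross-referencing the task identifier.
linked = {
    risk_id: {
        "task": schedule[risk["task_id"]]["task"],
        "requirements": [rid for rid, r in reqs.items()
                         if r["task_id"] == risk["task_id"]],
    }
    for risk_id, risk in risks.items()
}
print(linked)
```

A production system would perform these joins inside a shared database rather than in application code, but the key-field linkage is the same idea.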

4 Even though specific commercial software programs are listed in this figure and in Figures 5 and 6, the author displays them for notional purposes only and does not endorse or imply that these specific products are recommended.


Figure 4. Notional View of Integrated Risk Management Tool Set5

Cost Risk Tools

In an effort to provide timely and complete responses to both the government and contractor team’s program managers, integration of the four programs listed in Figure 4 is critical. Failure to do so can often lead to risk issues being missed and/or not fully assessed.

As part of the management reporting process, the integrated risk management system should include a waterfall chart, displayed as Figure 5, that provides a visual identifying the progress in reducing each risk item over the current schedule. The waterfall chart helps the program office monitor each risk to ensure its score reflects the ability to achieve technical milestones on time and meet the customer/end user’s KTPP values.
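The burn-down behind such a waterfall chart can be represented as re-assessed risk scores at successive milestones. The milestones, scores, and the score floor below are invented for illustration.

```python
# Each risk item's score, re-assessed at successive program milestones,
# so the program office can see whether the planned burn-down is achieved.
waterfall = {
    "R-042": [("SRR", 20), ("PDR", 16), ("CDR", 9), ("Launch", 4)],
    "R-017": [("SRR", 12), ("PDR", 10)],
}

def burned_down(history, floor=5):
    """True if the latest assessed score has fallen to or below the floor."""
    return history[-1][1] <= floor

for risk_id, history in waterfall.items():
    print(risk_id, "burned down:", burned_down(history))
```

Risks that fail the burn-down check at a given milestone are exactly the candidates for the triggered hedging actions discussed earlier.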

5 For further information, Reference (4) provides details on one of the web-based integrated commercial software packages, Active Risk Manager (ARM)™. ARM can be linked to Microsoft Project™, Primavera P3™, etc.; Telelogic Doors™; and Microsoft Excel™/Crystal Ball™ or @RISK™.

In addition, Reference (5) describes another software package, the Probability/Consequence Screening (P/CS) tool, developed by the Air Force Aeronautical Systems Center (ASC), which ASC plans to tie more tightly to Microsoft Project™/Risk+™ and Excel™/Crystal Ball™.

Figure 5. Waterfall Chart Provides Progress of Reducing Risk Over Current Schedule

Finally, the integrated risk management system should be able to link each risk item listed on the waterfall chart shown above with the relevant tasks identified as part of the critical path on the master schedule. Each risk should include an assessment of the ability to meet the span times allocated for each relevant task. The system’s report generator should provide a visual display representing an aggregate span-time schedule uncertainty assessment covering all the relevant tasks both on and off the critical path.

Figure 6 below illustrates a useful visual for program management to assess the level of uncertainty in the overall program schedule. A key function of an integrated risk management tool is the ability to perform “what if” sensitivity analyses to determine the impact on the span times of those tasks still remaining to be completed on the program.

Figure 6. Schedule Range Uncertainty Assessment
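A minimal Monte Carlo sketch of such a "what if" span-time analysis, assuming triangular (min, most likely, max) distributions for each remaining task; the task names, spans, and percentiles shown are invented, not drawn from the paper.

```python
# Sample each remaining task's span from a triangular distribution and
# aggregate, yielding a range of completion estimates rather than a point.
import random

remaining_tasks = {            # (min, most likely, max) span in days
    "Integration": (40, 45, 70),
    "Env. test":   (30, 35, 60),
    "Ship & mate": (10, 14, 25),
}

def simulate_total_span(tasks, trials=10_000, seed=1):
    rng = random.Random(seed)
    totals = [sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
              for _ in range(trials)]
    totals.sort()
    return totals[int(0.5 * trials)], totals[int(0.9 * trials)]  # p50, p90

p50, p90 = simulate_total_span(remaining_tasks)
print(f"median span ~ {p50:.0f} days, 90th percentile ~ {p90:.0f} days")
```

Varying one task's (min, likely, max) triple and re-running is the "what if" sensitivity analysis: the change in the p50/p90 spread shows how much that task drives overall schedule uncertainty.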


References

(1) “Military Space Operations: Common Problems and Their Effects on Satellite and Related Acquisitions”, GAO Report (GAO-03-825R), June 2, 2003.

(2) “Risk Management Guide For DoD Acquisition, Fourth Edition”, Department of Defense, Defense Acquisition University, Defense Systems Management College, February 2001.

(3) “Pareto’s Law”, located at the Pareto Law Firm’s (Barfield House, Alderley Road, Wilmslow SK9 1PL and Sovereign House, 361 King Street, London W6 9NA) web site with the following URL address: http://www.paretolaw.co.uk/principle.html

(4) “Active Risk Manager (ARM)© – Communicating Tomorrow’s Risk Today”, presentation prepared by Tamara Sear, Business Development Consultant, Strategic Thought Limited, The Old Town Hall, 4 Queens Road, London SW19 8YA, United Kingdom. Additional information is available at http://www.strategicthought.com or http://www.arm-risk.com, or by telephone at +44 (020) 8410 4000 or fax at +44 (020) 8410 4030.

(5) “An Integrated Approach to Risk Management and Risk Assessment”, G. Jeffrey Robinette, Engineering Directorate, Systems Engineering Division, Air Force ASC, and Janet S. Marshall, Comptroller Directorate, Acquisition Cost Division, Air Force ASC, INCOSE INSIGHT, Vol. 4, Issue 1, p. 23, Approved for Public Release, ASC-011-1698, April 2001.

