
American Institute of Aeronautics and Astronautics

139901

1

Keys to Evaluating and Conveying Credible Space Systems Cost Estimates to Acquisition Management

Mel Eisman,* RAND Corporation

Santa Monica, California 90407-2138

The credibility of military space systems’ cost estimates is called into question at the start of a program, when annual program budgets established by the program office are compared with the service cost analysis agency’s position and, for most major programs, when the estimates are reviewed by an Independent Program Assessment (IPA) team and evaluated against an independent estimate generated by the OSD Cost Analysis Improvement Group (CAIG). Program cost estimates are also challenged during program execution: when the program office requires additional funds in the out-years; when the contractor team’s and/or the program office’s updated estimates at completion (EACs) increase beyond the planned and projected forecasts; and especially when a program re-baseline is required.

Although recent military space system programs have experienced substantial cost and schedule overruns, the original estimates for each program were deemed to be credible by the government analysts who perform these assessments or by the acquisition managers who ultimately granted authority to proceed/continue with development. Why were the estimates not challenged as unrealistic? Where did the process fall short?

Our analysis suggests that the evaluation process broke down in two key areas. In the broader context, the “estimating team” (of which the government analysts are a part) failed to fully disclose the complete set of ground rules and assumptions and to account for all the known technical risks and cost uncertainties associated with their space system cost estimate. And, as part of the government review process before conveying a recommended space systems program cost to acquisition management, the cost communities either did not conduct formal peer reviews or, if they did, did not fully reconcile the major underlying differences among the estimates generated.

This paper identifies the problems associated with estimating space systems costs, recommends keys to evaluating how credible program cost estimates are, and suggests how to convey the recommended results more effectively to acquisition management.

Introduction

The credibility of military space systems’ cost estimates is called into question at the start of a program, when annual program budgets established by the program office are compared with the service cost analysis agency’s position and, for most major programs, when the estimates are reviewed by an Independent Program Assessment (IPA) team and evaluated against an independent estimate generated by the OSD Cost Analysis Improvement Group (CAIG). Program cost estimates are also challenged during program execution: when the program office requires additional funds in the out-years; when the contractor team’s and/or the program office’s updated estimates at completion (EACs) increase beyond the planned and projected forecasts; and especially when a program re-baseline is required.†

* Senior Cost Analyst, Project AIR FORCE, 1776 Main Street, Mail Stop M2N.

AIAA SPACE 2008 Conference & Exposition, 9-11 September 2008, San Diego, California. AIAA 2008-7790.

Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc. The U.S. Government has a royalty-free license to exercise all rights under the copyright claimed herein for Governmental purposes. All other rights are reserved by the copyright owner.

Although recent military space system programs have experienced substantial cost and schedule overruns, the original estimates for each program were deemed to be credible by the government analysts who perform these assessments or by the acquisition managers who ultimately granted authority to proceed/continue with development. Why were the estimates not challenged as unrealistic? Where did the process fall short?

Our analysis suggests that the evaluation process broke down in two key areas. In the broader context, the “estimating team” (of which the government analysts are a part) failed to fully disclose the complete set of ground rules and assumptions and to account for all the known technical risks and cost uncertainties associated with their space system cost estimate. And, as part of the government review process before conveying a recommended space systems program cost to acquisition management, the cost communities either did not conduct formal peer reviews or, if they did, did not fully reconcile the major underlying differences among the estimates generated.

This paper focuses on identifying the specific problems associated with estimating space systems and the recommended keys to successfully evaluating program cost estimates to determine how credible they are, and then more effectively conveying the results to acquisition management. Specifically, the paper is divided into three sections, each responding to one of the following questions:

- What environmental factors have exerted pressures on space systems cost analysts?
- How can the government cost community improve the review and evaluation of space system program estimates?
- How can the government cost community do a better job of communicating space systems estimates to decision-makers?

Each section also includes suggested actions that government space systems cost analysts should consider for coping with and mitigating pressures, improving the quality of the review and evaluation of estimates, and more effectively communicating cost results to acquisition managers.

Environmental Factors Exerting Pressure on Today’s Space Systems Cost Analysts

Assessments of Space System Cost Process & Estimates

Over the past few years, RAND and our Project AIR FORCE (PAF) business unit have performed in-depth case-study examinations of past and on-going military space systems programs using Selected Acquisition Reports (SARs), program review briefings, etc.‡ We have conducted extensive interviews with the Air Force Space and Missile Systems Center (SMC) cost and financial management staff, cost chiefs and their staffs from the system program offices (SPOs), and other technical support staff. We have also organized and held discussions with individuals who estimate and/or review the costs of acquiring space systems at other agencies and organizations: Aerospace Corp., the Air Force Cost Analysis Agency (AFCAA), the OSD CAIG office, the National Reconnaissance Office (NRO), and the National Aeronautics and Space Administration (NASA). Under non-attribution, we have gathered lessons learned and identified issues and concerns about the cost estimating processes employed. We have identified and compared differences in the magnitude (percentages) and primary causal factors of space systems program cost growth and schedule slips (from contractors’ authority to proceed (ATP) through first satellite launch dates) experienced over each acquisition phase. We have assessed whether changes in cost estimating processes over the years have contributed to differences in program cost growth percentages. Finally, we evaluated how effectively space systems cost results were being used by acquisition managers to support:

- major acquisition milestone reviews,
- changes in annual program budgets, or
- early warning for proactively taking corrective actions before major cost growth occurs.
† Program baseline updates are required when there are program office-directed scope changes; as part of interim baseline reviews (IBRs) or during major preliminary or critical design reviews (PDR, CDR, etc.); when earned value management (EVM) cost and schedule performance indices fall below acceptable levels; or when the SPO is notified of a Nunn-McCurdy program cost breach.

‡ See RAND references (1) and (2) for further details on the assessment approach and the program cost growth and schedule slip findings.


Figure 1 displays the list of major military space systems programs that we reviewed in a PAF Fiscal Year (FY) 2006 study for each of five different areas of space-based capabilities, beginning with the Air Force DSCS III wideband MILSATCOM program with an ATP in 1977 through the more recent Navy Mobile User Objective System (MUOS) narrowband MILSATCOM program with an ATP in 2004.§ As part of a more recent FY 2007 study, we added the Global Positioning System (GPS) Block IIR and IIF systems as another case study and revisited the SBIRS-High missile-warning program.**

Figure 1. Past and Present Military Space Systems Programs Reviewed

Changes in the Space Systems Acquisition Environment

Table 1 highlights the major changes in the way programs have been executed, the extent of oversight implemented, and the expanded variety of cost estimating techniques used within the military space systems acquisition environment over the same forty-six-year period, 1960 through 2006, divided into the same five segments displayed in Figure 1.

§ See RAND reference (1) for a specific assessment of each of the programs listed in Figure 1 across each of the five space-based capabilities listed.

** See RAND reference (2) for further details on the observations and findings for these two case study programs.


Table 1. Changes in Space Systems Acquisition Environment

(The table’s columns cover programs with ATP before 1995 — 1960-1975, 1976-1989, 1990-1995 — and programs with ATP after 1995 — 1996-2001, 2002-Present.)

Program Execution
- 1960-1975: Meeting critical national security needs; on-orbit performance given highest priority over program costs (early DSP, CORONA).
- 1976-1989: Except for MILSTAR I, most systems, including the follow-on MILSTAR II system, based on low-risk, evolutionary technology.
- 1990-1995: SPO technical team controlled requirements, closely monitored risks, and separately managed both satellite bus and mission payload contractors’ costs and schedules (DSP follow-on blocks).
- 1996-2001: Replaced legacy systems with more enhanced systems meeting multi-mission capabilities across a wider base of users; constrained program budgets and set unrealistic first-launch schedules (SBIRS, NPOESS).
- 2002-Present: Initiating spiral development approaches to acquiring new systems, especially the first satellite in a block (GPS III); implementing a hedging strategy against potential operational gaps (3rd Gen IR System).

Program Oversight
- 1960-1975: Management advocates clearing the way (DSP); SECDEF initially directed the Services to perform ICEs as part of DAB milestone reviews (1971); established the OSD CAIG office (1972).
- 1976-1989: Began submitting Selected Acquisition Reports to Congress for all major acquisition programs (1985); began enforcing Nunn-McCurdy cost breaches; Congress directed SECDEF to perform ICEs as part of the PPBS process (1986).
- 1990-1995: AF advocated TSPR and CAIV†† acquisition reforms (early 1990s); AFCAA performed SCPs and Sufficiency Reviews; SMC/FMC primarily assisting SPOs (through the mid-1990s).
- 1996-2001: Dissolved the SMC/FMC office and transferred cost functions to SPOs; increased focus on INTEL program budgets; started up the NRO Cost Group, with the IC CAIG performing ICEs.
- 2002-Present: Reestablished the SMC/FMC centralized cost function and added staff (2005); released initial and updated NSSP 03-01 space policy; IC CAIG now under the ODNI PA&E office.

Cost Estimating Techniques
- 1960-1975: Primarily based on top-down analogs and/or dollars-per-pound factors.
- 1976-1989: Emerging use of the first commercial cost models (e.g., PRICE, SEER) and later government parametric cost models (e.g., USCM, Passive Sensor Cost Model).
- 1990-1995: Began expanding and sharing AF and NASA space system historical databases.
- 1996-2001: Estimates adjusted to reflect the merger of space contractors down to ~3 to 4 primes (around 1997), and projected, but never realized, savings from implementing NWODB, commercial “best practices,” and increased use of COTS parts.
- 2002-Present: Applying more robust cost-risk uncertainty techniques; developing new schedule estimating tools; including the costs of adding back mission assurance (QA) and other MIL-Spec tasks (SBIRS, AEHF).

Over the past ten to fifteen years, institutional, cultural, budgetary, and other environmental factors have exerted pressures on both contractor and government space systems cost analysts that have hampered their ability to maintain objectivity. In addition, we have observed that the majority of program advocates have a vested interest in ensuring that analysts produce optimistic program cost estimates by assuming minimal technical and schedule risks early in the acquisition phase, especially prior to contract award.

†† The CAIV (cost as an independent variable) acquisition reform initiative was implemented with the objective of encouraging contractors, as part of contract award and once under contract, to continue reducing system costs while still providing the government and the end users with the maximum “best” value in terms of the system meeting or exceeding the expected performance.


Furthermore, the majority of both contractor and government space systems cost analysts we have spoken with have based estimates on a new space system design assumed to be technically feasible and more capable of meeting multi-mission performance than the predecessor legacy satellites it is intended to replace. Experienced systems engineers have stated that closing in on a baseline satellite design that meets a potentially conflicting set of user requirements cannot be easily achieved on first principles and, in some circumstances, even violates the “laws of physics.” When contractors have attempted to propose lower-risk system alternatives, the typical response from program directors has been that these potential solutions are non-compliant and that the ultimate success of the program is defined in terms of “satisfying and delivering nothing less than the 100 percent design solution.” Finally, even if the system design baseline is clearly defined and each of the subsystems and major components is assessed as fairly mature, with a high technology readiness level (TRL), having previously functioned separately in a comparable space environment, there is still the challenge of credibly estimating the cost of the system-level final assembly, integration and test (AI&T) effort. In most cases, there are minimal, if any, relevant historical data points from analogous systems at the same increased level of system-integration complexity needed for the new system.
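The “more robust cost risk uncertainty techniques” noted in Table 1 are typically Monte Carlo analyses over WBS-element cost distributions. Below is a minimal sketch of that idea; the WBS elements, triangular-distribution parameters, and dollar values are hypothetical, not drawn from any program discussed here.

```python
import random

def monte_carlo_total(wbs_triangles, trials=20000, seed=1):
    """Draw each WBS element's cost from a triangular (low, mode, high)
    distribution, sum across elements per trial, and return the sorted
    totals so percentiles can be read off the resulting S-curve."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode) in that order.
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in wbs_triangles.values()))
    totals.sort()
    return totals

def percentile(sorted_vals, p):
    """Simple nearest-rank percentile of a pre-sorted list."""
    idx = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
    return sorted_vals[idx]

# Hypothetical WBS elements with (low, most-likely, high) costs in $M.
wbs = {"bus": (350, 400, 520), "payload": (450, 500, 700), "ai_and_t": (80, 100, 180)}
totals = monte_carlo_total(wbs)
print(round(percentile(totals, 50)), round(percentile(totals, 80)))
```

Reporting the 50th versus 80th percentile of the resulting S-curve makes the cost-confidence level behind a recommended estimate explicit to decision-makers.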

Suggestions for Space Systems Acquisition Managers

Space systems acquisition management has in some instances implemented better policies and practices to reduce some of the pressures the cost analysis community has been and is continuing to experience. Management has to ensure that government program directors are responsible for, and have the means and sufficient resources to, manage space system acquisitions more effectively. Management also has to continue to endorse acquisition reform guidance that remains focused on holding program directors accountable and on implementing effective oversight of contractor teams’ activities. The Air Force leadership should establish guidance for acquiring space systems that is driven by:

- shifting primary program emphasis from cost and schedule to operational performance,
- identifying and prioritizing the urgency of capabilities needed for each mission of multiple-mission systems, and
- where possible, initiating an evolutionary, block design approach to satellite procurement.

If an evolutionary design approach is not possible and there is urgency for acquiring new systems to fill potential operational gaps in coverage or to provide needed capability to the warfighters, then program directors and their technical and schedule staff should be aware of and acknowledge the known risks, so cost analysts can estimate the additional costs needed to mitigate those risks over the program’s development tasks and timelines. Even though progress has been made, the author believes the pressures still remain; the remainder of this paper describes suggested actions that government space system cost analysts should consider for improving the way estimates are reviewed and evaluated to ensure they are credible, and for more effectively conveying the results to decision-makers.

Improving the Review and Evaluation of Space Systems Cost Estimates

Evaluating the Credibility of the Estimates

Many formal military government requests for proposals (RFPs) released for contractor teams to bid on include specific directions for submitting separate technical, cost, and management proposal volumes back to the customer. In most cases, the Evaluation Criteria (identified as Section M) provide a description of the government’s basis for judging the credibility of the proposed development cost estimates that each prospective contractor team submits, which the team should attempt to demonstrate in three separate and distinct ways: Is the estimate complete, reasonable, and realistic? Even though these terms are adjectives and the three criteria may appear subjective on the surface, the same guidelines can also be applied to government internal reviews of estimates prepared by the program office, the acquisition command’s financial management (FM) staff, the service’s cost analysis agency (AFCAA), the OSD CAIG, and other offices involved in the preparation and review of space system estimates. Below are suggested actions for government space systems cost analysts to consider in ensuring that the:


1. Estimate is Complete? Get a consensus from the program’s chief engineer or systems engineer, and as needed from subject matter experts (SMEs), that the space systems cost estimates:

- map into and reflect the current and correct technical system baseline configuration, defined at the work-breakdown structure (WBS) hardware and software level of detail, that will meet the end user’s set of requirements and the flow-down of top-level design specifications; and
- include the development and I&T costs of meeting all the internal interfaces (with, for example, government furnished equipment, host satellites, etc.) and external interfaces (with the launch vehicle, ground control segment, user equipment, etc.) that are included, or agreed to be excluded, as part of the total system costs.

Also ensure that the cost estimated for mitigating each of the known risks is accounted for and tracked across the defined WBS level of subsystems and major components.

2. Estimate is Reasonable? Compare and crosscheck the current system’s cost estimates top-down, from the highest to the lowest WBS levels, with the costs of a legacy set of analogous systems and subsystems. Cost differences should be reconciled to assess whether differences in the estimating ground rules and assumptions, historical databases, and estimating methods used could have significantly impacted the results. After reconciling and making adjustments for major differences, one can compare the current system’s cost results with an analogous system’s cost based primarily on actual or adjusted expenditures to assess how close the two costs are and whether they are at least within, for example, the same 20 to 30 percent “ballpark” of each other.

3. Estimate is Realistic? Determining whether the estimate is realistic requires the government cost analyst to assess the contractor team’s current and projected space systems business base and their collective ability to:

- ramp up the team’s anticipated total workforce assigned to the program as proposed shortly after ATP, and
- meet their estimated peak spending rate later in the program, given each contractor’s projected available workforce and the demand for the same expertise on other concurrent space system programs.
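The 20-to-30-percent “ballpark” crosscheck for reasonableness can be expressed as a simple percent-difference screen. This is an illustrative sketch; the dollar values and the 30 percent threshold are hypothetical.

```python
def percent_difference(current, analog):
    """Relative difference between the current estimate and an analogous
    system's actual (or adjusted) expenditures, in percent."""
    return abs(current - analog) / analog * 100.0

def within_ballpark(current, analog, threshold_pct=30.0):
    """True when the two costs fall within the chosen 'ballpark'
    (the 20-to-30-percent range suggested in the text)."""
    return percent_difference(current, analog) <= threshold_pct

# Hypothetical figures in then-year $M.
current_estimate = 1250.0
analog_actuals = 1000.0
print(percent_difference(current_estimate, analog_actuals))  # 25.0
print(within_ballpark(current_estimate, analog_actuals))     # True
```

A screen like this only flags a gap; the reconciliation step described above still has to explain why the gap exists before the estimate is judged reasonable.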

Instituting Independent Technical & Schedule Program Reviews

Since there is a continuing shortage of experienced systems engineers assigned to both contractor and government program offices, the need for independent technical and schedule reviews of on-going space system programs is critical. Therefore, before cost estimates are finalized for major government program reviews and upcoming milestone decision meetings, independent technical and schedule reviews should enable government space systems cost analysts to draw on key experts not assigned to the contractor and government program office staffs who can:

- confirm and/or identify the likelihood and consequences of existing and potentially new technical risks, and the corresponding impacts of potentially not completing key tasks as planned; and
- evaluate the “cascading” impacts of each technical risk for a given component or subsystem, which can potentially trigger a chain of other interrelated technical problems or schedule delays.

An example of a cascading impact occurred prior to the start of SBIRS-High Geosynchronous (GEO) satellite production, when the contractor team’s AI&T personnel were put on standby until the Highly Elliptical Orbit (HEO) payload’s electro-magnetic interference problem was resolved. This risk item resulted in increased labor costs charged to the government and in additional program cost growth, sometimes referred to as the “standing army” effect. It is therefore extremely important for government cost analysts to reassess where the critical path is, along with the projected workload and the rate of expenditure over time (in weeks or months) across the affected tasks.
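Reassessing where the critical path lies after a risk event like the one above amounts to recomputing the longest duration-weighted path through the task network. The sketch below is illustrative only; the task names and durations are hypothetical, not actual SBIRS data.

```python
def critical_path(tasks, deps):
    """Longest (duration-weighted) path through a small task network.
    tasks: name -> duration in weeks; deps: name -> list of predecessors."""
    finish = {}         # earliest finish time per task
    pred_on_path = {}   # predecessor that drives each task's start

    def earliest_finish(t):
        if t in finish:
            return finish[t]
        best, best_pred = 0.0, None
        for p in deps.get(t, []):
            ef = earliest_finish(p)
            if ef > best:
                best, best_pred = ef, p
        finish[t] = best + tasks[t]
        pred_on_path[t] = best_pred
        return finish[t]

    for t in tasks:
        earliest_finish(t)
    end = max(finish, key=finish.get)
    path = [end]
    while pred_on_path[path[-1]] is not None:
        path.append(pred_on_path[path[-1]])
    path.reverse()
    return finish[end], path

# Hypothetical AI&T network: a payload rework delays system-level AI&T.
durations = {"payload_rework": 12.0, "bus_integration": 8.0, "ai_and_t": 10.0}
predecessors = {"ai_and_t": ["payload_rework", "bus_integration"]}
total, path = critical_path(durations, predecessors)
print(total, path)  # 22.0 ['payload_rework', 'ai_and_t']
```

Rerunning the computation whenever a risk item slips shows immediately which tasks now pace the program and, by extension, where the “standing army” labor costs will accrue.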


Conducting Formal Government Space System Cost Estimate Peer Reviews

Figure 2 illustrates the notional process that typically occurs when space systems cost estimates are reviewed by management within the contractor community (the Corporate Division Sign-off process displayed on the left side) in contrast with the government (OSD) management program cost review (illustrated on the right side).

Figure 2. Comparing Contractor and Government Space System Cost Review Processes

There are similarities in the processes for conducting contractor and government space system cost reviews. During a cost proposal sign-off at the division management level, the contractor will review estimates based on the cost analysts’ results using parametric models and/or an analogous-system approach, marketing’s values for arriving at the projected best (lowest) bid to beat the competition and be part of the winning team, and the proposal manager’s “bottom-up” detailed estimates, usually assembled within the contractor’s pricing or finance department. This corporate division sign-off can occur at this management level or at a lower level, depending on whether the estimate is for a new-start proposal, an ECP, or part of an updated re-baseline activity for the customer. After some level of reconciliation and discussion, the final product is the submittal of the contractor team’s cost proposal to the SPO program director. The SPO, with the contractor team’s cost proposal volume in hand and after some interaction with the government cost analysis staff, then generates a program office estimate (POE) based on reviewing the contractors’ estimates and adding dollars where there are known technical and schedule risk items that it believes are either underestimated by the contractor team or not adequately covered. If the POE has been generated as a result of a major program re-baseline due to requirements or scope changes, or a pending or cited program cost or schedule breach, then the AFCAA concurrently generates a service cost (non-advocate) estimate and the OSD CAIG office an independent cost estimate (ICE). In addition, for major cost breaches, an IPA team is formed to provide its technical and program schedule risk assessment review findings, as well as a cost assessment (sometimes qualitative), either independently from or with the support of the OSD CAIG office.
Even though the Air Force and OSD CAIG space cost analysts have made progress in reviewing each other’s estimates before presenting their final results to their respective chains of command, these reviews are usually done very informally. The government space cost analysis community should consider initiating and conducting more formal peer reviews, covering, at a minimum, full reconciliation of differences in the key representative areas described below.

Key Representative Areas for Reconciling Government Cost Estimates

Below is a list of five major areas in which the author suggests the government space systems cost community discuss and attempt to more fully resolve or reconcile differences as part of their formal review process, before finalizing their cost estimates to acquisition management.


1. Is there a Common Understanding of the Purpose and Scope of the Estimates?

- Does the estimate cover only the acquisition cost of the contractor team’s period of performance, or the total acquisition costs across RDT&E and procurement (different “colors of money”) budgets, from concept development through end of on-orbit life of the last satellite launched?
- Does the estimate cover the full life-cycle cost, including the cost of operating, maintaining, and decommissioning or disposing of the on-orbit satellites through the end of their mission life?
- Does the scope of the estimate include costs covering in-scope or out-of-scope requirements changes or ECPs, or the added costs of designing in and meeting space-to-terrestrial segment interface requirements/specifications?
- Should transition costs from operating legacy systems to the new system be included as part of the total system cost estimate, including the cost of government furnished equipment and of new or upgraded terrestrial communication systems, which link information transmitted from the satellites that is processed, exploited, and disseminated out to the end users?

On this latter point, an approved concept of operations (CONOPS) schematic from the end-user community and the program office is an excellent “blueprint” for providing a common understanding of which system-oriented WBS cost elements to include as part of the total program cost estimate.

2. Different Technical Baseline Maturity Assessments? Even if the government cost community is using the same most-feasible system baseline design to meet the requirements, are there differences in the assessed level of technical maturity, or technology readiness levels (TRLs)? Even if all the government space system cost estimating organizations are using the same TRL criteria, differences in assessed readiness levels for subsystems and/or assemblies can drive the percent-new-design assumptions and the non-recurring engineering cost estimates, especially since differences in percent new design have been shown not to be linearly proportional to these estimates.
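The point that non-recurring engineering (NRE) cost is not linearly proportional to percent new design can be illustrated with a simple power-law adjustment. The exponent and baseline values below are purely illustrative assumptions, not a validated cost estimating relationship (CER).

```python
def nre_estimate(analog_nre, pct_new_design, exponent=0.7):
    """Illustrative non-recurring engineering (NRE) adjustment.

    A power law with exponent < 1 means halving the percent-new-design
    assumption does NOT halve NRE. Both the exponent and the analog
    baseline below are hypothetical, not a validated CER."""
    return analog_nre * (pct_new_design / 100.0) ** exponent

base = 500.0  # hypothetical analog NRE ($M) for an all-new (100%) design
print(round(nre_estimate(base, 100.0), 1))  # 500.0
print(round(nre_estimate(base, 50.0), 1))   # well above 250.0 (half)
```

Because of this non-linearity, two organizations that differ only modestly in their assessed TRLs, and hence in percent new design, can still diverge substantially in NRE, which is why the maturity assessments themselves need to be reconciled.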

3. Different Basis for Estimating System Engineering, I&T (SEIT) and Program Management (PM) Costs?

The SEIT & PM cost for space systems (as well as other weapon systems) have typically been based on summing up the total costs estimated at either each major subsystem or at the system level, and then using a linear multiplier or percentage factor as a representative estimate of the level of effort required for performing these functions. There have been other estimating methods used that separate out SE and PM based on analogs of actual expenditures on similar predecessor space system’s completed development programs.

There is also uncertainty inherent in estimating I&T costs using linear multiplier values derived at the subsystem or system level from a mix of TRL values assessed at lower-level assemblies and components that, in some cases, have never been integrated and tested together. Furthermore, I&T costs can prove to be significantly high, especially when estimating the cost and potential technical risks of integrating and testing very complex multi-functional subsystems, designed by a team of second- and third-tier subcontractors, into the overall system.

Since the total or individual costs of SE, I&T, and PM can be based on different estimating approaches, it is worth the government cost teams' time to reconcile and understand these differences.
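The factor-based approach described above can be sketched as follows; the subsystem costs and the SEIT/PM percentage factors below are hypothetical placeholders, not values from any actual program.

```python
# Sketch of the percentage-factor method; all numbers are hypothetical.

subsystem_costs = {   # prime mission equipment (PME) estimates, $M
    "payload": 220.0,
    "bus": 140.0,
    "flight_software": 60.0,
}

SEIT_FACTOR = 0.18  # systems engineering, integration & test wrap factor
PM_FACTOR = 0.08    # program management wrap factor

pme_total = sum(subsystem_costs.values())
seit = pme_total * SEIT_FACTOR
pm = pme_total * PM_FACTOR
total = pme_total + seit + pm
print(f"PME ${pme_total:.1f}M + SEIT ${seit:.1f}M + PM ${pm:.1f}M = ${total:.1f}M")
```

Because the wrap factors multiply whatever roll-up they are applied to, two teams using different factors, or applying them at different WBS levels, will diverge even when they agree on the subsystem costs; this is the reconciliation the text recommends.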

4. Different Basis for Estimating Space and Ground Segments' Software Development Efforts?

Estimating the costs of space and ground segment software development is another area that has produced widely varying results, not only across the government space systems cost community but across the contractor cost community, writ large, in estimating the software costs of all major weapon systems. Sizing the effort using either function points or software lines of code (SLOC) counts can yield different results. Even when the software size metrics are nearly equivalent, there is usually no standard commercial software cost model used across the government space system cost communities. Software development cost estimates do not always converge due to –

− Each cost model having a different number of input attributes covering the software's functional complexity, personnel skill levels, development tools available, etc.; and
− A lack of standard historical parametric input data and actual costs available to access and use from similar application-specific software development programs.
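The divergence described above can be illustrated with two parametric models of the common form effort = a * KSLOC^b applied to the same size estimate. The coefficient sets below are invented for illustration and do not correspond to any particular commercial tool's calibration.

```python
# Two hypothetical calibrations of effort = a * KSLOC**b; neither
# coefficient set is from a real commercial software cost model.

def effort_pm(ksloc, a, b):
    """Development effort in person-months for a size in KSLOC."""
    return a * ksloc ** b

size = 150.0  # thousand source lines of code (same size input to both)
model_a = effort_pm(size, a=2.8, b=1.20)  # one hypothetical calibration
model_b = effort_pm(size, a=3.6, b=1.12)  # a differently calibrated model

spread = abs(model_a - model_b) / min(model_a, model_b)
print(f"Model A: {model_a:,.0f} PM, Model B: {model_b:,.0f} PM, spread: {spread:.0%}")
```

Even with an identical SLOC count, the two calibrations disagree by a double-digit percentage, which is the convergence problem the bullets above describe.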

5. Use of Different Schedule Assessments Can Significantly Drive Up Total Program Cost? Finally, there are different, still-evolving methods of estimating program schedule span times from ATP to major program review milestones (e.g., PDRs or CDRs) to first satellite launch. Span time estimates and the corresponding time-phased program costs over a given estimate timeframe can vary significantly depending on the estimating approach used. Span times can be estimated and costs time-phased either by adjusting the actual timelines of analogous predecessor space system development programs for differences in missions and system complexity relative to the new system being estimated, or parametrically as a function of the satellite's peak power, pointing accuracy, or other stressing requirements.
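A parametric span-time relationship of the kind mentioned above might look like the sketch below. The functional form and every coefficient are invented for illustration; a real relationship would be regressed from historical program data.

```python
# Purely illustrative span-time relationship; the coefficients are
# hypothetical stand-ins, not values from any published model.

def span_months(peak_power_kw, pointing_arcsec):
    """Estimate ATP-to-first-launch span from two stressing
    requirements (tighter pointing -> longer span)."""
    return 36.0 + 1.8 * peak_power_kw + 24.0 / pointing_arcsec

# A higher-power, tighter-pointing design stretches the schedule:
print(f"{span_months(5.0, 10.0):.0f} months vs {span_months(12.0, 2.0):.0f} months")
```

Since time-phased costs follow the span, two teams using different span-time methods will also disagree on annual funding profiles even when their total cost estimates agree.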

Effectively Communicating Space Systems Estimates to Government Decisionmakers

After fully reviewing each of the government space system cost estimates and ensuring they are credible, accounting for and including costs based on the findings of an independent technical and schedule review team, and reconciling differences with other concurrent estimates within the government cost community, the program cost results are ready to be presented to AF and OSD acquisition management. This section covers three suggested approaches for more effectively communicating space systems cost results to decisionmakers.

Covering Key Points Tailored to Putting Acquisition Managers on the Same Wavelength

The first approach relies on the government space system cost analyst's understanding of how well the acquisition managers know the system the program cost review covers, and on his or her ability to tailor the briefing slides and verbiage to provide an overview of the purpose and scope of the cost analysis; a top-level description of the space system being estimated and the missions and functions it performs; and, if relevant, the improved capabilities the new system is expected to provide over the predecessor system it will be replacing.

Below is a representative example of a few unclassified slides from a larger FY 1998 RAND presentation, a Cost Assessment of Alternative Space-Based Radar (SBR) System Options. The first slide, displayed as Figure 3 below, provides management with a concise up-front understanding of what is and is not included in the space system's cost estimates.

Assessing Costs of Alternative SBR Options

Covers:
− All space segment costs (including launch) to reach full operational capability (demonstration, pre-EMD, EMD, production)
− All replacement costs to maintain FOC and satellite availability over a 20-year period
− All sustaining O&S costs to maintain reusable space assets

Does not cover:
− Terrestrial costs to process, exploit, and disseminate radio-frequency imagery data
− Integration with other collectors' intelligence data
− Communications "info-structure" costs to "close the link"
− Cost differences of satellite and mission control

Figure 3. Scope of Cost Analysis

The next slide, displayed as Figure 4 below, identifies the baseline case, which is important to disclose at the beginning of the briefing, since the costs for this system design will be estimated and compared with the other SBR system alternatives described later in the presentation. This slide identifies how many satellites (including on-orbit spares) are included in the estimate, the launch lift (weight) and size (volumetric) constraints on each satellite, and the mean mission duration (MMD) the satellites should be designed to achieve once on orbit. This slide, combined with the previous one (Figure 3 above), quickly puts the acquisition managers present on the "same wavelength" with you and provides the common understanding needed to proceed with the rest of the briefing.


Figure 4. System Baseline Description

Next, we presented a slide (not shown) providing background on the lack of technical heritage and the extent of new design features required to produce these satellites. The slide after that, displayed as Figure 5 below, provides the government managers with a list of the key cost drivers that will shape the magnitude of the estimates.

SBR System Weight-Based Assumptions That Drive Estimates

− Reduced antenna array weights using ultra-lightweight structures
  − Assumes 75% weight reduction over current state-of-the-art parts area density of 6 kg/m2
  − Assumes 66% weight reduction over current state-of-the-art T/R modules
− Improved efficiencies in power generation & distribution subsystems
  − Assumes more than a 6x increase in watts/kg at 85% efficiency (e.g., using thin-film GaAs solar array cells)
− Increased supply of lighter weight, more capable space-qualified processors & data storage units
− Advances in smaller satellite bus provide up to 25% weight savings
  − Depends in part on technical readiness & maturity of multi-function structures (e.g., with embedded flex-circuit active electronics)
  − Relies on increased use of components from commercial-like satellite procurements

Figure 5. Cost Drivers & Estimating Assumptions

In this briefing, the estimating assumptions listed above were used first to estimate the SBR satellite subsystem and lower-level weights, based on the new technology being developed (e.g., active phased-array elements, power generation efficiencies) and its rate of maturation, to yield projected reductions in the satellite's on-orbit size and empty weight. The next slide, which logically follows, is a table, Table 2 below, of RAND's independent assessment of the baseline SBR satellite's total dry weight and mission (SAR/GMTI radar) payload weight estimates, without and with margins, compared with the weights previously estimated by DARPA for the same "as is" baseline satellite design.
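Weight estimates of this kind typically feed a weight-based cost estimating relationship (CER) of the form cost = a * weight^b. The sketch below shows the mechanics only; the coefficients, the 2,400 kg dry weight, and the 25 percent margin are hypothetical, not the actual SBR values.

```python
# Weight-based CER sketch; coefficients and weights are hypothetical.

def unit_cost(dry_weight_kg, a=0.25, b=0.93):
    """First-unit recurring cost in $M from satellite dry weight (kg)."""
    return a * dry_weight_kg ** b

w_no_margin = 2400.0                # estimated dry weight, kg (hypothetical)
w_with_margin = w_no_margin * 1.25  # with a 25% weight growth margin

print(f"${unit_cost(w_no_margin):.0f}M without margin, "
      f"${unit_cost(w_with_margin):.0f}M with margin")
```

Because the exponent is below 1.0 in this sketch, a 25 percent weight margin raises the unit cost by somewhat less than 25 percent, which is one reason weight margins and cost margins should be disclosed separately.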


Table 2. Independent Assessment of SBR Satellite Weights

The weight assessment table is followed by a slide of another table, Table 3 below, summarizing RAND's range estimates of the SBR satellite's unit recurring cost.

Table 3. SBR Satellite Unit Recurring Cost Range Estimates

In both of these tables, the results are clear and logical. In Table 3 above, the lower end of the range estimate, $111M, is based on the estimated weights with a 25 percent margin; the upper end, $125M, adds the $14M estimated for mitigating the known aggregated technical risks, assessed and summarized qualitatively as high for designing and producing the mission payload and moderate for the bus. For the purposes of comparing the SBR satellite baseline with other alternatives, the assumptions and basis for this range estimate are relatively straightforward. However, the majority of space system estimates requiring a government management cost review do not involve comparing alternative designs with a baseline, and a recommended range estimate may not be sufficient for the acquisition managers present to base their decisions on. Consequently, the next section discusses how to more effectively communicate technical risk and cost uncertainty assessment results to acquisition managers.


Communicating Cost Risk Uncertainty Assessments Effectively to Acquisition Managers

Understanding What Drives the Shape of Cost Risk Analysis Results

Figure 6 displays a slide of the technical risk / cost uncertainty analysis results for the baseline SBR satellite's unit recurring cost, previously listed as a range estimate in Table 3 above. The cost estimate is displayed as a cumulative distribution function (CDF) "S" curve, where the lower end of the range, $111M, falls close to the 0.5 or 50 percent confidence level, i.e., a 50/50 probability that the actual program cost will overrun or underrun the estimate; the upper bound of $125M has an 80 percent confidence level, i.e., an 80 percent chance of underrunning and a 20 percent chance (80/20) of overrunning the $125M estimate.

Figure 6. SBR Satellite Unit Recurring Costs Risk Analysis Results
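The S-curve readings described above can be reproduced with a minimal Monte Carlo sketch: draw the unit recurring cost from a skewed distribution and read values off the empirical CDF at chosen confidence levels. The triangular distribution parameters below are hypothetical, chosen only so the percentiles land roughly near the $111M and $125M values in the text; they are not the actual SBR inputs.

```python
# Minimal Monte Carlo S-curve sketch; distribution parameters are
# hypothetical, not the actual SBR risk analysis inputs.
import random

random.seed(1)
draws = sorted(random.triangular(94.0, 145.0, 102.0) for _ in range(100_000))

def percentile(sorted_draws, p):
    """Read a value off the empirical CDF ("S" curve) at confidence p."""
    return sorted_draws[int(p * (len(sorted_draws) - 1))]

p50 = percentile(draws, 0.50)  # 50/50: equal odds of overrun and underrun
p80 = percentile(draws, 0.80)  # 80/20: 80% chance of underrunning this value
print(f"P50 = ${p50:.0f}M, P80 = ${p80:.0f}M")
```

The right-skewed (triangular) input is what makes the 80 percent confidence value sit well above the most likely cost, producing the characteristic shape of the "S" curve.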

This representation of the cost risk analysis results may be well understood across the cost community and by the majority of today's acquisition managers, but the analyst should be prepared to respond to acquisition managers' questions, since there may well be a lack of confidence in the results. Even if the managers present understand and appreciate this type of display, the government cost analyst should have a slide or two available at this point in the briefing that clearly demonstrates the credibility and reasonableness of the underlying risk analysis. The analyst should be able to break out and identify each known system-specific technical and program schedule-related risk. If there is a question about which program risks are more worrisome than others, the government cost analyst, with assistance from the program director, lead systems engineer, and SMEs, should be prepared with a slide on the results of the risk assessment that displays the likelihood of each risk event occurring and the impact that not mitigating the risk would have on meeting the system's expected performance and/or the planned first launch date. On the other hand, if the acquisition managers present have preconceived notions or prior knowledge of the five to ten most compelling risks, the cost analyst should be prepared to identify the cost estimated to mitigate each risk and the resulting shape and steepness of slope of the total program cost "S" curve in terms of reducing the overall uncertainty in the total system cost estimate. In the representative SBR satellite example above, the lead government cost analyst should be prepared to anticipate and respond to a manager's question on whether the total non-recurring development estimate includes sufficient costs for mitigating the most compelling risks.
Regardless of which approach is taken, the slides should give the decisionmaker the ability to calibrate the credibility of the cost risk results and to understand the tradeoffs in allocating funds across various levels of risk reduction plans.


Selecting the Most Appropriate Cost Risk Analysis Method

In preparing for the government management review of the cost results and potential questions about them, it may prove too challenging, or there may not be sufficient time, detailed information, or SMEs available, to generate credible inputs for displaying cost risk analysis results using a probabilistic or Monte Carlo approach similar to the results displayed in Figure 6 above. Consequently, even though there are many approaches listed in Figure 7 below, there is no one cost risk assessment method that fits all circumstances‡‡.

Figure 7. Different Cost Risk Analysis Methods

Even though the majority of space systems cost analysts now use Monte Carlo methods, especially since they fit well with the roll-up of total costs estimated at lower WBS levels, there are some current issues in correctly using this approach. Justifications for selecting input and cost probability distributions (e.g., triangular, log-normal) lack a solid statistical basis or are not always linked to details provided during the interview process with systems engineers and SMEs. Correlation values between input variables or across different WBS cost elements are either missing or based on qualitative "best judgments" without any statistical basis. Furthermore, not all known technical risks can easily be accounted for directly across specific WBS elements, and it is difficult to map the effects of known schedule risks onto specific WBS hardware- and software-related cost elements. A few of the more senior government cost analysts also felt the Monte Carlo analysis process is "broken" in terms of producing "common sense" results: "The majority of cost distributions are far too narrow to represent actual aggregate levels of known risks." This type of result and the program costs recommended from it "lead management into a false sense of security that the program is properly funded." Even if decisions are made to fund all programs at the 80 percent or higher confidence level, doing so may only serve as a "band-aid for generating estimates based on an unrealistic system baseline design that won't meet the requirements." Finally, even "if programs were funded to a true 80 percent level, the AF total space systems budget would be unaffordable and/or excessive." In summary, the overall Monte Carlo approach to performing cost risk uncertainty analyses may end up portraying more analytic rigor than is justified or warranted.
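The "too narrow" complaint above is largely a correlation effect, which a small simulation can illustrate. The setup is hypothetical: ten identical normally distributed WBS elements and a 0.6 common-factor weight are stand-ins chosen only to show the direction of the effect.

```python
# Hypothetical illustration: rolling up WBS costs as if independent
# narrows the total-cost spread relative to a correlated roll-up.
import random
from statistics import pstdev

random.seed(2)
N, MEAN, SDEV, ELEMENTS = 20_000, 100.0, 15.0, 10  # hypothetical WBS setup

def total_cost(rho):
    """One roll-up draw: ELEMENTS normally distributed WBS element costs
    sharing a common risk factor with weight rho (0 = independent)."""
    common = random.gauss(0.0, 1.0)
    return sum(
        MEAN + SDEV * (rho * common + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0))
        for _ in range(ELEMENTS)
    )

sigma_indep = pstdev(total_cost(0.0) for _ in range(N))
sigma_corr = pstdev(total_cost(0.6) for _ in range(N))
print(f"total-cost sigma: {sigma_indep:.1f} independent vs {sigma_corr:.1f} correlated")
```

Assuming independence when the element risks actually share common drivers roughly halves the total-cost spread in this sketch, producing exactly the overly narrow distributions the senior analysts criticized.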
Presenting Effective Cost Risk Analysis Results

The author suggests that government cost analysts develop a standard set of cost risk results that most managers can readily understand, focused on estimating and recommending the "necessary & sufficient" level of management reserve at which to fund the space systems program. Unless the time and quantitative analysis have been completed and the cost risk results are thoroughly understood, it is easier for the majority of cost analysts to explain and justify a 50/50 estimate than an 80/20 estimate. The historical total cost expenditures of analogous completed programs can be used to support and/or justify the most likely estimate for the new system.

‡‡ See the RAND report in reference (3) for further detailed discussion of guidelines for using different cost risk analysis methods.

The cost results for the new system should also include slides that fully disclose the assumptions made on the –

− Technical maturity and producibility readiness levels assessed;
− Limited availability of space-qualified vendors to supply key parts; and
− Potential parts obsolescence, technology refreshes, and/or rework needed within specific subsystems, assemblies, etc.

In addition, the total estimated cost plus the total recommended management reserve should be considered the "suitable" program funds needed to ensure program "success." "Suitable" funds in this context can be defined as the amount estimated for sufficiently reducing the expected overall uncertainty and mitigating all known program risks to an acceptable level, and for covering the costs of implementing possible fallback or contingency plans needed as a hedge to accelerate the program and/or pursue alternative approaches in the event of potential operational gaps in the constellation caused by unexpected launch failures and/or premature on-orbit failures of the predecessor satellites.

The main messages from this section on conveying more effective space systems cost results to acquisition managers are to –

− Present a balanced, complete analysis and do not sensationalize;
− Clearly provide managers with insight into and knowledge of how the results were generated;
− Provide a historical perspective by comparing the estimate with the program costs and performance of past predecessor space systems; and
− Despite using different estimating and assessment methods, be consistent in displaying cost results.
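One simple, hedged way to quantify the management reserve discussion above is to size the reserve as the spread between the most likely estimate and a chosen confidence value, reusing the $111M and $125M figures from the earlier SBR example for illustration only.

```python
# Illustrative reserve sizing using the paper's SBR example values;
# a real reserve would come from the program's own risk analysis.

p50_estimate = 111.0  # $M, 50/50 most likely estimate
p80_estimate = 125.0  # $M, 80 percent confidence value

management_reserve = p80_estimate - p50_estimate
suitable_funding = p50_estimate + management_reserve

print(f"Reserve ${management_reserve:.0f}M -> suitable funding ${suitable_funding:.0f}M")
```

Framed this way, the reserve recommendation is directly traceable to the confidence level management chooses, rather than presented as an opaque percentage.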

Summary

In summary, all the suggested actions described in this paper serve as a reminder of some of the most important roles and responsibilities of government space systems cost analysts in producing more credible cost estimates and more effectively conveying the results to acquisition managers. Government cost analysts should consider –

− Getting consensus from the lead systems engineer and the SMEs on the "best" system baseline design, clearly defined to meet requirements;
− Always challenging the program assumptions on schedule, design heritage, etc.;
− Establishing a link between system cost at the WBS cost element level and the program schedule at the task level;
− Performing sensitivity analyses to measure changes in project, system, subsystem, and/or component costs by varying the values of the key drivers most important to the credibility of the cost estimates; and
− Presenting results to management that include a comparison of estimated costs with the actual program costs of analogous and/or the most recent legacy systems.

In addition, each government cost community should consider establishing and presenting to management a track record of how close their initial estimates came to actual expenditures and/or the final budgets on completed programs.

In a similar vein, and to enable acquisition managers to more easily comprehend the cost results presented, they should consider –

− Judging the credibility of the cost estimates presented by asking how the costs compare with those of similar historical programs;
− Insisting that the government cost community use an easily understandable standard cost risk nomenclature;
− Recommending that the government cost community establish a consistent approach for assessing top-down technical maturity or readiness levels; and,
− Most importantly, being an advocate for program directors to receive the funds necessary to successfully build executable programs.

References

(1) Hura, Myron, Eisman, Mel, et al., Space Capabilities Development: Implications of Past and Future Efforts for Future Programs, RAND Corporation, MG-578-AF, 2007.
(2) Younossi, Obaid, Brancato, Kevin, et al., Improving the Cost Estimation of Space Systems: Past Lessons and Future Recommendations, RAND Corporation, MG-690-AF, 2008.
(3) Arena, Mark, Younossi, Obaid, et al., Impossible Certainty: Cost Risk Analysis for Air Force Systems, RAND Corporation, MG-415-AF, 2006.

