Page 1: The ITEA Journal

The ITEA Journal
The Evolving T&E Infrastructure
Published quarterly by the International Test and Evaluation Association
March 2008
Volume 29, Number 1, pages 1–114


Page 3: The ITEA Journal

The ITEA Journal, March 2008

Volume 29, Number 1

BOARD OF DIRECTORS

John Smith, President
Russell "Rusty" L. Roberts, Vice President
Dr. John B. Foulkes, Secretary
Stephanie Clewer, Treasurer
George M. Axiotis
Scott P. Foisy
Robert T. Fuller
Kirk S. Johnson
George Ryan
Brian M. Simmons
Drexel L. Smith
Mark E. Smith
James R. Tedeschi
John L. Wiley

SENIOR ADVISORY BOARD

Robert J. Arnold, Chair
Brent M. Bennitt
John V. Bolino
Edward R. Greer
George B. Harrison
Dr. Charles E. McQueary
Dr. J. Daniel Stewart
Dr. Marion L. Williams

COMMITTEE CHAIRS

Awards: James A. Tedeschi
Chapter & Individual Membership Development: Mark E. Smith
Communications: Lawrence Camacho
Corporate Development: Drexel L. Smith
Education: Scott P. Foisy
Elections: Gary L. Bridgewater
Exhibits: Douglas D. Messer
Events: Richard Shelley
Historian: Dr. Michael Gorn
Publications: Dr. J. Michael Barton
Strategic Planning: Matthew T. Reynolds
Technology: George J. Rumford
Ways and Means: Michael A. Schall

STAFF

Executive Director: Lori Tremmel Freeman
Assistant Director: Eileen G. Redd
Managing Editor, ITEA Journal: Rita A. Janssen
Office Manager: Jean Shivar
Coordinator, Office Support and Services: Bonnie Schendell

ITEA BEST PAPERS

17 P-8A "Poseidon" Collaborative Simulation and Stimulation for Electromagnetic Environmental Effects Test & Evaluation ..................................................... Paul Achtellik

23 Design of the Ballistic Missile Defense System Hardware-in-the-Loop ....... James Buford,

John Pate, and Bernard Vatz

29 Innovative Technologies and Techniques for In-Situ Test and Evaluation of Small Caliber Munitions ............................... Andre Lovas, T. Gordon Brown, and Thomas Harkins

TECHNICAL ARTICLES

37 Test and Evaluation: Department of Defense and Private-Sector Resources—Assessing and Resolving the Modernization Paradox ....................................................... Drexel L. Smith

45 Testing and Training 2020: From Stovepipes to Collaborative Enterprises .... Jim Sebolka,

David Grow, and Bo Tye

51 Evolving Enterprise Infrastructure for Model & Simulation-Based Testing of Net-Centric Systems ... Steven Bridges, Bernard P. Zeigler, Ph.D., James Nutaro, Ph.D., Dane Hall,

Tom Callaway, and Dale Fulton

63 Towards Better Control of Information Assurance Assessments in Exercise Settings ........David J. Aland

67 Best Practices for Developmental Testing of Modern, Complex Munitions ......................Capt Joshua Stults

DEPARTMENTS

1 PRESIDENT’S CORNER

3 GUEST EDITORIAL: 2003 NATIONAL DEFENSE AUTHORIZATION ACT—A U.S. ARMY DEVELOPMENTAL TEST COMMAND PERSPECTIVE ... James B. Johnson, U.S. Army Developmental Test Command, Aberdeen Proving Ground, Maryland

7 TECHNOTES

9 INSIDE THE BELTWAY

11 FEATURED CAPABILITY: PHYSICAL ARCHITECTURE FOR VIRTUAL IMMERSION LEARNING IN LEADERSHIP

DEVELOPMENT AND EMERGENCY OPERATIONS ........................................... Sonia S. Cowen, Ph.D., Daniels Leadership Center, New Mexico Military Institute, Roswell, New Mexico

14 HISTORICAL PERSPECTIVE

76 T&E NEWS

89 2008 DIRECTORY OF ITEA CORPORATE MEMBERS/CAPABILITIES

113 CHAPTER DIRECTORY

114 ARTICLE SUBMISSION GUIDELINES


ON THE COVER: Disruptive technology arises not as a linear evolution of the existing state of the art but as a non-intuitive leap in thinking and capability—the ultimate manifestation of innovation—and its impact is often far beyond that expected. Wireless devices, robotics, digital technology and the Internet, biotechnology, hydrogen fuel cells, nanotechnology, and nuclear weapons are all examples of past (or potentially future) disruptive technology. Some of them have brought about entirely new industries and changed society. Cultivating innovation, preparing for and responding to breakthrough ideas and capabilities, closing gaps between the research and test communities, and more rapidly transferring technology and capability from science and technology programs to T&E are challenges and opportunities for the test community. (Photo of unmanned robotic system undergoing testing in the deep snows of Fort Greely, Alaska, at the Cold Regions Test Center courtesy of Yuma Proving Ground Public Affairs; hydrogen fuel cell photo courtesy of Matt Stiveson, National Renewable Energy Laboratory, Golden, Colorado; scanning tunneling microscope image courtesy of J. A. Stroscio, A. Davies, D. T. Pierce, and R. J. Celotta, National Institute of Standards and Technology. Cover design courtesy of Headquarters, U.S. Army DTC, Aberdeen Proving Ground, MD.)

• ITEA Headquarters: 4400 Fair Lakes Court, Suite 104, Fairfax, Virginia 22033-3899; Tel: (703) 631-6220; Fax: (703) 631-6221; E-mail: [email protected]; Web site: http://www.itea.org.

• ITEA is a not-for-profit international association founded in 1980 to further the development and exchange of technical information in the field of test and evaluation.

• The ITEA Journal (ISSN 1054-0229) is published quarterly by the International Test and Evaluation Association at 4400 Fair Lakes Court, Suite 104, Fairfax, Virginia 22033-3899. Single issue cover price for The ITEA Journal is $20. ITEA membership dues are $45 for individuals, $25 for full-time students, and $745 for corporations. Annual dues include a one-year subscription to The ITEA Journal. The annual subscription rate for libraries and other organizations providing timely reference material to groups is $60. All overseas mail (air mail or AOA) requires an additional $20. The ITEA Journal serves its readers as a forum for the presentation and discussion of issues related to test and evaluation. All articles reflect the individual views of the authors and not official points of view adopted by ITEA or the organizations with which the authors are affiliated.

© Copyright 2007/2008, International Test and Evaluation Association, All Rights Reserved. Copyright is not claimed in the portions of this work written by U.S. government employees within the scope of their official duties. Reproduction in whole or in part prohibited except by permission of the publisher.

POSTMASTER: Send address changes to: ITEA, 4400 Fair Lakes Court, Suite 104, Fairfax, Virginia 22033-3899.


Page 4: The ITEA Journal

2003 National Defense Authorization Act—A U.S. Army Developmental Test Command Perspective

James B. Johnson

U.S. Army Developmental Test Command, Aberdeen Proving Ground, Maryland

The U.S. Army Developmental Test Command (DTC), a subordinate command activity of the Army Test and Evaluation Command (ATEC), is charged to plan, conduct, and report developmental and production tests (including Title 10, United States Code, Live Fire; virtual; simulated; and other tests), across the full spectrum of environments, in addition to verifying the safety of Army systems. To fulfill this mission, DTC maintains and operates seven test centers throughout the continental United States. Seven sites within these test centers are designated as Major Range and Test Facilities Base (MRTFB) activities.¹

The Last Decade. During the 1990s, DTC's institutional operating budgets reflected the downward profile experienced by most activities within the Department of Defense (DoD) following the First Gulf War. Submissions for the Army's Program Objective Memorandums (POMs) reflected DTC's concern that, as steward of the bulk of the Army's MRTFB activities, it would violate the intent of DoD Directive 3200-11, Major Range and Test Facility Base, which specifies that activities "shall be sized, operated, and maintained primarily for DoD T&E (Test and Evaluation) support missions." Specifically, institutional funding was insufficient to adequately support the infrastructure and operating costs required to provide the testing services our DoD customers required. To compensate for this funding shortfall, DTC ranges were forced to pass the critical portion of this institutional shortfall on to those customers. At the time, DTC also was "making ends meet" with the help of our customers; Program and Product Managers (PMs) were exceptionally forthcoming in making capital investments in DTC range instrumentation and facilities in cases where required capabilities did not exist and DTC was unable to fund these investments.

Public Law 107-314. In December 2002, Congress enacted the Bob Stump National Defense Authorization Act for Fiscal Year 2003 (2003 NDAA). Among its provisions, the Act sought to ensure the "institutional funding of test and evaluation facilities" by FY06. The provisions of the Act directed that

• Both institutional and overhead costs of facilities or resources within the Major Range and Test Facility Base shall be fully funded through the major investment accounts of the military departments, the DoD Central Test and Evaluation Investment Program (CTEIP) account, and other appropriate accounts, and

• Charges to DoD users of MRTFB activities are limited to not more than the direct costs of use.

The provisions of the Act and the details of its implementation were the subject of much discussion in the Army Test and Evaluation community, both from the standpoint of legality—what practices were and were not permitted—as well as fiscal policy and funding. In particular, DTC and its seven MRTFB activities were deeply involved in the discussions and decisions, given the huge potential the law had for impacting test operations at the affected sites. As policy emerged, it became clear that the former practices of passing on institutional costs to customers, as well as garnering capital investments from the customer base, were no longer viable means of replacing institutional shortfalls. Foremost, a clear understanding throughout the command of what costs appropriately could be passed on to a customer, as well as what investments by a customer were legal, was needed. To answer these questions, ATEC obtained the assistance of the Deputy Assistant Secretary of the Army for Cost and Economics (DASA-CE) in developing

James B. Johnson

Guest Editorial. ITEA Journal 2008; 29: 3–5

Copyright © 2008 by the International Test and Evaluation Association


Page 5: The ITEA Journal

common definitions for direct versus indirect costs to ensure compliance and consistency. These definitions became of particular importance not only in considering appropriate customer charges, but also in addressing the issue of support to multiple customers as it relates to investments on DTC's ranges.

DTC's FY06-11 POM submission reflected those unfunded requirements essential for compliance with the 2003 NDAA. Recognizing the Army's responsibility to meet the provisions of the law, the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA[ALT]) responded by realigning internal funds from the Army program executive officer and project manager organizations into the DTC operating budget. In addition, ASA(ALT) served as the Army advocate for realigning funds from other DoD agencies that use DTC's facilities. Beginning, as required, in FY06, DTC began operating under the requirements of the 2003 NDAA.

Post-2003 NDAA Operations. One of the most marked changes after the law took effect was the receipt of increased institutional funding. This increased funding provided DTC numerous advantages. Among these was greater flexibility, and as a result, increased economy in letting contracts, because funding now was available "up front." It also permitted the routine maintenance that in the past had often been neglected because necessary funding first had to be obtained by reimbursement. Another advantage was that it provided DTC's ranges with a remedy for an issue often viewed as unfair by test customers but forced on us by reimbursable funding practices: if a major item of test instrumentation broke down during a customer test, the costs of repair or replacement historically had been passed on to that customer. Yet another positive change effected by the law's introduction was a decline in test costs; DTC's customers have, on average, paid markedly less for services since the beginning of FY06, when the law went into effect.

While DTC's experiences operating under the 2003 NDAA for the past two full fiscal years have generally been positive, two key challenges that affect test operations have emerged: (1) workload that exceeds institutional and overhead budget guidance, and (2) customers who wish to invest in DTC range instrumentation and facilities.

Budget guidance at the start of each fiscal year fully funds the institutional and overhead costs of test center operations and supports execution of a specific level of direct test effort. Issues ensue when that level of effort is exceeded. To fully understand why, it must be realized that all direct labor effort carries some indirect components, for example, mandatory training. As such, any increase to the DTC workforce—not uncommon in reacting to the Army's operational needs—results in an unanticipated and, hence, unfunded burden on the institutional/overhead operating accounts. Since, to comply with the law, these overhead labor costs cannot be charged to a DoD test customer, test centers must either delay test needs to the start of the next fiscal year or identify some other budget requirement that can be reduced or deferred to absorb the unanticipated institutional labor bill. Because few critical test needs can be delayed when supporting an Army at war, we have been forced to defer much-needed technology improvements for our ranges until later. If this trend continues, our test centers will become hollow—less capable of providing state-of-the-art and timely test services. In short, no mechanism exists within the process to recoup year-of-execution increases to overhead costs that cannot be anticipated or passed on to the customer.

As previously mentioned, prior to the 2003 NDAA, test customers made investments in DTC range instrumentation and facilities when needed capabilities did not exist. Such investments currently are not permitted under the law if the resulting capability creates for the test center an asset that subsequently may be used to support multiple test customers. While not unknown, it is rare that a capability or facility would be used exclusively to support a single test customer. In addition, should the investment prove to be acceptable (i.e., it supports only a single customer), DTC then must consider the downstream costs of accepting that capability into the test range inventory. Since, for the most part, it will require out-year maintenance, sustainment, and revitalization costs, it becomes yet another asset competing for scarce institutional funding.

As of this writing, the latter concern soon may be remedied. Language has been included in the current draft of the DoD Financial Management Regulation, Volume IIA, Chapter 12, Major Range and Test Facilities, to allow, "by mutual agreement, investments in new or existing T&E facilities … in whole or in part, by one or more DoD customers of an MRTFB … ." Such agreements will "… delineate responsibilities for funding, staffing, operating, and maintaining the facility and must be approved by all parties … ."

As we proceed through the current decade and into the next, anticipated funding profiles for DTC operating and investment budgets reflect a pattern similar to that of the 1990s. This historical profile alerts DTC to the challenge of complying with the provisions of the 2003 NDAA while steadfastly maintaining its mission of providing world-class testing capabilities for our DoD test customers. %

Johnson

4 ITEA Journal

Page 6: The ITEA Journal

JAMES B. JOHNSON was appointed to the Senior Executive Service and assumed his present position as executive director at the U.S. Army Developmental Test Command, Aberdeen Proving Ground, Maryland, in 2007. He has management responsibility for the Command's test and technology mission and all associated resources. He is responsible for planning, executing, and reporting 1,700 tests supporting more than 300 weapons programs annually, with a total budget of $2 billion and a workforce of more than 8,000 employees, and for ensuring operational readiness of the Army's developmental test range infrastructure. Prior to this position, Johnson served as director of the U.S. Army Redstone Technical Test Center (RTTC), Developmental Test Command.

He previously served multiple assignments in the Missile Defense Agency (MDA), Ground-Based Midcourse Defense Joint Program Office in Huntsville, Alabama, and was director of Test Operations; deputy product manager of the Test, Training and Exercise Capability (TTEC) Product Office; and chief of the Test Products Division in TTEC.

Prior to his MDA service, Johnson served at RTTC as team leader of the Radar Systems Group. He began his government career with the Aviation and Missile Research, Development and Engineering Center (AMRDEC) located at Redstone Arsenal, Alabama.

Johnson graduated with a bachelor's degree in electrical engineering from the University of Alabama in Huntsville (UAH) and a master's degree in systems engineering, also from UAH. He also holds a master of strategic studies degree from the Army War College. Mr. Johnson is a graduate of the Army Management Staff College and the Advanced Program Management Course. Johnson's awards include the Army Achievement Medal for Civilian Service, the Edward H. Gamble Award, the TECOM Professional Certificate, the AMRDEC Extraordinary Performance Award, and a U.S. patent. E-mail: [email protected]

Endnotes
¹ Of the Army's nine Major Range and Test Facilities Base (MRTFB) activities, seven are Developmental Test Command (DTC) components: White Sands Test Center; Yuma Test Center; Cold Regions Test Center; Tropic Regions Test Center; West Desert Test Center; Aberdeen Test Center; and the Electronic Proving Ground.


Page 7: The ITEA Journal

A Dramatic Approach to High-Power Fiber Lasers

Burke Nelson, Ph.D., and Sami Shakir, Ph.D.

Northrop Grumman Information Technology, Albuquerque, New Mexico

The expansion of the market share of fiber lasers for industrial applications is fueled by some of the significant advantages of fiber lasers. These advantages include high efficiency, compactness, good beam quality, and low cost of maintenance and operation. For military applications, in which power levels of 100 kW or more are necessary, fiber lasers are beginning to show promise as credible candidates. From a military standpoint, lasers form a class of weapons called Directed Energy Weapons. Laser beams travel at the speed of light, which makes their target effects instantaneous. The power level attainable with fiber lasers is continually increasing. We have a current contract with the U.S. Navy under which we will passively phase two 1-kW Nufern amplifiers. We will be ready for demonstrations on a test range within a few years. We do not foresee any insurmountable barriers to high power, the time to a prototype demonstration being primarily dependent on available funding.

What's a fiber laser?
A fiber laser utilizes very thin glass fibers doped with special materials to convert poor-quality laser light generated by diode lasers to a high-quality laser light beam. The fiber waveguide, which is composed of a glass core region doped with a rare-earth material such as ytterbium (Yb) surrounded by a regular glass cladding region, confines the laser beam to the fiber core region where laser gain and amplification take place (Figure 1). Depending on the type of doping material used, the laser operating wavelength can range from 0.8 μm to 2.3 μm.

Why fiber lasers?
Fiber lasers are driven by electrical power. Modern aircraft, naval vessels, and ground combat vehicles incorporate significant amounts of electrical power. Because the laser runs on that electrical power, the only logistical supply required is fuel. While there are other electrically driven lasers, fiber lasers have significantly higher efficiencies than bulk solid-state lasers. Fiber lasers offer better waste-heat management and are relatively immune to the deleterious effect of heat on the beam quality of the laser. (Beam quality is a measure of how well a laser beam can be focused at a target.) In contrast, the major impediment to bulk solid-state lasers is the sensitivity of the bulk laser materials to temperature gradients, which causes the beam quality to deteriorate. Fiber lasers also have the advantage that the system can be monolithic, in the sense that the laser beam is confined within the flexible fibers of the system and requires no alignment or free-space bulky optics such as lenses and mirrors. This is a significant advantage in harsh military environments.
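To put a rough number on those efficiency and waste-heat claims, the unavoidable heat load of a fiber laser is set by its quantum defect: the fraction of absorbed pump power converted to heat is roughly 1 − λ_pump/λ_signal. The short sketch below is illustrative only; the 976 nm pump and 1064 nm signal wavelengths are typical values assumed for a ytterbium-doped fiber, not figures taken from this article.

# Illustrative only: quantum-defect heating for a typical Yb-doped fiber laser.
# The pump and signal wavelengths are assumed values, not from the article.

def quantum_defect_heat_fraction(pump_nm: float, signal_nm: float) -> float:
    """Fraction of absorbed pump power converted to heat (ideal case)."""
    return 1.0 - pump_nm / signal_nm

pump_nm = 976.0     # assumed diode pump wavelength for Yb
signal_nm = 1064.0  # assumed Yb fiber laser output wavelength

print(f"Quantum-defect heat fraction: {quantum_defect_heat_fraction(pump_nm, signal_nm):.1%}")

Under these assumed wavelengths, only about 8 percent of the absorbed pump power must be removed as heat, and that heat is spread along the whole length of the fiber, which is one reason fiber lasers tolerate thermal loading better than bulk solid-state media.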

Passive phasing's demonstrated success
Passive phasing is a coherent phasing process in which an array of high-power fiber amplifiers is locked and phased automatically by the system itself. A small fraction of the output beams of the array is sampled and fed back into a single-mode feedback fiber in a ring configuration,

Figure 1. Fiber laser. The fiber core that guides the laser beam (red) is pumped by diode-pump beams (pink). A portion of the laser beam is reflected back by a reflector (Fiber Bragg Reflector, FBG) to form a fiber laser. The doped core forms the gain medium, while the fiber and the end reflectors (FBG) form the resonator. Unlike conventional lasers, the fiber and end reflectors form an all-fiber monolithic laser, which is a significant advantage for fiber lasers.

TechNotes. ITEA Journal 2008; 29: 7–8

Copyright © 2008 by the International Test and Evaluation Association


Page 8: The ITEA Journal

as shown in Figure 2. This concept is covered by U.S. Patent 7,130,113. The feedback signal is split equally, and each portion serves as the input signal for one of the amplifiers in the system. Since the system wavelength is not fixed, the system runs at a wavelength that has the highest feedback signal. By design, the feedback signal is highest when the beams are in phase. Therefore, this passive phasing approach locks the fiber amplifiers to the same wavelengths and also causes the output beams to have the same phase. Frequency locking and phasing are necessary requirements for effective coherent combining of laser beams.

The effect of passive phasing on the performance of the system is shown in a dramatic way in Figure 3, which compares the intensity profile when the feedback loop is blocked (i.e., no passive phasing) to the case when it is restored.
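The contrast in Figure 3 follows from how optical fields add. If N equal-power beams arrive with random relative phases, the expected on-axis intensity scales as N; if they are locked to a common phase, the fields add coherently and the on-axis peak scales as N². The short sketch below is a generic illustration of that scaling for a 16-beam array; it is not a model of the patented feedback hardware described here.

# Generic illustration: on-axis intensity of N combined beams, phased
# (coherent) versus unphased (random phases). Not a model of the
# passive-phasing hardware described in the article.
import cmath
import random

N = 16          # number of fiber amplifiers in the array
TRIALS = 2000   # random-phase trials to average over

# Coherent (phased): all fields add in phase, so the peak scales as N**2.
phased_intensity = abs(sum(cmath.exp(0j) for _ in range(N))) ** 2

def random_phase_intensity(n: int) -> float:
    """On-axis intensity when each beam arrives with a random phase."""
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) for _ in range(n))
    return abs(field) ** 2

unphased_avg = sum(random_phase_intensity(N) for _ in range(TRIALS)) / TRIALS

print(f"Phased peak intensity : {phased_intensity:.0f} (scales as N^2 = {N**2})")
print(f"Unphased average      : {unphased_avg:.1f} (scales as N = {N})")

For 16 fibers the ideal gain in central intensity over the unphased case is therefore a factor of 16, which is the kind of jump the blocked-versus-restored feedback comparison is intended to show.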

Having successfully phased 16 fiber lasers, and with our near-term plans to phase two 1-kW amplifiers, it is time to plan prototype demonstrations of high-power, high-intensity lasers at a range test facility. %

BURKE NELSON is the deputy program manager of the Airborne Laser Advisory and Assistance Services contract. He is also program manager of a U.S. Navy contract, jointly with Northrop Grumman Electronic Systems, to develop fiber lasers for weapon system applications. His areas of expertise include chemical and solid-state lasers, as well as large-optics applications. Dr. Nelson has been with Northrop Grumman in Albuquerque, New Mexico, for over 22 years. His previous assignments included associate director of Research and Director of Engineering, PerkinElmer, Inc., and executive director of the American Society of Mechanical Engineers (ASME). He was also a congressional fellow for ASME. He has been awarded five U.S. patents, including, most recently, a patent on passive phasing of fiber lasers. He holds a bachelor of science degree in mechanical engineering from Michigan State University, a master of science in engineering degree in aeronautics and astronautics from the University of Washington, and a Ph.D. in materials science from Drexel University. E-mail: [email protected]

DR. SAMI SHAKIR is a senior scientist with Northrop Grumman Information Technology (NGIT). He obtained his Ph.D. in optical sciences from the Optical Sciences Center in Tucson, Arizona, in 1980. Before joining Northrop Grumman, he was a professor at the University of New Mexico until 1986, when he joined R&D Associates, which was later acquired by Northrop Grumman. His interests are high-power solid-state and fiber lasers and beam propagation and control. He is the inventor of the Northrop Grumman passive phasing approach. E-mail: [email protected]

Figure 3. An array of 16 fibers demonstrates the dramatic increase in central intensity provided by passive phasing.

Figure 2. The patented passive feedback drives the output to the highest intensity level.


Page 9: The ITEA Journal

Assuring the Future—How We Gained Access to Additional Radio Spectrum for Flight Testing

John B. Foulkes, Ph.D.
Test Resource Management Center, Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), Arlington, Virginia

In the mid-1980s, spectrum used by the Department of Defense (DoD) test community began to be reallocated for the rapidly growing consumer electronics market. By the mid-1990s, the DoD had lost access to 30 percent of the spectrum used to carry data during testing of aerospace vehicles. At the same time, advances in computer technology and onboard sensor electronics resulted in an exponentially increasing demand for additional spectrum to provide useful test results in real time during test events. The DoD joined with the National Aeronautics and Space Administration (NASA) and the aircraft manufacturers' industry group, the Aerospace and Flight Test Radio Coordinating Council (AFTRCC), to address this shared problem. Operating as the ad hoc Range Spectrum Requirements Working Group (RSRWG), the three partners developed a plan that ultimately led them to the International Telecommunications Union (ITU), an international treaty organization based in Geneva, Switzerland. This challenge required the three partners to use nontraditional approaches to address this issue. This article discusses the working relationships and approaches used to ensure that we successfully addressed the issues of spectrum encroachment.

The Range Spectrum Requirements Working Group (RSRWG) planning process began in 1996 with the development of a three-pronged approach: (a) defend against further losses of spectrum; (b) develop new technologies to more effectively use the spectrum; and (c) devise new approaches and processes, including gaining access to additional spectrum. While most of the plan called for straightforward use of internal Department of Defense (DoD) and interagency processes in Washington, gaining access to additional spectrum presented unique challenges for the test community. This feat would require buy-in of the senior leadership in each of the organizations represented, both government and private industry. The RSRWG spearheaded the effort to implement this plan, which involved foreign governments and ultimately resulted in the consortium's participation in the International Telecommunications Union's (ITU's) international radio frequency spectrum regulatory arm, the World Radiocommunication Conference (WRC). Changes in radio frequency (RF) allocations are tantamount to revisions to an international treaty.

The RSRWG plan called for a globally harmonized RF band or set of bands to allow interoperability of test assets, reduced equipment cost through commonality, global testing, and increased protection against RF spectrum encroachment. The National Aeronautics and Space Administration (NASA) had earlier submitted a proposal to the ITU to consider "spectrum for wideband telemetry in the 3 to 30 Gigahertz (GHz) region" at some future WRC. Such undertakings invoke all the machinery of state, and the process takes years. The RSRWG partners began by giving as many educational briefings on the requirement for additional spectrum as needed to all of the stakeholders in spectrum management, to include the Department of Commerce, the Department of State, and the Federal Communications Commission. Furthermore, the NASA and DoD representatives were responsible for communicating the new requirements to their respective spectrum management offices. For the DoD, that required getting approval from the Director of Spectrum Policy within the Office of the Assistant Secretary of Defense (ASD) for Command, Control, Communications, and Intelligence (C3I) (now ASD for Networks and

Dr. John B. Foulkes

Inside the Beltway. ITEA Journal 2008; 29: 9–10

Copyright © 2008 by the International Test and Evaluation Association


Page 10: The ITEA Journal

Information Integration, or NII) and the spectrum management agencies of the military services.

The road to WRC-2007
The road to WRC is a very long process. The first step is developing a case to justify placing a proposal on the agenda for a future WRC. A proposal first goes to a WRC as a recommendation that it be placed on the agenda for consideration at the following WRC. Typically the WRC convenes every three to four years, and changes are agreed to only if a consensus is reached among the 191 member nations. Therefore, it was necessary to ensure that a sufficient number of nations would support the U.S. telemetry agenda item to ensure consensus. Accordingly, the RSRWG had to develop a plan that allowed it to educate foreign nations. The RSRWG, while representing a powerful U.S.-based coalition, needed to become an international force. As a result, the group teamed with the International Foundation for Telemetering (IFT) to work together to garner the international support necessary to make the essential additional spectrum allocations a reality. Together, the RSRWG and the IFT worked to charter the International Consortium for Telemetry Spectrum (ICTS). The ICTS membership encompasses representatives from most major aircraft manufacturers and military and flight test establishments throughout the world. The goal of the ICTS was to facilitate the development of a set of internationally agreed-upon technical recommendations and implementation alternatives. The sharing of information within the ICTS provided the foundation for the ICTS members to convince their national spectrum managers that the telemetry spectrum proposal for WRC was important to their nation's interest. As a result of RSRWG and ICTS efforts, the United States succeeded in getting the telemetry spectrum proposal approved at WRC-2003 as Agenda Item 1.5 for WRC-2007.

A grassroots effort was required to communicate the details of Agenda Item 1.5 directly to as many member nations as possible prior to the WRC. This was accomplished by informational briefings at regional-level forums. The ITU has divided the world into three regions. Groups of nations within these regions have formed regional organizations. These regional organizations have official standing within the ITU and submit a single consolidated set of positions for their respective organizations. Building support within these regional organizations was one of the keys to success at WRC-2007.

RSRWG members knew that even with extensive grassroots efforts, more work was required to ensure the success of Agenda Item 1.5. Approximately 3,000 delegates from more than 150 countries attend the WRC. Many of these delegates arrive with little knowledge about the agenda items of other delegates. To secure the highest probability of success for Agenda Item 1.5, the ICTS developed an information booth to educate delegates on the initiative.

Because ITU is an international treaty organization, each country's delegation at WRC is led by an ambassador. Another key aspect of the RSRWG's outreach efforts was to brief the leader of the U.S. delegation. A few weeks before the WRC, representatives of the U.S. DoD test ranges led by the TRMC, along with representatives of the commercial aircraft manufacturers, met with the U.S. WRC Ambassador in Washington, D.C., to brief him on WRC Agenda Item 1.5. The Ambassador immediately grasped the significance of the item and remained an effective advocate throughout the duration of the WRC.

WRC-2007—October 22 to November 16, 2007
The years of preparation by the RSRWG partners came to fruition in the four-week period beginning on October 22, 2007, in Geneva. Three representatives of the partnership were members of the U.S. delegation. ICTS colleagues from Germany and France were members of their respective nations' delegations.

Agenda Item 1.5 took 23 of the 26 days of WRC-2007 to make it through the process. In the end, the international telemetering community gained access to substantial amounts of additional bandwidth, including the first globally harmonized band. The band of 5091–5150 MHz is now authorized for aeronautical telemetry in every country in the world. Many regions of the world have access to substantially more bandwidth in addition to the global band. The United States and Canada have the ability to access up to 1.4 GHz of additional spectrum for telemetering applications. Although this may seem like an overly generous amount, a majority of this spectrum is already in use by incumbent users. However, the RF bands approved by the WRC will provide all users with greater flexibility to work within these allocations together with minimal impact on each other.

The success of WRC-2007 Agenda Item 1.5 is clearly due to the strong partnership that led to effective channels of communications and the willingness of senior leadership across government and industry to make a long-term commitment to pursuing a common goal. The 10-year quest for telemetry spectrum ended in a resounding success. %

JOHN B. FOULKES is director, Test Resource Management Center, Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Arlington, Virginia.


Page 11: The ITEA Journal

The Teardrop That Fell From the Sky: Paul Jaray and Automotive Aerodynamics

Prof. Guillaume de Syon

Albright College, Reading, Pennsylvania

"What's the difference between an airship and the Chrysler Airflow?" This question is taken neither from a quiz show nor from the opening line of a joke. The answer, according to engineer Paul Jaray, would be "not a lot, aerodynamically." Paul Jaray's name still appears in many popular works on automobile design yet is often ignored otherwise. Jaray, however, represents a successful crossover not just in the realm of engineering (from airplanes to ground vehicles), but also in that of aesthetics.

Jaray's background (he was born in 1889, one of five children in a Viennese Jewish family) did not predispose him to scientific study. His father was a salesman, but Paul's artistic and mechanical inclinations, as well as the dynamism of Central European higher education before World War I, enabled integrated Jewish families like his to send their children into technical pursuits and other professions. Jaray trained as an engineer in Vienna and chose aerodynamics as his specialty. His first contact with aviation came in 1909, when he witnessed a Blériot flight.

By 1912, the anticipation of war in Europe prompted the fledgling aviation industry to hire new hands. The outbreak of fighting found Jaray employed by the Zeppelin Works in Friedrichshafen, Germany.

Jaray's aerodynamic insights began to assert themselves in World War I. Unfortunately, the Zeppelin Company—where the paternalism and conservatism of its founder was felt in all design realms—did not welcome innovation. Some of Zeppelin's lieutenants did display business sense and even social conscience (by providing subsidized housing for workers, for example), but the brilliant technical minds at Zeppelin, like Claude Dornier, chafed at the constraints. Not that opportunities did not present themselves. War conditions allowed for the transfer of airship patents from Schütte-Lanz, a Zeppelin competitor, to Zeppelin as a means of improving the quality of airships delivered to the German navy. Nonetheless, the shape of airships remained faithful to the pencil-like design, thwarting improvements in speed.

Undeterred, Jaray began studying the best airship form based on a combination of diameter, cross-section, volume, and stress points (Pfeiffer 1935). The systematic study of shapes led him to conclude that teardrop cross-sections (in which increases in capacity were achieved through thickening fuselages rather than lengthening them) offered the best solution. Pencil and tube shapes had to go. It would be several years before this concept was accepted.

Presenting the calculations was one thing, but only actual flight tests could confirm Jaray's assertions. The early results on a 1915 airship model proved disappointing. They yielded a speed lower than projected, and Jaray was ridiculed by pilots for his miscalculations. But after reexamining his work, Jaray realized that the problem was simple: his instructions had not been followed. The propellers, which needed to be changed, were the same as those from the earlier airships. Insisting on a redesign and supervising it himself, Jaray convinced his superiors to retest. His calculations proved to be off again, but this time in his favor. The speed of the airship exceeded 95 miles per hour, better than Jaray's predictions. This delayed success helps explain why Jaray did not get the wind tunnel he had been demanding until 1916, which became essential to the design of two subsequent Zeppelin aircraft built for the military (Kleinheins 1994). By 1917, Jaray had been promoted to supervising engineer, and he oversaw several improved designs in the naval airships completed by the end of the war (Figures 1 and 2). He was also instrumental in the design of postwar transport airships, and his work there influenced the shape of all machines including the Hindenburg long after he had left aeronautical design.

With the end of World War I, Jaray turned his attention to streamlining ground vehicles. Many aviation companies like Zeppelin, no longer sustained by military contracts, pursued auto making and industrial products after the conflict. Until this time, car manufacturing (with the notable exception of the Ford Model T) had involved quasi-artistic

Historical Perspectives. ITEA Journal 2008; 29: 14–16

Copyright © 2008 by the International Test and Evaluation Association


Page 12: The ITEA Journal

craftsmanship (based on individual orders for a chassis, a body, and so on). But new materials—sheet steel and aluminum—changed the equation of auto manufacturing and design. Jaray wedded these materials to his aeronautical concepts starting in 1919 and introduced a streamlined car body in 1921. Jaray also took out a patent on his own streamlined vehicle, although he remained on the Zeppelin payroll.

Jaray had little success with his initial efforts because any aerodynamic solution, as he discovered himself, could not simply rely on redesigning the car's body. Aesthetics also mattered, as did comfort and practicality. His first model—constructed in 1921 but identified as a 1922 Ley T-6 by Zeppelin—looked ridiculous because of its high stance and narrow cross-section. Still, compared with a standard Ley model, it was more fuel efficient and could negotiate climbs better (Curcio 2000). Jaray had taken as a departure point a split airship cross-section, adapted to the requirements of the automobile. But there were serious complications. The chassis had proven a problem during the whole design phase. It was not streamlined, and considerable modifications were necessary to achieve a proper flow between the vehicle and the ground. That said, wind tunnel testing yielded an impressive drag coefficient of .28.
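For readers who want a feel for what a drag coefficient of .28 meant, aerodynamic drag follows F = ½ρv²C_dA, so at a given speed and frontal area the drag force falls in direct proportion to C_d. The comparison below is purely illustrative: the 2.2 m² frontal area, the 100 km/h speed, and the 0.60 coefficient assumed for a conventional upright body of the period are assumptions for the sake of the example, not figures from this article.

# Illustrative drag comparison. The frontal area, speed, air density, and the
# 0.60 "conventional body" coefficient are assumptions, not article data.
RHO = 1.225           # air density at sea level, kg/m^3
AREA_M2 = 2.2         # assumed frontal area, m^2
SPEED_MS = 100 / 3.6  # 100 km/h expressed in m/s

def drag_force_newtons(cd: float) -> float:
    """Aerodynamic drag: F = 0.5 * rho * v^2 * Cd * A."""
    return 0.5 * RHO * SPEED_MS ** 2 * cd * AREA_M2

for label, cd in [("Jaray/Ley prototype", 0.28),
                  ("conventional body (assumed)", 0.60)]:
    print(f"{label:30s} Cd = {cd:.2f}  drag ≈ {drag_force_newtons(cd):4.0f} N")

Cutting C_d roughly in half halves the drag force, and with it the power needed to overcome air resistance at speed, which is consistent with the better fuel economy reported for the prototype.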

Jaray finally left Zeppelin in 1923, and although he was out of a job, he was not out of ideas. He quickly formed a company in Switzerland (where he moved) to promote his automotive designs. He also set about obtaining a patent for the most promising market, the United States. The timing seemed right. The streamline shapes derived from wartime aerodynamics had by then jumped into the realm of culture, resulting in designs for commercial goods that combined the quest for speed with an aesthetic that incorporated functionality. Popular culture had assimilated the teardrop shape advanced by Jaray. Indeed, the interwar years saw an unimaginable number of attempts at incorporating the teardrop into designs ranging from cars to kitchen utensils. In the United States, as historians of the "Airstream" trailer have noted, such eminent minds as Raymond Loewy and Glenn Curtiss believed, somewhat naively, that the teardrop held the solution to all problems of drag (Burckhart and Hunt 2000).

Despite such popularity, the only firm willing to transform Jaray's ideas into production automobiles was the Czechoslovak Republic's Tatra Company. (Others had built test vehicles but never brought them to market.) In 1934, Tatra introduced a V-8 model that included pontoon panels, integrated headlamps,

Figure 1. LZ120, one of Jaray's projects, under construction in Friedrichshafen, Germany (Astra, reproduced with permission)

Figure 2. A 1920 postcard showing one of Jaray's last Zeppelin projects, the LZ120 transport: the teardrop shape is emblematic of Jaray's work (Astra, reproduced with permission)


Page 13: The ITEA Journal

and a tapered rear end. Its drag coefficient (.36) made it one of the most aerodynamic mass-produced cars ever conceived (de Noblet 1993). Yet the combination of political and economic turmoil of the 1930s, as well as the Tatra's cost, made it a short-lived experiment. Still, it inspired others to emulate it.

As for the American audience, although Jaray had established a company to market his hard-earned 1927 patent on aerodynamic streamlining (it took almost five years to obtain), he was dismayed to discover that Chrysler brought out its Airflow model in 1934 without any acknowledgment of his work. By then, Jaray was far from alone in advocating such approaches to automobile design. Norman Bel Geddes, for one, was particularly famous for his own models. But Jaray's work was among the best known in automotive circles and publications. His lawsuit in 1935 against Chrysler was momentous, but not very profitable. Chrysler agreed to pay Jaray's company just $5,000 in damages.

In Germany, Austrian-born Ferdinand Porsche also realized, when examining pictures of the Chrysler Airflow, the Peugeot 402, and of course the Tatra, that Jaray was on the right track. What was missing, however, was affordability and practicality. The Devil's pact that Porsche made with Adolf Hitler resulted in the Volkswagen Bug, a car familiar now, but one that looked radically different from typical small cars of that era.

Jaray led a quiet life after World War II, focusing on consulting for the automobile industry. He died in 1974. As one automobile historian summarized his legacy, it was not that he was responsible for specific details of aerodynamics and streamlining; many others, like him, offered these. Jaray's main contribution was his insistence that all production automobiles incorporate aerodynamics into their designs (Figure 3) (Sloniger 1975). His work also represents a symbolic link, not just between aeronautics and ground vehicles, but also between formal engineering practices and culturally oriented design. %

GUILLAUME DE SYON teaches history at Albright College in Reading, Pennsylvania. He is the author of Zeppelin! Germany and the Airship, 1900–1939, recently issued in paperback by the Johns Hopkins University Press. E-mail: [email protected]

References
Burckhart, B., and D. Hunt. 2000. Airstream: The History of the Land Air Yacht. San Francisco: Chronicle Books. p. 60.

Curcio, V. 2000. Chrysler: The Life and Times of an Automotive Genius. New York: Oxford University Press. p. 527.

Juchet, Gaston. 1993. "Car Design." In Industrial Design, edited by Jocelyn de Noblet.

Kleinheins, P. 1994. LZ 120 "Bodensee" and LZ 121 "Nordstern". Friedrichshafen: Zeppelin Museum. pp. 26–27.

Pfeiffer, E. A. 1935. Fahren und Fliegen. Ein Buch für alle von Auto, Flugzeug, Zeppelin. Stuttgart: Franckh'sche Verlagshandlung. pp. 82–83.

Sloniger, J. 1975. "The Slippery Shapes of Paul Jaray." Automobile Quarterly XIII, p. 3.

Figure 3. Jaray's automobile for one, as demonstrated in Switzerland (Corbis, reproduced with permission)


Page 14: The ITEA Journal

P-8A "Poseidon" Collaborative Simulation and Stimulation for Electromagnetic Environmental Effects Test & Evaluation

Paul Achtellik

Naval Air Systems Command,

Integrated Battlespace Simulation and Test Department,

Integrated Combat Environments Division,

Electromagnetic Compatibility Branch, Patuxent River, Maryland

Over the past several decades, technological advances have provided the Naval Air Systems Command (NAVAIR) with exciting opportunities while creating significant challenges for those who design, test, and operate the complex mission systems found on today's war-fighting aircraft. The responses to these challenges are well underway and began with innovative planning and cost-wise construction of various next-generation facilities, conceptual planning of integrated and extensible network infrastructures, and the insistence on collaborative engineering across all phases of the acquisition life cycle. Today, the challenge continues and, in many aspects, has become even more difficult, stretching our fiscal, technological, and personnel resources to their limits. This article addresses one of the more difficult aspects of today's challenges: Conducting Ground-Based Full Spectrum Test & Evaluation on Next-Generation Systems.

Key words: advanced test facilities; complex operational systems; electromagnetic compatibility; electromagnetic environmental effects (E3); network-centric warfare; realistic mission environments; simulator/stimulator testing labs.

The Naval Air Systems Command (NAVAIR) has many robust, state-of-the-art test and evaluation (T&E) facilities that evaluate entire systems before significant decisions are made to deliver some of the world's most advanced weapons systems into the hands of our sailors and marines. Advanced Installed Systems Test Facilities, managed and operated by the Integrated Battlespace Simulation and Test (IBST) Department, provide realistic ground-based test environments during various phases of systems development to identify and reduce risks prior to more costly and rigorous flight-test phases. A multitude of potential risks associated with overall system performance, personnel safety, and intra-system electromagnetic compatibility are identified during all phases of system development in a scientifically controlled environment through the use of advanced simulation and stimulation techniques. Test results provide critical data to developers and program managers well before important program milestone decisions, and provide insight into how our next-generation systems will function in joint and coalition mission threads and future battle space environments. Facilities such as the Air Combat Environment Test and Evaluation Facility (ACETEF), the Surface/Aviation Interoperability Laboratory (SAIL), the Integrated Battlespace Arena, and a variety of advanced electromagnetic environmental effects (E3) facilities were purposely designed to facilitate the immersion of installed systems in an environment that can repeatedly replicate realistic mission environments and provide detailed data to evaluate potential system effectiveness during actual missions. Simulators and stimulators are designed to provide realistic Electronic Warfare (EW) threat environments, authentic Global Positioning System (GPS) satellite signals, friendly and hostile communications and data link signals, and accurate electromagnetic environmental effects in a

This article was selected as the ITEA Best Paper—First Place at the Annual ITEA International Symposium, November 12–15, 2007, Lihue, Hawaii.

ITEA Best Paper. ITEA Journal 2008; 29: 17–21

Copyright © 2008 by the International Test and Evaluation Association


Page 15: The ITEA Journal

scripted, realistic, and cohesive test event that replicates any level of detail desired. Central to these test events are models, such as the Joint Integrated Mission Model (JIMM) and the Next Generation Threat System, that "set the stage" and drive computer simulations and facility stimulator hardware, sharing necessary data through standardized interfaces. From single-threaded, focused test vignettes to fully integrated wartime scenarios, the laboratories and facilities in NAVAIR are capable of emulating a wide range of realistic environments in a live, virtual, or constructive manner. As Major Range Test Facility Base (MRTFB) unique national assets, these advanced capabilities support NAVAIR and U.S. Navy testing, but are also available to support all joint-service programs.

A central component of advanced ground test capability, the Advanced Systems Integration Laboratory (ASIL) is a radio frequency (RF)-shielded anechoic chamber measuring 180 ft. × 180 ft. × 60 ft. with over one hundred thousand square feet of RF-absorbing material. This chamber provides "the stage" for some of the most advanced test laboratories, distributed simulation and stimulation hardware and software, and fully integrated aircraft and facility instrumentation components. The resulting simulated environment is capable of providing test articles with virtual, scripted mission scenarios that provide flight-like realism to test the complex suite of communication, navigation, identification, and mission systems.

As the complexity of tomorrow's systems increases, so does the requirement for research, development, and test and evaluation facilities to provide matching levels of complexity to produce realistic testing environments. Advanced weapons systems, such as the P-8A "Poseidon" and the F-35 "Lightning II," boast unparalleled intra-system workings and will demand integrated testing methodologies never before imagined. To illustrate the challenges and their potential solutions, we look at the early stages of test planning for the P-8A "Poseidon," focusing on the new complexity required for what was once straightforward E3 testing.

Advanced electromagnetic compatibility testing for next-generation multi-mission maritime aircraft

The P-8A "Poseidon" Multi-Mission Maritime Aircraft will become the newest addition to the U.S. Navy's airborne surveillance and reconnaissance arsenal, bringing unparalleled capabilities and complexities to the future of naval aviation (Figure 1). A cornerstone of the Navy's ongoing transformation in naval war-fighting doctrine, the P-8A brings forward-looking operational concepts of jointness, interoperability, and full-spectrum dominance of sea, air, space, and information domains to its primary mission.

Keys to achieving full spectrum dominance are information superiority and operations, through the application of network-centric warfare. Information, information processing, and communications networks provide the core of every military activity, and sharing this information seamlessly through robust communication networks that provide common operational and tactical pictures to naval commanders is crucial to the Navy's effectiveness in supporting national interests. The P-8A will be a major airborne asset providing intelligence, surveillance, and reconnaissance information; information processing; and communications in network-centric warfare.

Testing advanced systems

The challenges of testing such a complex collection of systems and subsystems are daunting, considering the interdependencies and interrelationships of each of the aircraft's mission systems. These challenges are combined with rigorous intrasystem electromagnetic compatibility (EMC) compliance requirements (Military Standard MIL-STD-464A 2002) and will demand a great deal of collaboration and coordination among and across organizational boundaries, facilities, and test phases. This level of integrated testing is the reason NAVAIR needs such advanced T&E facilities, and while the facility's architecture can provide critical tools, the collaboration of the facility's workforce becomes equally critical to meaningful testing.

Figure 1. P-8A "Poseidon"


Page 16: The ITEA Journal

The P-8A's operational environment will be a complex and adaptive blend of sensors, shooters, Command and Control assets, and data links; in essence, a collection of nested systems and subsystems operating in unison. To properly test the effectiveness of such advanced weapons systems, the entire aircraft must be stimulated as it would be in an actual mission environment. Stimulating only a few mission systems leaves the remainder of the aircraft's integrated systems in a static state and represents unrealistic mission profiles. Stimulating only a portion of the mission systems also allows little chance of identifying adverse electromagnetic interactions.

As an example, the Mission Computing and Display System (MCDS) (Figure 2) requires a blended GPS and Air Data Inertial Reference Unit (ADIRU) input for proper operation. The GPS/ADIRU can be energized, but without stimulating these systems with valid signals and data, the GPS will "search the skies" and be unable to calculate a position. The GPS receivers must have valid satellite and positional data that agrees with the latitude and longitude entered into the ADIRU; anything less will result in immediate ambiguities within the overall P-8A navigation system, with unforeseeable complications for the MCDS and mission systems.
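A minimal sketch of the kind of pre-run consistency check this implies is shown below: the position scripted into the GPS satellite simulator must agree with the position initialized in the ADIRU before a scenario is started. The function names, tolerance, and coordinates are hypothetical illustrations of the idea; they are not part of any NAVAIR or P-8A software.

# Hypothetical pre-run check: the position scripted into the GPS satellite
# simulator must agree with the position initialized in the ADIRU, otherwise
# the blended navigation solution will be ambiguous from the first frame.
# Names, tolerance, and coordinates are illustrative, not from any NAVAIR tool.
import math

EARTH_RADIUS_M = 6_371_000.0
MAX_INITIAL_MISMATCH_M = 100.0  # assumed tolerance for starting a run

def ground_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def ready_to_start(gps_scenario_pos, adiru_init_pos):
    """True if the simulated GPS position and the ADIRU initialization agree."""
    mismatch = ground_distance_m(*gps_scenario_pos, *adiru_init_pos)
    return mismatch <= MAX_INITIAL_MISMATCH_M

# Example: scenario and ADIRU initialized to the same point, then mismatched.
print(ready_to_start((38.28, -76.40), (38.28, -76.40)))   # True
print(ready_to_start((38.28, -76.40), (38.30, -76.40)))   # False, about 2.2 km apart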

Various stimulators and simulators are required to exercise systems like those on the P-8A. In facilities like the ACETEF, many advanced electronic combat stimulation capabilities are co-located with the chambers and test assets, while others can be remotely networked to support testing. For example, the SAIL has remote connections via fiber optics to provide acoustic and RF ship data links to aircraft under test. Other Joint Service capabilities can be linked and utilized as needed. Table 1 lists some of the facility's current capabilities.

IBST simulators, stimulators, and laboratories are integrated into a single virtual dynamic environment using JIMM. JIMM becomes the executive run-time controller for the integrated assets and provides controlled parallel simulation events, using advanced multi-threading processes, to maintain a fully repeatable ordering of events to all interfaced stimulators. The aircraft data bus is instrumented and the data flow

Figure 2. Mission computing and display system

Table 1. IBST simulation/stimulation laboratories supporting the P-8A test

Automated Identification Friend or Foe (IFF) Test Set: Simulates the SIF modes 1, 2, 3, C, and 4; two operating modes, interrogation mode and transponder mode.

Multiple Link Test and Training Tool: Full network simulation of Link 11 and Link 16 data links; has the capability to simulate any combination of tactical digital information links simultaneously.

Strategic Data Link System (SDLS): A multi-channel UHF satellite communications (SATCOM)/line-of-sight radio system.

GPS Test Equipment (GPS/SPIRENT): Simulates a constellation of up to 12 satellites in both L1 and L2; the system under test can be placed anywhere and at any time.

Advanced Multiple Environment Simulator (AMES) III: A dynamic RF threat simulator capable of generating complex radar threat environments.

Infrared Sensor Stimulator: Designed to support the design, development, integration, and testing of infrared electro-optical sensor systems.

Joint Communications Simulator (JCS): Produces motion, range, and direction of arrival for hundreds of independent high-fidelity CNI emitters.

Surface/Aviation Interoperability Lab (SAIL): Provides tactical common data link and multiple sonobuoy signals.


The aircraft data bus is instrumented, and the data flow is time-tagged and captured to provide before-and-after comparison of data processed by the P-8A. This continual real-time feedback allows for detailed post-test analysis of obvious and not-so-obvious adverse intrasystem EMC interactions. In this manner, an intrasystem EMC test of the P-8A can be efficiently conducted while the mission systems and subsystems are artificially immersed in "virtual flight" with relevance to anticipated operational missions.
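In code form, the before-and-after comparison described above amounts to pairing time-tagged records from a reference run with the corresponding records from an EMC-stressed run and flagging discrepancies. The sketch below is illustrative only; the record fields, tolerance, and matching rule are assumptions, not the actual IBST instrumentation format.

```python
from dataclasses import dataclass

@dataclass
class BusRecord:
    """One time-tagged message captured from the instrumented aircraft data bus."""
    t: float        # capture time, seconds from scenario start
    label: str      # message identifier (hypothetical naming)
    payload: bytes  # raw message content

def compare_runs(baseline, stressed, time_tolerance_s=0.01):
    """Pair each baseline record with the nearest same-label record from the
    stressed run and report missing or altered messages for post-test analysis."""
    findings = []
    for ref in baseline:
        candidates = [r for r in stressed if r.label == ref.label]
        if not candidates:
            findings.append((ref, None))            # message absent under stress
            continue
        nearest = min(candidates, key=lambda r: abs(r.t - ref.t))
        if abs(nearest.t - ref.t) > time_tolerance_s or nearest.payload != ref.payload:
            findings.append((ref, nearest))         # late or corrupted message
    return findings
```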

In order to achieve flight-like realism and mission relevance, JIMM is programmed to run pre-scripted warfare "scenarios," which, for the purpose of this article, refers to the textual depiction of P-8A crew actions, system functions, external activities or stimuli, and all preconditions in the course of accomplishing a whole or partial mission. Scenarios are based on actual Operational Situations and Tactical Situations (TACSIT) as defined in the P-8A Scenario Development Strategy, 2006 (Scenario Development Strategy 13126/A1J1B/PMA-290/SE/1053 2006); the same Operational Situations and TACSITs used for systems integration and crew training in the P-8A's Systems Integration Laboratory (SIL). In practice, the missions conducted inside an actual P-8A aircraft in the ASIL will mimic previous missions that have been rehearsed in the P-8A SIL.

To illustrate how a portion of the intrasystem EMC testing will be performed in relation to these scenarios, TACSIT 5-4, a hypothetical search and rescue mission, will be utilized. But before this search and rescue mission is conducted, EMC engineers will create an appropriate "communications plan" within TACSIT 5-4 to satisfy one of the more critical facets of these tests: evaluating RF interference between all P-8A transmitters and receivers. EMC engineers use a standalone, internally developed software tool called Prediction of Intra-system EMC to help predict where RF interference will be at its worst. This is a mathematical analysis and prediction program that is used in advance of testing to predetermine the most likely RF interference combinations.

The Prediction of Intra-system EMC program makes the assumption that all receivers and transmitters are potential victims and sources of interference against one another and lists all frequency combinations where interference is likely. These predetermined "worst case" frequency combinations are written into TACSIT 5-4 as part of the detailed communication plan. This mission scenario involves takeoff, climb out, transit to an operating area, coordination of rescue efforts with Navy surface assets, and electronic surveillance measures to keep track of unfriendly forces. Mission system avionics use involves line-of-sight communications with encryption, various data link operations, identification friend or foe, shipboard automatic information system, geo-locating targets with the electro-optical/infrared turret, inverse synthetic-aperture radar, and electronic surveillance measures. This four-hour mission scenario is flown over hostile littoral waters and concludes with the P-8A returning home safely.
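The actual Prediction of Intra-system EMC tool is an internal NAVAIR program, so the sketch below is only a simplified illustration of the underlying idea: treat every transmitter as a potential source against every receiver and flag frequency combinations where a transmit harmonic lands near a receiver's tuned band. The harmonic count, guard band, and example frequencies are invented for the illustration.

```python
def worst_case_pairs(transmitters, receivers, harmonics=3, guard_mhz=2.0):
    """Illustrative source/victim screen: flag every transmitter/receiver
    combination where a transmit harmonic lands near a receiver's tuned band.
    transmitters: {name: tx_freq_mhz}; receivers: {name: (center_mhz, bw_mhz)}."""
    suspects = []
    for tx, f_tx in transmitters.items():
        for rx, (f_rx, bw) in receivers.items():
            for n in range(1, harmonics + 1):
                if abs(n * f_tx - f_rx) <= bw / 2 + guard_mhz:
                    suspects.append((tx, rx, n * f_tx))
    return suspects

# Hypothetical example: the second harmonic of a 121.5 MHz transmission
# falls on a receiver tuned to 243.0 MHz.
print(worst_case_pairs({"VHF_TX": 121.5}, {"UHF_RX": (243.0, 0.025)}))
```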

Intrasystem EMC tests in the ASIL will be concentrated on the integrated P-8A mission systems. Since a single source/victim test matrix listing the individual mission system components would be too difficult to manage, tests will be parsed into smaller, more manageable matrices using a layered approach to test the whole mission system, as sketched below. Equipment such as line-of-sight communications, satellite communications, identification friend or foe, radar, navigation, sensors, MCDS, weapons systems, etc. will be logically grouped into smaller matrices with a goal of (x) number of victim/source tests per hour or per scenario run. Each scenario-driven test event is intended to allow for a manageable but thorough evaluation of a small number of systems and subsystems rather than risk the potential chaos of doing too much at one time. In this manner, individual system-versus-system combinations will be scrutinized for adverse EMC, while building up to and ultimately achieving 100 percent-versus-100 percent operation of the whole aircraft and mission systems suite. We find it critical that EMC test engineers and scenario developers collaborate continually to ensure mission scenarios match EMC test requirements. For all P-8A tests, attempts will be made to use pre-existing TACSIT scenarios. These scenarios or vignettes can be modified in accordance with the P-8A Scenario Development Strategy 13126/A1J1B/PMA-290/SE/1053 (2006) to satisfy the EMC test requirements.
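A rough sketch of the partitioning idea follows: enumerate the full source/victim cross-product and chunk it into matrices sized for one scenario run. The group names and the tests-per-run figure are placeholders, not P-8A test planning values.

```python
from itertools import product

def build_test_matrices(groups, tests_per_run=20):
    """Chunk the full source/victim cross-product into matrices sized for one
    scenario run. groups: dict mapping a functional group name to its equipment."""
    items = [name for members in groups.values() for name in members]
    pairs = [(src, vic) for src, vic in product(items, items) if src != vic]
    return [pairs[i:i + tests_per_run] for i in range(0, len(pairs), tests_per_run)]

# Placeholder grouping, not the P-8A equipment list
runs = build_test_matrices({"LOS comms": ["VHF1", "UHF1"],
                            "Navigation": ["GPS", "ADIRU"],
                            "Sensors": ["EO/IR", "Radar"]})
print(len(runs), "scenario runs of victim/source pairs")
```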

From an intra-system EMC perspective, all receivers and transmitters can be evaluated in this manner, along with search and rescue mission systems and subsystems. The hypothetical TACSIT 5-4 scenario includes elements critical to the intra-system EMC evaluation which are modifiable, yet can run as many times as necessary until one of the previously mentioned victim/source test matrices is complete. Minor changes to the detailed communication plan in the TACSIT will blend the software tools of the EMC engineering discipline with the modeling and simulation tools of IBST. This allows a thorough EMC evaluation of the integrated P-8A systems and subsystems with relevance to the aircraft's intended mission. Advanced EMC testing cannot neglect the air/surface integration challenges nor ignore crucial joint interoperability issues. As programs evolve and plan for joint interoperability and net-ready Key Performance Parameters,


E3 and mission system performance testing will evolve as well.

Conclusion

Creating operationally relevant test scenarios in a controlled environment is necessary to accomplish effective and affordable testing on the extremely complex weapons systems of tomorrow. The P-8A may be one of the first "next generation" systems to undergo testing in such an environment, but it will be followed by a surge of advanced programs in an increasingly difficult and demanding T&E world. The MRTFB, T&E communities, and NAVAIR have taken proactive steps by creating the framework for full-system collaborative and cooperative testing and are poised to take these concepts further as integrated systems advance. For programs like the P-8A, we are learning to leverage simulation expertise, tools, and facilities across test phases. Collaboration between E3/EMC test engineers and flight/ground test engineers reduces cost by sharing simulation and stimulation assets and using common test methodologies. Significant schedule improvements can also be realized by conducting tests concurrently. These types of advanced ground tests have proven to reduce risk for programs and platforms undergoing developmental and operational tests. The ability to transition from ground to flight test with the confidence that all systems work as expected, that interoperability in stressing missions is assured, and that mission crews have fully rehearsed missions is key to efficient and cost-effective execution. With the facilities, laboratories, and simulators in place, the next challenge is to continue to strengthen working relationships and collaboration between the Systems Engineering, Modeling & Simulation, Analysis, Training, and T&E communities, as well as strengthening interfaces with the commercial developers of tomorrow's weapons systems. The path to the future of a usable Joint Mission Environment for all phases of testing begins with small steps and innovative thought. For programs like P-8A and others, the process has begun, and collaborative facilities and infrastructure are critical to future success.

PAUL ACHTELLIK served in the U.S. Navy from 1968 to 1980 as an aviation electrician prior to his present involvement with electromagnetic compatibility (EMC) T&E. He is now a senior member of the EMC Branch within the Integrated Battle Space Simulation and Test Department at the Naval Air Warfare Center Aircraft Division, Patuxent River, MD. The EMC branch is responsible for conducting electromagnetic environmental effects (E3) tests on U.S. Navy aircraft, other DOD aircraft, and similar full scale integrated systems. With 27 years of "hands on" experience, he has participated in over 150 E3 tests on a wide variety of aircraft and has covered all aspects of E3 T&E, including EMC, EMV, P-Static, Lightning, ESD, EMP, and EMI evaluations. Paul is NARTE certified in E3 and considered a subject matter expert in his field. He currently serves as E3 project lead and manager for all P-3 type aircraft, Unmanned Aerial Systems, and the Navy's new P-8A aircraft. The complexity of hardware, software, and C4ISR intra/interoperability within these new aircraft systems cannot be overstated. Advanced systems of systems are data driven and require complex inputs to determine if they are working correctly. Paul's E3 test methodology has evolved to meet this challenge, where the future of testing requires a shift to "Operationally Relevant" test environments to accomplish E3 T&E effectively and affordably. E-mail: [email protected]

References

Military Standard MIL-STD-464A. 2002. Electromagnetic Environmental Effects Requirements for Systems. December 19, 2002.

Scenario Development Strategy 13126/A1J1B/PMA-290/SE/1053. 2006. P-8A Scenario Development Strategy. August 22, 2006.


Design of the Ballistic Missile Defense System Hardware-in-the-Loop

James Buford, John Pate, and Bernard Vatz

Systems Simulation & Development Directorate,

Aviation & Missile Command, Huntsville, Alabama

Test and evaluation (T&E) of geographically dispersed integrated systems is severely constrained by cost, range safety restrictions, and the ability to test while in an operational state. The Missile Defense Agency has embarked on a hardware-in-the-loop (HWIL) framework development that has the capability to characterize the performance of the Ballistic Missile Defense System by integrating the operational software in a distributed laboratory architecture. The HWIL framework is also intended to test the operational assets in their fielded configuration and location. As more advanced radar discrimination algorithms are developed, testing these algorithms and determining the impact on system performance becomes increasingly difficult. The ability to stimulate radar signal processors with synthetic signatures has also advanced over the last few years, thus enabling greater opportunity for testing. The integration of separate defense programs, and thus independently developed HWILs, has been a concern for the agency. The development of the Ballistic Missile Defense System HWIL will provide the agency with a unified architecture across all Missile Defense Agency programs, allowing consistent threat and environmental effects across all systems.

Key words: accreditation; advanced test facilities; complex operational systems; integrated network; realistic mission environments; simulator/stimulator testing labs; verification & validation.

Using the Ballistic Missile Defense System (BMDS) as an example, this article articulates the Missile Defense Agency's (MDA) hardware-in-the-loop (HWIL) framework design and development for testing the BMDS. This framework will allow MDA to establish a degree of confidence in the expected performance of a very complex operational system that cannot be evaluated by conventional tests. The inherent difficulty in executing an operational test in the conventional sense presents the Operational Test and Missile Defense Agencies with challenges to field such a complex system.

This article examines the benefits and challenges of implementing a distributed HWIL framework and articulates areas that are critical in design, implementation, and execution of the BMDS HWIL. In addition, the framework test and control functions, communication architecture, and interface requirements are discussed. Topics include

- BMDS components
- BMDS HWIL fidelity requirements
- Challenges of distributed simulation execution, including data latency, data rates, and synchronization
- Management and coordination of complex test requirements
- Common threat and environment for stimulation of simulation elements
- Methods for HWIL verification, validation, and accreditation.

The ballistic missile defense system

The BMDS Program is designed to provide protection against limited ballistic missile attacks targeted at the United States. The MDA mission is to develop, test, and field this missile defense system.

This article was selected as the ITEA Best Paper, Second Place, at the Annual ITEA International Symposium, November 12-15, 2007, Lihue, Hawaii. ITEA Journal 2008; 29: 23-28.


Using complementary interceptors; land-, sea-, air-, and space-based sensors; and battle management command and control systems, the planned missile defense system will be able to engage all classes and ranges of ballistic missile threats. All ballistic missiles share a fundamental characteristic: they follow a trajectory comprising three phases of flight, namely boost, midcourse, and terminal. By fielding a layered defense system and attacking the missile in all phases of flight, MDA can exploit opportunities to increase the effectiveness of missile defenses and complicate an aggressor's plans. The MDA has connected several test ranges to form the BMDS Test Bed, which will add realism to ground- and sea-based midcourse testing by allowing multiple engagements and different trajectories and adding additional intercept areas. The BMDS Test Bed also includes boost and terminal segment tests, which will demonstrate the viability of the layered missile defense concept.

The potential boost-phase defense elements are high-power Airborne Lasers and kinetic energy systems. The primary elements in the midcourse phase are the Aegis Ballistic Missile Defense and the Ground-based Midcourse Defense (GMD). The terminal elements are the Theater High Altitude Area Defense (THAAD) and the Patriot Advanced Capability 3 (PAC-3). Other elements include the experimental Space Tracking and Surveillance System along with its strategic and theater mission controller, the Command & Control Battle Manager and Communication system, and other agency experimental and operational sensors.

The test and evaluation challenge

Classical test and evaluation (T&E) of a new weapon system entails repeated live "firings" by forces that would be employing the system against the expected threats in an environment similar, if not identical, to the expected battle space. Although the BMDS Test Bed provides for more realistic operational testing and capability assessments, only a limited number of flight tests will be conducted. In support of system assessment activities, the T&E community will use flight test, digital simulation, and HWIL simulation data.

The BMDS HWIL framework provides a means to test the BMDS operational software in a controlled laboratory environment. The HWIL framework is also intended to test the operational assets at their fielded sites and host country. As new advanced radar algorithms are developed, the need to inject threat stimuli directly into the signal processor hardware increases. As much as possible, the architecture incorporates the component operational processing hardware and software that will be used in the field, implementing the "Test What You Fly, Fly What You Test" paradigm.

As the BMDS Block upgrades are developed, the impact on system-level performance must be determined. The HWIL framework will allow MDA management to evaluate the upgrades before fielding.

The MDA is requiring the BMDS HWIL to support BMDS system-level performance-based assessments and to support BMDS system-level concurrent test training and operations functions. The HWIL framework will allow simultaneous execution of engagement sequence groups, testing both theater and strategic assets. MDA can use test data to assess interoperability of MDA elements, demonstrate the Command & Control Battle Manager and Communication system capability to control and manage BMDS communication networks, manage sensors, and display situational awareness to the warfighter.

The Operational Test Agency also uses this test data to characterize BMDS operational capability, which includes threat detection, tracking, discrimination, engagement, intercept, and destruction. Other objectives include characterization of information exchange capabilities among BMDS elements. The warfighter additionally wants to verify courses of action, tactics, techniques, and procedures.

Benefits to HWIL testing

With the complexity of the BMDS, integrating multiple systems into a joint fighting force is a challenge. Each element is a completely different acquisition, and each has somewhat different requirements. Being separate, each element does not know exactly what dependencies and needs it requires for interoperability with the other elements. Independent testing and verification of the elements does not necessarily fully verify the BMDS or fully assess the system capabilities. If, for instance, the boost-phase elements cannot destroy the threat, their tracking data could be used to enable the midcourse battle-manager to use earlier and more accurate data to cue the midcourse element radars. The benefits of the BMDS HWIL are to help in flight test planning, interoperability, and performance assessment.

Flight test planning includes development of flight test concept of operations, timeline analysis for the mission director, determination of when to filter or include range radar track reports, evaluation of the exclusion of test range assets, pre-mission testing, verification of element interfaces, predicting the probability of mission success, and testing of off-nominal excursions.

The BMDS HWIL may also be instrumental in the design and development of the BMDS Battle Manager, which will have to interface with all element battle management systems.


Areas of interest include message translation, message traffic analysis, situational awareness, allocation of interceptors, track correlation, search cueing, drop track reasoning, estimates of sensor covariance, and hand-over strategies between sensors of different elements during different engagement phases.

The most critical benefit is determining system capability and testing of block upgrades. The results of HWIL testing can be used to demonstrate and verify that system requirements are met. Analysis efforts include system capability assessment, kill vehicle and sensor acquisition, tracking and discrimination, and system battle-space evaluation.

HWIL description

This article provides a construct for implementation of a BMDS HWIL and is defined to include as much as possible the tactical hardware and software. HWIL facilities consist of space-based and radar sensors, interceptors, and battle management and communications. Obviously the radar antenna and the interceptor booster cannot be implemented in their entirety. Typically, the radar HWIL consists of the data processors and, in some instances, the signal processors. The interceptor HWIL usually consists of the data processors, which execute the guidance software and the software utilized to process the seeker imagery and determine the interceptor's acquisition, tracking, and discrimination performance. Typically the Battle Manager is represented by the actual tactical hardware and software, with the communication interfaces and simulated delays and timing.

The BMDS HWIL will integrate laboratory facilities in locations across the United States and integrate the fielded operational assets, including those in other countries and at sea. The BMDS HWIL will contain a network to transmit simulation truth data to the elements; a tactical communication network is also available to exercise and evaluate the real communication between elements. The simulation network uses the simulation protocol messages, while the tactical network uses satellite and fiber-optic links, with a variety of tactical message types.

The development of the BMDS HWIL framework will provide the agency with a unified architecture across all MDA programs, allowing consistent hardware, environment, and threat stimulation. Commonality is needed in order to reduce risk. The benefits to achieving commonality in the target generator include:

- Ensuring confidence and control of target data: "Single Source of Models."
- Ensuring consistent target representation across multiple elements: "ALL right or ALL wrong."
- Minimizing the difference in performance between elements: "Level Playing Field."
- Reducing development/modification cost and schedule: "One Time Fix."
- Reducing cost and schedule for element project offices (provides elements with HW/SW to drive stand-alone element testing/verification).
- Reducing target & environmental model verification & validation (V&V) cost and schedule.
- Maximizing reuse of target development efforts and code.
- Reducing risk of interpretation.
- Maximizing configuration control.
- Providing linkage and heritage between elements.

Whether the test is for interoperability or performance verification significantly drives the fidelity and commonality required of the target generator.

HWIL framework. The fidelity of the simulation representations can vary across different programs; however, the BMDS system engineer and integrator must determine the fidelity of the configuration needed based on the requirements and intended use of the simulation output data.

The element representations should at a minimum have the operational software integrated into the simulation or hosted on the actual tactical data processor hardware. In addition, the signal processor could be added, along with the missile HWIL, and in-band injection of scenes to the sensor.

The basic BMDS HWIL architecture will consist of the test, execution, and control (TEC) module, the Test Interface Unit, and the element HWIL representations.

Test, execution, & control (TEC). The TEC module establishes connectivity and determines the particular test cases and setup required. The TEC module must synchronize all participants' simulation time and provide the necessary initialization and start commands to each representation. The TEC module also provides updated interceptor state information from each element to the other elements participating in the exercise.

The TEC conducts three major functions: pre-mission, mission, and post-mission execution. In general, the BMDS HWIL pre-mission TEC provides single point control in defining test cases and provides the capability to specify test simulation start time (past, present, future).
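A minimal sketch of this pre-mission sequence is given below. The ElementStub interface, element names, and test case identifier are hypothetical stand-ins for the real TEC and element representations; the point is only the ordering: distribute a common simulation epoch and initialization data to every participant, then issue synchronized start commands.

```python
import time

class ElementStub:
    """Stand-in for one element HWIL representation on the simulation network."""
    def __init__(self, name):
        self.name = name
    def initialize(self, test_case, sim_epoch):
        print(f"{self.name}: initialized for {test_case}, epoch {sim_epoch:.0f}")
    def start(self):
        print(f"{self.name}: started")

def pre_mission_tec(elements, test_case, sim_epoch=None):
    """Single-point pre-mission control: pick the test case, distribute a common
    simulation epoch (past, present, or future), then issue synchronized starts."""
    epoch = sim_epoch if sim_epoch is not None else time.time()
    for element in elements:       # initialization pass
        element.initialize(test_case, epoch)
    for element in elements:       # start pass, only after all are initialized
        element.start()

pre_mission_tec([ElementStub("Aegis BMD"), ElementStub("GMD"), ElementStub("THAAD")],
                test_case="example-engagement-case")
```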

During the actual test event execution, the BMDS HWIL mission TEC provides displays that summarize BMDS HWIL framework and element health and status, situational awareness of BMDS elements under test (element positions, sensor coverage, and threat), and framework and system events for monitoring.


The BMDS HWIL mission TEC also provides the capability to monitor and display run-time test integrity metrics, including framework and tactical message traffic, message latency, and loss.
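A simple way to compute such run-time integrity metrics from time-stamped send and receive logs is sketched below; the log format is an assumption made for illustration, not the MDSE data model.

```python
def integrity_metrics(sent, received):
    """Per-run message latency and loss from time-stamped logs.
    sent and received map a message id to its timestamp in seconds."""
    latencies = [received[mid] - t for mid, t in sent.items() if mid in received]
    lost = [mid for mid in sent if mid not in received]
    return {
        "sent": len(sent),
        "lost": len(lost),
        "loss_fraction": len(lost) / len(sent) if sent else 0.0,
        "mean_latency_s": sum(latencies) / len(latencies) if latencies else None,
        "max_latency_s": max(latencies) if latencies else None,
    }
```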

After completion of the test case, the BMDS HWIL post-mission TEC provides the capability to import raw and/or processed data to a centralized database management system. This data will be provided to the MDA and Operational Test Agency (OTA) communities for analysis.

Test interface unit. Another critical piece of any HWIL is the target generator module. The test interface unit comprises modules to generate threat trajectories and dynamics, radar signatures, threat plume intensities, and interceptor signatures. In conjunction, common environmental libraries are utilized to induce effects to the signatures. The environmental effects include ionosphere, earth limb, refraction, attenuation due to standard atmosphere, and rain. Other celestial objects modeled include satellites, the sun, and the moon. Interceptor debris is also modeled. The resultant signatures are then provided to the component representations.

As more advanced radar discrimination algorithms are developed, testing these algorithms and determining the impact on system performance has become increasingly difficult. The ability to stimulate radar signal processors with synthetic signatures has also advanced over the last few years, thus enabling greater opportunity for testing. The test interface unit will have the ability to drive both the data processor and the signal processor to minimize the cost impacts of replacing all element representations.

Having a distributed HWIL simulation architecture only amplifies the need for adequate timing analysis. Bandwidth often limits the data rates between facilities and elements. The HWIL system architectural engineer must determine the data rates at each level of the simulation from the TEC, to the target generator, to the element interface, and even the rates associated with tactical communications between the elements. A test interface unit will be co-located with each component to minimize data latency. Each component will have an element-specific interface to incorporate the different radar waveforms and integration rates needed.

MDA test events

The MDA has embarked on a test campaign for each year and block upgrade. The campaign consists of laboratory testing and operational asset testing.

Ground Test-Integrated (GTI) will be a distributed laboratory system-level test, utilizing MDA element HWIL facilities. The purpose of the test is to demonstrate the performance capability of the BMDS. The GTI will provide data for element and system-level assessments by executing a variety of scenarios and conditions and evaluating sequences of events from the BMDS kill chain (e.g., detection, tracking, engagement, etc.).

Ground Test-Distributed (GTD) will be a distributed fielded system-level test. Each BMDS element has incorporated into the tactical operational software the ability to execute simulated tests, similar to the HWIL laboratories. The major difference between the GTD and GTI is that the GTD will exercise the tactical communication links from the actual fielded locations. In general, the test cases in the GTD are a subset of the GTI. The GTD is a progression of the GTI testing. GTD events are intended to confirm that the performance of the operational assets replicates the performance evaluated during the GTI test campaign.

The concurrent test training and operations concept will capitalize on the GTD architecture to allow the warfighter the opportunity to train and test on the operational assets, while maintaining operational capability to defend the nation. This concept will increase the requirements on both the HWIL framework and the operational system. However, the benefit to the warfighter of training while on station will significantly increase troop efficiency. The crews will be able to evaluate their tactics, techniques, and procedures and the command structure communications.

Evaluation

The test requirements process is a large and complex job. The challenge of writing good test requirements can be lessened if a flow-down process is used to define overall objectives and operational scenarios. These will flow down to the system requirements, which will flow down to the subsystem requirements, and so on down to the test requirements. While developing the flow-down process for the requirements, each requirement must simultaneously be made verifiable and able to fit into specifications. Good test requirements will be very specific and reflect the functionality of the components and, in turn, the system.

The primary objective of any evaluation activity is to determine if the test objectives and requirements have been met. This requires that any observed or potential system performance shortfalls be identified. A comprehensive set of system performance measurements, applied on a per-run basis, is used to verify that system performance is maintained within established margins.


These margins define the limits of system performance relative to ensuring successful test implementation.

During each test case run, the critical mission timeline and the expected results for key system events will be documented on the test case run log for each test case. As the test case run is completed, the test director will indicate on the log sheet if the key system events occurred as predicted and if the expected results were obtained. All test case anomalies will be recorded on the test case run log and will be provided to the personnel performing the analysis. After the test case runs are completed, a post-test analysis will be performed. The analysis determines if the mission objectives were met and what the system performance margins are relative to the requirements. In the event of an anomaly, further analysis will be performed on the test case to determine the root cause of the problem and to provide a resolution. A daily assessment report summarizes the information collected during the post-test data analysis activities.

At the completion of the test, the evaluation team will produce a test evaluation report. The contents of this report will include a comprehensive evaluation and analysis of all test objectives and test requirements along with the system level assessment. The results will be made available to the BMDS systems engineer who, in turn, directs future development to improve performance and capability.

HWIL integration and accreditation process

There are four phases in an HWIL integration and accreditation process. The first phase is the delivery of element representations and their stand-alone checkout testing. During this phase, it is the responsibility of the element integrated process team to deliver V&V data certifying that the model is a valid representation of the element within specified limitations and usage constraints. The second phase is the integration of the element representations into the BMDS HWIL framework, in accordance with jointly defined integration plans. Both the framework and element representations verify that the interface control documents have been met.

The third phase includes two distinct activities: (a) element-to-element integration buildup, and (b) test readiness. The integration buildup part of this phase includes testing each element with the system Battle Manager and then testing with all elements scheduled to participate in the HWIL configuration. After integration buildup, test readiness activities are conducted, including regression testing, dry run execution, and finally lock-down of the HWIL configuration baseline.

All anomalies found during integration, regression, and engineering tests will be documented in Test Incident Reports (TIR). Each TIR will be isolated to an operator, framework, or element issue. The TIR is a management process used for documenting, dispositioning, and tracking test incidents for future development throughout the testing life cycle.

The output of phase 3 is a signed certification letter from each participating element stating that its respective element has been successfully integrated into the HWIL in compliance with the Interface Control Document and can support the test objectives and test requirements. Collectively, the MDA and BMDS elements are executing an ongoing suite of V&V activities to establish the credibility of the element test articles. Each element program manager is responsible for reviewing the V&V data and the integration testing results, after which caveats and limitations are generated. This recommendation is to be delivered to the accreditation agent at the Preliminary Test Readiness Review (PTRR).

The fourth phase is the accreditation of the integrated HWIL test configuration. During this phase, the accreditation agent produces an acceptability assessment and accreditation recommendation, which is provided to the MDA directors of systems engineering and test and evaluation. The directors evaluate the accreditation recommendation and determine if the configuration is ready for test. A signed accreditation letter is then prepared and presented at the Test Readiness Review, which allows the formal start of test execution.

Inherent in this proposed accreditation paradigm is the execution with due diligence of commonly accepted modeling and simulation (M&S) V&V practices.

Verification and validation (V&V)

Verification is the evidence of compliance with requirements for a system (i.e., "Did I build it right?"). Simulation verification is confirmation that all data inputs, logic, calculations, and engineering representations within the simulation accurately portray the intended characteristics and interactions. Validation is the evidence of the system successfully achieving its intended purpose, or function (i.e., "Did I build the right thing?"). Validation confirms that a simulation reflects real world expectations and is generally accomplished by comparing simulation results to actual flight test results or other external data. V&V should be implemented in the initial stages of the HWIL development and followed throughout its life cycle.

Failure to plan for proper V&V activities can lead to costly design and schedule ramifications. A clear process for the flow-down of accreditation needs into V&V data products and findings is required. The specific V&V activities identified for execution and the resultant V&V documentation are explicitly identified in a formal V&V plan. All V&V activities should be selected for execution with the goal of satisfying the fundamental data needed to support an accreditation decision.


Caveats and limitations

A key feature of any accreditation decision is the identification of the caveats and limitations associated with the simulation configuration. Caveats caution analysts on the proper use of the test data, while limitations identify capability shortfalls in the test configuration. These caveats and limitations are linked to the specific test objectives and test requirements of a given test.

Accreditation

In accordance with MDA policy, all core M&S will be accredited to support acquisition decisions. M&S are abstractions and may not duplicate all actual, observed phenomena; however, they can provide reasonable approximations. Based on V&V activities and integration testing, an assessment is performed to determine the extent to which the HWIL configuration can meet specified test objectives and requirements. Accreditation is the official determination that the test resource provides credible data that can be applied to meet the intended uses within the stated caveats and limitations.

Summary and conclusion

This article articulates how fundamental test objectives can be met for a very complex system of systems, which cannot be evaluated fully through conventional developmental or operational tests. It examines the benefits and challenges of implementing a distributed HWIL to support such assessments using the BMDS as an instance. Areas that are critical in design, implementation, and execution of the BMDS HWIL are addressed. Based on V&V activities and integration testing, an accreditation assessment is performed to determine the extent to which the HWIL configuration can meet specified test objectives and requirements and to establish a degree of confidence in the expected performance.

JIM BUFORD is an electronics engineer in the Aviation and Missile Research, Development, & Engineering Center (AMRDEC) of the U.S. Army Aviation & Missile Command at Redstone Arsenal, Alabama. He is the chief of the Missile Defense Functional Area and the BMDS HWIL system integrator working under the guidance and direction of the Missile Defense Agency director of modeling and simulation (MDA/DES), specifically the HWIL branch (DESH) or the MDA Model and Simulation Center of Excellence - Huntsville (M&S CoE-HSV). Buford's primary role is as the technical and programmatic lead for the BMDS HWIL Test Framework. He has a bachelor of science degree in electrical engineering with a double minor in mathematics and computer engineering from the University of South Alabama and has completed an extensive array of graduate and professional development courses at the University of Alabama in Huntsville. E-mail: [email protected]

JOHN PATE is an aerospace engineer in the AMRDEC of the U.S. Army Aviation & Missile Command at Redstone Arsenal, Alabama. He is assigned to the System Engineering and Integration Directorate, Ground-Based Midcourse Defense (GMD) Joint Program Office in Huntsville, Alabama, and serves as the government's verification, validation and accreditation lead for GMD HWIL simulations. Pate has a bachelor of science degree in aerospace engineering from Auburn University. E-mail: [email protected]

BERNARD VATZ is the BMDS HWIL system integration lead, working under the guidance and direction of MDA/DES, specifically DESH or the MDA M&S CoE-HSV. The missile defense system exerciser (MDSE) provides the test framework for MDA's BMDS ground tests, exercises, training, and continuous test training and operations (CTTO) venues. Mr. Vatz leads the HWIL testing development and integration of defense system components. He is an electronics engineer at the AMRDEC of the U.S. Army Aviation & Missile Command at Redstone Arsenal, Alabama. Vatz has a master of science degree in engineering, systems engineering, from the University of Alabama in Huntsville and a bachelor of science degree in electrical engineering from the University of South Alabama. E-mail: [email protected]


Innovative Technologies and Techniques for In-Situ Test and Evaluation of Small Caliber Munitions

Andre Lovas

Georgia Tech Research Institute

T. Gordon Brown and Thomas Harkins

Army Research Laboratory

The Georgia Tech Research Institute and the Army Research Laboratory have collaborated in the Defense Advanced Research Projects Agency-sponsored SCORPION program exploring the application of microadaptive flow control techniques to small caliber munitions. This article discusses innovative techniques and technologies created in pursuit of the development, test, and evaluation of this new control technology. Tools developed include the use of g-hardened sensors, processing, and actuator control electronics in 25 mm and 40 mm munitions. Inertial measurement units meeting all survival, packaging, and power requirements were designed and implemented using low-cost commercial off-the-shelf sensors including micro-electromechanical systems accelerometers and rate sensors and solid state magnetometers. Using resources integrated on the processor, flight data were recorded and stored for post-flight retrieval. An innovative projectile soft capture system allowed the projectiles to be safely recovered and reused multiple times. Data analysis techniques were extended to evaluate the in-flight performance of the microadaptive flow control technology. Further, the data served as a diagnostic tool to compare system flight performance with ground-based tests.

Key words: dynamic engagement test environment; guidance and control; integrated electronics; maneuverable munitions; microadaptive flow control technology; SCORPION program; spinning projectiles.

The Future Force Concept for the U.S. Army clearly outlines a strategy for operational scenarios that feature combined-arms operations in a multi-threat, dynamic engagement environment. Precision small to medium caliber munitions are integral and necessary elements of this strategy. To meet this vision, innovative techniques and technologies are needed for both the realization and the test and evaluation of small, spinning, guided projectiles.

With support and direction from the Defense Advanced Research Projects Agency (DARPA), the Georgia Institute of Technology and the U.S. Army Research Laboratory (ARL) have teamed on the SCORPION (Self CORrecting Projectile for Infantry OperatioN) program to explore and develop the applicability of Microadaptive Flow Control (MAFC) technology for aerodynamic steering of spinning projectiles.

The SCORPION program was a multi-phase effort that comprised an initial technology feasibility phase, a technology demonstration phase, and a follow-on extension. The objectives of the feasibility and demonstration phases were accomplished through the successful integration of MAFC into a 40 mm infantry grenade surrogate, while providing sufficient divert control authority and adequate guidance and control to correct for projectile delivery errors and achieve required target impact accuracies. The work in the follow-on phase explored advanced microgenerator actuator technology and application of adequate MAFC-based divert capability in a high subsonic velocity 25 mm projectile (McMichael 2004). Program objectives included

- Develop g-hardened gas generator actuators and fabrication technology;
- Design, build, integrate, and test power, processor, and driver electronics for gas generator actuator systems;
- Research the nonlinear aerodynamics associated with the application of MAFC gas-generator actuators to high subsonic spinning projectiles;
- Integrate actuators and electronics into the flight control system;
- Miniaturize and g-harden the driver and flight control system for launch in a surrogate 25 mm projectile;
- Perform an open loop divert validation flight experiment of a gas generator actuator system using a 40 mm projectile at Mach 0.25; and
- Perform an open loop divert validation flight experiment of a gas generator system using a 25 mm projectile at Mach 0.6 to 0.8.

The Defense Advanced Research Projects Agency sponsored this work under grant No. DAAD19-00-1-0518. This article was selected as the ITEA Best Paper, Third Place, at the Annual ITEA International Symposium, November 12-15, 2007, Lihue, Hawaii. ITEA Journal 2008; 29: 29-36.

This article summarizes the latest work concerning the open loop divert flight experiment of the high subsonic 25 mm projectile.

Integrated system description

While the program focus was on the development of MAFC technology, significant progress was made in the tools, techniques, and integration of technology for the guidance and control of small-caliber projectiles. Using a combination of commercial off-the-shelf components and components originally developed within the Hardened Subminiature Sensors Systems program for use in ARL's diagnostic fuze, an on-board inertial measurement system was designed and assembled (Lyons 2004).

The block diagram, Figure 1, shows the integrated electronics on board the 25 mm projectile. These electronics are hardened to withstand the in-bore acceleration forces experienced during gun launch.

Inertial sensor suite

The sensor suite contains two axes of rate sensors, three axes of accelerometers, and three axes of magnetometers oriented parallel to SCORPION's principal axes, and two additional radially oriented accelerometers. Outputs from these sensors combined with timing information from the oscillator were used by the processor to initiate commanded maneuvers. Sensor outputs were also stored in the processor for post-flight analysis and diagnostics. The processor and oscillator boards are shown in Figure 2. In Figure 3, the oscillator board, processor board, and the board-mounted sensor suite are combined (bottom to top) in a stack that functionally includes all the components of the inertial sensor suite (ISS) and the command guidance.

With the addition of batteries and a driver board, the electronics assembly is complete. This assembly, along with the 25 mm SCORPION main body, is shown in Figure 4 with the driver board, batteries, inertial sensor boards, processor board, and oscillator/daughter board used for interface connection (from left to right, respectively).

Figure 1. System block diagram showing sensor suite, processing, oscillator, and driver board

Figure 2. 25 mm SCORPION processor board and oscillator board


The diameter of each board is 17.5 mm, and the volume of the electronics package is 0.79 cu in.

On-board processing

Prior to launch, the on-board electronics are functionally checked and programmed with the initial flight conditions. The processor senses launch using the longitudinal accelerometer and starts the flight timing and data recording. The radial magnetometer is used to measure roll position and rate. For the open loop divert tests, the processor controls the start of the actuator firing sequence, the timing between firings, and the orientation of firing.
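The following sketch illustrates the flavor of this launch-then-timed-firing logic. The threshold, sample count, and interval values are invented for the example and are not SCORPION flight software parameters (the 0.06 s first delay simply echoes the timing shown later in Figure 9).

```python
def detect_launch(axial_accel_g, threshold_g=1000.0, consecutive=4):
    """Declare launch when the longitudinal accelerometer exceeds a threshold for
    several consecutive samples; values here are illustrative, not flight settings."""
    run_length = 0
    for i, a in enumerate(axial_accel_g):
        run_length = run_length + 1 if a > threshold_g else 0
        if run_length >= consecutive:
            return i            # sample index at which launch is declared
    return None

def firing_schedule(t_launch_s, first_delay_s=0.06, interval_s=0.02, count=3):
    """Open-loop divert sequence: first firing at a programmed delay after launch,
    later firings separated by a fixed interval (placeholder values)."""
    return [t_launch_s + first_delay_s + k * interval_s for k in range(count)]
```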

Calibration of inertial sensors was performed at various stages during the assembly process. However, careful attention to measuring the scale factor and bias was made before final assembly. Sensors were individually tested and aligned to assure that performance met requirements for bias and scale factor before integration with the electronics assembly. By using a methodical procedure of assembly and test from the component to the board level to the unit level, the need for corrective rework was reduced in the final assembly. Checkout and calibration of the integrated electronics included spin, magnetic, rate, and acceleration performance tests. Data from calibration performed after final assembly and potting were used to convert the inertial sensor outputs to engineering units.
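Applying the calibration is conceptually a per-channel bias-and-scale correction, as in the sketch below; the constants shown are made up for illustration.

```python
def to_engineering_units(raw_counts, bias_counts, scale_factor):
    """Per-channel conversion using the bias and scale factor measured during
    calibration: value = (raw - bias) * scale_factor."""
    return (raw_counts - bias_counts) * scale_factor

# Made-up calibration constants for a single accelerometer channel
accel_g = to_engineering_units(raw_counts=2312, bias_counts=2048, scale_factor=0.05)
print(accel_g)  # 13.2 (g)
```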

Data acquisition system

An on-board data recording capability was developed and integrated into the SCORPION design. The data recorder stored 8064 records of data at programmable sampling rates from 1 kHz to 6 kHz. In typical conditions, the 4 kHz sample rate was used, giving full coverage over the duration of flight lasting one to two seconds. The data system recorded 11 analog channels and four additional vehicle state channels. The data record had a 256 sample prelaunch record, with the balance of recording data during flight.
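A sketch of this recording scheme, assuming a simple frozen-at-launch prelaunch buffer, is shown below; the actual on-board implementation details are not described in the article.

```python
from collections import deque

class FlightRecorder:
    """256-sample circular prelaunch buffer that is frozen at launch, with the
    remaining capacity (8064 records total) filled during flight."""
    def __init__(self, total_records=8064, prelaunch=256):
        self.prelaunch = deque(maxlen=prelaunch)   # keeps only the newest samples
        self.flight = []
        self.flight_capacity = total_records - prelaunch
        self.launched = False

    def mark_launch(self):
        self.launched = True

    def sample(self, record):
        if not self.launched:
            self.prelaunch.append(record)
        elif len(self.flight) < self.flight_capacity:
            self.flight.append(record)             # recording stops when full

    def dump(self):
        return list(self.prelaunch) + self.flight
```

At the 4 kHz rate the 8064-record capacity spans roughly two seconds, consistent with the stated flight durations.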

Projectile design

The design of the 25 mm SCORPION was established with safety, reliability, and functionality in mind. The projectile is composed of two sections: the electronic control module and the actuator module. To meet the functional and safety criteria, the actuator module was separated from the rest of the assembly. This design allows for the separation of any potentially hazardous material, such as propellant, from the control electronics until just prior to firing. The electronics module is a potted cylindrical section housing the power, driver, IMU, processor, and connector boards, and a removable ogive (windshield) allowing access to the connector for communication, programming, and downloading of data.

Figure 3. 25 mm SCORPION inertial sensor suite

Figure 4. 25 mm SCORPION hardware and electronics assembly


The propulsion system consists of a cartridge case housing the propellant, and an obturator/pusher assembly to seal the high pressure combustion gases in bore while transmitting torque for spin stabilization and distributing axial force to accelerate the projectile within the gun tube, as shown in Figure 5.

Flight experiments

Initial flight experiments were conducted at ARL using a 25-mm barrel, shown in Figure 6, for interior ballistic design and soft recovery design. The primary objective for these tests was to establish an understanding of the propellant and cartridge case design required to launch the projectile at Mach 0.8. This phase of experimentation was very successful at establishing the charge weight needed to meet the velocity requirements. Another goal was to establish a method of soft recovery for the 25-mm projectile. None of the techniques used in the past to recover small caliber projectiles was suitable for these tests because of the large standoff distances and other safety concerns. However, the idea of using layers of draped Kevlar to nondestructively absorb the kinetic energy of the bullet was explored and tested. This capture method proved successful, as both the projectile and the pusher were slowed and captured. One of the captured projectiles, shown in Figure 7, was recovered after sustaining a launch acceleration of 25,000 g's.

Spark shadowgraphs taken during a test flight trajectory are shown in Figure 8. The initial yaw of the projectile at launch is approximately two to three degrees. Yet, after the maneuver, the resulting angle of attack is approximately 17.5 degrees. This result closely matches predictions from modeled trajectory simulations of approximately 18 degrees computed before flight testing.

Data from two of the sensor channels recorded on board the projectile during a representative flight experiment are shown in Figure 9. These data begin just prior to launch and continue until shortly after impact. Thus, data from the launch event and the entire free flight motion of the projectile before, during, and after maneuver are included. The commanded divert was a single initiation at a timed delay from the launch. The launch was internally detected through comparison to an on-board accelerometer. Depicted are two of the three axes of magnetic field measurement. Also recorded are angular rate in both the pitch and yaw directions, accelerations in all three orthogonal directions, and outputs from an additional pair of accelerometers used to estimate the projectile spin rate. From this raw data, post-processing could be accomplished.

Post-flight processing

Formulations of projectile flight dynamics; guidance, navigation, and control; and strap-down sensor locations, orientations, and outputs are most often done in a so-called "projectile-fixed" or "body-fixed" coordinate system.

Figure 6. 25-mm barrel used for interior ballistic design and obturator efficiency evaluation

Figure 7. Recovered projectile after successful in-flight silicon chip bridge initiation

Figure 5. 25 mm SCORPION assembly with cartridge case, obturator/pusher, and projectile


Figure 8. Orthogonal spark shadowgraphs depicting angle of attack before and after maneuver


This system is right-handed Cartesian with its origin at the center of gravity (cg) of the flight body. The body-fixed (I, J, K) coordinate system has its I axis lying along the projectile axis of symmetry, i.e., the spin axis (with positive in the direction of travel at launch). The J and K axes are then oriented so as to complete the right-handed orthogonal system (Figure 10).

Among the many varieties of magnetic sensors, "vector" magnetometers are devices whose outputs are proportional to the magnetic field strength along the sensor's axis(es). SCORPION is equipped with a tri-axial vector magnetometer oriented with the sensor axes parallel to the projectile's principal axes. The projections of the earth's magnetic field onto each of the sensor axes are given by the following equations:

$$M_I = \cos(\theta)\cos(\psi)\,M_n + \cos(\theta)\sin(\psi)\,M_e - \sin(\theta)\,M_v \qquad (1)$$

$$M_J = [\sin(\theta)\sin(\phi)\cos(\psi) - \cos(\phi)\sin(\psi)]\,M_n + [\sin(\theta)\sin(\phi)\sin(\psi) + \cos(\phi)\cos(\psi)]\,M_e + \cos(\theta)\sin(\phi)\,M_v \qquad (2)$$

$$M_K = [\sin(\theta)\cos(\phi)\cos(\psi) + \sin(\phi)\sin(\psi)]\,M_n + [\sin(\theta)\cos(\phi)\sin(\psi) - \sin(\phi)\cos(\psi)]\,M_e + \cos(\theta)\cos(\phi)\,M_v \qquad (3)$$

where $\vec{M}_N = (M_n, M_e, M_v)$ is the magnetic field vector in a north, east, down earth-fixed navigation system, and $(\theta, \psi, \phi)$ is the Eulerian projectile orientation vector in elevation, azimuth, and roll, respectively.

Because SCORPION's spin rate is large with respect to the yawing rates, the output from a magnetometer axis oriented parallel to the K body axis, designated Mag_K, is a sinusoid whose frequency varies with the projectile spin rate.

Figure 9. Recorded flight history of actuator initiation at approximately 0.06 seconds

Figure 10. Body-fixed coordinate system


For spin-stabilized and rolling projectiles, the roll orientation must be known in order to properly execute desired maneuvers. With knowledge of the magnetic field, and knowledge of projectile elevation ($\theta$) and azimuth ($\psi$), the roll angles at which Mag_K crosses the field ($\phi_M$) correspond to the Mag_K extrema within a period. Ergo:

$$\phi_M = \tan^{-1}\left[\frac{\sin(\psi)\,M_n - \cos(\psi)\,M_e}{\sin(\theta)\cos(\psi)\,M_n + \sin(\theta)\sin(\psi)\,M_e + \cos(\theta)\,M_v}\right] \qquad (4)$$

Evaluating Equation 3 at the principal value solution for $\phi_M$ shows whether Mag_K is at a maximum or minimum. Projectile roll orientation ($\phi$) is estimated by computing $\phi_M$ at the times of each local maximum and minimum and then interpolating at intermediate times. Having thus produced a projectile roll angle history, the roll orientations at times of interest during flight can be computed. The output from an axis oriented parallel to the I body axis, Mag_I, varies directly with the angle between the spin axis and $\vec{M}_N$. This is called the magnetic aspect angle ($\sigma_M$). Time histories of $\sigma_M$ provide information on projectile stability, yawing motion, damping characteristics, and maneuverability. An example of magnetometer data from a SCORPION experiment, annotated to highlight identifiable events during flight, is shown in Figure 11.
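For post-flight work, Equations 1 through 4 and the extremum-based roll estimate translate directly into a few numerical routines. The sketch below (NumPy, with t and mag_k as arrays) assumes elevation and azimuth are treated as constant over the interval of interest and that roll advances by pi between successive Mag_K extrema, which holds while the spin rate is large relative to the yawing rates; it illustrates the method and is not the ARL processing code.

```python
import numpy as np

def body_field(theta, psi, phi, M_ned):
    """Equations 1-3: project the earth field (Mn, Me, Mv) onto the body I, J, K
    axes for elevation theta, azimuth psi, and roll phi (radians)."""
    Mn, Me, Mv = M_ned
    MI = np.cos(theta)*np.cos(psi)*Mn + np.cos(theta)*np.sin(psi)*Me - np.sin(theta)*Mv
    MJ = ((np.sin(theta)*np.sin(phi)*np.cos(psi) - np.cos(phi)*np.sin(psi))*Mn
          + (np.sin(theta)*np.sin(phi)*np.sin(psi) + np.cos(phi)*np.cos(psi))*Me
          + np.cos(theta)*np.sin(phi)*Mv)
    MK = ((np.sin(theta)*np.cos(phi)*np.cos(psi) + np.sin(phi)*np.sin(psi))*Mn
          + (np.sin(theta)*np.cos(phi)*np.sin(psi) - np.sin(phi)*np.cos(psi))*Me
          + np.cos(theta)*np.cos(phi)*Mv)
    return MI, MJ, MK

def phi_M(theta, psi, M_ned):
    """Equation 4: roll angle at which Mag_K reaches its maximum."""
    Mn, Me, Mv = M_ned
    num = np.sin(psi)*Mn - np.cos(psi)*Me
    den = np.sin(theta)*np.cos(psi)*Mn + np.sin(theta)*np.sin(psi)*Me + np.cos(theta)*Mv
    return np.arctan2(num, den)

def roll_history(t, mag_k, t_out, theta, psi, M_ned):
    """Roll-versus-time estimate from the Mag_K record: locate the local extrema,
    anchor them to the Equation 4 angle (offset by pi if the first extremum is a
    minimum), then interpolate assuming roll advances by pi between extrema."""
    d = np.diff(np.sign(np.diff(mag_k)))
    ext = np.where(d != 0)[0] + 1                    # sample indices of extrema
    first_is_max = d[ext[0] - 1] < 0                 # slope went from + to -
    anchor = phi_M(theta, psi, M_ned) + (0.0 if first_is_max else np.pi)
    rolls = anchor + np.pi * np.arange(len(ext))     # ~pi of roll per extremum
    return np.interp(t_out, t[ext], rolls)
```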

Post-flight application of these techniques to the magnetometer data yields critical information on maneuver mechanism performance and airframe response (see Figure 11). For this experiment, a 25 mm SCORPION projectile was programmed to execute a three-thruster divert to the right when looking downrange.

Figure 11. Representative magnetometer outputs: (left panel) radial magnetometer data, Mag_K; (right panel) axial magnetometer data, Mag_I

Figure 12. Performance measures derived from magnetometer data: (left panel) thruster orientations and projectile impact; (right panel) magnetic aspect angle history


After establishing the roll orientation of the Mag_K axis at the thruster firing times, the roll orientation of the thruster nozzle when firing was computed from the known relative orientations of the thruster nozzles and magnetometer axes. The thruster orientations at their respective firing times are seen in the left panel of Figure 12 to be at about the 10 o'clock position. These orientations are plotted as arc segments to indicate the resolution of these roll angle measurements resulting from the combination of projectile spin rates and magnetometer sampling rates. These arcs indicate the performance of the on-board guidance, navigation, and control in executing the commanded maneuver. Also included in the figure is the projectile impact location (to the right) with respect to the mean impact point without maneuver. The associated magnetic aspect angle history, in the right panel of Figure 12, demonstrates that the yawing motion and maneuver resulting from an individual thruster firing depend on both the thruster orientation and the projectile yawing rates at the time of thruster firing. Understanding these interactions is crucial to designing an effective SCORPION guidance law in a tactical round.

Conclusions

In researching the feasibility of small caliber maneuvering munitions, a new diagnostic capability was developed. An integrated system design was required to provide a 17.5 mm data recorder with inertial sensor suite. This system has proven to survive in excess of 25,000 g's in other applications. Its capability provides numerous opportunities for furthering the effort of guided small and medium caliber munitions.

ANDRE J. LOVAS has a master of science degree inelectrical engineering and is a senior research engineer atthe Georgia Tech Research Institute. He is currently

pursuing research on embedded processing and sensorintegration in small caliber munitions. His researchinterests include the use of very high speed integratedcircuit hardware description language (VHDL) to imple-ment embedded digital hardware using field programma-ble gate arrays (FPGAs). He has developed an embeddedmicrocontroller with real-time software control of imageroperation, auto-exposure control, magnetic sensor process-ing, system timing, and 1 Mbps telemetry link. Thisembedded microcontroller is installed in a mortar-launcheddigital imaging reconnaissance system. E-mail: [email protected]

T. GORDON BROWN has a master of science degree in mechanical engineering and is the ballistics team leader in the Advanced Munitions Concepts Branch, Weapons and Materials Research Directorate, ARL. Recent research efforts have focused on design and development of high-G qualified miniature inertial sensor suites utilizing microelectromechanical systems (MEMS) components for medium-caliber military applications. Projectile design aspects covered by Brown and his team include mechanical and aerodynamic design. E-mail: [email protected]

THOMAS HARKINS has both a master of science and a master of arts degree and is a research mathematician in the Advanced Munitions Concepts Branch, Weapons and Materials Research Directorate, ARL. Recent research efforts have included the design, modeling, implementation, and analysis of low-cost sensor systems for use in military ordnance. He holds five patents related to sensor-equipped projectile technologies. E-mail: [email protected]



Test and Evaluation: Department of Defense and Private-Sector Resources—Assessing and Resolving the Modernization Paradox

Drexel L. Smith

Wyle Laboratories Inc., El Segundo, California

A critical need exists to manage current test and evaluation assets, as well as to implement plans to maintain seldom-used facilities in parallel with developing new facilities for new and emerging technologies. Test and evaluation (T&E) provides a critically important element to the entire acquisition process, ensuring that any weapon system meets its intended purpose—from the component level to the full-up system. The T&E community consists of five primary sectors: (1) Military bases and laboratories (government), (2) Major prime contractors (industry), (3) Specialized subcontractors (industry), (4) Independent laboratories (industry), and (5) University research facilities (academia). While independent laboratories are often grouped within the "industry" category, they do in fact compose a separate tier consisting of thousands of dedicated firms providing laboratories and independent test services, and they invest heavily in their infrastructures. The federal government also has a significant investment in equipment and infrastructure in the United States for research and development (R&D) and T&E, much of which also is underutilized. As the experts examine the need to modernize, streamline, and better utilize existing facilities, it is important to consider all five components of the T&E community. This article provides a basis for discussion between government—primarily the National Aeronautics and Space Administration and the U.S. Department of Defense—and industry.

Key words: government-owned and contractor-operated; infrastructure; modernization; privatization; streamlining; test facilities.

Across the board, the test and evaluation (T&E) community is facing a paradox: As weapon systems advance in sophistication and complexity, T&E facilities require continuing modernization to keep pace. On the other hand, with U.S. Department of Defense (DoD) outsourcing dwindling, the industry and academic T&E communities find themselves with costly excess capacity in many areas, as well as a lack of capability to support new or emerging technologies. The unfortunate result is a spiraling increase in costs for maintaining enormous T&E infrastructures, thus severely reducing internal capital available for upgrades and modernization.

Nearly a decade ago, U.S. Secretary of Defense William S. Cohen called for streamlining of the Science & Technology, Engineering, and Test & Evaluation Infrastructure in a report to Congress (April 1, 1998). Secretary Cohen recognized that significant cost savings could be realized, and test capabilities improved, by implementing a plan for streamlining and privatizing operation and management. However, government laboratories and weapons ranges have continued to grow in size, complexity, and configuration during the ensuing years.

Many T&E facilities exist that should remain owned and operated by the government, such as ranges that combine training and T&E, live-fire areas, and sea and air ranges. However, many others would benefit from streamlining and privatization if potential pitfalls can be identified and avoided.

Issues for consideration

Some issues to contemplate when developing a plan for streamlining include the following:



• Government needs to recognize that it has been conducting itself in a "conflicted-environment way" when it comes to T&E.
• Government must recognize that the testing industry is more than just the capabilities that reside within the enterprises of its prime and subcontractors.
• The most effective use of existing T&E assets (government and industry) needs to be thoroughly evaluated and defined.
• Government needs to make hard decisions, specifically in terms of its own test facilities.

Beyond prime contractors

Thousands of independent test laboratories have been created over the years based on meeting DoD's needs. These companies employ a majority of the industry's true testing experts, who in turn provide "third-party" evaluation and qualification for DoD's most critical defense systems. Yet, these same companies are overlooked when DoD endeavors to evaluate the status of T&E and determine its future. These companies are crucial to the overall plan, and their voices need to be heard and acknowledged.

Defining effective use of assets

The DoD, the National Aeronautics and Space Administration (NASA), prime and subcontractors, independent laboratories, and academic institutions all have "labs." These labs are used to provide vital services for preproduction testing as well as long-term research and development (R&D), where they specialize in both R&D and T&E. To ensure the best results from any new streamlining effort, planners must define roles and responsibilities, review available assets and infrastructure, and define the alternatives for new working relationships among government, industry, and academic participants.

Making the hard decisions

New approaches are needed, including creation of new performance objectives and metrics to measure utilization for comparison with critical requirements. In some cases, the requirement is based on utilization and cost; in other situations, a facility is needed to demonstrate a new or emerging technology that may not offer a financial payback. For example, NASA, DoD, and the private sector all have wind tunnels for development and qualification work; however, too many facilities are competing for work in the lower-flow regimes, and none operate efficiently at high Mach numbers. For more than 10 years, industry and government have been in discussions, and thus far have been unable to come together with a workable solution that takes advantage of all facilities or that addresses both current and projected needs.
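To make such metrics concrete, the sketch below compares facilities on utilization and cost per test hour and flags low-utilization candidates for further review. The facility records, the available-hours figure, and the 30 percent threshold are hypothetical illustrations, not an established DoD metric set.

# Hypothetical utilization and cost-per-test-hour comparison across facilities.
AVAILABLE_HOURS = 2000          # assumed schedulable test hours per facility per year

facilities = [
    {"name": "Wind tunnel A", "test_hours": 350, "annual_cost": 4_200_000},
    {"name": "Wind tunnel B", "test_hours": 1500, "annual_cost": 6_000_000},
]

def utilization(record, available=AVAILABLE_HOURS):
    """Fraction of available hours actually used for testing."""
    return record["test_hours"] / available

def cost_per_test_hour(record):
    """Total annual cost spread over the test hours actually delivered."""
    return record["annual_cost"] / record["test_hours"]

for facility in facilities:
    used = utilization(facility)
    status = "review for consolidation or partnering" if used < 0.30 else "retain"
    print(f'{facility["name"]}: {used:.0%} utilized, '
          f'${cost_per_test_hour(facility):,.0f} per test hour -> {status}')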

Optimal solution

Based on decades of experience in the independent testing industry, it is fair to say that the optimal solution is one in which the government drives the existing private testing industry to invest its own money in future technologies and facilities. However, to accomplish this goal, the government needs to reduce its role as a direct competitor in the T&E industry, except where unique situations exist.

Current assets and infrastructure

As mentioned previously, the federal government has a significant investment, measured in tens of billions of dollars, in R&D and T&E equipment, infrastructure, and laboratories throughout the nation. However, industry also has a significant investment in facilities and equipment, especially the "full-service, one-stop" independent test laboratories that maintain facilities to accommodate everything from basic components to full-scale rocket and weapons systems. Since the end of the Cold War, many test-related resources in both the government and private sectors are now underutilized, even with the new set of conflicts and challenges in the Middle East.

Government range and test facilities, covering thousands of square miles, account for much of the government's investment. However, there is also a large mix of facilities operated as federally funded R&D centers, including those operated by industrial firms (Sandia, Oak Ridge, Savannah River); those associated with universities (Ames, Jet Propulsion Laboratory, Lawrence Livermore); and those operated by nonprofit institutions (Aerospace Corp., National Defense Research Institute, Project Air Force). Contractors also operate a large number of T&E facilities and ranges for the government, including the Air Force Flight Test Center at Edwards Air Force Base (AFB), Space and Strategic Defense Command, and the Naval Surface Warfare Center.

Conflicted environment: The government operates many T&E laboratories and facilities, much to the detriment of the industry that was created to provide T&E services in the first place. It has created a situation where many of its own facilities compete for business with private industry—which is the antithesis of privatization—and the result stifles private investment due to fear of competition with the government.


Overall, downsizing and Base Realignment and Closure (BRAC) programs have had little impact on apparent laboratory over-capacity, so planners need to rethink how to best utilize all available resources and develop a cooperative, long-term relationship between government and industry. Part of this process will be to more clearly define: (a) the true cost of operating, maintaining, and improving R&D and T&E capabilities; and (b) the most effective interfaces among government, industry, and academia.

As stated at the outset, T&E facilities are critically important to the acquisition process. Yet, as critical as they are, they offer a poor return on investment and return on net assets, especially in light of the continuing investments for modernization to accommodate the latest DoD warfare technologies. With regard to over-capacity, the result for DoD is significantly increased R&D and T&E costs that eventually drive up the overall procurement costs for any new weapon system.

Defining roles and responsibilities

The government (NASA and DoD specifically) is inherently responsible for: (a) defining mission requirements and specifying the needs, (b) establishing procurement and fiscal controls and contracting methodologies, and (c) accepting the fully developed systems.

Traditionally, industry has taken the government's requirements, created the optimal design, and performed the complete manufacturing process (design, fabrication, production, and distribution). At the same time, whether intended or not, the government has moved increasingly into R&D, laboratory testing, T&E/operational T&E, and live-fire testing and training. Producing an effective plan for streamlining the process will require revisiting each of these areas. Over the years, many studies have been undertaken to review the nation's research, development, test and evaluation (RDT&E) needs and capabilities, and some of the results have been implemented while others have been largely ignored.

Independent test labs

With commercial off-the-shelf (COTS) and fixed-price and warranty systems, industry in general has assumed increasing responsibility for the demonstration, validation, and reliability of weapons systems. Often forgotten, however, are the independent laboratories, which are grouped into the industry category but are not part of the typical prime contractor lab structure. With thousands of facilities around the world, they have a separate identity and perform a unique range of valuable roles and functions:

(1) Trained personnel and dedicated facilities in specific areas of expertise;
(2) Costs allocated across a large number of users ("pay as you use");
(3) Testing is core business, and focus is on innovation, modernization, and cost effectiveness;
(4) High volume and repetitiveness that allow complex work to become routine;

(5) Unbiased results without conflict of interest; and
(6) Independent quality assessments.

Industry's role is too often viewed simply as "contractor operated," but it is now necessary to look at the bigger issue. This is not just about changing badges of existing staff. Both government and industry have top-quality people, but government and industry (including independent test labs) must have clear roles and responsibilities. Both parties must participate in a dialog to review appropriate roles and missions to ensure the nation's continued excellence in T&E.

As part of this process, industry must continue to: (a) maintain a solid infrastructure of laboratories and support facilities; (b) provide third-party demonstration/validation services; (c) provide investments in modernization of facilities, equipment, and manpower; and (d) provide operation and maintenance (O&M) contract labor at competitive costs.

Facilities and capabilities in both government and industry are underutilized. The independent lab industry, for instance, has more than 7,000 organizations that list 8734 as their primary SIC Code. Many are narrow-range, special purpose facilities, but relatively few possess a full range of test capabilities and must—of necessity—compete with the small specialty labs, which have lower operating costs and overhead. At the same time, according to the General Accounting Office, significant T&E excess remains in DoD and other government organizations.

Government and industry relationships

Traditionally, industry provides support service labor at RDT&E facilities, but partnering on facilities can be accomplished much more comprehensively than is being accomplished today. Government and industry already have a long-term partnering relationship that includes general services (cleaning, cafeteria, gardening, maintenance), technical services (computer systems, metrology, O&M test systems, O&M ranges), and facility O&M (joint facility use, multi-investors, expanded partnering). With these as a model, planning must move forward to combine T&E resources to make the industry cost-efficient once again for all concerned.

When it comes to increasing the return on investment for laboratories, a new language must be developed with the right vocabulary:


• Activity-Based Costing: Assigns costs based on consumption;
• Joint Ventures: Public/private partnerships;
• Outsourcing: Government is responsible while another organization completes the work;
• Service Shedding: Divestiture when service is no longer provided; and
• Vouchers: Government subsidies.

Key elements of the acquisition process that T&E measures are effectiveness, reliability, and suitability. By definition, T&E requires a significant investment in infrastructure, specialized equipment, and skilled manpower, so the critical question becomes: How does DoD ensure, with a high degree of confidence, that T&E data are unbiased, objective, appropriate, reliable, and valid?
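As a minimal sketch of the activity-based costing idea defined in the list above, the fragment below allocates a shared laboratory's annual cost to its users in proportion to what each consumed. The cost figure, program names, and hours are hypothetical.

# Hypothetical activity-based costing: a shared laboratory's annual cost is
# assigned to programs in proportion to the test hours each one consumed.
ANNUAL_FACILITY_COST = 3_000_000            # illustrative total operating cost

hours_consumed = {"Program X": 400, "Program Y": 250, "Program Z": 150}

def allocate_costs(total_cost, consumption):
    """Assign costs based on consumption ('pay as you use')."""
    total_units = sum(consumption.values())
    return {user: total_cost * units / total_units
            for user, units in consumption.items()}

for user, charge in allocate_costs(ANNUAL_FACILITY_COST, hours_consumed).items():
    print(f"{user}: ${charge:,.0f}")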

In many ways, industry is ahead of the government in implementing capacity reductions, driven by the constant review of assets that do not produce a return. The metric most used is "RONA"—a review of Return on Net Assets including land, buildings, facilities, and equipment (minus liabilities). For instance, a major aerospace contractor has instituted significant consolidations by focusing on its core business, developing or enhancing partnerships with suppliers and service providers for noncore services and goods, and reducing the number of internal laboratory facilities.
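For illustration only, with hypothetical numbers: a laboratory earning $4 million of annual operating income on $80 million of land, buildings, facilities, and equipment, less $30 million of liabilities, has $50 million of net assets and a RONA of 4 / 50 = 8 percent. A facility sitting idle on the books drags that figure down and becomes a natural candidate for divestiture, outsourcing, or sharing.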

Also, many prime and subcontractors are "surplussing" their excess test equipment, allowing smaller companies, such as independent labs, to purchase well-maintained and reliable equipment to continue its useful life cycle. Partnering agreements between contractors also have provided cost-effective ways to keep vital facilities available. If one company does not want to lose access to an underused capability, it has the option to outsource the entire facility to a commercial lab. This is a win/win strategy: The first company can negotiate full access on a priority basis, while the second company enhances its capabilities to support other customers.

As another strategy, several major prime contractors are participating in a "laboratory alliance" to minimize excess capacity through the sharing of resources. This method uses a single work authorization under a multilateral services agreement implemented by several companies.

Two obstacles that continue blocking effective partnering are the incomplete move to COTS standards and the lack of good databases. While the move to COTS is long overdue in some circles, no commercial standards have been determined except in a few specific areas. Moreover, failure (fragility) limits have not been established, and there are no reliable comparisons of reliability versus performance limits at the component level.

Both government and industry representatives need to change the way they think about the respective R&D/T&E roles. For example, not every entity needs its own lab or test range, and with today's data transmission technologies, researchers need not be in proximity to the actual test. Examples abound: from flight test models to the eventual exploration of Mars, a reduction in facilities will allow for more efficient use of those remaining.

Metrology is another area for consolidation. Each branch of the military, NASA, and the Department of Commerce (National Institute of Standards and Technology) maintains extensive calibration standards laboratories, but industry also has extensive capabilities. Unique requirements do exist, and specialty facilities will always be required, but there is an optimum balance between maintaining in-house capability, outsourcing, and combining facilities across all branches of the military services and government agencies.

Time for decision and solutions

The time for decision is rapidly approaching, and that decision must resolve the original paradox if T&E is going to move forward: Investments are needed to modernize, but excess capacity needs to be reduced to free up capital.

Competition between the public and private sectors is not the answer. The real solution is to eliminate excess capacity, develop partnerships for joint use, and provide sustainable funding opportunities to industry for continuing operations.

A mechanism is required to review all options across the military services, government agencies, industry, and academia. A set of standard reporting formats must be developed to ensure consistency. Roles and mission statements must be drafted in enough detail to determine and eliminate overlaps, because industry does not want to invest in capabilities that already exist or in resources that will force it to compete with government facilities.

Accurate and detailed assessments are needed to determine the cost basis. After many studies, there is still no accurate database with regard to R&D, T&E, and laboratory capabilities across the United States. Facilities often have similar equipment performing different functions. The task at hand is to perform a needs assessment, properly align the cost basis, project future needs, and make detailed adjustments. This effort will be unproductive, however, unless the available information covers all government agencies, industry, and academia.


Industry's role in fielding top-quality systems, earlier and at less cost, is this: Continue to provide O&M contract labor at competitive prices; maintain a solid infrastructure of laboratories and support facilities; participate in rethinking how vast resources can be "right-sized"; work to change the paradigm of "every organization needs its own facility"; and support the development of COTS standards.

The solutions are within reach, but the hard decisions still need to be made. A good start is to define the roles and responsibilities of all major stakeholders, which include: DoD, other government agencies, industry (especially independent test labs), and academia. The objective is to discontinue the competitive approach for utilization of T&E facilities, as well as to eliminate and discourage excess capacity throughout the T&E community.

Finally, and most important, the Secretary of Defense needs to make a firm commitment to move forward, by taking concrete action that encourages partnering with the government and that stimulates industry investment via increased opportunities to share in test services contracts.

Following is a discussion of several attempts at implementing a Government-Owned and Contractor-Operated (GOCO) plan for major test facilities—an idea whose time has come once again, and one that needs to be carefully reexamined in light of new and innovative concepts, as well as lessons learned.

GOCO case studies

In recent years, Wyle Laboratories and others have worked with both DoD and NASA to manage and operate government test facilities, including:

(1) McKinley environmental test facility, Eglin AFB
(2) Landing gear/tire/brake test facility, Wright-Patterson AFB
(3) Building 65 structural test facility, Wright-Patterson AFB
(4) Environmental test facility at Naval Air Warfare Center (NAWC)-China Lake
(5) Laboratory consolidation at NASA Kennedy Space Center (KSC)
(6) Hyperbaric chamber at Wright-Patterson for treatment of burn victims
(7) Human centrifuge at Warminster, Pennsylvania
(8) Propulsion test facilities at Air Force Research Laboratory (AFRL)/Arnold AFB
(9) Centrifuge and human effectiveness facility at Brooks AFB

While some of these studies have been successful, others have not fared as well. In either case, the lessons learned will be extremely valuable in determining a future methodology for using a partnership as a means of reducing redundant resources and the associated O&M costs—for both DoD and Wyle.

McKinley environmental test facility: Eglin AFB

The McKinley Climate Laboratory Main Test Chamber at Eglin AFB is the largest facility of its kind in the world. The environmental trials of the Nimrod aircraft at Eglin, for example, are a major milestone on the way to proving the maturity of the Nimrod design, as well as freezing the production aircraft design by the end of the year (Figure 1).

At issue is the fact that this facility is very expensive to maintain and operate. And, as a U.S. Air Force facility, the demand is limited.

Competition is from a variety of sources, and options could include:

(1) A cold weather outdoor test facility in Fairbanks, Alaska, which operates at -65°F for more than four months of the year as ambient conditions. The cost of operation is minimal, but operators are subject to nature for control of test conditions. This may be acceptable for many test programs, but for running a test under specific laboratory conditions, it may be too risky (a similar outdoor facility exists in North Dakota).

(2) Hundreds of environmental conditions test chambers exist in government and private facilities. If testing can be performed at the component level, the reduced need for full-scale testing might lessen the need to maintain the McKinley Climate Laboratory.

(3) Some customers do not want testing to be conducted in a government facility because of concerns about protection of data (a lingering view exists that for any test in a government facility, all data become public). In addition, the government cannot commit to a fixed cost or specific schedule. Both concerns could be alleviated with a government-owned/contractor-operated type of program. A critical question then becomes: "Could a contractor-operated McKinley Climate Laboratory allow for additional testing to be conducted so that there would be sufficient funds to offset the costs?"

Figure 1. The McKinley climate laboratory main test chamber at Eglin AFB

Landing gear, tire, brake test facility: Wright-Patterson AFB

Through its legacy companies, Wyle operated this facility from 1966 to 2005. During that period, the government workforce declined from nearly 20 to just two personnel, and it became a GOCO facility. This required Wyle to assume increasing duties beyond the core test and engineering mission.

With the expanding scope, Wyle developed a greater ability to scale personnel resources up or down, and the company established a cooperative agreement with the government to keep facility utilization high. Wyle brought $1.5 million of external funding into the facility in its last year of operation, making it the location of choice for outside testing by companies such as Goodyear and Michelin.

Wyle offered to take on full responsibility for the facility if the Air Force would provide a commitment of workload to baseline the costs. The result was that O&M was moved from the research section of the laboratory to logistics.

Building 65 structural test facility: Wright-Patterson AFB

As with the McKinley Climate Laboratory, the structural test facility (Building 65) at Wright-Patterson has housed a world-class structural test capability able to accommodate full-scale aircraft. On several occasions, the AFRL expressed an interest in making the facility a GOCO operation, and Wyle has offered to take on the role of contractor—with a commitment to migrate the facility to a Contractor-Owned/Contractor-Operated (COCO) facility.

By including both government and commercial workloads, there should be sufficient demand to maintain such a world-class facility should the Air Force decide to move in that direction.

Environmental test facility: NAWC-China Lake

Wyle Laboratories and NAWC-China Lake developed a working model by which NAWC could perform tests for Wyle, and Wyle would have access to NAWC's environmental test facilities. The long-term vision was for Wyle to establish a commercial laboratory within the NAWC complex, with NAWC providing support in specialty activities such as insensitive munitions and ordnance function tests. A cooperative agreement was developed as a contracting vehicle, and a number of tests were performed by NAWC under its terms.

Unfortunately, Wyle could not determine a sufficient level of business, nor could NAWC commit to a continuing level of environmental testing to justify proceeding with the development of the internal laboratory concept. However, such an agreement is an attractive option that remains open to this day.

Laboratory consolidation: NASA's KSC

As part of the winning proposal to manage the Joint Base Operations Contract, which covered both NASA/KSC and the Air Force/Cape Canaveral Air Station, Wyle offered to review and develop a concept to consolidate the metrology and nondestructive testing laboratories into a single complex that would be operated as a COCO facility. Located outside the complex gate (the research park is near the visitors center), it allows easy access for commercial users throughout Central Florida.

During the first year of the contract, Wyle completed the business model, participated in site selection activities at the proposed KSC Research Park, and developed a business plan and funding program. The process was stopped, however, as a result of land management and environmental sensitivity issues requiring a complete site assessment.

Wyle was able to successfully integrate the Air Force and NASA metrology laboratories into one operation (three locations), which resulted in substantial cost savings; however, total commercial laboratory consolidation remains under discussion.

Although a work in progress, this consolidation is an example of a government/private-sector program that can work, with all stakeholders emerging as winners: (a) The government will be able to reduce spending to maintain expensive facilities; (b) a world-class capability will be available to a wide range of users throughout Central Florida; (c) the throughput of work will be increased, reducing time and costs per item; (d) the contractor will secure a steady workflow for many years; and (e) employees will enjoy a new and dynamic work environment.

Hyperbaric chamber for treatment of burn victims: Wright-Patterson AFB

This unique facility, used by AFRL to study oxygen effects under pressure, has been semi-privatized to allow burn patients to receive oxygen treatments. The program allows for increased utilization of the facility


to defray costs and allows AFRL to retain ownership for continued research as funding permits. This is another example of a program in which everyone wins.

Human centrifuge: Warminster, Pennsylvania

A Wyle legacy company, Veda, won a competition in 1996 to privatize the NAWC's Aircraft Division Warminster dynamic flight simulator/human centrifuge. The Navy retained ownership of data and removable cockpits (and other proprietary materials) in exchange for issuing Veda a sole source contract to operate the facility.

This turned out to be an unworkable arrangement for the following reasons:

(1) Although Veda operated the dynamic flight simulator, it was responsible for all costs, including the facility's rental. So, the "Government Owned" portion of that deal was actually a misnomer (Navy support of some of the incurred facility costs would have helped the privatization effort).

(2) Veda had no real control of the facility or freedom to market it commercially. The Navy controlled who used the facility and what could be done there through the constraints it built into the sole-source contract on using the Navy's government-furnished equipment and how operations had to be conducted. For example, Veda could not use the government-furnished equipment to market the facility for g-tolerance improvement program training to augment its limited R&D projects, because that was viewed as competing against other dedicated Navy g-tolerance improvement program training facilities.

(3) The original business plan involved trying to run the facility as the Navy had, with a marketing strategy based on previous customers providing adequate funding for operation. When these old funding sources dried up, a scramble took place to identify new funding sources and new customers. The Veda business plan assumed the Navy would provide at least $500,000 per year of project work to support the transition to a commercial operation, but it never materialized.

During this period, Veda also tried to privatize the large anechoic chamber at Warminster to support an emerging communications and antenna prototyping and testing business. In that case, there was no commercial competition, but the Navy stopped the deal to avoid competition with the facility that it was replicating in Maryland. Penn State's Applied Research Laboratory (ARL) has been in negotiations with the local township to take over operations and resurrect the facility, but the specific terms of the deal are not known.

One example of a successful privatization is the operation of the Inertial Navigation Facility at Warminster by the Penn State ARL. This was successful because a continual flow of funded Navy navigation programs enabled the operation to remain viable without participants having to find new customers and funding sources for near-term survival. This relationship continues 10 years after base closure and has allowed ARL to expand its navigation resources to other government and commercial areas.

Propulsion test facilities: Arnold AFB

Arnold Engineering Development Center (AEDC) and Lockheed Martin Space Systems Company signed a memorandum of agreement in December 2000 for a 10-year alliance for electrical propulsion testing opportunities in the center's Space Environmental Chamber 12V.

The purpose of the agreement is to work together to accomplish product research testing, product development testing, and engineering manufacturing development testing of Lockheed Martin electric propulsion systems at AEDC. When upgrades and checkout are complete, AEDC will provide electrical propulsion testing facilities and capabilities, and Lockheed Martin will provide the integrated component systems for testing.

Centrifuge and human effectiveness facility: Brooks AFB

Under the most recent BRAC program, the human training centrifuge and other human effectiveness test systems (hyperbaric and hypobaric chambers, disorientation simulators, and ejection seat trainers) are to be relocated from San Antonio, Texas, to Dayton, Ohio. From a practical standpoint, the cost of relocation is so substantial that constructing a new centrifuge and other equipment is more cost effective.

At this point, the AFRL is performing cost studies, but it seems clear that a new dynamic flight simulator will be constructed in Dayton to replace the centrifuge at Brooks AFB. This situation creates another opportunity, and Wyle has proposed to take over the system's operation at Brooks AFB to provide a commercial screening and training facility for potential commercial space travelers. This concept will allow the Air Force to have continued access to the Brooks AFB system as a backup until the replacement facility in Dayton is operational.

Lessons learned from the failed attempt at Warminster are being applied to allow for this anticipated operation to become a commercial success—provided there is sufficient demand for the services. As a contingency, the


business model has been constructed assuming no commitment from the Air Force for continued operations.

Summary and conclusions

There are countless examples of situations in which the government can and should make an investment and then own and operate a test facility (the Department of Energy weapons complex is a prime example). And, there are equally many examples in which original equipment manufacturers, prime contractors, academia, and independent laboratories should make the investment to own and operate their own test capabilities.

At one end of the spectrum are material coupon testing and standard analytical chemistry tests, where the U.S. marketplace hosts several thousand privately held, commercial testing laboratories. This type of work can be purchased as a "price sheet" commodity on a by-the-test basis. With this highly competitive and robust industry available, one would need a strong justification to develop a new facility. At the other end of the spectrum are the highly sophisticated and unique facilities such as a high-energy laser facility where only DoD could justify its need.

But, one must consider all the test facility demands that fall somewhere in between. Following are some thoughts on the topic:

(1) Commodity-level testing should be left to the private sector.
(2) Because a prime contractor or original equipment manufacturer is generally in the business of manufacturing a product, an investment in a full test facility or in costly test equipment is viewed as a business cost that must be recovered.
(3) Academia is in the business of sharing knowledge, so access to test facilities on an as-needed basis is more important than ownership.
(4) Only the government has the ability to construct and operate test facilities that are unique and may not provide a financial return.
(5) The independent test laboratory industry fills the gaps.
(6) Not every organization can or should own a test facility because data and information can be transferred and shared in real time with today's technology.

(7) Costly test facilities should be shared to the greatest extent possible to lighten the burden on any one organization and to add value to the test results.
(8) For unique facilities, a forum involving all stakeholders should be established to review these key questions:
    (a) Who is in the best position to establish the requirements?
    (b) Who can best design and construct the facility?
    (c) Who should finance and own the facility, and by what means?
    (d) Who has the required experience and track record to operate the facility?

These considerations and questions are not new, but they need to be revived and revisited as part of a meaningful dialog on resolving the modernization paradox. As is obvious, the government needs to recognize that the testing industry is more than just the capabilities that reside within the enterprises of its prime and subcontractors. It must work with academia and the industrial sectors to determine the most effective use of existing assets (government and industry) and plan for the efficient use of new assets.

It is time for the government to step up and make the difficult decisions, and for stakeholders in the other sectors to demonstrate their commitment as well.

It is recommended that the senior leadership within DoD and NASA join together to facilitate further discussions with industry to develop a pathway for determining the best utilization of test facility resources. The term "industry" should not be considered just the major prime contractors, but in this case must include the "independent laboratory" industry as well.

DREXEL L. SMITH is the senior vice president, Corporate Offices, for Wyle Laboratories Inc., El Segundo, California. He has more than 39 years of experience in the test and evaluation of weapons, munitions, propulsion systems, nuclear systems and components, in positions ranging from test engineer to corporate management. Most recently, he was senior vice president and general manager of Wyle's Technical Support Services Business Unit, which operates and maintains laboratories and research facilities for government and industry clients such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and major prime contractors. Programs include support of the General Electric Turbine Engine test facility; Propellant Systems, Non-Destructive Evaluation and Metrology Laboratories at Cape Canaveral/Kennedy Space Center; and research centers at Wright-Patterson Air Force Base. He also directed the Research Instrumentation and Metrology Services contract for NASA Langley. Smith is a long-time supporter and member of ITEA, having served on the Board of Directors (1996–1999), and was re-elected in 2005. He also served on the DoD Defense Science Board, UXO/Land Mine Task Force (1996–1998). E-mail: [email protected]


Testing and Training 2020: From Stovepipes to Collaborative Enterprises

Jim Sebolka

The Paulus Institute, Washington, D.C.

David Grow

Headquarters, U.S. Army Training Support Systems, Washington, D.C.

Bo Tye

Developmental Test and Evaluation, Department of Defense, Washington, D.C.

This article presents approaches for overcoming the obstacles in the path to integrating Department of Defense (DoD) test and evaluation (T&E) and training communities to better support the modern-day warfighter and to enable new opportunities for shared investment, development, and process improvement. Testing and training, now managed under separate fiscal and managerial constructs, are hindered from establishing shared capabilities by distinctly different goals and funding. Each community must synchronize its priorities and funding with those of the other community to secure joint investments. A two-community perspective of the future path was expressed at the International Test and Evaluation Association (ITEA) Open Forum on Testing and Training. The proposed path to achieving integration of testing and training includes the establishment of a singular management backbone that encourages joint investment to eliminate duplication of effort and thus, systemically bring about cost reduction and enhance effectiveness across both communities. Such a shared backbone would enhance testing, training, interoperability, and warfighting through the increased commonality and realism of warfighting systems prior to fielding.

Key words: acquisition; combat readiness; cost reduction; interoperability; joint investment; testing and training.

Faced with a 30-year history of efforts to integrate defense testing and training, the International Test and Evaluation Association (ITEA) determined to bring together the leaders of these two communities to address a path forward focused on success and documented findings. The result was the ITEA National Open Forum on Testing and Training hosted by the ITEA George Washington Chapter on October 3–4, 2007. Creative approaches, lessons learned, and community insights were also solicited for input. This article represents the collective findings from that effort, and stands ready to serve as the first touchstone on future efforts.

A major issue for the Department of Defense (DoD) is, "How does the DoD improve readiness and capability while cutting costs for training and test and evaluation (T&E) within the context of an overarching defense enterprise?" A corollary to this is, "How does DoD accomplish cost cutting while improving synergies in the areas of test and training based on principles that will survive from one set of leadership in DoD to another over time?"

It is the purpose of this article to assess where we have been, where we are, and where we should be in the year 2020. The authors aim to present approaches to overcome the issues blocking the path to synergize T&E and training for a greater good than that which can be achieved by each community acting alone and to empower defense leadership with enduring solutions for the defense enterprise.

Challenges

On the surface, the DoD has numerous capabilities and substantial funding to upgrade those assets. However, peeling back the onion reveals the challenges



that exist within that structure. Combat readiness and technical evolutions present immediate demands upon the test and training infrastructures. DoD testing and training must therefore remain ready to execute any instruction at any time, recognizing that a lengthy planning, programming, budgeting, and execution process is the only path to new investment. This is the current sense within those communities. Therefore, each test or training facility within each Service must retain as much of its assets as possible to prepare for the next requirement, even as the costs required for maintenance rise with inflation and equipment age and the financial support for such maintenance diminishes. This paradigm must change. Examples of these challenges for change include:

• The Strategic Missile Defense test capabilities at Kwajalein Atoll have large-scale radar systems which exceed 35 years of age and operate using vacuum tubes no longer in production. Maintenance of these systems mandates the customized manufacture of these tubes or piecemeal replacement technologies at great expense, but at lesser expense than the wholesale replacement of the radar systems. Also, the Roi-Namur large-scale radars at Kwajalein serve Army Space Command.
• Directed energy weapon systems add new technological challenges to the DoD. Speed-of-light weaponry requires specialized targets, instrumentation, and ranges to handle the direct effects of the weapon beam, recognizing that any error (even something as simple as a coffee cup in the path of the test beam) can lead to disastrous consequences as the weapon changes trajectory.
• Hypersonic and large footprint weapons have additional challenges to find sufficient airspace and range capacity for testing and training operations.
• Improvised explosive device (IED) defeat mechanisms have mandated the use of high power jammers in an environment encroached heavily by commercial spectrum use.
• Recognition of the individual warfighter as a key element of technology has led to new instrumentation requirements, which forces the addition of more weight and bulk onto overburdened training mission participants. As technology capability grows, so does the fielding to the individual warfighter, adding to the combat load even as the technology diminishes in size and weight.
• Incorporation of new aviation platforms mandates reexamination of airspace usage and monitoring. The F-22 is flying at higher altitudes than most combat aircraft, while unmanned aerial vehicles are flying lower. Thus, the definition of airspace is now requiring more accurate monitoring of simultaneous activities across the airspace.
• IED usage and world population shifts have led to a change in the warfighting spectrum. Urban canyons, multistory buildings, and close-in combat operations now must supplement the conventional force-on-force combat mechanisms in training and testing, without adding to the training or testing time horizons for completion. Peacekeeping and nation building efforts have also mandated new duties for military personnel that had not been part of the original training designs.

While the DoD attempts to keep costs as low as possible for its testing and training operations, such cost savings mandate retaining equipment and facilities that are unaffordable to replace, but also expensive to retain and maintain. Each Service, each community, and each functional capability must provide the resources and staffing to keep these capabilities available to support an ever-changing DoD mission profile. While an enterprise-wide approach would integrate these solutions for cost effective benefits, the current business model places defense test and training ranges under various management structures and financial oversight processes. Stovepiped approaches to these issues thus become institutionalized across the DoD to meet individualized requirements.

While these approaches provide near-term solutions (without waiting for the execution of the full budget process), they serve as a lightning rod for criticism from various analyses of the Department. Congress, the Base Realignment and Closure (BRAC) Commission, the Quadrennial Defense Review, and periodic audits pursue opportunities to save funding and reduce perceived excess capacity. Meanwhile, the Department has historically struggled to secure additional funding and lands to be used for test and training in preparation for the inevitable next war. Today, those efforts are validated as the Department fights the Global War on Terror, also known as "the long war."

In the training world, funding is focused on operations and maintenance (O&M), procurement, and military construction (MILCON), with a lesser amount on R&D. The reverse is true for testing. It is a question of proper balance, which neither community optimizes for the overall defense enterprise. Table 1 shows distinctions between the testing and training missions and roles.

Today's testing community grew out of an acquisition environment that had fielded systems with substantial problems while acquiring new weapon systems to provide the warfighter an increased


capability. Test and evaluation serves as what Secretary of Defense Perry called "the conscience of acquisition" by providing a focused approach integrated into weapon system acquisition. The weapon system acquisition business model and its language sustain the testing community. Reimbursable range operations fund the T&E community workload with minimal institutional investment.

At the same time, the current training community construct arose from the readiness world, serving to prepare warfighters for combat. The warfighter-focused approach, business model, and language keep the training community operating. Institutional funds provide the key support assets needed to keep warfighters in a state of readiness while expanding their capabilities to face combat challenges.

Thus, the two communities began as separate entities and grew into distinct missions and cultures, united only by their support to the warfighter and a few shared resources. Cost models, business enterprises, end objectives, and even the language of daily operations differ between them. All of these factors serve as obstacles to the Department's efforts to share resources to the benefit of the warfighter and the taxpayer.

Current situation

Currently, the test and training communities primarily attempt to resolve their individual challenges using community-specific investments. This reinvestment approach generates community-wide savings but costs the DoD substantial resources because T&E systems typically are not applied to training applications, and vice versa. Ultimately, the two communities established today's infrastructures, which inherently inhibit the shared use of ranges, technologies, and mission space.

These policies and practices suffer from a lack of a strategic vision binding both communities and are divided by individual mission statements which are focused on acquisition or training thrusts rather than a unified thrust of victory in war. The emphasis needs to be focused on the warfighter as the ultimate customer rather than the missions of the two communities individually serving the warfighter.

Last year a policy letter was signed by the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L), the Under Secretary of Defense for Personnel and Readiness, and the Director of Operational Test and Evaluation (DOT&E) at the Office of the Secretary of Defense (OSD). This letter was sent to the three Service secretaries requesting their responses on how they would implement collaborative efforts between the two communities for activities requiring similar capabilities.

The BRAC Commission, which reviewed national assets in search of synergies for cost reduction, struggled even to settle on simple definitions. Very little came out of its effort to drive the testing and training communities together.

Demand for training is rising as combat forces redeploy back to their home bases. However, only a limited number of sites exist at which to conduct training (as well as testing). The demand for training is going to exceed the capacity available.

Efforts to unify leadership (e.g., the Defense Test and Training Steering Group) have been thwarted by efforts to improve the two individual communities. The separate focus has led to an imbalance and diminished the stability of the shared testing and training environment. One of the problems is that warfighters view T&E as an encroachment into their critical domain, as T&E may force changes based on failures or safety risks within the inventory of military equipment. Sometimes this has been the case. By comparison, the training community sustains readiness.

Table 1. Comparison of testing and training cultures

Objective                 Testing                                      Training
Community of interest     Acquisition                                  Operator (readiness)
Key concerns              Warfighter equipment                         Warfighter operations
Key products              Materiel safety release                      Warfighter readiness
                          Material acceptance                          Unit readiness
                          Reliability certification                    New equipment training
                          Operational effectiveness                    Increased warfighter effectiveness
                          Suitability
                          Survivability
Milestones                Production decisions                         Combat requirements
                          Fielding decisions                           Unit readiness
Funding                   Limited operations and maintenance (O&M)     O&M
                          Limited procurement                          Procurement
                          Limited MILCON                               MILCON
                          Research and development (R&D)               Limited R&D


Since T&E involvement in the training realm has defined ends and data requirements based on weapon systems, the warfighter perceives little benefit: the test functions are typically accomplished before the weapon system is widely distributed to the field, and T&E is often perceived as delaying the receipt of the latest equipment into the field. Delays in receipt are tangible to the warfighter. Improvements in safety and effectiveness of an undelivered system are intangible. Thus, the warfighter perception is validated within his realm of awareness.

Today's growing financial and manpower constraints exacerbate the rice bowl syndrome. Program managers are being forced to concentrate their available resources on immediate requirements as opposed to contributing to long-term investments in the broad defense enterprise solution that would lead to more nearly global optimization. Consequently, today's existing incentives are often counterproductive to enterprise-wide solutions.

In the current environment, there have been numerous attempts to tie together test and training investments. Unfortunately, little has been achieved in translating early agreements and initiatives into meaningful long-term progress. One recent example was an initiative undertaken as a result of the OSD AT&L/P&R/DOT&E "Interdependency" memo. CRIIS, the "Common Range Integrated Instrumentation System," produced mixed results. After several months of negotiations, the closest the test and training communities could come to an interdependent agreement was to develop the hooks in CRIIS for "an open architecture system capable of supporting both missions." These hooks provide an "ability to grow the system over the next 5-10 years to meet training needs" as well as providing for the ability to "develop a radio capable of running Training's Range Instrumentation Waveform." Fiscal, mission requirement, and timing concerns of each community overshadowed the benefits recognized for the long term and thus sacrificed the future benefit for the current fiscal and business focus realities.

The objective situation for 2020

The ITEA Open Forum concluded with the participants arriving at a two-community perspective of the future path needed to establish an integrated testing and training operational basis. The first two key points below surfaced repeatedly:

Singular management. Testing and training cannot effectively merge common requirements and operations under the current management construct. A new paradigm needs to be created to establish a singular management approach across testing and training while securing the responsibilities of both the T&E and training communities. At the same time it is critical to take into consideration the Title 10 responsibilities of the Services. In OSD, T&E is divided into offices primarily covering operational test and evaluation (Director, Operational Test and Evaluation), resources (Test Resource Management Center), and developmental test and evaluation (Deputy Director, Developmental Test and Evaluation). By comparison, training falls under a single structure within the Under Secretary of Defense for Personnel and Readiness (USD P&R). These separate structures divide test and training objectives, plans, and funding, and further dissect the test community into operational and developmental focuses.

A singular management approach at OSD would swiftly enable the progress demanded for savings by the overall defense enterprise. The senior staff member leading it should be at the DEPSECDEF or USD level to properly integrate the communities. An alternative is to have the Director, Operational Test and Evaluation serve as the focal point. Whichever of these three options is chosen, that individual needs the authority and resources (funding and manpower) required to properly execute the mission, an independent reporting system to the Secretary of Defense which promotes objectivity, and an enforcement system which promotes defense enterprise-wide, long-term solutions. As an interim step, the re-establishment of the Defense Test and Training Steering Group would help unite near-term coordination efforts between the communities.

This straightforward approach for change at OSD must take into consideration the multidimensional degrees of complexity involving the Services, joint commands, and program managers. Whatever recommendation is implemented at the OSD level must be mirrored swiftly within the Services and COCOMs to ensure that the streamlining and focus are made Defense-wide.

Incentives for shared investment. Singular management cannot succeed without the proper incentives to make it work. Testing and training, now managed under separate fiscal and managerial constructs, are hindered from establishing shared capabilities by distinctly different goals and timing. Each community must synchronize its priorities and funding with those of the other community to secure joint investments. Rarely do these priorities and funding opportunities completely intersect, leading to duplicative and stovepiped investments. Attempts to overcome these challenges lead to a situation where one community's high priority hinges on the other community's low priority, and thus unravel during the planning, programming, budgeting, and execution process. An incentive process rewarding efforts to link capabilities


Within T&E, joint investments are encouraged by the use of the Central Test and Evaluation Investment Program (CTEIP). CTEIP provides funding for joint investments to encourage the Services to consider the needs of their sister Services and to share their future visions across the Department in building new test capabilities. This model, applied to joint test and training investments, would encourage similar sharing across these two communities. Further, this model would bridge today's two-community structure by encouraging the sharing of investments and requirements to develop singular solutions and capabilities.

Restructuring of investment processes into enterprise and Service-based processes. Enterprise-level distributed capabilities, such as Real Time Casualty Assessment systems, backbone networks, fiber optic installation and maintenance, and standardized instrumentation systems, would enable additional savings at the Department level if provided by the DoD rather than left to be implemented according to individual Service requirements, schedules, and budgetary constraints. By contrast, individual instrumentation such as sky screens, toxic fume detection, optical plume detection, and pressure sensors is best left to the current procurement construct. Individual requirement and budgetary processes are ill suited to the establishment of a corporate, enterprise-level capability within the DoD. Similarly, corporate enterprise investment cannot proactively address near-term and Service-specific demands on testing and training. Corporate and individual investment need separate processes to achieve maximum benefit to the DoD, particularly if managed under a single oversight structure. Standardizing a backbone architecture will also create new and vital requirements at the Service level to invest in common and connective capabilities.

The Forum also identified the following major points:

- Establish a shared, multilevel secure enterprise network for testing and training. Currently, the training and testing communities struggle over when to use classified or unclassified versions of networks, instrumentation, and operations. The near-term savings from these individual choices are causing long-term detriment to the Department: individual solutions are being established, and integration opportunities are thus thwarted. A departmental decision to establish a singular standard for multilevel security across the communities would negate these problems while establishing a DoD standard for the virtual and live battlespaces. This would deliver long-term savings to the Services and to all joint operations involving the range infrastructure of the United States. This investment would also serve as the first corporate investment leading to a common electronic infrastructure across testing and training.

- Shared, realistic joint battlespace. Testers and trainers both seek to establish combat realism in their operations. Live, virtual, and constructive simulations have emerged as a vital tool for both communities, but continue to grow separately. Establishing a DoD initiative to provide a shared live, virtual, and constructive (LVC) architecture within the operating space of the multilevel security network above would immediately save resources and encourage shared investment across the communities. The Joint National Training Capability (JNTC) and Joint Mission Environment Test Capability (JMETC) are both taking key initial steps to make this effort a reality. But they are chartered to perform other functions, with the shared LVC environment a byproduct of their efforts. A jointly managed capability, replete with networking standards and protocols, would ensure testers and trainers link to a common architecture. Ultimately, the communities would come together through their shared standards and investments in them.

- Timing. Today's national security environment mandates prompt attention to the issues above. Network Centric Warfare (NCW) is changing the test world to one in which the commander's decision making is a critical element of the test. Unlike past weapon systems, NCW systems give the commander a multitude of choices in responding to combat scenarios. No longer is the commander left to decide whether to fire a single gun, turn a single weapon system, or take other singular actions. Instead, resources can be dedicated and rededicated in rapid succession. The commander's decision creates the ultimate pass-fail scenario for the weapon system. This enhanced capability has become inherent both in the battlefield commander of the future and in the warfighter commanding a single weapon system such as the F-22. Therefore, NCW is creating opportunities and critical needs that pull the training and T&E communities together into a singular effort focused on supporting the warfighter.

Conclusion

T&E and training must converge to support the modern-day warfighter and to enable new opportunities for shared investment, development, and process improvement. Opportunities and models exist, such as the Defense Test and Training Steering Group and the CTEIP program, which may be reapplied to this effort to secure immediate results.


Longer-term opportunities abound, but require hard choices for change in the managerial and fiscal models. This cannot be accomplished without the proper incentives. The DoD's investment in a shared LVC and multilevel secure backbone for testing and training can also be implemented in the near future to tie the communities together in ways never before realized.

The establishment of a singular backbone, and encouraging joint investment through a CTEIP-style model, will eliminate unnecessary duplication of effort. It will systemically bring about cost reduction and enhance effectiveness between the two communities in a proactive fashion. It will encourage creative solutions to problems for the warfighter. Such approaches will establish a new model for T&E and training that makes realism and instrumentation common across communities and Services. Ultimately, this test and training enterprise approach will enable greater realism for both communities at reduced overall cost. Lessons learned in the creation of the singular backbone could then be reapplied to a series of corporate investments that establish a universal digital battlespace for testing and training to secure further savings. Such a shared backbone would enhance testing, training, interoperability, and warfighting through the increased commonality and realism of Service systems prior to fielding. This increased commonality will also enhance "Joint Service" processes by institutionalizing part of this shared framework during the testing and training phases of combat preparations.

JIM SEBOLKA graduated from King's College with a bachelor of arts degree in mathematics and then volunteered to become a lay missionary. He has now completed the circle to work again with religious affairs as the vice president of The Paulus Institute, with an emphasis on bringing together eastern and western churches through their liturgies. The intervening 40 years were spent in the USAF or working for DoD as a contractor. After obtaining a master's degree in industrial engineering from Texas A&M, his activities included serving in nine air campaigns during the Vietnam War. In Vietnam he participated in the planning and assessment of 400,000 combat sorties, which included developing a quarterly risk factor for all of the combat missions flown and presenting a monthly briefing to the four-star commanding general for the air war on combat aircraft lost. On the Air Staff, Sebolka served as the primary expert on developing and producing the worldwide air-to-ground conventional munitions requirements for the USAF. At the State Department, he provided assessments for the Comprehensive Nuclear Test Ban Treaty. As an adviser to the Thai government, his activities included writing and coordinating the Thai Operations Research Society charter for the Office of the Prime Minister and directing the definitive systems study for automation of logistics for the Thai Army. In Korea, he successfully led U.S. support to establish the Korean Institute of Defense Analysis as a world-class think tank. In addition, he was the contributing editor for the Korean Business Review of the Federation of Korean Industries, writing 80 percent of that journal. As military assistant to the NATO Adviser to the SECDEF, he initiated and led an international team to develop the methodology for munitions guidance for SHAPE.

As executive director for a joint office addressing the RDT&E of aircraft combat survivability, Sebolka's activities included persuading the four Services to accept the OSD initiative for Joint Live Fire Testing and combining three different Service program elements into one at OSD. He was one of the founders of the Survivability and Vulnerability Information and Analysis Center (SURVIAC). Subsequently, he provided support to the Live Fire Test and Evaluation (LFT&E) Office at OSD. He has also provided expert testimony to Congress on changes to LFT&E legislation.

DAVID GROW has served as an instrumentation engineer, test director, dean of a test director college, product manager, and assistant project manager for the U.S. Army, and has served several details into the U.S. Navy, U.S. Air Force, and OSD to run financial programs for DoD. Grow is currently serving as the lead engineer and senior acquisition professional for Headquarters, Department of the Army Training Support Systems (DAMO-TRS) at the Pentagon. In that capacity, Grow is serving as the action officer coordinating the Training and Testing Interdependency Initiative (T2I2) effort for the Army's Director of Training, in coordination with the U.S. Army Test and Evaluation Command (ATEC).

COUNT BOYER "BO" TYE, JR. is a retired Air Force officer with over 26 years of experience in Air Force Special Operations, Test and Evaluation, and Acquisition. He served as the chief of the Special Operations, Tactical Airlift and Trainer Division, Directorate of Global Reach Programs within the office of the Assistant Secretary of the Air Force for Acquisition. He also served on the Air Staff as the chief of the Policy and Programs Division for Test and Evaluation. During his military career, Tye flew over 3,900 flying hours and flew 59 combat/combat support missions during operations URGENT FURY, DESERT STORM, and PROVIDE COMFORT. Tye currently supports the deputy director, Developmental Test and Evaluation (DT&E) for the Department of Defense. In this capacity, he develops and implements systems engineering and T&E policy to ensure Service programs are realistic, relevant, and in compliance with DoD and Congressional directives. His awards include the Legion of Merit, the Airman's Medal, and the Air Medal with oak leaf clusters.


Evolving Enterprise Infrastructure for Model & Simulation-Based Testing of Net-Centric Systems

Steven Bridges

Joint Interoperability Test Command, Fort Huachuca, Arizona

Bernard P. Zeigler, Ph.D.

Arizona Center for Integrative Modeling and Simulation,

Electrical and Computer Engineering,

University of Arizona, Tucson, Arizona

James Nutaro, Ph.D.

Oak Ridge National Laboratory, Oak Ridge, Tennessee

Dane Hall, Tom Callaway, and Dale Fulton

Joint Interoperability Test Command, Fort Huachuca, Arizona

This article provides perspectives on how a test organization can organize and plan for enterprise-wide adoption of advances in emerging technologies and techniques, whether developed in-house or acquired from external sources. This article enumerates capabilities that greatly enhance a test organization's ability to support the impending testing demands from GIG/SOA-based projects and presents an overarching strategic plan for integrating existing test technologies, identifying enterprise-wide technology gaps, and coordinating the development and acquisition of new test capabilities to greatly accelerate readiness to meet impending net-centric testing challenges. The plan discussed in this article includes short-, medium-, and long-term horizon components to acquire or improve current test capabilities and offers a layered architecture that provides a framework for capability acquisition. Test organizations can incentivize their contractors to exploit the composability, reusability, and extensibility attributes of SOA to support the development of the layered architecture. The authors conclude that the design of the test organization's instrumentation and automation on top of the GIG/SOA infrastructure should be based on a model-driven software approach and on systems-engineering modeling and simulation principles and frameworks.

Key words: Global Information Grid (GIG), Service Oriented Architecture (SOA), net-centric testing, real-time interactivity, composability, reusability, extensibility, scalable.

Given Department of Defense (DoD) mandates for transition to net-centric operation, a test organization must acquire the ability to perform large-scale and fast-paced developmental and operational testing of Global Information Grid/Service Oriented Architecture (GIG/SOA)-based development projects. For example, the Joint Interoperability Test Command has the responsibility to test for GIG/SOA compliance for such projects as Net-Centric Enterprise Services and Net-Enabled Command Capability. A test organization's ability to support the impending testing demands from such GIG/SOA-based projects can be greatly enhanced by acquiring net-centric test capabilities. Although most test organizations already have the necessary capabilities to some extent, they can benefit from an overarching strategic plan for integrating existing test technologies, identifying enterprise-wide technology gaps, and coordinating the development and acquisition of new test capabilities to greatly accelerate their readiness to meet impending net-centric testing challenges.

ITEA Journal 2008; 29: 51–61

Copyright © 2008 by the International Test and Evaluation Association


Net-centric test capabilities

Several specific capabilities that a test organization must address to effectively conduct developmental and operational tests of net-centric systems are described below (Buchheister 2005, Carstairs 2005).

Composability

Composability is the capability to seamlessly compose the elements of the desired test environment by selecting and configuring live (e.g., human players, military systems) and/or virtual (digital representations of live components) versions of all test environment components. Test organizations can take advantage of the SOA and component styles that offer technical advantages for the composition of test instrumentation services and applications. Contractors should be incentivized to exploit the SOA constructs to build plug-and-play capabilities while meeting current and future needs.
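As a minimal illustration of the plug-and-play idea, and not any fielded JITC or GIG/SOA tool, the following Python sketch shows one way a test environment might be composed from a registry of interchangeable live and virtual components; all class and component names are hypothetical.

# Illustrative sketch only: a minimal plug-and-play registry for composing a test
# environment from live and virtual components. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TestComponent:
    name: str
    kind: str                      # "live" or "virtual"
    start: Callable[[], None]      # hook invoked when the environment is brought up


class TestEnvironment:
    """Registry that lets components be swapped without changing the test harness."""

    def __init__(self) -> None:
        self._registry: Dict[str, TestComponent] = {}

    def register(self, component: TestComponent) -> None:
        self._registry[component.name] = component

    def compose(self, names: List[str]) -> List[TestComponent]:
        # Select and configure the requested mix of live and virtual components.
        return [self._registry[n] for n in names]


env = TestEnvironment()
env.register(TestComponent("c2_node", "live", start=lambda: print("live C2 node up")))
env.register(TestComponent("threat_sim", "virtual", start=lambda: print("threat simulation up")))

for component in env.compose(["c2_node", "threat_sim"]):
    component.start()

Under this style, swapping a live component for its virtual counterpart changes only the registration, not the harness that composes and starts the environment.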

Reusability and persistence

The test infrastructure persists over time and includes organized repositories to support the reuse of such elements as simulation models/digital representations, test development and implementation processes, and test experimentation components and tools (intelligent test agents, for example). This includes the capability to automatically store, catalog, and retrieve all information produced by any node on the network in a comprehensive, standard repository. A critical advantage of such repositories for the test organization is that they also help to avoid duplication of effort by the test organization's multiple contractors.
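A minimal sketch of such a store-catalog-retrieve capability, assuming a simple metadata schema of our own invention (the field names, tags, and example entries are hypothetical), might look like the following Python fragment.

# Illustrative sketch only: cataloging test artifacts (models, tools, results) with
# searchable metadata so later test events can find and reuse them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE artifacts (name TEXT, kind TEXT, producer TEXT, tags TEXT, uri TEXT)"
)

def store(name, kind, producer, tags, uri):
    # Any node on the network could call this to register what it produced.
    conn.execute("INSERT INTO artifacts VALUES (?, ?, ?, ?, ?)",
                 (name, kind, producer, ",".join(tags), uri))

def retrieve(tag):
    # Later test events query the catalog instead of rebuilding the asset.
    cur = conn.execute("SELECT name, uri FROM artifacts WHERE tags LIKE ?", (f"%{tag}%",))
    return cur.fetchall()

store("link16_loader", "test tool", "event_2008_03", ["link-16", "stimulation"],
      "repo://tools/link16_loader")
print(retrieve("link-16"))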

Extensibility

The test infrastructure can be efficiently extended through the use of common architecture, interfaces, processes, and tools. Extensibility, composability, and reusability are mutually supportive attributes of a model-driven software design methodology informed by engineering modeling and simulation fundamentals. The test organization must incentivize contractors to adopt such methodologies to achieve composability, reusability, and extensibility attributes in their developments.

Instrumented trustworthy measurement

Instrumented trustworthy measurement is the ability to instrument test environments in a manner that is principally nonintrusive and highly embedded, and that provides real-time measures at the system and system-of-systems (SoS) levels. Measurement is consistent and repeatable across experimental replications, providing reliable and trustworthy data. Specifically, instrumented trustworthy measurement includes the capability to

- Reproduce the test environment and play back segments of the test event in a manner that facilitates assessing the effects of modifying the experimental conditions with plug-and-play replaceable test components.
- Measure, compare, and evaluate experimentally specified architectural and parametric configurations of the system under test.
- Collect and segregate operational data (e.g., tactical and strategic data exchanged between systems under test) from test support data (e.g., instrumentation, simulation, analysis, and test control data); a minimal sketch of such segregation follows this list.
- Seamlessly switch between real-time and after-test analysis of collected data.
- Perform the testing of net-ready key performance parameters (NR-KPP) and compliance with the Net-Centric Reference Model for upcoming GIG/SOA and other net-centric developments.
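The following Python sketch illustrates only the segregation idea referenced above; the record fields and category labels are hypothetical and are not drawn from any specific instrumentation system.

# Illustrative sketch only: tagging each collected record as operational data or test
# support data at capture time so the two streams can be segregated for analysis.
from collections import defaultdict

OPERATIONAL = "operational"     # e.g., tactical messages exchanged between systems under test
TEST_SUPPORT = "test_support"   # e.g., instrumentation, simulation, and test control traffic

def segregate(records):
    """Split captured records into separate streams keyed by their category tag."""
    streams = defaultdict(list)
    for record in records:
        streams[record["category"]].append(record)
    return streams

captured = [
    {"category": OPERATIONAL, "source": "sut_a", "payload": "track update"},
    {"category": TEST_SUPPORT, "source": "probe_3", "payload": "latency sample"},
]

streams = segregate(captured)
print(len(streams[OPERATIONAL]), "operational records,",
      len(streams[TEST_SUPPORT]), "test support records")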

Visibility and controllability

As net-centric systems under test become increasingly complex, the ability to visualize complex interactions and exert control over such interactions becomes increasingly vital for the test organization's ability to provide credible test results.

Real-time interactivity

Real-time interactivity includes visibility into events and processes through a display/representation of the test environment that is tailorable and provides accurate situational awareness of the test infrastructure and the tests that are underway. Currently, many test environments focus on relatively simple interactions and do not allow for highly complex many-on-many scenarios in which test environment components (networks, systems, and forces) react within a dynamic, closed-loop environment.

Features of advanced test organizations

The test organization should strive to be on the cutting edge of test organization capabilities, including

- Agility. Ability to automatically and adaptively monitor and manage selective functioning of the test infrastructures, test scenarios, networks, and systems and services under test.
- Automation. Ability to continually enhance the degree of automation of all the processes involved in defining, implementing, managing, reusing, and executing test events. This includes automated self-organizing recognition, initialization, and control of plug-and-play test environment components.

- Scalability and Applicability to Full Life Cycle. Ability to scale the test infrastructure in terms of size, fidelity, and numbers of participants to accommodate the domains of systems engineering, development, development testing, operational testing, interoperability certification testing, and net-readiness and information assurance testing.

- GIG/SOA Integrated Robust Computer and Communication Infrastructure. Ability to provide high-performance computational support wherever needed in the configuration and execution of the test environment and the analysis of test data (in real time and after test). As the SoS and collaborations brought in by customers for testing become increasingly complex, the test organization will require increasingly powerful computing resources to manage all aspects of testing. The test organization will also require the ability to provide reliable, cost-effective, flexible, and GIG-enabled communication to all nodes.

(Note: Most of these requirements are not achievable with current manually based data collection and testing. Instrumentation and automation based on model-driven and systems-engineering modeling and simulation principles and frameworks are needed to meet these requirements.)

Proposed acquisition strategy

Acquiring all the assets needed for the above capabilities would significantly upgrade the test organization's capability for net-centric testing, but the assets will vary in degree of maturity. Some may be ready for implementation or purchase in the near term, and others may require significant investment in research and development. To help manage the acquisition of such assets, we propose an acquisition strategy having three levels corresponding to long-, medium-, and short-term planning horizons: (a) an overall plan for test infrastructure evolution, (b) test infrastructure development to address test technology shortfalls, and (c) planning for individual test venues and events (Figure 1). The underlying objective of the proposed strategy is to foster reuse of existing assets so as to maximize the cost-effectiveness of acquisition. The goal should be to set up a process for reuse, so that new capabilities are needed only when existing ones cannot be reasonably applied to the new situation.

Planning levels

Long-term planning

With respect to long-term planning, the objective is to look out past the horizon of imminent test events and current infrastructure improvement projects to identify emerging technologies and emerging system objectives and to lay out the broad approach to development of the test and evaluation infrastructure. As Figure 2 illustrates, we suggest a planning approach to test individual customer projects and test events as part of the longer life cycle of the test infrastructure evolution. Key activities in the long-term strategic plan are as follows.

As new systems that will be subject to test organization certification are defined and developed by a customer, the test organization must derive a coherent family of test objectives from the stated or to-be-developed system under test requirements and behavior specifications. Test events, venues, and infrastructure evolution must be synchronized with the customer system development schedule.

The high-level characteristics of the test development methodology and of the infrastructure to be used must be determined to meet the perceived complexity, volume, variety, and velocity of test challenges, with the objectives of furthering reuse of test resources and fostering cumulative knowledge management. This includes, among other things, establishing requirements for infrastructure development tools, such as tools for formalizing and designing test models.

This long-term planning process passes technical shortfalls and their temporal attributes (e.g., "needed immediately," "needs can be foreseen for tests scheduled in the near future," or "is not critical now") on to medium-term planning.

Figure 1. Net-centric testing planning levels

Figure 2. Long-term cycle of test activities

Medium- and short-term planning

The planning for individual test venues and events consists of a cycle of activities that work within the structure established by the high-level planning. As Figure 3 illustrates, this cycle consists of the following basic elements:

Establish objectives. The test objectives must provide an overview of the high-level system-specific test objectives and identify basic technical and operational evaluations that are needed to support future decision events. The objectives must

- Be tied to the system acquisition strategy.
- Establish the basis for a test and evaluation schedule in terms of the test capabilities that will be available after each iteration of the test and evaluation process; this should include both anticipated costs and timelines. It is vital that the test organization and the customer agree to an integrated budget and timeline for each test objective.

- Be coordinated with the customer's strategy for system development and demonstration.

- Identify major strategic risks to achieving the identified test capabilities and lay out the activities necessary to mitigate the risks.

- Identify challenges, such as complexity and the need for testing that cannot be accomplished manually in sufficient volume, that must be overcome to effectively assess SoS and systems and to contribute to their improvement. Update plans to meet these challenges.

Identify relevant test environment requirements. Once the test objectives are set, identify and evaluate specific test-support capabilities with respect to how they contribute to satisfying the test objectives. At this stage, a test environment description is constructed, which is tailored to the test objectives; relevant capabilities of the system under test are identified, and testable metrics are developed for those capabilities.

Reuse/build scenarios and mission threads to exercise given system under test requirements. The list of requirements for the system under test is linked to the underlying operational concepts and capabilities. With this list in hand, it is vital to develop specific mission threads that exercise these capabilities in a way that is relevant to the test objectives and anticipated operational environment.

Figure 3. System-specific and individual event planning cycle

Identify atomic functional units, decompose such functions into atomic behaviors, and implement test behaviors. The preceding three activities set the stage for technical development of the test environment. The technical development phase includes (a) identifying the atomic functional units of the system under test that comprise the identified capabilities, (b) decomposing these functional units into atomic testable behaviors, and (c) combining these test behaviors into test models that can be compared with, and operated against, the system under test in the test environment. At this point, specific system under test components and/or subsystems are identified as being relevant to specific system capabilities in the context of identified mission threads, and the test machinery needed to stimulate and observe these components is ready to be put into place.
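As an illustration of this decomposition, a test model can be viewed as a set of atomic behaviors, each pairing a stimulus with a check on the observed response. The capability, behaviors, and pass/fail checks in the Python sketch below are hypothetical examples, not a prescribed JITC format.

# Illustrative sketch only: a capability decomposed into atomic, testable behaviors
# (stimulus in, expected response out) combined into a test model.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class AtomicBehavior:
    name: str
    stimulus: dict
    check: Callable[[dict], bool]   # predicate applied to the observed response


@dataclass
class TestModel:
    capability: str
    behaviors: List[AtomicBehavior]

    def evaluate(self, system_under_test: Callable[[dict], dict]) -> Dict[str, bool]:
        # Drive the system under test with each stimulus and score the response.
        return {b.name: b.check(system_under_test(b.stimulus)) for b in self.behaviors}


track_sharing = TestModel(
    capability="share track data",
    behaviors=[
        AtomicBehavior("acknowledge track", {"msg": "new_track"},
                       lambda resp: resp.get("ack") is True),
        AtomicBehavior("forward track", {"msg": "new_track"},
                       lambda resp: "forwarded_to" in resp),
    ],
)

fake_sut = lambda stim: {"ack": True, "forwarded_to": ["node_b"]}   # stand-in for the real system
print(track_sharing.evaluate(fake_sut))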

Build and/or reuse test bed software and hardware for executing test models; design and execute test events. Test events are planned to apply specific test bed items to the system under test. The test plan includes a test environment configuration for the test events, identifies the source of test data (e.g., live data, recorded system traces, simulations), and sets specific pass/fail criteria for the event. Acquire, build, and/or improve infrastructure development tools, such as tools for formalizing and designing test models.

This cycle of test activities defines an iterative process that allows for the evolution of each test phase as the system under test moves through its life cycle (Figure 3). Throughout the cycle of test activities, there must be an emphasis on the reuse of proven, reliable, and efficient infrastructure elements and artifacts that were acquired as a result of earlier test projects. Efforts first capitalize on reusing existing software and hardware for executing test models. Of course, the requirements of each new project may exceed the capabilities of the current infrastructure and artifacts, in which case we seize opportunities to enhance the infrastructure. Thus, each specific system under test feeds back lessons learned and contributes to long-term capabilities and knowledge. This feedback loop is illustrated in Figure 2.

Proposed layered architecture

To support the acquisition of net-centric testing capability over the time horizons just discussed, we offer a layered architecture that provides a framework for such capability acquisition. We propose that the test organization develop an overall architecture for net-centric instrumentation as illustrated in Figure 4. The architecture is based on that presented in Sarjoughian, Zeigler, and Hall 2001 and refers to background literature on modeling and simulation (Zeigler, Fulton, Hammonds, and Nutaro 2005; Zeigler, Kim, and Praehofer 2000; Zeigler and Hammonds 2007; Traore and Muxy 2004); systems of systems (Sage 2007; Wymore 1992; Wymore 1967; Morganwalp and Sage 2004); model-driven software development (DiMario 2007; DiMario 2006; Object Modeling Group 2007; Jacobs 2004; Wagenhals, Haider, and Levis 2002; Wegmann 2002); and integrated simulation-based development and testing (Mak, Mittal, and Hwang [in press]; Mittal 2006; Mittal, Mak, and Nutaro 2006; Mittal 2007; Mittal, Sahin, and Jamshidi [in press]).

Network layer

The network layer contains the actual computers (including workstations and high-performance systems) and the connecting networks (both local area network and wide area network, their hardware and software).

Execution layer

The execution layer is the software that executes the models in simulation time and/or real time to generate their behavior. Included in this layer are the protocols that provide the basis for distributed simulation (such as those standardized in the High Level Architecture). Also included are database management systems and software for controlling simulation executions and for displaying test results and animated visuals of the behaviors generated.

Modeling layer

The modeling layer supports the development of simulation models and other digital representations for net-centric testing in formalisms that are independent of execution layer implementations. At this layer, the test organization would compose services and applications. Also in this layer is support for the quality control of model acquisition, especially the key processes of verification and validation of models, simulators, and test tools.

Figure 4. Architecture for net-centric test instrumentation

Experimental frame layer

The experimental frame layer employs the artifacts and services of the modeling layer to develop test components, such as generators, acceptors, and transducers and their compositions, to provide test instrumentation services. Included are the observers and agents that run in the execution layer and that interface with the systems and services under test to connect them to the experimental frame components. Also included are means to capture relevant measures of performance and effectiveness and to instrument them as experimental frame compositions employing modeling layer and execution layer services. These measures are critical to the testing of NR-KPPs that the test organization must be able to accomplish.
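A minimal Python sketch of the three experimental frame roles named above (generator, transducer, acceptor), wired around a stand-in system under test, is shown below; it is illustrative only, with hypothetical thresholds, and does not represent any particular JITC instrumentation service.

# Illustrative sketch only: generator produces stimuli, transducer reduces observations
# to a measure, acceptor decides whether the run meets the frame's conditions.
def generator(n):
    """Produce the stimulus stream sent to the system under test."""
    for i in range(n):
        yield {"msg_id": i, "sent_at": float(i)}

def transducer(observations):
    """Reduce raw observations to a measure of performance (here, mean latency)."""
    latencies = [o["received_at"] - o["sent_at"] for o in observations]
    return sum(latencies) / len(latencies)

def acceptor(measure, threshold=0.5):
    """Decide whether the measured performance stays within the frame's conditions of interest."""
    return measure <= threshold

def system_under_test(stimulus):
    # Stand-in for the real system or service being exercised.
    return {**stimulus, "received_at": stimulus["sent_at"] + 0.2}

observations = [system_under_test(s) for s in generator(10)]
mean_latency = transducer(observations)
print("mean latency:", mean_latency, "pass:", acceptor(mean_latency))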

Design and test development layer

The design and test development layer supports the ingestion and analysis of model-based system specification documents, such as those in the DoD Architecture Framework, where the design is based on specifying desired behaviors through models and implementing these behaviors through interconnection of system components. In the modeling layer, results of this analysis of system behavior requirements will be used with automated generation of test models, which when deployed in the execution layer as automated test cases will interact with systems and services under test. The design and test development layer also includes maintenance and configuration support for large families of alternative test architectures, whether in the form of spaces set up by parameters or more powerful means of specifying alternative model structures such as provided by the System Entity Structure (SES) methodology. Artificial intelligence and simulated natural intelligence (evolutionary programming) may be brought in to help deal with combinatorial explosions occasioned by analysis for test development.

Collaboration and customer interaction layer

The collaboration and customer interaction layer enables people and/or intelligent agents to manage and control the infrastructure capabilities supplied by underlying layers. This includes interactions with the customer in which test results are conveyed and explained if needed.

Note that these layers describe functionalities that can be partially supplied by proven and reliable legacy tools in the test organization's inventory from earlier developments. However, the primary objective of such an architecture is to facilitate carrying out the multi-horizon planning approach discussed earlier. As customer projects arrive, their testing requirements can be referenced to the elements within the layered architecture, and the detailed test assets at the various levels are called out. Missing assets can be the cues to start an acquisition process to fill the gap. Figure 6 illustrates the application of the layered architecture to sensor simulation infrastructure acquisition.
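A minimal sketch of that gap-check, using hypothetical asset names keyed to the layer names above, might look like the following Python fragment.

# Illustrative sketch only: compare an incoming project's test requirements against the
# assets already on hand at each architectural layer; anything missing becomes an
# acquisition cue. Layer names follow the article; the asset entries are hypothetical.
existing_assets = {
    "network": {"wan_link", "test_lan"},
    "execution": {"hla_rti", "results_display"},
    "modeling": {"message_models"},
    "experimental frame": {"latency_transducer"},
    "design and test development": set(),
    "collaboration and customer interaction": {"results_portal"},
}

project_requirements = {
    "execution": {"hla_rti"},
    "experimental frame": {"latency_transducer", "load_generator"},
    "design and test development": {"dodaf_ingest"},
}

acquisition_cues = {
    layer: needed - existing_assets.get(layer, set())
    for layer, needed in project_requirements.items()
    if needed - existing_assets.get(layer, set())
}
print(acquisition_cues)   # missing assets, e.g. a load generator and a DoDAF ingest tool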

Artifacts, such as models and test and evaluation results, are products of processes (systems) that must not only have hardware and software support but must also be carried out by competent people using competent methods in an environment that fosters each process. Indeed, to be effective, there must be collaboration among layers and continuity of people, methods, software and hardware, good input and materials, and a supportive environment (e.g., from management and external networks). This collaboration is illustrated in Figure 5, employing the basic categories of People, Policy and Methods, Hardware and Software, Input Data and Materials, and Environment; these express the areas DoD often refers to as DOTMLPF (doctrine, organization, training, materiel, leadership, personnel, facilities). To better communicate the main collaboration path, connections for exception handling and additional feedback have not been included in Figure 5. We recognize that a real-world portrayal of the collaboration would include numerous iterations, feedback, and exception handling.

Table 1 suggests how some of the identified layers can be further elaborated in terms of representative needs that must be met in the basic categories that are most pertinent to each layer.

We note that the table makes clear that, besides the acquisition and application of test infrastructure elements, the Joint Interoperability Test Command (JITC) must plan for acquiring the right personnel and instituting the right organization. Specifically, JITC must develop a culture that will facilitate the interactions among personnel that are critical for the enterprise to be effective.

Mapping shortfalls to architectural layers

The proposed layered architecture will provide a framework for focusing the planning and acquisition of the test infrastructure capability. With the Xs in the cells of Table 2 we offer a mapping to the shortfall areas that we think are best addressed in each layer.

Figure 5. The layered architecture viewed from the DOTMLPF perspective


The test organization should employ this architecture as the basis for its net-centric instrumentation plan.

Strategies for net-centric instrumentation planning

With the layered architecture as a basis, the test organization can develop specific strategies that take into account long-, medium-, and short-term considerations for orderly acquisition of effective and reusable infrastructure. One alternative is to continue to rely on legacy tools while employing the architecture to plan for new tool acquisitions as the opportunities present themselves. Another alternative is to invest immediately in high-priority tool developments that are compliant with such architecture and that implement nonexistent capabilities such as planning or automated testing and may not replace legacy tools in the near term.

Table 1. Illustrating the layered architecture in relation to doctrine, organization, training, materiel, leadership, personnel, and facilities (DOTMLPF)

Experimental Frame Layer
- People, Policy, and Methods: Experimental frame developers (1) are qualified, (2) have methodologies that are appropriate and effective, (3) have shared awareness of development plans, design decisions, and progress, and (4) have good access to model developers and to test development personnel who are prepared to clarify requirements and standards governing the systems under test.
- Hardware and Software: (1) Access to relevant models and software to gather required measures (MOEs, MOPs), generate required stimuli and loads, and control. (2) Model development tools and software integrated design environments are adequate. (3) Access to JITC network and to test workstations.
- Input Data and Materials: (1) V&Ved experimental frame artifacts and test components from the Modeling Layer. (2) V&Ved data for DT, &V, V&T. (3) Good requirements and/or standards. (4) V&Ved means to capture relevant measures.
- Environment: (1) Development, testing, and V&V are managed to plan. (2) Proper SW CM environment and practice.

Design and Test Development Layer
- People, Policy, and Methods: Design and test developers (1) are qualified, (2) have methodologies that are appropriate and effective, (3) have shared awareness with the JITC team, and (4) have good access to personnel who are prepared to clarify requirements and standards governing the systems under test.
- Hardware and Software: Adequate tools to capture and characterize systems under test behaviors and interfaces.
- Input Data and Materials: (1) Adequate system specification documents and DoDAF documents. (2) Behavior requirements and/or standards are sufficiently well-specified. This applies particularly to GIG/SOA-based developments (e.g., NCES, NECC).
- Environment: (1) Unplanned requirement additions are avoided. (2) Proper CM environment and practice.

Table 2. Illustrating the mapping of shortfalls in architectural layers

Layers: Network | Execution | Modeling | Experimental frame | Design and test development | Collaboration and customer interaction

Composability: X X X
Reusability and persistence: X X X X
Extensibility: X X X
Instrumented trustworthy measurement: X
Visibility and controllability: X X X
Real-time interactivity: X X
Agility: X X
Automation: X X X X X X
Scalability and applicability to full life cycle: X X X X
GIG/SOA integrated robust computer and communication infrastructure: X X X X X X

GIG/SOA, Global Information Grid/Service Oriented Architecture.



Illustrative application to sensor simulation infrastructure acquisition

Figure 6 sketches how the planning cycle of Figure 2 might apply to the acquisition of sensor simulation for net-centric testing. The perspectives offered by multi-horizon planning and the layered test infrastructure architecture are intended to facilitate developing and evaluating acquisition strategies. By themselves, they do not decide the choices to make.

Summary and recommendations

A test organization needs an instrumentation development and maintenance system that can be considered an open subsystem of an open system, the test organization test, evaluation, and certification system, which produces results as shown on the left side of Figure 7. Shown on the left are the resources and funds leaving the system, and on the right are the funds and resources coming in. In addition, entering at the right is a seemingly high volume of a broad variety of not always clear or fixed system-under-design requirements, protocols, waveforms, standards, and mandated architectural styles (e.g., the net-centric reference model and SOA). As shown at the bottom right, the test organization must encourage scientific research and technology development projects of the government, academia, and industry to develop methods and technologies needed to fill test capability gaps.

Figure 6. Illustrating event planning cycle for sensor simulation acquisition

The specific inclusion of infrastructure development as an integral part of the top-down approach fosters significant reuse of test resources and cumulative knowledge management of the products of testing. We recommend that, in addition to basic test development, each iteration of the individual test event/venue planning cycle should also target a small, well-defined, and incremental enhancement of the test environment functionality that we implement as components of the overall test infrastructure. Iterations should refine and/or enhance test objectives and develop and/or modify the test bed technology as needed, and test events should realize these test objectives using the available test bed capabilities. In addition to supporting the planned test objectives, each iteration should to the extent possible include a test event that specifically demonstrates the new test environment functionality.

Testing in this paradigm is objective driven rather than event driven (i.e., test events must be traceable back to established test objectives). In most cases, major shortfalls of test technology should be identified early, either during the refinement/expansion of test objectives or in the early phases of test event planning. Interim technology solutions to reduce shortfalls that are identified late in test event planning, or even later during test event execution, should be considered tentative pending review in the next iteration of the test bed development. These interim solutions should be the exception and not the rule.

We recognize that infrastructure development requires competent people using competent methods in an environment that fosters the development of each process and artifact. In this regard, we recommend including in the test organization team a test-infrastructure development component that supports testing for each customer project and its test events. The responsibilities of this infrastructure team would be to

- Identify existing, reusable testing tools and requirements that are common across test activities, for use and for potential adaptation or conversion to a reusable component.
- Build and maintain reusable technical components of a common test infrastructure.
- Promote test asset reuse where appropriate.
- Advise test event planning and execution when the events rely on pieces of the common test infrastructure.
- Retain and disseminate lessons learned from a test event.

Figure 7. Instrumentation development and maintenance subsystem of the test organization test and evaluation and certification system

In addition to the net-centric test infrastructure components involved in specific customer projects, the test organization should stand up a global test infrastructure development team to operate within the larger framework of its enterprise-level plans for coordinating instrumentation, automation, and architecture support across all the test organization portfolios. This team would

- Coordinate efforts for customer-specific developments with the test organization's enterprise-level net-centric test infrastructure development and identify overlapping concerns and/or testing tools. Customer-specific testing requirements can be referenced to the elements within the layered architecture, calling out detailed test assets at the various levels. Missing assets can be the cues to start acquisitions.

- Provide proactive technical solutions to identified customer-specific test requirements. These solutions will be incorporated into test events that will be planned in detail later in the test and evaluation process.

- Seek out and recommend best practices and cultural innovations that will facilitate effective coordination of the personnel working at the various architectural layers as customer projects arrive.

- Participate actively in teams responsible for test planning and developing test tools for specific events. Successful reuse requires positive involvement at all levels of the organization. Consequently, persons responsible for long-term infrastructure development must be constructively and actively engaged with the elements of the organization that they support.

STEVEN BRIDGES is the chief engineer for the Joint Interoperability Test Command (JITC). In this capacity, he is responsible for the oversight of technical aspects of all JITC test programs, development of new test methods for the net-centric environment, and modernization of both test infrastructure and instrumentation. He serves as adviser to the JITC Commander on all technical issues and has worked on government technical acquisition projects for over 35 years. He received his bachelor of science in electrical engineering from Texas A&M, Kingsville in 1972 and was a recipient of the Defense Information Systems Agency (DISA) Director's Award for Achievement in a Scientific/Engineering Field. E-mail: [email protected]

BERNARD P. ZEIGLER is professor of electrical and computer engineering at the University of Arizona, Tucson, and director of the Arizona Center for Integrative Modeling and Simulation. He is internationally known for his 1976 foundational text Theory of Modeling and Simulation, recently revised for a second edition (Zeigler 2000). He has published numerous books and research publications on the discrete event system specification (DEVS) formalism. In 1995, he was named fellow of the IEEE in recognition of his contributions to the theory of discrete event simulation. In 2000 he received the McLeod Founder's Award from the Society for Computer Simulation, its highest recognition, for his contributions to discrete event simulation. He received the JITC Golden Eagle Award for research and development of the Automated Test Case Generator, 2005, and the Award for Best M&S Development in the Cross-functional Area, 2004/2005, by the National Training Simulation Association, May 2, 2006. E-mail: [email protected]

JAMES NUTARO is with the Modeling and Simulation group at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. He obtained his Ph.D. at the University of Arizona with a dissertation entitled Parallel Discrete Event Simulation with Applications to Continuous Systems, and he has published papers describing his research in that area. He was a post-doctoral researcher at the Arizona Center for Integrative Modeling and Simulation. He has several years of employment experience developing and applying distributed simulation software in a defense systems context. E-mail: [email protected]

DANE HALL is a certified systems engineering professional at JITC, Fort Huachuca, Arizona, who has 18 years of experience as a system engineer working with acquisition programs of the DoD and other agencies. For 20 years he was an aeronautical engineering duty (maintenance) officer in the U.S. Navy. He has a master of science degree in systems management from the University of Southern California, Los Angeles, and a bachelor of science in aeronautical and astronautical engineering from Purdue University, Lafayette, Indiana. E-mail: [email protected]

TOM CALLAWAY is president and principal engineer of Callaway Engineering Services Incorporated. He has worked as a program manager and communications test engineer for over 35 years on commercial and government technical acquisition projects. For over 20 years he was an Army signal corps officer. He has a bachelor of science degree in engineering from the U.S. Military Academy and a master of science degree in electrical engineering from the University of Arizona. E-mail: [email protected]

DALE FULTON currently works at JITC, Fort Huachuca, Arizona, and has worked as an instrumentation designer and tester for 28 years in the DoD, the Department of Energy, and commercial communication and data acquisition systems. His experience in test automation includes real-time stimulation and simulation for technical and operational test environments, and data acquisition and time-referencing in laboratory and field systems. He has a bachelor of science degree from the University of Arizona, Tucson. E-mail: [email protected]

References

Buchheister, J. "Net-centric Test & Evaluation." Command and Control Research and Technology Symposium: The Power of Information Age Concepts and Technologies, 2004. Available at: http://www.dodccrp.org/events/2004/CCRTS_San_Diego/CD/papers/208.pdf. Accessed October 2005.

Carstairs, D. 2005. "Wanted: A New Test Approach for Military Net-Centric Operations." The ITEA Journal of Test and Evaluation. Volume 26, No. 3.


DiMario, M. 2006. "System of Systems Interoperability Types and Characteristics in Joint Command and Control." Proceedings of the 2006 IEEE/SMC International Conference on System of Systems Engineering. Los Angeles, California. April 2006.

DiMario, M. 2007. "SoSE Discussion Panel Introduction, From Systems Engineering to System of Systems Engineering." 2007 IEEE International Conference on System of Systems Engineering (SoSE). April 16–18, 2007, San Antonio, Texas.

Jacobs, R. 2004. "Model-Driven Development of Command and Control Capabilities for Joint and Coalition Warfare." Command and Control Research and Technology Symposium. June 2004. Available at http://www.dodccrp.org/events/9th/ICCRTS/CD/Papers/169.pdf. Accessed January 2008.

Mak, E., Mittal, S. and Hwang, M. 2008. "Automating Link-16 Testing Using DEVS and XML." Journal of Defense Modeling and Simulation. (Draft).

Mittal, S. 2006. "Extending DoDAF to Allow DEVS-Based Modeling and Simulation, Special Issue on DoDAF." Journal of Defense Modeling and Simulation. 3(2).

Mittal, S. 2007. "DEVS Unified Process for Integrated Development and Testing of Service Oriented Architectures." University of Arizona Ph.D. dissertation.

Mittal, S., Mak, E. and Nutaro, J. 2006. "DEVS-Based Dynamic Modeling & Simulation Reconfiguration Using Enhanced DoDAF Design Process." Special Issue on DoDAF. Journal of Defense Modeling and Simulation. III(4).

Mittal, S., Zeigler, B., Martin, J., Sahin, F. and Jamshidi, M. 2008. "Modeling and Simulation for Systems of Systems Engineering." Systems of Systems Innovations for the 21st Century. Edited by M. Jamshidi. John Wiley & Sons: New York. (Draft).

Morganwalp, J. and Sage, A. 2004. "Enterprise Architecture Measures of Effectiveness." International Journal of Technology, Policy and Management. 4(1), pp. 81–94.

Object Modeling Group (OMG). 2007. http://www.omg.org.

Sage, A. 2007. "A System of Engineering an Integrated System Family." From Systems Engineering to System of Systems Engineering. 2007 IEEE International Conference on System of Systems Engineering (SoSE). April 16–18, 2007, San Antonio, Texas.

Sarjoughian, H., Zeigler, B. and Hall, S. 2001. "A Layered Modeling and Simulation Architecture for Agent-Based System Development." Proceedings of the IEEE 89 (2), pp. 201–213.

Traore, M. and Muxy, A. 2004. "Capturing the Dual Relationship Between Simulation Models and Their Context." SIMPRA (Simulation Practice and Theory). Elsevier: London.

Wagenhals, L., Haider, S. and Levis, A. 2002. "Synthesizing Executable Models of Object Oriented Architectures." Workshop on Formal Methods Applied to Defense Systems. Adelaide, Australia, June 2002.

Wegmann, A. 2002. "Strengthening MDA by Drawing from the Living Systems Theory." Workshop in Software Model Engineering.

Wymore, W. 1967. A Mathematical Theory of Systems Engineering: The Elements. John Wiley & Sons: New York.

Wymore, W., Chapman, W. and Bahill, A. T. 1992. Engineering Modeling and Design. CRC Press: Boca Raton, Florida.

Zeigler, B., Fulton, D., Hammonds, P. and Nutaro, J. 2005. "Framework for M&S-Based System Development and Testing in a Net-Centric Environment." ITEA Journal of Test and Evaluation. 26(3), pp. 21–34.

Zeigler, B. and Hammonds, P. 2007. Modeling & Simulation-Based Data Engineering: Introducing Pragmatics into Ontologies for Net-Centric Information Exchange. Academic Press: New York.

Zeigler, B., Kim, T. and Praehofer, H. 2000. Theory of Modeling and Simulation. Academic Press: New York.


Towards Better Control of Information Assurance Assessments in Exercise Settings

David J. Aland

Wyle, Arlington, Virginia

By the adoption of certain limited techniques, the assessment of Information Assurance in both acquisition and fielded systems can achieve a higher level of rigor than available using current methods. These techniques do not replace traditional Blue/Red team activities but are used to augment them and provide a means by which replicable data may be recorded and analyzed without raising the level of risk to the exercise planner.

Key words: Exercise planning; network assessment; network penetration & exploitation; network protection; network vulnerability; risk; training.

The testing of Information Assurance (IA) in Department of Defense (DoD) information systems is addressed at numerous points throughout the life cycle of these systems, for the most part in the development and acquisition process. In 2002, a Congressional mandate added a requirement for post-fielding assessments of DoD networks. These assessments were to be accomplished during major exercises, a shared environment often familiar to the operational testing community. But this additional venue also created a challenge for both assessment and exercise planners: how best to integrate network evaluations into highly complex training events that depend upon the network that is also being evaluated. This would necessitate the integration of both the training events and assessment events, and a deeper level of synchronization between the two.

There are three key goals to such a process: (a) make the best possible use of the existing IA assessment capabilities; (b) provide meaningful and nondisruptive training in a warfare area (Information Operations) that had previously received little attention; and (c) structure events to gather meaningful observations and data regarding the effectiveness of IA systems, practices, and policies. In order to accomplish this, it is necessary to design exercise events that emphasize the various aspects of IA in a manner that adds value to the training exercises and is consistent with the skills and expertise of the teams from the agencies that normally conduct DoD network assessments. This also requires IA teams to adhere in some degree to scripted events and timelines. In addition, it requires exercise planners to place greater emphasis on IA events, an area which is only now growing in prominence in most exercise scenarios. For the IA teams, this means greater constraints, and for the exercise planners, greater risk. For the operational evaluator, this could only mean many more variables in the shared testing environment.

ITEA Journal 2008; 29: 63–66

Copyright © 2008 by the International Test and Evaluation Association

Assessment process

Inherent to the DoD IA assessment process is the use of traditional DoD IA teams: Blue teams (technical and nontechnical vulnerability audits) and Red teams (technical adversarial penetration and exploitation tests). The missions of these teams differ, despite the common focus. The Blue team assessment most frequently consists of a collaborative review of technical and administrative support to a system or network, often including the use of scanning tools, password crackers, and low-intensity penetration tests. The goal of a Blue team assessment is to identify and document vulnerabilities caused by configuration, process, or management shortfalls. Conversely, a Red team assessment is usually a limited-duration "attack": a network-based adversary, operating within some preset limitations, attempts to find and exploit at least one area of vulnerability to gain internal access to a network or system. In many cases, such an attack will be accompanied by modest exploitation of that access, usually in the form of data exfiltration or modification, in order to demonstrate the operational impact of the vulnerability exploited. For these reasons, as well as others (including technical limitations, operational considerations, and resources), the Blue team activities could be described as being "a mile wide, but an inch deep," whereas the Red team activities would be "a mile deep, but an inch wide." The differing focus of each team provides very different products.

Information Assurance is normally described as consisting of four fundamental tasks or principles: protect, detect, react, and restore. Due to the fundamental character of the established DoD IA controls (DoD Instruction 8500.2), the focus of most DoD IA assessments (and pre-acquisition testing) is on network protection, with limited insight or investigation into network detection, reaction, and restoration capabilities. Most Blue team events similarly focus on protection, with some view to detection. Red team assessments also focus on protection (through penetration and exploitation events) but can allow greater assessment of the other three tasks, if structured to do so. However, because of the limitations most often imposed on Red team events (whether technical, operational, or resource), the detect, react, and restore functions are not often examined in any depth, nor in a reproducible fashion. The "traditional" modus operandi of most Red teams is to find and exploit a single vulnerability, making comparison of one event to another relatively difficult, with only a few common characteristics. Employment of wider testing can significantly expand the cost, in both time and resources, of any given Red team event, making such an expansion typically impractical. Furthermore, such an expansion may be contrary to the interests of the exercise planner, as it may increase risk to other training objectives.

Overcoming obstacles

The agencies that sponsor Blue and Red teams are experiencing a growing demand for their services, as the number of critical mission functions migrating into automated information systems grows. Working within limited budgets, and facing a long lead-time for the development, training, and employment of skilled operators, the Blue and Red teams cannot practically expand the scope of their assessments without having to reduce the quantity of assessments they can perform. Given the limits in funding and manpower, one possible solution would be to establish means by which these assessments can provide greater depth of assessment without requiring additional time, personnel, or other resources.

For the ‘‘customer’’—that is, the unit being assessedor sponsoring the assessment—a very robust IAassessment can potentially derail other testing ortraining objectives, and for that reason, most Blue andRed teams must operate within a series of constraints orwritten ground rules established in advance of the event.These ground rules serve to protect critical trainingevents from disruption and yet create de facto limits on

the scope and quality of the IA assessment. Most unitcommanders would be reluctant to expand the scope ofIA events without some form of assurance that criticalfunctions or events would not be impaired.

For both reasons, the assessment planner is faced with limitations that all too often render the assessment findings for any one event essentially unique—a product of the variable selection of limitations imposed by both the assessment agency and the one being assessed. In order to widen the available data for analysis, trending, and long-term issue identification, the evaluator working in this shared environment requires a better form of controlled metrics and conditions but often has the least influence over the environment itself. From an operational test and evaluation standpoint, this is a considerable obstacle: conducting an assessment in an environment that is not controlled by the assessor, using resources that are, to a greater or lesser extent, also not controlled by the assessor.

A better way

The needs of all three stakeholders—the Blue/Red teams, the assessed unit, and the operational assessor—can be met by the application of a common solution: establishment of a set of core events that are more closely controlled but do not raise the cost of conducting an assessment, and that do not increase risk but do improve the consistency of the data gathered.

In order to do so, these events must: (a) leverage tasks already being performed (or that can be performed) by the Blue and Red teams; (b) maintain or decrease the level of risk currently available through existing limitations; and (c) be sufficiently consistent that they can be performed repeatedly, and in the same manner, during a variety of assessments of systems, networks, and locations. This may require all three parties to make adjustments to their current processes, but these adjustments are relatively small, particularly in view of the gains to be realized.

The implementation of more controlled test events must make use of the highly developed skills of Blue and Red teams in achieving system or network penetration, and exploiting those penetrations; demonstrate the operational/training risk such penetrations and exploitations produce without actually incurring any significant risk; and provide a consistent set of tests that can be repeated and compared in subsequent evaluations and assessments. The main attribute in achieving all three goals is control.

Such control can be achieved in a number of ways: (a) by establishing alternative, but equally fixed, boundaries for test events; (b) by conducting tests against non-operational entities; (c) by applying precise amounts of force/stimulation during tests; (d) by segregating tests into discrete events or phases; or (e) by limited automation.

Examples of the kinds of controlled test events that might meet these conditions include:

• Mission-focused assessments (alternative limits). Assessment plans are designed around one or more specific mission areas and are limited to impacting those missions and the network components supporting the missions designated. For the purposes of an IA assessment, risk would be limited largely to the system or systems targeted, and the assessment focuses on determining the impact to the designated mission supported by the targeted systems. This method would also allow extrapolation from prior acquisition testing into the broader testing of systems in their intended operational environments while limiting "spillover" effects into other systems or portions of the network.

• Repetitive vector assessments (alternative limits/precise force/segregation). Assessment team activities are organized as a series of repeated events, with each event specifically focused on testing a discrete segment of a system/network, or functional attribute. Such events can be conducted as multiple attacks along a limited set of identified attack vectors (authentication, known vulnerabilities, etc.) to statistically determine the rate of success and/or failure, as well as root causes. They can also be conducted as a series of events constructed to be increasingly detectable over time to statistically determine thresholds of sensitivity.

• Automated test events (alternative limits/automation). These events would be a controlled series of indicators (which may not necessarily require the services of either a Blue or Red team) that replicate the symptoms of abnormal network activity, internal traffic loading, or data exfiltration. These would be used to evaluate network team responses and detection capability. Such automated events would be useful in accomplishing repetitive vector assessments as well as proxy target events.

• Proxy target events (alternative limits/non-operational). Assessment teams focus on locating and exfiltrating target files specifically placed at critical network locations as a means of determining depth of penetration, potential mission impact (without actually disrupting operations), attack pathways, and effectiveness of specific defense and detection devices ("Capture the Flag"). Alternatively, essentially harmless target files (or limited purpose macros constructed to replicate unauthorized activities) can be planted at critical network locations as a means of determining the ability of the network management and defense systems/personnel in detecting and reacting to these activities ("Scavenger Hunt").

• Adversary Level-of-Effort Metrics (alternative limits/precise force). If the level of effort expended by a Red team is one de facto measurement of the level of network protection, detection, and reaction (just as the level of force applied in kinetic testing is a de facto measurement of material strength), then the need to more precisely measure and express the level of effort brought to bear against the network or system is essential to scoping an assessment and analyzing the results. These metrics would include observation of success/failure along selected Red team attack vectors, time expended, manpower/tool levels, and possibly time-sensitivity factors (i.e., Was a successful attack achieved within a critical time-span?).

• Test Range events (non-operational/segregation). While the best method for observing risk to operational networks is to conduct tests on the operating network, one method for reducing actual risk to those networks is to conduct discrete or high-intensity tests on a simulacrum—a similarly configured test network that does not convey risk to actual network components or systems. While this type of test is more akin to laboratory testing than to live system testing, the use of a test network (and, potentially, simulations or models) allows the assessment of specific issues that would otherwise induce unacceptable degrees of risk to operating and operational systems and networks.

• Casualty testing (non-operational/precise force/segregation). One of the most critical IA precepts is the ability to reconfigure or restore a system following a casualty, system attack, or other debilitating event. The very nature of such events causes most network owners to shun such testing. The risk incurred in "bringing down" any portion of the network, however, can be ameliorated by inducing the casualties in a very limited scope (specific systems, specific durations, specific network segments) and observing the subsequent actions.

Conclusion

Implementation of some, or all, of these types of assessment/test events can meet the goals of all three stakeholders in the IA assessment process: (a) they are intended to provide a baseline for Blue and Red team activities, but only a baseline—they do not replace the existing skills and techniques employed by these teams, nor do they represent any significant expansion to their tasks; (b) they serve to increase the degree of control and decrease the risk present in conducting such assessments in operational environments, while preserving the most critical attributes of those environments in the scope of the assessment; and (c) they provide a standardized basis by which multiple assessments can be compared, either of the same system, or of same/similar networks and environments.

Each of the three major stakeholders must accept some change to the way they currently conduct these assessments. For the Blue and Red teams, it means incorporating a more scripted structure to the often more freely executed penetration and exploitation efforts, but it does not replace the element of "free-play" in the assessment. All of the tasks described above are within the current scope of skills and expertise for these teams and should not require additional personnel, time, or significant resources. For the exercise planner, it means incorporating more aggressive events into the exercise structure, but it also means a significant reduction in the risk represented by those events. For the operational evaluator, it means developing more specific assessment plans, but it also means a greater return in terms of observations and replicable data.

For each of the stakeholders, the greatest obstacle to implementing such an approach may be essentially cultural. It will require IA teams to think like exercise planners, assessment planners to think like IA teams, and exercise planners to think like operational testers. In the end, however, all three are likely to find that the final product of the assessment/exercise event is a better view of how well DoD networks are performing.

DAVID J. ALAND is an employee of Wyle, supporting the Office of the Secretary of Defense, Operational Test and Evaluation Directorate (OSD DOT&E) in the assessment of Information Assurance and Interoperability during major DoD exercises. He is a graduate of the U.S. Naval Academy and U.S. Naval War College, and a retired Naval officer with prior experience as Sixth Fleet Communications and Information Systems Officer (N6) and as deputy to the Navy Chief Information Officer. E-mail: [email protected]


Best Practices for Developmental Testing of Modern, Complex Munitions

Capt Joshua Stults

780th Test Squadron, Eglin AFB, Florida

The growing cost and schedule constraints on government weapons development programs as well as their rising complexity increase the need for a decision-theoretic framework for product development. This framework must rely on insight gained from a variety of sources for test planning, test evaluation, and decision support. The best practices presented in this article for system-level developmental test planning and execution are collected from reported experience and criticism of industry and government product development programs. These practices and methodologies are applied in a coherent framework that allows a formal combination of the disparate sources of product knowledge available to decision makers in the early stages of development.

Key words: Bayes Theorem; best practices; complexity; external validation; knowledge-based acquisition; weapons systems.

This article illustrates a formal decision support framework for program managers and testers that embodies the ideas of knowledge-based acquisition and incorporates best practices identified from historical product development programs in the government and commercial sectors. Emphasis is on system-level developmental test and evaluation (DT&E) in support of risk reduction for production decisions. The framework consists of four basic steps: identify relevant system performance factors, use prior knowledge to evaluate system level outcomes, incorporate validated knowledge into product improvements, and evaluate sufficiency of testing through external validation. The motivation for such a formal decision support framework is the growing complexity of modern weapon systems. While complexity is not easy to define or measure consistently, indicators of complexity are type and number of weapon sensors, multiple operational modes, multiple communications links, software for autonomous loitering or targeting, etc. These indicators have been shown to increase the cost of test and evaluation (T&E) despite the significant constraints currently being placed on weapons development funding (Fox et al. 2004).

The motivation for knowledge-based acquisition isto improve product development outcomes using‘‘quantifiable and demonstrable knowledge to make

go/no-go decisions’’ (GAO 2005). It is based onensuring that the proper product knowledge isvalidated at critical decision points (DoD 2003).Central to this acquisition approach is the progressionof the product through well-defined maturity levels,driven by validated product knowledge.

Three main product maturity levels have been identified through analysis of successful product development practices in industry. The product progresses through these levels based on specific events that demonstrate validated product knowledge rather than schedule-driven milestones (GAO 2000). Heuristics learned from commercial and government product development programs can guide the planning of a knowledge validation (testing) program to successfully progress through the product maturity levels. Ideas such as "break it big early" are examples of these sorts of experience-based rules of thumb (GAO 2000).

In addition to informal rules of thumb, there are rigorous inference methods that can support knowledge validation and decision making even in the system development phase when sample sizes are too small for standard large sample size statistical methods to apply. For example, approaches based on Bayes theorem, which incorporate prior knowledge in evaluating new knowledge as it arrives, can ensure that product developers are making informed decisions even in the face of few samples. Sequential Design of Experiments is another method that allows for smaller expected numbers of test events to achieve a given statistical power by using some sort of stopping rule (Cohen and Rolph 1998).

The product maturity paradigm, experience-based heuristics, and formal inference and design of experiments methods can be tied together into a coherent decision support framework by a high-fidelity system performance model as suggested in (Cohen and Rolph 1998). System performance models provide a repository for the product knowledge gained as the system matures, so that successive testing can be planned based on validated knowledge. They can support a constructive approach to testing that leverages knowledge discovery from the early phases of product maturity for more efficient system level DT&E. Likewise, as has been previously suggested, the knowledge gained from DT&E to develop and validate the system performance model should be used for efficient operational test and evaluation (OT&E) planning (Cohen and Rolph 1998).

A recurring criticism of Department of Defense product development is that programs proceed without the right kind of knowledge gained from test efforts. When this happens, cost, schedule, and performance problems often result (GAO 2003). As has been observed, "It is possible to conduct a test or simulation that does not contribute worthwhile information" (GAO 2003). By focusing on knowledge validation and knowledge-driven product maturity rather than specific test schedules or events, we hope to avoid this waste of effort and ensure that all planned test events validate the right knowledge at the right level of product maturity.

Product maturity levels

Three levels of product maturity identified in (GAO 2000) are:

1. Technologies and subsystems work individually;
2. Components and subsystems work together as a system in a controlled setting;
3. Components and subsystems work together as a system in a realistic setting.

This article will focus on the second and third levels of product maturity which correspond to system-level DT&E. Oftentimes because the number of system-level tests during the DT&E phase of weapon development is not large enough for statistical significance in the classical frequentist sense, these tests are relegated to "demonstration" status. When incorporated into a Bayesian inference framework, these tests can support a meaningful estimate of parameters important to programmatic decisions from the first test event. In addition, the marginal value (reduction in risk) of additional testing can begin to be compared to the marginal cost of that testing. This comparison is critical to allowing for a decision-theoretic approach to answering the question of how much testing is enough (Cohen and Rolph 1998).

Knowledge validated by testing drives the progress of a product through the stages of development. Incorporating the knowledge gained from each phase of testing and development can guide the test plan to be more efficient than starting from assumed ignorance at each stage. Assuming ignorance is conservative as far as technical risk goes, but it drives larger and less efficient test plans than if prior knowledge is incorporated into the planning effort.

Historically based heuristics for test planning and product development

A very disciplined approach to maturing a product is required to avoid costly rework late in product development. The three critical factors that underlie this disciplined approach ensure that:

1. Validation is event based rather than schedule based;
2. The quality of the knowledge validated in each event is not sacrificed;
3. The knowledge validated in each event is used to improve the product (GAO 2000).

One of the most important heuristics identified from successful commercial product development efforts is known as "break it big early," or "move discovery to the left" (GAO 2000). This means that challenging validation events are planned early to expose areas of weakness in the new design.

Rigorous subsystem verification has been identified as one of the means to reduce the burden of discovery on the later system level test events. This is a way to ensure that the quality of knowledge gained from test events does not suffer due to immature test articles. Aggressive development schedules can often result in an undue burden of discovery on system-level flight testing. Experience in the Theater High Altitude Area Defense (THAAD) program illustrated that shortcomings in component and subsystem validation led to very expensive failures in the flight test program (GAO 2000). Sacrifices were made in the first two stages of product maturity to keep system level flight testing on schedule. The problems experienced by THAAD were not that tests failed or discoveries occurred, which is the very purpose of testing. In fact, it has been pointed out that "...bad things happen in test and that those bad things are valid results just as successes are" (DOT&E 2007). The object is to find those bad things early in component level and subsystem integration testing, so that the discoveries during more expensive full-up system level testing are small and affordably corrected.

Also in line with the "break it big early" philosophy is to test at factor levels that give the most variation in system performance. System response in most real systems is nonlinear, so the factor level matters. The most knowledge can be gained from a limited number of test events by testing at the most stressing factor levels.

In keeping with the third element of disciplined product development, information gained from initial test events must be incorporated into improving the product. Using knowledge to mature the product and getting the right knowledge to decision makers is the focus rather than sacrificing the quality of test events to maintain schedule goals. The DarkStar Unmanned Aerial Vehicle program experienced significant flight test failures and was eventually terminated due to problems that surfaced during initial flight testing which were not addressed and fixed before subsequent testing continued (GAO 2000). The point here is not that flight test failures cause program termination, but that sacrificing knowledge validation and product improvement based on validated system knowledge to maintain schedule is counterproductive.

If these heuristics are applied to the first two levels of product maturity, then the burden of discovery on system-level DT&E will be reduced (GAO 2005). This allows more operational realism to be incorporated into DT&E, thus improving the quality of knowledge gained from these test events.

The Stand-off Land Attack Missile – Expanded Response (SLAM-ER) system experienced failures during OT&E that were masked in earlier testing because of unrealistic DT&E test conditions and immature test articles (GAO 2000). This shows how the heuristics identified can complement each other: mature test articles support more operational realism in DT&E, which in turn supports "moving discovery to the left."

To summarize the above discussion, here is a collection of some of the experience-based rules of thumb:

• Break it big early, move discovery to the left
  - Rigorous subsystem verification and integration minimizes discovery burden on the final, most expensive system-level development effort;
  - Test difficult technology or design features early;
  - Test at factor levels that give the most variation in system performance: system response in most real systems is nonlinear, the level matters.

• Focus on getting necessary knowledge to decision makers rather than specific events, techniques, or schedules
  - Incorporate information from early test events to improve the product before proceeding to future test events;
  - Do not curtail early testing to stay on schedule;
  - Do not sacrifice test-item fidelity to stay on schedule: unrealistic system level test events lower the amount of useful information gained from those events.

Importance of system performance models

Incorporating knowledge gained from disciplined component and subsystem validation into a high-fidelity system performance model informs decision makers about development and production risk. This can also lead to more efficient test planning and analysis. The system performance model tracks the system through the product maturity levels. As product knowledge is validated in each level, that knowledge is incorporated into the model. The model provides a means for the heuristics identified earlier to be rigorously applied. It allows the test planner to answer questions like:

• Where can I expect the most variation?
• What level of product maturity is the modeled performance based on?
• What discoveries have been made, and has that knowledge been incorporated into the product (and its model)?

The test planner can make basic decisions about influential factors and their likely critical levels before design details of the actual test article are finalized. In other words, "one can design an effective test for a system without understanding precisely how a system behaves" (Cohen and Rolph 1998). This allows testing for the later levels of product maturity to be based on knowledge gained during the initial levels. Figure 1 illustrates the progression of model maturity. Initially, the insight for test planning comes from physics-based simulation and other analysis tools. As the product matures and component and integration testing data become available, these can be used for test planning and decision making. The fast-running engineering models are based on the more fundamental information in the detailed physical models. Component performance and integration testing data are incorporated as they become available.


Incorporating prior knowledge

Knowledge captured in the system performance model (based on component level testing and system design analysis) can be used to generate prior probabilities in performance metrics of interest. These prior probabilities, or degrees of belief, are useful for a Bayesian inference method.

The Bayesian approach has advantages over approaches which do not adjust their prior probabilities based on experience (Robbins 1964). It is desirable because it gives an optimal prediction: given the hypothesis prior probabilities, any other prediction will be correct less often (Russell and Norvig 1995). Bayes Theorem is shown in Equation 1.

$$P(H_j \mid E_i, I) = \frac{P(E_i \mid H_j, I)\,P(H_j \mid I)}{P(E_i \mid I)} \qquad (1)$$

where the posterior, or final, probability of the hypothesis H_j being true given the new data E_i and the background information I is updated by the likelihood P(E_i | H_j, I) and the prior, or initial, probability P(H_j | I). Beliefs about the system under test are updated by new information gained from each test event.
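
As an aside not found in the original article, Equation 1 translates directly into a few lines of Python over a discrete set of candidate hypotheses; the function name and the toy numbers below are hypothetical.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Apply Equation 1 across a discrete hypothesis set.

    prior      -- P(H_j | I) for each candidate hypothesis H_j
    likelihood -- P(E_i | H_j, I) of the new evidence under each hypothesis
    The evidence term P(E_i | I) is simply the normalizing sum.
    """
    unnormalized = likelihood * prior
    return unnormalized / unnormalized.sum()

# Toy usage: three candidate hypotheses, a uniform prior, one new observation
prior = np.array([1 / 3, 1 / 3, 1 / 3])
likelihood = np.array([0.05, 0.20, 0.10])  # P(E_i | H_j, I) from some model
posterior = bayes_update(prior, likelihood)
print(posterior)  # belief shifts toward the second hypothesis
```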

A common criticism of the Bayesian approach is that there is subjectivity in choosing the prior probabilities. This is true, but the benefit is that an explicit exposition of the assumptions underlying the test planning and analysis has been made, which is often not the case for other test planning approaches. In addition, the dependence of the result on the prior probability decreases as the sample size increases. In the large sample size limit, for certain model assumptions the Bayesian approach matches the more standard frequentist result (D'Agostini 2003).

High-level test planning for weapon development programs tends to focus on the number of end-to-end flight tests because this is a significant contribution to overall test program cost and schedule. Performing enough end-to-end testing to build confidence intervals based on large sample-size theories is cost and schedule prohibitive, so the end-to-end testing is many times relegated to a demonstration-only status. If the system level test events are merely demonstration, there is little rigorous or quantifiable connection between those small samples and knowledge gained to support decision criteria.

Since there is no quantifiable connection, the argument is often put forth that a sample of 1 is as good as 1 + m, where m is some number small enough that large sample theories still do not apply with sufficient power. This argument is fallacious because large sample theory is not meant to measure the difference in marginal information gained between two small samples. It does not follow that there is no difference in value to the decision maker because large sample theories cannot measure that difference.

A Bayesian approach incorporates assumptions and prior knowledge about the system under test in a formal way so that information gained beginning with the first test event improves the certainty of the knowledge about the system in a quantifiable manner. Some estimation of the marginal value of n and n + 1 samples can be evaluated even though n is far too small for frequentist statistical approaches to apply. There is no free lunch here. With very small n the inferences supported by a Bayesian approach will be quite sensitive to the priors; however, that sensitivity information can be provided to decision makers so that they understand what increasing n will mean in terms of reduced risk.

Figure 1. Modeling hierarchy

Hit-point distribution

This section presents an example of the Bayesian approach evaluating hit-point distributions for a munition with some type of smart terminal guidance based on a multimode seeker and target recognition algorithms. The seeker component level testing and closed-loop guidance and control simulation can provide a probability density for the hit-point in the plane normal to the weapon's attack vector. This information provides a prior probability for evaluating the hit-point from the very first end-to-end flight test. For smaller, smarter munitions this hit-point becomes increasingly important. Great variations in system effectiveness (i.e., killing the target) might be expected for small variations in hit-point.

Figure 2 illustrates using the Bayesian approach to estimate the variance in hit-point distribution. The model predicts a radial distribution of hit-points with a variance of two, while the actual performance is drawn from a distribution with variance of three. The variance in this example is our hypothesis, and the prior probabilities (see Equation 1) for the hypothesis could be generated from sensitivity and uncertainty analysis of the model. The actual form for the prior is not critical as long as there is some finite probability assigned to the true answer (Russell and Norvig 1995).

The lowest graph in Figure 2 shows the maximum probability estimate of the Bayes method and compares it to the standard frequentist result (for n > 20). Rather than integrate over the continuous hypothesis space (variance in this case), a discrete set of hypotheses is evaluated. This is why the Bayesian estimate in Figure 2 jumps discontinuously between levels. The method allows significant insight into the problem while the sample size is still small compared with more standard estimation methods.
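
A minimal sketch of the kind of calculation behind Figure 2 follows, under several illustrative assumptions that are not from the article: each flight test yields a hit-point whose two components are independent zero-mean normals with a common variance, the candidate variances form a small discrete grid, the prior is peaked at the model-predicted variance of two, and the data are drawn from a distribution with variance three.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

variances = np.array([1.0, 2.0, 3.0, 4.0])   # discrete hypotheses for hit-point variance
prior = np.array([0.15, 0.55, 0.20, 0.10])   # peaked at the model prediction (variance 2)
posterior = prior.copy()

true_sigma = np.sqrt(3.0)                    # "actual" performance: variance of 3

for test in range(1, 11):                    # ten simulated end-to-end flight tests
    x, y = rng.normal(0.0, true_sigma, size=2)   # miss components in the plane normal to the attack vector
    # Likelihood of this hit-point under each candidate variance
    like = norm.pdf(x, scale=np.sqrt(variances)) * norm.pdf(y, scale=np.sqrt(variances))
    posterior = like * posterior
    posterior /= posterior.sum()
    map_variance = variances[np.argmax(posterior)]
    print(f"test {test:2d}: maximum-probability variance = {map_variance}")
```

Because the hypothesis space is a discrete grid, the maximum-probability estimate jumps between levels as tests accumulate, which is the discontinuous behavior described for Figure 2.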

Model output for prior probabilities

Suppose the output of an uncertainty analysis for a simple fast-running model can be given by Equation 2,

$$y = b_0 + e_0 + (b_1 + e_1)x \qquad (2)$$

where b_0 = 1, b_1 = 3, and e_0, e_1 are normally distributed errors with zero mean and 0.25 standard deviation. The variation simulated here by e_0, e_1 can be generated by sensitivity and uncertainty analysis in a fast running engineering model. The prior distributions for the model parameters can be estimated by holding the other parameters constant at their expected value and treating each data point as a measurement of the parameter of interest.
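
A short sketch of that procedure is given below; the factor range for x and the number of model runs are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Monte Carlo uncertainty analysis of the fast-running model, Equation 2:
# y = b0 + e0 + (b1 + e1) * x, with b0 = 1, b1 = 3, and e0, e1 ~ N(0, 0.25)
x = rng.uniform(0.5, 2.0, n)        # assumed factor range for illustration
e0 = rng.normal(0.0, 0.25, n)
e1 = rng.normal(0.0, 0.25, n)
y = 1.0 + e0 + (3.0 + e1) * x

# Hold the other parameter at its expected value and treat each model run
# as a measurement of the parameter of interest (the procedure in the text)
intercept_samples = y - 3.0 * x     # slope held at its expected value
slope_samples = (y - 1.0) / x       # intercept held at its expected value

print(f"intercept: mean {intercept_samples.mean():.3f}, std {intercept_samples.std():.3f}")
print(f"slope:     mean {slope_samples.mean():.3f}, std {slope_samples.std():.3f}")
```

Histograms of these samples serve as the prior probability distributions for the intercept and slope, in the spirit of Figure 3.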

Figure 2. Estimating variance in hit-point distribution


Figure 3 shows the probability distributions for the slope and the intercept of the model's output following this method. These prior probabilities can be used to guide test planning by identifying where variation or uncertainty is greatest, which leads naturally to where testing will be most profitably executed. The best practice heuristics previously discussed become more than just good rules of thumb when informed by a Bayesian planning and analysis framework. This framework provides insight into where the variation in system performance can be expected, because it explicitly incorporates the prior knowledge from component-level testing residing in the system performance model.

Sequential design of experiments

The basic idea of sequential design of experiments is to test progressively from the outside of the parameter space, capturing linear effects, towards the inside of the parameter space, capturing higher-order interaction effects if needed (Curry and Lee 2007). A comprehensive review of the field is given in (Lai 2001). At each level, the predictive power of the effects measured so far is evaluated and a decision is made about whether additional testing is required.

For example, perhaps the product development team has identified some significant factors for a notional munition with terminal phase guidance and in-flight communication as follows: target aspect (TA), target speed (TS), target movement duty cycle (TMDC), impact angle (IA), engagement mode (EM), and target type (TT). Factors such as noise environment or weather are generally uncontrollable by the testers, but it is worthwhile to note their significance and then record their levels during test events so their influence on performance can be quantified (Cohen and Rolph 1998).

An initial experimental design will attempt to measure the linear or "main" effects. For the six controllable factors identified above, a seven-parameter model results, requiring seven tests at the minimum to make point estimates of the parameters (shown in Equation 3). Two additional tests are added to the design so that some estimate of the process variability can be made, and a final confirmation test is added to evaluate the sufficiency of the linear model.

$$Y = b_0 + \sum_{i=1}^{n} b_i x_i \qquad (3)$$

Given ten test events and minimum and maximum levels for each of the factors, a constrained optimization method can be applied to find the combination of factor levels across the tests that gives the lowest factor correlation. This is known as a d-optimal test design since it maximizes the determinant of the factor correlation matrix (Curry and Lee 2007).

Figure 3. Estimation of prior probability from model output

One method of reaching an approximate optimum is simulated annealing (exactly orthogonal test series exist only at multiples of four tests). It is a heuristic optimization method that combines both divide-and-conquer and iterative improvement strategies (Kirkpatrick and Gelatt 1983). The method starts with a feasible set of factor levels for the test series and then swaps factor levels and evaluates if this improves or degrades the orthogonality of the tests. If the change improves the orthogonality, it is accepted with probability P = 1. If the change degrades the orthogonality, it is accepted with the probability relation shown in Equation 4.

$$P = e^{(d_1 - d_0)/T} \qquad (4)$$

where d_1 is the determinant of the correlation matrix (a measure of orthogonality or "goodness") and T is the temperature, a parameter that is gradually reduced during the optimization. This allows the process to avoid being trapped by local minima because it accepts moves which are "bad" according to the difference d_1 - d_0 and the cooling schedule in T. As cooling progresses the algorithm accepts "bad" moves with less and less probability.
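
A compact sketch of this swap-and-accept loop is shown below. The goodness measure is the determinant of the factor correlation matrix introduced above, the candidate levels are taken from Table 1, and the starting design, iteration count, and cooling schedule are arbitrary illustrative choices rather than the settings behind the article's design (the code also takes d_1 as the trial design's determinant and d_0 as the current design's, a convention the article does not spell out).

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate levels for the six controllable factors (levels as in Table 1)
levels = {"TA": [180, 360], "TS": [4, 20], "TMDC": [0.1, 0.9],
          "IA": [15, 75], "TT": [-1, 1], "EM": [-1, 1]}
factors = list(levels)
n_tests = 10

def goodness(design):
    """Determinant of the factor correlation matrix: 1.0 for an exactly
    orthogonal design, smaller as the columns become correlated."""
    return np.linalg.det(np.corrcoef(design, rowvar=False))

# Feasible but deliberately poor starting design (alternating levels, so no
# column is constant and the correlation matrix is well defined)
design = np.array([[levels[f][k % 2] for f in factors] for k in range(n_tests)], float)

temperature = 1.0
for step in range(20000):
    i = rng.integers(n_tests)
    j = rng.integers(len(factors))
    trial = design.copy()
    trial[i, j] = rng.choice(levels[factors[j]])          # swap one factor level
    d0, d1 = goodness(design), goodness(trial)
    # Accept improvements outright; accept degradations with the Equation 4 probability
    if d1 >= d0 or rng.random() < np.exp((d1 - d0) / temperature):
        design = trial
    temperature *= 0.9995                                 # simple cooling schedule

print(f"final determinant of the correlation matrix: {goodness(design):.3f}")
print(design)
```

Constraints of the kind discussed below can be honored by simply rejecting any trial design that violates them before the acceptance test.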

A test series developed by the simulated annealing method is shown in Table 1. The correlation of factors across the test events for this design is shown in Table 2.

An exactly orthogonal series would have no nonzero off-diagonal terms in the correlation matrix. The goal of the optimization is to make these terms approximately zero. The advantage of using an optimization technique like simulated annealing is that constraints on the test design can easily be added and optimization can proceed exactly as before, only within the reduced set of feasible designs. For example, the factors describing an important operationally representative scenario can be constrained to occur a given number of times.

Importance of external validation

In a test program that relies heavily on modeling and simulation, it is critical to guard against over-fitting the model. The basic algorithm to avoid such over-fitting is known as "model-test-model-test" (Cohen and Rolph 1998). The final validation tests are outside the scenarios which were used for parameter tuning. Sequential design of experiments naturally provides the framework for such an approach. The stopping rule in a standard sequential design depends on evaluating the predictive power of the simple empirical model using the final additional test.
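
One hedged reading of that stopping rule is sketched below: fit the main-effects model of Equation 3 to the runs completed so far, predict the held-out confirmation run, and continue testing only if the prediction error exceeds a tolerance. The design matrix, responses, and tolerance are synthetic placeholders rather than data from any program.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nine completed runs: intercept column plus six coded (+/-1) factor columns.
# Both the design and the responses are synthetic placeholders.
X = np.column_stack([np.ones(9), rng.choice([-1.0, 1.0], size=(9, 6))])
beta_true = np.array([10.0, 2.0, -1.0, 0.5, 0.0, 1.5, -0.5])
y = X @ beta_true + rng.normal(0.0, 0.3, 9)

# Fit the Equation 3 main-effects model to the completed runs
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Confirmation run held out of the fit (the "final additional test")
x_conf = np.concatenate([[1.0], rng.choice([-1.0, 1.0], size=6)])
y_conf = x_conf @ beta_true + rng.normal(0.0, 0.3)

prediction_error = abs(x_conf @ beta_hat - y_conf)
tolerance = 1.0    # assumed programmatic threshold
print("stop testing" if prediction_error < tolerance else "plan additional tests")
```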

When a high-fidelity system performance model is available, the stopping rule should be modified to depend on an external validation of the system performance model as well as the more standard stopping rule. The initial tests used to develop the simple linear empirical model can also be used for parameter tuning of the high-fidelity model, and the final test serves as an external validation of the high-fidelity model as well.

Conclusions

High-fidelity system performance models along with full-up system level test events incorporated into a formal inference framework provide rigorous support to decision makers in developing and acquiring modern weapon systems of ever-increasing complexity. The proposed framework for knowledge-based test planning and execution consists of four basic steps:

1. Identify significant factors and levels based on a high-fidelity system performance model;
2. Use the model for prior distributions (context, background knowledge) with which to analyze full-up system level test outcomes;
3. Incorporate discoveries into product improvements and an improved performance model;
4. Evaluate sufficiency of testing based on the predictive power of the high-fidelity system performance model, i.e., model-test-model-test.

Table 2. Factor cross-correlation matrix

        TA     TS         TMDC       IA         TT     EM
TA      1      0          0          0          0.2    0
TS      0      1          -0.16667   0.16667    0      0.102062
TMDC    0      -0.16667   1          -0.16667   0      -0.102062
IA      0      0.16667    -0.16667   1          0      0.102062
TT      0.2    0          0          0          1      0
EM      0      0.102062   -0.102062  0.102062   0      1

TA, target aspect; TS, target speed; TMDC, target movement duty cycle; IA, impact angle; TT, target type; EM, engagement mode.

Table 1. Approximately d-optimal test design

Test   TA    TS   TMDC   IA   TT   EM
1      360   20   0.1    15    1    1
2      360   20   0.9    75   -1   -1
3      180    4   0.9    15   -1    1
4      360   20   0.9    15   -1   -1
5      180    4   0.9    15    1    1
6      180   20   0.9    75    1    1
7      180    4   0.9    15    1   -1
8      180   20   0.9    75   -1    1
9      360    4   0.1    75    1    1
10     180    4   0.1    75   -1   -1

TA, target aspect; TS, target speed; TMDC, target movement duty cycle; IA, impact angle; TT, target type; EM, engagement mode.


The exact mechanics of the approach presented in this article are not critical. Any integrated method that gives some measure of the marginal value of system-level test events when sample sizes are small can provide useful support to decision makers. This support will begin to allow hard risk management decisions about how much testing is sufficient to be made in a more decision-theoretic framework.

The critical aspect of the approach is the knowledge warehouse known as the system performance model. The knowledge it contains at the same time informs decision makers and test planners, and provides a repository of validated knowledge from test conductors. The execution of a knowledge-based test program supports decision makers with solid information about test sufficiency and risk. Through improvements incorporated into the product and its model, it ensures that decisions made about the system are based on the highest quality of information available.

CAPT JOSHUA A. STULTS is the deputy live fire agent for conventional munitions, Air Force Live Fire Office, 780th Test Squadron, Eglin AFB, FL. Prior to his current duties supporting U.S. Air Force weapons programs in planning and executing live fire test and evaluation, he was a test engineer for the 780th supporting weapons development flight test. He holds a master of science degree in aeronautical engineering from the Air Force Institute of Technology, Dayton, Ohio; and a bachelor of science degree in aeronautical engineering from the U.S. Air Force Academy, Colorado Springs, Colorado. E-mail: [email protected]

References

Cohen, M. L. and Rolph, J. E. (eds.) 1998. "Statistics, Testing and Defense Acquisition: New Approaches and Methodological Improvements." Washington, D.C.: National Academy Press.

Curry, T. F. and Lee, S. J. 2007. "Using Sequential-Designed Experimentation to Minimize the Number of Research and Development Tests." The ITEA Journal of Test and Evaluation, Volume 28-2, pp. 41–47.

D'Agostini, G. 2003. "Bayesian Inference in Processing Experimental Data: Principles and Basic Applications." Reports on Progress in Physics, Volume 66, pp. 1383–1420.

DoD (Undersecretary of Defense for Acquisition, Technology, and Logistics). 2003. "Operation of the Defense Acquisition System, No. 5000.2." In: Department of Defense Instructions. May 2003, pp. 1–50.

DOT&E. 2007. "Lessons Learned from Live Fire Testing: Insights Into Designing, Testing, and Operating U.S. Air, Land, and Sea Combat Systems for Improved Survivability and Lethality." O'Bryon, J. F. (ed). Washington, D.C.: Secretary of Defense, Operational Test and Evaluation Directorate (DOT&E), Live Fire Test and Evaluation, pp. 3–15.

Fox, B., Boito, M., Graser, J. C. and Younossi, O. 2004. "Test and Evaluation Trends and Costs for Aircraft and Guided Weapons, Tech. Rep. MG-109." Arlington, Virginia: RAND Corporation.

GAO (General Accounting Office). 2000. "Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes, Tech. Rep. GAO/NSIAD-00-199." Washington, D.C.: U.S. General Accounting Office. Available online at http://www.gao.gov. Accessed January 18, 2008.

GAO (General Accounting Office). 2003. "Defense Acquisitions: Assessment of Major Weapon Programs, Tech. Rep. GAO-03-476." Washington, D.C.: U.S. General Accounting Office. Available online at http://www.gao.gov. Accessed January 18, 2008.

GAO (General Accounting Office). 2005. "Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes, Tech. Rep. GAO-06-110." Washington, D.C.: U.S. General Accounting Office. Available online at http://www.gao.gov. Accessed January 18, 2008.

Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. 1983. "Optimization by Simulated Annealing." Science, Volume 220, No. 4598, pp. 671–680.

Lai, T. L. 2001. "Sequential Analysis: Some Classical Problems and New Challenges." Statistica Sinica, Volume 11, pp. 303–408.

Robbins, H. 1964. "The Empirical Bayes Approach to Statistical Decision Problems." The Annals of Mathematical Statistics, Volume 35, No. 1, pp. 1–20.

Russell, S. and Norvig, P. 1995. "Artificial Intelligence: A Modern Approach." 2nd ed. Upper Saddle River, NJ: Prentice Hall. p. 1132.

Acknowledgments

The author thanks Maj David Winebrener for the lively discussions on test planning and lessons learned that led to this research effort.
