
RADC-TR-89-183
AD-A215 804
In-House Report
October 1989

A MODEL INTEGRATION APPROACH TO ELECTRONIC COMBAT EFFECTIVENESS EVALUATION

Alex F. Sisti

APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.

ROME AIR DEVELOPMENT CENTER
Air Force Systems Command

Griffiss Air Force Base, NY 13441-5700


This report has been reviewed by the RADC Public Affairs Office (PA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations.

RADC TR-89-183 has been reviewed and is approved for publication.

APPROVED:

THADEUS J. DOMURAT
Chief, Signal Intelligence Division
Directorate of Intelligence and Reconnaissance

APPROVED:

WALTER J. SENUS
Technical Director
Directorate of Intelligence and Reconnaissance

FOR THE COMMANDER:

IGOR G. PLONISCH
Directorate of Plans and Programs

If your address has changed or if you wish to be removed from the RADC mailing list, or if the addressee is no longer employed by your organization, please notify RADC (IRAE) Griffiss AFB NY 13441-5700. This will assist us in maintaining a current mailing list.

Do not return copies of this report unless contractual obligations or notices on a specific document require that it be returned.

UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

1a. REPORT SECURITY CLASSIFICATION: UNCLASSIFIED
1b. RESTRICTIVE MARKINGS: N/A
2a. SECURITY CLASSIFICATION AUTHORITY: N/A
2b. DECLASSIFICATION/DOWNGRADING SCHEDULE: N/A
3. DISTRIBUTION/AVAILABILITY OF REPORT: Approved for public release; distribution unlimited.
4. PERFORMING ORGANIZATION REPORT NUMBER(S): RADC-TR-89-183
5. MONITORING ORGANIZATION REPORT NUMBER(S): N/A
6a. NAME OF PERFORMING ORGANIZATION: Rome Air Development Center
6b. OFFICE SYMBOL: IRAE
6c. ADDRESS (City, State, and ZIP Code): Griffiss AFB NY 13441-5700
7a. NAME OF MONITORING ORGANIZATION: Rome Air Development Center (IRAE)
7b. ADDRESS (City, State, and ZIP Code): Griffiss AFB NY 13441-5700
8a. NAME OF FUNDING/SPONSORING ORGANIZATION: Rome Air Development Center
8b. OFFICE SYMBOL: IRAE
8c. ADDRESS (City, State, and ZIP Code): Griffiss AFB NY 13441-5700
9. PROCUREMENT INSTRUMENT IDENTIFICATION NUMBER: N/A
10. SOURCE OF FUNDING NUMBERS: Program Element No. 62702F, Project No. 4594, Task No. 15, Work Unit Accession No. E1
11. TITLE (Include Security Classification): A MODEL INTEGRATION APPROACH TO ELECTRONIC COMBAT EFFECTIVENESS EVALUATION
12. PERSONAL AUTHOR(S): Alex F. Sisti
13a. TYPE OF REPORT: In-House
13b. TIME COVERED: FROM Mar 8_ TO [illegible]
14. DATE OF REPORT (Year, Month, Day): October 1989
15. PAGE COUNT: 28
16. SUPPLEMENTARY NOTATION: N/A
17. COSATI CODES (FIELD/GROUP/SUB-GROUP): [illegible]
18. SUBJECT TERMS: Modularity; Model Integration; Electronic Combat; Object-Oriented Design; Hierarchy of Models; Software Reuse; Model Management System
19. ABSTRACT: This report addresses the problems inherent in the modeling of large-scale and complex software systems in general, and specifically, how these problems have affected simulation systems designed to evaluate Electronic Combat effectiveness in a combat scenario. Conceptual improvements and potential solutions are offered, leading to an in-depth discussion on a variety of disparate, yet related subject areas. An implementation of the Electronic Combat Effectiveness System is presented, based on these suggestions; and finally, recommendations are outlined as to future areas of research meriting increased investigation.
20. DISTRIBUTION/AVAILABILITY OF ABSTRACT: UNCLASSIFIED/UNLIMITED
21. ABSTRACT SECURITY CLASSIFICATION: UNCLASSIFIED
22a. NAME OF RESPONSIBLE INDIVIDUAL: Alex F. Sisti
22b. TELEPHONE (Include Area Code): (315) 330-4517
22c. OFFICE SYMBOL: RADC (IRAE)

DD Form 1473, JUN 86. Previous editions are obsolete. SECURITY CLASSIFICATION OF THIS PAGE: UNCLASSIFIED

This report addresses the problems inherent in the modeling of large-scale and complex software systems in general, and specifically how those problems have affected simulation systems designed to evaluate Electronic Combat effectiveness in a combat scenario. Conceptual improvements and potential solutions are offered, leading to an in-depth discussion on a variety of disparate, yet related subject areas. An implementation of the Electronic Combat Effectiveness System is presented, based on these suggestions; and finally, recommendations are outlined as to future areas of research meriting increased investigation.

THE PROBLEM

The results of large-scale, monolithic battlefield simulation analyses have never really been totally accepted by management, and with good justification. A scenario of realistic proportions could conceivably involve the modeling of hundreds of thousands of entities, dynamically interacting among themselves, and reacting to other (simulated) activity in their environment. Even given the substantial hardware improvements in the form of larger, faster memories and exponential increases in processing power, it is still impossible to do an analysis at anything but a grossly aggregated level. It is becoming increasingly imperative that analyses of this sort pay more attention to the underlying details of the entities being modeled, especially since decisions based on these results could ultimately involve human life. As part of a panel discussion at the 1983 Winter Simulation Conference, panel chairman Kenneth Musselman remarked, "Aggregated measures are used to draw conclusions about system performance, while the detailed dynamics of the system go virtually unnoticed ... A decision based solely on summary performance could lead to unacceptable results ... It makes practical sense for us to learn how to properly amplify these details and to incorporate them into the evaluation process." One possible alternative would be to accurately and completely model every entity in the scenario, such that its associated details are thereby incorporated into the scenario. This approach is obviously discarded in light of the size and complexity of the simulation, as well as cost, time, and resource constraints.

With the extremes (aggregate analysis based on coarsely represented dynamics versus completely detailed analyses) being rejected, what remains is: 1) modeling the complete system from a variety of different points of view, 2) modeling only selected items of interest, or 3) modeling the entire system, following a convention of multiple levels of representation, such that entities are modeled at varying levels of detail, ranging from the top-level representation of the "essence" of the entity to the lowest level, which would model the entity at the finest level of detail. This approach has clearly shown the most promise, and brings to bear a variety of technological, theoretical and practical aspects, including modularity, software reuse, object-oriented design, a hierarchy of models in a component library, a model management system for manipulating that library, and software engineering principles in general.


Modularity

Modularity is defined as breaking a software system into smaller and simpler parts or modules, such that the modules together perform as the system. In the historical sense, a subroutine is functionally equivalent to a module which can be used repeatedly (and only when desired) in different places in the system. Breaking a large task into smaller, more tractable pieces is certainly not new; it is a time-honored method of increasing productivity in many manufacturing disciplines. It is therefore only natural that the earliest approach away from the traditional monolithic development of software systems was to modularize -- to decompose the system at the functional level -- and to group related functions together.

Leading the maturation of modularity is the thrust for improvements in the general area of software engineering principles, the foremost being the concepts of data abstraction, encapsulation and information hiding. Data abstraction refers to the process of hiding the implementation details of an object (e.g., program, data) from the users of that object; it is also called "information hiding" or "encapsulation". Abstract data types describe classes of objects as a function of their external properties, as opposed to their specific computer representation. Deemphasizing a data type's representational details in this way can help eliminate problems of design changes, hardware changes and version compatibility. In other words, as the system undergoes a normal evolution, the actual implementation may be changed without affecting the users of the system. Examples of information likely to be hidden include peculiarities of the underlying hardware, algorithms to implement a feature specified by the interface, representational details of data structures, and synchronization schemes.

Modular techniques formally began being embodied in programming languages in the mid-1970s with the development of Modula-2, with its "modules", and later with Ada and its "packages". Still other, more conventional languages can now provide assistance in simulating data abstraction. These modular units contain separate sections for interface specifications and for implementation specifications, thereby supporting data abstraction. Making up the interface section would be information specifying the type, data and procedures exported by the module, while the implementation section would contain the executable statements of the interface section, along with locally used type, data and procedural declarations. Users wishing to invoke such a module to perform its function can access it through the interface section but, consistent with information hiding, cannot see (or affect) how that function has been implemented. The application of data abstraction and of a sister technology thrust, object-oriented design, will be discussed in a later section of this report.
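
To make the interface/implementation split concrete, here is a minimal sketch in Ada. The component (a bounded stack) and its 100-element bound are illustrative assumptions, not drawn from any system discussed in this report. The package specification plays the role of the interface section; the package body plays the role of the implementation section; the representation is hidden in the private part and may be changed without affecting users of the package.

    package Bounded_Stack is
       type Stack is limited private;
       procedure Push (S : in out Stack; Item : in Integer);
       procedure Pop  (S : in out Stack; Item : out Integer);
    private
       -- Representational details hidden from users of the package.
       type Int_Array is array (1 .. 100) of Integer;
       type Stack is record
          Data : Int_Array;
          Top  : Natural := 0;
       end record;
    end Bounded_Stack;

    package body Bounded_Stack is
       procedure Push (S : in out Stack; Item : in Integer) is
       begin
          S.Top := S.Top + 1;        -- no overflow check in this sketch
          S.Data (S.Top) := Item;
       end Push;

       procedure Pop (S : in out Stack; Item : out Integer) is
       begin
          Item  := S.Data (S.Top);   -- no underflow check in this sketch
          S.Top := S.Top - 1;
       end Pop;
    end Bounded_Stack;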


As alluded to earlier, the reasons for modularity are many. A partial list follows:

1. Modularity facilitates writing correct programs. Smaller, more tractable modules, based on sound software engineering principles, are less susceptible to errors.

2. Modular code is easier to design and write. A software design engineer merely has to specify the functional essence of a module in a traditional top-down manner, rather than all of its internal and external details. Actual coding is then facilitated by this visibility of the design structure.

3. A modular approach to a system development allows many programmers to contribute to that development, in parallel. Domain-specific individual programmers can work on separate functional pieces, guided by the fact that interactions between parts of a system are rigidly restricted to the allowable interactions between those individual pieces.

4. Modularity facilitates easier maintenance as the system evolves. Since almost all large-scale software systems change as the requirements, methodologies or users change, it is essential that a system be easily reconfigurable and maintainable. Since modularity stresses the distinction between implementation changes and interface changes, modifications to the implementation may take place with confidence that inconsistencies will not be introduced to other parts of the system.

5. Modularity facilitates testing, verification and validation. Individual components can be "locally" tested, then fit into the system and re-tested. Furthermore, some applications have exploited modularity to the point of replacing software components with the hardware being simulated.

6. Modularity allows a component hierarchy to be exploited. The concept of a module hierarchy was alluded to earlier in the context of modeling components of a system at varying levels of detail. Bernard Zeigler, who has written many articles on the subject of hierarchical modular modeling [4,7,8,11,36], makes the point that "...models oriented to fundamentally the same objectives may be constructed at different aggregation levels due to tradeoffs in accuracy achievable versus complexity costs incurred." Hierarchical modeling will be discussed in much greater detail in later sections.

7. Lastly, and most important, modularity permits software reuse.

SOFTWARE REUSE

The process of combining and building up known software elements in different configurations (also called synthesis) has been likened to Gutenberg's concept of removable type. Gutenberg's contribution has historically endured the criticism of purists who argue that his printing press was not so much an invention as a new application of existing technologies at the time. To that, Douglas McMurtrie responded in "The Book": "It does not at all minimize the importance of the invention ... to point out that the invention was the result of a process of synthesis or combination of known elements. For that power of the human mind which can visualize known and familiar facts in new relations, and their application to new ones -- the creative power of synthesis -- is one of the highest and most exceptional of mental faculties."

The idea of software reuse is certainly not new. Reuse and reworking have been practiced in one form or another since the 1950s; however, the landmark paper in this area is M. D. McIlroy's "Mass-Produced Software Components", published in 1969. In that paper, he envisioned and proposed a catalogue of software components from which "software parts could be assembled, much as done with mechanical and electrical components."

Software reuse is defined as the isolation, selection, maintenance and use of software components in the development and maintenance of a software project. Soundly based on the principles of modularity, it has been shown to improve productivity by using previously developed and tested components. Reusable components of a software system include design concepts, functional specifications, algorithms, code, documentation and even personnel. These components embody the various degrees of abstraction which pervade the process of classifying reuse items. Higher degrees of abstraction imply a greater likelihood for reuse. For example, specifications do not yet contain detailed representation or implementation decisions, so the potential for reuse is greater, while it is very difficult to find pieces of code which can be used without some modifications.

Strictly speaking, software reuse must be distinguished from redesign or reworking. Reuse means using an entity in a different context than what was initially intended, and is also known as "black box" reuse. Redesign or reworking refers to the modification of an existing module before it is used in its new setting. This is known as "white box" reuse, and is by far the more common of the two.

Historically, the classical reusability technique has been to build libraries of routines (e.g., subroutines, functions, procedures), each of which is capable of implementing a well-defined operation. These routines are generally written in a common language for a specific machine, and are accessed by a linker as needed. Although this approach has met with some degree of success in numerical applications, there are some obvious problems which preclude using it to implement a generally applicable reuse system. Subroutines are too small, representation and implementation details have been filled in, and the glue (interface requirements) necessary to bring many subroutines together is too extensive to make general reuse feasible.


A second approach to reducing or eliminating the software development process takes the form of software generating tools. Software generation has been successfully applied in narrow, well-specified domains (e.g., report generators, compiler-compilers, language-based editors), but shows little chance of being used outside these very specific domains. This is primarily because the nature of program generators is such that the application area needs to be very well-defined to achieve the desired level of efficiency.

The third and most promising approach for achieving reusability is based on a technique called object-oriented design. Under object-oriented design, the decomposition of a software system is not based on the functions it performs, but on the classes of objects the system manipulates. Object-oriented programming and languages support the notions of data abstraction and encapsulation, as described above, and therefore exhibit the flexibility necessary to define and compose reusable components. Some of the more prevalent object-oriented languages include Simula, Smalltalk and some extensions of Pascal and Lisp. In addition, object-oriented principles and constructs are being implemented (or simulated) in other, more conventional languages, especially those which support data abstraction.

Assuming, as many practitioners have, that an object-oriented approach is the best way to describe a software reuse system, many other questions arise. How should existing systems be decomposed? What system fragments are candidates for reuse? How should these candidates be represented and stored? How should they be accessed? Once located, how should they be coupled with other candidate modules, such that together they perform as the system? In the section that follows, these questions are answered.

THE SOFTWARE REUSE SYSTEM

A conceptual software reuse system is made up of a component repository or library, from which the necessary components are retrieved and, if required, integrated with other components to address the functional requirements needed to solve a problem. The process is envisioned as follows:

The user, through some man-machine interface, presents the problem in a form which can be parsed to ascertain the functional requirements needed to solve the problem. The library management system would then search through the repository to find available candidate components such that, alone or in conjunction with other components, they can meet these functional requirements. If a component fully satisfies the requirements, nothing further is required of the reuse system; the component is simply executed as implemented. If some composition of components can solve the problem, they are coupled together as necessary by the reuse system, and are subsequently executed. However, more often than not, some degree of modification of a component or components is necessary before reuse is possible. The very fact that modification of components is supported in a reuse system is what makes software "soft". Biggerstaff and Richter [31] asserted that "The modifying process is the lifeblood of reusability. It changes the perception of a reusability system from a static library of rock-like building blocks to a living system of components that spawn, change and evolve new components with changing requirements with their environment."

This sequence, as described, implies many aspects of development that must be resolved up front in order to progress from the current methods of processing, even in those systems that, at present, claim to be reusing software in some form or another. Design decisions have to be made regarding the nature of the component library. The reuse system designer has to decide on the functional areas that have to be covered by the components. Once that decision is made, he can address the question of component availability -- does he have the necessary modules at his disposal? This often necessitates a survey of some magnitude, which will be needed at any rate for other development areas in building the reuse system (e.g., the library management system). In summary, a successful reuse system requires:

1) population of the component repository,
2) methods for accessing candidates in the repository,
3) methods for selecting candidates from the repository,
4) methods for modifying "close" candidate components, and
5) methods for coupling candidate components.

Populating the Component Repository

As a general rule, the first step in the development process should involve a survey of available modules/components that would satisfy the functional requirements of the system. That is, driven by the functional requirements of the problem to be solved, the designer should make a survey of the existing software, both internal to his organization and commercially available software packages. Of particular interest should be certain "reusability factors" that make some components more attractive (useful) than others. Examples of reusability factors are: the degree of language and machine independence; the extent to which the implementation is context-independent; the level of testing, validation, and verification (V&V) undergone by the component; the complexity and accuracy of the function being implemented; and the availability and quality of documentation.

Although a survey is certainly a good starting point, it is by no means the only method of identifying candidate components for inclusion. In many cases, the biggest step towards populating the component library involves a significant decomposition task -- a top-down pruning of the functional structure of an existing software system to select the desired components. At this stage of the design, two major factors are kept in mind. In addition to the reusability factors just mentioned, the inherent hierarchical structure of the system may be exploited.

Zeigler [11], discussing the theoretical aspects of decomposition in a hierarchical sense based on varying degrees of detail, states "...specification of design in levels in a hierarchical manner [implies] the first level, and thus the most abstract level, is defined by the behavioral description of the system. Next levels are defined by decomposing the system into subsystems (modules, components) and applying decompositions to such subsystems until the resulting components are judged not to require further decomposition... Therefore, the structure of the specification is a hierarchy where leaf nodes are atomic models (cannot be decomposed any further)." Hierarchical decomposition/representation is fairly extensively treated in the literature, by Zeigler and others. Zeigler's conceptual design is covered later in this paper.

Accessing the Component Library

Once identified and "accepted" for inclusion, the component needs to be stored in the component library in a manner which facilitates straightforward accessing, comparison to the input requirements, and ultimately, retrieval. One outstanding approach, taken by Burton and others in "The Reusable Software Library" [25], involves the insertion of a "reuse comment" section into each component, which can be automatically scanned by the library management system. Each reuse comment is treated as an attribute of the component itself; e.g., the comment labeled 'Overview' contains a description of the component's function. Components are first scanned for possible entry into the library, addressing the factors of reusability and the analysis of interest. Once components are entered into the library, the problem of searching for components to match the analysis requirements is reduced to one of scanning these reuse comments and extracting the pertinent components for coupling and/or execution.
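
As a hedged illustration of such a reuse-comment scheme, consider a hypothetical library component whose attributes (overview, inputs, outputs, validation status) are carried as scannable comments in an Ada package specification. The component, its name, and its validation note are assumptions made for this example; the body uses the standard free-space path loss expression in dB and the modern Ada numerics library.

    -- Overview   : computes free-space transmission loss between an
    --              emitter and a receiver (hypothetical library entry)
    -- Inputs     : frequency in MHz, range in km
    -- Outputs    : path loss in dB
    -- Language   : Ada; no machine dependencies
    -- Validation : unit-tested against hand calculations (assumed)
    package Propagation is
       function Free_Space_Loss (Freq_MHz, Range_Km : Float) return Float;
    end Propagation;

    with Ada.Numerics.Elementary_Functions;
    package body Propagation is
       use Ada.Numerics.Elementary_Functions;

       function Free_Space_Loss (Freq_MHz, Range_Km : Float) return Float is
       begin
          -- standard form: 32.45 + 20 log10(f, MHz) + 20 log10(d, km)
          return 32.45
            + 20.0 * Log (Freq_MHz, Base => 10.0)
            + 20.0 * Log (Range_Km, Base => 10.0);
       end Free_Space_Loss;
    end Propagation;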

Selecting Candidates

As components are scanned in the conceptual reuse system, their applicability to the problem is assessed. This is basically a matching problem, which can be handled in a variety of ways, none very easy. Given the user's problem (experiment, area of analysis, etc.), the library management system has to search through the available components to see if any one component, or a mix of components, can solve the problem. One approach might take the form of a table lookup, involving a "requirements versus components" matrix; an implementation which again requires considerable up-front effort to populate these matrices. Another, more elegant implementation incorporates a "closeness metric" as a way of choosing and ranking likely candidates. More often than not, several candidates exist, each satisfying some of the requirements. These measures of closeness are useful in selecting and ranking the available candidates based on how well (how closely) they match the requirements of the system. A third method of candidate selection is slightly more futuristic, and is based on an Expert System implementation. As part of the library management system, there could exist a "decision rule base", which incorporates component selection knowledge, including explanations as to how and why some candidates were selected over others.

Aifying "Close" Components

As mentioned earlier in this paper, the ability of the reuse system to adapt existing components to the evolving requirements of the project is (at present) its most attractive feature. The distinction between implementations of the present and those of the future is intentional -- a reuse system being developed today, from existing components or products of a decomposition process, is seriously constrained from the start. A modifying process is essential in such a system because, with very few exceptions, existing components are not amenable to reuse without modifications of some sort. It is hoped that in development efforts in the future, software components will be designed with reuse in mind from the outset, encompassing the recommended software engineering principles discussed earlier. Until then, however, the conceptual reuse system and the associated library management system should be capable of modifying "close" components to better meet the specifications of the user's problem. Today, this is largely a manual process, as few tools exist to help modify components. Among other things, this process could include prompting the user to interactively supply, for example, a missing data item or unit conversion. Of course, a well-designed library management system could eventually resolve these differences, even going as far as accessing a database or suggesting the execution of other components to supply the missing information.

Component Coupling

The most difficult aspect of the conceptual reuse system deals with coupling, or integrating, existing components together in some manner, to replicate the desired functionality of the system. The benefits of component coupling were discussed in an earlier section of this paper. Therefore, this section will address the implementation aspects of component coupling.

Whereas system decomposition is viewed as a classic application of top-down design, component synthesis takes a bottom-up approach. Candidate components meeting the input specifications are put together like building blocks to construct a software system capable of solving the problem. As discussed earlier, these individual components should satisfy the software engineering principles of abstraction, encapsulation and information hiding to reduce the amount of (usually manual) modification needed to integrate them. Software components written in languages which support those features are obviously most desirable from an integration point of view, but that is not to say that software (subroutines, functions, etc.) written in a conventional language cannot be used. Again, one of the major design considerations of a reuse system involves possible workarounds because of a desire to retain an expensive or "key" piece of software.

There are two fundamental ways components can be integrated, depending on the degree of interaction required and the inter- and intra-dependence of individual components. In the first and simplest case, software modules are run separately and sequentially, using the outputs of one component as inputs to the next component to be executed. This method, alternately called chaining or the UNIX pipeline method, is the most straightforward method of integration, and is easily implemented, although some interim transformations or analysis may be required. The second integration method is more complex (often impossible) and is employed when the components are expected to interact with each other. The composition principle used in this case is based on inheritance and message-passing. As expected, the components in a reuse library using this integration method should preferably be those which embrace the policies of object-oriented design, as their interface details are well specified and they can be bound with other components without the internal (implementation) details of the components being known. Object-oriented programming also supports the concepts of object classes and the message-passing between objects. The notion of inheritance is applied in passing messages between classes and subclasses (parents and children). When a new object is subclassed from a more generic parent class, capabilities common to both are implemented. In addition to being able to process any message that the parent class can (by passing off to the more generic parent), the subclass can locally process messages which are specific to itself. Again, Ada's "generic packages" and Modula's "modules" meet these objectives. Another useful construct in Ada and some other object-oriented languages which helps facilitate integration is known as "overloading". Overloading allows more than one meaning to be attached to a name. To use an example from the literature, giving the name "Search" to all associated search procedures enables the user (or the library management system) to always invoke a search operation in the same manner, regardless of the implementation chosen or the data types. Still another construct being advocated is that of semantic binding. This applies to the flexibility needed when referencing across different domains. The calling component must refer to items it expects in its context, and since it cannot know the items' names in advance, it should be able to refer to them semantically.
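
A small Ada sketch of the "Search" overloading just described (the table types and the linear-search bodies are illustrative assumptions): the same name carries two meanings, and the compiler selects the version that matches the caller's data types.

    package Searching is
       type Int_Table  is array (Positive range <>) of Integer;
       type Real_Table is array (Positive range <>) of Float;

       -- "Search" is overloaded: callers always invoke a search
       -- operation in the same manner, regardless of the data type.
       function Search (T : Int_Table;  Key : Integer) return Boolean;
       function Search (T : Real_Table; Key : Float)   return Boolean;
    end Searching;

    package body Searching is
       function Search (T : Int_Table; Key : Integer) return Boolean is
       begin
          for I in T'Range loop
             if T (I) = Key then
                return True;
             end if;
          end loop;
          return False;
       end Search;

       function Search (T : Real_Table; Key : Float) return Boolean is
       begin
          for I in T'Range loop
             if T (I) = Key then
                return True;
             end if;
          end loop;
          return False;
       end Search;
    end Searching;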


One can perhaps begin to see the intricacies involved, and the enormous representational decisions required, in developing a reuse system. Reuse systems and their associated tools are costly in terms of time, money and personnel, and even then are often dismissed as unfeasible due to a lack of reliable, reusable components. Contractors are hesitant to build software that is too reusable for fear that there may be no "next job" for them. Even the "not invented here" syndrome causes resentment among management and system users. Most importantly, reuse systems have proven to be very application-dependent, with some applications providing a more mature technology base on which to build such a system. One such application area is in support of Modeling and Simulation.

REUSE AND MODULARITY APPLIED TO MODELING AND SIMULATION

Reese and Wyatt [1], speaking at the 1987 Winter Simulation Conference, stated "The issues of reusability ... apply to all types of software, including simulation software ... Adoption of reuse philosophy and the subsequent creation of a reuse library by management is expected to improve the simulation software development process and increase the credibility of simulation results." In addition to the standard components mentioned in earlier sections of this paper, there are some support functions which are fairly specific to modeling and simulation, and are required by nearly all discrete simulation applications. Examples of such functions are: time management; queue management; random number generation; data I/O; input sequence front-ends; debug routines; validation and verification tools; graphical/statistical analysis tools, and animation tools. Obviously, the functions of individual model components are also amenable to reuse technology. In the sections that follow, the discussions on reuse are instantiated to the field of modeling and simulation technology. As part of this instantiation, some of the previously used terms will be changed for clarity. That is, components will be called models; the component library will be designated the Model Base; the library management system will be referred to as the Model Management System (MMS), and component coupling will, in general, be known as model integration.

Zeigler's Hierarchical, Modular Modeling

Arguably the most prolific of the authors and researchers in the domain of model integration is Bernard P. Zeigler, who first referred to the concept in a 1976 book entitled "Theory of Modeling and Simulation." In that book, he quietly introduced the idea of decomposition of existing models in a hierarchical manner, corresponding to the levels of functional detail. He discusses the process of aggregating the details of what he calls base models into "lumped" models. A lumped model includes the functional coverage of one or more detailed component (base) models, modeled with less detail. This involves a simplification process in which the description of the base model is modified in one of the following ways: 1) one or more of the descriptive variables are dropped and their effect accounted for by probabilistic methods, or 2) the range set of one or more of the descriptive variables is coarsened, or 3) similar components are grouped together and their descriptive variables are aggregated. Finally, he pursues a mathematical approach to valid simplification, based on "structure morphisms" at various levels of specification.

In later works, he refined his ideas, and changed the term "lumped" model to coupled model. Most of his contributions to the literature now revolve around his hierarchical, modular composition techniques or actual implementations of his concepts. He says [8], "Considering a real system as a black box, there is a hierarchy of levels at which its models may be constructed, ranging from purely behavioral, in which the model claims to represent only the observed input/output behavior of the system, up to the strongly structural, in which much is claimed about the structure of the system. Simulation models are usually placed at the higher levels of structure and they embody many supposed mechanisms to generate the behavior of interest."

Central to his hierarchical scheme is the composition tree, which describes how components are coupled together to form a composite model. In his words (and referring to Figures 1 and 2), "Suppose that we have models A and B in the model base. If these model descriptions are in the proper form, then we can create a new model by specifying how the input and output ports of A and B are to be connected to each other and to external ports, an operation called coupling. The resulting model, AB, called a coupled model, is once again in modular form ... modularity, as used here, means the description of a model in such a way that it has recognized input and output ports through which all interaction with the external world is mediated. Once in the model base, AB can itself be employed to construct yet larger models in the same manner used with A and B. This property, called closure under coupling, enables hierarchical construction of models." Noting the dual relationship of system decomposition and model synthesis, he remarks that the coupling of two atomic models A and B is associated with the decomposition of the composite model AB into components A and B.

[Fig 1. Model Base: models A and B, each with input and output ports. Fig 2. Model Coupling: the coupled model AB, with the coupling specified as external input AB.in -> A.in, internal A.out -> B.in, and external output B.out -> AB.out.]

In Zeigler's scheme, which has already been applied to a variety of disciplines, there are three basic parts to the description of an atomic model: 1) the input/output specification, explicitly describing the input and output ports and the ranges their associated variables can assume, 2) the state and auxiliary variables and their ranges (the static structure), and 3) the external and internal transition specification (the dynamic structure). The descriptions for coupled models differ slightly, in that information pertaining to the coupling process is also included. Separate files containing interface specifics are maintained for each component and facilitate the coupling of selected models.

There are three facets to Zeigler's coupling scheme. First, there is a file which contains information relating the input ports of the composite model to the input ports of the components (called external input coupling). Next, external output coupling tells how the output ports of the composite model are identified with the output ports of the components. Finally, internal coupling information is maintained, telling how the output ports of the components are connected to input ports of other components; in other words, it describes how the components inside a composite model are interconnected. Following the principles of abstraction and object-oriented design, all interaction with the environment is mediated through these input and output ports, regardless of the internal implementations of the models. Furthermore, the sending of external events from the output port of one component to the input port of another component can be likened to message passing, another composition technique supported by object-oriented design.
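
These three facets can be captured as plain data. The Ada sketch below encodes the coupling specification of the composite model AB from Figure 2 as a table of port-to-port connections; the type and object names are assumptions made for illustration, not Zeigler's own notation or file format.

    package AB_Coupling is
       type Model_Id is (AB, A, B);           -- AB is the composite model
       type Port_Id  is (In_Port, Out_Port);

       type Port_Ref is record
          Model : Model_Id;
          Port  : Port_Id;
       end record;

       type Connection is record
          From_Ref, To_Ref : Port_Ref;
       end record;

       -- The three facets of the coupling specification for Fig 2:
       Couplings : constant array (1 .. 3) of Connection :=
         (1 => ((AB, In_Port),  (A,  In_Port)),   -- external input coupling
          2 => ((A,  Out_Port), (B,  In_Port)),   -- internal coupling
          3 => ((B,  Out_Port), (AB, Out_Port))); -- external output coupling
    end AB_Coupling;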

This multifaceted, hierarchical modular modeling concept has been successfully applied in other disciplines, and each time is improved upon to some degree. The lion's share of his research has been in the representational details of the model base (the component library) and, to a limited extent, maintenance of that library. To fully instantiate the idea of a reuse library management system to the field of Modeling and Simulation, it is once again necessary to borrow from one of the kindred fields of software science, that being Decision Support Systems research.

THE MODEL MANAGEMENT SYSTEM (MMS)

Model Management has recently been extensively studied and applied by Benn Konsynski and others [37-44] in relation to its function in a Decision Support System. Essentially, it is an instantiation of the library management system discussed in earlier sections, providing for the creation, storage, manipulation and accessing of models in a model base. In other words, a Model Management System (MMS) is to the Model Base what a DBMS is to a database. There are three basic components which make up the MMS: the Decision Component, the Model Component and the Data Component.


The Decision Component provides the user with the ability to describe, analyze and store decisions. It is comprised of a Decision Manipulation component and a Decision Storage component. An example of what might be found in the Decision Manipulation component would be solution rules or model selection rules. The Decision Storage component contains, for example, the rules related to completeness/consistency checks. The Model Component is made up of a Model Manipulation component and a Model Storage component. Examples of the types of functions found in the Model Manipulation component are model classification rules and data selection rules. An example of what would be found in the Model Storage component would be rules for checking the consistency/integrity of the models. Finally, the Data Component is basically a DBMS.

The most elucidating information on the workings of an MMS comes from knowing the interactions among the three components of the MMS, and between the MMS and the User Interface to the MMS.

Interactions involving the Decision Component:

The Decision Component accesses the Model Component to retrieve, sequence and control the models needed to solve a specific problem. It checks the consistency, integrity and completeness of the Model Base and the database for solving a specific decision problem. Finally, it queries the user through the User Interface if the data or models needed to solve the problem are inconsistent or incomplete.

Interactions involving the Model Component:

The Model Component accesses the Data Component to retrieve, sequence and control the data needed for implementing a specific model.

Interactions involving the User Interface:

At any time, the user may interactively input data or models that are not stored in the system, through the User Interface. Also, the user can directly access the Model Base or the database for model or data management and analysis that is not specifically related to a particular decision problem.

The Model Management System is the cornerstone of a software reuse system, and can be extremely complex in design. Much of the literature in this area confines itself to presenting knowledge representation schemes for implementing a conceptual MMS, but little has been formally done as far as actually building one. Most often, a manual, brute-force approach is taken for Model Base population and manipulation. Such is the case for the modeling system described in the next section.


ELECTRONIC COMBAT EFFECTIVENESS ANALYSIS: The Concepts Are Applied

The specific application of software reuse and model integration of interest to this paper involves conducting simulation studies to evaluate the effects of Electronic Combat (EC) in support of mission planning in battlefield scenarios. In addition to the problems which are universally common to software reuse in general, some domain-specific issues are introduced in studies of this sort. In contrast to earlier building-block applications of software reuse which, in general, use decomposition schemes based primarily on the implicit functional hierarchy of the system, this study includes another aspect which can be hierarchically described, that being the degree of analysis possible at each level. In a 1979 Air Force study, LtCol (then Major) Glen Harris addressed this new aspect in regard to analyzing force effectiveness, and introduced the concept of a "validated analytical hierarchy of models", stating "Neither a highly detailed approach nor a broad aggregate modeling approach by itself is adequate to analyze the complex battlefield. Unless both approaches are used and carefully integrated, the results obtained will not provide the insight required to determine why one ensemble of systems should be preferred over another. An integrated approach must be designed to answer several levels of questions as to the causal relationships involved..." Accordingly, various working groups under the Department of Defense defined four levels of Electronic Combat analysis, as follows:

Level I: System/Engineering Analysis. The analysis at this level primarily deals with individual systems or components; e.g., jammers, sensors, transmitters, antennas, etc. The objective is to measure the required and/or achieved engineering-level data for Electronic Warfare systems and their interactive effects with target systems [46]. The analysis at this level (and therefore any Measure of Effectiveness associated with this level) is limited to the effects of, for example, a single jamming component against a single target threat.

Level II: Platform Effects Analysis. At this level, the evaluation focuses on the component being associated with a platform; e.g., a radar jammer installed on an aircraft. The effectiveness of the installed system is then evaluated in the context of a one-on-one or few-on-few analysis.

Level III: Mission Effectiveness Analysis. Analysis at this level assesses the contribution of Electronic Combat to a combat mission environment, including other aspects such as Command and Control, time-sensitive maneuvers, and a defined enemy posture.

Level IV: Force Effectiveness Analysis. This encompasses all the activity associated with operations in the context of joint Air Force/Army/Navy campaigns against an enemy combined arms force, towards evaluating the contribution of Electronic Combat support in such a campaign.

Under this hierarchical scheme, a distinction is made between vertical integration and horizontal integration. Vertical integration refers to the ability to pass the data output of a model at one level of the hierarchy to the input of a higher (or lower) level model. This is in consonance with the concept of the UNIX pipeline method of integration discussed earlier. Another method of vertical integration is effected by using lower-level models as modules in higher-level models. For example, a Level I standard propagation model could be used in a Level II surface-to-air weapons model, which could in turn be used in a Level III mission effectiveness model. Vertical integration is important because it provides a validated audit trail of higher-level results to "hard" engineering data, range data, hybrid simulation outputs and/or flight test data. The credibility of most of the upper-level modeling rests on the ability to vertically integrate.
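
A hedged sketch of this second form of vertical integration, reusing the hypothetical Level I Propagation package from earlier as an embedded module inside an equally hypothetical Level II surface-to-air model (the names and the simple threshold detection rule are illustrative assumptions):

    package SAM_Model is
       -- Level II question: does the radar receive enough power
       -- from the target at this range to declare a detection?
       function Target_Detected (Radiated_Power_DB  : Float;
                                 Freq_MHz, Range_Km : Float;
                                 Threshold_DB       : Float) return Boolean;
    end SAM_Model;

    with Propagation;   -- the Level I component, reused as a module
    package body SAM_Model is
       function Target_Detected (Radiated_Power_DB  : Float;
                                 Freq_MHz, Range_Km : Float;
                                 Threshold_DB       : Float) return Boolean is
          Received_DB : constant Float :=
            Radiated_Power_DB
              - Propagation.Free_Space_Loss (Freq_MHz, Range_Km);
       begin
          return Received_DB >= Threshold_DB;
       end Target_Detected;
    end SAM_Model;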

Horizontal integration (federation) of data refers to the accessing (by models at all levels) of a master input/runtime database for data that is global in nature. This concept eliminates the problem of needing multiple databases for multiple models. When intelligence and other situational (state) changes require updating of data, they are changed in one place only, with the new values propagating to all models that need to reflect those updates; analogous to the blackboard approach followed in some disciplines of Artificial Intelligence.
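
A minimal sketch of the idea in Ada, assuming a hypothetical Scenario_Data package standing in for the master runtime database: each situational value lives in one place only, and every model at every level reads it through the same interface.

    package Scenario_Data is
       type Threat_State is (Inactive, Searching, Tracking, Engaging);
       -- Global situational data: updated in one place only.
       procedure Set_Threat_State (S : Threat_State);
       function  Current_Threat_State return Threat_State;
    end Scenario_Data;

    package body Scenario_Data is
       State : Threat_State := Inactive;

       procedure Set_Threat_State (S : Threat_State) is
       begin
          State := S;   -- all models reading the database see the update
       end Set_Threat_State;

       function Current_Threat_State return Threat_State is
       begin
          return State;
       end Current_Threat_State;
    end Scenario_Data;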

The Problem

The specific problem which is the subject of this paper deals with the lack of fidelity of an existing mission planning tool. Although soundly based on phenomenological and physical properties of barrage jamming (radiated power against radiated power), the tool lacked the required depth, as far as simulating existing Electronic Combat assets to the detail needed for real-life decision making. As in other disciplines, when users required this additional detail, a decision had to be made as to its implementation: rebuild or reuse. For the many reasons described earlier, an integration approach was deemed 1) the most attractive from a cost/time perspective, 2) the most intriguing from a research and development standpoint, but 3) certainly the most difficult (and possibly unattainable) from a realistic point of view.

A Solution

Complexities notwithstanding, an integration approach was pursued. First, existing model repositories were surveyed as to their compliance with the well-designed criteria of the survey. For instance, each candidate model was compared against such features as model availability, degree of validation, modeling methodology (e.g., stochastic event scheduling versus deterministic scripting), underlying assumptions, Measures of Effectiveness, host software/hardware, dependencies on other models/databases, processing modes (interactive versus batch), availability and currency of documentation, and others too numerous to mention. Finally, three candidate models were chosen, representative of a specific radar jammer, a specific communications jammer and an aircraft dedicated to suppressing enemy air defense systems. These models were obtained from their respective owning/maintaining agencies, and were microscopically studied to ascertain and elucidate the necessary integration properties, such as input/output characteristics, units, etc. It should be stressed here that this was a very time- and manpower-intensive undertaking, necessitated in general by the absence of object-oriented principles in any of the chosen models.

Once modified (minimally, so as not to violate the requirement of maintaining each model's standalone status), the three models were configured to run synchronously under the existing simulation executive. Together, the integrated system is called the Electronic Combat Effectiveness System (ECES), and runs on a VAX 11/780. At startup, the three models were informed of the initial conditions of the scenario (e.g., each model's own flight path, the perceived threat laydown, geographical/terrain data, etc.), which were, in general, required inputs of each to begin with. In addition, each model contained (or read) its own local data, necessary for standalone execution.

As the ECES model (the aggregate model) is run, messages are sent to the individual (component) models to update positions, announce threat movements and signal activity, and so on. In return, each model passes messages such as flight path changes, jamming noise figures (if requested), or other information pertinent to the mission. The aggregate model serves a dual purpose: it is the orchestrator of the event script during the scenario (updating the information common to all the models), as well as dynamically displaying all scenario activities (common to all, or as specifically reported by each model) on a graphics terminal.
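
The message traffic just described might be represented as a single variant-record type, sketched below in Ada. The message kinds and fields are assumptions patterned on the examples in the text, not the actual ECES message formats.

    package ECES_Messages is
       type Message_Kind is (Position_Update, Threat_Movement,
                             Flight_Path_Change, Jamming_Noise_Figure);

       type Message (Kind : Message_Kind := Position_Update) is record
          Sim_Time : Float;                 -- scenario time of the event
          case Kind is
             when Position_Update | Flight_Path_Change =>
                X, Y, Z : Float;            -- platform position or new leg
             when Threat_Movement =>
                Threat_Id    : Positive;
                New_X, New_Y : Float;
             when Jamming_Noise_Figure =>
                Noise_DB : Float;           -- reported on request
          end case;
       end record;
    end ECES_Messages;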

The Electronic Combat Effectiveness System now runs as it did before its decomposition; the difference with the current system is that validated jammer characteristics are now explicitly modeled, and are inherited as needed. Another major plus is that the synergistic effects of the Electronic Combat assets (taken pairwise or in total) can now be determined. This offers an obvious improvement over the independent execution of the three component models in standalone mode.

During the execution of the system, statistics are collected for ultimate Measures of Effectiveness (MOEs) calculation, corresponding to the four levels of the analysis hierarchy described earlier. For example, in determining mission-level (Level III) effectiveness, a factor called "percent neutralization" is computed, comparing the ability of (for example) a Target Tracking Radar to successfully track an incoming strike aircraft, both in the presence and absence of jamming support. The actual MOEs and overall results of the asset tradeoffs, while interesting, are beyond the scope of this paper. What is of greater interest is the fact that, despite the labor-intensive nature of the integration effort, software reuse was possible, and actually (in this application) facilitated a "total-is-greater-than-the-sum-of-its-parts" system which was not attainable through independent execution of the component models.
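
The report does not give the formula behind "percent neutralization"; one plausible, assumed form is the percentage reduction in successful tracks when jamming support is present versus absent, sketched here for illustration only.

    function Percent_Neutralization
      (Successful_Tracks_Clear  : Natural;   -- jamming absent
       Successful_Tracks_Jammed : Natural)   -- jamming present
      return Float is
    begin
       if Successful_Tracks_Clear = 0 then
          return 0.0;   -- nothing tracked even without jamming
       end if;
       return 100.0 *
         (Float (Successful_Tracks_Clear) - Float (Successful_Tracks_Jammed))
         / Float (Successful_Tracks_Clear);
    end Percent_Neutralization;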

One rather glaring omission in the system was the lack of a Model Management System per se. Since the library only consisted of the three candidate models, model selection was trivial. Obviously, now that the concept and a chosen integration methodology have been proved feasible (and useful), the next step is to obtain, build or acquire additional modular components for populating the component library. Armed now with the lessons learned during this integration effort, more consideration will likely be given towards assuring that any modules/models selected for inclusion adhere to the principles of data abstraction, encapsulation, and so on. These lessons learned will not only help in the next integration effort we undertake, but will also no doubt help define future directions in research in the field.

FUTURE RESEARCH DIRECTIONS

In the future (and starting now), research should be done in the following areas: model/software development, Model Management Systems and standardization. As mentioned above, stricter adherence to modern software engineering principles is a must. It appears that those development efforts that are founded on object-oriented design -- Ada in particular -- represent a good start towards expanding the technology base. In addition, new model structures may be developed which specifically address the issues of software reusability and integration. Model Management Systems and their associated tools must be developed to facilitate the selection and synthesis processes in configuring a "solution model" for a given problem. Expert System techniques could be brought to bear in nearly all phases of development of the Model Management System, especially for choosing and coupling certain mixes of models -- even offering explanations as to why they were chosen over others. The User Interface portion of the MMS suggests a fertile area for improvement. Just as window-based interfaces and mouseable items have progressed rapidly over a short time, other advances are likely in the areas of natural language interfaces, touch- and voice-sensitive input and visual languages [21]. Other research areas which could be applied to Model Management System development include database development, distributed and parallel processing, library representation schemes; and the list literally goes on and on. Underlying everything, however, is one common thread: standardization. Having been successfully applied in nearly every known discipline, standardization also holds great promise in the field of software reuse. As models/modules are developed and eventually evolve into successfully reusable components, they could be made available as standard component models. Standard components could then be coupled to form standard aggregate models, if needed for some analyses. Eventually, it is conceivable that packaged libraries of optional standard reusable components could be offered, much as libraries of mathematical routines are available as part of compiler packages. A less obvious area requiring standardization deals with the problem of integration. Compatibility standards should be investigated, with which newly written models would be required to comply. Standard "closeness metrics" should be defined, to rate candidate components equally. Finally, some thought should be given to even standardizing units for often-used parameters, possibly even those parameter names. Again, this list is only meant to show a few of the many areas that are amenable to standardization.

CONCLUSION

Despite documented prophecies to the contrary, software reuse and model integration will continue to grow. As expected improvements in programming practices and standardization materialize, reuse system development will migrate from this awkward period of "working with the givens" towards the inception of standard models, interfaces, and perhaps even packaged standard libraries. This next generation of software system development will not be without cost. There will be great outlays and sacrifices in all areas, not the least of which will be obtaining management support. But no matter the cost, no matter the effort, no matter the level of resistance encountered, this technology area should be vigorously pursued, as were many other unlikely, yet promising areas which are now standard practices. Simply put: for large-scale software system development, there is no realistic alternative.


BIBLIOGRAPHY

1. R. Reese, D. L. Wyatt, "Software Reuse and Simulation", Proceedings of the 1987 Winter Simulation Conference

2. G. C. Vansteenkiste, "New Challenges in System Simulation", Proceedings of the 1985 Summer Computer Simulation Conference

3. K. J. Murray, S. V. Sheppard, "Automatic Model Synthesis: Using Automatic Programming and Expert Systems Techniques Toward Simulation Modeling", Proceedings of the 1987 Winter Simulation Conference

4. B. P. Zeigler, T. G. Kim, "The DEVS Formalism: Hierarchical, Modular Systems Specification in an Object Oriented Framework", Proceedings of the 1987 Winter Simulation Conference

5. D. Kostelski, J. Buzacott, K. McKay, X. Liu, "Development and Validation of a System Macro Model Using Isolated Micro Models", Proceedings of the 1987 Winter Simulation Conference

6. R. G. Sargent, "An Overview of Verification and Validation of Simulation Models", Proceedings of the 1987 Winter Simulation Conference

7. B. P. Zeigler, "Hierarchical Modular Modeling/Knowledge Representation", Proceedings of the 1987 Winter Simulation Conference

8. B. P. Zeigler, T. I. Oren, "Multifaceted, Multiparadigm Modelling Perspectives: Tools for the 90's", Proceedings of the 1987 Winter Simulation Conference

9. R. G. Sargent, "Joining Existing Simulation Programs", Proceedings of the 1987 Winter Simulation Conference

10. A. I. Concepcion, S. J. Schon, "SAM - A Computer Aided Design Tool for Specifying and Analyzing Modular, Hierarchical Systems", Proceedings of the 1987 Winter Simulation Conference

11. J. W. Rozenblit, S. Sevinc, B. P. Zeigler, "Knowledge-Based Design of LANs Using System Entity Structure Concepts", Proceedings of the 1987 Winter Simulation Conference

12. S. A. Shoaf, "A Modular Approach to the Simulation of Manufacturing Processes", Proceedings of the 1983 Winter Simulation Conference

13. K. J. Musselman, et al., "Practitioners' Views on Simulation", Panel Discussion from the Proceedings of the 1983 Winter Simulation Conference

14. W. T. Jones, B. J. Jones, "Computer Simulation Using Hierarchical Models", Proceedings of the 6th Pittsburgh Conference, 1975

15. B. R. Konsynski, J. F. Nunamaker, "A Generalized Model for Computer-Aided Process Organization in Design of Information Systems", Proceedings of the 6th Pittsburgh Conference, 1975

16. D. W. Balmer, "Modelling Styles and Their Support in the CASM Environment", Proceedings of the 1987 Winter Simulation Conference

17. R. Prieto-Diaz, P. Freeman, "Classifying Software for Reusability", IEEE Software, Jan 1987, p 6

18. R. F. Kamel, "Effect of Modularity on System Evolution", IEEE Software, Jan 1987, p 48

19. A. Reilly, "Roots of Reuse", IEEE Software, Jan 1987, p 4

20. B. D. Shriver, "Reuse Revisited", IEEE Software, Jan 1987, p 5

21. S. Chang, "Visual Languages: A Tutorial and Survey", IEEE Software, Jan 1987, p 29

22. W. Tracz, "Reusability Comes of Age", IEEE Software, Jul 1987, p 6

23. P. G. Bassett, "Frame-Based Software Engineering", IEEE Software, Jul 1987, p 9

24. G. E. Kaiser, D. Garlan, "Melding Software Systems from Reusable Building Blocks", IEEE Software, Jul 1987, p 17

25. B. A. Burton, et al., "The Reusable Software Library", IEEE Software, Jul 1987, p 25

26. M. Lenz, H. A. Schmid, P. F. Wolf, "Software Reuse Through Building Blocks", IEEE Software, Jul 1987, p 34

27. A. Gargaro, T. L. Pappas, "Reusability Issues and Ada", IEEE Software, Jul 1987, p 43

28. S. N. Woodfield, D. W. Embley, D. T. Scott, "Can Programmers Reuse Software?", IEEE Software, Jul 1987, p 52

29. G. Fischer, "Cognitive View of Reuse and Redesign", IEEE Software, Jul 1987, p 60

30. R. Conn, "Ada Software Repository", IEEE Software, Jul 1987, p 105

31. T. Biggerstaff, C. Richter, "Reusability Framework, Assessment, and Directions", IEEE Software, Mar 1987, p 41

32. B. Meyer, "Reusability: The Case for Object-Oriented Design", IEEE Software, Mar 1987, p 50

33. K. W. Miller, L. J. Morell, F. Stevens, "Adding Data Abstraction to Fortran Software", IEEE Software, Nov 1988, p 50

34. G. Gruman, "Early Reuse Practice Lives Up To Its Promise", IEEE Software, Nov 1988, p 87

35. J. R. Emshoff, R. L. Sisson, "Design and Use of Computer Simulation Models"

36. B. P. Zeigler, "Theory of Modeling and Simulation"

37. D. Dolk, B. Konsynski, "Knowledge Representation for Model Management Systems", IEEE Transactions on Software Engineering, Vol SE-10, No. 6, Nov 1984

38. T. I. Oren, "Concepts and Criteria to Assess Acceptability of Simulation Studies: A Frame of Reference", Communications of the ACM, Vol 24, No. 4, Apr 1981

39. Klein, Konsynski, Beck, "A Linear Representation for Model Management in a DSS", Journal of Management Information Systems, Vol II, No. 2, Fall 1985

40. McIntyre, Konsynski, Nunamaker, Jr., "Automating Planning Environments: Knowledge Integration and Model Scripting", Journal of Management Information Systems, Vol II, No. 4, Spring 1986

41. T.-P. Liang, "Integrating Model Management with Data Management in DSS", Decision Support Systems 1 (1985), p 221

42. Applegate, Konsynski, Nunamaker, Jr., "Model Management Systems: Design for Decision Support", Decision Support Systems 2 (1986), p 81

43. Konsynski, Sprague, "Future Research Directions in Model Management", Decision Support Systems 2 (1986), p 103

44. Fedorowicz, Williams, "Representing Modeling Knowledge in an Intelligent Decision Support System", Decision Support Systems 2 (1986), p 3

45. G. L. Harris, "Computer Models, Laboratory Simulators and Test Ranges: Meeting the Challenge of Estimating Tactical Force Effectiveness in the 1980's", 1979

46. G. R. Dougherty, "On What Basis, EW?", Journal of Electronic Defense, Oct 1984

47. A. F. Sisti, et al., "Electronic Combat Development and Demonstration Component", RADC-TM-86-9, Aug 1986

48. A. F. Sisti, et al., "Automated Intelligence Decision Aids", RADC-TR-87-17, Feb 1987

49. M. Ringler, G. Brown, "Electronic Combat Effectiveness Study", Final Technical Report, Jul 1988

MISSION
of
Rome Air Development Center

RADC plans and executes research, development, test and selected acquisition programs in support of Command, Control, Communications and Intelligence (C3I) activities. Technical and engineering support within areas of competence is provided to ESD Program Offices (POs) and other ESD elements to perform effective acquisition of C3I systems. The areas of technical competence include communications, command and control, battle management, information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics, and propagation, and electronic reliability/maintainability and compatibility.

