
Fig. 1. Student pilot in training simulator: air refueling.

Knowledge-Based System to Assess Air Crew Training Requirements

Barry R. Smith and Carolyn P. Banda

An important consideration in developing mission requirements and cockpit equipment for an evolving aircraft is the training required for safe and effective operation of the completed vehicle by air crews. A prototype training assessment tool has been developed as part of a computer-based cockpit design and analysis workstation that estimates the training resources and time imposed by the anticipated mission and cockpit design. By embedding instructional system and training analysis domain knowledge in a production system environment, the tool allows crew station designers to readily determine the training ramifications of their choices for cockpit equipment, mission tasks, and operator qualifications. Initial results have been validated by comparison to an existing training program, demonstrating the tool's utility as a conceptual design aid and illuminating areas for future development.

Presented at the 1989 IEEE International Conference on Systems, Man and Cybernetics, Cambridge, MA, November 14-17, 1989. Barry R. Smith is with the U.S. Army Aeroflightdynamics Directorate and Carolyn P. Banda is with Sterling Software, Inc. Both work at NASA-Ames Research Center, Moffett Field, CA 94035.

Knowledge-Based Approach

The Army-NASA Aircrew/Aircraft Integration (A³I) Program is an exploratory development effort conducted by the U.S. Army Aeroflightdynamics Directorate and the NASA Aerospace Human Factors Research Division. The program, initiated in 1984, is chartered to develop a human engineering workstation to aid in the conceptual design of advanced rotorcraft cockpits. This workstation, called MIDAS (for Man-machine Interface Design and Analysis System), includes a number of interactive, analytic, and graphic tools to integrate and depict human engineering principles early in the design process.

Because of the growing expense and sophistication required in training personnel to operate complex aeronautical systems, one of the key components of MIDAS is a knowledge-based system to estimate the training resources and time for the anticipated aircrew tasks and cockpit equipment. Eventually the training costs can be estimated as well. This Training Assessment Module is based on the Instructional System Development (ISD) methodology frequently used by

IEEE Control Systems Magazine 0272-1708/90/0800-009 $01.00 © 1990 IEEE 9


the Department of Defense. At a simplified level, this methodology involves creating an exhaustive list of the tasks necessary to accomplish a particular job or mission and then describing these tasks in the form of behavioral objectives, conditions, and standards. The collective tasks are then organized into a hierarchy of learning such that subordinate or enabling objectives are taught prior to attempting more difficult tasks. A comparison is then made between the task demands and the composite skills and knowledge of the planned operator. The resulting differences constitute the training requirement for a particular system. Based on a number of task attributes such as cuing requirements or desired performance level, individual training objectives are then allocated to various instructional media. These media typically range from simple workbooks or static mockups to extremely complex and expensive flight simulators.
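The gap analysis at the heart of this simplified methodology can be sketched in a few lines. This is an illustrative sketch only; the task names, skill sets, and dictionary layout are hypothetical, not drawn from the ISD documents:

```python
# Sketch of the simplified ISD flow: list the mission tasks, compare
# each task's demanded skills against the planned operator's composite
# skills, and treat what remains as the training requirement.
# All task and skill names below are hypothetical.

tasks = {
    "perform-hover": {"collective-control", "pedal-control"},
    "radio-call": {"radio-procedures"},
}

operator_skills = {"radio-procedures"}  # planned operator's skills

def training_requirement(tasks, operator_skills):
    """Return each task's unmet skill demands (tasks with no gap drop out)."""
    return {name: needed - operator_skills
            for name, needed in tasks.items()
            if needed - operator_skills}

print(training_requirement(tasks, operator_skills))
# only "perform-hover" has unmet demands
```

Everything downstream (media allocation, lesson grouping) operates on this difference set rather than on the full task list.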

Once all of the required instructional media are defined, the individual task and media elements are grouped into lessons according to their similarity or mission sequence and then grouped into courses. The Instructional System Development methodology is described in detail in [1], [2]. Although systematic, the historical approaches to applying this methodology in large-scale training systems are cumbersome and inefficient. First, volumes of intermediate data are generated, often requiring months of sustained effort. The large data volume makes it difficult to isolate salient tasks from the aggregate requirements. Second, the important media selection process is frequently politicized and inconsistent, resulting in drastically different media choices for similar tasks. Finally, this methodology has traditionally been performed separately from the aircraft development, usually after a prototype has been designed. These factors make a knowledge-based system approach attractive as a training assessment tool, particularly when implemented as part of a designer's workstation for the actual aircraft design.

MIDAS Approach

The MIDAS Training Assessment Module capitalizes on some related work performed by the Logicon Corporation under contract to the Air Force. Logicon developed the Training Analysis Support Computer System (TASCS), written in dBase III for an IBM PC [3]. Their program walks a user through the complete Instructional System Development process by displaying a series of query screens and prompts for information. The Logicon system does contain some embedded

[Fig. 2 diagram: the Task Data Collection Program (TDCP, Lisp) produces an ART-readable task file; user inputs (student level, budget level, task selections) feed the Training Assessment Module (TAM, ART and Lisp), which produces a quasi-Program of Instruction (media, training times, learning experiences) and an optional justification network.]

Fig. 2. Program of Instruction development process.

domain knowledge, particularly related to determining learning experiences (or instructional strategies) and calculating training times. However, it encompasses the entire instruction design methodology, and much of the information it generates is of interest only to training system specialists, not necessarily cockpit designers. Consequently, the MIDAS developers chose to build upon the foundation of the Logicon system but abbreviate the scope addressed, include a more robust training domain knowledge representation, and concentrate on output pertinent to designers of both training systems and aircraft systems.

The output of the Training Assessment Module is in the form of a quasi-Program of Instruction for each operator task examined. This Program of Instruction is a set of lesson elements that includes the learning experiences, training media, and time required to instruct planned students in the tasks and equipment under conceptual development. Task descriptions are carefully worded so that the individual tasks and the corresponding behavioral objectives are considered synonymous. While the module currently does not implement the latter stages of Instructional Systems Development, where individual tasks are grouped into lessons or courses, the powerful ART (Automated Reasoning Tool) by Inference Corporation and Common Lisp environments have enabled the authors to develop a tool that rapidly isolates the most significant training impacts of conceptual vehicles and allows designers to ask pertinent "what if" questions about changes to mission requirements, cockpit equipment, operator skills, and training budgets.

Module Implementation

Performing training assessment with the Training Assessment Module is a two-step process. First, using detailed mission decomposition and equipment data generated by other MIDAS tools, the user, who may be a systems designer or a training designer, runs a task data collection program to create a file containing the tasks of interest and their characteristics. This program formats task information into an ART-readable file composed of task objects (ART schemata) and facts. After the task data collection process is completed, the user loads the Training Assessment Module, along with the task data file, and initiates the training assessment process by selecting an entering student level, a budget level, and some or all of the tasks from the file. In order to predict training requirements and to develop a Program of Instruction for each task, the module uses the task information and training design knowledge embedded in its rules. Fig. 2 depicts this process.
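The two-step flow might be sketched as follows. JSON stands in for the ART-readable schema/fact format, and the student and budget inputs are carried but the rules that consume them are elided; task names and fields are hypothetical:

```python
import json, os, tempfile

# Sketch of the two-step process: a collection step writes task data to
# a file, and the assessment step reads it back and keeps only the
# user's task selections. JSON stands in for the ART-readable format.

def collect_task_data(tasks, path):
    with open(path, "w") as f:
        json.dump(tasks, f)

def start_assessment(path, student_level, budget_level, selected_tasks):
    with open(path) as f:
        tasks = json.load(f)
    # Student level and budget level would drive the inference rules;
    # here we only perform the task-selection step.
    return {name: t for name, t in tasks.items() if name in selected_tasks}

path = os.path.join(tempfile.gettempdir(), "tam_tasks.json")
collect_task_data({"hover": {"difficulty": 4},
                   "radio-call": {"difficulty": 1}}, path)
print(start_assessment(path, "basic graduate", "high", {"hover"}))
```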

The first program, Task Data Collection, is written in Symbolics Common Lisp and runs under Genera 7.2 on a Symbolics 3600 series computer. Its menu-oriented user interface is designed to provide a structured, accurate way to enter task attribute values and facts related to a student's familiarity with cockpit tasks and equipment.

10 August 1990


Currently there are 9 task attributes. Some attributes include selection of multiple values (there are 23 possible reasons for difficulty, for example) and others have only a simple numerical rating (for example, safety criticality is a number from 1 to 5). Specifically, task attributes include cuing requirements, reasons for difficulty and a difficulty rating, reasons for safety criticality and a safety criticality rating, mission criticality rating, frequency of task performance, performance time, and learning subcategories. For example, values for the attribute cues include still visual, dynamic visual, wide field of view, distant (out-the-window) field of view, color, photographic detail, motion, sound, and tactile feel. Possible reasons for difficulty are exertion; appreciation of position and energy; precise manipulation (psychomotor realm); decision on mission, situation, or equipment; conceptual memory and literal memory (cognition); and others related to cuing and time sequence.
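The nine attributes might be grouped in a single record like the sketch below. The field names paraphrase the list above and are not the actual ART schema slots:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the nine task attributes described
# above; field names are our paraphrase, not the actual ART schemata.
@dataclass
class TaskAttributes:
    cues: set = field(default_factory=set)                # e.g. {"dynamic visual"}
    difficulty_reasons: set = field(default_factory=set)  # up to 23 reasons
    difficulty_rating: int = 1                            # subjective 1..5
    safety_reasons: set = field(default_factory=set)
    safety_criticality: int = 1                           # 1..5 per the text
    mission_criticality: int = 1
    frequency: str = "occasional"
    performance_time_min: float = 0.0
    learning_subcategories: set = field(default_factory=set)

    def __post_init__(self):
        # mirrors the structured entry checks a menu interface would do
        if not 1 <= self.safety_criticality <= 5:
            raise ValueError("safety criticality must be between 1 and 5")

t = TaskAttributes(cues={"dynamic visual", "motion"},
                   difficulty_reasons={"precise manipulation"},
                   difficulty_rating=4,
                   safety_criticality=5)
print(t.safety_criticality)  # 5
```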

Each task is classified with respect to learning subcategories, which include the cognitive, psychomotor, and affective domains and several miscellaneous characteristics. Cognitive categories classify a task according to nature and extent of cognitive demands on the operator; they form a matrix with the type of information on one axis (fact, concept, procedure, rule, principle) and the method of using this information on the other axis (remember, use aided, use unaided). Psychomotor learning subcategories specify whether a task requires fine or gross motor control and the number of spatial dimensions involved in task performance. These learning subcategories are described in more detail in [4].

The second program, the Training Assessment Module, is written primarily in ART, with a few functions in Common Lisp; it also runs on the Symbolics under Genera 7.2. This module contains about 100 rules (largely forward-chaining) that examine task characteristics and predict training requirements accordingly. The following sample rule (paraphrased from ART) assigns a Cockpit Procedures Trainer as a medium for the demonstration of a task having these indicated characteristics:
IF (AND (task has learning experience = demonstration)
        (task reason for difficulty = specific sequence)
        (task criticality rating <= 3)
        (task does not require significant motion cues)
        (task does not require distant field-of-view cues))
THEN
    (assign Cockpit Procedures Trainer as medium)
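Outside of ART, the same rule reads as a simple predicate. The attribute names below are our paraphrase of the rule's clauses, not the actual schema slots:

```python
# Sketch of the sample rule outside ART: the condition side tests task
# facts, the action side assigns a medium. Attribute names paraphrase
# the rule above and are hypothetical.

def assign_demo_medium(task):
    if ("demonstration" in task["learning_experiences"]
            and "specific sequence" in task["difficulty_reasons"]
            and task["criticality_rating"] <= 3
            and "motion" not in task["cues"]
            and "distant field of view" not in task["cues"]):
        return "Cockpit Procedures Trainer"
    return None  # some other (higher fidelity) rule must fire instead

task = {
    "learning_experiences": {"demonstration", "full-task training"},
    "difficulty_reasons": {"specific sequence"},
    "criticality_rating": 2,
    "cues": {"still visual"},
}
print(assign_demo_medium(task))  # Cockpit Procedures Trainer
```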

The domain knowledge encoded in the Training Assessment Module's rules was obtained both from the Logicon system and local personnel familiar with aircrew training procedures and devices. In addition, one of the authors served as a subject matter expert due to his extensive experience in developing training simulators and instructional systems. Initially, informal interviews were conducted to understand the reasoning process followed by the domain experts, isolating how they decomposed and approached the problem, as well as the task, device, and operator attributes they found most critical. Logicon's system was then studied, with the relevant portions captured and augmented to match the stages of processing and heuristics elicited from the domain knowledge sources. The extent and rationale for the rules and processing stages actually reduced to code are described under the section "Knowledge Base Features."

Training assessment for a given task is implemented as a sequential process. First, a set of learning experiences (or instructional strategies) is assigned according to task characteristics. Possible learning experiences are three types of explanation (textual, graphic, and dynamic), demonstration, three types of part-task training (cognitive, psychomotor, and affective), and full-task training. As a minimum, a single type of explanation is always assigned, as is full-task training. As an option, any or all of the remaining learning experiences may be assigned according to task characteristics and student background. A summary of the assignment sensitivities is shown in Fig. 3.
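The minimum assignment (one explanation type plus full-task training, with the other experiences optional) can be sketched as follows; the trigger mechanism here is a placeholder for the module's actual task-characteristic rules:

```python
# Sketch of the first assessment stage: an explanation type and
# full-task training are always assigned; the remaining experiences
# are added when task characteristics indicate them. The "indicated"
# trigger set is a placeholder for the module's rules.

OPTIONAL_EXPERIENCES = ["demonstration", "cognitive part-task training",
                        "psychomotor part-task training",
                        "affective part-task training"]

def assign_learning_experiences(task):
    experiences = [task.get("explanation_type", "textual explanation"),
                   "full-task training"]          # always assigned
    for exp in OPTIONAL_EXPERIENCES:
        if exp in task.get("indicated", set()):   # placeholder trigger
            experiences.append(exp)
    return experiences

print(assign_learning_experiences(
    {"explanation_type": "dynamic explanation",
     "indicated": {"demonstration"}}))
```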

After a task has its ascribed set of learning experiences, a training medium is chosen for each experience. Available training media include textbook/workbook, lecture, videotape, videodisc-Computer Based Training (CBT), cockpit familiarization trainer, cockpit procedures trainer, operational flight trainer, part-task trainer, weapon system trainers with and without motion platforms, and the actual system. These media span the major classes of training devices presently in use [5].

Finally, a training time for each lesson element is computed, based on the task characteristics, learning experience, training medium, and student background.

Salience values for certain rules are used during inferencing when it is important to control the order of firing. For example, this is done in the assignment of media within the various categories of learning experiences where a "default" method of reasoning was necessary to emulate the domain expert's line of reasoning. For demonstration and full-task training learning experiences, salience values allow the Training Assessment Module to assign media with successive levels of fidelity, starting at the low end (cockpit familiarization trainer) and progressing to the actual system when no other medium could be assigned.
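The low-to-high fidelity search that salience enforces can be emulated with an ordered list; the cue sets each medium "provides" below are invented for illustration:

```python
# Sketch of the default-reasoning order: try media from low to high
# fidelity, stop at the first whose provided cues cover the task's
# needs, and fall back to the actual system. Cue sets are illustrative.

MEDIA_BY_FIDELITY = [
    ("cockpit familiarization trainer", set()),
    ("cockpit procedures trainer", {"specific sequence"}),
    ("operational flight trainer", {"dynamic visual"}),
    ("weapon system trainer", {"dynamic visual", "motion"}),
]

def choose_medium(task_cue_needs):
    for medium, provides in MEDIA_BY_FIDELITY:
        if task_cue_needs <= provides:   # lowest fidelity that suffices
            return medium
    return "actual system"               # no trainer satisfies the cues

print(choose_medium({"dynamic visual"}))        # operational flight trainer
print(choose_medium({"distant field of view"})) # actual system
```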

Knowledge Base Features

Questions about "how much fidelity is enough?" have plagued the simulation community for many years. Because such queries are difficult to answer succinctly, training device specifications often default to duplication of the actual vehicle as the answer. The resulting expense can be staggering: over $200 million for 7 B-1B Weapon System Trainers and Mission Trainers [6]. Consequently, the Training Assessment Module's knowledge base generally attempts to provide the most cost-effective solution to the training requirement. Student entry level and training budget start the process. If the student has prior experience with the same equipment or task from a different helicopter, fewer learning experiences and less time-to-train result than from a completely new task with new equipment. For example, demonstrations are not recommended for familiar tasks performed with new equipment. The student's prior experience with similar tasks and equipment also affects the time-to-train calculations through use of a multiplier ranging from 0.5 to 1.0 of the base value.

[Fig. 3 diagram: columns labeled "Assign Learning Experiences", "Assign Media", and "Compute".]

Fig. 3. Summary of assignment sensitivities.

TRAINING ASSESSMENT FOR:  Automated Target Handover
LEARNING OBJECTIVE:       Perform Automated Target Handover
INCOMING STUDENT:         Graduate of Basic Helicopter Training
BUDGET LEVEL:             High

LEARNING EXPERIENCE             TRAINING MEDIUM              TIME TO TRAIN (MINUTES)
Dynamic Explanation             Lecture with Visual Aids     156
Demonstration                   Weapon System Trainer         26
Cognitive Part-Task Training    Dedicated Part-Task Trainer   64
Psychomotor Part-Task Training  Dedicated Part-Task Trainer   80
Affective Part-Task Training    Actual System                 50
Full-Task Training              Weapon System Trainer         75
Total                                                        451 (7.5 hours)

Fig. 4. Representative output from the Training Assessment Module.

The budget level determination allows discrimination among media types that satisfy the same learning experiences. High-cost simulator features, such as motion platforms or day/night/dusk visual systems, are not added solely because a large budget is available. However, if such cuing or dynamics are present in the actual task and are not determined to be absolute requirements due to mission criticality or safety-of-flight reasons, varying the budget level allows users to quickly see if the incremental addition of training (or reduction in time for some higher fidelity media) is worth the added cost. Conversely, even when a low budget determination is entered, if the task demands require a costly full-mission simulator or weapon system trainer, the Training Assessment Module recommends such media despite the expense. The embedded rules always attempt to assign the low-end media types first, however, and progress upward only if the task characteristics dictate the additional training features.

The time-to-train calculations consider a number of task attributes for the estimate. The learning subcategory (cognitive, psychomotor), the learning experience type, reasons for difficulty, the difficulty rating, and the student background are all factors. These formulas were taken from the Logicon system's code and then modified to include an additional argument based on media type. If, for example, budgetary restrictions require the use of the actual system rather than a dedicated part-task trainer for a particular psychomotor task, a multiplication factor is used to adjust the time-to-train calculations for the less than optimal media. This factor was included because of the need to isolate and remove the task's additional cognitive demands or else repeat the task to enforce the particular psychomotor behavior desired. The actual values for the media type multipliers are estimates which are in the process of being validated with current training programs.
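Both adjustments enter the estimate multiplicatively. A sketch follows; the 156-min base value comes from the article's example output, the halving matches the experienced-student case described in the text, and the media multiplier value is illustrative:

```python
# Sketch of the time-to-train adjustment: a base time scaled by a
# prior-experience multiplier (0.5 to 1.0 per the text) and a
# media-type multiplier when a less-than-optimal medium is used.

def time_to_train(base_minutes, experience_mult=1.0, media_mult=1.0):
    if not 0.5 <= experience_mult <= 1.0:
        raise ValueError("experience multiplier must lie in [0.5, 1.0]")
    return base_minutes * experience_mult * media_mult

# A familiar task halves the explanation time (156 min -> 78 min):
print(time_to_train(156, experience_mult=0.5))  # 78.0
```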

Representative output from the Training Assessment Module is shown in Fig. 4. The results show training requirements for the Automated Target Handover task for a graduate of basic helicopter training, assuming a high budget level. Note that a high fidelity Weapon System Trainer was assigned for the demonstration and full-task training segments, but a dedicated part-task trainer was recommended for cognitive and psychomotor training.

Related results (not shown) demonstrate the module's sensitivity to incoming student training level and available budget. Training requirements for the same task with an Apache pilot (for whom the task is familiar but the equipment is not) do not include the three part-task training sessions. In addition, explanation training time for the more experienced Apache pilot is halved (from 156 to 78 min) and demonstration time is reduced by a factor of 0.8 (from 26 to 21 min). Full-task training time remains the same.

Similarly, varying the budget level from high to low for the basic helicopter training graduate in the same task causes the actual helicopter to be assigned for the demonstration, part-task training, and full-task training to save the cost of acquiring expensive simulators and specialized trainers. Time-to-train is accordingly longer (by a factor of 1.2) to allow for the overhead of configuring the actual system to train the isolated learning objective. The actual system is also assigned as the medium for demonstration and full-task training for an Apache pilot and a low budget.

One of the benefits of using ART as a shell for implementing the Training Assessment Module is its capability to take the provided output and quickly isolate the specific task characteristics or domain rules which drove the results. Traditional training analyses have often suffered from this lack of "inspectability." Major design decisions regarding training device features or curriculum content are frequently made early in the training system acquisition with the designer unable to retrospectively understand the corresponding operational requirement for such decisions. The justification feature of ART provides a directed graph of nodes containing the rules fired and requisite facts present to arrive at a particular conclusion. For example, in the output shown above, with a few keystrokes the designer can easily bring up the justification network for assigning the demonstration learning experience to the Automated Target Handover task and see that the recommendation was made for a variety of reasons. These may include high safety and mission criticality ratings, a relatively high difficulty rating combined with a requirement for fine motor manipulation in multiple dimensions, coordination requirements with other parties, and the likelihood for a high level of anxiety.
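The essence of such a justification record, stripped of ART's graph machinery, is simply storing each conclusion with the rule and facts that produced it. Rule and fact names below are illustrative:

```python
# Sketch of an inspectable conclusion: record the rule name and the
# facts that matched, so a designer can ask "why?" afterwards.
# Rule and fact names are illustrative, not the module's actual ones.

justifications = {}

def conclude(conclusion, rule, because):
    justifications[conclusion] = {"rule": rule, "facts": because}

conclude("assign demonstration to Automated Target Handover",
         rule="demo-for-critical-tasks",
         because=["high safety criticality", "high mission criticality",
                  "fine motor manipulation in multiple dimensions"])

def why(conclusion):
    j = justifications[conclusion]
    return f"{conclusion}: rule {j['rule']} fired on {', '.join(j['facts'])}"

print(why("assign demonstration to Automated Target Handover"))
```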

The capability to quickly justify or examine the module's inferencing process is also valuable when testing and debugging the knowledge base with a range of sample tasks. Once confident that the system is providing "expert" predictions, the user can determine the training requirements for the complete set of tasks without the inconsistencies inherent in traditional methods.

Predictions are on Target

Ten widely varied aircrew tasks from the actual Apache training program were used as test cases to validate output from the Training Assessment Module. While these tasks were "new" to the knowledge-based system, their format, content, and level of abstraction were consistent with those tasks used in the Training Assessment Module's development. Ranging from purely flying-oriented tasks to aircraft system emergency procedures, the recommended learning experiences, training media, and training time for a basic helicopter training graduate were compared to those contained in the current Flight Training Guide for the Apache qualification course. This course is conducted in 50 training days and consists of 45.5 h in the actual helicopter, 13.0 h in a Combat Mission Simulator (equivalent to a full-fidelity Weapon System Trainer with its motion platform and out-the-window visual system), 12.0 h in a Cockpit Procedures Trainer, 8.4 h in a Target Acquisition and Detection System (TADS) Selected Task Trainer, and over 100 academic hours [7].

Twenty-eight of the 29 learning experiences predicted by the Training Assessment Module for the 10 tasks examined are present in the current training program. Notably, the module correctly identified the need for intensive part-task training in the Apache Target Acquisition and Detection System when used to engage targets with the Hellfire missile, 2.75-in rockets, and 30-mm gun. However, psychomotor part-task training was recommended for an engine-out hover task which is currently taught by demonstration and full-task practice.

Media predictions matched those currently in use in 23 of the 29 learning experiences. The exceptions were all in the academic portion of each task. In these six cases, interactive computer-based trainers were recommended for the explanation learning experience because of the task's complexity and dynamic visual cuing requirements. The actual training program does not contain such devices, relying instead on lectures, static visual aids, and extensive demonstration with flight simulations. It was encouraging to note, however, that perfect correlation existed with the actual curriculum every time the Training Assessment Module assigned media for demonstrations, part-task, or full-task training (whether low-fidelity procedures-oriented trainers, full-fidelity weapon system trainers, or the Apache).

The training time predictions were slightly more difficult to validate for three reasons. First, the Flight Training Guide did not break out times for individual tasks; instead, a number of tasks were taught during the course of any given day. Some days, over 40 tasks were taught while other days only 3. Secondly, several days were devoted to discretionary review and practice of any task previously learned. Finally, some tasks were continuously repeated as part of more advanced tasks (such as performing a take-off to hover prior to beginning a new mission phase). Nevertheless, comparisons were made with two complete, separate training days. In each case, a training assessment was performed for all the tasks taught on that day, and 50% of the following day's review was assumed to be devoted exclusively to the previous day's tasks. Task repetitions beyond such time were assumed to be incidental to the training time totals. According to the Flight Training Guide, day number 9 consists of 4.5 h of instruction on various procedures: engine start, hovering, specialized landing, and emergencies. This is followed by 3.2 h of review the following day. This total of 7.7 h is twice the 3.8 h predicted by the Training Assessment Module for the same tasks. However, day number 32, which consists of performing target identification, tracking, and engagement using the Apache's varied armament, showed a much closer training time correlation with the module's results. The predicted training time of 7.8 h was within 10% of the actual time of 8.4 h. Additional validation is obviously needed for the training time predictions, as is more specific data on the time spent learning individual tasks. Considerable "overhead" time is most likely included in the Apache data since the aircraft or simulator often must be flown to the area where the actual task conduct occurs. However, the majority of the module's output compares favorably to the available data, and the initial "runs" are encouraging.
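The two day-level comparisons reduce to quick arithmetic checks on the figures quoted above:

```python
# Day 9: 4.5 h of instruction plus 3.2 h of assumed review, versus a
# 3.8 h prediction. Day 32: 7.8 h predicted versus 8.4 h actual.
day9_actual = 4.5 + 3.2          # 7.7 h total
day9_predicted = 3.8             # roughly half the actual time
day32_predicted, day32_actual = 7.8, 8.4

print(round(day9_actual, 1))                                      # 7.7
print(abs(day32_actual - day32_predicted) / day32_actual < 0.10)  # True
```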

Approximately five person-months were spent in designing and developing the Training Assessment Module and the Task Data Collection Program. Processing the validation tasks took about three weeks.

Enhancement Goals

While the present knowledge base is sufficient to address the immediate goals of the Army-NASA Aircrew/Aircraft Integration Program, several "lessons learned" from this phase of development have illuminated desirable features for future work. From a software implementation standpoint, three items stand out.

First, for ease of coding, some of the rules were designed to check for the absence of facts and then assert conclusions by default. In the previously described media selection, the task characteristics are first examined by the rules pertaining to the lowest fidelity media; if they aren't appropriate, progressively higher fidelity media rules are queried. By the time tests are made for the most elaborate training devices, the heuristics resemble statements like, "If a previous medium has not been assigned to this task because of a high mission criticality rating or unsatisfied cuing requirement, then assign a weapon system trainer as the medium." While this process asserts valid conclusions and may indeed choose the most cost-effective training methods, the resulting justification feature is nondescriptive. Justification under these circumstances fails to readily isolate the attributes which caused the assignment. Rewriting some of these rules so that they emphasize present versus absent facts would alleviate this problem and provide a much more inspectable conclusion.

Fig. 5. Crewstation panel designed using the MIDAS workstation, out-the-window view, DMA terrain.

Second, task data were collected and stored as individual files. Characteristics were not represented in a database form, nor was an attempt made to depict a collection of tasks in a hierarchical structure. While acceptable for examining individual task training requirements, using a structured database format will be necessary in the future to easily examine and modify task data, as well as show the relationships and subordination among a large collection of tasks.

Third, the current method of addressing the planned student's incoming skills and knowledge embeds his or her familiarity with the task and equipment within each task's data file. For example, "Send Radio Message Using the Radio [ARC 186 VHF]" is identified as an "old task" with "old equipment" for an operator qualified in the Apache, while it is an "old task" with "new equipment" for pilots with only basic helicopter training. This practice requires knowledge of the complete range of potential students when task data is entered. Representing target operator skills and knowledge with a separate student profile is preferable. Student data would be completely isolated from the individual tasks and would only have to be entered once; the composite skills and knowledge profile could be compared with the demands of any task to determine the training requirements. If the user wants to ascertain the training requirements for a completely different type of student, he simply builds a single new profile and uses the already defined task descriptions for the new training assessment. Maintaining a clear distinction between the student and task data will allow considerably more flexibility in answering questions about changes to the task, equipment, or student background. Furthermore, this feature would promote a distinction between device-dependent or task-specific knowledge and device-independent or tool-specific knowledge. This separation has proven important with more formal modeling and analysis of skilled behavior and learning [8]. In addition to software implementation aspects, the previous phase of development highlighted several areas where the domain representation can be enhanced to improve the utility of the Training Assessment Module.
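The proposed separation can be sketched as a standalone profile compared against any task's demands. The skill and equipment names below are hypothetical placeholders:

```python
# Sketch of the proposed student profile: skills and equipment
# familiarity entered once, then compared against any task's demands
# to derive the old/new task and equipment classification.

student = {"skills": {"basic helicopter flight"},
           "equipment": {"ARC 186 VHF"}}

def familiarity(task, student):
    old_task = task["skill"] in student["skills"]
    old_equipment = task["equipment"] in student["equipment"]
    return ("old task" if old_task else "new task",
            "old equipment" if old_equipment else "new equipment")

task = {"skill": "basic helicopter flight", "equipment": "ARC 186 VHF"}
print(familiarity(task, student))  # ('old task', 'old equipment')
```

Because the profile is separate, assessing a different student type means building one new profile rather than re-entering familiarity flags in every task file.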

Perhaps of most importance is automating the collection of task characteristics where possible. Ideally, the module should capture enough training domain knowledge that a person unfamiliar with Instructional System Development methods, perhaps the helicopter cockpit designer, could use the program unassisted to assess the training requirements imposed by a given mission and equipment complement. Currently, personnel familiar with training systems and the operational environment are required to respond to a series of prompts and queries for the task data collection. Rather than having them provide a numerical estimate for how "difficult" or "critical" a task is, other components of MIDAS should be used to analytically determine such task attributes. For instance, instead of using a subjective number from one to five for difficulty, the visual, auditory, cognitive, and psychomotor loading values imposed by the task, currently calculated in the simulation models, could be captured. Researchers in the Army-NASA Aircrew/Aircraft Integration Program are also exploring whether an analytical measure of cognitive complexity, as described by Eggleston et al. [9], could be substituted for the difficulty estimate. Not all of the required task attributes lend themselves to automated characterization, but developing those that do will certainly improve the value of the Training Assessment Module.
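One way such a substitution might look is mapping the simulation's per-channel loads onto the module's existing one-to-five difficulty scale. The mapping below is purely illustrative; the weighting, the channel scale, and the function name are assumptions, not the method used in MIDAS.

```python
# Illustrative only: derive a difficulty estimate from visual, auditory,
# cognitive, and psychomotor channel loads computed by the simulation
# models, as a stand-in for a subjective one-to-five rating.
def difficulty_from_loads(visual, auditory, cognitive, psychomotor,
                          max_load=7.0):
    """Map per-channel loads (0..max_load) onto a 1-5 difficulty scale."""
    channels = (visual, auditory, cognitive, psychomotor)
    mean_fraction = sum(c / max_load for c in channels) / len(channels)
    return 1 + round(4 * mean_fraction)  # 1 (trivial) .. 5 (very difficult)

# e.g. a hover task: high visual and psychomotor load, moderate cognitive
print(difficulty_from_loads(visual=5.9, auditory=1.0,
                            cognitive=4.6, psychomotor=6.5))  # -> 4
```

An equal-weight average is only one design choice; channel weights could themselves be tuned against empirical training data.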

Also of importance is incorporating representative dollar figures for the Program of Instruction output. This activity consists mainly of surveying existing training programs and developing an hourly operating cost for each type of media. Multiplied by the time-to-train, this figure will provide an additional comparison for those interested in the bottom line.
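The roll-up itself is straightforward, as the sketch below shows. The hourly rates and training hours are placeholder values for illustration, not surveyed figures.

```python
# Sketch of the cost roll-up: hourly operating cost per training medium,
# multiplied by the recommended time-to-train, summed over all media.
hourly_cost = {            # dollars per hour of instruction (assumed)
    "classroom": 150.0,
    "part-task trainer": 400.0,
    "full-mission simulator": 2500.0,
    "aircraft": 4800.0,
}

time_to_train = {          # hours recommended per medium (example values)
    "classroom": 40.0,
    "part-task trainer": 12.0,
    "full-mission simulator": 20.0,
    "aircraft": 15.0,
}

program_cost = sum(hourly_cost[m] * time_to_train[m] for m in time_to_train)
print(f"Estimated program cost: ${program_cost:,.0f}")  # $132,800
```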

A third enhancement would address continuation of training. The initial efforts using the Training Assessment Module have been completely focused on skill attainment and initial qualification training. Automaticity, performance decay, and their effects on continuation training requirements have not been addressed. Since a large portion of aviator training occurs after initial qualification in the helicopter or aircraft, this important aspect should be modeled during future development. Inexpensive media and a short training time may not be the obvious choices when long-term training requirements are considered.
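A minimal way to begin modeling performance decay is an exponential model with a proficiency threshold that triggers continuation training. The half-lives and threshold below are assumptions for illustration, not values derived from the article or from training data.

```python
import math

# Assumed model: proficiency decays exponentially after qualification;
# continuation training is due when proficiency falls below a threshold.
def months_until_refresher(half_life_months: float,
                           threshold: float = 0.8) -> float:
    """Months after qualification before proficiency drops below threshold."""
    decay_rate = math.log(2) / half_life_months
    return -math.log(threshold) / decay_rate

# A highly practiced (automatic) skill decays slowly; a rarely exercised
# procedure decays quickly and needs more frequent continuation training.
print(f"{months_until_refresher(half_life_months=24):.1f}")  # ~7.7 months
print(f"{months_until_refresher(half_life_months=4):.1f}")   # ~1.3 months
```

Even this crude model makes the closing point above concrete: a medium that is cheap for initial qualification may prove expensive once recurring refresher intervals are factored in.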

Program Results Correlate with Data

Notwithstanding the desired enhancements outlined above, the Army-NASA Aircrew/Aircraft Integration Program has been successful in demonstrating the use of a knowledge-based computer tool to assist with the difficult and time-consuming task of assessing air crew training requirements. The combined ART and Symbolics Common Lisp environment was ideally suited to quickly capture and represent the target domain. Despite its rather small size in comparison to other knowledge-based systems, recommendations produced by the Training Assessment Module show a very high correlation with empirical data from the Apache qualification course. Easily tailored to the specifics of a wide range of training programs, the module could conceivably be further developed as a stand-alone product. However, when considered as an integral part of the MIDAS workstation, the module's capability to allow designers to understand the training implications of their cockpit, mission, and operator choices becomes a major step toward promoting effective man-machine integration.

Acknowledgment

The authors thank Dr. Yan Yufik, Dr. Fritz Brecke, and Mr. Gregory Butler for their assistance in obtaining and understanding the Logicon system.

References

[1] "Instructional systems development," Air Force Manual 50-2, July 1975.

[2] R. K. Branson, G. T. Rayner, J. L. Cox, J. P. Furman, F. J. King, and W. H. Hannum, Interservice Procedures for Instructional Systems Development (6 vols.). Ft. Monroe, VA: U.S. Army Training and Doctrine Command, Aug. 1975, TRADOC Pamphlet 350-30 and NAVEDTRA 106A.

[3] Training Analysis Support Computer System (TASCS) User's Guide, prepared by Logicon, Inc., San Diego, CA, for Air Force Systems Command, ASD/YWB, Wright-Patterson AFB, OH, Aug. 1987.

[4] J. A. Ellis and W. H. Wulfeck II, Handbook for Testing in Navy Schools. San Diego, CA: Navy Personnel Research and Development Center, Oct. 1982.

[5] "Management of training systems," Air Force Regulation 50-11, Oct. 1983.

[6] Contract F33657-84-C-2135, B-1B Simulator System, Oct. 18, 1984, Air Force Systems Command to Boeing Military Airplane Company.

[7] Flight Training Guide for AH-64 (Apache) Aviator Qualification Course. Fort Rucker, AL: United States Army Aviation Center, Feb. 1988.

[8] T. Bosser, "Modelling of skilled behaviour and learning," in Proc. 1986 IEEE Int. Conf. on Systems, Man, and Cybernetics, pp. 272-276.

[9] R. G. Eggleston, R. A. Chechile, R. N. Fleischman, and A. M. Sasseville, "Modelling the cognitive complexity of visual displays," in Proc. 1986 Human Factors Soc. Annual Meeting, 1986, pp. 675-678.

Barry R. Smith received the B.S. degree in aeronautical engineering from the USAF Academy, Colorado, in 1983, and is a 1991 candidate for the M.S. degree in engineering management from Stanford University, Stanford, CA. He served as an Air Force officer from 1983 until 1988, working for

August 1990


the Aeronautical Systems Division's Deputy for Training Systems as the Lead Engineer on the B-1B Simulator System Program and the Special Operations Forces Aircrew Training System. In 1988, he joined the research staff of the U.S. Army Aeroflightdynamics Directorate and the NASA Aerospace Human Factors Research Division, both located at Ames Research Center, Moffett Field, CA. He is currently the Deputy Director for the Army-NASA Aircrew/Aircraft Integration program, a joint Army-NASA exploratory development effort. His research interests include simulation and training device technology, intelligent tutoring systems, computer-aided engineering, and knowledge representation.

Carolyn P. Banda is a Senior Member of the Technical Staff with Sterling Software. For the past 12 years, she has worked at NASA-Ames Research Center in the areas of thermal analysis modeling and simulation and knowledge-based systems for scheduling flight plans and for assessing training requirements for operators of man-machine systems. Her current research interests include knowledge-based systems, knowledge representation, multiple cooperating expert systems, intelligent tutoring systems, and intelligent computer-aided design. Ms. Banda received the B.S. degree in statistics from Stanford University and has taken graduate-level AI courses. She is a member of ACM, the Computer Society of the IEEE, and AAAI.

Doctoral Dissertations

The information about doctoral dissertations should be typed double-spaced using the following format and sent to:

Prof. Bruce H. Krogh
Dept. of Electrical and Computer Eng.
Carnegie-Mellon University

Rutgers, The State University of New Jersey
Rao, Ming, "Integrated Intelligent System for Real-Time Control."
Date: January 1990. Supervisor: Tsung-Shann Jiang. Current Address: Department of Chemical Engineering, University of Alberta, Edmonton, Alberta, Canada T6G 2G6.

The George Washington University
Geng, Zheng, "Two Dimensional System Model and Analysis for a Class of Iterative Learning Control Systems."
Date: March 1990. Supervisors: Robert L. Carroll and Mohammad Jamshidi. Current Address: Intelligent Automation, Inc., 1370 Piccard Drive, #210, Rockville, MD 20850.

Institute of Mathematical Statistics and Operations Research, Technical University of Denmark, Lyngby, Denmark
Knudsen, T., "Start/Stop Strategies for Wind-Diesel Systems."
Date: November 1989. Supervisor: Jan Holst. Current Address: Department of Control Engineering, Institute of Electronic Systems, University of Aalborg, Frederik Bajers Vej 7, DK-9220 Aalborg Ø, Denmark.

The Australian National University
Liu, Yi, "Frequency Weighted Controller and Model Order Reduction Methods in Linear System Design."
Date: March 1989. Supervisor: Brian D. O. Anderson. Current Address: Department of Mathematics, The University of Western Australia, Nedlands, WA 6009, Australia.

Princeton University
Stephen H. Lane, "Theory and Development of Adaptive Flight Control Systems Using Nonlinear Inverse Dynamics."
Date: 1988. Supervisor: Rob Stengel. Current Address: Robicon, Inc., 301 North Harrison Street, Suite 242, Princeton, NJ 08540.

Princeton University
David A. Handelman, "A Rule-Based Paradigm for Intelligent Adaptive Flight Control."
Date: 1989. Supervisor: Rob Stengel. Current Address: Robicon, Inc., 301 North Harrison Street, Suite 242, Princeton, NJ 08540.

Princeton University
Chien Y. Huang, "A Methodology for Knowledge-Based Restructurable Control to Accommodate System Failures."
Date: 1989. Supervisor: Rob Stengel. Current Address: Grumman Aerospace Corp., Bethpage, NY.

IEEE Control Systems Magazine

