
Dynamic Task Allocation in Operational Systems: Issues, Gaps, and Recommendations

Aaron W. Johnson, Charles M. Oman, & Thomas B. Sheridan
Massachusetts Institute of Technology
77 Massachusetts Ave., 37-219
Cambridge, MA 02139
617-258-4751
[email protected]

Kevin R. Duda
The Charles Stark Draper Laboratory, Inc.
555 Technology Square
Cambridge, MA 02139
617-258-4385
[email protected]

Abstract - The use of automation in complex aerospace systems has helped to lessen operators' workload while increasing the precision and safety of certain tasks. However, as automation changes physical work into cognitive work, it can also lead to complacency, a loss of situation awareness, and the degradation of skills. Dynamic task allocation - in which the allocation of tasks between the human operators and the automation can change in response to the state of the operators, system, or environment - has the potential to leverage the advantages of automation while minimizing the disadvantages. While a number of studies have investigated dynamic task allocation in a laboratory setting, it is unknown how the concept is currently implemented in real-world operational systems, or what research gaps need to be closed to further this implementation. This paper begins with a review of the basic research into dynamic task allocation. It then analyzes the structure of human-automation and pilot flying-pilot monitoring dynamic task allocation in nominal and off-nominal approach and landing in commercial aircraft. Using the interaction and coordination between the two pilots as a model, the paper describes how dynamic task allocation between the human and automation can be optimally implemented in real-world operational systems and discusses the areas of future research necessary to achieve this.

TABLE OF CONTENTS

ACRONYMS ........................................................... 1
1. INTRODUCTION ................................................ 1
2. BACKGROUND .................................................. 1
3. RESEARCH SYSTEMS EMPLOYING DYNAMIC TASK ALLOCATION .......... 4
4. OPERATIONAL SYSTEMS EMPLOYING DYNAMIC TASK ALLOCATION ....... 4
5. METHODS OF IMPLEMENTING DYNAMIC TASK ALLOCATION IN OPERATIONAL SYSTEMS ...... 6
6. CONCLUSION .................................................... 7
ACKNOWLEDGMENTS .......................................... 7
REFERENCES ....................................................... 8
BIOGRAPHY ....................................................... 11

Acronyms

HTA - Hierarchical task analysis
PF - Pilot flying
PM - Pilot monitoring
SOP - Standard operating procedure

978-1-4799-1622-1/14/$31.00 ©2014 IEEE

1. INTRODUCTION

The automation employed by complex aerospace systems has the ability to complete many tasks that the operators would otherwise perform. Commercial aircraft, for example, can complete almost an entire flight on autopilot alone. While automation has the ability to perform tasks for the operators, it does not truly remove work. Rather, it changes the work from physical to cognitive - from manual control to supervisory control. This shift can lead to problems with vigilance, situation awareness, and long-term skill loss. These are what Bainbridge collectively terms the "ironies of automation" [1]. Even today, recent accidents like the 2009 Colgan Air Flight 3407 crash in Buffalo, NY; the 2009 Air France Flight 447 disaster over the Atlantic Ocean; and the 2013 Asiana Airlines Flight 214 accident in San Francisco have raised questions about whether pilots are relying on automation too much and losing their manual flying skills [2-5]. In January 2013, the FAA even got involved, releasing a Safety Alert for Operators encouraging airlines to "promote manual flight operations when appropriate" [6].

The concept of dynamic task allocation may help to maximize the advantages of automation while minimizing these disadvantages. In dynamic task allocation, the allocation of tasks to the human agents and the automation is not fixed during operations. Rather, it can vary based on the state of the operators, system, and environment. The goal of dynamic task allocation is to keep the operators' cognitive workload at a moderate level, where performance is highest (the Yerkes-Dodson Law). Keeping the operators involved in some of the operational tasks also helps to preserve situation awareness and manual flying skills.

2. BACKGROUND

There are three main dimensions of dynamic task allocation that together describe the different ways in which the concept can be applied:

(1) Who or what decides to dynamically allocate tasks?
(2) When should tasks be dynamically allocated?
(3) How should tasks be dynamically allocated to the operators and the automation?

The Decision Authority

The agent that chooses to re-allocate current tasks or allocate newly-acquired tasks during operations - the operators or the automation itself - is called the decision authority. Dynamic task allocation with the operators as the decision authority is referred to as adaptable automation, whereas dynamic task allocation with the automation as the decision authority is termed adaptive automation [7]. Both adaptable and adaptive automation have their advantages and disadvantages. Adaptable automation keeps the authority to allocate tasks with the operators, who are ultimately responsible for system safety. However, this authority loads the operators with another task - evaluating their workload and the situation and determining whether or not to re-allocate tasks - at a time when their cognitive workload may already be high [8]. As a result, the operators may make an expedited or uninformed decision and fail to employ automation in the best way to maximize system performance. Furthermore, there are situations when the operators may be unable to re-allocate tasks - they may not have enough time or may be physically unable to do so (e.g., due to high g-loads) [9]. Adaptive automation is not without its limitations either. Automation surprise and mode confusion become a concern when the automation's actions are not transparent and understandable to the operators [10,11].
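The intermediate schemes on the adaptable-adaptive continuum can be expressed as a small policy that decides whether an automation-proposed re-allocation takes effect. This is a minimal sketch; the enum and function names are our illustration, not terminology or code from the paper.

```python
from enum import Enum

class Authority(Enum):
    """Intermediate decision-authority schemes between fully adaptable
    (operator-only) and fully adaptive (automation-only) allocation."""
    COMMAND_BY_INITIATION = 1  # automation suggests; operator must explicitly accept
    COMMAND_BY_NEGATION = 2    # suggestion takes effect unless the operator vetoes it
    DEADLINE_TAKEOVER = 3      # automation acts only once the operator misses a deadline

def reallocation_takes_effect(scheme, accepted=False, vetoed=False,
                              deadline_passed=False):
    """Return True if the automation's proposed re-allocation goes into effect."""
    if scheme is Authority.COMMAND_BY_INITIATION:
        return accepted
    if scheme is Authority.COMMAND_BY_NEGATION:
        return not vetoed
    return deadline_passed  # Authority.DEADLINE_TAKEOVER
```

Note the asymmetry the continuum implies: under command by negation, doing nothing lets the automation's suggestion stand, whereas under command by initiation doing nothing preserves the current allocation.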

Adaptive and adaptable automation are not the only possible decision authorities; rather, they are the two extremes of a continuum with intermediate degrees of operator and automation authority in between [12-14]. For example, the automation may suggest a certain task allocation that the user must confirm or choose to ignore and remain under the current allocation. Hancock calls this "pilot command by initiation" [13]. Or, the automation may suggest a certain task allocation that will automatically go into effect unless the operators veto it. Hancock calls this "pilot command by negation." Another possibility is for the automation to wait for the operators to take an action, warn them if the action has not been performed close to the deadline, and then finally perform the action itself when the deadline arrives. In these three examples both the pilots and the automation share in the decision authority to different degrees.

Triggering Dynamic Task Allocation

If the automation in a complex aerospace system has any degree of authority to allocate tasks, it needs some method to determine when to implement a re-allocation. The process by which the automation makes this decision is called the triggering. One option is for tasks to be re-allocated on a certain time interval without any regard to events in the environment or system. Research has shown that temporarily returning control to the operator during a long, sustained period of automation can lead to better operator performance once the automation regains control [15,16]. However, this trigger is only beneficial for operations with constant tasks that do not change, like the cruise phase of flight. If this trigger were implemented during a phase of operations in which tasks are changing, like approach and landing, the tasks may be re-allocated at an inappropriate time. For example, the operators could regain control at the very moment that a difficult task begins, causing them to quickly become overloaded. So, for operations in which the task structure is not constant over a long period of time, the dynamic task allocation must respond to changes in the environment, system, or operators.

One possibility is for the automation to monitor the readiness of the operators to undertake a task, either by using an eye tracker to note where the operators' attention is focused or by predicting the operators' intent through a human performance model [17,18]. If the automation perceives that the operators are not attending to a task or do not intend to perform that task, the automation can step in and take care of that task itself. In such a case the automation is choosing to perform the tasks that the operators are not. Another possible trigger is the operators' knowledge regarding a particular situation. If the system enters into an off-nominal situation for which the operators have not received much training, but which the automation recognizes and can handle, tasks should clearly be allocated to the automation. On the other hand, when the system enters into an off-nominal situation that the automation does not recognize, tasks can be allocated back to the operators, who have the ability to think creatively to solve the problem.
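The attention- and intent-based trigger described above reduces to a simple predicate. The sketch below is our hypothetical illustration (the names and inputs are assumptions), assuming an eye tracker supplies the set of tasks currently attended to and a human performance model supplies the set of tasks the operators are predicted to perform.

```python
def automation_should_assume(task, attended_tasks, predicted_tasks):
    """Attention/intent-based trigger (sketch): the automation takes over a
    task only when the operators are neither attending to it (per the eye
    tracker) nor predicted to perform it (per a human performance model)."""
    return task not in attended_tasks and task not in predicted_tasks
```

For example, if the eye tracker and the model both indicate the operators are occupied with holding heading, an unattended gear-lowering task would be assumed by the automation.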

The most common adaptive automation trigger investigated in the literature is the operator workload. It has been found that dynamic task allocation provides the most benefit when it is matched to the workload of the operators - providing more automation assistance during times of high operator workload and less during times of low workload [15]. In this manner, the operators' workload is kept at an intermediate level where they are not overworked but still remain involved with the task to preserve situation awareness.

There are four classes of operator workload triggers, which all estimate or measure workload in different ways: 1) critical events, 2) operator performance measurements, 3) psychophysiological assessments, and 4) operator performance modeling [19].

Critical Events - With critical-event triggering, the amount of automation increases or decreases when certain pre-defined events occur in the environment that are anticipated to raise or lower the operators' workload [20-22]. Example critical events include the appearance of an unidentified target on a military aircraft's radar or the beginning of a particular phase of flight (e.g. final approach). Triggering on critical events is the simplest workload trigger, as it does not require the automation to take any measurements from the operators. However, this trigger may only partially or implicitly reflect the operators' true workload. The assumptions about the operators' workload after the critical event may be incorrect, and the automation may be inappropriately applied.

Operator Performance Measurements - This class of triggers uses measurements of the operators' performance to infer their workload level. It assumes an inverted-U-shaped relationship between performance and workload, the Yerkes-Dodson Law: performance on a task drops when the overall workload becomes either very high or very low and increases when workload returns to a manageable intermediate level. Many different tasks have been used in the literature to measure operator workload: a 2D tracking primary task [23], a gauge-monitoring secondary task [12,24-26], a change-detection secondary task [27], and a collection of five different primary and secondary tasks [28]. Unlike critical events, this trigger explicitly reflects the operators' mental states. This trigger also has the advantage of being able to react to unexpected changes in workload. One major disadvantage of triggering on operator performance measurements is that the task re-allocation is reactive to changes in workload, rather than proactive. It would be more beneficial for the automation to anticipate changes in workload and to increase the amount of automation before performance begins to decrease.
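A reactive performance-based trigger of this kind can be sketched as a simple level controller. The thresholds, level range, and function name below are arbitrary assumptions for illustration, not values from the cited studies.

```python
def update_automation_level(level, performance, low=0.4, high=0.8,
                            min_level=0, max_level=10):
    """Reactive performance-measurement trigger (sketch).

    Reads a drop in measured task performance as workload leaving the
    manageable middle of the inverted-U and raises assistance; sustained
    high performance hands work back to keep the operator engaged. Note a
    real system needs extra context to tell overload from underload, since
    the Yerkes-Dodson curve predicts performance drops at both extremes.
    """
    if performance < low:       # performance degrading: add automation
        return min(level + 1, max_level)
    if performance > high:      # comfortable margin: hand work back
        return max(level - 1, min_level)
    return level                # moderate zone: leave the allocation alone
```

Because the controller only reacts after performance has already moved, it illustrates the reactivity disadvantage discussed above: the re-allocation always lags the change in workload.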

Psychophysiological Assessment - Sometimes grouped together with operator performance measurements, psychophysiological assessments (also called augmented cognition or neuroergonomics) measure operator cognitive workload or engagement through psychological and physiological measurements such as heart rate variability, galvanic skin response, eye movement, and brain activity from fNIR and EEG recordings [8,29,30-34].

Psychophysiological measures are an attractive trigger because they provide a continuous measure of workload, even if there are no apparent behavioral changes. The operators may have undesirably high cognitive workload, but may be "rising to the challenge" to satisfactorily complete all tasks. Measurements of operator task performance will not capture this, but psychophysiological measurements can. Psychophysiological measurements do still have the same disadvantage as operator performance measurements in that they are reactive to changes in workload, rather than proactive. There is also a debate about the sensitivity and diagnosticity of psychophysiological measurements - whether or not they actually have the ability to capture changes in workload or engagement [9,31,34]. Furthermore, these measurements tend to have high intra- and inter-subject variability, making true changes in workload harder to predict.

Operator Performance Modeling - In this class of triggers, the automation uses human performance models in real time to predict the behavior of the operators and estimate their future workload. There is not one standard human performance model; instead, researchers have investigated a number of models that take different measurements from the environment and operators and employ unique algorithms to predict operator workload, engagement, exhaustion, or performance [35-37]. Operator performance modeling is an attractive trigger because in many situations it is able to predict changes in operator workload and re-allocate tasks in advance. It is proactive, unlike operator performance measurement or psychophysiological assessment. On the other hand, this trigger is only as good as its model. Modeling human performance is not an easy task, and any deficiencies in the model will lead to inappropriate automation.

Hybrid Triggers - Because each of these triggering methods has advantages and disadvantages, it is unlikely that using just one individually will provide a workload measurement that is responsive and sensitive enough. Rather, multiple triggers can be used in combination to provide a comprehensive assessment of operator workload [38-40]. These "hybrid" triggers would employ multiple sensors receiving different types of information from the operators. They might measure vehicle state, control inputs, operator performance, operator physiological state, or any number of other metrics. Algorithms would weigh this data, combine it, compare it to the projections of operator performance models, and determine if the operator's workload is such that the level of automation should be raised or lowered.
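One way such a hybrid trigger could weigh and combine its inputs is a weighted average of normalized indicator readings, thresholded to produce a decision. The sketch below assumes readings already normalized to [0, 1]; the signal names, weights, and threshold are all illustrative assumptions, not a scheme from the cited work.

```python
def hybrid_workload_estimate(signals, weights):
    """Weighted fusion of heterogeneous workload indicators (sketch).

    `signals` maps indicator names (e.g. performance error, heart-rate
    variability, model-predicted load) to normalized [0, 1] readings;
    `weights` maps the same names to the relative trust placed in each.
    """
    total = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total

def should_raise_automation(signals, weights, threshold=0.7):
    """True when the fused estimate indicates the operator is overloaded."""
    return hybrid_workload_estimate(signals, weights) > threshold
```

The weights give the designer a place to encode how much each sensor is trusted, e.g. down-weighting a psychophysiological channel known to have high inter-subject variability.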

Allocation of Tasks

During operations, tasks must be dynamically re-allocated to the agent - the operators or the automation - that can best perform them given the current system state. This will help to assure optimal system performance. The static (fixed) and dynamic allocation of tasks has been the focus of many theoretical, analytical, and experimental investigations [see 41 and 42 for reviews]. The most basic task allocation guidelines simply state which tasks humans can perform better than the automation and vice versa. Fitts' List states that automation is good at performing repetitive tasks that might bore human operators, tasks that necessitate a high degree of precision, or time-critical tasks that require a faster response than humans can provide [43]. Conversely, the human operators are better at thinking creatively and improvising based on past experience in unfamiliar situations. Billings' concept of "human-centered automation" is another set of guidelines that prescribes how automation can be best implemented as a tool to enhance the human pilot's performance [44]. These guidelines state that the automation should be predictable, understandable, and only used in situations where it is necessary (and not just because the technology exists to automate a function).

Fitts' List is certainly not incorrect; it is a good set of general guidelines that provides a starting point for design. However, in reality the best allocation of tasks is more complicated than Fitts discussed. It is context-specific, depending on a number of factors including the training and ability of the operators, the capabilities of the automation, the operating procedures, and the mission to be completed. Billings' guidelines are more specific than Fitts', discussing specific human-centered automation requirements for aircraft control automation, information automation, and management automation for air traffic control.

Experimental studies have been conducted that investigate specific task allocations and their effects on the human operator in a particular domain. A number of experiments have investigated the effects of allocating tasks to the human or automation based on the information processing stages they involve. Endsley and Kaber [45] and Parasuraman et al. [46] both hypothesized applying static automation to the different stages of information processing. The authors used a simplified model of information processing consisting of four stages: 1) Information acquisition, 2) Information analysis, 3) Decision and action selection, and 4) Action implementation (Parasuraman et al.'s terminology). In the first stage, information is gathered from the environment through a number of different sensory channels and pre-processed. In the second stage, this information is consciously perceived, manipulated, and analyzed. The second stage lasts until just prior to the point of decision, which is contained within the third stage. Once an action has been decided upon, it is carried out in the fourth stage and the process repeats. Parasuraman et al. discussed 10 levels of automation, and how these did not need to be consistent across the four stages. Instead, tasks involving different information processing stages could employ different levels of automation. Kaber and Endsley discussed 10 similar levels of automation, and outlined in detail how the four stages were allocated to the human or the automation under each level.

Parasuraman et al. only briefly discuss dynamic task allocation with respect to the different stages of information processing, but other research has experimentally investigated this in much greater depth [12,25,26,47]. Collectively, this research found that dynamically automating tasks at the low-level stages of information processing (information acquisition and action implementation) led to better primary and secondary task performance than when the high-level stages of information processing (information analysis and decision making) were automated. The authors hypothesized that this was because automating lower-level functions is more "transparent." It is easier for the operators to understand what the automation is doing when it is gathering information or carrying out an action the operators selected, rather than when the automation is analyzing information or making a decision ("opaque" automation). When the automation is opaque, the operators have to spend additional time checking that what the automation did is in line with their mental model of the situation.
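The finding can be restated as a constraint on a per-stage automation profile. In the sketch below, the stage names follow Parasuraman et al.'s terminology, but the numeric levels, threshold, and function are our illustration: a profile is flagged "transparent" when the high-level stages stay lightly automated, regardless of how heavily the low-level stages are automated.

```python
# Hypothetical per-stage levels of automation on a 1-10 scale.
profile = {
    "information_acquisition": 8,   # low-level stage: heavy automation stays transparent
    "information_analysis": 2,      # high-level stage: kept lightly automated
    "decision_selection": 2,        # high-level stage: kept lightly automated
    "action_implementation": 8,     # low-level stage
}

def is_transparent(profile, threshold=5):
    """Consistent with the cited results: the automation stays easy for
    operators to follow when the high-level stages (analysis, decision)
    are below the threshold; the low-level stages are unconstrained."""
    return (profile["information_analysis"] < threshold and
            profile["decision_selection"] < threshold)
```

Raising the analysis or decision level above the threshold would mark the profile opaque, matching the hypothesis that operators must then spend extra time reconciling the automation's conclusions with their own mental model.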

3. RESEARCH SYSTEMS EMPLOYING DYNAMIC TASK ALLOCATION

The preceding discussion on the fundamentals of dynamic task allocation has centered around the large collection of basic research on the concept. Each of these investigations has focused on a few specific hypotheses in one particular experimental setup. Other research has collected these experimental results and generalized them into guidelines for the implementation of dynamic task allocation [48-50]. A third group of research has worked to construct full systems employing dynamic task allocation to serve as technology demonstrations and experimental platforms. These systems are helping the field to move towards its eventual goal - the implementation of dynamic task allocation in real-world operational systems. However, while these systems have been tested in the laboratory, none have gone beyond this initial research stage and been fully implemented in their domain.

In the late 1980s and early 1990s, the Pilot's Associate (PA) and Rotorcraft Pilot's Associate (RPA) projects sought to design an adaptive interface (the cockpit information manager) for helicopter pilots [18,51-53]. The PA and RPA detected new tasks by monitoring the environment, and then either allocated the new tasks to the pilot or handled them itself based on pre-approved schemes. The displays were also re-configured based on the system's estimation of the pilot's intentions made through a human performance model. A more recent system employed adaptive automation for naval command and control tasks [38-40]. Before operations, the user set "working agreements" that dictated when and how the automation should increase its authority. During operations, the automation used operator performance measurements and a model of operator cognition to estimate operator workload and trigger the adaptive automation. Lastly, the Playbook concept is a method of adaptable automation that makes it easy for the operator to delegate tasks to automated unmanned air vehicles (UAVs) [51-57]. A "playbook" consists of a number of levels of automation that are available to the operator - manual control, selecting a pre-defined automated "script" for one UAV, or selecting a pre-defined automated "play" for multiple UAVs.

4. OPERATIONAL SYSTEMS EMPLOYING DYNAMIC TASK ALLOCATION

There are a number of real-world complex aerospace systems that employ automation. But do they employ dynamic task allocation between the operators and this automation, as the body of literature recommends? This question was investigated in nominal and off-nominal approach and landing in commercial aviation. This domain was selected as a case study because it represents a prominent industry - with over 70,000 flights per day in the U.S. alone [58] - that could potentially benefit from the implementation of dynamic task allocation. Furthermore, incidents and accidents in commercial aviation, especially those in which pilot error is a factor, draw a great deal of attention from the media and the general public. Approach and landing was selected as the phase of flight for analysis because it is a high-workload time period during which a number of tasks must be properly executed in sequence. This is true even under nominal conditions, and off-nominal events, like a late change of runway or inclement weather, can increase the pilots' workload even further.

Dynamic Task Allocation between the Pilots and Automation

In this investigation, the nominal approach and landing procedures have been detailed through the use of a hierarchical task analysis [59,60]. A hierarchical task analysis (HTA) takes a specific task and its associated cognitive goal (e.g. "safely land the airplane") and decomposes it into subtasks and subgoals (e.g. "slow to the air traffic control-mandated outer marker speed"). These subtasks can in turn be decomposed into their own subtasks, and so on until a sufficient level of detail has been reached for the desired analysis. This HTA of nominal approach and landing has been constructed using manuals for the Boeing 767 aircraft [61,62], but the same general procedure is used across aircraft types.
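An HTA decomposition of this kind maps naturally onto a tree of goals. The sketch below is our illustration only: the field names are assumptions, and the sample subtasks echo the example goals mentioned above rather than reproducing the actual 767 HTA. Each node records its responsible agent and whether it is needed only when the automation is in use.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node in a hierarchical task analysis: a goal plus its subtasks."""
    goal: str
    agent: str = "either"     # "pilot", "automation", "either", or "shared"
    required: bool = True     # False: only needed when the automation is in use
    subtasks: list = field(default_factory=list)

# Hypothetical fragment of the approach-and-landing decomposition:
land = Task("safely land the airplane", subtasks=[
    Task("slow to the ATC-mandated outer marker speed", agent="either"),
    Task("enter target speed into the autothrottle", agent="pilot",
         required=False),     # an automation-setup task
])

def leaves(task):
    """Flatten the tree to its lowest-level subtasks."""
    if not task.subtasks:
        return [task]
    return [leaf for sub in task.subtasks for leaf in leaves(sub)]
```

Walking the `leaves` of such a tree yields the lowest-level task list to which agent assignments (the color coding described below for the actual HTA figure) can be applied.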

The HTA of approach and landing shows twelve high-level goals that must be accomplished in order for the airplane to safely touch down (Appendix A). The agent in charge of the task (the human or the automation) is noted by the color of each task. Red tasks are always completed by one or both of the two pilots, and blue tasks are always completed by the automation. Purple tasks can be accomplished by either agent, depending on whether or not the automation is engaged. Lastly, green tasks are those high-level goals that the automation can accomplish only with assistance from one of the pilots. The border of each task box notes if the task is always necessary (solid border), or only necessary if the automation is in use (dashed border). The latter tasks are primarily automation setup tasks, in which the pilots enter the correct parameters for an action (changing altitude, turning, etc.) and the automation subsequently undertakes these actions.


This HTA shows that the automation is not needed for a safe landing under nominal conditions. However, its use certainly makes the entire process easier, more precise, and more consistent. The automation can be used to reach the outer marker speed and altitude prescribed by air traffic control. To accomplish this, one of the pilots sets the new speed and altitude in the automation. Upon entering the new speed the autothrottle will automatically retard the throttles, and upon pressing the flight level change button to accept the new altitude the autopilot will automatically pitch the aircraft down. If a pilot wishes to regain control, he can switch the automation off and fly manually. If the landing continues nominally with the automation engaged, it will capture the instrument landing system glideslope (which provides vertical guidance to the runway) and localizer (which provides horizontal guidance), automatically entering the approach mode. When in the approach mode, the automation guides the aircraft down to the runway. It is the pilots' responsibility to ensure that the automation is functioning correctly and to decide whether or not to regain control by disengaging it. Even if the automation is in control of the entire landing, there are still tasks in the landing procedure that must be accomplished by the pilots - arming the autobrakes or cutting the engines 50 feet above the ground, for example.

The procedures detailed in this HTA hold for a nominal approach and landing. During some approaches an off-nominal event may occur, forcing the pilots to adopt new procedures. Many off-nominal events have been anticipated in the development of procedures, which give pilots a structured way to react. One such anticipated off-nominal event is a go-around and transition into a missed approach [61,63,64]. The pilots are required to initiate this procedure if the airplane is on an unstabilized approach at the decision altitude (which is 1,000 feet in instrument meteorological conditions and 500 feet in visual meteorological conditions) or if the weather is below acceptable minimums. In a go-around, the aircraft must be pitched up and the thrust must be increased in order to gain altitude. The automation is pre-programmed with the missed approach procedures before every approach, and the pilots can engage the automation to control the aircraft's pitch and thrust during the go-around. When they do this, they are still responsible for monitoring the aircraft's performance to a safe altitude and speed. Furthermore, the pilots are responsible for raising the landing gear and retracting the flaps at the proper speeds regardless of whether the pitch and thrust is under manual or supervisory control.
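The go-around decision rule just described reduces to a small predicate. This is a sketch of the rule as stated in the text; the function and argument names are our own, and a real procedure involves pilot judgment beyond these inputs.

```python
def must_go_around(stabilized, altitude_ft, imc, weather_below_minimums):
    """Go-around rule as described for the 767 approach (sketch).

    The decision altitude is 1,000 ft in instrument meteorological
    conditions (imc=True) and 500 ft in visual conditions. A go-around is
    required if the approach is unstabilized at or below that altitude,
    or if the weather is below acceptable minimums.
    """
    decision_altitude_ft = 1000 if imc else 500
    unstabilized_at_decision = altitude_ft <= decision_altitude_ft and not stabilized
    return unstabilized_at_decision or weather_below_minimums
```

Encoding the rule this way also makes the paper's later point concrete: the condition is fully machine-checkable, yet in current operations only the pilots act on it.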

It is clear from the HTA of nominal approach and landing and the discussion of the go-around / missed approach procedure that the automation in a 767 approach and landing can be turned on or off. Some tasks, like descending to the outer marker altitude, can be dynamically allocated to the pilots or the automation during operations. However, the automation never decides itself to take over a task. Nor does it even suggest that the pilots might want to re-allocate tasks. Even at the moment when a go-around must be initiated - a high-workload situation - the pilots must push a button to switch the automation mode from approach to go-around. The human is the sole decision authority; modern commercial aviation employs pure adaptable automation with no degree of adaptive automation. It does not leverage the benefits of this concept that are discussed in much of the scientific literature.

Dynamic Task Allocation between the Pilot Flying and Pilot Monitoring

While most of the literature refers to dynamic task allocation with reference to human-automation interactions, there is another set of agents in the cockpit between whom tasks can be dynamically allocated - the pilot flying (PF) and pilot monitoring (PM) [65]. Their interaction and coordination can serve as a model for the optimal ways in which human-automation dynamic task allocation can be implemented.

The PF is responsible for the aircraft's horizontal and vertical flight path and energy management regardless of the automation mode. The PF must be engaged in either manual control - directly hand-flying the aircraft - or supervisory control - monitoring the automation when it is flying the aircraft. The PM (also called the pilot not flying) has the responsibility for cross-checking the PF, performing tasks requested by the PF, and monitoring the state of the airplane. The aircraft state to be monitored includes the flight director and flight management system (e.g. target airspeed and heading), aircraft systems (e.g. engine performance), and aircraft configuration (e.g. flap and slat settings). The PF and PM roles are distinct from the captain / first officer designation, and either member of the crew can perform the roles of the PF or PM.

The tasks performed by the PF and the PM are codified in the airline's Standard Operating Procedures (SOPs). The primary purpose of these procedures is to "identify and describe the standard tasks and duties of the flight crew for each flight phase" [65, p. 1]. Among other things, the SOPs provide information about the use of automation during a particular phase and describe the standard calls and checklists to be used. SOPs also detail anticipated off-nominal situations that could possibly occur during a flight (e.g. go-around and missed approach with only one functioning engine). At their core, SOPs prescribe the shared mental model between the PF and PM for a given phase of flight or off-nominal scenario [66]. With this shared mental model, the two pilots can easily coordinate their activities. Both the PF and PM know what tasks they are responsible for at any point in the flight. When the phase of flight changes, the tasks performed by each pilot also change. In this sense, a change in the phase of flight is a critical event that inherently triggers dynamic task allocation. And when this dynamic task allocation occurs, the PF does not have to tell the PM what to do. The PM has the authority to start doing his tasks without being prompted. Equating the PM with the automation, PF-PM dynamic task allocation is analogous to adaptive automation.
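The SOP-as-shared-mental-model idea can be made concrete with a small sketch. The SOP table, phase names, and task names below are hypothetical illustrations, not drawn from any actual airline SOP; the point is that the phase change alone re-allocates both pilots' tasks, with no delegation dialog:

```python
from dataclasses import dataclass

# Hypothetical SOP table: each flight phase maps to the tasks owned by the
# pilot flying (PF) and pilot monitoring (PM).
SOP = {
    "approach": {
        "PF": ["fly flight path", "manage energy"],
        "PM": ["monitor aircraft state", "make altitude callouts"],
    },
    "go_around": {
        "PF": ["pitch up and set go-around thrust"],
        "PM": ["raise landing gear", "retract flaps on schedule"],
    },
}

@dataclass
class Crew:
    phase: str = "approach"

    def tasks(self, role: str) -> list[str]:
        # Each agent reads its current duties directly from the active SOP.
        return SOP[self.phase][role]

    def change_phase(self, new_phase: str) -> None:
        # The phase change itself is the critical event: both pilots'
        # task sets update at once, without explicit prompting.
        self.phase = new_phase

crew = Crew()
assert "raise landing gear" not in crew.tasks("PM")
crew.change_phase("go_around")
assert "raise landing gear" in crew.tasks("PM")
```

In this framing, substituting the automation for the PM turns the same lookup into adaptive automation: the trigger is the event, not an instruction from the PF.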

5. METHODS OF IMPLEMENTING DYNAMIC TASK ALLOCATION IN OPERATIONAL SYSTEMS

There is a clear difference in the sharing of authority between human-automation and human-human dynamic task allocation. In human-automation dynamic task allocation, the human is the sole decision authority. In human-human dynamic task allocation, both pilots serve as the decision authority. The PF and PM interaction can serve as a model for how human-automation dynamic task allocation can be accomplished in commercial aircraft. In short, the automation needs to be thought of as a member of the team instead of just a tool. A number of researchers support this assertion [11,34,61,68-70], but it needs to become a larger guiding force for the direction of future research. The following sections hypothesize how human-automation dynamic task allocation may be implemented for anticipated nominal and off-nominal situations and unanticipated off-nominal situations following the model of PF-PM dynamic task allocation.

Anticipated Nominal and Off-Nominal Situations

In order to be fully accepted by the pilots, the automation needs to be predictable. The automation should not appear non-deterministic to the pilots [71]. Pilots should never need to ask themselves "What's the automation doing now?" or "How did I get into this mode?" [11]. A way to help the automation perform more predictably is to create SOPs that describe the tasks allocated to the PF, the PM, and the automation in anticipated nominal and off-nominal situations. A change in the flight phase or status is anticipated to cause a change in the pilots' workload, and the active SOP would change in response to this. In turn, the automation would re-allocate its tasks in accordance with the SOP. This is the same concept as triggering on critical events and the "working agreements" employed by the naval command and control system described in Section 3 [38-40]. In both, the pilots do not have to put their tasks on hold and have a dialog with the automation about what it should be doing. This is especially critical in a situation where the pilots' workload is increasing.

Beyond simply understanding the tasks for which it is responsible under each SOP, the automation must also recognize when the active SOP has changed. One of the pilots can pass this information along to the automation, either by pushing a button to change the automation mode, as is currently done in the cockpit, or by stating it out loud to a future autopilot that can understand and respond to human speech. However, this is undesirable as it creates an additional task for the pilots. Ideally, the automation would contain the necessary sensors and algorithms to correctly determine the current phase of flight on its own.
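One way to picture such sensors and algorithms is a minimal rule-based classifier over aircraft state. The thresholds, inputs, and phase names below are assumptions for illustration only, not an actual avionics algorithm:

```python
# Illustrative rule-based inference of the active phase from aircraft state.
# All thresholds are placeholder values chosen for the sketch.
def infer_phase(altitude_ft: float, gear_down: bool, flaps_deg: float) -> str:
    if altitude_ft > 10000:
        return "cruise"
    if gear_down and flaps_deg >= 25 and altitude_ft < 3000:
        # Gear down and landing flaps at low altitude suggest final approach.
        return "final_approach"
    if flaps_deg > 0:
        return "approach"
    return "descent"

assert infer_phase(35000, False, 0) == "cruise"
assert infer_phase(1500, True, 30) == "final_approach"
```

A real system would fuse many more signals (FMS state, radio tuning, vertical speed) and, as the next paragraph argues, its error rates would determine whether pilots trust the resulting SOP changes.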

There is one important caveat to this ability - the automation must be sensitive enough to minimize the number of incorrect diagnoses. Incorrectly determining the aircraft's phase of flight or the presence of an off-nominal situation would lead to the engagement of the wrong SOP. This would lead to a mismatch between what the automation is doing and what the pilots expect it to be doing. At best this leads to distrust in the automation and the reluctance to use it. And, at worst, it can cause an accident. For example, if the automation determines that the aircraft is on an unstabilized approach at the decision altitude when the approach is in fact stabilized (a "false positive" or type I error), the automation will incorrectly attempt to pitch the aircraft up and apply thrust to execute a go-around. If the pilots are unable to determine why the automation is acting in this manner, they may fight against it and cause an accident. The same is true if the automation fails to recognize situations in which a go-around is actually required (a "false negative" or type II error). It is critical that the sensitivity of the automation's sensors and algorithms be high enough to minimize the number of type I and II errors to a level where pilots are confident in the automation's predictability.

Unanticipated Off-Nominal Situations

While SOPs currently exist for a large number of anticipated nominal and off-nominal scenarios, they cannot be created for every possibility. There may be times when an unanticipated system failure or confluence of events occurs that causes the pilots' workload to increase without a change in the phase of flight. In these situations, the automation can still provide assistance to the pilots by offloading non-critical tasks so they can troubleshoot the problem. In order to recognize these situations, the automation would need the ability to measure the pilots' workload in real-time. This can be accomplished by measuring the operators' performance or psychophysiological metrics, as described in Section 2.

However, the same caveat about the sensitivity of the automation's diagnostic abilities still holds true. The automation needs to have the ability to correctly discern whether or not the pilots are actually working hard. Type I or II errors will decrease the predictability and reduce the pilots' trust in the automation.
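The workload-triggered offloading described above can be sketched as a simple allocation rule. The workload index, threshold value, and task names are placeholders standing in for the performance and psychophysiological measures discussed in Section 2:

```python
# Assumed threshold on a normalized 0-1 workload index; illustrative only.
WORKLOAD_THRESHOLD = 0.7

def allocate(tasks: dict[str, bool], workload: float) -> dict[str, str]:
    """tasks maps task name -> is_critical; returns task name -> agent.

    When estimated workload exceeds the threshold, non-critical tasks are
    offloaded to the automation so the pilots can troubleshoot."""
    offload = workload > WORKLOAD_THRESHOLD
    return {
        name: "automation" if (offload and not critical) else "pilots"
        for name, critical in tasks.items()
    }

tasks = {"troubleshoot failure": True, "radio frequency changes": False}
assert allocate(tasks, 0.4)["radio frequency changes"] == "pilots"
assert allocate(tasks, 0.9)["radio frequency changes"] == "automation"
assert allocate(tasks, 0.9)["troubleshoot failure"] == "pilots"
```

Note that critical tasks stay with the pilots regardless of workload; only the trigger, not the authority over safety-critical work, is automated in this sketch.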

6. CONCLUSION

The current body of research into dynamic task allocation strongly suggests that employing some degree of adaptive automation in a complex aerospace system, like a commercial airliner, could help to keep the operators' workload at a manageable level. This would help to preserve situation awareness, performance, and manual flying skills. While there is great promise for dynamic task allocation, most of the research has been at the level of basic science. A few systems employing dynamic task allocation have been tested in the laboratory but are not yet implemented in real-world operations. Furthermore, this paper's case study of nominal and off-nominal approach and landing in commercial aircraft has shown that the automation requires the pilots to tell it which tasks to do at any particular time. This is adaptable automation, and not the adaptive automation which the literature enthusiastically promotes. There is a need to understand why adaptable automation is the chosen implementation of dynamic task allocation, and what research needs to be performed to promote the use of adaptive automation in real-world operational systems.

This paper argues that pilot flying-pilot monitoring dynamic task allocation can serve as a model for how human-automation dynamic task allocation can be best structured in complex aerospace systems. In anticipated situations, standard operating procedures can give the automation some authority to re-allocate tasks while remaining predictable. In unanticipated situations, the automation can notice an increase in the pilots' workload and assist by taking over non-critical tasks. Both of these abilities require that the automation have the sensitivity to correctly diagnose changes in the phase of flight, off-nominal situations, and the pilots' workload with a low probability of false positives and negatives. Otherwise, the automation will appear unpredictable and may cause increased cognitive workload, a greater reluctance for the pilots to use the automation, and possibly even an accident.

Limitations

This paper is not without its limitations, the largest being that it has focused primarily on commercial aviation. While this is an important domain, it is certainly not the only one that may benefit from dynamic task allocation. Previous research has investigated UAV control [20,22,28,29], simulated air traffic control [12,24,25,47], and naval command and control [38-40], among others. Each of these domains employs automation in a different way to satisfy its unique requirements. However, it is likely that the particular inter-operator teamwork and coordination for each domain can serve as a model for the human-automation dynamic task allocation. Future research should continue to investigate these other domains from both analytical and experimental perspectives.

Recommendations for Future Research

This paper concludes with recommendations for future research drawn from the discussion above. These recommendations are in two main areas - the development of standard operating procedures for dynamic task allocation and the improvement of the sensors and algorithms to measure the system state and operator workload:

Standard Operating Procedures - Develop standard operating procedures for the use of dynamic task allocation that consider the automation as another team member. These SOPs should be created for individual aerospace systems based on the operational requirements, operator training, and automation capabilities. Once these SOPs are developed, they should be compared to the established dynamic task allocation guidelines in the literature in order to highlight areas that are more complex than the current guidelines state.

Algorithms and Sensors to Measure System State and Operator Workload - Continue the development of sensors and algorithms that will allow the automation to determine the phase of flight, off-nominal situations, and the operators' workload. In the development of these new technologies, their sensitivity should be quantified under a variety of operational settings. The operators' perception of the automation's predictability based on a particular suite of sensors and algorithms should also be determined. Finally, guidelines should be created that specify the required measurement technology sensitivity for optimal dynamic task allocation in specific operational settings.
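The recommended sensitivity quantification amounts to estimating type I (false positive) and type II (false negative) rates from a detector's calls against ground-truth labels. A minimal sketch, with fabricated data for illustration:

```python
# Compute type I and type II error rates for a binary detector, e.g. a
# "go-around required" classifier scored against ground-truth labels.
# Assumes the data contains at least one positive and one negative case.
def error_rates(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false alarms
    fn = sum(a and not p for p, a in zip(predicted, actual))  # misses
    negatives = sum(not a for a in actual)
    positives = sum(actual)
    return fp / negatives, fn / positives  # (type I rate, type II rate)

pred = [True, False, True, False, False]
true = [True, False, False, False, True]
t1, t2 = error_rates(pred, true)
assert t1 == 1 / 3  # one false alarm out of three stable approaches
assert t2 == 1 / 2  # one missed go-around out of two required
```

Repeating this estimate across operational settings, and relating the resulting rates to pilots' rated trust, would give the quantitative basis for the sensitivity guidelines proposed above.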

ACKNOWLEDGMENTS

This work was supported by the National Space Biomedical Research Institute through NASA NCC 9-58, Project HFP02001. Copyright © 2014 by the C. S. Draper Laboratory, Inc. All rights reserved.

REFERENCES

[1] Bainbridge, L. (1983). Ironies of Automation. Automatica 19(6), 775-779.

[2] Lowy, J. (2011, Aug. 31). Automation linked to loss of pilot skills, leading to crashes. Denver Post. Retrieved from http://www.denverpost.com/ci_18792981

[3] Stock, S., Putnam, J., & Carroll, J. (2013, Aug. 20). Commercial Pilots: Addicted to Automation? NBC Bay Area. Retrieved from http://www.nbcbayarea.com/investigations/Commercial-Pilots-Addicted-to-Automation--221727971.html

[4] Gillen, M.W. (2010, July). Diminishing Skills? AeroSafety World, 30-34.

[5] Wood, S. (2004). Flight Crew Reliance on Automation. (CAA PAPER 2004/10). West Sussex, UK: Civil Aviation Authority.

[6] Federal Aviation Administration. (2013). Manual Flight Operations. (SAFO 13002). Washington, DC: US Department of Transportation.

[7] Oppermann, R. (1994). Adaptive User Support. Hillsdale, NJ: Erlbaum.

[8] Bailey, N.R., Scerbo, M.W., Freeman, F.G., Mikulka, P.J., & Scott, L.A. (2006). Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation. Human Factors 48(4), 693-709.

[9] Scerbo, M.W. (2007). Adaptive Automation. In R. Parasuraman & M. Rizzo (Eds.), Neuroergonomics: The Brain at Work (239-252). New York: Oxford University Press.

[10] Sarter, N.B. & Woods, D.D. (1995). How in the World Did We Ever Get into That Mode? Mode Error and Awareness in Supervisory Control. Human Factors 37(1), 5-19.

[11] Woods, D.D. & Sarter, N.B. (1998). Learning from Automation Surprises and "Going Sour" Accidents: Progress on Human-Centered Automation. (Report no. NASA-CR-1998-207061). Washington, D.C.: National Aeronautics and Space Administration.

[12] Clamann, M.P. & Kaber, D.B. (2003). Authority in Adaptive Automation Applied to Various Stages of Human-Machine System Information Processing. Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting (543-547).

[13] Hancock, P.A. (2007). On the Process of Automation Transition in Multitask Human-Machine Systems. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 37(4), 586-598.

[14] Sauer, J., Kao, C.S., Wastell, D., & Nickel, P. (2011). Explicit control of adaptive automation under different levels of environmental stress. Ergonomics 54(8), 755-766.

[15] Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1993). Adaptive Function Allocation Reduces Performance Cost of Static Automation. In J.G. Morrison (Ed.), The Adaptive Function Allocation for Intelligent Cockpits (AFAIC) Program: Interim Research and Guidelines for the Application of Adaptive Automation (37-42). Warminster, PA: Naval Air Warfare Center - Aircraft Division.

[16] Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of Adaptive Task Allocation on Monitoring of Automated Systems. Human Factors 38(4), 665-679.

[17] Sharit, J. (1997). Allocation of Functions. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (301-339). New York: Wiley.

[18] Miller, C.A., Guerlain, S., & Hannen, M.D. (1999). The Rotorcraft Pilot's Associate Cockpit Information Manager: Acceptable Behavior from a New Crew Member? American Helicopter Society 55th Annual Forum, Montreal, Quebec.

[19] Scerbo, M.W. (1996). Theoretical Perspectives on Adaptive Automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and Human Performance: Theory and Applications (37-63). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[20] Parasuraman, R., Mouloua, M., & Hilburn, B. (1999). Adaptive Aiding and Adaptive Task Allocation Enhance Human-Machine Interaction. Proceedings of the Third Conference on Automation Technology and Human Performance (119-123). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[21] Scallen, S.F. & Hancock, P.A. (2001). Implementing Adaptive Function Allocation. The International Journal of Aviation Psychology 11(2), 197-221.

[22] de Visser, E. & Parasuraman, R. (2011). Adaptive Aiding of Human-Robot Teaming: Effects of Imperfect Automation on Performance, Trust, and Workload. Journal of Cognitive Engineering and Decision Making 5(2), 209-231.

[23] Hancock, P.A. (2007). Procedure and Dynamic Display Relocation on Performance in a Multitask Environment. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 37(1), 47-57.

[24] Kaber, D.B. & Riley, J.M. (1999). Adaptive Automation of a Dynamic Control Task Based on Secondary Task Workload Measurement. International Journal of Cognitive Ergonomics 3(3), 169-187.

[25] Clamann, M.P., Wright, M.C., & Kaber, D.B. (2002). Comparison of Performance Effects of Adaptive Automation Applied to Various Stages of Human-Machine System Information Processing. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (342-346).

[26] Kaber, D.B., Wright, M.C., Prinzel, L.J., & Clamann, M.P. (2005). Adaptive Automation of Human-Machine System Information-Processing Functions. Human Factors 47(4), 730-741.

[27] Parasuraman, R., Cosenzo, K.A., & de Visser, E. (2009). Adaptive Automation for Human Supervision of Multiple Uninhabited Vehicles: Effects on Change Detection, Situation Awareness, and Mental Workload. Military Psychology 21(2), 270-297.

[28] Calhoun, G.L., Ward, V.B.R., & Ruff, H.A. (2011). Performance-based Adaptive Automation for Supervisory Control. Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (2059-2063). Thousand Oaks, CA: Sage Publications.

[29] Fidopiastis, C.M., Drexler, J., Barber, D., Cosenzo, K., Barnes, M., Chen, J.Y.C., & Nicholson, D. (2009). Impact of Automation and Task Load on Unmanned System Operator's Eye Movement Patterns. In D.D. Schmorrow, I.V. Estabrooke, & M. Grootjen (Eds.), Foundations of Augmented Cognition, Neuroergonomics and Operational Neuroscience. Berlin: Springer.

[30] de Greef, T.E., Lafeber, H., van Oostendorp, H., & Lindenberg, J. (2009). Eye Movement as Indicators of Mental Workload to Trigger Adaptive Automation. Proceedings of the 5th International Conference on Foundations of Augmented Cognition, Neuroergonomics and Operational Neuroscience (219-228). Berlin: Springer.

[31] Casali, J.G., & Wierwille, W.W. (1984). On the measurement of pilot perceptual workload: a comparison of assessment techniques addressing sensitivity and intrusion issues. Ergonomics, 27(10), 1033-1050.

[32] Wilson, G.F. (2002). An Analysis of Mental Workload in Pilots During Flight Using Multiple Psychophysiological Measures. International Journal of Aviation Psychology, 12(1), 3-18.

[33] St. John, M., Kobus, K.L., Morrison, J.G., & Schmorrow, D. (2004). Overview of the DARPA Augmented Cognition Technical Integration Experiment. International Journal of Human-Computer Interaction, 17(2), 131-149.


[34] Cummings, M. L. (2010). Technology Impedances to Augmented Cognition. Ergonomics in Design, 18(2), 25-27.

[35] Rencken, W.D. & Durrant-Whyte, H.F. (1993). A quantitative model for adaptive task allocation in human-computer interfaces. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 23(4), 1072-1090.

[36] Goodrich, M.A. , Boer, E.R., Crandall, J.W., Ricks, R.W., & Quigley, M.L. (2004). Behavioral Entropy in Human-Robot Interaction. Proceedings of the 2004 Performance Metrics for Intelligent Systems Workshop (93-100).

[37] Klein, M. & van Lambalgen, R. (2011). Design of an Optimal Automation System: Finding a Balance between a Human's Task Engagement and Exhaustion. Modern Approaches in Applied Intelligence (98-108). Berlin: Springer.

[38] Arciszewski, H.F.R., de Greef, T.E., & van Delft, J.B. (2009). Adaptive Automation in a Naval Combat Management System. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 39(6), 1188-1199.

[39] de Greef, T.E. & Arciszewski, H.F.R. (2009). Triggering Adaptive Automation in Naval Command and Control. In Cong, S. (Ed.), Frontiers in Adaptive Control (165-188). Vienna: InTech.

[40] de Greef, T.E., Arciszewski, H.F.R. , & Neerincx, M.A. (2010). Adaptive Automation Based on an Object­Oriented Task Model: Implementation and Evaluation in a Realistic C2 Environment. Journal of Cognitive Engineering and Decision Making 4(2), 152-182.

[41] Sherry, R.R. & Ritter, F.E. (2002). Dynamic Task Allocation: Issues for Implementing Adaptive Intelligent Automation. (Report no. ACS 2002-2). University Park, PA: The Pennsylvania State University.

[42] Marsden, P. & Kirby, M. (2004). Allocation of Functions. In N. Stanton, A. Hedge, K. Brookhuis, E. Salas, & H. Hendrick (Eds.), Handbook of Human Factors and Ergonomics Methods (338-346). Boca Raton, FL: CRC Press.

[43] Fitts, P.M. (Ed.). (1951). Human Engineering for an Effective Air-Navigation and Traffic-Control System. Washington, D.C.: National Research Council.

[44] Billings, C. E. (1996). Human-Centered Aviation Automation: Principles and Guidelines. (Technical memorandum 110381). Moffett Field, CA: National Aeronautics and Space Administration.

[45] Endsley, M.R., & Kaber, D.B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462-492.

[46] Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 30(3), 286-297.

[47] Kaber, D.B. & Endsley, M.R. (2003). The Effects of Level of Automation and Adaptive Automation on Human Performance, Situation Awareness and Workload in a Dynamic Control Task. Theoretical Issues in Ergonomics Science 5(2), 113-153.

[48] Parasuraman, R., Deaton, J.E., Morrison, J.G., & Barnes, M. (1992). Theory and Design of Adaptive Automation in Aviation Systems. (Report no. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center - Aircraft Division.

[49] Morrison, J.G., Cohen, D., & Gluckman, J.P. (1993). Prospective Principles and Guidelines for the Design of Adaptively Automated Crewstations. In J.G. Morrison (Ed.), The Adaptive Function Allocation for Intelligent Cockpits (AFAIC) Program: Interim Research and Guidelines for the Application of Adaptive Automation (1-6). Warminster, PA: Naval Air Warfare Center - Aircraft Division.

[50] Steinhauser, N.B., Pavlas, D., & Hancock, P.A. (2009). Design Principles for Adaptive Automation and Aiding. Ergonomics in Design 17(2), 6-10.

[51] Banks, S.B. & Lizza, C.S. (1991). Pilot's Associate: A cooperative, knowledge-based system application. IEEE Expert, 6(3), 18-29.

[52] Inagaki, T. (2003). Adaptive Automation: Sharing and Trading of Control. In E. Hollnagel (Ed.), Handbook of Cognitive Task Design (147-169). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[53] Hammer, J.M. (2009). Intelligent Interfaces. In J.A. Wise, V.D. Hopkin, & D.J. Garland (Eds.), Handbook of Aviation Human Factors (2nd ed.) (24-1 - 24-16). Boca Raton, FL: CRC Press.

[54] Miller, C.A. (2005). Levels of Automation in the Brave New World: Adaptive Autonomy, Virtual Presence and Swarms - Oh My! Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (901-905). Thousand Oaks, CA: Sage Publications.

[55] Miller, C.A. & Parasuraman, R. (2007). Designing for Flexible Interaction Between Humans and Automation: Delegation Interfaces for Supervisory Control. Human Factors 49(1), 57-75.


[56] Shaw, T., Emfield, A., Garcia, A., de Visser, E., Miller, C., Parasuraman, R., & Fern, L. (2010). Evaluating the Benefits and Potential Costs of Automation Delegation for Supervisory Control of Multiple UAVs. Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting (1498-1502). Thousand Oaks, CA: Sage Publications.

[57] Miller, C.A., Shaw, T., Emfield, A., Hamell, J., de Visser, E., Parasuraman, R., & Musliner, D. (2011). Delegating to Automation. Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (95-99). Thousand Oaks, CA: Sage Publications.

[58] A Review of FAA's Efforts to Reduce Costs and Ensure Safety and Efficiency Through Realignment and Consolidation: Hearing before the House Transportation and Infrastructure Subcommittee on Aviation, 112th Congress (2012). (Testimony of Paul M. Rinaldi). Retrieved from http://www.natca.org/ULWSiteResources/natcaweb/Resources/file/Legislative%20Center/Congressional%20Testimony/PaulHouseMay312012.pdf

[59] Annett, J. (2004). Hierarchical Task Analysis. In D. Diaper & N.A. Stanton (Eds.), The Handbook of Task Analysis for Human-Computer Interaction (67-82). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

[60] Stanton, N.A. (2006). Hierarchical Task Analysis: Developments, Applications, and Extensions. Applied Ergonomics 37(1), 55-79.

[61] The Boeing Company. (2003). 767 Flight Crew Training Manual. (3rd rev.).

[62] The Boeing Company. (2003). 767-300 Operations Manual. (6th rev.).

[63] Flight Safety Foundation. (1998). Being Prepared to Go Around. In Approach-and-Landing Accident Reduction Toolkit (Briefing note 6.1). Alexandria, VA: Flight Safety Foundation.

[64] Flight Safety Foundation. (1998). Manual Go-Around. In Approach-and-Landing Accident Reduction Toolkit (Briefing note 6.2). Alexandria, VA: Flight Safety Foundation.

[65] Flight Safety Foundation. (1998). Operating Philosophy. In Approach-and-Landing Accident Reduction Toolkit (Briefing note 1.1). Alexandria, VA: Flight Safety Foundation.

[66] Federal Aviation Administration. (2003). Standard Operating Procedures for Flight Deck Crewmembers. (AC No. 120-71A). Washington, DC: US Department of Transportation.

[67] Klein, M. & van Lambalgen, R. (2011). Design of an Optimal Automation System: Finding a Balance between a Human's Task Engagement and Exhaustion. Modern Approaches in Applied Intelligence (98-108). Berlin: Springer.

[68] Klein, G., Woods, D.D., Bradshaw, J.M., Hoffman, R.R., & Feltovich, P.J. (2004). Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity. IEEE Intelligent Systems, 19(6), 91-95.

[69] Christoffersen, K. & Woods, D.D. (2002). How to Make Automated Systems Team Players. In E. Salas (Ed.), Advances in Human Performance and Cognitive Engineering Research, Vol. 2 (1-12). Bingley, UK: Emerald Group Publishing Ltd.

[70] Goldman, C.V., & Degani, A. (2012). A Team-Oriented Framework for Human-Automation Interaction: Implication for the Design of an Advanced Cruise Control System. Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting (2354). Thousand Oaks, CA: Sage Publications.

[71] Dorneich, M.C., Rogers, W., Whitlow, S.D., & DeMers, R. (2012). Analysis of the Risks and Benefits of Flight Deck Adaptive Systems. Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting (75-79). Thousand Oaks, CA: Sage Publications.

BIOGRAPHY

Aaron Johnson is a Ph.D. student in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, with an expected graduation in 201~. His research focuses on dynamic task allocation between human operators and vehicle automation in complex aerospace systems. He has participated in the MIT+K12 project, creating educational videos about science and engineering with his labmates that have over 250,000 collective views. Aaron is also interested in sports analytics, particularly for the NFL. Originally from Pittsburgh, PA, Aaron earned a bachelor's degree in aerospace engineering from the University of Michigan in 2008 and a Master's degree in aeronautics and astronautics from MIT in 2010.

Kevin R. Duda is a Senior Member of the Technical Staff in the Human-Systems Collaboration Group at the Charles Stark Draper Laboratory. He has worked on a variety of programs related to the design and analysis of spacecraft automation, decision support systems, and mission planning. He currently is the PI on a project modeling and analyzing lunar lander supervisory control performance, and is a Co-I on a project designing and analyzing lunar lander manual control performance; both projects are funded by the National Space Biomedical Research Institute. He is an instrument-rated private pilot and holds a B.S. in Aerospace Engineering from Embry-Riddle Aeronautical University and an M.S. and Ph.D. in Aeronautics and Astronautics from MIT.

Charles M. Oman is a Senior Lecturer and Senior Research Engineer in the Department of Aeronautics and Astronautics and the former Director of the Man Vehicle Laboratory at MIT. His research addresses the physiological and cognitive limitations of humans in aircraft and spacecraft. He has flown numerous experiments on Shuttle/Spacelab. He previously led the Sensorimotor Adaptation Research Team of the National Space Biomedical Research Institute, and has been PI on two NSBRI projects and a co-I on two others. He holds a B.S. in Aerospace & Mechanical Sciences from Princeton University and an M.S. and Ph.D. in Aeronautics and Astronautics from MIT.


Thomas B. Sheridan is Professor Emeritus in both the Department of Mechanical Engineering and Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He received the BS degree from Purdue Univ., the MS degree from Univ. of California at Los Angeles, the ScD degree from MIT, and the Dr. (honorary) from Delft Univ. of Technology, Netherlands. As Director of the MIT Human-Machine Systems Laboratory for most of his career, his research and teaching activities included experimentation, analysis, modeling, and design related to human interaction and safety for aviation, space and undersea robots, nuclear power plant operations, surgery, and system simulation. He has authored five books on these topics. He has served as president of the IEEE Systems, Man and Cybernetics Society and president of the Human Factors and Ergonomics Society. He is a member of the National Academy of Engineering, and is a private pilot.

[Figure: Hierarchical Task Analysis of Nominal 767 Approach and Landing, Part 1 of 3]

0. Safely land aircraft
   1. Complete approach preparations
      1.1. Get the weather and likely runway
      1.2. Install the most likely approach in the FMS
      1.3. Set up both nav and comm radios
      1.4. Receive approach and landing instructions from ATC
      1.5. Brief the approach
      1.6. Set autobrake
      1.7. Set GPWS automatic "minimums" verbal message
   2. Descend to outer marker altitude
      2.1. Set autopilot for ... (complete only if using autopilot)
      2.2. Descend
   3. Slow down to outer marker speed
      3.1. Set autothrottle ... (complete only if using autopilot)
      3.2. Retard throttle
      3.3. Extend flaps
   4. Identify the desired heading that will intercept the localizer
   5. Turn aircraft to intercept localizer
      5.1. Set autopilot for ... (complete only if using autopilot)
      5.2. Turn airplane to new heading

Complete 1, then 2 and 3 in either order, then 4, then 5. Complete all subtasks in order.

KEY. Color indicates allocation: Manual Control; Automatic; Manual OR Automatic; Manual AND Automatic. Border indicates condition: dashed = complete task only if using autopilot; solid = always complete task.
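The task-analysis figures encode three things per box: its place in the task hierarchy, its allocation (the color: manual, automatic, or a combination), and a conditional-execution rule (the border: dashed boxes apply only when the autopilot is in use). A minimal sketch of that structure follows; the names (`Task`, `Allocation`, `applicable_tasks`) and the allocations assigned in the example are our own illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class Allocation(Enum):
    """The four color codes used in the task-analysis key."""
    MANUAL = "manual control"
    AUTOMATIC = "automatic"
    MANUAL_OR_AUTO = "manual OR automatic"
    MANUAL_AND_AUTO = "manual AND automatic"

@dataclass
class Task:
    """One box in the hierarchical task analysis."""
    number: str
    name: str
    allocation: Allocation
    autopilot_only: bool = False   # dashed border: complete only if using autopilot
    subtasks: list = field(default_factory=list)

def applicable_tasks(task, using_autopilot):
    """Flatten a task subtree into the ordered list of task labels that
    apply to the chosen control mode, honoring the dashed-border rule."""
    if task.autopilot_only and not using_autopilot:
        return []
    labels = [f"{task.number}. {task.name}"]
    for sub in task.subtasks:
        labels.extend(applicable_tasks(sub, using_autopilot))
    return labels

# Task 2 from Part 1 of the figure as an example subtree
# (allocations here are illustrative, not the figure's colors).
descend = Task("2", "Descend to outer marker altitude", Allocation.MANUAL_OR_AUTO, subtasks=[
    Task("2.1", "Set autopilot for descent", Allocation.MANUAL, autopilot_only=True),
    Task("2.2", "Descend", Allocation.MANUAL_OR_AUTO),
])

print(applicable_tasks(descend, using_autopilot=True))
# → ['2. Descend to outer marker altitude', '2.1. Set autopilot for descent', '2.2. Descend']
print(applicable_tasks(descend, using_autopilot=False))
# → ['2. Descend to outer marker altitude', '2.2. Descend']
```

The sequencing constraints printed beneath each figure ("complete 1, then 2 and 3 in either order, ...") would sit one level above this structure, as ordering relations between sibling tasks.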

[Figure: Hierarchical Task Analysis of Nominal 767 Approach and Landing, Part 2 of 3]

0. Safely land aircraft
   6. Complete approach checklist
   7. Intercept localizer
      7.1. Set autopilot for ... (complete only if using autopilot)
      7.2. Turn airplane to new heading
      7.3. Enter approach ... (complete only if using autopilot)
   8. Capture glide slope
   9. Drop landing gear
   10. Plan go-around and runway exit/taxi
      10.1. Set autopilot ... (complete only if using autopilot)

Complete 6, then 7, then 8, then 9 and 10 in either order. Complete all subtasks in order.

KEY. Color indicates allocation: Manual Control; Automatic; Manual OR Automatic; Manual AND Automatic. Border indicates condition: dashed = complete task only if using autopilot; solid = always complete task.

[Figure: Hierarchical Task Analysis of Nominal 767 Approach and Landing, Part 3 of 3]

0. Safely land aircraft
   11. Prepare final approach configuration (complete only if using autopilot)
      11.1. Engage left and ...
      11.2. Arm FLARE and R...
      11.3. Check autoland ...
      11.4. Decide whether or not to use autoland
      11.5. If landing manually, disconnect the autopilot
   12. Final approach
      12.1. Arm speedbrakes for automatic deployment
      12.2. Complete landing checklist
      12.3. Disconnect ... (complete only if using autopilot)
      12.4. Cut throttles at 50 feet altitude
      12.5. Flare
   13. Touchdown
      13.1. Apply wheel brakes
      13.2. Deploy thrust reversers
      13.3. Deploy speedbrakes
      13.4. Steer to maintain centerline
      13.5. Check reversers, spoilers, and autobrakes
      13.6. Stow thrust reversers
      13.7. Disconnect autopilot and ... (complete only if using autopilot)

Complete 11, then 12, then 13. Complete all subtasks in order.

KEY. Color indicates allocation: Manual Control; Automatic; Manual OR Automatic; Manual AND Automatic. Border indicates condition: dashed = complete task only if using autopilot; solid = always complete task.

