HUMAN FACTORS, 1997, 39(2), 230-253

Humans and Automation: Use, Misuse, Disuse, Abuse

RAJA PARASURAMAN,1 Catholic University of America, Washington, D.C., and VICTOR RILEY, Honeywell Technology Center, Minneapolis, Minnesota

This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.

INTRODUCTION

The revolution ushered in by the digital computer in the latter half of this century has fundamentally changed many characteristics of work, leisure, travel, and other human activities. Even more radical changes are anticipated in the next century as computers increase in power, speed, and "intelligence." These factors sustain much of the drive toward automation in the workplace and elsewhere, as more capable computer hardware and software become available at low cost.

1 Requests for reprints should be sent to Raja Parasuraman, Cognitive Science Laboratory, Catholic University of America, Washington, DC 20064.

Technical issues-how automation functions are implemented and the characteristics of the associated sensors, controls, and software-dominate most writing on automation technology. This is not surprising, given the sophistication and ingenuity of design of many such systems (e.g., automatic landing of an aircraft). The economic benefits that automation can provide, or is perceived to offer, also tend to focus public attention on the technical capabilities of automation, which have been amply documented in such diverse domains as aviation (Spitzer, 1987), automobiles (IVHS America, 1992), manufacturing (Bessant, Levy, Ley, Smith, & Tranfield, 1992), medicine (Thompson, 1994), robotics (Sheridan, 1992), and shipping (Grabowski & Wallace, 1993).

© 1997, Human Factors and Ergonomics Society. All rights reserved.

Humans work with and are considered essential to all of these systems. However, in comparison with technical capabilities, human capabilities-human performance and cognition in automated systems-are much less frequently written about or discussed in public forums. This stems not from a relative lack of knowledge (Bainbridge, 1983; Billings, 1991; Chambers & Nagel, 1985; Hopkin, 1995; Mouloua & Parasuraman, 1994; Parasuraman & Mouloua, 1996; Rasmussen, 1986; Riley, 1995; Sheridan, 1992; Wickens, 1994; Wiener & Curry, 1980; Woods, 1996) but rather from a much greater collective emphasis on the technological than on the human aspects of automation.

In this paper we examine human performance aspects of the technological revolution known as automation. We analyze the factors influencing human use of automation in domains such as aviation, manufacturing, ground transportation, and medicine, though our treatment does not focus on any one of these systems. Consideration of these factors is important not only to systems currently under development, such as automation tools for air traffic management (Erzberger, 1992), but also to far-reaching system concepts that may be implemented in the future, such as free flight (Planzer & Hoffman, 1995; Radio and Technical Committee on Aeronautics, 1995).

A prevalent assumption about automation is that it resides in tyrannical machines that replace humans, a view made popular by Chaplin in his movie Modern Times. However, it has become evident that automation does not supplant human activity; rather, it changes the nature of the work that humans do, often in ways unintended and unanticipated by the designers of automation. In modern times, humans are consumers of automation. We discuss the human usage patterns of automation in this paper.

First, however, some restrictions of scope should be noted. We focus on the influence of automation on individual task performance. We do not consider in any detail the impact of automation on team (Bowers, Oser, Salas, & Cannon-Bowers, 1996) or job performance (Smith & Carayon, 1995), or on organizational behavior (Gerwin & Leung, 1986; Sun & Riis, 1994). We also do not examine the wider sociological, sociopsychological, or sociopolitical aspects of automation and human behavior (Sheridan, 1980; Zuboff, 1988), though such issues are becoming increasingly important to consider in automation design (Hancock, 1996; Nickerson, 1995).

What Is Automation?

We define automation as the execution by a machine agent (usually a computer) of a function that was previously carried out by a human. What is considered automation will therefore change with time. When the reallocation of a function from human to machine is complete and permanent, then the function will tend to be seen simply as a machine operation, not as automation. Examples of this include starter motors for cars and automatic elevators. By the same token, such devices as automatic teller machines, cruise controls in cars, and the flight management system (FMS) in aircraft qualify as automation because they perform functions that are also performed manually by humans. Today's automation could well be tomorrow's machine.

Automation of physical functions has freed humans from many time-consuming and labor-intensive activities; however, full automation of cognitive functions, such as decision making, planning, and creative thinking, remains rare. Could machine displacement of human thinking become more commonplace in the future? In principle this might be possible. For example, devices such as optical disks are increasingly replacing books as repositories of large amounts of information. Evolutionary neurobiologists have speculated that external means of storing information and knowledge (as opposed to internal storage in the human brain) not only have played an important role in the evolution of human consciousness but will also do so in its future development (Donald, 1991). Hence permanent allocation of higher cognitive functions to machines need not be conceptually problematic. Moreover, such a transfer will probably not replace but rather will modify human thinking. In practice, however, despite more than three decades of research on artificial intelligence, neural networks, and the development, in Schank's (1984) terms, of "the cognitive computer," enduring transfer of thinking skills to machines has proven very difficult.

Automation can be usefully characterized by a continuum of levels rather than as an all-or-none concept (McDaniel, 1988; Riley, 1989; Sheridan, 1980). Under full manual control, a particular function is controlled by the human, with no machine control. At the other extreme of full automation, the machine controls all aspects of the function, including its monitoring, and only its products (not its internal operations) are visible to the human operator.

Different levels of automation can be identified between these extremes. For example, Sheridan (1980) identified 10 such levels of automation; in his seventh level, the automation carries out a function and informs the operator to that effect, but the operator cannot control the output. Riley (1989) defined automation levels as the combination of particular values along two dimensions: intelligence and autonomy. Automation with high autonomy can carry out functions with only initiating input from the operator. At the highest levels, the functions cannot be overridden by the human operator (e.g., the flight envelope protection function of the Airbus 320 aircraft).
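As a purely illustrative sketch (not from the paper), the continuum can be written down as an ordered set of levels, each tagged with notional values on Riley's two dimensions of intelligence and autonomy; all names and numeric values below are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AutomationLevel:
        """One illustrative point on the manual-to-full-automation continuum."""
        name: str
        intelligence: float        # Riley's first dimension, 0 (none) to 1 (high); values are notional
        autonomy: float            # Riley's second dimension, 0 (none) to 1 (high); values are notional
        operator_can_override: bool

    LEVELS = [
        AutomationLevel("full manual control", 0.0, 0.0, True),
        AutomationLevel("executes and informs operator (Sheridan's seventh level)", 0.7, 0.7, False),
        AutomationLevel("non-overridable function (e.g., envelope protection)", 0.9, 1.0, False),
        AutomationLevel("full automation", 1.0, 1.0, False),
    ]

    for level in LEVELS:
        print(f"{level.name}: autonomy={level.autonomy}, operator override={level.operator_can_override}")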

Human Roles in Automated Systems

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible. Technical capability and low cost are valid reasons for automation if there is no detrimental impact on human (and hence system) performance in the resulting system. As we discuss later, however, this is not always the case. In the ultimate extension of this practice, automation would completely replace operators in systems. Automation has occasionally had this effect (e.g., in some sectors of the manufacturing industry), but more generally, automation has not completely displaced human workers. Although in lay terms it is easiest to think of an automated system as not including a human, most such systems, including unmanned systems such as spacecraft, involve human operators in a supervisory or monitoring role.

One of the considerations preventing the total removal of human operators from such systems has been the common perception that humans are more flexible, adaptable, and creative than automation and thus are better able to respond to changing or unforeseen conditions. In a sense, then, one might consider the levels of automation and operator involvement that are permitted in a system design as reflecting the relative levels of trust in the designer on one hand and the operator on the other. Given that no designer of automation can foresee all possibilities in a complex environment, one approach is to rely on the human operator to exercise his or her experience and judgment in using automation. Usually (but not always) the operator is given override authority and some discretion regarding the use of automation.

This approach, however, tends to define the human operator's roles and responsibilities in terms of the automation (Riley, 1995). Designers tend to automate everything that leads to an economic benefit and leave the operator to manage the resulting system. Several important human factors issues emerge from this approach, including consequences of inadequate feedback about the automation's actions and intentions (Norman, 1990), awareness and management of automation modes (Sarter & Woods, 1994), underreliance on automation (Sorkin, 1988), and overreliance on automation (Parasuraman, Molloy, & Singh, 1993; Riley, 1994b). An extensive list of human factors concerns associated with cockpit automation was recently compiled by Funk, Lyall, and Riley (1995).

Incidents and Accidents

Unfortunately, the ability to address human performance issues systematically in design and training has lagged behind the application of automation, and issues have come to light as a result of accidents and incidents. The need for better feedback about the automation's state was revealed in a number of controlled-flight-into-terrain aircraft accidents in which the crew selected the wrong guidance mode and indications presented to the crew appeared similar to when the system was tracking the glide slope perfectly (Corwin, Funk, Levitan, & Bloomfield, 1993). The difficulty of managing complex flight guidance modes and maintaining awareness of which mode the aircraft was in was demonstrated by accidents attributed to pilot confusion regarding the current mode (Sarter & Woods, 1994). For example, an Airbus A320 crashed in Strasbourg, France, when the crew apparently confused the vertical speed and flight path angle modes (Ministère de l'Equipement, des Transports et du Tourisme, 1993).

Underreliance on automation was demonstrated in railroad accidents in which crews chose to neglect speed constraints and their associated alerts. Even after one such accident near Baltimore in 1987, inspectors found that the train operators were continuing to tape over the buzzers that warned them of speed violations (Sorkin, 1988). Finally, overreliance on automation was a contributing cause in an accident near Columbus, Ohio, in 1994. A pilot who demonstrated low confidence in his own manual control skills and tended to rely heavily on the automatic pilot during nighttime low-visibility approaches failed to monitor the aircraft's airspeed during final approach in a nighttime snowstorm and crashed short of the runway (National Transportation Safety Board [NTSB], 1994).

Most such accidents result from multiple causes, and it can be difficult to untangle the various contributing factors. Whenever automation is involved in an accident, the issue of how the operator used that automation is of interest, but it may be difficult to say that the operator used the automation too much, too little, or otherwise inappropriately. Often the best one can do is to conclude that, the operator having used the automation in a certain way, certain consequences followed. The lessons learned from these consequences then join the growing body of lessons related to automation design and use. In most cases the operator is not clearly wrong in using or not using the automation. Having determined that the operator must be trusted to apply experience and judgment in unforeseen circumstances, he or she is granted the authority to decide when and how to use it (though management may limit this authority to a greater or lesser extent).

This brings up the question of how operators make decisions to use automation. How do they decide whether or not to use automation? Do they make these decisions rationally or based on nonrational factors? Are automation usage decisions appropriate given the relative performances of operator and automation? When and why do people misuse automation?

Overview

In this paper we examine the factors influencing the use, misuse, disuse, and abuse of automation. Two points should be emphasized regarding our terminology. First, we include in our discussion of human use of automation not only human operators of systems but also designers, supervisors, managers, and regulators. This necessarily means that any human error associated with use of automation can include the human operator, the designer, or even management error; examples of each are provided throughout this paper.

Second, in using terms such as misuse, disuse, and abuse, no pejorative intent is implied toward any of these groups. We define misuse as overreliance on automation (e.g., using it when it should not be used, failing to monitor it effectively), disuse as underutilization of automation (e.g., ignoring or turning off automated alarms or safety systems), and abuse as inappropriate application of automation by designers or managers (e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation. He proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes-positive and negative, general and specific-constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most-that is, during the high-workload phase of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low-less than about 50%. A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks-tracking, system monitoring, and fuel management-and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload-that associated with the decision to use the automation itself-may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear-for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)
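The OCR example amounts to a simple break-even comparison between manual entry time and automation setup plus correction time. The sketch below illustrates this with entirely hypothetical rates; with these numbers the scanner only pays off beyond about two pages.

    def manual_entry_minutes(pages: int, typing_minutes_per_page: float = 5.0) -> float:
        """Time to type the text by hand (hypothetical rate)."""
        return pages * typing_minutes_per_page

    def ocr_entry_minutes(pages: int,
                          setup_minutes: float = 6.0,
                          scan_minutes_per_page: float = 0.5,
                          correction_minutes_per_page: float = 1.5) -> float:
        """Time to set up the scanner, scan, and correct OCR errors (hypothetical rates)."""
        return setup_minutes + pages * (scan_minutes_per_page + correction_minutes_per_page)

    for pages in (1, 2, 5, 10):
        print(pages, manual_entry_minutes(pages), ocr_entry_minutes(pages))
    # 1 page: 5.0 vs. 8.0 min (manual wins); 10 pages: 50.0 vs. 26.0 min (OCR wins)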

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended-that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.
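Kirlik's result can be illustrated with a much simpler cost comparison than his Markov analysis. The sketch below, with hypothetical task times, an engagement overhead, and a time-sharing penalty, shows that whether engaging the autopilot is optimal depends on how the overhead compares with the attention it frees up.

    def minutes_all_manual(primary: float, secondary: float, dual_task_penalty: float = 1.4) -> float:
        """Operator flies and does the secondary task; time-sharing inflates total time (hypothetical penalty)."""
        return (primary + secondary) * dual_task_penalty

    def minutes_with_autopilot(secondary: float, engage_overhead: float = 2.0) -> float:
        """Autopilot flies the primary task; the operator pays an engagement overhead,
        which also delays the secondary task, then performs that task alone."""
        return engage_overhead + secondary

    # Short tasks: 2.8 min manual vs. 3.0 min with the autopilot, so staying manual is optimal.
    print(minutes_all_manual(1.0, 1.0), minutes_with_autopilot(1.0))
    # Longer tasks: 14.0 min manual vs. 7.0 min with the autopilot, so engaging it is optimal.
    print(minutes_all_manual(5.0, 5.0), minutes_with_autopilot(5.0))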

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation and that they otherwise chose automation.
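The Lee and Moray (1994) finding reduces to a simple comparative rule, restated in code below; the numeric trust and confidence scales are hypothetical, and, as noted above, real operators also show inertia and a bias toward manual control.

    def chooses_automation(trust_in_automation: float, self_confidence: float) -> bool:
        """Lee and Moray (1994): automation is chosen when trust in the automation
        exceeds the operator's confidence in manual control; otherwise manual control."""
        return trust_in_automation > self_confidence

    print(chooses_automation(trust_in_automation=0.8, self_confidence=0.6))  # True: rely on automation
    print(chooses_automation(trust_in_automation=0.5, self_confidence=0.7))  # False: retain manual control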

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.
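One way to picture these dynamics (an illustration only, not a model from the paper) is to treat trust as a quantity updated after each observation of the automation, with failures weighted more heavily than successes; an isolated failure then produces a dip followed by slow recovery, whereas sustained reliability keeps trust high. The update rule and parameters below are hypothetical.

    def update_trust(trust: float, automation_succeeded: bool,
                     gain_rate: float = 0.05, loss_rate: float = 0.4) -> float:
        """Move trust toward 1 after a success and toward 0 after a failure.
        The asymmetric rates (hypothetical) make trust quick to fall and slow to recover."""
        if automation_succeeded:
            return trust + gain_rate * (1.0 - trust)
        return trust - loss_rate * trust

    trust = 0.9
    for outcome in [True] * 5 + [False] + [True] * 10:   # one isolated failure in a reliable history
        trust = update_trust(trust, outcome)
    print(round(trust, 2))  # about 0.73: still recovering from the single failure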

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).

[Figure 1: a network diagram whose nodes include operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, fatigue, risk, and state learning.]

Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents-for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in or overreliance on automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to "automation bias" (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered-virtually every minute for these continuous, 24-h systems-then the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.

[Figure 2: detection rate of automation failures (percentage) plotted across ten 10-minute blocks for the variable-reliability and constant-reliability conditions.]

Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993).

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990)-the Engine Monitoring and Crew Alerting System (EMACS)-in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.

Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
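A minimal sketch of such an adaptive allocation policy is given below; the workload thresholds are hypothetical, and the refresh interval echoes the 40-min automation period examined in the study described next.

    def allocate_task(workload: float,
                      minutes_on_automation: float,
                      high_workload: float = 0.8,
                      low_workload: float = 0.4,
                      refresh_after_minutes: float = 40.0) -> str:
        """Illustrative adaptive task allocation rule (thresholds are hypothetical):
        shed the task to automation at peak workload, return it to the operator when
        workload drops, and periodically reallocate it manually to keep the
        automated task fresh in the operator's memory."""
        if minutes_on_automation >= refresh_after_minutes:
            return "manual"      # brief manual period to refresh the operator
        if workload >= high_workload:
            return "automation"  # task shedding during peak workload
        if workload <= low_workload:
            return "manual"      # operator retakes control when workload diminishes
        return "unchanged"       # otherwise keep the current allocation

    print(allocate_task(workload=0.9, minutes_on_automation=10))  # automation
    print(allocate_task(workload=0.9, minutes_on_automation=45))  # manual (refresh period)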

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple "agents" who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental "picture" in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
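The likelihood-alarm concept lends itself to a very small sketch: rather than a single binary alert, the display announces one of several graded states based on an estimated probability of the hazardous condition. The number of levels, the labels, and the cutoffs below are illustrative assumptions, not those of Sorkin, Kantowitz, and Kantowitz (1988).

```python
# Illustrative likelihood alarm: map an estimated hazard probability to one of
# several graded alert states instead of a single alarm/no-alarm decision.
def likelihood_alarm(p_hazard: float) -> str:
    """Return a graded alert level for an estimated hazard probability in [0, 1]."""
    if p_hazard >= 0.9:
        return "WARNING: hazard very likely"
    if p_hazard >= 0.5:
        return "CAUTION: hazard likely"
    if p_hazard >= 0.1:
        return "ADVISORY: hazard possible"
    return "NO ALERT"
```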

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (beta) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, beta can be set so that this warning system misses only 1 of every 1000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
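The posterior probability quoted in this example follows directly from Bayes' rule. The sketch below simply reproduces that arithmetic with the hit rate, false alarm rate, and base rate given above; it is an illustration of the calculation, not code from the cited analysis, and small rounding differences from the published figure are expected.

```python
# P(S|R): probability of a true hazard S given an alarm R, by Bayes' rule:
# P(S|R) = P(R|S)P(S) / [P(R|S)P(S) + P(R|not S)P(not S)]
def posterior_true_alarm(hit_rate: float, fa_rate: float, base_rate: float) -> float:
    """Posterior probability that an alarm reflects a real hazardous condition."""
    p_alarm = hit_rate * base_rate + fa_rate * (1.0 - base_rate)
    return (hit_rate * base_rate) / p_alarm

# Values from the text: hit rate .999, false alarm rate .0594, base rate .001
print(posterior_true_alarm(0.999, 0.0594, 0.001))  # ~0.017: roughly 1 true alarm in 60
```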

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
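One way to make this trade-off explicit, which is standard signal detection practice rather than a formula from this paper, is to choose the likelihood-ratio criterion that minimizes expected cost. The sketch below assumes zero value for correct responses, and the cost figures are arbitrary illustrations.

```python
# Expected-cost-optimal likelihood-ratio criterion (standard SDT result):
#   beta* = [P(no hazard) / P(hazard)] * [cost(false alarm) / cost(miss)]
def optimal_beta(base_rate: float, cost_false_alarm: float, cost_miss: float) -> float:
    """Criterion minimizing expected cost when correct responses carry no value."""
    return ((1.0 - base_rate) / base_rate) * (cost_false_alarm / cost_miss)

# A rare hazard pushes beta* up (fewer alarms); a miss that is vastly more costly
# than a false alarm pushes it back down (more alarms), as the text describes.
print(optimal_beta(base_rate=0.001, cost_false_alarm=1.0, cost_miss=10_000.0))  # ~0.1
```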

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operate the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
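The interlock just described can be reduced to a couple of lines, which makes the designer-error argument concrete. This is an illustrative simplification, not the logic of any particular aircraft: the designer's assumption that the sensor always reads true on the ground becomes a single point of failure the pilot cannot override.

```python
# Simplified weight-on-wheels interlock (illustrative only): deployment of ground
# spoilers or thrust reversers is gated on the sensor, with no pilot override.
def deployment_permitted(weight_on_wheels: bool, pilot_commands_deploy: bool) -> bool:
    """Devices deploy only if the sensor reports that the gear are on the ground."""
    return pilot_commands_deploy and weight_on_wheels

# Failure case: aircraft is on the runway but the sensor has failed (reads False);
# deployment_permitted(False, True) -> False, exactly when the devices are needed.
```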

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association (1989) National plan to enhanceaviation safety through human factors improvements Wash-ington DC Author

Bainbridge 1 (1983) Ironies of automation Automatica 19775-779

Bennett K amp Flach J M (1992) Graphical displays Implica-tions for divided attention focused attention and problemsolving Human Factors 34 513-533

Bessant J Levy P Ley C Smith S amp Tranfield D (1992)Organization design for factory 2000 International Journalof Human Factors in Manufacturing 2 95-125

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings C E amp Woods D D (1994) Concerns about adaptiveautomation in aviation systems In M Mouloua amp R Para-suraman (Eds) Human performance in automated systemsCurrent research and trends (pp 264-269) Hillsdale NJErlbaum

Bowers C A Oser R J Salas E amp Cannon-Bowers J A(1996) Team performance in automated systems In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 243-263)Hillsdale NJ Erlbaum

Byrne E A amp Parasuraman R (1996) Psychophysiology andadaptive automation Biological Psychology 42 249-268

Casey S (1993) Set phasers on stun Santa Barbara CA Ae-gean

Casner S (1994) Understanding the determinants of problem-solving behavior in a complex environment Human Fac-tors 36 580-596

Chambers N amp Nagel D C (1985) Pilots of the future Hu-man or computer Communications of the Association forComputing Machinery 28 1187-1199

Corwin W H Funk H Levitan 1 amp Bloomfield J (1993)Flight crew information requirements (Contractor RepDTFA- 91-C-00040) Washington DC Federal Aviation Ad-ministration

Donald M (1991) Origins of the modem mind Three stages inthe evolution of culture and cognition Cambridge MA Har-vard University Press

Duley J A Westerman S Molloy R amp Parasuraman R (inpress) Effects of display superimposition on monitoring ofautomated tasks In Proceedings of the 9th InternationalSymposium on Aviation Psychology Columbus Ohio StateUniversity

Edwards E (1977) Automation in civil transport aircraft Ap-plied Ergonomics 4 194-198

Endsley M (1996) Automation and situation awareness In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 163-181)Hillsdale NJ Erlbaum

Erzberger H (1992) CTAS Computer intelligence for air trafficcontrol in the terminal area (NASA Tech Memorandum103959) Moffett Field CA NASA Ames Research Center

Farber E amp Paley M (1993 April) Using freeway traffic datato estimate the effectiveness of rear-end collision countermea-sures Paper presented at the Third Annual IVHS AmericaMeeting Washington DC

Federal Aviation Administration (1990) The national plan foraviation human factors Washington DC Author

Fitts P M (1951) Human engineering for an effective air navi-gation and traffic control system Washington DC NationalResearch Council

Funk K Lyall B amp Riley V (1995) Perceived human factorsproblems of flightdeck automation (Phase I Final Rep) Cor-vallis Oregon State University

Gerwin D amp Leung T K (1986) The organizational impactsof flexible manufacturing systems In T Lupton (Ed) Hu-man factors Man machines and new technology (pp 157-170) New York Springer-Verlag

Getty D J Swets J A Pickett R M amp Gounthier D (1995)System operator response to warnings of danger A labora-tory investigation of the effects of the predictive value of awarning on human response time Journal of ExperimentalPsychology Applied 1 19-33

Gonzalez R C amp Howington 1 C (1977) Machine recogni-tion of abnormal behavior in nuclear reactors IEEE Trans-actions on Systems Man and Cybernetics SMC-7 717-728

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar J K amp Hansman R J (1995) A probabilistic meth-odology for the evaluation of alerting system performanceIn Proceedings of the IFACIFlPIFORSIEA SymposiumCambridge MA

Langer E (1989) Mindfulness Reading MA Addison-WesleyLee J D amp Moray N (1992) Trust control strategies and

allocation of function in human-machine systems Ergo-nomics 35 1243-1270

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministere de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquete sur l'Accident survenu le 20 Janvier 1992 pres du Mont Sainte Odile a l'Airbus A320 Immatricule F-GGED Exploite par la Compagnie Air Inter. Paris: Author.


Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board (1973) Eastern AirlinesL-1011 Miami Florida 20 December 972 (Rep NTSB-AAR-73-14) Washington DC Author

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board (1997b) Grounding ofthe Panamanian passenger ship Royal Majesty on Rose andCrown shoal near Nantucket Massachusetts June 10 1995(Rep NTSBMAR-97-01) Washington DC Author

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman R Molloy R amp Singh I 1 (1993) Perfor-mance consequences of automation-induced compla-cency International Journal of Aviation Psychology 31-23

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley V (1995) What avionics engineers should know aboutpilots and automation In Proceedings of the 14th AlANIEEE Digital Avionics Systems Conference (pp 252-257)Cambridge MA AIAA

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I 1 Deaton J E amp Parasuraman R (1993 October)Development of a scale to measure pilot attitudes to cockpitautomation Paper presented at the Annual Meeting of theHuman Factors Society Seattle WA

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh I 1 Molloy R amp Parasuraman R (1997) Automation-related monitoring inefficiency The role of display loca-tion International Journal of Human Computer Studies 4617-30

Smith M J amp Carayon P (1995) New technology automa-tion and work organization International Journal of Hu-man Factors in ManufacturingS 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317


Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum


Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


(Sheridan, 1992), and shipping (Grabowski & Wallace, 1993).

Humans work with and are considered essential to all of these systems. However, in comparison with technical capabilities, human capabilities (human performance and cognition in automated systems) are much less frequently written about or discussed in public forums. This stems not from a relative lack of knowledge (Bainbridge, 1983; Billings, 1991; Chambers & Nagel, 1985; Hopkin, 1995; Mouloua & Parasuraman, 1994; Parasuraman & Mouloua, 1996; Rasmussen, 1986; Riley, 1995; Sheridan, 1992; Wickens, 1994; Wiener & Curry, 1980; Woods, 1996) but rather from a much greater collective emphasis on the technological than on the human aspects of automation.

In this paper we examine human performance aspects of the technological revolution known as automation. We analyze the factors influencing human use of automation in domains such as aviation, manufacturing, ground transportation, and medicine, though our treatment does not focus on any one of these systems. Consideration of these factors is important not only to systems currently under development, such as automation tools for air traffic management (Erzberger, 1992), but also to far-reaching system concepts that may be implemented in the future, such as "free flight" (Planzer & Hoffman, 1995; Radio and Technical Committee on Aeronautics, 1995).

A prevalent assumption about automation is that it resides in tyrannical machines that replace humans, a view made popular by Chaplin in his movie Modern Times. However, it has become evident that automation does not supplant human activity; rather, it changes the nature of the work that humans do, often in ways unintended and unanticipated by the designers of automation. In modern times, humans are consumers of automation. We discuss the human usage patterns of automation in this paper.

First, however, some restrictions of scope should be noted. We focus on the influence of automation on individual task performance. We do not consider in any detail the impact of automation on team (Bowers, Oser, Salas, & Cannon-Bowers, 1996) or job performance (Smith & Carayon, 1995) or on organizational behavior (Gerwin & Leung, 1986; Sun & Riis, 1994). We also do not examine the wider sociological, sociopsychological, or sociopolitical aspects of automation and human behavior (Sheridan, 1980; Zuboff, 1988), though such issues are becoming increasingly important to consider in automation design (Hancock, 1996; Nickerson, 1995).

What Is Automation

We define automation as the execution by a machine agent (usually a computer) of a function that was previously carried out by a human. What is considered automation will therefore change with time. When the reallocation of a function from human to machine is complete and permanent, then the function will tend to be seen simply as a machine operation, not as automation. Examples of this include starter motors for cars and automatic elevators. By the same token, such devices as automatic teller machines, cruise controls in cars, and the flight management system (FMS) in aircraft qualify as automation because they perform functions that are also performed manually by humans. Today's automation could well be tomorrow's machine.

Automation of physical functions has freed humans from many time-consuming and labor-intensive activities; however, full automation of cognitive functions such as decision making, planning, and creative thinking remains rare. Could machine displacement of human thinking become more commonplace in the future? In principle, this might be possible. For example, devices such as optical disks are increasingly replacing books as repositories of large amounts of information. Evolutionary neurobiologists have speculated that external means of storing information and knowledge (as opposed to internal storage in the human brain) not only have played an important role in the evolution of human consciousness but will also do so in its future development (Donald, 1991). Hence, permanent allocation of higher cognitive functions to machines need not be conceptually problematic. Moreover, such a transfer will probably not replace, but rather will modify, human thinking. In practice, however, despite more than three decades of research on artificial intelligence, neural networks, and the development, in Schank's (1984) terms, of the "cognitive computer," enduring transfer of thinking skills to machines has proven very difficult.

Automation can be usefully characterized by a continuum of levels rather than as an all-or-none concept (McDaniel, 1988; Riley, 1989; Sheridan, 1980). Under full manual control, a particular function is controlled by the human, with no machine control. At the other extreme of full automation, the machine controls all aspects of the function, including its monitoring, and only its products (not its internal operations) are visible to the human operator.

Different levels of automation can be identified between these extremes. For example, Sheridan (1980) identified 10 such levels of automation; in his seventh level, the automation carries out a function and informs the operator to that effect, but the operator cannot control the output. Riley (1989) defined automation levels as the combination of particular values along two dimensions: intelligence and autonomy. Automation with high autonomy can carry out functions with only initiating input from the operator. At the highest levels, the functions cannot be overridden by the human operator (e.g., the flight envelope protection function of the Airbus 320 aircraft).
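A rough sketch of this continuum is given below. Only the anchor points named in the text (full manual control, Sheridan's seventh level, and full automation that cannot be overridden) are encoded; intermediate levels are deliberately omitted because their exact definitions vary across sources, and the override helper is an illustrative assumption.

```python
# Partial sketch of a levels-of-automation scale; only the points described in
# the text are encoded, and the numbering of the endpoints is assumed.
from enum import IntEnum

class AutomationLevel(IntEnum):
    FULL_MANUAL = 1            # human controls the function; no machine control
    EXECUTES_AND_INFORMS = 7   # Sheridan (1980): automation acts and informs the
                               # operator, who cannot control the output
    FULL_AUTOMATION = 10       # machine controls all aspects; cannot be overridden
                               # (cf. A320 flight envelope protection)

def operator_can_override(level: AutomationLevel) -> bool:
    """Illustrative assumption: override is unavailable only at the extreme."""
    return level < AutomationLevel.FULL_AUTOMATION
```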

Human Roles in Automated Systems

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible. Technical capability and low cost are valid reasons for automation if there is no detrimental impact on human (and hence system) performance in the resulting system. As we discuss later, however, this is not always the case. In the ultimate extension of this practice, automation would completely replace operators in systems. Automation has occasionally had this effect (e.g., in some sectors of the manufacturing industry), but more generally automation has not completely displaced human workers. Although in lay terms it is easiest to think of an automated system as not including a human, most such systems, including unmanned systems such as spacecraft, involve human operators in a supervisory or monitoring role.

One of the considerations preventing the total removal of human operators from such systems has been the common perception that humans are more flexible, adaptable, and creative than automation and thus are better able to respond to changing or unforeseen conditions. In a sense, then, one might consider the levels of automation and operator involvement that are permitted in a system design as reflecting the relative levels of trust in the designer, on one hand, and the operator, on the other. Given that no designer of automation can foresee all possibilities in a complex environment, one approach is to rely on the human operator to exercise his or her experience and judgment in using automation. Usually (but not always) the operator is given override authority and some discretion regarding the use of automation.

This approach, however, tends to define the human operator's roles and responsibilities in terms of the automation (Riley, 1995). Designers tend to automate everything that leads to an economic benefit and leave the operator to manage the resulting system. Several important human factors issues emerge from this approach, including consequences of inadequate feedback about the automation's actions and intentions (Norman, 1990), awareness and management of automation modes (Sarter & Woods, 1994), underreliance on automation (Sorkin, 1988), and overreliance on automation (Parasuraman, Molloy, & Singh, 1993; Riley, 1994b). An extensive list of human factors concerns associated with cockpit automation was recently compiled by Funk, Lyall, and Riley (1995).

Incidents and Accidents

Unfortunately, the ability to address human performance issues systematically in design and training has lagged behind the application of automation, and issues have come to light as a result of accidents and incidents. The need for better feedback about the automation's state was revealed in a number of "controlled flight into terrain" aircraft accidents in which the crew selected the wrong guidance mode and indications presented to the crew appeared similar to when the system was tracking the glide slope perfectly (Corwin, Funk, Levitan, & Bloomfield, 1993). The difficulty of managing complex flight guidance modes and maintaining awareness of which mode the aircraft was in was demonstrated by accidents attributed to pilot confusion regarding the current mode (Sarter & Woods, 1994). For example, an Airbus A320 crashed in Strasbourg, France, when the crew apparently confused the vertical speed and flight path angle modes (Ministere de l'Equipement, des Transports et du Tourisme, 1993).

Underreliance on automation was demonstrated in railroad accidents in which crews chose to neglect speed constraints and their associated alerts. Even after one such accident near Baltimore in 1987, inspectors found that the train operators were continuing to tape over the buzzers that warned them of speed violations (Sorkin, 1988). Finally, overreliance on automation was a contributing cause in an accident near Columbus, Ohio, in 1994. A pilot who demonstrated low confidence in his own manual control skills and tended to rely heavily on the automatic pilot during nighttime low-visibility approaches failed to monitor the aircraft's airspeed during final approach in a nighttime snowstorm and crashed short of the runway (National Transportation Safety Board [NTSB], 1994).

Most such accidents result from multiple causes, and it can be difficult to untangle the various contributing factors. Whenever automation is involved in an accident, the issue of how the operator used that automation is of interest, but it may be difficult to say that the operator used the automation too much, too little, or otherwise inappropriately. Often the best one can do is to conclude that, the operator having used the automation in a certain way, certain consequences followed. The lessons learned from these consequences then join the growing body of lessons related to automation design and use. In most cases the operator is not clearly wrong in using or not using the automation. Having determined that the operator must be trusted to apply experience and judgment in unforeseen circumstances, he or she is granted the authority to decide when and how to use it (though management may limit this authority to a greater or lesser extent).

This brings up the question of how operators make decisions to use automation. How do they decide whether or not to use automation? Do they make these decisions rationally or based on nonrational factors? Are automation usage decisions appropriate, given the relative performances of operator and automation? When and why do people misuse automation?

Overview

In this paper we examine the factors influencing the use, misuse, disuse, and abuse of automation. Two points should be emphasized regarding our terminology. First, we include in our discussion of human use of automation not only human operators of systems but also designers, supervisors, managers, and regulators. This necessarily means that any human error associated with use of automation can include the human operator, the designer, or even management error; examples of each are provided throughout this paper.

Second, in using terms such as misuse, disuse, and abuse, no pejorative intent is implied toward any of these groups. We define misuse as overreliance on automation (e.g., using it when it should not be used, failing to monitor it effectively), disuse as underutilization of automation (e.g., ignoring or turning off automated alarms or safety systems), and abuse as inappropriate application of automation by designers or managers (e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation. He proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes (positive and negative, general and specific) constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most, that is, during the high-workload phase of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low (less than about 50%). A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload, that associated with the decision to use the automation itself, may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended, that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.
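This trust-versus-self-confidence comparison can be summarized as a simple threshold rule. The sketch below is illustrative only; the function name, the numeric scales, and the bias term are our assumptions rather than elements of the published models, with the bias term standing in for the moderating factors (such as risk) discussed next.

def chooses_automation(trust_in_automation, self_confidence, bias=0.0):
    """Threshold rule suggested by Lee and Moray (1994) and Riley (1994a):
    engage automation when trust in it exceeds confidence in one's own
    manual skill. `bias` is a hypothetical offset standing in for
    moderating factors such as perceived risk, fatigue, or overhead."""
    return trust_in_automation > self_confidence + bias

# An operator who trusts the automation (0.8) more than his or her own
# manual skill (0.6) would engage it; a risk-related bias can reverse that.
print(chooses_automation(0.8, 0.6))            # True
print(chooses_automation(0.8, 0.6, bias=0.3))  # False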

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).


[Figure 1. Interactions between factors influencing automation use, including operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, perceived risk, machine accuracy, trust in automation, fatigue, risk, and state learning. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.]

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to "automation bias" (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of "hard-over," uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, "slow-over" rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training.

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation: detection rate of automation failures (%) plotted across 10-minute blocks for the variable-reliability and constant-reliability conditions. Based on data from Parasuraman et al. (1993).]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
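The allocation policies described above can be summarized in a small scheduling rule. The following sketch is purely illustrative; the function, thresholds, and hysteresis scheme are our assumptions rather than the algorithms used in the studies cited.

def allocate_to_automation(currently_automated, workload, minutes_automated,
                           engage_at=0.8, release_at=0.5, max_automated_min=40):
    """Hypothetical adaptive task allocation rule: shed the task to automation
    during peak workload, and return it to manual control when workload falls
    or after a long stretch of automation (to refresh operator involvement).
    Two thresholds (hysteresis) prevent rapid switching near a single cutoff."""
    if currently_automated:
        if workload <= release_at or minutes_automated >= max_automated_min:
            return False   # reallocate the task to the operator
        return True
    return workload >= engage_at   # engage automation only under high workload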

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a "premature cognitive commitment," which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental "picture" in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to "creative disablement" of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
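As an illustration of the likelihood-alarm idea, the brief sketch below maps an estimated hazard probability onto more than two alert states; the number of bands and their boundaries are illustrative assumptions on our part, not values from Sorkin, Kantowitz, and Kantowitz (1988).

def likelihood_alarm_state(p_hazard):
    """Map an estimated probability of the dangerous condition onto graded
    alert states instead of a single binary alarm (likelihood-alarm concept).
    Band edges are hypothetical and would be tuned to the application."""
    if p_hazard < 0.05:
        return "no alert"
    elif p_hazard < 0.25:
        return "hazard possible"
    elif p_hazard < 0.75:
        return "hazard likely"
    else:
        return "hazard very likely"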

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
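The posterior probability in this example follows directly from Bayes' rule, P(S|R) = p(HR) / [p(HR) + (1 - p)(FA)], where p is the base rate, HR the hit rate, and FA the false alarm rate. The short sketch below, written here only for illustration and not drawn from the cited analysis, reproduces the worked example using the hit and false alarm rates quoted above.

def posterior_true_alarm(base_rate, hit_rate=0.999, fa_rate=0.0594):
    """P(hazardous condition | alarm) from Bayes' rule for a warning system
    with the hit and false alarm rates quoted in the text."""
    p_alarm = base_rate * hit_rate + (1.0 - base_rate) * fa_rate
    return (base_rate * hit_rate) / p_alarm

for p in (0.25, 0.05, 0.01, 0.001):
    print(f"base rate {p:6.3f} -> P(true hazard | alarm) = {posterior_true_alarm(p):.3f}")
# At a base rate of .001 the posterior probability is only about .017, that is,
# roughly 1 alarm in 60 signals a real hazard despite the .999 hit rate.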

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected (see the sketch following this list).

3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible ("likelihood" alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.
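
The base-rate point in item 2 and the likelihood-alarm idea in item 3 can be made concrete with a short calculation. The following sketch is purely illustrative: the hit rate, false alarm rate, base rate, and alert-category cutoffs are hypothetical values, not taken from any system or study cited here. It applies Bayes' rule to show why even a sensitive alerting system produces mostly false alarms when the hazardous condition is rare, and how a graded likelihood alarm, in the spirit of Sorkin, Kantowitz, and Kantowitz (1988), could report that probability rather than a binary verdict.

    # Illustrative sketch; all parameter values are hypothetical.

    def posterior_true_alarm(hit_rate, false_alarm_rate, base_rate):
        """P(hazard | alarm) by Bayes' rule."""
        p_alarm = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_alarm

    def likelihood_level(p_hazard):
        """Graded ("likelihood") alert category rather than a binary alarm."""
        if p_hazard >= 0.5:
            return "WARNING: hazard likely"
        if p_hazard >= 0.1:
            return "CAUTION: hazard possible"
        return "ADVISORY: hazard unlikely"

    # A sensitive detector (95% hits, 5% false alarms) applied to a rare event:
    p = posterior_true_alarm(hit_rate=0.95, false_alarm_rate=0.05, base_rate=0.001)
    print(f"P(hazard | alarm) = {p:.3f}")  # about 0.02: roughly 1 alarm in 50 is valid
    print(likelihood_level(p))

With a base rate of 1 in 1,000, the posterior probability of a hazard given an alarm is only about .02 under these assumptions, even though the detector misses only 5% of true hazards; setting the decision threshold without regard to the base rate therefore guarantees a high false alarm rate and, eventually, operator disuse.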

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for, and costs of, designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and of individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.

Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.

Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.

Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.

Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.

Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.

Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.

IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an "aid" can (and should) go unused. Human Factors, 35, 221-242.

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.

Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.

May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.

McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.

Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte Odile à l'Airbus A320 Immatriculé F-GGED Exploité par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.

Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.

Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.

Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.

Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.

Norman, D. (1990). The "problem" with automation: Inappropriate feedback and interaction, not "over-automation." Proceedings of the Royal Society of London, B237, 585-593.

Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced "complacency." International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced "complacency": Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating "direct perception" in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996

need not be conceptually problematic. Moreover, such a transfer will probably not replace, but rather will modify, human thinking. In practice, however, despite more than three decades of research on artificial intelligence, neural networks, and the development, in Schank's (1984) terms, of the "cognitive computer," enduring transfer of thinking skills to machines has proven very difficult.

Automation can be usefully characterized by a continuum of levels rather than as an all-or-none concept (McDaniel, 1988; Riley, 1989; Sheridan, 1980). Under full manual control, a particular function is controlled by the human, with no machine control. At the other extreme of full automation, the machine controls all aspects of the function, including its monitoring, and only its products (not its internal operations) are visible to the human operator.

Different levels of automation can be identified between these extremes. For example, Sheridan (1980) identified 10 such levels of automation; in his seventh level, the automation carries out a function and informs the operator to that effect, but the operator cannot control the output. Riley (1989) defined automation levels as the combination of particular values along two dimensions: intelligence and autonomy. Automation with high autonomy can carry out functions with only initiating input from the operator. At the highest levels, the functions cannot be overridden by the human operator (e.g., the flight envelope protection function of the Airbus 320 aircraft).
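
The difference between a single continuum and Riley's (1989) two-dimensional characterization can be sketched informally as follows. The entries and the 0-to-1 coordinate values in this fragment are hypothetical illustrations rather than a published taxonomy; the third entry paraphrases Sheridan's (1980) seventh level as described above.

    # Illustrative sketch: automation "levels" as combinations of intelligence
    # and autonomy, in the spirit of Riley (1989). All values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AutomationLevel:
        name: str
        intelligence: float        # sophistication of the function performed (0-1)
        autonomy: float            # extent to which the function proceeds with
                                   # only initiating operator input (0-1)
        operator_can_override: bool

    examples = [
        AutomationLevel("full manual control", 0.0, 0.0, True),
        AutomationLevel("autopilot engaged on operator command", 0.4, 0.5, True),
        AutomationLevel("function performed, operator merely informed", 0.6, 0.8, False),
        AutomationLevel("flight envelope protection (A320)", 0.7, 1.0, False),
    ]

    for level in examples:
        print(f"{level.name:45s} intelligence={level.intelligence:.1f} "
              f"autonomy={level.autonomy:.1f} override={level.operator_can_override}")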

Human Roles in Automated Systems

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible. Technical capability and low cost are valid reasons for automation if there is no detrimental impact on human (and hence system) performance in the resulting system. As we discuss later, however, this is not always the case. In the ultimate extension of this practice, automation would completely replace operators in systems. Automation has occasionally had this effect (e.g., in some sectors of the manufacturing industry), but more generally automation has not completely displaced human workers. Although in lay terms it is easiest to think of an automated system as not including a human, most such systems, including unmanned systems such as spacecraft, involve human operators in a supervisory or monitoring role.

One of the considerations preventing the total removal of human operators from such systems has been the common perception that humans are more flexible, adaptable, and creative than automation and thus are better able to respond to changing or unforeseen conditions. In a sense, then, one might consider the levels of automation and operator involvement that are permitted in a system design as reflecting the relative levels of trust in the designer, on one hand, and the operator, on the other. Given that no designer of automation can foresee all possibilities in a complex environment, one approach is to rely on the human operator to exercise his or her experience and judgment in using automation. Usually (but not always) the operator is given override authority and some discretion regarding the use of automation.

This approach, however, tends to define the human operator's roles and responsibilities in terms of the automation (Riley, 1995). Designers tend to automate everything that leads to an economic benefit and leave the operator to manage the resulting system. Several important human factors issues emerge from this approach, including the consequences of inadequate feedback about the automation's actions and intentions (Norman, 1990), awareness and management of automation modes (Sarter & Woods, 1994), underreliance on automation (Sorkin, 1988), and overreliance on automation (Parasuraman, Molloy, & Singh, 1993; Riley, 1994b). An extensive list of human factors concerns associated with cockpit automation was recently compiled by Funk, Lyall, and Riley (1995).

Incidents and Accidents

Unfortunately, the ability to address human performance issues systematically in design and training has lagged behind the application of automation, and issues have come to light as a result of accidents and incidents. The need for better feedback about the automation's state was revealed in a number of "controlled flight into terrain" aircraft accidents in which the crew selected the wrong guidance mode and the indications presented to the crew appeared similar to those shown when the system was tracking the glide slope perfectly (Corwin, Funk, Levitan, & Bloomfield, 1993). The difficulty of managing complex flight guidance modes and maintaining awareness of which mode the aircraft was in was demonstrated by accidents attributed to pilot confusion regarding the current mode (Sarter & Woods, 1994). For example, an Airbus A320 crashed in Strasbourg, France, when the crew apparently confused the vertical speed and flight path angle modes (Ministère de l'Equipement, des Transports et du Tourisme, 1993).

Underreliance on automation was demonstrated in railroad accidents in which crews chose to neglect speed constraints and their associated alerts. Even after one such accident near Baltimore in 1987, inspectors found that train operators were continuing to tape over the buzzers that warned them of speed violations (Sorkin, 1988). Finally, overreliance on automation was a contributing cause in an accident near Columbus, Ohio, in 1994. A pilot who demonstrated low confidence in his own manual control skills and tended to rely heavily on the automatic pilot during nighttime, low-visibility approaches failed to monitor the aircraft's airspeed during final approach in a nighttime snowstorm and crashed short of the runway (National Transportation Safety Board [NTSB], 1994).

Most such accidents result from multiple causes, and it can be difficult to untangle the various contributing factors. Whenever automation is involved in an accident, the issue of how the operator used that automation is of interest, but it may be difficult to say that the operator used the automation too much, too little, or otherwise inappropriately. Often the best one can do is to conclude that, the operator having used the automation in a certain way, certain consequences followed. The lessons learned from these consequences then join the growing body of lessons related to automation design and use. In most cases the operator is not clearly wrong in using or not using the automation. Having determined that the operator must be trusted to apply experience and judgment in unforeseen circumstances, he or she is granted the authority to decide when and how to use it (though management may limit this authority to a greater or lesser extent).

This brings up the question of how operators make decisions to use automation. How do they decide whether or not to use automation? Do they make these decisions rationally or based on nonrational factors? Are automation usage decisions appropriate given the relative performances of operator and automation? When and why do people misuse automation?

Overview

In this paper we examine the factors influencing the use, misuse, disuse, and abuse of automation. Two points should be emphasized regarding our terminology. First, we include in our discussion of human use of automation not only the human operators of systems but also designers, supervisors, managers, and regulators. This necessarily means that any human error associated with the use of automation can be operator error, designer error, or even management error; examples of each are provided throughout this paper.

Second, in using terms such as misuse, disuse, and abuse, no pejorative intent is implied toward any of these groups. We define misuse as overreliance on automation (e.g., using it when it should not be used, failing to monitor it effectively), disuse as underutilization of automation (e.g., ignoring or turning off automated alarms or safety systems), and abuse as inappropriate application of automation by designers or managers (e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation; he proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances, attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes, positive and negative, general and specific, constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most, that is, during the high-workload phases of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low, less than about 50%. A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurements of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload, that associated with the decision to use the automation itself, may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)
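
The trade-off in this example amounts to a simple break-even calculation: the fixed overhead of setting up the automation pays off only when enough material must be converted. The timing values in the following sketch are hypothetical and chosen only for illustration.

    # Illustrative break-even sketch for the OCR example; timings are hypothetical.

    def manual_minutes(pages, typing_min_per_page=8.0):
        return pages * typing_min_per_page

    def ocr_minutes(pages, setup_min=10.0, scan_min_per_page=1.0,
                    correction_min_per_page=3.0):
        return setup_min + pages * (scan_min_per_page + correction_min_per_page)

    for pages in (1, 2, 3, 5):
        choice = "OCR" if ocr_minutes(pages) < manual_minutes(pages) else "manual"
        print(f"{pages} page(s): manual={manual_minutes(pages):4.0f} min, "
              f"OCR={ocr_minutes(pages):4.0f} min -> {choice}")

With these assumed values, manual entry wins for one or two pages and OCR wins from three pages on, which captures the "only if several sheets" intuition; the same logic applies to any automation whose engagement carries a fixed cognitive or procedural overhead.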

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended, that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust in machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.
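
A minimal computational sketch of this allocation rule is given below. It is a toy rendering rather than Lee and Moray's actual model: the update constants are hypothetical, and the only behavior encoded is that trust drops sharply after an automation failure, recovers slowly with good performance, and automation is chosen whenever trust exceeds self-confidence.

    # Toy sketch of the trust-versus-self-confidence allocation rule suggested
    # by Lee and Moray (1992, 1994). Constants and update rules are hypothetical.

    def update_trust(trust, automation_failed, drop=0.4, recovery=0.05):
        """Trust falls sharply after a failure and recovers slowly afterward."""
        if automation_failed:
            return max(0.0, trust - drop)
        return min(1.0, trust + recovery)

    def choose_allocation(trust, self_confidence):
        return "automation" if trust > self_confidence else "manual"

    trust, self_confidence = 0.8, 0.6
    for trial, failed in enumerate([False, False, True, False, False, False], 1):
        trust = update_trust(trust, failed)
        print(f"trial {trial}: trust={trust:.2f} -> "
              f"{choose_allocation(trust, self_confidence)}")

In this toy run, the single failure on trial 3 shifts allocation to manual control for several trials even though the automation performs well afterward, consistent with the slow recovery of trust reported by Lee and Moray (1992).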

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).

[Figure 1: a network diagram of the factors influencing automation use, including operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, perceived risk, risk, state learning, fatigue, and trust in automation.]

Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman and Mouloua (1996) with permission from Lawrence Erlbaum Associates.

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations, or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).

Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to "automation bias" (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increase.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994), and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993). The figure plots detection rate (%) of automation failures for the variable-reliability and constant-reliability conditions across successive 10-minute blocks.]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
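To make the likelihood-alarm idea concrete, the brief sketch below (ours, not taken from Sorkin, Kantowitz, and Kantowitz, 1988; the number of levels, threshold values, and message wording are purely illustrative) shows how an estimated probability of the hazardous condition might be mapped onto several graded alert states rather than a single binary alarm.

    # Illustrative sketch of a likelihood-alarm mapping; the levels and
    # thresholds are hypothetical, not from Sorkin, Kantowitz, & Kantowitz (1988).
    ALERT_LEVELS = [
        (0.80, "WARNING: hazard very likely"),
        (0.40, "CAUTION: hazard likely"),
        (0.10, "ADVISORY: hazard possible"),
        (0.00, "No alert"),
    ]

    def likelihood_alert(p_hazard: float) -> str:
        """Return a graded alert message for an estimated hazard probability."""
        for threshold, message in ALERT_LEVELS:
            if p_hazard >= threshold:
                return message
        return "No alert"

    if __name__ == "__main__":
        for p in (0.95, 0.5, 0.2, 0.02):
            print(f"P(hazard) = {p:.2f} -> {likelihood_alert(p)}")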

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.
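To show how such an operating point arises, the following sketch (ours, not the authors' code) applies the standard equal-variance Gaussian signal detection model, in which the hit and false alarm rates are determined jointly by the sensitivity d' and the placement of the decision criterion. The criterion value of 1.56 is an assumption chosen only because it approximately reproduces the hit rate of .999 and false alarm rate of .0594 quoted above for d' = 4.7.

    # Illustrative equal-variance Gaussian signal detection sketch (not from the
    # original paper). The criterion is expressed in standard deviations above
    # the mean of the noise (no-hazard) distribution.
    from math import erf, sqrt

    def phi(x: float) -> float:
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def hit_and_false_alarm(d_prime: float, criterion: float):
        """Return (hit rate, false alarm rate) for the given sensitivity and criterion."""
        hit_rate = phi(d_prime - criterion)   # P(alarm | hazardous event)
        fa_rate = phi(-criterion)             # P(alarm | no hazardous event)
        return hit_rate, fa_rate

    if __name__ == "__main__":
        d_prime = 4.7
        # Stricter (higher) criteria trade fewer false alarms for more misses.
        for criterion in (1.0, 1.56, 2.0, 3.0):
            hit, fa = hit_and_false_alarm(d_prime, criterion)
            print(f"criterion={criterion:4.2f}  hit rate={hit:.4f}  false alarm rate={fa:.4f}")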

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
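The posterior probability itself follows directly from Bayes' theorem. The short sketch below (ours; the function and variable names are illustrative) reproduces the kind of calculation reported by Parasuraman, Hancock, and Olofinboba (1997): with a hit rate of .999, a false alarm rate of .0594, and a base rate of .001, only about 1 in 60 alarms signals a true hazard, in line with the figure quoted above (small differences reflect rounding of the published rates).

    # Illustrative Bayesian calculation of the posterior probability of a true
    # alarm (names and structure are ours, not taken from the original paper).
    def posterior_true_alarm(hit_rate: float, fa_rate: float, base_rate: float) -> float:
        """P(hazard | alarm) for a warning system with the given hit rate
        P(alarm | hazard), false alarm rate P(alarm | no hazard), and
        a priori probability (base rate) of the hazardous condition."""
        p_alarm = hit_rate * base_rate + fa_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_alarm

    if __name__ == "__main__":
        # Operating point quoted in the text: hit rate .999, false alarm rate .0594.
        for base_rate in (0.25, 0.05, 0.01, 0.001):
            p = posterior_true_alarm(0.999, 0.0594, base_rate)
            print(f"base rate={base_rate:6.3f}  P(hazard | alarm)={p:.4f}  (~1 true alarm in {1.0 / p:.0f})")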

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and Marc trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operate the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES
Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministere de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquete sur l'Accident survenu le 20 Janvier 1992 pres du Mont Sainte Odile a l'Airbus A320 Immatricule F-GGED Exploite par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-9407). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB-MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.
Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.
Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.
Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.
Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.
Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.
Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.
Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge: MIT Press.
Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.
Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.
Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.
Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.
Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.
Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.
Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.
Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

HUMAN USE OF AUTOMATION

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.


Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


cockpit automation was recently compiled by Funk, Lyall, and Riley (1995).

Incidents and Accidents

Unfortunately, the ability to address human performance issues systematically in design and training has lagged behind the application of automation, and issues have come to light as a result of accidents and incidents. The need for better feedback about the automation's state was revealed in a number of controlled-flight-into-terrain aircraft accidents in which the crew selected the wrong guidance mode and the indications presented to the crew appeared similar to when the system was tracking the glide slope perfectly (Corwin, Funk, Levitan, & Bloomfield, 1993). The difficulty of managing complex flight guidance modes and maintaining awareness of which mode the aircraft was in was demonstrated by accidents attributed to pilot confusion regarding the current mode (Sarter & Woods, 1994). For example, an Airbus A320 crashed in Strasbourg, France, when the crew apparently confused the vertical speed and flight path angle modes (Ministère de l'Equipement, des Transports et du Tourisme, 1993).

Underreliance on automation was demonstrated in railroad accidents in which crews chose to neglect speed constraints and their associated alerts. Even after one such accident near Baltimore in 1987, inspectors found that the train operators were continuing to tape over the buzzers that warned them of speed violations (Sorkin, 1988). Finally, overreliance on automation was a contributing cause in an accident near Columbus, Ohio, in 1994. A pilot who demonstrated low confidence in his own manual control skills and tended to rely heavily on the automatic pilot during nighttime, low-visibility approaches failed to monitor the aircraft's airspeed during final approach in a nighttime snowstorm and crashed short of the runway (National Transportation Safety Board [NTSB], 1994).

Most such accidents result from multiple causes, and it can be difficult to untangle the various contributing factors. Whenever automation is involved in an accident, the issue of how the operator used that automation is of interest, but it may be difficult to say that the operator used the automation too much, too little, or otherwise inappropriately. Often the best one can do is to conclude that the operator used the automation in a certain way and that certain consequences followed. The lessons learned from these consequences then join the growing body of lessons related to automation design and use. In most cases the operator is not clearly wrong in using or not using the automation. Having determined that the operator must be trusted to apply experience and judgment in unforeseen circumstances, he or she is granted the authority to decide when and how to use it (though management may limit this authority to a greater or lesser extent).

This brings up the question of how operators make decisions to use automation. How do they decide whether or not to use automation? Do they make these decisions rationally or based on nonrational factors? Are automation usage decisions appropriate given the relative performances of operator and automation? When and why do people misuse automation?

Overview

In this paper we examine the factors influencing the use, misuse, disuse, and abuse of automation. Two points should be emphasized regarding our terminology. First, we include in our discussion of human use of automation not only human operators of systems but also designers, supervisors, managers, and regulators. This necessarily means that any human error associated with use of automation can include the human operator, the designer, or even management error; examples of each are provided throughout this paper.

Second, in using terms such as misuse, disuse, and abuse, no pejorative intent is implied toward any of these groups. We define misuse as overreliance on automation (e.g., using it when it should not be used, failing to monitor it effectively), disuse as underutilization of automation (e.g., ignoring or turning off automated alarms or safety systems), and abuse as inappropriate application of automation by designers or managers (e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation. He proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes, positive and negative, general and specific, constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most, that is, during the high-workload phase of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low, less than about 50%. A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often in actual line performance various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload, that associated with the decision to use the automation itself, may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended: as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.
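
The flavor of this result can be illustrated, in highly simplified form, by weighing the time cost of engaging the automation against the time it saves. The sketch below is not Kirlik's Markov model; the function name and all parameter values are hypothetical and serve only to show how a large engagement overhead can make manual control the better choice.

    # Simplified sketch (not Kirlik's Markov model): compare the expected time cost
    # of doing the task manually with the cost of engaging the automation, given an
    # engagement overhead and a penalty for delaying the secondary task.
    # All parameter values are hypothetical and expressed in seconds.

    def should_engage_automation(engage_time, manual_task_time,
                                 residual_task_time, secondary_delay_cost):
        """Return True if engaging the automation has the lower expected time cost."""
        cost_manual = manual_task_time
        cost_automated = engage_time + residual_task_time + secondary_delay_cost
        return cost_automated < cost_manual

    # With a large engagement overhead, staying manual is the better choice.
    print(should_engage_automation(engage_time=8.0, manual_task_time=10.0,
                                   residual_task_time=1.0, secondary_delay_cost=3.0))
    # False

Under these assumed values, the overhead of engaging the autopilot and the delay imposed on the secondary task outweigh the benefit, which mirrors the kind of condition for which Kirlik found that not using the automation is optimal.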

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation and that they otherwise chose automation.
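
The allocation rule reported by Lee and Moray (1994) can be summarized in a few lines. The sketch below is only an illustration of that rule; the ratings are hypothetical subjective scores, not their empirical measures or model.

    # Minimal sketch of the allocation rule reported by Lee and Moray (1994):
    # manual control is chosen when self-confidence exceeds trust in the automation,
    # and automatic control is chosen otherwise. The ratings are hypothetical
    # subjective scores on an arbitrary scale.

    def choose_control_mode(trust_in_automation, self_confidence):
        """Return 'manual' or 'automatic' according to the trust/self-confidence rule."""
        return "manual" if self_confidence > trust_in_automation else "automatic"

    print(choose_control_mode(trust_in_automation=6.5, self_confidence=4.0))  # automatic
    print(choose_control_mode(trust_in_automation=3.0, self_confidence=7.5))  # manual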

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).


[Figure 1. Interactions between factors influencing automation use, including operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, risk, state learning, and fatigue. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman and Mouloua (1996) with permission from Lawrence Erlbaum Associates.]

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations, or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1974) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1974) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition and yet highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to "automation bias" (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of "hard-over," uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, "slow-over" rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training.

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Detection rate of automation failures (%) is plotted across successive 10-minute blocks for the variable-reliability and constant-reliability conditions. Based on data from Parasuraman et al. (1993).]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
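
As an illustration only, this kind of allocation schedule can be sketched as a simple timing rule. The 40-min and 10-min values come from the study described above; repeating the cycle beyond the first 50 min, and the function itself, are assumptions made for the sketch rather than part of that design.

    # Illustrative sketch of an adaptive allocation schedule: keep the task under
    # automation for a sustained period, return it briefly to manual control to
    # refresh the operator, then hand it back to automation.

    AUTOMATION_PERIOD = 40   # minutes under automation before reallocation
    MANUAL_PERIOD = 10       # minutes of manual control before returning to automation

    def allocation(minute):
        """Return 'automation' or 'manual' for a given elapsed minute of the session."""
        cycle = AUTOMATION_PERIOD + MANUAL_PERIOD
        return "automation" if (minute % cycle) < AUTOMATION_PERIOD else "manual"

    print([allocation(m) for m in (0, 39, 40, 49, 50)])
    # ['automation', 'automation', 'manual', 'manual', 'automation']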

Display techniques that afford "direct perception" of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a "premature cognitive commitment," which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple "agents" who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create "strong but silent" partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental "picture" in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated, and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to "creative disablement" of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
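
A likelihood-alarm display of this kind can be sketched as a mapping from an estimated hazard probability to graded alert levels. The level names and thresholds below are hypothetical illustrations and are not taken from Sorkin, Kantowitz, and Kantowitz (1988).

    # Sketch of a likelihood-alarm display: rather than a single binary alarm, the
    # estimated probability of the hazardous condition is mapped to graded alert
    # levels. The level names and thresholds are hypothetical.

    def likelihood_alarm(p_hazard):
        """Return a graded alert level for an estimated hazard probability."""
        if p_hazard < 0.05:
            return "no alert"
        if p_hazard < 0.25:
            return "advisory"   # hazard possible but unlikely
        if p_hazard < 0.60:
            return "caution"    # hazard likely
        return "warning"        # hazard very likely or certain

    print([likelihood_alarm(p) for p in (0.01, 0.10, 0.40, 0.90)])
    # ['no alert', 'advisory', 'caution', 'warning']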

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
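
The posterior probability plotted in Figure 3 follows directly from Bayes' rule. The sketch below reproduces the worked example above (hit rate = .999, false alarm rate = .0594, base rate = .001); it yields a posterior probability of about .017, or roughly 1 true alarm in 60, close to the 1-in-59 value reported (the small difference reflects rounding in the published figures).

    # Posterior probability of a true alarm, computed by Bayes' rule from the hit
    # rate, false alarm rate, and base rate used in the example above.

    def posterior_true_alarm(hit_rate, false_alarm_rate, base_rate):
        """P(hazardous condition | alarm) for a warning system, by Bayes' rule."""
        p_alarm = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_alarm

    p = posterior_true_alarm(hit_rate=0.999, false_alarm_rate=0.0594, base_rate=0.001)
    print(round(p, 4))     # about 0.0166: only ~1 alarm in 60 signals a true hazard
    print(round(1.0 / p))  # about 60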

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and


potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
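
A hypothetical fragment of such interlock logic shows how the design choice relocates, rather than removes, the source of error; real air/ground logic involves multiple sensors and is far more elaborate than this sketch.

    def ground_spoilers_enabled(weight_on_wheels_sensed: bool) -> bool:
        # Simplified interlock of the kind described above: lift-dump devices
        # are inhibited unless the gear report weight on wheels.
        return weight_on_wheels_sensed

    # Nominal landing: the sensor reads True after touchdown, so the devices
    # are available to the pilot.
    assert ground_spoilers_enabled(True)

    # Failed sensor after touchdown: the interlock blocks deployment exactly
    # when it is needed; the designer's assumption that the sensor is always
    # correct has displaced the pilot's judgment.
    assert not ground_spoilers_enabled(False)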

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered


approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who, by virtue of active involvement in the process, is aware of the environmental conditions the system is responding to and the status of the process being performed, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if this means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual


operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition; a simple sketch of such graded alerting follows.
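
The sketch below shows one way a likelihood alarm might grade its output; the thresholds and wording are illustrative assumptions on our part, not values taken from Sorkin, Kantowitz, and Kantowitz (1988).

    def likelihood_alert(p_hazard: float) -> str:
        """Map an estimated hazard probability to a graded alert level
        rather than a single binary alarm."""
        if p_hazard >= 0.80:
            return "WARNING: hazard likely; respond immediately"
        if p_hazard >= 0.30:
            return "CAUTION: hazard possible; cross-check raw data"
        if p_hazard >= 0.05:
            return "ADVISORY: weak indication; continue monitoring"
        return "no alert"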

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these


differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association (1989) National plan to enhanceaviation safety through human factors improvements Wash-ington DC Author

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett K amp Flach J M (1992) Graphical displays Implica-tions for divided attention focused attention and problemsolving Human Factors 34 513-533

Bessant J Levy P Ley C Smith S amp Tranfield D (1992)Organization design for factory 2000 International Journalof Human Factors in Manufacturing 2 95-125

Billings C E (1991) Human-centered aircraft automation A


concept and guidelines (NASATech Memorandum 103885)Moffett Field CA NASA Ames Research Center

Billings C E amp Woods D D (1994) Concerns about adaptiveautomation in aviation systems In M Mouloua amp R Para-suraman (Eds) Human performance in automated systemsCurrent research and trends (pp 264-269) Hillsdale NJErlbaum

Bowers C A Oser R J Salas E amp Cannon-Bowers J A(1996) Team performance in automated systems In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 243-263)Hillsdale NJ Erlbaum

Byrne E A amp Parasuraman R (1996) Psychophysiology andadaptive automation Biological Psychology 42 249-268

Casey S (1993) Set phasers on stun Santa Barbara CA Ae-gean

Casner S (1994) Understanding the determinants of problem-solving behavior in a complex environment Human Fac-tors 36 580-596

Chambers N amp Nagel D C (1985) Pilots of the future Hu-man or computer Communications of the Association forComputing Machinery 28 1187-1199

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald M (1991) Origins of the modem mind Three stages inthe evolution of culture and cognition Cambridge MA Har-vard University Press

Duley J A Westerman S Molloy R amp Parasuraman R (inpress) Effects of display superimposition on monitoring ofautomated tasks In Proceedings of the 9th InternationalSymposium on Aviation Psychology Columbus Ohio StateUniversity

Edwards E (1977) Automation in civil transport aircraft Ap-plied Ergonomics 4 194-198

Endsley M (1996) Automation and situation awareness In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 163-181)Hillsdale NJ Erlbaum

Erzberger H (1992) CTAS Computer intelligence for air trafficcontrol in the terminal area (NASA Tech Memorandum103959) Moffett Field CA NASA Ames Research Center

Farber E amp Paley M (1993 April) Using freeway traffic datato estimate the effectiveness of rear-end collision countermea-sures Paper presented at the Third Annual IVHS AmericaMeeting Washington DC

Federal Aviation Administration (1990) The national plan foraviation human factors Washington DC Author

Fitts P M (1951) Human engineering for an effective air navi-gation and traffic control system Washington DC NationalResearch Council

Funk K Lyall B amp Riley V (1995) Perceived human factorsproblems of flightdeck automation (Phase I Final Rep) Cor-vallis Oregon State University

Gerwin D amp Leung T K (1986) The organizational impactsof flexible manufacturing systems In T Lupton (Ed) Hu-man factors Man machines and new technology (pp 157-170) New York Springer-Verlag

Getty D J Swets J A Pickett R M amp Gounthier D (1995)System operator response to warnings of danger A labora-tory investigation of the effects of the predictive value of awarning on human response time Journal of ExperimentalPsychology Applied 1 19-33

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar J K amp Hansman R J (1995) A probabilistic meth-odology for the evaluation of alerting system performanceIn Proceedings of the IFACIFlPIFORSIEA SymposiumCambridge MA

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministere de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquete sur l'Accident survenu le 20 Janvier 1992 pres du Mont Sainte-Odile a l'Airbus A320 Immatricule F-GGED Exploite par la Compagnie Air Inter. Paris: Author.


Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board (1997b) Grounding ofthe Panamanian passenger ship Royal Majesty on Rose andCrown shoal near Nantucket Massachusetts June 10 1995(Rep NTSBMAR-97-01) Washington DC Author

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman R (1993) Effects of adaptive function allocationon human performance In D J Garland amp J A Wise(Eds) Human factors and advanced aviation technologies


(pp 147-157) Daytona Beach FL Embry-Riddle Aeronau-tical University Press

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter N (1996) Cockpit automation From quantity to qual-ity from individual pilots to multiple agents In R Para-


suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 267-280) HillsdaleNJ Erlbaum

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317


Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum


Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


(e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation. He proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who


prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes (positive and negative, general and specific) constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most: that is, during the high-workload phase of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there


may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low (less than about 50%). A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel


management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with


deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.
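
This trade-off can be stated as a simple expected-cost comparison. The sketch below is a toy model in arbitrary "workload units"; the function, parameters, and numbers are ours for illustration and are not drawn from Casner (1994).

    def clearance_strategy(program_cost, reprogram_cost, p_path_change,
                           manual_cost_per_minute, minutes_remaining):
        """Compare the expected workload of programming the FMS now with the
        workload of flying the clearance via the glareshield panel or manually."""
        fms_cost = program_cost + p_path_change * reprogram_cost
        hand_fly_cost = manual_cost_per_minute * minutes_remaining
        return "program the FMS" if fms_cost < hand_fly_cost else "fly it by hand"

    # A predictable path favors the up-front programming investment ...
    print(clearance_strategy(5, 5, 0.1, 0.5, 20))   # -> program the FMS
    # ... but a path that will probably change erodes that advantage.
    print(clearance_strategy(5, 5, 0.9, 0.5, 12))   # -> fly it by hand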

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload, that associated with the decision to use the automation itself, may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the


effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended, that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.
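
A toy restatement of that trade-off is sketched below; the parameter names and units are ours, and the rule is far simpler than Kirlik's Markov analysis, but it captures why engaging the automation is not always the optimal choice.

    def should_engage_autopilot(engage_time, secondary_delay_cost,
                                manual_skill_advantage, attention_freed_value):
        """Engage the autopilot only when the attention it frees for the
        secondary task outweighs the overhead of engaging it, the cost of
        delaying the secondary task meanwhile, and the advantage a skilled
        manual controller already has (all in arbitrary cost units)."""
        overhead = engage_time + secondary_delay_cost + manual_skill_advantage
        return attention_freed_value > overhead

    # A highly skilled pilot with a slow-to-engage autopilot may be better off
    # never using it for a brief secondary task.
    print(should_engage_autopilot(3, 2, 4, 6))   # -> False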

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust in machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that


use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.
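
That finding suggests a simple allocation rule, sketched below. The inertia term is our illustrative way of capturing the bias toward the current mode that Lee and Moray (1992) reported; its value, and the function itself, are assumptions made for the sake of the example rather than the authors' model.

    def rely_on_automation(trust, self_confidence, currently_automated,
                           inertia=0.1):
        """Choose automatic control when trust in the automation exceeds
        self-confidence in manual control, with a small bias toward whichever
        mode is currently engaged."""
        bias = -inertia if currently_automated else inertia
        return trust > self_confidence + bias

    # Example: moderate trust does not overcome higher self-confidence.
    print(rely_on_automation(trust=0.6, self_confidence=0.7,
                             currently_automated=False))   # -> False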

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall


complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).


[Figure 1 (diagram omitted): interacting factors in automation use, including operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, fatigue, risk, and state learning.]

Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators

should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. But can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation


without recognizing its limitations or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents: for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there


was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training.

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation; detection rate of automation failures is plotted across successive 10-min blocks. Based on data from Parasuraman et al. (1993).

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
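
The emergent-feature idea can be sketched in a few lines of code. This is only an illustration, not the EMACS implementation; the parameter names, nominal values, and tolerances below are hypothetical.

# Each engine parameter is expressed as a normalized deviation from its
# nominal value, so that normal operation yields a flat line of near-zero
# bars and any off-nominal parameter visibly breaks the line.
# Parameter names, nominal values, and tolerances are hypothetical.

NOMINAL = {"EPR": 1.6, "N1": 92.0, "EGT": 540.0, "FF": 3200.0}
TOLERANCE = {"EPR": 0.2, "N1": 6.0, "EGT": 40.0, "FF": 400.0}

def deviations(readings):
    """Normalized deviation of each parameter from its nominal value."""
    return {name: (readings[name] - NOMINAL[name]) / TOLERANCE[name]
            for name in NOMINAL}

def emergent_feature_intact(readings, limit=1.0):
    """True when every deviation stays within limits (the flat line holds)."""
    return all(abs(d) <= limit for d in deviations(readings).values())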


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
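
As a rough illustration of such a policy (not a model taken from the studies cited here), the following sketch switches a single task between manual and automated control; the workload input, thresholds, and timing parameters are hypothetical.

# Minimal sketch of an adaptive task-allocation policy. The workload input,
# thresholds, and timing parameters are hypothetical placeholders.

AUTOMATION_LIMIT_MIN = 40   # reallocate to manual after this long under automation
MANUAL_PERIOD_MIN = 10      # length of the inserted manual period
HIGH_WORKLOAD = 0.8         # above this, hand the task to automation

def allocate(current_mode, minutes_in_mode, workload):
    """Return 'manual' or 'automated' for the next interval."""
    if current_mode == "automated" and minutes_in_mode >= AUTOMATION_LIMIT_MIN:
        return "manual"        # brief manual period refreshes the operator
    if current_mode == "manual" and workload >= HIGH_WORKLOAD:
        return "automated"     # shed the task during peak workload
    if current_mode == "manual" and minutes_in_mode >= MANUAL_PERIOD_MIN:
        return "automated"     # return the task once the manual interval ends
    return current_mode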

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
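
A likelihood-alarm scheme of this general kind can be sketched as follows; the probability thresholds, level names, and messages are illustrative assumptions, not values from Sorkin et al. (1988).

# Graded "likelihood alarm": map an estimated hazard probability onto several
# alert levels instead of a single binary alarm. Thresholds and wording are
# illustrative only.

ALERT_LEVELS = [
    (0.75, "WARNING: hazard very likely - respond immediately"),
    (0.30, "CAUTION: hazard likely - assess and prepare to respond"),
    (0.05, "ADVISORY: hazard possible - monitor"),
]

def likelihood_alert(p_hazard):
    """Return the highest alert level whose threshold is met, or None."""
    for threshold, message in ALERT_LEVELS:
        if p_hazard >= threshold:
            return message
    return None  # below all thresholds: no annunciation

print(likelihood_alert(0.40))  # -> CAUTION-level message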

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d′). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d′ = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
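
The calculation behind these figures is a direct application of Bayes' rule, sketched below; the inputs are the values used in the example above, and small differences from the published figure reflect rounding.

# Posterior probability that an alarm indicates a true hazard (Bayes' rule).

def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
    """P(hazard | alarm) for a warning system with the given hit rate,
    false alarm rate, and a priori hazard probability (base rate)."""
    p_alarm = base_rate * hit_rate + (1.0 - base_rate) * false_alarm_rate
    return (base_rate * hit_rate) / p_alarm

# Sensitive system: hit rate .999, false alarm rate .0594, base rate .001
print(posterior_true_alarm(0.001, 0.999, 0.0594))  # ~0.017, i.e., roughly 1 true alarm in 60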

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d′ = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
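
One standard way to make this trade-off explicit, drawn from signal detection theory rather than from this paper, is to compute the likelihood-ratio criterion that minimizes expected cost; it shows how a very low base rate can offset even a large miss-to-false-alarm cost ratio. The cost values below are illustrative.

# Expected-cost-minimizing likelihood-ratio criterion from signal detection
# theory (benefits of correct outcomes taken as zero for simplicity).
# Cost values are illustrative.

def optimal_beta(base_rate, cost_false_alarm, cost_miss):
    """Criterion grows as hazards become rarer or as false alarms become costlier."""
    prior_odds_against = (1.0 - base_rate) / base_rate
    return prior_odds_against * (cost_false_alarm / cost_miss)

# Rare hazard with misses 1,000 times costlier than false alarms:
print(optimal_beta(base_rate=0.001, cost_false_alarm=1.0, cost_miss=1000.0))  # ~1.0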

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of the automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (RP) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (VR). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase 1 Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.
Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-Warsaw. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.
Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Area Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.
Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I 1 Deaton J E amp Parasuraman R (1993 October)Development of a scale to measure pilot attitudes to cockpitautomation Paper presented at the Annual Meeting of theHuman Factors Society Seattle WA

Singh I 1 Molloy R amp Parasuraman R (l993a) Automa-tion-induced complacency Development of the compla-cency-potential rating scale International Journal of Avia-tion Psychology 3 111-121

Singh I 1 Molloy R amp Parasuraman R (l993b) Individualdifferences in monitoring failures of automation Journal ofGeneral Psychology 120 357-373

Singh I 1 Molloy R amp Parasuraman R (1997) Automation-related monitoring inefficiency The role of display loca-tion International Journal of Human Computer Studies 4617-30

Smith M J amp Carayon P (1995) New technology automa-tion and work organization International Journal of Hu-man Factors in ManufacturingS 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317

HUMAN USE OF AUTOMATION

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996. Date accepted: November 11, 1996.

may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons, it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low (less than about 50%). A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload (that associated with the decision to use the automation itself) may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended, that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.
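The flavor of this trade-off can be shown with a minimal sketch in Python. It is only an illustration of the kind of engagement-overhead comparison at issue, not Kirlik's Markov model; the function names and all durations are hypothetical.

    def time_with_autopilot(engage_time_s, secondary_delay_s, automated_segment_s):
        # Engaging the autopilot costs setup time and postpones the secondary task.
        return engage_time_s + secondary_delay_s + automated_segment_s

    def time_manual(manual_segment_s):
        return manual_segment_s

    # With a 20-s engagement cost, a 15-s delay to the secondary task, and a
    # 60-s automated segment, staying manual at 80 s is the better choice.
    print(time_with_autopilot(20, 15, 60))  # 95
    print(time_manual(80))                  # 80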

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.
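The trust versus self-confidence relationship can be stated as a simple decision rule; the Python sketch below is a bare-bones illustration of that rule, assuming notional ratings on a 0-1 scale. The function and parameter names are ours, and the moderating factors discussed next (such as risk and fatigue) are not modeled.

    def chooses_automation(trust_in_automation, self_confidence):
        """Engage automation only when trust exceeds self-confidence in
        manual control (after the pattern reported by Lee & Moray, 1994)."""
        return trust_in_automation > self_confidence

    print(chooses_automation(0.8, 0.6))  # True: automation is engaged
    print(chooses_automation(0.5, 0.7))  # False: operator retains manual control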

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).

Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates. (The factors depicted include operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, fatigue, risk, and state learning.)

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).

Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.

Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993). (The figure plots detection rate of automation failures, in percent, across successive 10-minute blocks for the variable-reliability and constant-reliability conditions.)

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.

Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
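As a rough sketch of such an adaptive allocation policy, the Python fragment below sheds a task to automation when workload is high and periodically returns it to manual control to refresh the operator. The workload threshold and the 40-min/10-min cycle echo the study described next, but all names and values here are illustrative assumptions, not a validated design.

    HIGH_WORKLOAD = 0.8        # normalized workload above which the task is shed
    MAX_AUTOMATION_MIN = 40    # after this long on automation, give the task back
    MANUAL_REFRESH_MIN = 10    # length of the manual refresher period

    def allocate_to_automation(workload, min_on_automation, min_on_manual, automated_now):
        """Return True if the task should be automated during the next interval."""
        if automated_now and min_on_automation >= MAX_AUTOMATION_MIN:
            return False                    # reallocate to the operator for a while
        if not automated_now and min_on_manual < MANUAL_REFRESH_MIN:
            return False                    # let the manual refresher run its course
        return workload >= HIGH_WORKLOAD    # otherwise shed the task only under high load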

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
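A likelihood-alarm display of this kind can be sketched as a simple mapping from an estimated hazard probability to graded alert levels, as in the Python fragment below. The thresholds and wording are illustrative assumptions of ours, not values taken from Sorkin, Kantowitz, and Kantowitz (1988).

    def likelihood_alarm(p_hazard):
        """Map an estimated hazard probability (0-1) to a graded alert level."""
        if p_hazard < 0.05:
            return "no alert"
        if p_hazard < 0.25:
            return "advisory: hazard possible"
        if p_hazard < 0.60:
            return "caution: hazard likely"
        return "warning: hazard very probable"

    print(likelihood_alarm(0.10))  # "advisory: hazard possible"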

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.

Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d′). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d′ = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
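The arithmetic behind these posterior probabilities follows directly from Bayes' rule, as the short Python sketch below shows. It uses the rounded hit rate, false alarm rate, and base rate quoted above; the result (about .017, or roughly 1 true alarm in every 60) differs slightly from the published .0168 only because of that rounding.

    def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
        """P(hazard | alarm) computed with Bayes' rule."""
        p_alarm = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_alarm

    p = posterior_true_alarm(base_rate=0.001, hit_rate=0.999, false_alarm_rate=0.0594)
    print(round(p, 4))   # ~0.0166: only about 1 alarm in 60 signals a true hazard
    print(round(1 / p))  # ~60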

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d′ = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager, in turn, does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
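The interlock at issue can be illustrated with a few lines of Python. This is a simplified sketch of our own, not actual avionics logic: deployment of the stopping devices requires both the pilot's command and a weight-on-wheels indication, so a failed sensor that keeps reporting "in air" after touchdown blocks the devices exactly when they are needed.

    def deployment_permitted(pilot_commands_deployment, weight_on_wheels):
        """Ground spoilers and thrust reversers may deploy only on the ground."""
        return pilot_commands_deployment and weight_on_wheels

    print(deployment_permitted(True, True))   # True: normal landing, devices deploy
    print(deployment_permitted(True, False))  # False: failed sensor refuses the command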

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of the automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.

CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.
Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte Odile à l'Airbus A320 Immatriculé F-GGED Exploité par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-9407). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.
Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.
Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.
Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.
Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.
Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.
Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.
Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.
Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge: MIT Press.
Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.
Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.
Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.
Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.
Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.
Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies--A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.
Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.
Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.
Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.
Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.
Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.
Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.
Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.
Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.
Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.
Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented in the flight deck; this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload--that associated with the decision to use the automation itself--may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive overhead involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear--for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended--that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present, but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.
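To make the overhead trade-off concrete, the short Python sketch below compares the cost of engaging an autopilot (engagement time plus the penalty of deferring the secondary task) with the cost of continuing under manual control. It is only a toy illustration in the spirit of this analysis: the cost structure and all numerical values are hypothetical and are not taken from Kirlik's (1993) Markov model.

    # Toy expected-cost comparison for an automation-use decision (hypothetical
    # values; much simpler than Kirlik's 1993 Markov analysis).

    def strategy_cost(use_automation, engage_time, secondary_delay, manual_cost):
        """Total time/attention cost (arbitrary units) of one strategy."""
        if use_automation:
            # Overhead of engaging the autopilot plus the cost of postponing
            # the secondary task while doing so.
            return engage_time + secondary_delay
        # Ongoing cost of flying the segment manually.
        return manual_cost

    # Illustrative scenarios only:
    scenarios = {
        "low overhead": dict(engage_time=1.0, secondary_delay=0.5, manual_cost=5.0),
        "high overhead": dict(engage_time=4.0, secondary_delay=3.0, manual_cost=5.0),
    }

    for name, s in scenarios.items():
        auto, manual = strategy_cost(True, **s), strategy_cost(False, **s)
        best = "engage automation" if auto < manual else "stay manual"
        print(f"{name}: automation={auto}, manual={manual} -> {best}")

When the engagement overhead is small, the automation wins; when it is large relative to the cost of staying manual, not using the automation is the cheaper strategy, which is the qualitative point of the analysis.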

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation and that they otherwise chose automation.
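A minimal sketch of that reliance rule is given below; the common 0-10 rating scale and the example values are assumptions for illustration, not data or a model from Lee and Moray (1994).

    # Reliance rule reported by Lee and Moray (1994): manual control is chosen
    # when self-confidence exceeds trust in the automation, automation otherwise.
    # The 0-10 subjective scale and example values are illustrative assumptions.

    def predicted_mode(trust_in_automation: float, self_confidence: float) -> str:
        return "manual" if self_confidence > trust_in_automation else "automation"

    print(predicted_mode(trust_in_automation=7.5, self_confidence=5.0))  # automation
    print(predicted_mode(trust_in_automation=4.0, self_confidence=6.5))  # manual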

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).


[Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Factors shown include operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, perceived machine accuracy, trust in automation, perceived risk, fatigue, risk, and state learning. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.]

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents--for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in or overreliance on automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered--virtually every minute for these continuous, 24-h systems--then the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increase.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation: percentage detection of automation failures plotted across 10-minute blocks for the variable-reliability and constant-reliability conditions. Based on data from Parasuraman et al. (1993).]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990)--the Engine Monitoring and Crew Alerting System (EMACS)--in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
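The sketch below illustrates this kind of allocation schedule: long blocks of automation interleaved with brief returns to manual control. The 40-min and 10-min durations follow the study just described, but the scheduling function itself is only an illustrative reading of adaptive task allocation, not the procedure used in those experiments.

    # Illustrative adaptive allocation schedule: long automation blocks with
    # brief manual "refresher" blocks. Durations echo the 40-min/10-min design
    # described above; the scheduling logic itself is only a sketch.

    def allocation_schedule(total_minutes, automation_block=40, manual_block=10):
        """Yield (start_minute, end_minute, mode) segments for one session."""
        t = 0
        while t < total_minutes:
            auto_end = min(t + automation_block, total_minutes)
            yield t, auto_end, "automation"
            t = auto_end
            if t < total_minutes:
                manual_end = min(t + manual_block, total_minutes)
                yield t, manual_end, "manual"
                t = manual_end

    for start, end, mode in allocation_schedule(100):
        print(f"{start:3d}-{end:3d} min: {mode}")

A more genuinely adaptive system would trigger the manual block from measured workload or monitoring performance rather than from a fixed clock, which is one of the open design questions noted above.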

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
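To make the likelihood-alarm idea concrete, the following minimal sketch (in Python) maps an estimated probability of the hazardous condition onto several graded alert states. The thresholds, labels, and function name are our own illustrative assumptions, not values or terminology taken from Sorkin, Kantowitz, and Kantowitz (1988).

```python
# Illustrative sketch of a likelihood-alarm policy: rather than a single binary
# alert, the estimated probability of the hazardous condition selects one of
# several graded alert states. Thresholds and labels are assumptions chosen
# only for illustration.

def likelihood_alarm_state(p_hazard: float) -> str:
    """Map an estimated hazard probability onto a graded alert state."""
    if p_hazard < 0.05:
        return "NO ALERT"
    if p_hazard < 0.25:
        return "ADVISORY: hazard possible"
    if p_hazard < 0.60:
        return "CAUTION: hazard likely"
    return "WARNING: hazard very likely"

if __name__ == "__main__":
    for p in (0.02, 0.15, 0.45, 0.80):
        print(f"p = {p:.2f} -> {likelihood_alarm_state(p)}")
```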

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.
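As a rough illustration of how such hit and false alarm rates follow from d' and the placement of the criterion, the sketch below assumes the standard equal-variance Gaussian signal detection model. The specific criterion value (about 1.56 standard deviations above the noise mean) is our own choice to approximate the rates quoted above, not a figure taken from the original analysis.

```python
# Minimal sketch of the equal-variance Gaussian signal detection model: the hit
# and false alarm rates are the areas of the signal and noise distributions
# lying above the decision criterion.
from scipy.stats import norm

def hit_and_fa_rates(d_prime: float, criterion: float) -> tuple[float, float]:
    """Hit and false alarm rates for a criterion placed `criterion` standard
    deviations above the noise mean."""
    hit_rate = norm.sf(criterion - d_prime)  # P(evidence > criterion | signal)
    fa_rate = norm.sf(criterion)             # P(evidence > criterion | noise)
    return hit_rate, fa_rate

if __name__ == "__main__":
    # With d' = 4.7 and a criterion near 1.56, this approximately reproduces the
    # example in the text: hit rate ~ .999, false alarm rate ~ .059.
    hit, fa = hit_and_fa_rates(4.7, 1.56)
    print(f"hit rate = {hit:.4f}, false alarm rate = {fa:.4f}")
```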

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
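The posterior probability itself follows directly from Bayes' rule. The short sketch below is our own illustration using the hit rate, false alarm rate, and base rate quoted above; the small difference from the published value of .0168 reflects rounding of the published rates.

```python
# Bayes' rule for a binary alerting system: the probability that an alarm is
# "true" combines the hit rate, the false alarm rate, and the base rate of the
# hazardous condition. Values below are the example figures quoted in the text.

def posterior_true_alarm(hit_rate: float, fa_rate: float, base_rate: float) -> float:
    """P(hazard | alarm) for a binary alerting system."""
    p_alarm = hit_rate * base_rate + fa_rate * (1.0 - base_rate)
    return hit_rate * base_rate / p_alarm

if __name__ == "__main__":
    p = posterior_true_alarm(hit_rate=0.999, fa_rate=0.0594, base_rate=0.001)
    print(f"posterior probability of a true alarm = {p:.4f}")  # ~.017, about 1 alarm in 60
```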

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
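A simple expected-cost comparison makes the structure of this trade-off explicit; the sketch below is our own illustration, and the costs and rates are invented for the example. Note that the hidden cost of lost operator trust enters only through the value assigned to a false alarm.

```python
# Expected cost of a criterion setting, per encounter: misses are weighted by
# the base rate, false alarms by its complement. All numbers are invented for
# illustration; the point is the structure of the trade-off, not the values.

def expected_cost(hit_rate: float, fa_rate: float, base_rate: float,
                  miss_cost: float, fa_cost: float) -> float:
    p_miss = base_rate * (1.0 - hit_rate)
    p_false_alarm = (1.0 - base_rate) * fa_rate
    return p_miss * miss_cost + p_false_alarm * fa_cost

if __name__ == "__main__":
    # A lenient criterion (few misses, many false alarms) versus a strict one.
    lenient = expected_cost(0.999, 0.06, 0.001, miss_cost=1_000_000, fa_cost=100)
    strict = expected_cost(0.950, 0.001, 0.001, miss_cost=1_000_000, fa_cost=100)
    print(f"lenient: {lenient:.1f}, strict: {strict:.1f}")
    # If fa_cost also reflected the downstream cost of operator mistrust and
    # disuse, the comparison could shift; omitting it understates false alarms.
```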

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden: if high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers may determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer. Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.

Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.

Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.

Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.

Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.

Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.

Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.

IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.

Main Commission Aircraft Accident Investigation - Warsaw. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.

May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha, A., & James, M. (1994). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.

McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.

Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte Odile à l'Airbus A320 Immatriculé F-GGED Exploité par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.

Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.

Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.

Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB-MAR-97-01). Washington, DC: Author.

Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.

Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.

Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies - A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology (glass cockpit) transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed inertia in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant: complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise.
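Stated as a rule, this comparison is straightforward. The sketch below is our own schematic rendering of it, not the authors' model; the optional risk adjustment is only a placeholder for the kind of moderation discussed next, and its form is an assumption.

```python
# Schematic rendering of the allocation rule described in the text: engage
# automation when trust in it exceeds self-confidence in manual control.
# The optional risk_bias term stands in for moderating factors such as
# perceived risk; its form and weight are assumptions, not a published model.

def engage_automation(trust: float, self_confidence: float, risk_bias: float = 0.0) -> bool:
    """Return True if this simple rule predicts the operator engages automation."""
    return trust > self_confidence + risk_bias

if __name__ == "__main__":
    print(engage_automation(trust=0.8, self_confidence=0.6))  # True: automation engaged
    print(engage_automation(trust=0.5, self_confidence=0.7))  # False: manual control retained
```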

Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).

Figure 1. Interactions between factors influencing automation use. Among the factors shown are operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, fatigue, risk, and state learning. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. But can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations, or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition and yet highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance: May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993). (The figure plots detection rate of automation failures, in percent, across successive 10-min blocks, with separate curves for the variable-reliability and constant-reliability conditions.)

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can also fail, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
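To make the logic of such a scheme concrete, the sketch below shows one simple way an adaptive allocation policy could be expressed. It is only an illustration of the idea just described, not an algorithm from the studies cited; the workload thresholds, the workload estimate, and the function name are hypothetical.

    # Illustrative sketch of an adaptive task-allocation rule. Thresholds are
    # hypothetical; in a real system the workload estimate would come from
    # task measures or psychophysiological indices.
    HIGH_WORKLOAD = 0.75   # above this, pass the monitored task to automation
    LOW_WORKLOAD = 0.40    # below this, return the task to the operator

    def allocate(current_controller, estimated_workload):
        """Return 'automation' or 'operator' for the next work period."""
        if current_controller == "operator" and estimated_workload > HIGH_WORKLOAD:
            return "automation"      # relieve the operator during peak workload
        if current_controller == "automation" and estimated_workload < LOW_WORKLOAD:
            return "operator"        # a brief manual period refreshes skills and awareness
        return current_controller    # otherwise leave the allocation unchanged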

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.
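The trade-off just described can be made explicit with the standard signal detection theory expression for the expected-cost-minimizing criterion. The sketch below states that textbook result; it is not a formula taken from the studies discussed here, and the cost values used in the usage line are purely illustrative.

    def optimal_beta(p_signal, cost_false_alarm, cost_miss,
                     value_hit=0.0, value_correct_rejection=0.0):
        """Standard signal-detection-theory optimal likelihood-ratio criterion:
        beta* = [P(noise) / P(signal)] * [(V_cr + C_fa) / (V_hit + C_miss)]."""
        prior_ratio = (1.0 - p_signal) / p_signal
        payoff_ratio = (value_correct_rejection + cost_false_alarm) / (value_hit + cost_miss)
        return prior_ratio * payoff_ratio

    # Illustrative values: even if a miss is judged 1,000 times as costly as a
    # false alarm, a hazard with base rate .001 still yields beta* near 1.0,
    # showing how strongly a low base rate pulls against a lenient criterion.
    print(optimal_beta(p_signal=0.001, cost_false_alarm=1.0, cost_miss=1000.0))  # 0.999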

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
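A minimal sketch of the likelihood-alarm idea follows. The level names and probability cut-points are our own illustrative assumptions, not values from Sorkin, Kantowitz, and Kantowitz (1988); the point is simply that the display grades the alert by the estimated probability of the hazardous condition rather than issuing a binary alarm.

    # Hypothetical graded alerting levels; the cut-points are illustrative only.
    def likelihood_alarm(p_hazard):
        """Map an estimated hazard probability to a graded alert level."""
        if p_hazard >= 0.80:
            return "WARNING: hazard very likely"
        if p_hazard >= 0.30:
            return "CAUTION: hazard possible"
        if p_hazard >= 0.05:
            return "ADVISORY: hazard unlikely, continue monitoring"
        return "no alert"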

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
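The figures quoted above follow directly from Bayes' rule. The short computation below is our reconstruction using the hit rate, false alarm rate, and base rate given in the text; it yields approximately the same posterior probability (the small difference from the quoted .0168 reflects rounding of the quoted rates).

    def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
        """P(hazard | alarm) for a warning system, by Bayes' rule."""
        p_alarm = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_alarm

    # Hit rate .999, false alarm rate .0594, base rate .001 (values from the text):
    p = posterior_true_alarm(base_rate=0.001, hit_rate=0.999, false_alarm_rate=0.0594)
    print(round(p, 4))  # ~0.0166, i.e., roughly 1 true alarm in every 60 alarms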

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability, P(S|R), of a hazardous condition S given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis. (The figure shows a family of curves, one for each setting of the decision criterion, with posterior probability rising toward 1.0 only at sufficiently high base rates.)

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and their implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of the automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more from an operator who, by virtue of active involvement in the process, is aware of the environmental conditions the system is responding to and the status of the process being performed than from an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.

Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.

Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.

Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.

Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.

Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.

Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.

IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.

Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.

May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha, A., & James, M. (1994). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.

McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.

Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte Odile à l'Airbus A320 Immatriculé F-GGED Exploité par la Compagnie Air Inter. Paris: Author.


Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.

Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.

Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.

Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-9407). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.

Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.

Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.

Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman and Mouloua (1996) with permission from Lawrence Erlbaum Associates. (Factors linked in the figure include operator accuracy, system accuracy, workload, skill, task complexity, perceived workload, machine accuracy, trust in automation, perceived risk, fatigue, risk state, and learning.)

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. But can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or to fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents. In the crash of Eastern Flight 401 in the Florida Everglades, for instance, the crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in, or overreliance on, automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition and yet highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to "automation bias" (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of "hard-over," uncommanded banks (up to a maximum of 60 degrees) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, "slow-over" rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (>95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. (Axes: detection rate of automation failures, in percent, plotted over successive 10-minute blocks for the variable-reliability and constant-reliability conditions.) Based on data from Parasuraman et al. (1993).

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can also fail, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
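To make the emergent-feature idea concrete, the following Python sketch mimics the logic of such a deviation bar graph. The parameter names, nominal values, and tolerance band are invented for illustration and are not taken from Abbott (1990) or the Molloy and Parasuraman studies; the point is only that under normal operation all columns sit on a common baseline, and any out-of-tolerance parameter visibly breaks that line.

    # Illustrative sketch of an EMACS-like deviation display (hypothetical values).
    NOMINAL = {"EPR": 1.4, "N1": 92.0, "EGT": 520.0, "FF": 2700.0}  # assumed nominal readings
    TOLERANCE = 0.10  # assumed +/-10% band treated as "normal"

    def deviations(readings):
        """Return each parameter's proportional deviation from its nominal value."""
        return {p: (readings[p] - NOMINAL[p]) / NOMINAL[p] for p in NOMINAL}

    def render_bar_graph(readings, width=20):
        """Print columns as offsets from a common baseline (the emergent feature)."""
        for param, dev in deviations(readings).items():
            offset = int(round(dev * width))
            marker = "|" + ("#" * abs(offset))
            side = "above" if offset > 0 else "below" if offset < 0 else "on"
            print(f"{param:>4} {marker:<{width}} ({dev:+.1%} {side} baseline)")

    def malfunction_indicated(readings):
        """A broken baseline (any parameter outside tolerance) indexes a malfunction."""
        return any(abs(d) > TOLERANCE for d in deviations(readings).values())

    # Example: a high EGT reading breaks the otherwise flat baseline.
    sample = {"EPR": 1.41, "N1": 91.5, "EGT": 610.0, "FF": 2710.0}
    render_bar_graph(sample)
    print("Malfunction indicated:", malfunction_indicated(sample))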


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
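The allocation schedule used in these studies can be illustrated with a minimal sketch. The 40-min and 10-min block durations follow Parasuraman, Mouloua, and Molloy (1996), but the time-based scheduling function below is an assumed simplification for illustration, not the authors' software; a genuinely adaptive system would more likely trigger reallocation on measured workload or monitoring performance rather than on elapsed time alone.

    # Simplified sketch of adaptive task allocation by elapsed time (assumed logic).
    AUTOMATION_BLOCK_MIN = 40   # long period under automation (per the 1996 study)
    MANUAL_BLOCK_MIN = 10       # brief manual reallocation intended to refresh the operator

    def allocation_at(minute):
        """Return 'automation' or 'manual' for a given minute into the session."""
        cycle = AUTOMATION_BLOCK_MIN + MANUAL_BLOCK_MIN
        return "automation" if (minute % cycle) < AUTOMATION_BLOCK_MIN else "manual"

    # A 100-min session: automation for 0-39, manual 40-49, automation 50-89, manual 90-99.
    schedule = [(t, allocation_at(t)) for t in range(100)]
    changes = [(t, mode) for t, mode in schedule if t == 0 or mode != schedule[t - 1][1]]
    print(changes)  # [(0, 'automation'), (40, 'manual'), (50, 'automation'), (90, 'manual')]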

Display techniques that afford "direct perception" of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a "premature cognitive commitment," which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple "agents" who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create "strong but silent" partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental "picture" in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated, and methods should be developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to "creative disablement" of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
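A likelihood-alarm display can be sketched as a mapping from an estimated probability of the hazardous condition to graded alert levels rather than a single binary alarm. The thresholds and level labels in the following Python sketch are illustrative assumptions, not values from Sorkin, Kantowitz, and Kantowitz (1988).

    # Illustrative likelihood-alarm mapping (thresholds and labels are assumptions).
    def likelihood_alarm(p_hazard):
        """Map an estimated hazard probability to a graded alert rather than a binary alarm."""
        if p_hazard >= 0.8:
            return "WARNING: collision very likely"
        if p_hazard >= 0.4:
            return "CAUTION: collision possible"
        if p_hazard >= 0.1:
            return "ADVISORY: conditions developing"
        return "no alert"

    for p in (0.03, 0.2, 0.55, 0.9):
        print(f"p = {p:.2f} -> {likelihood_alarm(p)}")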

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (beta) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, beta can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
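The posterior probability in this example follows directly from Bayes' rule applied to the published figures (hit rate = .999, false alarm rate = .0594, base rate = .001); the worked equations below add only the arithmetic, together with the standard signal detection identity relating these rates to d'.

    P(S \mid R) = \frac{P(S)\,P(R \mid S)}{P(S)\,P(R \mid S) + P(\bar{S})\,P(R \mid \bar{S})}
                = \frac{(.001)(.999)}{(.001)(.999) + (.999)(.0594)} \approx .0168

    d' = z(\text{hit rate}) - z(\text{false alarm rate}) \approx z(.999) - z(.0594) = 3.09 - (-1.56) \approx 4.7

That is, roughly 1 alarm in 59 signals a true hazard, even though the detector itself is highly sensitive.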

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers may determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager, in turn, does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
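A minimal sketch of this kind of interlock is given below in Python; the function and signal names are invented for illustration and do not describe any actual avionics implementation. Its point is that the designer's assumption (the sensor, not the pilot, decides whether the aircraft is on the ground) is hard-coded, so a failed sensor withholds the stopping devices exactly when they are needed.

    # Illustrative weight-on-wheels interlock (names and logic are assumptions, not real avionics).
    def ground_spoilers_permitted(weight_on_wheels: bool, pilot_commands_deploy: bool) -> bool:
        """Designer's rule: deployment is allowed only if the sensor says the gear carry weight."""
        return pilot_commands_deploy and weight_on_wheels

    # Normal landing: sensor reads True, so the pilot's command is honored.
    print(ground_spoilers_permitted(weight_on_wheels=True, pilot_commands_deploy=True))   # True

    # Failed sensor on the runway: the aircraft is on the ground, but the interlock
    # still blocks deployment, because the sensor, not the pilot, is trusted.
    print(ground_spoilers_permitted(weight_on_wheels=False, pilot_commands_deploy=True))  # False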

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for, and costs of, designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and of individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an "aid" can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium, Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministere de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquete sur l'Accident survenu le 20 Janvier 1992 pres du Mont Sainte Odile a l'Airbus A320 Immatricule F-GGED Exploite par la Compagnie Air Inter. Paris: Author.


Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-9407). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The "problem" with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio Technical Commission for Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.


Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology (glass cockpit) transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.


Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996. Date accepted: November 11, 1996.


without recognizing its limitations or fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in or overreliance on automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).


Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition and yet highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.
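A brief worked example (with illustrative numbers not drawn from the cited studies) shows how representativeness can mislead when the base rate is low. Suppose a symptom E occurs in 90% of cases in which condition C holds and in 10% of cases in which it does not, but C occurs with a base rate of only 1%. By Bayes' rule,

\[
P(C \mid E) = \frac{P(E \mid C)\,P(C)}{P(E \mid C)\,P(C) + P(E \mid \neg C)\,P(\neg C)}
            = \frac{.90 \times .01}{.90 \times .01 + .10 \times .99} \approx .08 .
\]

Although E is highly representative of C, the condition remains improbable given the evidence; a judgment based on representativeness alone would overestimate P(C | E) by roughly an order of magnitude.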

Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of the ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Detection rate of automation failures is plotted across successive 10-minute blocks for the variable-reliability and constant-reliability conditions. Based on data from Parasuraman et al. (1993).]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
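The emergent-feature idea can be illustrated with a minimal sketch: each parameter is expressed as a deviation from its nominal value, so that under normal operation the bars form a flat line at zero and any break in the line stands out. The parameter names, nominal values, tolerance, and rendering below are hypothetical illustrations, not details of Abbott's (1990) EMACS display.

```python
# Illustrative sketch of a deviation-bar display in the spirit of EMACS:
# each engine parameter is shown as a deviation from its nominal value, so
# the bars form a flat line (the emergent feature) when all is normal.
# Parameter names, nominal values, and the 5% tolerance are hypothetical.

NOMINAL = {"N1": 92.0, "EGT": 650.0, "fuel_flow": 2800.0, "oil_press": 45.0}
TOLERANCE = 0.05  # fractional deviation beyond which the flat line is broken

def deviations(readings):
    """Return each parameter's fractional deviation from its nominal value."""
    return {name: (readings[name] - nominal) / nominal
            for name, nominal in NOMINAL.items()}

def draw_bars(devs, width=20):
    """Render deviations as a crude text bar graph around a zero line."""
    for name, d in devs.items():
        offset = int(round(d * width))
        left_pad = width + min(offset, 0)          # where the bar starts
        bar = " " * left_pad + "#" * abs(offset)   # bar extends from the zero line
        flag = "  <-- off nominal" if abs(d) > TOLERANCE else ""
        print(f"{name:>10} |{bar.ljust(2 * width)}|{flag}")

draw_bars(deviations(
    {"N1": 92.5, "EGT": 710.0, "fuel_flow": 2790.0, "oil_press": 44.0}))
```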


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
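A time-based allocation policy of the kind tested in these studies can be sketched as follows. The 40-min automated and 10-min manual periods mirror the schedule described above; the scheduler itself is a hypothetical illustration, not the experimental software used in those studies.

```python
# Minimal sketch of time-based adaptive task allocation: after a long period
# under automation, the monitored task is briefly returned to manual control
# to refresh the operator's involvement, then handed back to the automation.

AUTOMATED_MINUTES = 40   # duration of each automated period (as in the text)
MANUAL_MINUTES = 10      # duration of each interpolated manual period

def allocation_at(minute):
    """Return 'automation' or 'manual' for a given minute into the session."""
    phase = minute % (AUTOMATED_MINUTES + MANUAL_MINUTES)
    return "automation" if phase < AUTOMATED_MINUTES else "manual"

# Example: allocation over a 2-h session, sampled every 10 min.
for t in range(0, 121, 10):
    print(f"t = {t:3d} min: task under {allocation_at(t)} control")
```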

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated, and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
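The likelihood-alarm concept can be sketched as a simple mapping from an estimated hazard probability to a small set of graded alert levels. The sketch below is a hypothetical illustration; the thresholds and messages are placeholders, not values from Sorkin, Kantowitz, and Kantowitz (1988).

```python
# Minimal sketch of a likelihood-alarm display: rather than a single binary
# alarm, the system reports one of several graded alert levels depending on
# the estimated probability of the hazardous condition.
# Threshold values and messages are arbitrary placeholders.

ALERT_LEVELS = [
    (0.80, "WARNING: hazard very likely"),
    (0.40, "CAUTION: hazard likely"),
    (0.10, "ADVISORY: hazard possible"),
]

def likelihood_alarm(p_hazard):
    """Map an estimated hazard probability to a graded alert message (or None)."""
    for threshold, message in ALERT_LEVELS:
        if p_hazard >= threshold:
            return message
    return None  # probability too low to alert

print(likelihood_alarm(0.55))   # CAUTION: hazard likely
print(likelihood_alarm(0.05))   # None
```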

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
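The Bayesian computation underlying Figure 3 is simple enough to state directly. The minimal sketch below uses the hit rate, false alarm rate, and base rates from the example above; the function name and output format are our own, and the sketch is illustrative rather than a reproduction of Parasuraman, Hancock, and Olofinboba's analysis.

```python
# Posterior probability of a true alarm via Bayes' rule, in the spirit of the
# analysis by Parasuraman, Hancock, and Olofinboba (1997). Illustrative only.

def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
    """Return P(hazard | alarm) for the given base rate and alarm characteristics."""
    p_alarm = base_rate * hit_rate + (1.0 - base_rate) * false_alarm_rate
    return (base_rate * hit_rate) / p_alarm

# Hit rate .999 and false alarm rate .0594, as in the example in the text.
for base_rate in (0.25, 0.10, 0.01, 0.001):
    p = posterior_true_alarm(base_rate, 0.999, 0.0594)
    print(f"base rate = {base_rate:5.3f}: P(true alarm) = {p:.3f}")

# At a base rate of .001 the posterior probability is roughly .017, so only
# about 1 alarm in 60 signals a genuine hazard, consistent with the 1-in-59
# figure cited above.
```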

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
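This trade-off can be made explicit in standard signal detection terms (the notation below is generic and is not taken from the cited studies). The expected cost of a given threshold setting is

\[
E[C] = p \, C_{\mathrm{miss}} \, P(\mathrm{miss} \mid \mathrm{hazard})
     + (1-p) \, C_{\mathrm{fa}} \, P(\mathrm{false\ alarm} \mid \mathrm{no\ hazard}),
\]

where p is the base rate of the hazardous condition. Considering error costs alone, the criterion that minimizes E[C] is the likelihood-ratio threshold

\[
\beta_{\mathrm{opt}} = \frac{1-p}{p} \cdot \frac{C_{\mathrm{fa}}}{C_{\mathrm{miss}}},
\]

so a very large miss cost pushes the criterion down (more alarms are accepted), whereas a very small base rate pushes it up. Because the cost of a missed collision is judged to be vastly greater than the cost of a false alarm, the criterion is set low, and the resulting false alarms, and the loss of trust they can produce, become the hidden cost noted above.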

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase 1 Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'accident survenu le 20 janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.


Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-9407). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317


Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum


Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that, compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances).

Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increases.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of hard-over, uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, slow-over rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft's wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot. It may not always be easy to quickly identify an actual autopilot failure because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed silently in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min. Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


[Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Detection rate of automation failures (%) is plotted across successive 10-minute blocks for the variable-reliability and constant-reliability conditions. Based on data from Parasuraman et al. (1993).]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.
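As a minimal illustration of what such low-level machine monitoring involves (a generic sketch, not the checklist or neural network systems cited above), a monitor might simply flag any parameter that drifts too far from its nominal value. The parameter names, nominal values, and limits below are hypothetical.

    # Generic sketch of a low-level machine monitor: flag any parameter whose
    # reading departs from its nominal value by more than k standard deviations.
    NOMINAL = {"EGT": (600.0, 15.0), "N1": (92.0, 2.0)}  # name: (mean, standard deviation)

    def abnormal_parameters(readings, k=3.0):
        """Return the parameters whose current readings look abnormal."""
        flagged = []
        for name, value in readings.items():
            mean, sd = NOMINAL[name]
            if abs(value - mean) > k * sd:
                flagged.append(name)
        return flagged

    print(abnormal_parameters({"EGT": 700.0, "N1": 91.5}))  # -> ['EGT']

Even a detector this simple becomes one more automated subsystem whose own failures and false alarms the operator must manage, which is the point made in the next paragraph.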

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
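To make the deviation-bar idea concrete, the sketch below rescales each parameter as a signed deviation from its normal value, so that during normal operation all columns sit on a single baseline and any fault breaks that line. This is only an illustration of the general principle; the parameter names and normalization constants are hypothetical, not those of the EMACS display.

    # Illustrative sketch of a deviation-bar (emergent feature) representation.
    # Each parameter is expressed as a deviation from normal in common units, so a
    # healthy engine produces a flat baseline.
    NORMAL = {"EPR": 1.4, "N1": 92.0, "EGT": 600.0, "FF": 2800.0}
    SCALE = {"EPR": 0.1, "N1": 5.0, "EGT": 50.0, "FF": 300.0}

    def deviation_bars(readings):
        """Map raw readings to signed deviations in normalized units."""
        return {p: (readings[p] - NORMAL[p]) / SCALE[p] for p in NORMAL}

    def baseline_broken(bars, tolerance=0.5):
        """The emergent feature: any column leaving the baseline signals a fault."""
        return [p for p, d in bars.items() if abs(d) > tolerance]

    bars = deviation_bars({"EPR": 1.40, "N1": 91.8, "EGT": 680.0, "FF": 2810.0})
    print(baseline_broken(bars))  # -> ['EGT']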


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
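A minimal sketch of one such allocation policy, combining the two ideas above (offload during peak workload, plus periodic manual periods to keep the operator engaged), is given below. The workload threshold is hypothetical, and the 40-min figure only loosely follows the schedule used in the study described next; this is an illustration, not a validated design.

    # Sketch of a simple adaptive task-allocation rule. minutes_since_manual is
    # the time since the operator last performed the task manually.
    def allocate(workload, minutes_since_manual, high_workload=0.8, max_automated=40):
        """Decide which agent should perform the monitored task right now."""
        if minutes_since_manual >= max_automated:
            return "manual"        # brief manual period to refresh the operator
        if workload > high_workload:
            return "automation"    # offload the task during peak workload
        return "manual"            # otherwise keep the operator actively involved

    print(allocate(workload=0.9, minutes_since_manual=15))  # -> 'automation'
    print(allocate(workload=0.9, minutes_since_manual=45))  # -> 'manual'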

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.
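This trade-off can be made explicit with standard signal detection theory (the kind of threshold analysis discussed by Swets, 1992), in which the expected-value-maximizing likelihood-ratio criterion depends on the base rate and on the payoffs assigned to hits, misses, false alarms, and correct rejections. The sketch below uses hypothetical payoff values and is not a formula taken from this paper.

    # Standard signal detection theory expression for the likelihood-ratio
    # criterion that maximizes expected value. Payoff magnitudes are hypothetical.
    def optimal_beta(base_rate, value_hit, cost_miss, value_cr, cost_fa):
        """Criterion rises as the hazard gets rarer or as false alarms get costlier."""
        return ((1 - base_rate) / base_rate) * ((value_cr + cost_fa) / (value_hit + cost_miss))

    # Even when a miss is weighted 1000 times as heavily as a false alarm, a rare
    # hazard (base rate .001) keeps the optimal criterion above the neutral value of 1.
    print(optimal_beta(0.001, value_hit=1, cost_miss=1000, value_cr=1, cost_fa=1))  # ~2.0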

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
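A likelihood-alarm display of this kind might map the system's estimated probability of the dangerous condition onto a small set of graded alert levels rather than a single on/off alarm. The level names and probability cut-offs below are hypothetical, chosen only to illustrate the idea.

    # Sketch of a likelihood-alarm mapping: graded alert levels instead of a
    # binary alarm.
    def alert_level(p_hazard):
        if p_hazard < 0.05:
            return "none"
        if p_hazard < 0.25:
            return "advisory"   # condition possible; continue normal monitoring
        if p_hazard < 0.70:
            return "caution"    # condition likely; cross-check raw data, prepare to act
        return "warning"        # condition very likely; immediate response expected

    print(alert_level(0.40))  # -> 'caution'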

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d′). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d′ = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
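The arithmetic behind these figures is a one-line application of Bayes' rule. The sketch below uses the hit rate, false alarm rate, and base rate quoted above; the small differences from the published values (d′ = 4.7, 1 in 59) reflect rounding in the quoted rates.

    # Reproduce the numbers quoted above: the sensitivity implied by the hit and
    # false alarm rates, and the posterior probability that an alarm is true.
    from statistics import NormalDist

    hit_rate, fa_rate, base_rate = 0.999, 0.0594, 0.001

    # d' for an equal-variance Gaussian observer: z(hit rate) - z(false alarm rate)
    d_prime = NormalDist().inv_cdf(hit_rate) - NormalDist().inv_cdf(fa_rate)

    def posterior_true_alarm(p, hit, fa):
        """P(hazard | alarm) by Bayes' rule."""
        return p * hit / (p * hit + (1 - p) * fa)

    p_true = posterior_true_alarm(base_rate, hit_rate, fa_rate)
    print(round(d_prime, 2), round(p_true, 3), round(1 / p_true))  # -> 4.65 0.017 60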

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d′ = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.

3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings C E amp Woods D D (1994) Concerns about adaptiveautomation in aviation systems In M Mouloua amp R Para-suraman (Eds) Human performance in automated systemsCurrent research and trends (pp 264-269) Hillsdale NJErlbaum

Bowers C A Oser R J Salas E amp Cannon-Bowers J A(1996) Team performance in automated systems In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 243-263)Hillsdale NJ Erlbaum

Byrne E A amp Parasuraman R (1996) Psychophysiology andadaptive automation Biological Psychology 42 249-268

Casey S (1993) Set phasers on stun Santa Barbara CA Ae-gean

Casner S (1994) Understanding the determinants of problem-solving behavior in a complex environment Human Fac-tors 36 580-596

Chambers N amp Nagel D C (1985) Pilots of the future Hu-man or computer Communications of the Association forComputing Machinery 28 1187-1199

Corwin W H Funk H Levitan 1 amp Bloomfield J (1993)Flight crew information requirements (Contractor RepDTFA- 91-C-00040) Washington DC Federal Aviation Ad-ministration

Donald M (1991) Origins of the modern mind Three stages in the evolution of culture and cognition Cambridge MA Harvard University Press

Duley J A Westerman S Molloy R amp Parasuraman R (inpress) Effects of display superimposition on monitoring ofautomated tasks In Proceedings of the 9th InternationalSymposium on Aviation Psychology Columbus Ohio StateUniversity

Edwards E (1977) Automation in civil transport aircraft Ap-plied Ergonomics 4 194-198

Endsley M (1996) Automation and situation awareness In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 163-181)Hillsdale NJ Erlbaum

Erzberger H (1992) CTAS Computer intelligence for air trafficcontrol in the terminal area (NASA Tech Memorandum103959) Moffett Field CA NASA Ames Research Center

Farber E amp Paley M (1993 April) Using freeway traffic datato estimate the effectiveness of rear-end collision countermea-sures Paper presented at the Third Annual IVHS AmericaMeeting Washington DC

Federal Aviation Administration (1990) The national plan foraviation human factors Washington DC Author

Fitts P M (1951) Human engineering for an effective air navi-gation and traffic control system Washington DC NationalResearch Council

Funk K Lyall B amp Riley V (1995) Perceived human factorsproblems of flightdeck automation (Phase I Final Rep) Cor-vallis Oregon State University

Gerwin D amp Leung T K (1986) The organizational impactsof flexible manufacturing systems In T Lupton (Ed) Hu-man factors Man machines and new technology (pp 157-170) New York Springer-Verlag

Getty D J Swets J A Pickett R M amp Gounthier D (1995)System operator response to warnings of danger A labora-tory investigation of the effects of the predictive value of awarning on human response time Journal of ExperimentalPsychology Applied 1 19-33

Gonzalez R C & Howington L C (1977) Machine recognition of abnormal behavior in nuclear reactors IEEE Transactions on Systems Man and Cybernetics SMC-7 717-728

Grabowski M & Wallace W A (1993) An expert system for maritime pilots Its design and assessment using gaming Management Science 39 1506-1520


Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar J K & Hansman R J (1995) A probabilistic methodology for the evaluation of alerting system performance In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium Cambridge MA

Langer E (1989) Mindfulness Reading MA Addison-Wesley

Lee J D & Moray N (1992) Trust control strategies and

allocation of function in human-machine systems Ergo-nomics 35 1243-1270

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministere de l'Equipement des Transports et du Tourisme (1993) Rapport de la Commission d'Enquete sur l'Accident survenu le 20 janvier 1992 pres du Mont Sainte-Odile a l'Airbus A320 immatricule F-GGED exploite par la Compagnie Air Inter Paris Author


Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board (1973) Eastern Airlines L-1011 Miami Florida 20 December 1972 (Rep NTSB-AAR-73-14) Washington DC Author

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board (1997a) Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station near Gaithersburg MD January 6 1996 (Rep NTSB-ATL-96-MR008) Washington DC Author

National Transportation Safety Board (1997b) Grounding ofthe Panamanian passenger ship Royal Majesty on Rose andCrown shoal near Nantucket Massachusetts June 10 1995(Rep NTSBMAR-97-01) Washington DC Author

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman R (1993) Effects of adaptive function allocationon human performance In D J Garland amp J A Wise(Eds) Human factors and advanced aviation technologies


(pp 147-157) Daytona Beach FL Embry-Riddle Aeronau-tical University Press

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman R Molloy R & Singh I L (1993) Performance consequences of automation-induced complacency International Journal of Aviation Psychology 3 1-23

Parasuraman R & Mouloua M (Eds) (1996) Automation and human performance Theory and applications Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley V (1994a) Human use of automation Unpublished doctoral dissertation University of Minnesota

Riley V (1994b) A theory of operator reliance on automation In M Mouloua & R Parasuraman (Eds) Human performance in automated systems Recent research and trends (pp 8-14) Hillsdale NJ Erlbaum

Riley V (1995) What avionics engineers should know about pilots and automation In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp 252-257) Cambridge MA AIAA

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter N (1996) Cockpit automation From quantity to qual-ity from individual pilots to multiple agents In R Para-


suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 267-280) HillsdaleNJ Erlbaum

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems Aldershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I L Deaton J E & Parasuraman R (1993 October) Development of a scale to measure pilot attitudes to cockpit automation Paper presented at the Annual Meeting of the Human Factors Society Seattle WA

Singh I L Molloy R & Parasuraman R (1993a) Automation-induced complacency Development of the complacency-potential rating scale International Journal of Aviation Psychology 3 111-121

Singh I L Molloy R & Parasuraman R (1993b) Individual differences in monitoring failures of automation Journal of General Psychology 120 357-373

Singh I L Molloy R & Parasuraman R (1997) Automation-related monitoring inefficiency The role of display location International Journal of Human Computer Studies 46 17-30

Smith M J & Carayon P (1995) New technology automation and work organization International Journal of Human Factors in Manufacturing 5 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A & Kahneman D (1974) Judgment under uncertainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317


Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum


Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant-reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task, without the tracking and fuel management tasks. Individuals were now nearly perfect (> 95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable-reliability condition than in the constant-reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.


Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993). [Line graph omitted: detection rate of automation failures (%) plotted over ten 10-minute blocks for the variable-reliability and constant-reliability conditions.]

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.
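As a concrete illustration of the kind of low-level machine monitoring just described, the sketch below flags an engine parameter that drifts outside a fixed tolerance band. The parameter names, nominal values, and tolerances are invented for the example and are not drawn from any of the systems cited above.

# Minimal sketch of an automated limit monitor for a process variable.
# Parameter names and tolerance values are illustrative only.

NOMINAL = {"egt_c": 600.0, "oil_press_kpa": 350.0}   # nominal operating points
TOLERANCE = {"egt_c": 50.0, "oil_press_kpa": 75.0}   # allowed deviation from nominal

def check_sample(sample):
    """Return a list of (parameter, deviation) pairs that exceed tolerance."""
    exceedances = []
    for name, value in sample.items():
        deviation = value - NOMINAL[name]
        if abs(deviation) > TOLERANCE[name]:
            exceedances.append((name, deviation))
    return exceedances

# Example: one sample with an out-of-band exhaust gas temperature.
print(check_sample({"egt_c": 675.0, "oil_press_kpa": 340.0}))
# -> [('egt_c', 75.0)]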

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead


to reliance on warning signals as the primary indicator of system malfunctions, rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
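A minimal sketch of the deviation logic behind such an integrated display is given below: each parameter is normalized against its nominal value, so that all bars sitting on the baseline becomes the emergent feature signaling normal operation. The parameter names and ranges are assumptions for illustration, not the actual EMACS implementation.

# Sketch of the deviation computation underlying an EMACS-like integrated display.
# Values are scaled so that 0.0 lies on the horizontal "normal" line;
# parameter names and nominal ranges are invented for illustration.

NOMINAL = {"n1": 85.0, "n2": 92.0, "egt": 600.0, "fuel_flow": 2200.0}
HALF_RANGE = {"n1": 10.0, "n2": 8.0, "egt": 60.0, "fuel_flow": 400.0}

def deviation_bars(readings):
    """Map raw readings to signed deviations in units of half-range."""
    return {p: (readings[p] - NOMINAL[p]) / HALF_RANGE[p] for p in readings}

def emergent_feature_intact(bars, threshold=0.5):
    """True when every bar sits near the baseline, i.e., the display 'looks flat'."""
    return all(abs(d) <= threshold for d in bars.values())

bars = deviation_bars({"n1": 86.0, "n2": 91.5, "egt": 655.0, "fuel_flow": 2250.0})
print(bars, emergent_feature_intact(bars))   # egt deviation ~0.92 breaks the flat line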


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was


reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).
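One simple way to schedule the adaptive allocation policy described above is sketched below: the task runs under automation, is returned briefly to manual control once a long automated period has elapsed, and the handback is deferred while estimated workload is high. The 40-min and 10-min durations echo the study just described, but the workload threshold and the scheduling rule itself are illustrative assumptions, not a prescription from that work.

# Sketch of an adaptive task-allocation schedule: after a long period under
# automation, the task is briefly returned to manual control, provided the
# operator's estimated workload permits. All numbers are illustrative.

AUTO_PERIOD_MIN = 40      # continuous automation before a manual "refresh"
MANUAL_REFRESH_MIN = 10   # length of the inserted manual period
WORKLOAD_CEILING = 0.7    # defer the refresh while estimated workload is high

def schedule(workload_by_minute):
    mode, minutes_in_auto, minutes_in_manual = "AUTO", 0, 0
    modes = []
    for w in workload_by_minute:
        if mode == "AUTO":
            minutes_in_auto += 1
            if minutes_in_auto >= AUTO_PERIOD_MIN and w < WORKLOAD_CEILING:
                mode, minutes_in_auto, minutes_in_manual = "MANUAL", 0, 0
        else:  # brief manual refresh period
            minutes_in_manual += 1
            if minutes_in_manual >= MANUAL_REFRESH_MIN:
                mode = "AUTO"
        modes.append(mode)
    return modes

# 85 minutes of moderate workload: a single 10-min MANUAL block begins at minute 40.
print(schedule([0.4] * 85).count("MANUAL"))   # -> 10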

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee &


Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental "picture" in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust


in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated, and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure)


have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.
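In standard signal detection terms (a formulation the text does not spell out), and ignoring any value attached to correct responses, the expected-cost-minimizing criterion weighs exactly these quantities:

\[
\beta^{*} = \frac{1-p}{p}\cdot\frac{C_{\mathrm{FA}}}{C_{\mathrm{M}}}
\]

where p is the base rate of the hazardous condition, C_FA the cost of a false alarm, and C_M the cost of a miss. A very high miss cost pulls the optimal criterion toward liberal responding (more alarms), whereas a very low base rate pulls it back toward stringency; the paragraphs that follow take up both sides of this trade-off.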

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
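A likelihood-alarm scheme of this kind might be sketched as follows, mapping an estimated hazard probability onto graded alert levels rather than a single binary alarm; the cut-points and level names are invented for illustration and are not taken from Sorkin et al. (1988).

# Sketch of a likelihood-alarm display: graded alert levels instead of a
# single binary alarm. Cut-points and labels are illustrative only.

LEVELS = [
    (0.90, "WARNING: hazard very likely"),
    (0.50, "CAUTION: hazard likely"),
    (0.10, "ADVISORY: hazard possible"),
]

def likelihood_alarm(p_hazard):
    """Return the alert message for an estimated hazard probability."""
    for threshold, message in LEVELS:
        if p_hazard >= threshold:
            return message
    return None  # no alert

print(likelihood_alarm(0.25))   # -> ADVISORY: hazard possible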

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.
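The dependence of the posterior probability on the base rate follows directly from Bayes' rule. Writing p for the base rate and using the warning system's hit and false alarm rates:

\[
P(S \mid \text{alarm}) = \frac{p \, P(\text{alarm} \mid S)}{p \, P(\text{alarm} \mid S) + (1-p) \, P(\text{alarm} \mid \bar{S})}
\]

so that even a high hit rate P(alarm | S) combined with a modest false alarm rate P(alarm | not-S) yields a low posterior probability when p is small.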


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (beta) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, beta can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
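The numerical example above can be reproduced, approximately, with a few lines of arithmetic; the published figures of .0168 and 1 in 59 come from the exact d' = 4.7 operating point, so the rounded rates quoted in the text give a slightly different result.

# Posterior probability of a true alarm, following the Bayesian analysis of
# Parasuraman, Hancock, and Olofinboba (1997). Hit and false alarm rates are
# the rounded values quoted in the text.

def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
    true_alarms = base_rate * hit_rate
    false_alarms = (1.0 - base_rate) * false_alarm_rate
    return true_alarms / (true_alarms + false_alarms)

p = posterior_true_alarm(base_rate=0.001, hit_rate=0.999, false_alarm_rate=0.0594)
print(round(p, 4), round(1 / p))   # ~0.0166: roughly 1 true hazard per 60 alarms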

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

These results indicate that designers of automated alerting systems must take into account

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis. [Graph omitted: family of curves for different decision criteria, with posterior probability on the ordinate and base rate on the abscissa.]


not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was


suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by


automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system, or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and po-


tentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
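The interlock logic at issue can be sketched in a few lines, which makes it easy to see how a single failed weight-on-wheels input blocks the stopping devices exactly when they are needed. The signal names, the wheel-speed backup condition, and the 72-knot figure are illustrative assumptions, not a description of any particular aircraft's software.

# Sketch of a weight-on-wheels (WoW) interlock. A design intended to protect
# against pilot error becomes a single point of failure if the sensors are wrong.
# Names, structure, and numbers are illustrative only.

def ground_deceleration_permitted(wow_left, wow_right, wheel_speed_kts=0.0):
    """Permit spoiler/thrust-reverser deployment only when 'on the ground' is sensed."""
    on_ground = wow_left and wow_right          # designer's definition of "on the ground"
    return on_ground or wheel_speed_kts > 72.0  # hypothetical wheel spin-up backup condition

# Aircraft firmly on a wet runway, but the WoW sensors fail to register compression
# and the wheels have not spun up:
print(ground_deceleration_permitted(wow_left=False, wow_right=False, wheel_speed_kts=0.0))
# -> False: the pilot is locked out of the very devices needed to stop the aircraft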

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered


approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual


operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally two themes merit special emphasisFirst many of the problems of automation mis-use disuse and abuse arise from differing expec-tations among the designers managers and op-erators of automated systems Our purpose is notto assign blame to designers managers or opera-tors but to point out that the complexities of theoperational environment and individual humanoperators may cause automation to be used inways different from how designers and managersintend Discovering the root causes of these

2SD-June 1997

differences is a necessary step toward informingthe expectations of designers and managers sothat operators are provided with automation thatbetter meets their needs and are given the author-ity and decision-making tools required to use theautomation to its best effect

Second individual differences in automationuse are ubiquitous Human use of automation iscomplex subject to a wide range of influencesand capable of exhibiting a wide range of pat-terns and characteristics That very complexitymakes the study of automation a large undertak-ing but the growing importance of automation insystems makes such study increasingly impera-tive Better understanding of why automation isused misused disused and abused will help fu-ture designers managers and operators of sys-tems avoid many of the errors that have plaguedthose of the past and present Application of thisknowledge can lead to improved systems the de-velopment of effective training curricula and theformulation of judicious policies and proceduresinvolving automation use

ACKNOWLEDGMENTS

We thank Kathy Abbott Peter Hancock Kathy Mosier BillHowell and two anonymous reviewers for helpful commentsOna previous draft of this paper This paper also benefited fromdiscussions with past and present members of the Human Fac-tors of Automation group at the Cognitive Science Laboratoryincluding Evan Byrne John Deaton Jackie Duley Scott Gal-ster Brian Hilburn Anthony Masalonis Robert Molloy Mus-tapha Mouloua and Indramani Singh This research was sup-ported by research grant NAG-I-1296 from the NationalAeronautics and Space Administration Langley Research Cen-ter Langley Virginia to the Catholic University of America(RP) and by contract DTFAOI-91-C-0039 from the FederalAviation Administration to Honeywell Technology Center(VR) The views presented in this paper are those of the au-thors and not necessarily representative of these organizations

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.
Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.
Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-Warsaw. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.
Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94/07). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97/01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.
Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced "complacency." International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.
Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.
Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.
Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.
Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.
Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.
Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.
Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.
Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.
Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced "complacency": Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.
Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.
Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.
Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.
Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.
Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies - A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.
Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.
Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.
Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.
Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.
Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.
Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.
Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.
Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.
Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.
Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


to reliance on warning signals as the primary indicator of system malfunctions, rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing the attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.
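To make the emergent-feature idea concrete, the sketch below renders a toy version of such a deviation display in plain text. It is not the EMACS implementation; the parameter names, nominal values, and tolerances are invented for illustration. When all parameters are near normal the bar markers line up on the baseline, and a malfunctioning parameter visibly breaks that line.

```python
# Toy "deviation bar" engine display (illustrative only, not the EMACS code):
# each parameter is scaled to its deviation from nominal, so normal operation
# forms a flat baseline (the emergent feature) and a fault breaks that line.

NOMINAL = {"EPR": 1.4, "N1": 92.0, "EGT": 520.0, "FF": 2600.0}   # invented values
TOLERANCE = {"EPR": 0.2, "N1": 6.0, "EGT": 60.0, "FF": 400.0}

def deviations(readings):
    """Return each parameter's deviation from nominal, in tolerance units."""
    return {p: (readings[p] - NOMINAL[p]) / TOLERANCE[p] for p in NOMINAL}

def render(readings, width=10):
    """Print a crude text bar graph; a flat column of '|' marks normal operation."""
    for p, d in deviations(readings).items():
        offset = max(-width, min(width, round(d * width)))
        cells = [" "] * (2 * width + 1)
        cells[width + offset] = "|"          # marker position relative to baseline
        flag = " <-- off-normal" if abs(d) > 1.0 else ""
        print(f"{p:>4} {''.join(cells)}{flag}")

render({"EPR": 1.42, "N1": 91.5, "EGT": 585.0, "FF": 2650.0})  # EGT is anomalous
```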


Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
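A minimal sketch of such a time-based allocation policy is given below, assuming a simple repeating cycle; the 40-min and 10-min values echo the schedule used in the study described next, and a fielded adaptive system would more plausibly trigger reallocation on workload or performance measures rather than on elapsed time alone.

```python
# Sketch of time-based adaptive task allocation (an illustration, not the
# experimental software from the cited studies): the task cycles between a long
# period under automation and a brief manual interlude that refreshes the
# operator's hands-on familiarity with it.

def allocation(elapsed_min: float, auto_period: float = 40.0,
               manual_period: float = 10.0) -> str:
    """Return 'AUTO' or 'MANUAL' for the current point in a repeating cycle."""
    phase = elapsed_min % (auto_period + manual_period)
    return "AUTO" if phase < auto_period else "MANUAL"

for t in (10, 39, 41, 49, 51):
    print(f"t = {t:>2} min -> {allocation(t)}")
# 10 and 39 min fall in the automation period; 41 and 49 min fall in the manual
# refresh; by 51 min the task has been handed back to the automation.
```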

Parasuraman, Mouloua, and Molloy (1996) tested this idea using the same flight simulation task developed by Parasuraman et al. (1993). They found that after a 40-min period of automation, a 10-min period in which the task was reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agent-like behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to "creative disablement" of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the value of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.
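The sketch below illustrates this trade-off under standard equal-variance Gaussian signal detection assumptions; this is textbook signal detection theory, not a model taken from the studies cited here. The expected-cost-optimal criterion depends on the base rate and on the relative costs of misses and false alarms, and for a detector of given sensitivity d' its placement fixes both the hit and false alarm rates.

```python
# Rough sketch of criterion setting under equal-variance Gaussian signal
# detection assumptions (illustrative values; not from the cited studies).
from math import erf, log, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def optimal_beta(p_signal: float, cost_miss: float, cost_fa: float,
                 value_hit: float = 0.0, value_cr: float = 0.0) -> float:
    """Likelihood-ratio criterion that maximizes expected value (textbook result)."""
    return ((1.0 - p_signal) * (value_cr + cost_fa)) / (p_signal * (value_hit + cost_miss))

def rates_for_beta(beta: float, d_prime: float):
    """Hit and false alarm rates when the criterion sits at likelihood ratio beta."""
    # For equal-variance Gaussians the criterion location on the decision axis is
    # x_c = ln(beta)/d' + d'/2, measured from the noise distribution mean.
    x_c = log(beta) / d_prime + d_prime / 2.0
    return 1.0 - phi(x_c - d_prime), 1.0 - phi(x_c)

# Example: a rare hazard whose misses are 1000 times costlier than false alarms.
beta = optimal_beta(p_signal=0.001, cost_miss=1000.0, cost_fa=1.0)
hit, fa = rates_for_beta(beta, d_prime=4.7)
print(f"beta = {beta:.3f}, hit rate = {hit:.4f}, false alarm rate = {fa:.4f}")
```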

However a very stringent criterion may notprovide sufficient advance warning In an analy-sis of automobile collision warning systems Far-ber and Paley (1993) suggested that too Iowafalse alarm rate may also be undesirable becauserear-end collisions occur very infrequently (per-haps once or twice in the lifetime of a driver) Ifthe system never emits a false alarm then thefirst time the warning sounds would be just be-fore a crash Under these conditions the drivermight not respond alertly to such an improbableevent Farber and Paley (1993) speculated that anideal system would be one that signals a collision-possible condition even though the driver wouldprobably avoid a crash Although technically afalse alarm this type of information might beconstrued as a warning aid in allowing improvedresponse to an alarm in a collision-likely situa-tion Thus all false alarms need not necessarily beharmful This idea is similar to the concept of alikelihood-alarm in which more than the usualtwo alarm states are used to indicate several pos-sible levels of the dangerous condition rangingfrom very unlikely to very certain (Sorkin Kan-towitz amp Kantowitz 1988)
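A likelihood-alarm scheme can be sketched as a simple mapping from an estimated hazard probability to graded alert levels; the probability bands and labels below are illustrative only, not values taken from Sorkin et al. (1988).

```python
# Sketch of a likelihood alarm: instead of a single binary alert, the display
# grades the alarm by the estimated probability of the dangerous condition.
# The bands and labels are invented for illustration.

ALERT_BANDS = [
    (0.75, "WARNING: hazard very likely"),
    (0.30, "CAUTION: hazard likely"),
    (0.05, "ADVISORY: hazard possible"),
    (0.00, "no alert"),
]

def likelihood_alarm(p_hazard: float) -> str:
    """Map an estimated hazard probability to a graded alarm state."""
    for threshold, label in ALERT_BANDS:
        if p_hazard >= threshold:
            return label
    return "no alert"

for p in (0.9, 0.4, 0.1, 0.01):
    print(f"p = {p:.2f} -> {likelihood_alarm(p)}")
```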

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (beta) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, beta can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have "cried wolf" once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
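The short script below reproduces this base-rate arithmetic with Bayes' rule, using the hit and false alarm rates quoted above; small differences from the published .0168 figure reflect rounding of those rates.

```python
# Check of the base-rate arithmetic in the example above: Bayes' rule gives the
# posterior probability that an alarm is true from the hit rate, the false alarm
# rate, and the a priori probability of the hazardous condition.

def posterior_true_alarm(base_rate: float, hit_rate: float, fa_rate: float) -> float:
    """P(hazard | alarm) for a warning system with the given hit and false alarm rates."""
    p_alarm = base_rate * hit_rate + (1.0 - base_rate) * fa_rate
    return base_rate * hit_rate / p_alarm

# Hit rate .999 and false alarm rate .0594, as in the d' = 4.7 example in the text.
for base_rate in (0.25, 0.1, 0.01, 0.001):
    p = posterior_true_alarm(base_rate, hit_rate=0.999, fa_rate=0.0594)
    print(f"base rate {base_rate:>6}: P(true | alarm) = {p:.4f}  (~1 in {1 / p:.0f} alarms)")
```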

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3. Posterior probability P(S|R) of a hazardous condition S given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.]

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden: if high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
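The simplified logic below (not actual avionics code) illustrates how such an interlock relocates the vulnerability from operator to designer: deployment of the stopping devices is gated on the sensor, so a failed sensor inhibits them at precisely the moment they are needed.

```python
# Simplified illustration of a weight-on-wheels interlock (invented logic, not
# any real aircraft's implementation): ground spoilers and thrust reversers are
# inhibited unless the sensor reports ground contact.

def can_deploy_stopping_devices(weight_on_wheels_sensor: bool) -> bool:
    """Interlock: deployment is permitted only when the sensor indicates ground contact."""
    return weight_on_wheels_sensor

# Aircraft actually on the runway, but the sensor has failed and reads False:
actually_on_ground = True
sensor_reading = False
if actually_on_ground and not can_deploy_stopping_devices(sensor_reading):
    print("On the ground, but spoilers and reversers are inhibited by the interlock.")
```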

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station, because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more from having an operator who, by virtue of active involvement in the process, is aware of the environmental conditions the system is responding to and the status of the process being performed, than from having an operator who may not be capable of recognizing problems and intervening effectively, even if this means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for, and costs of, designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.


Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman R Molloy R amp Singh I 1 (1993) Perfor-mance consequences of automation-induced compla-cency International Journal of Aviation Psychology 31-23

Parasurarnan R amp Mouloua M (Eds) (1996) Automationand human performance Theory and applications HillsdaleNJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley V (I 994a) Human use of automation Unpublished doc-toral dissertation University of Minnesota

Riley V (I 994b) A theory of operator reliance on automationIn M Mouloua amp R Parasuraman (Eds) Human perfor-mance in automated systems Recent research and trends (pp8-14) Hillsdale NJ Erlbaum

Riley V (1995) What avionics engineers should know aboutpilots and automation In Proceedings of the 14th AlANIEEE Digital Avionics Systems Conference (pp 252-257)Cambridge MA AIAA

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter N (1996) Cockpit automation From quantity to qual-ity from individual pilots to multiple agents In R Para-

HUMAN FACTORS

suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 267-280) HillsdaleNJ Erlbaum

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I 1 Deaton J E amp Parasuraman R (1993 October)Development of a scale to measure pilot attitudes to cockpitautomation Paper presented at the Annual Meeting of theHuman Factors Society Seattle WA

Singh I 1 Molloy R amp Parasuraman R (l993a) Automa-tion-induced complacency Development of the compla-cency-potential rating scale International Journal of Avia-tion Psychology 3 111-121

Singh I 1 Molloy R amp Parasuraman R (l993b) Individualdifferences in monitoring failures of automation Journal ofGeneral Psychology 120 357-373

Singh I 1 Molloy R amp Parasuraman R (1997) Automation-related monitoring inefficiency The role of display loca-tion International Journal of Human Computer Studies 4617-30

Smith M J amp Carayon P (1995) New technology automa-tion and work organization International Journal of Hu-man Factors in ManufacturingS 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317

HUMAN USE OF AUTOMATION

Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum

June 1997-253

Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a PhD in psychology from the Uni-versity of Aston Birmingham England in 1976 He is a pro-fessor of psychology and director of the Cognitive ScienceLaboratory at the Catholic University of America WashingtonDC

Victor Riley received a PhD in experimental psychology fromthe University of Minnesota in 1994 He is a senior researchscientist at Honeywell Technology Center Minneapolis Min-nesota

Date received June 17 1996Date accepted November 11 1996

HUMAN USE OF AUTOMATION

reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford direct perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a premature cognitive commitment, which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple agents who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when in fact the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the mental picture in controllers who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
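The likelihood-alarm concept can be made concrete with a short sketch. The Python fragment below maps the posterior probability of the hazardous condition onto several graded alert states instead of a single binary alarm; the threshold values and alert labels are illustrative assumptions, not those of Sorkin et al. (1988).

```python
# Illustrative sketch of a likelihood-alarm display: graded alert states rather than a
# single binary alarm. Thresholds and labels are assumptions chosen for the example.
def likelihood_alert(posterior: float) -> str:
    """Map the posterior probability of a hazard onto a graded alert state."""
    if posterior < 0.05:
        return "no alert"
    elif posterior < 0.25:
        return "advisory: hazard possible but unlikely"
    elif posterior < 0.75:
        return "caution: hazard likely"
    else:
        return "warning: hazard very likely"

for p in (0.02, 0.20, 0.60, 0.90):
    print(f"p = {p:.2f} -> {likelihood_alert(p)}")
```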

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.

Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
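The arithmetic behind these figures is straightforward Bayesian updating and can be reproduced directly. The following Python sketch uses the rates quoted above (hit rate = .999, false alarm rate = .0594, base rate = .001); it is an illustration of the calculation reported by Parasuraman, Hancock, and Olofinboba (1997), not their code, and small departures from the published value of .0168 reflect rounding of the quoted rates.

```python
from statistics import NormalDist  # standard library; used only for the d' check

hit_rate = 0.999    # P(alarm | hazard)
fa_rate = 0.0594    # P(alarm | no hazard)
base_rate = 0.001   # P(hazard): a priori probability (base rate) of the hazardous event

# Consistency check: under the equal-variance Gaussian model, d' = z(hit) - z(fa).
z = NormalDist().inv_cdf
print(z(hit_rate) - z(fa_rate))        # ~4.65, close to the stated d' = 4.7

# Bayes' rule: posterior probability that a hazard is present, given an alarm.
p_alarm = hit_rate * base_rate + fa_rate * (1.0 - base_rate)
posterior = (hit_rate * base_rate) / p_alarm
print(posterior)                       # ~0.017: roughly 1 alarm in 60 is a true alarm
```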

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

Figure 3. Posterior probability P(S|R) of a hazardous condition S, given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
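This trade-off can be framed with the classical expected-cost analysis from signal detection theory. The sketch below is a minimal illustration under assumed costs (the cost figures are hypothetical, not taken from the paper): with correct responses valued at zero, the likelihood-ratio criterion that minimizes expected cost is beta = [P(no hazard) / P(hazard)] x (cost of a false alarm / cost of a miss), so a rare event with a very high miss cost drives the optimal criterion toward liberal responding, and hence toward frequent false alarms.

```python
# Minimal sketch of the expected-cost criterion (a standard signal detection theory
# result), using illustrative, assumed costs rather than values from the paper.
base_rate = 0.001          # P(hazard): rare event
cost_false_alarm = 1.0     # e.g., one unnecessary avoidance maneuver (assumed unit cost)
cost_miss = 10_000.0       # e.g., a collision; assumed to be vastly more costly

# Optimal likelihood-ratio criterion when correct responses carry no payoff.
beta_opt = ((1.0 - base_rate) / base_rate) * (cost_false_alarm / cost_miss)
print(beta_opt)   # ~0.1 (< 1): respond liberally and accept many false alarms,
                  # which is exactly the trust problem discussed above.
```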

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers, and implementation by managers, without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used, rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operate the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.

CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.

3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association (1989) National plan to enhanceaviation safety through human factors improvements Wash-ington DC Author

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett K amp Flach J M (1992) Graphical displays Implica-tions for divided attention focused attention and problemsolving Human Factors 34 513-533

Bessant J Levy P Ley C Smith S amp Tranfield D (1992)Organization design for factory 2000 International Journalof Human Factors in Manufacturing 2 95-125

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings C E amp Woods D D (1994) Concerns about adaptiveautomation in aviation systems In M Mouloua amp R Para-suraman (Eds) Human performance in automated systemsCurrent research and trends (pp 264-269) Hillsdale NJErlbaum

Bowers C A Oser R J Salas E amp Cannon-Bowers J A(1996) Team performance in automated systems In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 243-263)Hillsdale NJ Erlbaum

Byrne E A amp Parasuraman R (1996) Psychophysiology andadaptive automation Biological Psychology 42 249-268

Casey S (1993) Set phasers on stun Santa Barbara CA Ae-gean

Casner S (1994) Understanding the determinants of problem-solving behavior in a complex environment Human Fac-tors 36 580-596

Chambers N amp Nagel D C (1985) Pilots of the future Hu-man or computer Communications of the Association forComputing Machinery 28 1187-1199

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley J A Westerman S Molloy R amp Parasuraman R (inpress) Effects of display superimposition on monitoring ofautomated tasks In Proceedings of the 9th InternationalSymposium on Aviation Psychology Columbus Ohio StateUniversity

Edwards E (1977) Automation in civil transport aircraft Ap-plied Ergonomics 4 194-198

Endsley M (1996) Automation and situation awareness In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 163-181)Hillsdale NJ Erlbaum

Erzberger H (1992) CTAS Computer intelligence for air trafficcontrol in the terminal area (NASA Tech Memorandum103959) Moffett Field CA NASA Ames Research Center

Farber E amp Paley M (1993 April) Using freeway traffic datato estimate the effectiveness of rear-end collision countermea-sures Paper presented at the Third Annual IVHS AmericaMeeting Washington DC

Federal Aviation Administration (1990) The national plan foraviation human factors Washington DC Author

Fitts P M (1951) Human engineering for an effective air navi-gation and traffic control system Washington DC NationalResearch Council

Funk K Lyall B amp Riley V (1995) Perceived human factorsproblems of flightdeck automation (Phase I Final Rep) Cor-vallis Oregon State University

Gerwin D amp Leung T K (1986) The organizational impactsof flexible manufacturing systems In T Lupton (Ed) Hu-man factors Man machines and new technology (pp 157-170) New York Springer-Verlag

Getty D J Swets J A Pickett R M amp Gounthier D (1995)System operator response to warnings of danger A labora-tory investigation of the effects of the predictive value of awarning on human response time Journal of ExperimentalPsychology Applied 1 19-33

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministère de l'Équipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'accident survenu le 20 janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.


Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 29 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown Shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97/01). Washington, DC: Author.

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman R (1993) Effects of adaptive function allocationon human performance In D J Garland amp J A Wise(Eds) Human factors and advanced aviation technologies


(pp 147-157) Daytona Beach FL Embry-Riddle Aeronau-tical University Press

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317

HUMAN USE OF AUTOMATION

Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum

June 1997-253

Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a PhD in psychology from the Uni-versity of Aston Birmingham England in 1976 He is a pro-fessor of psychology and director of the Cognitive ScienceLaboratory at the Catholic University of America WashingtonDC

Victor Riley received a PhD in experimental psychology fromthe University of Minnesota in 1994 He is a senior researchscientist at Honeywell Technology Center Minneapolis Min-nesota

Date received June 17 1996Date accepted November 11 1996

244-June 1997

in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.

Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to creative disablement of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold, or criterion, that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate, and hence the operator's trust in an automated alerting system, are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable, because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus, all false alarms need not necessarily be harmful. This idea is similar to the concept of a likelihood-alarm, in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
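As an illustration of how a likelihood-alarm display of this kind might work, the following sketch maps an estimated hazard probability onto graded alert levels rather than a single binary alarm. The level names and probability cut-offs are illustrative assumptions of ours, not values taken from Sorkin, Kantowitz, and Kantowitz (1988).

```python
# Hypothetical sketch of a likelihood-alarm display: the estimated probability
# of the hazardous condition is mapped onto graded alert levels instead of a
# single on/off alarm. Thresholds and labels below are illustrative only.

def likelihood_alarm(p_hazard: float) -> str:
    """Return a graded alert level for an estimated hazard probability."""
    if p_hazard < 0.05:
        return "NO ALERT"      # hazard very unlikely
    elif p_hazard < 0.25:
        return "ADVISORY"      # hazard possible; monitor the situation
    elif p_hazard < 0.60:
        return "CAUTION"       # hazard likely; prepare to respond
    else:
        return "WARNING"       # hazard very likely; respond immediately

print(likelihood_alarm(0.12))  # -> ADVISORY
```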

Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability, or base rate, of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1,000 hazardous events (hit rate = .999) while having a false alarm rate of .0594.

Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms: they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.
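The arithmetic behind these figures can be reproduced with a short Bayesian calculation. The sketch below uses the hit rate, false alarm rate, and base rate quoted above; the function name is ours rather than anything from Parasuraman, Hancock, and Olofinboba (1997).

```python
# Posterior probability of a true alarm, P(hazard | alarm), from Bayes' rule.
def posterior_true_alarm(base_rate: float, hit_rate: float, fa_rate: float) -> float:
    p_alarm = base_rate * hit_rate + (1.0 - base_rate) * fa_rate
    return (base_rate * hit_rate) / p_alarm

p = posterior_true_alarm(base_rate=0.001, hit_rate=0.999, fa_rate=0.0594)
print(round(p, 4))   # ~0.0166: close to the .0168 quoted above (difference is rounding)
print(round(1 / p))  # ~60: roughly one true hazard per 59-60 alarms
```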

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

Figure 3. Posterior probability, P(S|R), of a hazardous condition S given an alarm response R, for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of the a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and the consequential loss of trust in the system are often not included in this trade-off.
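A small worked example may make the asymmetry concrete. The cost figures and rates below are invented purely for illustration; the point is that an expected-cost comparison of this kind usually leaves out the harder-to-quantify cost of lost operator trust.

```python
# Expected cost per operating interval for a given criterion setting.
# All numbers are illustrative assumptions, not values from the text.
def expected_cost(base_rate, hit_rate, fa_rate, miss_cost, fa_cost):
    p_miss = base_rate * (1.0 - hit_rate)        # hazard present, no alarm
    p_false_alarm = (1.0 - base_rate) * fa_rate  # no hazard, alarm anyway
    return p_miss * miss_cost + p_false_alarm * fa_cost

# Lenient criterion: almost no misses, many false alarms.
lenient = expected_cost(0.001, 0.999, 0.06, miss_cost=1e7, fa_cost=100.0)
# Strict criterion: far fewer false alarms, more misses.
strict = expected_cost(0.001, 0.90, 0.001, miss_cost=1e7, fa_cost=100.0)
print(round(lenient, 1), round(strict, 1))  # the lenient setting looks far cheaper
```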

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation, specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator, or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.

When automation is applied for reasons of safety, it is often because a particular incident or accident has identified cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
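A deliberately simplified sketch of interlock logic of this general kind is shown below. It is hypothetical and does not reproduce the logic of any actual aircraft, but it makes the designer-error point concrete: the protection and the vulnerability are the same line of code.

```python
# Hypothetical weight-on-wheels interlock: ground-deceleration devices can be
# deployed only if the sensor reports that the gear are on the ground.
def can_deploy_ground_devices(weight_on_wheels: bool) -> bool:
    return weight_on_wheels

# Intended protection: deployment is blocked while airborne.
assert can_deploy_ground_devices(False) is False

# Designer-error failure mode: the sensor fails (reads False) after touchdown,
# and the same check blocks deployment exactly when the devices are needed.
sensor_reading_after_touchdown = False  # failed sensor on the runway
assert can_deploy_ground_devices(sensor_reading_after_touchdown) is False
```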

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case is an example of the human operator not being able to use automation because of prior decisions made by the designer of the automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more from an operator who, by virtue of active involvement in the process, is aware of the environmental conditions the system is responding to and the status of the process being performed than from an operator who may not be capable of recognizing problems and intervening effectively, even if this means that system performance is not as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems whose costs can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for, and costs of, designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and of individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase 1 Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.

Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.

Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.

Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.

Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.

Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.

Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.

IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.

Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.

May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.

McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.

Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.


Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.

Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.

Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.

Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Area Transit Authority Train T-111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown Shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.

Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.

Norman, D. (1990). The "problem" with automation: Inappropriate feedback and interaction, not "over-automation." Proceedings of the Royal Society of London, B237, 585-593.

Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced "complacency." International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio Technical Commission for Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced "complacency": Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating "direct perception" in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.


Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.


Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996

HUMAN USE OF AUTOMATION June 1997-245

Parasuraman Hancock and Olofinboba(1997) carried out a Bayesian analysis to examinethe dependence of the posterior probability onthe base rate for a system with a given detectionsensitivity (d) Figure 3 shows a family of curvesrepresenting the different posterior probabilitiesof a true alarm when the decision criterion (13) isvaried for a warning system with fixed sensitivity(in this example d = 47) For example 13 can beset so that this warning system misses only 1 ofevery 1000 hazardous events (hit rate = 999)while having a false alarm rate of 0594

Despite the high hit rate and relatively low falsealarm rate the posterior odds of a true alarmwith such a system could be very low For ex-ample when the a priori probability (base rate)of a hazardous condition is low (say 001) only 1in 59 alarms that the system emits represents atrue hazardous condition (posterior probability =

0168) It is not surprising then that many hu-

man operators tend to ignore and turn offalarms-they have cried wolf once too often (Sor-kin 1988) As Figure 3 indicates reliably highposterior alarm probabilities are guaranteed foronly certain combinations of the decision crite-rion and the base rate

Even if operators do attend to an alarm theymay be slow to respond when the posterior prob-ability is low Getty Swets Pickett and Goun-thier (1995) tested participants response to avisual alert while they performed a tracking taskParticipants became progressively slower to re-spond as the posterior probability of a true alarmwas reduced from 75 to 25 A case of a prisonescape in which the escapee deliberately set off amotion detector alarm knowing that the guardswould be slow to respond provides a real-life ex-ample of this phenomenon (Casey 1993)

These results indicate that designers of auto-mated alerting systems must take into account

01000800600400200

000

10

- 08aUJ-Dgt-c 06CllJJ0bullDbull 040II)-en0D 02 B

A Priori Probability (p)Base Rate

Figure 3 Posterior probability (P[SIR) of a hazardous condition S given an alarmresponse R for an automated warning system with fixed sensitivity d = 47 plotted asa function of a priori probability (base rate) of S From Parasuraman Hancock andOlofinboba (1997) Reprinted with permission of Taylor amp Francis

246-June 1997

not only the decision threshold at which thesesystems are set (Kuchar amp Hansman 1995Swets 1992) but also the a priori probabilities ofthe condition to be detected (Parasuraman Han-cock et aI 1997) Only then will operators tendto trust and use the system In many warningsystems designers accept a somewhat high falsealarm rate if it ensures a high hit rate The reasonfor this is that the costs of a miss can be ex-tremely high whereas the costs of a false alarmare thought to be much lower

For example if a collision avoidance systemsdecision criterion is set high enough to avoid alarge number of false alarms it may also miss areal event and allow two airplanes to collideHaving set the criterion low enough to ensurethat very few real events are missed the de-signers must accept a higher expected level offalse alarms Normally this is thought to incur avery low cost (eg merely the cost of executingan unnecessary avoidance maneuver) but theconsequences of frequent false alarms and con-sequential loss of trust in the system are oftennot included in this trade-off

Practical Implications

The costs of operator disuse because of mis-trust of automation can be substantial Operatordisabling or ignoring of alerting systems hasplayed a role in several accidents In the Conrailaccident near Baltimore in 1987 investigatorsfound that the alerting buzzer in the train cabhad been taped over Subsequent investigationshowed similar disabling of alerting systems inother cabs even after prior notification of an in-spection was given

Interestingly following another recent trainaccident involving Amtrak and Marc trains nearWashington DC there were calls for fittingtrains with automated braking systems Thissolution seeks to compensate for the possibilitythat train operators ignore alarms by overridingthe operator and bringing a train that is in viola-tion of a speed limit to a halt This is one of thefew instances in which a conscious design deci-sion is made to allow automation to overridethe human operator and it reflects as was sug-

HUMAN FACTORS

gested earlier an explicit expression of the rela-tive levels of trust between the human operatorand automation In most cases this trade-off isdecided in favor of the operator in this casehowever it has been made in favor of automationspecifically because the operator has been judgeduntrustworthy

Designers of alerting systems must take intoaccount both the decision threshold and the baserate of the hazardous condition in order for op-erators to trust and use these systems The costsof unreliable automation may be hidden If highfalse alarm rates cause operators not to use asystem and disuse results in an accident systemdesigners or managers determine that the opera-tor was at fault and implement automation tocompensate for the operators failures The op-erator may not trust the automation and couldattempt to defeat it the designer or manager doesnot trust the operator and puts automation in aposition of greater authority This cycle of opera-tor distrust of automation and designer or man-ager distrust of the operator may lead to abuse ofautomation

ABUSE OF AUTOMATION

Automation abuse is the automation of func-tions by designers and implementation by man-agers without due regard for the consequencesfor human (and hence system) performance andthe operators authority over the system The de-sign and application of automation whether inaviation or in other domains has typically beentechnology centered Automation is appliedwhere it provides an economic benefit by per-forming a task more accurately or more reliablythan the human operator or by replacing the op-erator at a lower cost As mentioned previouslytechnical and economic factors are valid reasonsfor automation but only if human performancein the resulting system is not adversely affected

When automation is applied for reasons ofsafety it is often because a particular incident oraccident identifies cases in which human errorwas seen to be a major contributing factor De-signers attempt to remove the source of error by

HUMAN USE OF AUTOMATION

automating functions carried out by the humanoperator The design questions revolve aroundthe hardware and software capabilities requiredto achieve machine control of the function Lessattention is paid to how the human operator willuse the automation in the new system or howoperator tasks will change As Riley (1995)pointed out when considering how automationis used rather than designed one moves awayfrom the normal sphere of influence (and inter-est) of system designers

Nevertheless understanding how automationis used may help developers produce better auto-mation After all designers do make presump-tions about operator use of automation Theimplicit assumption is that automation willreduce operator errors For example automatedsolutions have been proposed for many errorsthat automobile drivers make automated navi-gation systems for route-finding errors colli-sion avoidance systems for braking too late be-hind a stopped vehicle and alertness indicatorsfor drowsy drivers The critical human factorsquestions regarding how drivers would use suchautomated systems have not been examined(Hancock amp Parasuraman 1992 Hancock Para-suraman amp Byrne 1996)

Several things are wrong with this approachFirst one cannot remove human error from thesystem simply by removing the human operatorIndeed one might think of automation as ameans of substituting the designer for the opera-tor To the extent that a system is made less vul-nerable to operator error through the introduc-tion of automation it is made more vulnerable todesigner error As an example of this considerthe functions that depend on the weight-on-wheels sensors on some modern aircraft The pi-lot is not able to deploy devices to stop the air-plane on the runway after landing unless thesystem senses that the gear are on the groundthis prevents the pilot from inadvertently deploy-ing the spoilers to defeat lift or operate the thrustreversers while still in the air These protectionsare put in place because of a lack of trust in thepilot to not do something unreasonable and po-

June 1997-247

tentially catastrophic If the weight-on-wheelssensor fails however the pilot is prevented fromdeploying these devices precisely when they areneeded This represents an error of the designerand it has resulted in at least one serious incident(Poset 1992) and accident (Main CommissionAircraft Accident Investigation 1994)

Second certain management practices or cor-porate policies may prevent human operatorsfrom using automation effectively particularlyunder emergency conditions The weight-on-wheels sensor case represents an example of thehuman operator not being able to use automa-tion because of prior decisions made by the de-signer of automation Alternatively even thoughautomation may be designed to be engaged flex-ibly management may not authorize its use incertain conditions This appears to have been thecase in a recent accident involving a local transittrain in Gaithersburg Maryland The train col-lided with a standing train in a heavy snowstormwhen the automatic speed control system failedto slow the train sufficiently when approachingthe station because of snow on the tracks It wasdetermined that the management decision torefuse the train operators request to run the trainmanually because of poor weather was a majorfactor in the accident (National TransportationSafety Board 1997a) Thus automation can alsoact as a surrogate for the manager just as it canfor the system designer

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for the lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.

CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.

3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition; a minimal sketch of such a graded alarm follows this list.
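One way to realize such likelihood alarms, in the spirit of the likelihood alarm displays described by Sorkin, Kantowitz, and Kantowitz (1988), is to map an estimated hazard probability onto graded alert levels rather than onto a single binary threshold. The following minimal sketch is illustrative only; the cut-off values and message wording are assumptions chosen for the example, not recommended standards.

    # Minimal sketch of a likelihood alarm: graded advisories instead of a
    # single binary alarm. The cut-off probabilities are illustrative assumptions.

    def likelihood_alert(p_hazard: float) -> str:
        """Map an estimated hazard probability onto a graded alert level."""
        if p_hazard >= 0.80:
            return "WARNING: hazard likely - respond immediately"
        if p_hazard >= 0.30:
            return "CAUTION: hazard possible - check raw data"
        if p_hazard >= 0.05:
            return "ADVISORY: low-likelihood indication"
        return "no alert"

    for p in (0.02, 0.10, 0.50, 0.95):
        print(f"p = {p:.2f} -> {likelihood_alert(p)}")

Presenting the estimated likelihood rather than a yes/no verdict leaves the final judgment, and the authority, with the operator.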

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and of individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.
Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium, Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-Warsaw. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus: Ohio State University.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (Eds.). (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T-111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB-MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.
Radio Technical Commission for Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.
Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.
Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.
Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.
Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.
Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.
Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.
Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.
Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.
Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.
Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.
Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.
Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.
Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.
Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.
Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.
Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.
Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.
Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.
Wiener, E. L. (1989). Human factors of advanced technology (glass cockpit) transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.
Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.
Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.
Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley V (I 994a) Human use of automation Unpublished doc-toral dissertation University of Minnesota

Riley V (I 994b) A theory of operator reliance on automationIn M Mouloua amp R Parasuraman (Eds) Human perfor-mance in automated systems Recent research and trends (pp8-14) Hillsdale NJ Erlbaum

Riley V (1995) What avionics engineers should know aboutpilots and automation In Proceedings of the 14th AlANIEEE Digital Avionics Systems Conference (pp 252-257)Cambridge MA AIAA

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter N (1996) Cockpit automation From quantity to qual-ity from individual pilots to multiple agents In R Para-

HUMAN FACTORS

suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 267-280) HillsdaleNJ Erlbaum

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I 1 Deaton J E amp Parasuraman R (1993 October)Development of a scale to measure pilot attitudes to cockpitautomation Paper presented at the Annual Meeting of theHuman Factors Society Seattle WA

Singh I 1 Molloy R amp Parasuraman R (l993a) Automa-tion-induced complacency Development of the compla-cency-potential rating scale International Journal of Avia-tion Psychology 3 111-121

Singh I 1 Molloy R amp Parasuraman R (l993b) Individualdifferences in monitoring failures of automation Journal ofGeneral Psychology 120 357-373

Singh I 1 Molloy R amp Parasuraman R (1997) Automation-related monitoring inefficiency The role of display loca-tion International Journal of Human Computer Studies 4617-30

Smith M J amp Carayon P (1995) New technology automa-tion and work organization International Journal of Hu-man Factors in ManufacturingS 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317

HUMAN USE OF AUTOMATION

Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum

June 1997-253

Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a PhD in psychology from the Uni-versity of Aston Birmingham England in 1976 He is a pro-fessor of psychology and director of the Cognitive ScienceLaboratory at the Catholic University of America WashingtonDC

Victor Riley received a PhD in experimental psychology fromthe University of Minnesota in 1994 He is a senior researchscientist at Honeywell Technology Center Minneapolis Min-nesota

Date received June 17 1996Date accepted November 11 1996

HUMAN USE OF AUTOMATION

automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated solutions have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operating the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
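
The designer-error point can be sketched in a few lines of code. The following is a minimal, hypothetical illustration in Python (the function and signal names are ours and are not drawn from any actual avionics implementation): the protection logic is only as good as the sensor it trusts, so a failed weight-on-wheels signal withholds the stopping devices exactly when they are needed.

```python
# Hypothetical ground-deployment interlock: a protection against operator error
# that becomes a vulnerability to designer error when its sensor fails.

def may_deploy_ground_devices(weight_on_wheels: bool, pilot_command: bool) -> bool:
    """Honor the pilot's deploy command only when the gear are sensed on the ground."""
    return pilot_command and weight_on_wheels

# Normal landing: the sensor reads True, so the pilot's command is honored.
assert may_deploy_ground_devices(weight_on_wheels=True, pilot_command=True)

# Failed sensor after touchdown: the command is rejected even though the
# aircraft is on the runway and the devices are needed.
assert not may_deploy_ground_devices(weight_on_wheels=False, pilot_command=True)
```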

Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.

Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement: it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.


CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.

2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.

3. Operators should be taught to make rational automation use decisions.

4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected (a worked example follows this list).


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.
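
The base-rate point can be made concrete with a short calculation. The sketch below is a minimal illustration in Python with numbers of our own choosing (not values drawn from any system cited here): it computes the posterior probability that an alarm is genuine and shows that, when the hazardous condition is rare, even a sensitive and reasonably specific alerting system produces mostly false alarms, which is precisely the condition that breeds disuse.

```python
# Minimal sketch of the base-rate calculation for an alerting system
# (illustrative, hypothetical numbers; not taken from any system cited in this paper).

def posterior_true_alarm(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(hazard | alarm) by Bayes' rule."""
    p_alarm = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
    return hit_rate * base_rate / p_alarm

# A detector with a 95% hit rate and a 5% false alarm rate:
print(posterior_true_alarm(base_rate=0.01, hit_rate=0.95, false_alarm_rate=0.05))  # ~0.16
print(posterior_true_alarm(base_rate=0.20, hit_rate=0.95, false_alarm_rate=0.05))  # ~0.83
# When the hazard is present on only 1% of opportunities, roughly 5 of every 6
# alarms are false despite the detector's nominal accuracy; the same detector
# is far more credible when the hazard is common.
```

Likelihood alarm displays address the same problem on the display side: rather than presenting a binary alarm as the final authority, they convey graded evidence about how probable the dangerous condition is (Sorkin, Kantowitz, & Kantowitz, 1988).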

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for, and costs of, designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers, so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.

Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.

Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.

Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.

Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.

Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.

Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.

Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.

Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.

Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.

Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.

Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.

Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.

Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.

Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.

Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.

Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.

Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.

Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7, 717-728.

Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.

Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.

Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.

Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.

Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.

Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.

Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.

Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.

Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.

IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused. Human Factors, 35, 221-242.

Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.

Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.

Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-21 aircraft in Warsaw on 14 September 1993. Warsaw: Author.

May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.

McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.

McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.

Ministere de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquete sur l'Accident survenu le 20 Janvier 1992 pres du Mont Sainte Odile a l'Airbus A320 Immatricule F-GGED Exploite par la Compagnie Air Inter. Paris: Author.

Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.

Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.

Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.

Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.

Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.

Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.

National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.

National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.

National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94-07). Washington, DC: Author.

National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.

National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB-MAR-97-01). Washington, DC: Author.

Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.

Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.

Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.

Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.

Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3, 1-23.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.

Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.

Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.

Poset, R. K. (1992). No brakes, no reversers. Air Line Pilot, 61(4), 28-29.

Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology (glass cockpit) transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a PhD in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, DC.

Victor Riley received a PhD in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996


McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministere de IEquipement des Transports et du Tourisme(1993) Rapport de la Commission dEnquete sur IAccidentsurvenu Ie 20 Janvier 1992 pres du Mont Saite Odile a lAir-bus A320 Immatricule F-GGED Exploite par lay CompagnieAir Inter Paris Author

June 1997-251

Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board (1973) Eastern AirlinesL-1011 Miami Florida 20 December 972 (Rep NTSB-AAR-73-14) Washington DC Author

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board (l997a) Collision ofWashington Metropolitan Transit Authority Train TIll withstanding train at Shady Grove Station near GaithersburgMD January 6 1996 (Rep NTSB-ATL-96-MR008) Wash-ington DC Author

National Transportation Safety Board (1997b) Grounding ofthe Panamanian passenger ship Royal Majesty on Rose andCrown shoal near Nantucket Massachusetts June 10 1995(Rep NTSBMAR-97-01) Washington DC Author

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman R (1993) Effects of adaptive function allocationon human performance In D J Garland amp J A Wise(Eds) Human factors and advanced aviation technologies

252-June 1997

(pp 147-157) Daytona Beach FL Embry-Riddle Aeronau-tical University Press

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman R Molloy R amp Singh I 1 (1993) Perfor-mance consequences of automation-induced compla-cency International Journal of Aviation Psychology 31-23

Parasurarnan R amp Mouloua M (Eds) (1996) Automationand human performance Theory and applications HillsdaleNJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics (1995) Freeflight implementation (RTCA Task Force 3 Rep) Washing-ton DC Author

Rasmussen J (1986) Information processing and human-machine interaction Amsterdam North-Holland

Reason J T (1990) Human error Cambridge England Cam-bridge University Press

Riley V (1989) A general model of mixed-initiative human-machine systems In Proceedings of the Human Factors So-ciety 33rd Annual Meeting (pp 124-128) Santa Monica CAHuman Factors and Ergonomics Society

Riley V (I 994a) Human use of automation Unpublished doc-toral dissertation University of Minnesota

Riley V (I 994b) A theory of operator reliance on automationIn M Mouloua amp R Parasuraman (Eds) Human perfor-mance in automated systems Recent research and trends (pp8-14) Hillsdale NJ Erlbaum

Riley V (1995) What avionics engineers should know aboutpilots and automation In Proceedings of the 14th AlANIEEE Digital Avionics Systems Conference (pp 252-257)Cambridge MA AIAA

Riley V (1996) Operator reliance on automation Theory anddata In R Parasuraman amp M Mouloua (Eds) Automationand human performance Theory and applications (pp 19-35) Hillsdale NJ Erlbaum

Riley V Lyall B amp Wiener E (1993) Analytic methods forflight-deck automation design and evaluation phase two re-port Pilot use of automation (Tech Rep) Minneapolis MNHoneywell Technology Center

Rouse W B (1988) Adaptive aiding for humancomputer con-troL Human Factors 30 431-438

Sarter N (1996) Cockpit automation From quantity to qual-ity from individual pilots to multiple agents In R Para-

HUMAN FACTORS

suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 267-280) HillsdaleNJ Erlbaum

Sarter N amp Woods D D (1991) Situation awareness A criti-cal but ill-defined phenomenon International Journal ofAviation Psychology 1 45-57

Sarter N amp Woods D D (1994) Pilot interaction with cockpitautomation II An experimental study of pilots model andawareness of the flight management system InternationalJournal of Aviation Psychology 4 1-28

Sarter N amp Woods D D (1995) Strong silent and out-of-the-loop Properties of advanced cockpit automation andtheir impact on human-automation interaction (Tech RepCSEL 95-TR-0I) Columbus Ohio State University Cogni-tive Systems Engineering Laboratory

Satchell P (1993) Cockpit monitoring and alerting systems AI-dershot England Ashgate

Scerbo M S (1996) Theoretical perspectives on adaptive au-tomation In R Parasuraman amp M Mouloua (Eds) Auto-mation and human performance Theory and applications(pp 37-63) Hillsdale NJ Erlbaum

Schank R (1984) The cognitive computer Reading MA Addi-son-Wesley

Sheridan T (1980) Computer control and human alienationTechnology Review 10 61-73

Sheridan T (1992) Telerobotics automation and supervisorycontrol Cambridge MIT Press

Singh I 1 Deaton J E amp Parasuraman R (1993 October)Development of a scale to measure pilot attitudes to cockpitautomation Paper presented at the Annual Meeting of theHuman Factors Society Seattle WA

Singh I 1 Molloy R amp Parasuraman R (l993a) Automa-tion-induced complacency Development of the compla-cency-potential rating scale International Journal of Avia-tion Psychology 3 111-121

Singh I 1 Molloy R amp Parasuraman R (l993b) Individualdifferences in monitoring failures of automation Journal ofGeneral Psychology 120 357-373

Singh I 1 Molloy R amp Parasuraman R (1997) Automation-related monitoring inefficiency The role of display loca-tion International Journal of Human Computer Studies 4617-30

Smith M J amp Carayon P (1995) New technology automa-tion and work organization International Journal of Hu-man Factors in ManufacturingS 95-116

Sorkin R D (1988) Why are people turning off our alarmsJournal of the Acoustical Society of America 84 1107-1108

Sorkin R D Kantowitz B H amp Kantowitz S C (1988)Likelihood alarm displays Human Factors 30 445-459

Spitzer C R (1987) Digital avionics systems Englewood CliffsNJ Prentice-Hall

Sun H amp Riis J O (1994) Organizational technical strate-gic and managerial issues along the implementation pro-cess of advanced manufacturing technologies-A generalframework and implementation International Journal ofHuman Factors in Manufacturing 4 23-36

Swets J A (1992) The science of choosing the right decisionthreshold in high-stakes diagnostics American Psychologist47 522-532

Thompson J M (1994) Medical decision making and automa-tion In M Mouloua amp R Parasuraman (Eds) Human per-formance in automated systems Current research and trends(pp 68-72) Hillsdale NJ Erlbaum

Tversky A amp Kahneman D (1984) Judgment under uncer-tainty Heuristics and biases Science 185 1124-1131

Vicente K J amp Rasmussen J (1990) The ecology of human-machine systems II Mediating direct perception in com-plex work domains Ecological Psychology 2 207-249

Weick K E (1988) Enacted sensemaking in crisis situationsJournal of Management Studies 25 305-317

HUMAN USE OF AUTOMATION

Whitfield D Ball R G amp Ord G (1980) Some human fac-tors aspects of computer-aiding concepts for air traffic con-trollers Human Factors 22 569-580

Wickens C D (1992) Engineering psychology and human per-formance (2nd ed) New York HarperCollins

Wickens C D (1994) Designing for situation awareness andtrust in automation In Proceedings of the IFAC Conferenceon Integrated Systems Engineering (pp 174-179) Baden-Baden Germany International Federation on AutomaticControl

Wiener E L (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Wiener E L (1988) Cockpit automation In E L Wiener ampD C Nagel (Eds) Human factors in aviation (pp 433-461)San Diego Academic

Wiener E L (1989) Human factors of advanced technology(glass cockpit) transport aircraft (Rep 177528) MoffettField CA NASAAmes Research Center

Wiener E L amp Curry R E (1980) Flight-deck automationPromises and problems Ergonomics 23 995-1011

Will R P (1991) True and false dependence on technologyEvaluation with an expert system Computers in HumanBehavior 7 171-183

Woods D D (1996) Decomposing automation Apparent sim-plicity real complexity In R Parasuraman amp M Mouloua(Eds) Automation and human performance Theory and ap-plications (pp 3-17) Hillsdale NJ Erlbaum

June 1997-253

Woods D D Wise J amp Hanes L (1981) An evaluation ofnuclear power plant safety parameter display systems InProceedings of the Human Factors Society 25th Annual Meet-ing (pp 110-114) Santa Monica CA Human Factors andErgonomics Society

Yeh Y-Y amp Wickens C D (1988) The dissociation of subjec-tive measures of mental workload and performance Hu-man Factors 30 111-120

Zuboff S (1988) In the age of the smart machine The future ofwork and power New York Basic Books

Raja Parasuraman received a PhD in psychology from the Uni-versity of Aston Birmingham England in 1976 He is a pro-fessor of psychology and director of the Cognitive ScienceLaboratory at the Catholic University of America WashingtonDC

Victor Riley received a PhD in experimental psychology fromthe University of Minnesota in 1994 He is a senior researchscientist at Honeywell Technology Center Minneapolis Min-nesota

Date received June 17 1996Date accepted November 11 1996

HUMAN USE OF AUTOMATION

operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.

2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.

3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.

4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.

2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected (a worked example follows these recommendations).


3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible (likelihood alarms) rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition (a brief illustrative sketch follows these recommendations).
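To make the role of the base rate concrete, consider a worked example; the numbers are purely illustrative and are not drawn from any particular system. By Bayes' rule, the probability that the hazardous condition H is actually present when the alarm A sounds is

$$P(H \mid A) = \frac{P(A \mid H)\,P(H)}{P(A \mid H)\,P(H) + P(A \mid \bar{H})\,P(\bar{H})}.$$

With a hazard base rate of P(H) = .001, a hit rate of P(A | H) = .99, and a false alarm rate of P(A | H-absent) = .05,

$$P(H \mid A) = \frac{(.99)(.001)}{(.99)(.001) + (.05)(.999)} \approx .02,$$

so only about 1 alarm in 50 signals a real hazard, even though the alarm itself is both sensitive and reasonably specific. It is this posterior probability, not the decision threshold alone, that determines whether operators come to rely on or ignore the alarm.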
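A likelihood alarm can be thought of as reporting this posterior probability in graded form rather than as a single all-or-none warning. The following minimal sketch, written in Python, illustrates the idea; the prior, the likelihood ratios, and the alert cut-points are hypothetical and are not taken from any fielded system.

# Illustrative sketch of a graded "likelihood alarm": instead of a single
# binary warning, the display reports how likely the hazardous condition
# is, given the evidence and its base rate. All numbers are hypothetical.

def posterior_probability(likelihood_ratio, base_rate):
    """Bayes' rule in odds form: combine the evidence likelihood ratio
    with the hazard base rate to get P(hazard | evidence)."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

def likelihood_alert(likelihood_ratio, base_rate=0.001):
    """Map the posterior probability onto graded alert levels
    (the cut-points below are arbitrary illustrations)."""
    p = posterior_probability(likelihood_ratio, base_rate)
    if p >= 0.50:
        return "WARNING: hazard probable (p = %.2f)" % p
    elif p >= 0.05:
        return "CAUTION: hazard possible (p = %.2f)" % p
    else:
        return "Advisory: hazard unlikely (p = %.3f)" % p

# Example: weak, moderate, and strong evidence against a rare hazard.
for lr in (5.0, 100.0, 2000.0):
    print(lr, likelihood_alert(lr))

The design intent is that the operator sees the strength of the evidence rather than only the fact that a threshold has been crossed, and so is less likely to treat the alarm as the final authority on whether a dangerous condition exists.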

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities rather than as a by-product of how the automation is implemented.

2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.

3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.

4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.

Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996

2SD-June 1997

differences is a necessary step toward informingthe expectations of designers and managers sothat operators are provided with automation thatbetter meets their needs and are given the author-ity and decision-making tools required to use theautomation to its best effect

Second individual differences in automationuse are ubiquitous Human use of automation iscomplex subject to a wide range of influencesand capable of exhibiting a wide range of pat-terns and characteristics That very complexitymakes the study of automation a large undertak-ing but the growing importance of automation insystems makes such study increasingly impera-tive Better understanding of why automation isused misused disused and abused will help fu-ture designers managers and operators of sys-tems avoid many of the errors that have plaguedthose of the past and present Application of thisknowledge can lead to improved systems the de-velopment of effective training curricula and theformulation of judicious policies and proceduresinvolving automation use

ACKNOWLEDGMENTS

We thank Kathy Abbott Peter Hancock Kathy Mosier BillHowell and two anonymous reviewers for helpful commentsOna previous draft of this paper This paper also benefited fromdiscussions with past and present members of the Human Fac-tors of Automation group at the Cognitive Science Laboratoryincluding Evan Byrne John Deaton Jackie Duley Scott Gal-ster Brian Hilburn Anthony Masalonis Robert Molloy Mus-tapha Mouloua and Indramani Singh This research was sup-ported by research grant NAG-I-1296 from the NationalAeronautics and Space Administration Langley Research Cen-ter Langley Virginia to the Catholic University of America(RP) and by contract DTFAOI-91-C-0039 from the FederalAviation Administration to Honeywell Technology Center(VR) The views presented in this paper are those of the au-thors and not necessarily representative of these organizations

REFERENCESAbbott T S (1990) A simulation evaluation of the engine moni-

toring and control display (NASA Tech Memorandum1960) Hampton VANASA Langley Research Center

Air Transport Association (1989) National plan to enhanceaviation safety through human factors improvements Wash-ington DC Author

Bainbridge 1 (1983) Ironies of automation Automatica 19775-779

Bennett K amp Flach J M (1992) Graphical displays Implica-tions for divided attention focused attention and problemsolving Human Factors 34 513-533

Bessant J Levy P Ley C Smith S amp Tranfield D (1992)Organization design for factory 2000 International Journalof Human Factors in Manufacturing 2 95-125

Billings C E (1991) Human-centered aircraft automation A

HUMAN FACTORS

concept and guidelines (NASATech Memorandum 103885)Moffett Field CA NASA Ames Research Center

Billings C E amp Woods D D (1994) Concerns about adaptiveautomation in aviation systems In M Mouloua amp R Para-suraman (Eds) Human performance in automated systemsCurrent research and trends (pp 264-269) Hillsdale NJErlbaum

Bowers C A Oser R J Salas E amp Cannon-Bowers J A(1996) Team performance in automated systems In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 243-263)Hillsdale NJ Erlbaum

Byrne E A amp Parasuraman R (1996) Psychophysiology andadaptive automation Biological Psychology 42 249-268

Casey S (1993) Set phasers on stun Santa Barbara CA Ae-gean

Casner S (1994) Understanding the determinants of problem-solving behavior in a complex environment Human Fac-tors 36 580-596

Chambers N amp Nagel D C (1985) Pilots of the future Hu-man or computer Communications of the Association forComputing Machinery 28 1187-1199

Corwin W H Funk H Levitan 1 amp Bloomfield J (1993)Flight crew information requirements (Contractor RepDTFA- 91-C-00040) Washington DC Federal Aviation Ad-ministration

Donald M (1991) Origins of the modem mind Three stages inthe evolution of culture and cognition Cambridge MA Har-vard University Press

Duley J A Westerman S Molloy R amp Parasuraman R (inpress) Effects of display superimposition on monitoring ofautomated tasks In Proceedings of the 9th InternationalSymposium on Aviation Psychology Columbus Ohio StateUniversity

Edwards E (1977) Automation in civil transport aircraft Ap-plied Ergonomics 4 194-198

Endsley M (1996) Automation and situation awareness In RParasuraman amp M Mouloua (Eds) Automation and hu-man performance Theory and applications (pp 163-181)Hillsdale NJ Erlbaum

Erzberger H (1992) CTAS Computer intelligence for air trafficcontrol in the terminal area (NASA Tech Memorandum103959) Moffett Field CA NASA Ames Research Center

Farber E amp Paley M (1993 April) Using freeway traffic datato estimate the effectiveness of rear-end collision countermea-sures Paper presented at the Third Annual IVHS AmericaMeeting Washington DC

Federal Aviation Administration (1990) The national plan foraviation human factors Washington DC Author

Fitts P M (1951) Human engineering for an effective air navi-gation and traffic control system Washington DC NationalResearch Council

Funk K Lyall B amp Riley V (1995) Perceived human factorsproblems of flightdeck automation (Phase I Final Rep) Cor-vallis Oregon State University

Gerwin D amp Leung T K (1986) The organizational impactsof flexible manufacturing systems In T Lupton (Ed) Hu-man factors Man machines and new technology (pp 157-170) New York Springer-Verlag

Getty D J Swets J A Pickett R M amp Gounthier D (1995)System operator response to warnings of danger A labora-tory investigation of the effects of the predictive value of awarning on human response time Journal of ExperimentalPsychology Applied 1 19-33

Gonzalez R C amp Howington 1 C (1977) Machine recogni-tion of abnormal behavior in nuclear reactors IEEE Trans-actions on Systems Man and Cybernetics SMC-7 717-728

Grabowski M amp Wallace W A (t993) An expert system formaritime pilots Its design and assessment using gamingManagement Science 39 1506-1520

HUMAN USE OF AUTOMATION

Hancock P A (1996) Teleology for technology In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 461-497) HillsdaleNJ Erlbaum

Hancock P A amp Chignell M H (1989) Intelligent interfacesAmsterdam Elsevier

Hancock P A amp Parasuraman R (1992) Human factors andsafety in the design of intelligent vehicle-highway systemsJournal of Safety Research 23 181-198

Hancock P A Parasuraman R amp Byrne E A (1996) Driver-centered issues in advanced automation for motor vehiclesIn R Parasuraman amp M Mouloua (Eds) Automation andhuman performance Theory and applications (pp 337-364)Hillsdale NJ Erlbaum

Harris W Hancock P A amp Arthur E (1993) The effect oftask load projection on automation use performance andworkload In Proceedings of the 7th International Sympo-sium on Aviation psychology (pp 890A-890F) ColumbusOhio State University

Helmreich R L (1984) Cockpit management attitudes Hu-man Factors 26 583-589

Hilburn B Jorna P G A M amp Parasuraman R (1995) Theeffect of advanced ATC automation on mental workloadand monitoring performance An empirical investigation inDutch airspace In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 1382-1387) ColumbusOhio State University

Hopkin V D (1995) Human factors in air-traffic control Lon-don Taylor amp Francis

IVHS America (1992) Strategic plan for IVHS in the UnitedStates Washington DC Author

Kirlik A (1993) Modeling strategic behavior in human-automation interaction Why an aid can (and should) gounused Human Factors 35 221-242

Kuchar J K amp Hansman R J (1995) A probabilistic meth-odology for the evaluation of alerting system performanceIn Proceedings of the IFACIFlPIFORSIEA SymposiumCambridge MA

Langer E (1989) Mindfulness Reading MA Addison-WesleyLee J D amp Moray N (1992) Trust control strategies and

allocation of function in human-machine systems Ergo-nomics 35 1243-1270

Lee J D amp Moray N (1994) Trust self-confidence and op-erators adaptation to automation International Journal ofHuman-Computer Studies 40 153-184

Lewandowsky S amp Nikolic D (1995 April) A connectionistapproach to modeling the effects of automation Paper pre-sented at the 8th International Symposium on Aviation Psy-chology Columbus OH

Main Commission Aircraft Accident Investigation-WARSAW(1994) Report on the accident to Airbus A320-21 aircraft inWarsaw on 14 September 1993 Warsaw Author

May P Molloy R amp Parasuraman R (1993 October) Effectsof automation reliability and failure rate on monitoring per-formance in a multitask environment Paper presented at theAnnual Meeting of the Human Factors Society Seattle WA

McClellan J M (1994 June) Can you trust your autopilotFlying 76-83

McClumpha A amp James M (1994 June) Understanding au-tomated aircraft In M Mouloua amp R Parasuraman (Eds)Human performance in automated systems Recent researchand trends (pp 314-319) Hillsdale NJ Erlbaum

McDaniel J W (1988) Rules for fighter cockpit automation InProceedings of the IEEE National Aerospace and ElectronicsConference (pp 831-838) New York IEEE

Ministere de IEquipement des Transports et du Tourisme(1993) Rapport de la Commission dEnquete sur IAccidentsurvenu Ie 20 Janvier 1992 pres du Mont Saite Odile a lAir-bus A320 Immatricule F-GGED Exploite par lay CompagnieAir Inter Paris Author

June 1997-251

Molloy R Deaton J E amp Parasuraman R (1995) Monitor-ing performance with the EMACS display in an automatedenvironment In Proceedings of the 8th International Sym-posium on Aviation Psychology (pp 68-73) Columbus OH

Molloy R amp Parasuraman R (1994) Automation-inducedmonitoring inefficiency The role of display integration andredundant color coding In M Mouloua amp R Parasuraman(Eds) Human performance in automated systems Currentresearch and trends (pp 224-228) Hillsdale NJ Erlbaum

Molloy R amp Parasuraman R (1996) Monitoring an auto-mated system for a single failure Vigilance and task com-plexity effects Human Factors 38 311-322

Mosier K Heers S Skitka L amp Burdick M (1996 March)Patterns in the use of cockpit automation Paper presented atthe 2nd Automation Technology and Human PerformanceConference Cocoa Beach FL

Mosier K amp Skitka L J (1996) Human decision makers andautomated decision aids Made for each other In R Para-suraman amp M Mouloua (Eds) Automation and human per-formance Theory and applications (pp 201-220) HillsdaleNJ Erlbaum

Mosier K Skitka L Burdick M amp Heers S (1996) Auto-mation bias accountability and verification behaviors InProceedings of the Human Factors and Ergonomics Society40th Annual Meeting (pp 204-208) Santa Monica CA Hu-man Factors and Ergonomics Society

Mosier K Skitka L J amp Korte K J (1994) Cognitive andsocial psychological issues in flight crewautomation inter-action In M Mouloua amp R Parasuraman (Eds) Humanperformance in automated systems Current research andtrends (pp 191-197) Hillsdale NJ Erlbaum

Mouloua M amp Parasuraman R (1994) Human performancein automated systems Recent research and trends HillsdaleNJ Erlbaum

Muir B M (1988) Trust between humans and machines andthe design of decision aids In E Hollnagel G Mancini ampD D Woods (Eds) Cognitive engineering in complex dy-namic worlds (pp 71-83) London Academic

National Transportation Safety Board (1973) Eastern AirlinesL-1011 Miami Florida 20 December 972 (Rep NTSB-AAR-73-14) Washington DC Author

National Transportation Safety Board (1986) China AirlinesB-747-SP 300 NM northwest of San Francisco 19 February1985 (Rep NTSB-AAR-86-03) Washington DC Author

National Transportation Safety Board (1994) Aircraft accidentreport Stall and loss of control on final approach (RepNTSB-AAR-9407) Washington DC Author

National Transportation Safety Board (l997a) Collision ofWashington Metropolitan Transit Authority Train TIll withstanding train at Shady Grove Station near GaithersburgMD January 6 1996 (Rep NTSB-ATL-96-MR008) Wash-ington DC Author

National Transportation Safety Board (1997b) Grounding ofthe Panamanian passenger ship Royal Majesty on Rose andCrown shoal near Nantucket Massachusetts June 10 1995(Rep NTSBMAR-97-01) Washington DC Author

Nickerson R S (1995) Human interaction with computersand robots International Journal of Human Factors inManufacturing 5 5-27

Norman D (1990) The problem with automation Inappropri-ate feedback and interaction not over-automation Proceed-ings of the Royal Society of London B237 585-593

Palmer E amp Degani A (1991) Electronic checklists Evalua-tion of two levels of automation In Proceedings of the 6thInternational Symposium on Aviation Psychology (pp 178-183) Columbus Ohio State University

Parasuraman R (1993) Effects of adaptive function allocationon human performance In D J Garland amp J A Wise(Eds) Human factors and advanced aviation technologies

252-June 1997

(pp 147-157) Daytona Beach FL Embry-Riddle Aeronau-tical University Press

Parasuraman R Bahri T Deaton J Morrison J amp BarnesM (1992) Theory and design of adaptive automation in avia-tion systems (Progress Rep NAWCADWAR-92033-60)Warminster PA Naval Air Warfare Center Aircraft Divi-sion

Parasuraman R Hancock P A amp Olofinboba O (1997)Alarm effectiveness in driver-centered collision-warningsystems Ergonomics 40 390-399

Parasuraman R Molloy R amp Singh I 1 (1993) Perfor-mance consequences of automation-induced compla-cency International Journal of Aviation Psychology 31-23

Parasurarnan R amp Mouloua M (Eds) (1996) Automationand human performance Theory and applications HillsdaleNJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1994) Monitor-ing automation failures in human-machine systems In MMouloua amp R Parasuraman (Eds) Human performance inautomated systems Current research and trends (pp 45-49)Hillsdale NJ Erlbaum

Parasuraman R Mouloua M amp Molloy R (1996) Effects ofadaptive task allocation on monitoring of automated sys-tems Human Factors 38 665-679

Parasuraman R Mouloua M Molloy R amp Hilburn B(1996) Monitoring automated systems In R Parasuramanamp M Mouloua (Eds) Automation and human performanceTheory and applications (pp 91-115) Hillsdale NJErlbaum

Phillips D (1995 August II) System failure cited in shipgrounding Washington Post p A7

Planzer N amp Hoffman M A (1995) Advancing free flightthrough human factors (Workshop Rep) Washington DCFAA

Poset R K (1992) No brakes no reversers Air Line Pilot 61(4)28-29

Radio and Technical Committee on Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.

Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.

Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.

Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.

Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.

Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.

Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.

Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.

Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N., & Woods, D. D. (1995). Strong, silent, and out-of-the-loop: Properties of advanced cockpit automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.

Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.

Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.

Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.

Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.

Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced complacency: Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.

Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.

Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.

Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.

Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.

Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.

Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating direct perception in complex work domains. Ecological Psychology, 2, 207-249.

Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.

Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.

Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.

Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.

Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.

Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996
