

Reasoning about Knowledge in Human-Automation Systems (Preliminary Report)

Y. Moses1 and M.K. Shamo2

Technion – Israel Institute of Technology, Haifa, Israel 32000

In a supervisory control system the human agent’s knowledge of past, current, and future system behavior is critical for system performance. Being able to reason about that knowledge in a precise and structured manner is, therefore, central to effective system design. In this paper we introduce the application of a well-established formal approach to reasoning about knowledge to the modeling and analysis of complex human-automation systems. An intuitive notion of knowledge in human-automation systems is sketched and then cast as a formal model. We present a case study in which the approach is used to model and reason about a familiar problem from the aviation human-automation systems literature; the results of our analysis provide evidence for the validity and value of reasoning about complex systems in terms of the knowledge of the system’s agents. To conclude, we discuss planned directions that will extend this new approach, and note several systems in the aviation and human-robot team domains that are part of our research program.

Nomenclature

p, q = primitive propositions
ϕ, ψ = arbitrary formulas
⊨ = the ‘satisfies’ relation
S = the system
s, t = states in the system S
G = the set of global states
g = a global state in G
L = the set of local states
l = a local state in L
m = discrete point in time
τ = transition function
R = set of runs
r = a single run in R
h = human agent
a = automation agent
e = environment agent
K = modal knowledge operator
Θ = epistemic setup
Φ = set of application-dependent primitive propositions
π = interpretation associating truth assignments to propositions in Φ
fh = specifies agent h’s explicit knowledge at a local state
ah = specifies agent h’s ground knowledge at a local state
Dh = set of knowledge implications
E = epistemic system
○ = the temporal operator ‘next’
◇ = the temporal operator ‘eventually’
□ = the temporal operator ‘henceforth’
P = protocol
r(m) = the global state in run r at time point m
ri(m) = agent i’s local state in run r at time point m

1 Professor, Department of Electrical Engineering, Technion City, Haifa
2 Graduate Student, Department of Industrial Engineering and Management, Technion City, Haifa, and AIAA Member

AIAA Infotech@Aerospace 2007 Conference and Exhibit, 7 - 10 May 2007, Rohnert Park, California

AIAA 2007-2709

Copyright © 2007 by Yoram Moses and Marcia Shamo. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

I Introduction

A human operator’s knowledge regarding a complex system’s past, current, and future behavior is elemental to system performance. Indeed, issues of the existence, content, and validity of operator knowledge in a complex human-automation system underlie many major threads of research in the human-machine systems community. Consider, for instance, broad and active research domains such as situation awareness (briefly, an operator’s knowledge of the relevant elements in the environment), mode awareness (operator knowledge of a system’s current operational state), and mental models (knowledge and beliefs regarding a system’s components, possible behaviors, and interdependencies). In a basic sense, all these domains are concerned with aspects of the operator’s knowledge and aim to define both theoretical frameworks and evaluative criteria for determining whether that knowledge is satisfactory for system performance.

Since human operator knowledge is a fundamental element of system performance, we argue that it is important to reason about that knowledge, and its role in controlling system performance, in a precise and structured manner. To be useful, a reasoning approach must satisfy a number of modeling and analysis requirements. In particular, knowledge in a system must be a well-defined entity or resource that can be directly represented, manipulated, and analyzed. Properties of knowledge unique to human agents must be expressible. Since our ultimate concern is performance of the complete system, and since non-human agents play equally important albeit different roles, we will need to be able to capture and reason about the 'knowledge' of the non-human agents in the system, as well. Finally, since systems are dynamic, it will be necessary to represent the evolution of knowledge over time, and how that knowledge both influences and is influenced by system behavior.

In this paper we introduce the use of a formal approach to reasoning about knowledge in human-automation systems that is based on a model of knowledge developed by Halpern and Moses and their colleagues.1-3 Their methodology combines the formal rigor of epistemic logic with intuitive and direct notions of knowledge and action in multi-agent systems, and thus provides a means of reasoning about systems at the level of the agents' knowledge. While existing applications of the approach focus primarily on systems in which all agents are non-human (e.g., communication protocols for distributed computer systems,1, 4 robot motion planning,5 adding notions of knowledge and communication to discrete event control systems6, 7), we argue that this formalism is equally valuable for reasoning about knowledge in systems in which one or more agents are human. Indeed, as increasingly complex human-automation and human-robot systems are developed, the need for tools that support the design and evaluation of these systems from the perspective of the agents' knowledge becomes an imperative. Consider, for instance, the knowledge possessed by the human and automation agents in advanced flight deck information systems, in systems of human supervisory controllers and multiple autonomous vehicles, or in robot-assisted search and rescue teams. Sophisticated tools are required to effectively analyze the complex and subtle interactions between what the human and non-human agents know in these and similar systems.

The value of reasoning about knowledge in the analysis of socio-technical systems has already been noted in the literature.8, 9 In the papers cited, the authors discuss an approach combining knowledge, timed automata and Activity Theory, and propose a case study of an aviation accident. From the brief description presented, it appears that their work focuses on the temporal aspect and does not model knowledge explicitly. Our approach makes use of a fine-grained analysis of knowledge based on a careful model of the different entities' local states, along the lines of Fagin et al.1 This enables us to establish rigorous claims regarding the role of knowledge, and the lack of knowledge, in human-automation interaction and supervisory control of complex systems. Our contribution is thus to propose a cohesive formal framework for the modeling and analysis of knowledge in these systems.


In this article, we present the initial theoretical foundations of our approach, and sketch a case study. The article is structured as follows. Within the domain of complex human-automation systems in which the human’s role is that of supervisory controller, we first define more precisely a notion of human knowledge in a complex system. We also put forth basic criteria for what it means for a human operator's knowledge to be satisfactory for system performance. We then describe the main concepts and elements of the knowledge formalism that are useful for modeling human-automation systems. Next, we present a case study in which the knowledge formalism is used to model and reason about the altitude deviation problem, a familiar and well-studied problem in the aviation human-automation systems literature.10-14 We provide a brief overview of the relevant aspects of the altitude deviation scenario, and construct a knowledge-based model of the problem. We then use the model to evaluate the knowledge in the system against formally defined criteria. To conclude, we present our initial views regarding the potential value of the approach for reasoning about knowledge in a variety of human-automation and human-robot systems, and outline a number of important research directions that we are currently pursuing. Finally, we note several systems in the aviation and human-robot team domains that will serve as rigorous test beds for validation of our approach.

II Human knowledge in complex systems

A. Knowledge in supervisory control

In the complex systems that exist today, the human's role is normally that of supervisory controller – the agent in the system that is charged with monitoring and directing the performance of multiple interacting non-human agents whose actions may be semi- or nearly fully autonomous. Imagine the tasks of the industrial process control operator overseeing multiple sub-systems within a plant or the tasks of the subway train control room operator monitoring and coordinating hundreds of simultaneously moving trains.

To function as a supervisory controller, the human agent must be able to command the system in accordance with defined specifications, monitor the system’s behavior to ensure that it performs as required, and identify and correct anomalous system behaviors.15, 16 While complete knowledge of all aspects of system component behaviors would of course immediately overwhelm the human operator, the need for specific knowledge of system functions and behaviors is an obvious requirement of supervisory control. It is, in fact, straightforward to interpret these supervisory control tasks in terms of the knowledge required. In order to perform effectively, we argue that the human agent needs

• knowledge of the rules defined by the system specifications;
• knowledge of the current state of the system.

In order to more precisely reason about the extent to which a human agent’s knowledge satisfies these requirements in a given system, it is useful to decompose that knowledge into knowledge types that appear to be relevant for supervisory control. In the next section we propose a classification of knowledge types. While distinctions between various types of knowledge have been put forth in numerous research domains including human factors,17 cognitive sciences,18, 19 philosophy,20 and artificial intelligence,1, 21-23 we cast the notion here within the context of supervisory control and in a manner that will allow us to expressively and formally capture these knowledge types in our framework.

B. Types of human knowledge

We identify two types of human knowledge that appear to have distinct purposes in supervisory control.

1. Explicit knowledge

Explicit knowledge is the knowledge available via the human-automation interfaces at any given point in time.3 A human driver explicitly knows, for instance, that the current speed of the car she is driving is 40 miles per hour since that value is displayed on the speedometer. Explicit knowledge is needed for supervisory control tasks such as monitoring system status, and synchronizing the tasks of interacting or otherwise interdependent system components (e.g. not opening a tank cover until the internal pressure of the tank has dropped below some defined level).

3 The relationship between displayed information and useful knowledge is dependent on non-trivial factors such as the agent correctly perceiving the display and knowing its meaning, the compatibility of the display format to the task, and so forth, and is itself an important area of human factors research. We assume here that agents in the system are knowledgeable and well-behaved (will respond appropriately), and that the display type, quality, and information saliency are adequate for the task.


Explicit knowledge may sometimes imply a need for recall. An aural warning is explicit knowledge, for example, but must be actively remembered by the agent once the bell or siren has been silenced.

2. Mental model knowledge

The second type of knowledge required for human-automation system performance, mental model knowledge, is the 'knowledge-in-the-head' that the human agent possesses regarding the global behaviors and properties of the physical system with which he is interacting.24-26 Mental model knowledge is required both for reasoning about and interpreting current interface information as well as for future-oriented supervisory control tasks such as planning and scheduling, troubleshooting, and decision-making.15, 16, 27 In the mental model we also include what we shall call ground knowledge – essential domain-relevant knowledge that the human agent may be assumed to possess, such as basic logical and computational skills. We suggest that this knowledge is ‘automatic’ in nature and thus requires almost no effortful thinking or reasoning action.27, 28 For instance, it seems fair to assume that a human agent who is sufficiently skilled to act as supervisory controller of a complex system should be able to automatically determine which of two displayed integers is greater.

Mental model knowledge is normally grounded in fact (e.g. through training). However, since it evolves over time as a result of repeated interactions, mental model knowledge may grow to include some proportion of unproven (and perhaps incorrect) belief.29 For any typically-sized complex system the human operator’s mental model knowledge is clearly incomplete, as the number of potential system behaviors under all possible conditions is too large to be known.

The integration of these knowledge types defines the human agent's understanding of the current situation. Within the dynamic and often time-critical supervisory control domain, it seems natural to suggest that the reasoning actions must be immediate, or nearly so, as control behavior requiring long chains of inference or introspection would clearly not be conducive to effective system performance. For example, consider that the driver agent explicitly knows that the allowable speed is 25 mph (she just passed the speed limit sign), that her current actual speed is 40 mph as displayed on the speedometer, and the set of facts that comprise her mental model includes the proposition that an actual speed greater than an allowable speed may result in a ticket. Simple reasoning will then allow the human agent to deduce the (quite important) conclusion that she is currently in danger of a speeding ticket.

This classification of human knowledge into broad types is intentionally simplistic. Theories of human knowledge, definitions of human knowledge types, and the reasoning processes by which human knowledge is attained are areas of study that have intrigued scientists from many disciplines for centuries,30, 31 and that continue actively today.19, 32-36 The goal of this paper is not to add yet another theory to that arena, nor do we claim that our approach truly captures the structure of knowledge or the cognitive processes of reasoning in human agents. As noted above, our approach is more simply intended to enable an expressive (and formal) depiction of what must be known by a human agent in a specific role.

C. Properties of satisfactory knowledge

If our aim is to provide a practically useful methodology and tool set for the design and analysis of complex systems, we must provide formal metrics against which knowledge in systems can be rigorously evaluated. We propose that at a minimum, the human agent’s knowledge must be adequate and valid. If these properties hold, we shall say that the human agent's knowledge is satisfactory.

1. Adequacy

Though the human agent's knowledge of a complex system's possible behavior is incomplete by definition due to the size of the state space, we require that the operator's knowledge be adequate; it must guarantee her ability to distinguish between acceptable (i.e. in accordance with system specifications) and anomalous or 'illegal' behaviors or states. In other words, though the agent will not know, a priori, all the possible behaviors that a system may exhibit, she must always be able to determine whether the current behavior is 'good' or 'bad'. Of course the goodness of a current behavior or state may be a function of the states that precede it – the history of that state. For example, a state in which one aircraft engine is inoperable may be perfectly acceptable if in the previous state it was intentionally shut down; otherwise ‘engine out’ is a ‘bad’ state.

2. Validity

A second necessary property of human agent knowledge is validity; the operator must have no knowledge regarding system behavior that is false in some system state. In a formal sense this implies that human knowledge be consistent with the Knowledge Axiom, which states that only what is true can be known.1


In this section we have sketched a simple framework of human knowledge in supervisory control that includes a classification of knowledge types and basic criteria against which that knowledge can be assessed. Our goal is to provide a precise and rigorous means for identifying all the knowledge that the human agent has in the system, and for analyzing whether in fact that knowledge is satisfactory for supervisory control. We next provide a brief introduction to the basic elements of the knowledge formalism in order to recast our model.

III Reasoning about knowledge – an introduction

A. Elements of the framework

The knowledge formalism originally developed by Halpern and Moses and expanded on in numerous writings2, 5, 37-41 includes structures to represent the entities or agents in the system and the knowledge they can be said to possess, and a means for describing the behavior of the system as a function of knowledge and action. It thus provides a rigorous methodology for reasoning about knowledge in a dynamic multi-agent system, and for analyzing and proving properties of that knowledge. Here we focus on the elements of the formalism that are relevant to our present work, and include several extensions that will help to represent important aspects of human knowledge. The primary references for this part are Refs. 1-3; the related works listed above provide extensive detail on many additional aspects of the formalism.

1. Agents

The formalism considers agents and systems of multiple interacting agents, where an agent might be a robot, a processor, a human, a physical object, or any other entity of interest in the system. The approach includes the external environment as an agent, albeit a special type of agent whose behavior is not under the control of the other agents in the system. In general, the environment agent’s role is to represent all that is relevant to the system that is not captured by the other system agents. In a communications model, for example, the environment might represent the expected failure model of the system, the communication channels, etc.

2. Local states, global states, and agent knowledge

In common with other formal languages and modeling frameworks,42-45 the knowledge-based approach represents system behavior as a sequence of states and events or actions that cause transitions between states. In the knowledge formalism, the role of the state is to capture all the information available to the agents. If we adopt a syntactic definition of knowledge, we can say that all the agent explicitly knows in a state is that information.4 At each point in time m, each agent i in the system will be in some local state li; agent i’s knowledge is based solely on the information in its local state. A global state captures a snapshot of the overall world being modeled frozen at a point in time. Formally, it is a tuple consisting of the local states of all of the agents, and possibly also a state for the environment, that keeps track of any relevant parameters that do not appear in local states.

As we shall see, an agent’s knowledge in our framework depends solely on its local state. As a result, two different global states will appear indistinguishable to an agent if its local state is the same in both. The agent will, therefore, have the same knowledge in both states. An important benefit of this modeling approach is that it allows us to capture the fact that the knowledge in a system is normally distributed unequally among its agents, and one agent may not know what another agent knows. For example, if a robot agent’s local state contains a variable x and a human agent’s local state does not contain information about the variable x, then the robot knows the value of x and the human does not. If the value of x changes during system performance, the human may remain unaware of the change. By reasoning about knowledge we can thus accurately model and reason about situations in which agents (human or otherwise) have only partial knowledge of system behavior.
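To make this state structure concrete, the following minimal Python sketch (ours, not part of the formalism; the robot/human example and all identifiers are invented for illustration) encodes a global state as a tuple of local states and tests the indistinguishability relation that underlies knowledge ascription:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GlobalState:
    """A global state is a tuple of the agents' local states (here: a robot and a human)."""
    robot: tuple
    human: tuple

def indistinguishable(agent: str, g1: GlobalState, g2: GlobalState) -> bool:
    """Two global states look identical to an agent iff its local state is the same in both."""
    return getattr(g1, agent) == getattr(g2, agent)

# The robot's local state records the variable x; the human's local state does not.
g1 = GlobalState(robot=(("x", 3),), human=("display=idle",))
g2 = GlobalState(robot=(("x", 7),), human=("display=idle",))   # x has changed

print(indistinguishable("robot", g1, g2))   # False: the robot can tell the two states apart
print(indistinguishable("human", g1, g2))   # True: the human cannot, so she does not know x
```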

3. Runs and dynamic system behavior

We are typically interested in analyzing the evolution of systems over time. The dynamic behavior of the system is described by a set R of runs. We define a run to be a function mapping time points to global states. Thus, if r is a run, then r(0) will be its first global state, r(m) is the global state at time m, etc. A transition in the history of the system will transform a given global state into its successor in the run.

The transitions are caused by actions taken by the agents in the system, where we usually include a special “agent” called the environment (denoted by e). The environment may determine when agents act and when various nondeterministic events such as the delivery of messages take place. Formally, we assume that at any given point in time every agent, including the environment, may execute an action. The tuple of these is called a joint action and has the form (ae, a1,…,an) describing the actions of the environment and of the agents 1,…,n, respectively. Based on the global state at time m, and the joint action performed there, the resulting global state at time m+1 is determined by a transition function τ, which maps joint actions and global states to global states: τ(ae, a1,…,an)(g) = g’.

4 This is in contrast to the semantic, or possible worlds, approach that defines knowledge to be what is true in all possible worlds.
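A schematic rendering of runs, joint actions, and the transition function τ (a sketch under our own encoding assumptions, not the paper's formal definitions):

```python
# A global state is a dict mapping agent names to local states ('e' is the environment).
# A joint action assigns one action -- here a function on local states -- to every agent.

def tau(joint_action, g):
    """Transition function: apply a joint action to global state g, yielding g'."""
    return {agent: action(g[agent]) for agent, action in joint_action.items()}

def generate_run(g0, joint_actions):
    """A run maps time points to global states: r(0) = g0 and r(m+1) = tau(a_m, r(m))."""
    run = [g0]
    for a in joint_actions:
        run.append(tau(a, run[-1]))
    return run

# Example: the environment advances a clock, agent '1' counts ticks, agent '2' idles.
tick = lambda l: l + 1
noop = lambda l: l
g0 = {"e": 0, "1": 0, "2": "idle"}
run = generate_run(g0, [{"e": tick, "1": tick, "2": noop}] * 3)
print(run[0], run[3])   # the global states r(0) and r(3)
```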

4. Protocols and actions

We think of the actions of agents (and of the environment) as being determined by protocols. A protocol is a rule that determines the actions that an agent can perform as a function of its local state. When only a single action is defined for a state then the agent’s behavior is deterministic. Alternately, a protocol may specify the agent’s action to be one of a set of actions, in which case we say that it is nondeterministic. In many cases, human agents follow deterministic protocols, while the environment is best thought of as following a nondeterministic protocol.
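One way to render protocols in the same style (again an illustrative sketch; the actions and state fields are invented): a protocol maps an agent's local state to the set of actions permitted there, and a singleton set makes the agent deterministic.

```python
import random

def pilot_protocol(local_state):
    """A deterministic protocol: exactly one permitted action in every local state."""
    if local_state.get("new_clearance"):
        return {"set_target_altitude"}
    return {"monitor"}

def environment_protocol(local_state):
    """A nondeterministic protocol: ATC may or may not issue a new clearance."""
    return {"send_new_altitude", "stay_silent"}

def choose(protocol, local_state):
    """Pick one of the actions the protocol permits in this local state."""
    return random.choice(sorted(protocol(local_state)))

print(choose(pilot_protocol, {"new_clearance": True}))   # always 'set_target_altitude'
print(choose(environment_protocol, {}))                  # either action may be chosen
```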

5. Defining knowledge in a complex human-automation system

Knowledge is defined with respect to a system consisting of a set of runs. When the agents and their protocols are given, the system typically consists of the set of all runs in which the agents follow their respective protocols; this provides a powerful basis for analyzing systems whose agents have well-defined behaviors and unlimited reasoning power. In this paper we consider a formalism in which the assumptions about the contents and computational aspects of an agent’s knowledge may be limited. This is especially suitable for analyzing situations involving human agents that need to perform a restricted and well-defined set of tasks. We assume that the knowledge a human agent has to perform a given task is based on two central elements: her explicit knowledge – concrete information she has immediate access to – and her mental model, which should intuitively drive the agent’s interpretation of that information. In this initial paper we consider a framework involving one ‘knower’ h – the human supervisory controller operating in a dynamic and complex environment. Generalizations and extensions to settings in which multiple ‘knowers’ are modeled will be straightforward.

We proceed to develop the framework as follows. We first define a logical language L. Starting from a set Φ of (application-dependent) primitive propositions or primitive facts as atomic formulas, we create more complex formulas inductively by applying logical connectives to simple formulas. Formally, L is the set of formulas defined by

• Every p ∈ Φ is a formula,
• If ϕ and ψ are formulas then so are
o ¬ϕ (not ϕ) and ϕ ∧ ψ (standing for ϕ and ψ),
o Kiϕ (standing for agent i knows ϕ), and
o ○ϕ (at the next state, ϕ), ◇ϕ (eventually ϕ will hold), and □ϕ (henceforth ϕ).

We define only the two Boolean operators ¬ and ∧ because all other Boolean operators (such as implies, denoted by ⇒) are definable using these two. The language L allows us to express rich and complex statements such as Khϕ ∧ ψ ⇒ ◇¬ψ, which reads “if agent h knows that ϕ is true, and in addition ψ is true, then eventually ψ will be false”. When there are many “knowers”, the language allows us to speak about one agent’s knowledge about other agents’ knowledge, etc.
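The grammar of L can be transcribed directly as an abstract-syntax sketch (illustrative only; the class names are ours and the encoding is one of many possible):

```python
from dataclasses import dataclass

# Abstract syntax for the language L: primitive propositions, the Boolean connectives
# ¬ and ∧, the knowledge operator K_i, and the temporal operators ○, ◇ and □.

@dataclass(frozen=True)
class Prop:
    name: str            # p ∈ Φ

@dataclass(frozen=True)
class Not:
    sub: object          # ¬ϕ

@dataclass(frozen=True)
class And:
    left: object         # ϕ ∧ ψ
    right: object

@dataclass(frozen=True)
class Knows:
    agent: str           # K_i ϕ : agent i knows ϕ
    sub: object

@dataclass(frozen=True)
class Next:
    sub: object          # ○ϕ : at the next state, ϕ

@dataclass(frozen=True)
class Eventually:
    sub: object          # ◇ϕ : eventually ϕ will hold

@dataclass(frozen=True)
class Henceforth:
    sub: object          # □ϕ : henceforth ϕ

def Implies(phi, psi):
    """ϕ ⇒ ψ is definable from ¬ and ∧ alone, as ¬(ϕ ∧ ¬ψ)."""
    return Not(And(phi, Not(psi)))

# The example from the text: K_h ϕ ∧ ψ ⇒ ◇¬ψ
phi, psi = Prop("phi"), Prop("psi")
example = Implies(And(Knows("h", phi), psi), Eventually(Not(psi)))
```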

We choose to model the types of human-automation systems S we are interested in by defining a system with three agents: the human agent h, the automation agent a, and the environment agent e. With this construction we can focus on the human ‘knower’, and capture how h’s knowledge influences and is influenced by the behavior of the complete system. Thus at any given time each agent i, for i = h, a, e, will be in a local state li, representing the information that i has available at that time. As in the general case noted above, the tuple of all the agents’ local states is a global state g. The set G of all global states is then the Cartesian product of the sets of local states Li, G = Lh × La × Le. In a given application, there can be dependencies between elements in the agents’ local states (e.g. shared data), and the set of global states that will appear in a relevant set of runs R will usually be a subset of G.

The dynamic behavior of S represented as the set of runs R is generated by the joint actions of the environment, automation, and human agents. These joint actions cause transitions from one global state g to the next in accordance with the transition function τ. Clearly, in any complex human-automation system there will be many possible runs.

The propositions p,q,…∈ Φ and indeed all formulas of L are initially strings that we may intend to ascribe meaning to. The truth of primitive propositions is defined by way of an interpretation π that maps Φ and G to True or False. Thus, if π(p)(g)=True then p is true at the global state g. We wish to define formulas to be true or false at a given time m in a run r. Before we can do this, we need to explain the mechanism by which knowledge is ascribed to agents, based on components capturing their explicit and mental model knowledge.


Let Φ’ = Φ ∪ {¬p : p ∈ Φ} be the set of primitive propositions and their negations. We think of the agent’s explicit knowledge as specifying a set of propositions that are true as a function of its local state. Formally, we model this by a function fh: Lh → 2^Φ’ specifying which elements of the set Φ’ the agent explicitly knows hold when it is in a given local state.

We think of the agent’s mental model knowledge as containing two main components. The first is an element of ground or automatic knowledge, in which various propositions that may not be explicitly represented in the state are immediately observed. For example, the local state may contain two 5-digit numbers, and the agent may be expected to know immediately which of them is larger. We model this aspect of the mental model by a function ah that, when applied to a subset T⊆ Φ’ produces another such subset ah(T)=T’.

Finally, the second component of the agent’s mental model allows it to perform one set of logical inferences, whose conclusions are considered formulas that the agent knows. We define Dh to be a (normally finite) set of implications of the form p1∧ p2∧… ∧pk ⇒ Khϕ that is applied in a single or ‘one-shot’ round to (T ∪ T’). Dh(T ∪ T’) is then the set {Khϕ : for some p1∧ p2∧… ∧pk ⇒ Khϕ in Dh, where each pj is in T or in T’}.

Putting these elements together, we define h’s knowledge by way of an epistemic setup Θh = (fh, ah, Dh). Applying Θh to a local state l yields a set Θh(l) consisting of formulas Khϕ; Θh(l) = Dh(fh(l) ∪ ah(fh(l))).
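A minimal sketch of how an epistemic setup Θh = (fh, ah, Dh) could be evaluated at a local state (the propositions, the comparison performed by ah, and the single implication in Dh are invented for illustration; the altitude deviation case study below gives the paper's own instance):

```python
def f_h(local_state):
    """Explicit knowledge: the propositions the interface makes directly available."""
    return set(local_state)

def a_h(T):
    """Ground ('automatic') knowledge derived from a set of propositions T,
    e.g. comparing two displayed numbers."""
    if "speed is 40" in T and "limit is 25" in T:
        return {"speed is greater than limit"}
    return set()

# D_h: one-shot implications of the form  p1 ∧ ... ∧ pk  ⇒  K_h ϕ
D_h = [({"speed is greater than limit"}, "K_h(a ticket is possible)")]

def theta_h(local_state):
    """Θ_h(l) = D_h(f_h(l) ∪ a_h(f_h(l))): the knowledge formulas ascribed to h at l."""
    T = f_h(local_state)
    known_props = T | a_h(T)
    return {k_phi for premises, k_phi in D_h if premises <= known_props}

print(theta_h(("speed is 40", "limit is 25")))   # {'K_h(a ticket is possible)'}
```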

The mathematical model in which we can define truth of formulas will be an epistemic system E = (R, π, Θh). Truth of formulas in L can now be defined with respect to an epistemic system E = (R, π, Θh) as follows. At a given time m in a run r, we define:

(E,r,m) ⊨ p (for every p ∈ Φ) iff π(p)(r(m)) = True
(E,r,m) ⊨ ϕ ∧ ψ iff (E,r,m) ⊨ ϕ and (E,r,m) ⊨ ψ
(E,r,m) ⊨ ¬ϕ iff (E,r,m) ⊭ ϕ
(E,r,m) ⊨ Khϕ iff Khϕ ∈ Θh(rh(m))
(E,r,m) ⊨ ○ϕ iff (E,r,m+1) ⊨ ϕ
(E,r,m) ⊨ ◇ϕ iff (E,r,m’) ⊨ ϕ for some m’ ≥ m
(E,r,m) ⊨ □ϕ iff (E,r,m’) ⊨ ϕ for all m’ ≥ m

Note that this defines the knowledge of the human agent syntactically, rather than semantically. That is, what the human agent “knows” in each local state is a set of knowledge formulas that is determined by the epistemic setup Θh. There is no a priori guarantee that this knowledge is true, or even consistent.

We say that a formula ϕ is valid in E, denoted by E ⊨ ϕ, if (E,r,m) ⊨ ϕ holds for all runs r ∈ R and times m.
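The satisfaction relation above can be turned into a small evaluator over a finite run prefix (a toy sketch with our own encodings; ◇ and □ are approximated over the finite prefix):

```python
# Formulas are nested tuples, a run is a finite list of global states (dicts of local
# states), pi gives the truth of primitive propositions at a global state, and theta
# ascribes K_h-formulas to the human agent's local state.

def sat(formula, run, m, pi, theta):
    op = formula[0]
    if op == "prop":            # (E,r,m) ⊨ p      iff  π(p)(r(m)) = True
        return pi(formula[1], run[m])
    if op == "not":             # (E,r,m) ⊨ ¬ϕ     iff  (E,r,m) ⊭ ϕ
        return not sat(formula[1], run, m, pi, theta)
    if op == "and":             # (E,r,m) ⊨ ϕ ∧ ψ  iff  both conjuncts hold at (r,m)
        return sat(formula[1], run, m, pi, theta) and sat(formula[2], run, m, pi, theta)
    if op == "K_h":             # (E,r,m) ⊨ K_hϕ   iff  K_hϕ ∈ Θ_h(r_h(m))
        return formula in theta(run[m]["h"])
    if op == "next":            # (E,r,m) ⊨ ○ϕ     iff  (E,r,m+1) ⊨ ϕ  (m+1 must lie in the prefix)
        return sat(formula[1], run, m + 1, pi, theta)
    if op == "eventually":      # (E,r,m) ⊨ ◇ϕ     iff  ϕ holds at some m' ≥ m in the prefix
        return any(sat(formula[1], run, k, pi, theta) for k in range(m, len(run)))
    if op == "henceforth":      # (E,r,m) ⊨ □ϕ     iff  ϕ holds at every m' ≥ m in the prefix
        return all(sat(formula[1], run, k, pi, theta) for k in range(m, len(run)))
    raise ValueError(f"unknown operator: {op}")

# A two-step run in which the human's local state contains p at time 0 only.
run = [{"h": ("p",)}, {"h": ()}]
pi = lambda p, g: p in g["h"]
theta = lambda l_h: {("K_h", ("prop", "p"))} if "p" in l_h else set()
print(sat(("K_h", ("prop", "p")), run, 0, pi, theta))                   # True at time 0
print(sat(("eventually", ("not", ("prop", "p"))), run, 0, pi, theta))   # True: p fails at time 1
```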

B. Criteria for satisfactory knowledge – when do human agents know enough?

To analyze whether or not the human agent's knowledge in an epistemic system E = (R, π, Θh) is satisfactory for supervisory control we need criteria for what it means for a human agent to 'know enough'. We suggest that two fundamental properties of satisfactory knowledge are adequacy and validity, which we define below.

1. Adequacy

Though a human operator's knowledge of a complex system is incomplete by definition, the human’s knowledge is adequate for supervisory control if the operator can always distinguish between acceptable and anomalous system states and behaviors.

To formalize the notion, let ϕ be the proposition ‘the current state is acceptable’. The agent’s knowledge or epistemic setup Θh is then adequate if for any local state rh(m) in system R in which the global state r(m) is acceptable, the operator knows that it is acceptable, and if for any local state rh(m) in which the global state r(m) is not acceptable, the operator knows that it is not. More formally, Θh is adequate if E ⊨ ((ϕ ⇒ Khϕ) ∧ (¬ϕ ⇒ Kh¬ϕ)).

2. Validity

Validity obtains if the following hold:

1. For every (r, m) with r ∈ R, if p ∈ (fh(rh(m)) ∪ ah(fh(rh(m)))) then (E,r,m) ⊨ p. In words, for every local state (r, m) in a run in R, if a proposition p is in the human agent’s explicit or automatic knowledge, then p is true at (r,m) in the epistemic system E.


2. For every implication p1∧p2∧…∧pk ⇒ Khϕ in Dh we have that E ⊨ (p1∧p2∧…∧pk) ⇒ ϕ. In words, every conclusion that the human agent draws using its mental model is true at every point (r,m) of the epistemic system E.
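Both criteria can be checked mechanically over a finite set of runs, along the following lines (a sketch; the predicates acceptable, pi, f_h and a_h, and the helper names, are our own and would come from the concrete system model):

```python
def adequate(runs, acceptable, knows_acceptable, knows_not_acceptable):
    """Adequacy: at every point, ϕ ⇒ K_hϕ and ¬ϕ ⇒ K_h¬ϕ, where ϕ = 'the state is acceptable'."""
    for run in runs:
        for g in run:
            if acceptable(g) and not knows_acceptable(g["h"]):
                return False
            if not acceptable(g) and not knows_not_acceptable(g["h"]):
                return False
    return True

def valid_condition_1(runs, pi, f_h, a_h):
    """Validity, condition 1: every explicitly or automatically known proposition is true."""
    for run in runs:
        for g in run:
            T = f_h(g["h"])
            if any(not pi(p, g) for p in T | a_h(T)):
                return False
    return True
```

Condition 2 of validity could be checked in the same style, by evaluating the truth of each implication’s conclusion ϕ at every point where its premises hold.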

The importance of validity is that it ensures that what is considered “known” is in fact true, which is perhaps the most basic property required of knowledge, and is often called the Knowledge Axiom:

Theorem 1: If an epistemic system E = (R, π, Θh) satisfies Validity, then E ⊨ Khϕ ⇒ ϕ holds for all ϕ ∈ L.

We have introduced a formal approach to reasoning about knowledge in a human-automation system and defined initial criteria for what it means for a human's knowledge to be satisfactory for supervisory control of a complex system. In the remainder of the paper we present a case study in which the approach is used to model and analyze, from the knowledge perspective, the well-known altitude deviation problem examined in the literature10-12, 14 and briefly mentioned earlier. The completed model and subsequent analysis will establish the viability and value of thinking about human-automation systems at the level of the knowledge in the system, and what agents know.

IV The altitude deviation problem

The altitude deviation problem, in which an aircraft deviates from an assigned flight level without the pilots’ awareness, is a significant and all too common aviation hazard. One of the causal factors in the occurrence of altitude deviations (or altitude ‘busts’) is the autopilot system, which may unexpectedly cause the aircraft to deviate without intentional crew input and without crew awareness. Given the tremendous range of flight data available for system control and flight crew information, autopilot-induced altitude deviations seem at first to be easily preventable, and numerous researchers have investigated the problem in order to explicate the underlying factors that contribute to their continued occurrence.

As many of these studies have noted, one reason for these altitude busts is that the flight crew has only partial knowledge of the autopilot’s internal logic, and only partial information regarding system status is available via the interface. The crew’s ability to predict what the autopilot will do in a given situation is limited by this partial knowledge of the system, with the result that the autopilot may appear to behave in a non-deterministic fashion and generate what have been termed 'automation surprises’.14, 46 Several of the functions of a typical autopilot and its logic are described below. The details of this particular autopilot and the crew's knowledge of its behavior are taken primarily from analyses done by Degani and colleagues.10, 11 Since our goal here is solely to demonstrate the use of the knowledge formalism, we consider only a limited and simplified version of the original detailed analyses.

Among its functions, the autopilot system can control the aircraft’s vertical position by maintaining a given flight altitude or climbing/descending to a new defined altitude.5 The pilot inputs a desired altitude; the autopilot then guides the aircraft to continue its climb until it reaches an internally computed ‘start capture’ altitude at which the aircraft begins leveling to smoothly meet the target altitude. For instance, consider an aircraft that is currently at 20,000 ft, climbing to level off at a target altitude of 25,000 ft. In order to attain level flight at 25,000 ft, the aircraft must begin to transition at some lower ‘start capture’ altitude, say 24,500 ft. At any time, the pilot may receive a new altitude clearance from Air Traffic Control (ATC); the pilot then sets the new altitude using the autopilot control interface.

There are several distinct states or modes that the autopilot transitions through during this climb and capture procedure. In the vertical speed mode, the autopilot controls the aircraft’s climb. If an altitude for level flight (the target altitude) has been entered in the autopilot then that target altitude limits the climb and the autopilot is in vertical speed constrained mode, otherwise the autopilot is in vertical speed free mode and the aircraft will continue to climb. Assuming that a target altitude has been entered, the autopilot enters the capture mode once the ‘start capture’ altitude has been attained. Once the aircraft has reached the desired flight level, the autopilot transitions to the hold state.

During this maneuver, the autopilot’s response to the pilot’s inputting a new target altitude is a function of the autopilot’s state and the relative values of the aircraft’s current altitude, the new target altitude, and the start capture altitude, as shown in Figure 1. For example,

• If the autopilot is in vertical speed constrained mode and a new target altitude y is entered that is greater (higher) than the current aircraft altitude x, the autopilot will remain in vertical speed constrained mode and will respect the new target altitude.

• If the autopilot is in vertical speed constrained mode and a new target altitude y is entered that is less than the current aircraft altitude x, the autopilot will not respect the new target altitude and will transition to unconstrained vertical speed mode (vertical speed free).

• If the autopilot is in capture and a new target altitude y is entered that is greater than the start capture altitude z, then the autopilot will transition back to vertical speed constrained and will respect the new target altitude.

• If the autopilot is in capture and a new target altitude y is entered that is less than the start capture altitude z, then the autopilot will transition to vertical speed free and will not respect the new target altitude.

5 Autopilot control of descent is logically equivalent to that of climb except for terminology and internal dynamics; we focus here only on an aircraft that is climbing in the interest of brevity and the prevention of needlessly complicated explanations.

Figure 1. Autopilot logic: Receiving a new target altitude y.

As mentioned previously, the pilot’s knowledge of the autopilot’s logic and behavior is limited. The relevant explicit information available to the pilot via the aircraft’s display interface includes the current altitude x, the target altitude y, and the vertical mode (vertical speed or capture; importantly there is no explicit indication of whether vertical speed is free or constrained). On the basis of related training materials the pilot knows, as well, that setting a new target altitude while the aircraft is climbing will cause the autopilot to remain in the vertical speed mode, and that if the aircraft is in capture, a new target altitude will cause the aircraft to transition back to vertical speed.

Regarding more precise knowledge of whether a specific transition to vertical speed results in transition to vertical speed constrained or vertical speed free, Refs. 10, 11 suggest that the pilot’s 'user model' of the system includes the knowledge that in both vertical speed and capture modes, setting a new target altitude greater than the aircraft’s current altitude will result in the transition to vertical speed mode constrained by the new target altitude, while setting a new target altitude less than the current altitude will result in transition to an unconstrained vertical speed mode. Clearly this ‘knowledge’ (more precisely, belief) is incorrect in relation to the actual autopilot behavior presented earlier – the outcome of changing the target altitude when in capture is a function of the start capture altitude and not of the aircraft's current altitude.
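To make the mismatch concrete, the following sketch contrasts our own rendering of the four transition rules above with the user model reported in Refs. 10, 11 (the function names and altitude figures are invented for illustration):

```python
def autopilot_next_mode(mode, x, y, z):
    """Actual autopilot response to a new target altitude y (per the four rules above).
    mode is 'constrained', 'free' or 'capture'; x is the current altitude;
    z is the start capture altitude (defined only once capture has begun)."""
    if mode == "constrained":
        return "constrained" if y > x else "free"
    if mode == "capture":
        return "constrained" if y > z else "free"
    return mode    # other cases are not covered by the four rules above

def user_model_next_mode(mode, x, y, z):
    """The pilot's user model per Refs. 10, 11: the comparison is always against the
    current altitude x, never against the start capture altitude z."""
    return "constrained" if y > x else "free"

# Aircraft climbing at x = 24,700 ft, already in capture with z = 24,500 ft,
# when a new clearance of y = 24,600 ft arrives.
print(autopilot_next_mode("capture", 24_700, 24_600, 24_500))   # 'constrained'
print(user_model_next_mode("capture", 24_700, 24_600, 24_500))  # 'free': the pilot's expectation diverges
```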

A. Modeling the knowledge in the flight deck

In this section we develop a formal representation of the altitude deviation problem using the constructs and techniques of the knowledge-based approach. This representation will enable us to more precisely reason about the role of the human’s knowledge in the system, and formally evaluate whether or not that knowledge is satisfactory for supervisory control according to our previous definition. The analysis will also provide insight into the specific knowledge that is lacking.

While we focus on the human agent as the sole ‘knower’ in the system, we consider the three agents in our model, the environment and automation agents as well as the human, in order to reason about the complete system. The model includes the local states representing the information available to the agents at each point in time, the actions that agents can perform, and the transitions between global states generated as a function of the actions executed jointly by the agents.

1. Agents

There are three agents in the model: ATC acting in the external environment, the automation system of interest (the autopilot and the aircraft, considered collectively), and the human pilot. The environment, automation, and human agents are denoted e, a, and h, respectively.


2. Local states

The local states of each agent consist of the variables or information available to the agent in that state. The environment's local states include a variable that represents whether or not the environment sent a new altitude in the previous round. The set Le is then the set of possible states (send.new.alt.y) where the possible values of the variable are {send.new.alt.yes, send.new.alt.no}.

The automation agent's local states consist of the variables that describe the current actual altitude, the vertical speed mode of the autopilot, and the altitude at which the autopilot transitioned into capture mode. La is then the set of possible states for the automation agent, which have the form la = (alt.x, vertical.mode, start.capture.altitude), where the values that each variable may take as a function of the system’s dynamic behavior are shown:

alt.x = x, where x is the current actual altitude,
vertical.mode ∈ {alt.set.y, capture, free}, the actual capture modes of the autopilot (altitude set, capture, and free climb/descent),
start.capture.altitude ∈ {start.cap.alt.z, ∅}, the altitude z at which the autopilot has transitioned into capture mode. If the aircraft has not yet reached capture, then the value of start.capture.altitude is the empty set ∅.

The human agent's local states consist of the system parameters available from the interface: the current actual altitude, the target altitude y, the current vertical mode, and whether or not the environment sent a new altitude. Lh is then the set of all possible states for the human agent h of the form (ALT, target.alt, vert.mode.display, alt.rec) where as above the value of each parameter in a given state is a function of the system’s dynamic behavior:

alt.x = x, where x is the current actual altitude,
target.alt = target.alt.y, the currently displayed target altitude,
vert.mode.display ∈ {vertical speed (VS), capture}, the displayed vertical speed mode, the source of the pilot’s knowledge of the current vertical mode of the aircraft,
alt.rec ∈ {new.alt.yes, new.alt.no}, whether or not the environment sent a new altitude in the previous round.

We define Φ to contain the primitive propositions: {“the current altitude is x” | for some x < 50,000} ∪ {“the current target altitude is y” | for some y < 50,000} ∪ {“the displayed vertical speed mode is M” | M = VS, capture} ∪ {“the actual vertical speed mode is M” | M = constrained, free, capture} ∪ {“ATC sent a new capture altitude”, “the current altitude is greater than the target altitude”}. In addition, Φ will contain facts such as “action a is performed”.

Let us consider how we might represent the pilot’s knowledge Θh in one of her local states; formalizing her knowledge in all other states Lh is similar and straightforward. Given the pilot’s local state lh = (alt.25,000, target.alt.20,000, VS, new.alt.yes), we would have fh(l) = {p1, p2, p3, p4}, where p1 = “the current altitude is 25,000”, p2 = “the target altitude is 20,000”, p3 = “the displayed vertical speed mode is VS”, and p4 = “ATC sent a new capture altitude”. Given T = {p1, p2, p3, p4}, it seems reasonable to assume that the pilot’s explicit knowledge can be extended to include ground or automatic knowledge such as “the current altitude is greater than the target altitude”. If we call this proposition p5, then ah (T) = T’ = {p5}.

Now if we assume that the well-trained pilot’s mental model knowledge Dh will include the implication p1∧p2∧p3∧p4∧p5⇒Khϕ, where ϕ is the formula ‘setting the new altitude ⇒ (vertical speed mode is free)’, then in the current state lh, the relevant knowledge that h has for supervisory control is Θh(l) = Dh(fh(l) ∪ ah(fh(l)))={Khϕ}.
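For this local state, the computation of Θh(l) can be spelled out directly (a sketch; the string encoding of the propositions is ours):

```python
# Explicit knowledge f_h(l) for l_h = (alt.25,000, target.alt.20,000, VS, new.alt.yes):
p1, p2, p3, p4 = ("the current altitude is 25,000",
                  "the target altitude is 20,000",
                  "the displayed vertical speed mode is VS",
                  "ATC sent a new capture altitude")
p5 = "the current altitude is greater than the target altitude"   # ground knowledge a_h(T)

T = {p1, p2, p3, p4}          # f_h(l)
T_prime = {p5}                # a_h(T)

# D_h: the single one-shot implication  p1 ∧ p2 ∧ p3 ∧ p4 ∧ p5  ⇒  K_h ϕ,
# where ϕ = 'setting the new altitude ⇒ (vertical speed mode is free)'.
K_phi = "K_h(setting the new altitude => vertical speed mode is free)"
D_h = [({p1, p2, p3, p4, p5}, K_phi)]

def theta_h(T, T_prime, D_h):
    """Θ_h(l) = D_h(f_h(l) ∪ a_h(f_h(l)))."""
    union = T | T_prime
    return {conclusion for premises, conclusion in D_h if premises <= union}

print(theta_h(T, T_prime, D_h))   # {K_phi}: the knowledge h has for supervisory control here
```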

B. An analysis of the pilot's knowledge

In order to completely construct the epistemic system that corresponds to the altitude deviation problem, we define the set of joint actions in the system, the transition function, and the autopilot’s protocols explicitly. In the final journal version of this report, there will be an appendix containing the complete model. The amount of detail given so far illustrates the essential new element of how the epistemic settings would be defined, and how knowledge may be obtained based on a given local state. Once we have completed the definition of the joint actions, transition function, etc., we will be able to obtain, for any given protocol Ph for the pilot, an epistemic system E = (R, π, Θh).

Given the limitations in the information available to the pilot, her local state will never allow her to obtain the crucial information about what the start capture altitude is. In a local state lh = (alt.x, target.alt.y, capture, new.alt.yes) the pilot must act differently depending on whether the target altitude y is greater or smaller than the actual start capture altitude z. Consider the claim that y is greater than z as a proposition q ∈ Φ. We can prove:


Theorem 2: Fix a protocol Ph for the pilot, and let E be a valid epistemic system for Ph. If (r,m) is a point such that rh(m) = (alt.x, target.alt.y, capture, new.alt.yes) then both (E,r,m) ⊨ ¬Khq and (E,r,m) ⊨ ¬Kh¬q.

Theorem 2 implies that the pilot is not able to obtain the crucial knowledge needed in order to make the right decision in this particular local state. Thus, the interface does not allow sufficient knowledge in order to carry out the task correctly! Observe that we have ascribed knowledge to the pilot based on a local state that does not contain any information about the past. In fact, we can strengthen Theorem 2 and show that even if the pilot is assumed to remember her whole past history, the problem still remains. Finally, if we change the display in the cockpit so that the missing value z becomes an explicit part of the pilot’s local state, then it is possible to show that the necessary knowledge can be obtained, and the problem is resolved in the desired fashion.
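The argument behind Theorem 2 can be illustrated with two global states that agree on the pilot's local state but differ on the hidden start capture altitude (a sketch; the numbers and encodings are invented):

```python
# The pilot's local state in question: (alt.x, target.alt.y, capture, new.alt.yes).
l_h = ("alt.24700", "target.alt.24600", "capture", "new.alt.yes")

# Two global states that both project onto l_h for the pilot but differ in the
# automation agent's start capture altitude z, which is not shown on the interface.
g1 = {"h": l_h, "a": {"alt": 24_700, "mode": "capture", "start_capture_alt": 24_500}}
g2 = {"h": l_h, "a": {"alt": 24_700, "mode": "capture", "start_capture_alt": 24_650}}

# q: 'the new target altitude y (24,600 ft) is greater than the start capture altitude z'.
q = lambda g: 24_600 > g["a"]["start_capture_alt"]

# q is true in g1 and false in g2, yet the pilot's local state is identical in both;
# since Θ_h depends only on l_h, validity rules out both K_h q and K_h ¬q at such points.
print(q(g1), q(g2), g1["h"] == g2["h"])    # True False True
```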

V Discussion and directions for future research

In this article we presented our initial efforts to utilize a well-established formal theory of knowledge and action in multi-agent systems in order to conduct a knowledge-based analysis of a complex human-automation system. The approach allowed us to reason cleanly and rigorously about the design and performance of a system as a function of one of its most fundamental resources – the knowledge of the agents. Our analysis revealed the existence of design flaws in the system that precluded the human agent from being an effective supervisory controller, and identified the knowledge that was missing. No additional theories of performance or behavior were required, and the human and automation agents were depicted using an expressive common construct without losing important and unique properties of either of these uncommon agents. The example demonstrated that our approach offers a formal, expressive, and parsimonious methodology for the design and analysis of complex systems, and we believe this to be a significant contribution. As an initial step in a new direction, the work discussed here raises many intriguing points for investigation; we briefly discuss several that we are currently pursuing. We also mention a number of systems in the aviation and human-robot team domains that will serve as important test cases for the value and scalability of our approach.

A. A richer notion of human knowledge

The first direction for our research is to expand on and more completely define a notion of human knowledge within our formal framework. We are considering a classification of human knowledge in terms of its contents in addition to the type-based taxonomy defined in the current paper. For example, we may wish to capture the declarative / procedural dimension often used in cognitive modeling.17, 18, 47 Alternately, we may add a structural / strategic dimension to our model, where structural knowledge is knowledge of the physical world or the constants in that world, and strategic knowledge is an object's intrinsic worth to the agent, perhaps somewhat in the spirit of Gibson's affordances.48 By doing so we gain a more multi-dimensional model of human knowledge that may render our approach useful in a wider range of problem domains.

Another aspect of human knowledge that is important in supervisory control is the distinction between knowledge and belief. By capturing the important distinction between a human operator formally knowing a fact p and believing p, we can more precisely investigate the impact of incorrect human belief on system performance. Belief has been formally represented using a number of techniques,41, 49 and we intend to identify and build on the technique most appropriate for modeling a notion of human belief in our domains of interest.

An additional element of human knowledge and reasoning that is particularly important in the supervisory control context is the concept of counterfactual reasoning (‘if p were to hold then q would be true’).32 When an operator plans future actions, especially error recovery actions, counterfactual reasoning supports the operator’s consideration of conditional alternatives and the outcomes of hypothetical scenarios.33, 36 The ability to reason about and identify the knowledge needed for effective counterfactual reasoning will be useful in the design of more robust systems that provide the information needed for an operator to successfully 'think through' novel, perhaps safety-critical situations. Preliminary work suggests that incorporating an existing formalization of counterfactual reasoning38 into our approach will provide designers with a means to do so.

Once we have added these elements of human knowledge to our framework, the approach will support the modeling and analysis of a wider and more realistic set of complex systems. For example, a designer could more accurately evaluate the potential failure conditions of supervisory control inherent in a system that executes in a highly ambiguous environment by limiting the human agent's knowledge of important automation and environment agent behaviors. We may be able to capture differences in system performance that result from differences in the amount or quality of knowledge possessed by the human controller. If qualitative differences in knowledge distinguish between novice and expert human operators,50, 51 for example, this would allow a designer to gauge the vulnerability of the system to novice supervisory control and might make salient required emphases in training.

There are, no doubt, many additional aspects of human agent knowledge that are relevant to our problem domain and that are amenable to formal representation in our framework; it is an interesting on-going goal of our work to identify them. Again, our final goal is not to draw a true and faithful picture of human knowledge, but rather to develop a representation that is epistemically adequate52 and that allows us to reason expressively and formally about important properties of human knowledge within the context of complex systems.

B. Extending the formal model

Our formalization of human knowledge has drawn on various approaches in epistemic logic to create a framework that is expressive enough to begin capturing the unique properties of human agent knowledge without sacrificing the rigor of formal logic. Extending this framework while preserving its formal nature is an important research goal. Our syntactic representation of knowledge, for instance, allows us to circumvent a significant problem that is inherent in the semantic definition of knowledge – the problem of logical omniscience. Humans clearly cannot know all the implications of what they know (as Ref. 49 points out, if this were so then a human would know the outcome of a chess match immediately following the first move), and so a semantic definition of knowledge is inappropriate for human agents. In the syntactic approach we explicitly define the agent's knowledge, and so the problem of an agent's logically knowing all the implications of what he or she knows simply disappears. Unfortunately, the semantic approach also has valuable properties that are lost when replaced with syntactic definitions, and our ability to reason about the agent's knowledge itself is more limited. Our formalization must consider the tradeoff between the semantic and syntactic approaches and identify the most useful and appropriate elements of both.

Our approach can also formally represent different types of reasoning that a human agent might do, perhaps in different circumstances or in different systems. Previously we noted that our current definition of a 'one-shot' round of reasoning captures an intuitive notion of how a human agent might reason in a supervisory control setting. That is, since complex systems are normally dynamic systems in which control decisions need to be made and implemented in a timely manner, the operator's reasoning activities cannot require multiple rounds of reasoning. One-shot reasoning captures the resource bounds that would be expected, due both to the human's inherently limited reasoning abilities and to the demands of such a time-critical context.

It will be interesting to expand on this initial work and identify additional patterns of inference that can be used to express human reasoning, the systems in which they may be appropriate, and the means by which they should be formalized. More precise bounds on reasoning may also be identified. For instance, within the one-shot model, how many independent instances of simple deductive inference can a human do? A system's design might require a human to infer, in the same round, n separate conclusions. Can this requirement be satisfied by typical human cognitive resources? Can we augment the one-shot model with a multi-step (i.e. algorithmic) representation of more complex human reasoning? What might be the natural resource bounds for this type of reasoning action?

C. Knowledge of groups of agents in human-automation systems
One of the most significant contributions of the knowledge formalism is its ability to express notions of the knowledge of groups of agents, such as agents' knowledge of other agents' knowledge, distributed knowledge (knowledge that is distributed among the agents in the system), and common knowledge (all the agents know a fact p, know that all agents know p, and so on).3 This expressiveness supports analysis of centrally important system properties such as the need for an agent to know what another agent knows, the additional knowledge made available by a fact being commonly known, the potential performance cost when a system failure prevents a needed fact from becoming common knowledge, and so forth.

The ability to reason formally about the knowledge of groups of agents is important in the design and analysis of human-automation systems, as noted in Refs. 53-55. For a simple example, recall the search-and-rescue robot team mentioned previously. Not only must the designer consider what the human and robot agents need to know in any state of the system, but since the robot is normally remotely operated (e.g., it is deep in a collapsed building looking for survivors), the designer must also be able to capture what the human and robot agents can know about each other's knowledge at any point in time in order to determine whether the agents' knowledge will be sufficient for task performance.

We intend to explore the knowledge of groups of agents within the context of human-automation systems. Our work will consider these notions both from a conceptual perspective (for instance, what does it really mean to say that human and automation agents have common knowledge of a fact?) and in terms of formal modeling considerations.

What are the theoretical issues of shared knowledge relevant to a human-automation system? It is natural to say that an operator may need to know what the automation knows, but when is it useful (and meaningful) to talk about an automation agent knowing what a human agent knows or what other automation agents know? Considering various forms of human-robot teams, for example, the need for a robot to know what the human agent knows seems clear in the case of search and rescue, personal assistant, or physical therapy robots.56 Can we define general taxonomies of human-automation and human-robot systems in which specific types of group knowledge are required for system performance?

The knowledge formalism provides a complete formal semantics and structures for the modeling and analysis of various types of group knowledge. We thus have the tools to represent and reason about the knowledge of every agent in the group, E_G, distributed knowledge in the group, D_G, and common knowledge, C_G. We can, as well, consider the knowledge of subsets of the group, for instance denoting the common knowledge of a fact q shared by agents i and j as C_{i,j}q. An important direction for our work is to further investigate the importance, modeling, and analysis of notions of group knowledge for human-automation systems. As systems grow in size (number of agents), in heterogeneity (types of agents), and in the criticality of the mission, a common concept that binds all agents and provides a rigorous method for evaluation will become imperative. We believe this last direction for our research program to be particularly significant.
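For a finite, explicitly represented system these operators are also directly computable. The sketch below is an illustration under our own simplifying assumptions about data structures and names, not a description of an existing implementation: E_G checks each agent's indistinguishability set separately, D_G checks the intersection of those sets, and C_G checks every state reachable through any chain of the agents' indistinguishability relations.

    # Minimal sketch of evaluating group-knowledge operators over an explicit-state model.
    # 'labels' maps each state to the set of primitive propositions true there;
    # 'relations[agent][state]' is the set of states the agent cannot distinguish from 'state'.
    # All names and the explicit-state representation are illustrative assumptions.

    from itertools import chain

    def holds(prop, state, labels):
        return prop in labels[state]

    def agent_knows(agent, prop, state, labels, relations):
        # K_i p: p holds in every state agent i considers possible at 'state'.
        return all(holds(prop, t, labels) for t in relations[agent][state])

    def everyone_knows(group, prop, state, labels, relations):
        # E_G p: every agent in the group knows p.
        return all(agent_knows(i, prop, state, labels, relations) for i in group)

    def distributed_knowledge(group, prop, state, labels, relations):
        # D_G p: p holds in every state that all agents in G consider possible
        # (the intersection of their indistinguishability sets).
        shared_cell = set.intersection(*(set(relations[i][state]) for i in group))
        return all(holds(prop, t, labels) for t in shared_cell)

    def common_knowledge(group, prop, state, labels, relations):
        # C_G p: p holds in every state reachable from 'state' by chaining the
        # agents' indistinguishability relations (breadth-first reachability).
        reachable, frontier = {state}, {state}
        while frontier:
            step = set(chain.from_iterable(relations[i][s] for i in group for s in frontier))
            frontier = step - reachable
            reachable |= frontier
        return all(holds(prop, t, labels) for t in reachable)

With the reflexive indistinguishability relations appropriate for knowledge, this reachability computation agrees with the usual reading of common knowledge as the infinite conjunction 'everyone knows p, everyone knows that everyone knows p, and so on'.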

D. Applications
The final test of any formal approach for modeling and analysis is its applicability to real-world problems in the intended domains. The altitude deviation problem discussed in this article served as a benchmark for demonstrating that our approach (1) can expressively capture important properties of human and non-human agents, (2) can enable us to answer queries about what human and non-human agents know and don't know, and (3) can be used to draw significant conclusions regarding a complex system's design in spite of the dissimilarity of its agents. While these results offer an important initial 'proof-of-concept', the altitude deviation problem is a cleanly bounded and exhaustively researched scenario with faults in design identified a priori. What is the value of our formalism as a modeling and analysis tool for systems that are not as neatly defined or for which answers are not as clear?

The state of the art in human-automation system design will provide a valuable testbed of systems for evaluating our approach 'in real life'; we are particularly interested in systems in the aviation and human-robot team domains. In the aviation domain, current directions in integrated flight deck systems design, aviation information management, and the evolving role of the pilot as supervisory controller underscore the inherent relevance of a knowledge-based approach – but will our formalism indeed provide useful insight regarding the design of these highly complex and sophisticated systems? One important question, for example, concerns the scalability of the approach. The altitude deviation problem demonstrated the value of a fine-grained analysis of agent knowledge, but can we use the same approach to reason usefully about agent knowledge in very large systems (the so-called systems of systems)? Consider the role of human agent knowledge in the very large system now called NGATS (Next Generation Air Transportation System), which proposes to integrate navigation, communication, surveillance, and weather information so that all airborne and ground-based users can share the same information at all times.57 The role of agent knowledge is clearly of integral importance in this massive system; indeed, the pilot's role is explicitly defined here as transitioning from "pilot to aircraft systems manager" (p. 2-30). However, will reasoning about agent knowledge at the coarser granularity implied by a system of this magnitude be feasible, and will it be of value?

The application of our formalism to the modeling and analysis of human-robot teams will serve as a particularly challenging and rigorous test of its viability. On the one hand, the importance of capturing notions of agent knowledge and common knowledge, and the need for a formal method for reasoning about knowledge in this domain, have already been noted in the literature,54, 58 and so the potential value of our approach is clear. On the other hand, these systems have unique properties that make accurate and useful representation exceptionally difficult. A knowledge-based model of a human-robot team will need to capture the (albeit artificial) intelligence of the non-human robotic agents and the often hostile environment within which these systems operate (e.g., the collapsed-building environment of search and rescue teams). Our approach must also be able to represent and reason usefully about the complex and dynamic patterns of communication and knowledge distribution among the multiple human and non-human agents found in this domain. Consider, for example, that in a team of humans and unmanned aerial vehicles (UAVs) the multiple UAVs may communicate among themselves to maintain formation and ensure surveillance coverage while information regarding potential targets is transmitted to the human operator. Or the operator may communicate different commands to different UAVs and receive different responses, which may be a function of each UAV's reasoning ability and knowledge rather than just data. How best to capture the role of the knowledge of all the agents in this and similar systems? What insights can we gain here using our knowledge-based approach?

VI Conclusion
As noted in the introduction to this work, the critical nature of many human-automation systems imposes a clear need for rigorous design and analysis tools that are practically useful. This is well recognized, and the development of such methods and tools has been an active area of research for many years. Unfortunately, the highly complex nature of many of these tools too often results in their isolation in academic and scientific arenas; consequently, practical system design remains to a large extent an ad hoc process.

To address this need, we have introduced and described the initial development of a novel approach to modeling and reasoning about these systems, one intended both to satisfy the need for formal rigor and to be sufficiently intuitive for practical use. Our initial results suggest that reasoning about these systems from the perspective of agent knowledge is indeed a viable and valuable approach.

VII References
1. Fagin, R., Halpern, J. Y., Moses, Y., and Vardi, M. Y., Reasoning about Knowledge, MIT Press, Cambridge, Massachusetts, 2003.
2. Halpern, J. Y. and Fagin, R., Modelling knowledge and action in distributed systems, Distributed Computing, Vol. 3, 1989, pp. 159-179.
3. Halpern, J. Y. and Moses, Y., Knowledge and common knowledge in a distributed environment, Journal of the ACM, Vol. 37, 1990, pp. 549-587.
4. Dwork, C. and Moses, Y., Knowledge and common knowledge in a Byzantine environment: crash failures, Information and Computation, Vol. 88, 1990, pp. 156-186.
5. Brafman, R. I., Latombe, J.-C., Moses, Y., and Shoham, Y., Knowledge as a tool in motion planning under uncertainty, in R. Fagin, ed., Theoretical Aspects of Reasoning about Knowledge: Proc. Fifth Conference, Morgan Kaufmann, San Francisco, California, 1994, pp. 208-224.
6. Ricker, S. L. and Rudie, K., Know means no: Incorporating knowledge into discrete-event control systems, IEEE Transactions on Automatic Control, Vol. 45, 2000, pp. 1656-1668.
7. Rudie, K., Lafortune, S., and Lin, F., Minimal communication in a distributed discrete-event system, IEEE Transactions on Automatic Control, Vol. 48, 2003, pp. 957-975.
8. Anderson, S. and Filipe, J. K., Guaranteeing temporal validity with a real-time logic of knowledge, Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems Workshops, IEEE Computer Society Press, Providence, Rhode Island, 2003.
9. Filipe, J. K., Felici, M., and Anderson, S., Timed knowledge-based modelling and analysis: on the dependability of socio-technical systems, in S. Bagnara, A. Rizzo, S. Pozzi, F. Rizzo, and L. Save, eds., Proceedings of the 8th International Conference on Human Aspects of Advanced Manufacturing: Agility & Hybrid Automation, Rome, Italy, 2003, pp. 321-328.
10. Degani, A. and Heymann, M., Formal verification of human-automation interaction, Human Factors, Vol. 44, 2002, pp. 28-43.
11. Degani, A., Heymann, M., Meyer, G., and Shafto, M., Some formal aspects of human-automation interaction, NASA Ames Research Center, 2000.
12. Rushby, J., Using model checking to help discover mode confusions and other automation surprises, Reliability Engineering and System Safety, Vol. 75, 2002, pp. 167-177.
13. Rushby, J., Crow, J., and Palmer, E., An automated method to detect potential mode confusions, 18th AIAA/IEEE Digital Avionics Systems Conference (DASC), St. Louis, MO, 1999.
14. Palmer, E., "Oops, it didn't arm." A case study of two automation surprises, in R. S. Jensen and L. A. Rakovan, eds., Proceedings of the Eighth International Symposium on Aviation Psychology, Columbus, Ohio, 1995, pp. 227-232.
15. Sheridan, T. B., Humans and Automation: System Design and Research Issues, John Wiley & Sons, Inc., Santa Monica, 2002.
16. Sheridan, T. B., Human supervisory control, in A. P. Sage and W. B. Rouse, eds., Handbook of Systems Engineering and Management, John Wiley & Sons, Inc., New York, 1999.
17. Stout, R., Cannon-Bowers, J. A., and Salas, E., The role of shared mental models in developing team situation awareness: implications for training, Training Research Journal, Vol. 2, 1996, pp. 85-116.
18. Anderson, J. R. and Lebiere, C., The Atomic Components of Thought, Lawrence Erlbaum Associates, Mahwah, NJ, 1998.
19. Rips, L. J., The Psychology of Proof: Deductive Reasoning in Human Thinking, MIT Press, Cambridge, 1994.
20. Hintikka, J., Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press, Ithaca, 1962.
21. Benthem, J. v., Logics for information update, Proceedings of the 8th Conference on Theoretical Aspects of Rationality and Knowledge, Siena, 2001, pp. 51-67.
22. Davis, E., Representations of Commonsense Knowledge, Morgan Kaufmann, San Mateo, CA, 1990.
23. Hobbs, J. R. and Moore, R. C., eds., Formal Theories of the Commonsense World, Ablex Publishing Company, Norwood, New Jersey, 1985.
24. Carroll, J. M. and Olson, J. R., Mental models in human-computer interaction, in M. Helander, ed., Handbook of Human-Computer Interaction, Elsevier, Amsterdam, 1988.
25. Gentner, D. and Stevens, A. L., eds., Mental Models, Erlbaum, NY, 1983.
26. Norman, D. A., Some observations on mental models, in D. Gentner and A. Stevens, eds., Mental Models, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1983.
27. Wickens, C. D. and Hollands, J. G., Engineering Psychology and Human Performance, Prentice Hall, Upper Saddle River, New Jersey, 2000.
28. Gopher, D. and Donchin, E., Workload: An examination of the concept, in K. Boff, L. Kauffman, and J. Thomas, eds., Handbook of Perception and Performance, Wiley, New York, 1986, pp. 41-1 to 41-49.
29. Besnard, D., Greathead, D., and Baxter, G., When mental models go wrong: co-occurrences in dynamic, critical systems, International Journal of Human-Computer Studies, Vol. 60, 2004, pp. 117-128.
30. Halpern, J. Y., Reasoning about knowledge: an overview, Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann Publishers Inc., Monterey, California, 1986, pp. 1-17.
31. Moses, Y., Reasoning about knowledge and belief, pp. 1-25.
32. Byrne, R. M. J., Mental models and counterfactual thoughts about what might have been, Trends in Cognitive Science, Vol. 6, 2002, pp. 426-431.
33. Byrne, R. M. J. and Egan, S. M., Counterfactual and prefactual conditionals, Canadian Journal of Experimental Psychology, Vol. 58, 2004, pp. 113-120.
34. Evans, J. S. B. T., In two minds: dual-process accounts of reasoning, Trends in Cognitive Science, Vol. 7, 2003, pp. 454-459.
35. Johnson-Laird, P. N., Mental models in cognitive science, Cognitive Science, Vol. 4, 1980, pp. 71-115.
36. Thompson, V. A. and Byrne, R. M. J., Reasoning counterfactually: making inferences about things that didn't happen, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 28, 2002, pp. 1154-1170.
37. Fagin, R., Halpern, J. Y., Moses, Y., and Vardi, M. Y., Knowledge-based programs, PODC '95, ACM, Ottawa, 1995.
38. Halpern, J. Y. and Moses, Y., Using counterfactuals in knowledge-based programming, Distributed Computing, Vol. 17, 2004, pp. 91-106.
39. Moses, Y., Resource-bounded knowledge (extended abstract), in M. Y. Vardi, ed., Proc. Second Conference on Theoretical Aspects of Reasoning About Knowledge, Morgan Kaufmann, San Francisco, California, 1988, pp. 261-276.
40. Moses, Y., Knowledge and communication (a tutorial), in Y. Moses, ed., Theoretical Aspects of Reasoning About Knowledge: Proc. Fourth Conference, Morgan Kaufmann, San Francisco, California, 1992.
41. Moses, Y. and Shoham, Y., Belief as defeasible knowledge, Artificial Intelligence, Vol. 64, 1993, pp. 299-322.
42. Cassandras, C. G. and Lafortune, S., Introduction to Discrete Event Systems, Kluwer Academic Publishers, Norwell, Massachusetts, 1999.
43. Clarke, E. M., Grumberg, O., and Peled, D. A., Model Checking, MIT Press, Cambridge, Massachusetts, 1999.
44. Clarke, E. M. and Wing, J. M., Formal methods: state of the art and future directions, ACM Computing Surveys, Vol. 28, 1996, pp. 626-643.
45. Harel, D., Statecharts: a visual formalism for complex systems, Science of Computer Programming, Vol. 8, 1987, pp. 231-274.
46. Sarter, N. B., Woods, D. D., and Billings, C. E., Automation surprises, in G. Salvendy, ed., Handbook of Human Factors & Ergonomics, 2nd ed., Wiley, 1997.
47. Byrne, M. D. and Kirlik, A., Using computational cognitive modeling to diagnose possible sources of aviation error, Aviation Human Factors Division, Institute of Aviation, University of Illinois, 2003.
48. Gibson, J. J., The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates, Hillsdale, 1986.
49. Konolige, K., Belief and incompleteness, in J. R. Hobbs and R. C. Moore, eds., Formal Theories of the Commonsense World, Ablex, Norwood, NJ, 1985.
50. Bainbridge, L., Types of representation, in L. P. Goodstein, H. B. Anderson, and S. E. Olsen, eds., Tasks, Errors and Mental Models, Taylor and Francis Ltd., London, 1988.
51. Woods, D. D. and Roth, E. M., Cognitive systems engineering, in M. Helander, ed., Handbook of Human-Computer Interaction, North-Holland, New York, 1988.
52. McCarthy, J. and Hayes, P. J., Some philosophical problems from the standpoint of artificial intelligence, Machine Intelligence, Vol. 6, 1969.
53. Christoffersen, K. and Woods, D. D., How to make automated systems team players, Advances in Human Performance and Cognitive Engineering Research, Elsevier Science Ltd., 2002, pp. 1-12.
54. Kiesler, S., Fostering common ground in human-robot interaction, Proceedings of the IEEE International Workshop on Robots and Human Interactive Communication (RO-MAN), 2005, pp. 729-734.
55. Klein, G., Feltovich, P. J., Bradshaw, J. M., and Woods, D. D., Common ground and coordination in joint activity, in W. R. Rouse and K. B. Boff, eds., Organizational Simulation, John Wiley, New York, NY, 2004 (in press).
56. Burke, J. L., Murphy, R. R., Rogers, E., Lumelsky, V. J., and Scholtz, J., Final report for the DARPA/NSF interdisciplinary study on human-robot interaction, IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, Vol. 34, 2004, pp. 103-112.
57. Concept of Operations for the Next Generation Air Transportation System, Joint Planning and Development Office, 2007, pp. 1-226.
58. Murphy, R. R., Human-robot interaction in rescue robotics, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 34, 2004, pp. 138-153.


