
“Where’s Farah?”: Knowledge silos and information fusion by distributed collaborating teams

Stephen C. Hayne & Lucy J. Troup & Sara A. McComb

Published online: 5 October 2010
© Springer Science+Business Media, LLC 2010

Abstract The Cognitively-Based Rapid Assessment Methodology (C-RAM) system manages multiple-user interactions as users work with multiple information sources. Further, it allows users to view, exchange, organize, and combine the information available, and it facilitates group decision-making. Three-member teams, randomly assigned to either the (a) view others’ whiteboards or (b) cannot view others’ whiteboards condition, completed an intelligence analysis and mission planning task. Each team member was given access to a virtual whiteboard populated with decision cards (DCards) containing intelligence information constrained to a specific area of expertise. DCards can be assessed (rated) for decision impact and importance, and team members have access to all DCards regardless of experimental condition. Team members who can view their teammates’ whiteboards during collaborative activities achieve significantly higher performance. When compared to teams unable to view others’ whiteboards, they move their own DCards less frequently, add fewer additional DCards to their own whiteboards, and rate others’ DCards less frequently. Additionally, rating one’s own DCards is the only process positively related to team performance.

Keywords Collaboration · Knowledge management · Shared cognition · Decision making

1 Introduction

Terrorism Informatics has been defined as “the application of advanced methodologies and information fusion and analysis techniques to acquire, integrate, process, analyze, and manage the diversity of terrorism-related information for national/international and homeland security-related applications” (Chen et al. 2008). While the study of terrorism requires multi-source data integrated with a variety of techniques, e.g. data mining, image and video processing, and language translation, at some point the human analyst must be involved in order to apply sensemaking and judgment (Weick 1995). The essence of the analyst’s information assessment is the process of distinguishing signals from noise and possibly detecting patterns (Klein 1993). In the military, intelligence analysts are constantly searching for signals that might suggest an adversary’s intentions. In each case, the analyst must hunt through a quantity of data, searching for meaningful patterns within the preponderance of noise (Klein et al. 2006a, b).

In most contexts, the content domain and volume of data available is too much for an individual analyst to consider. In these cases, it becomes necessary to divide the work and collaborate in a team. This implies information sharing, but simply sharing information is not enough (Ellis 2009). We advocate that analyst collaboration must be supported with a specific interaction model where the software systems are well-aligned to the cognitive abilities of individuals and teams. The Cognitively-Based Rapid Assessment Methodology (C-RAM, see Fig. 1 and description in a later section) and system was developed to support analyst collaboration on multi-source data (Hayne and Smith 2007). C-RAM, as implemented, provides both textual and visual methods for viewing these data and allows analysts to organize the information using different presentation modalities (articulatory-loop and visio-spatial). These separate presentations support the collection and fusion of information, differentiating C-RAM from many other collaborative systems (Yen et al. 2006), and are particularly appropriate for terrorism informatics.

S. C. Hayne (*)
College of Business, Colorado State University, Fort Collins, CO, USA
e-mail: [email protected]

L. J. Troup
Department of Psychology, Colorado State University, Fort Collins, CO, USA
e-mail: [email protected]

S. A. McComb
Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA
e-mail: [email protected]

Inf Syst Front (2011) 13:89–100, DOI 10.1007/s10796-010-9274-9

In this paper, we describe a study of the cognitive, collaborative and adaptive properties of this distributed socio-technical system of humans and computers when representing uncertain and subjective information, in an effort to improve collaborative information assessment. Specifically, we report the results of allowing analysts to view each other’s whiteboards as they work on a task where the team tries to determine if an individual is in fact a terrorist, and then find that individual’s specific location.

2 Prior research

2.1 Terrorism informatics

Terrorism is a difficult research area because terrorists’ activities are global and often disconnected along timelines, not to mention that terrorism sometimes cannot be distinguished from other forms of violence (Silke 2004; Schmid 2004). Determining which group is responsible for an act, whether a claim is credible, or what the motives were become incredibly difficult tasks (Ellis 2009; Mickolus 2002). Many terrorist groups have moved to a networked structure because highly decentralized structures allow for more autonomy and fluid decision-making. Since only small parts of these networks are revealed through difficult-to-trace financial transactions, snippets of communication, or third-hand rumors, analysts find them very difficult to visualize and they “require new methods and tools to help them make sense of such a complex threat environment” (ibid., page 143).

Fig. 1 C-RAM system whiteboard

C-RAM is a new method with an associated tool and is not only based on cognitive theory but, as implemented, provides a solution for case management by allowing multiple working theories, or hypotheses, to exist at any one time. Within each hypothesis, multiple information items exist that are either supportive or not supportive, and are denoted by an assessment (a tuple representing the belief, uncertainty, or disbelief) given by the analyst. These situational assessments are aggregated using a belief algebra and drive “consensus” both algebraically and visually (Hayne et al. 2005).
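To make the assessment tuple concrete, the sketch below renders it as three non-negative weights (belief, disbelief, uncertainty) that sum to one. This is an illustration only: the actual belief algebra used by C-RAM is defined in Hayne et al. (2005) and is not reproduced here, so the simple averaging used to combine analysts' tuples is an assumption, not the system's aggregation rule.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """One analyst's assessment of an information item (weights sum to 1.0)."""
    belief: float       # evidence supports the hypothesis
    disbelief: float    # evidence contradicts the hypothesis
    uncertainty: float  # evidence is inconclusive

def aggregate(assessments: list[Assessment]) -> Assessment:
    """Combine several analysts' assessments of the same hypothesis.

    C-RAM uses a belief algebra (Hayne et al. 2005); a plain average is
    used here only as a stand-in to show the shape of the data.
    """
    n = len(assessments)
    return Assessment(
        belief=sum(a.belief for a in assessments) / n,
        disbelief=sum(a.disbelief for a in assessments) / n,
        uncertainty=sum(a.uncertainty for a in assessments) / n,
    )

# Example: three analysts rate the hypothesis "Farah is associated with al-Qaeda".
consensus = aggregate([
    Assessment(belief=0.7, disbelief=0.1, uncertainty=0.2),
    Assessment(belief=0.5, disbelief=0.2, uncertainty=0.3),
    Assessment(belief=0.6, disbelief=0.1, uncertainty=0.3),
])
print(consensus)  # Assessment(belief=0.6, disbelief=0.133..., uncertainty=0.266...)
```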

In order to combat terrorism, organizational learning models and theory need to inform the design of terrorism informatics systems (Trujillo and Jackson 2008). Terrorist groups are learning, and so must we. As such, C-RAM conforms to the model of information acquisition, distribution, interpretation and storage, where “members of a group develop new knowledge about their activities and the outcomes they are generating, share it among group members, incorporate the new knowledge into the routines of the group, and preserve this knowledge” (ibid., page 177).

2.2 Working memory

One of the goals of the C-RAM system is to provide enhancement of existing cognitive processing; in particular, to create a mechanism by which the storage, organization and retrieval of large amounts of information can be facilitated. In any team information processing task requiring the dissemination of shared information, this needs to be achieved at both the individual and team level. Providing a cognitive workspace that affords individual and team cognitive effort support is crucial to support the information management and decision making processes implicated in terrorism informatics.

Central to the C-RAM system is the notion that the result of an individual’s cognition can be represented on a central workspace. This provides a representation that can be equated to the cognitive processes required to make an informed decision, specifically attentional and memory processes. C-RAM attempts to drive a central workspace that is somewhat representative of Working Memory (Baddeley and Hitch 1974; Baddeley 1992, 1998; Baddeley et al. 2001).

Working memory is considered a central mechanism by which information is held and processed before either being disposed of or consolidated for future retrieval (Baddeley and Hitch 1974; Baddeley 1992, 1998; Baddeley et al. 2001). It is a relatively temporary or “short-term” representation that allows information being processed to be buffered and, if relevant, passed on to more permanent storage for later retrieval.

How information is represented in working memory is a crucial aspect of human information processing. It has been demonstrated that humans create cognitive structures in memory, called “chunks,” where many related pieces of data are aggregated (Chase and Simon 1973; Simon 1974). The use of these cognitive structures is vital for encoding domain knowledge, pattern-recognition of situations, and selective search techniques for retrieval (Gobet and Simon 1998).

Once those long-term memories are “active” in working memory, it also takes some ongoing effort to keep them active. These efforts are expended by the cognitive processor and are generally referred to as “attention,” of which humans have a finite amount (Wickens 1984; Wickens and Liu 1988). Attention can be consciously directed to a variety of tasks, such as retrieving memories from long-term storage, maintaining memories in short-term storage, directing sensory activities (e.g. looking or listening carefully), and controlling motor processes. We have difficulty dividing attention among several tasks, or attending to all the data provided by our senses (Broadbent 1958; Treisman 1969). As a result, we often do not perceive most of the information that is available to us (Lavie 1995). As Nobel Laureate Herb Simon has said, “a wealth of information creates a poverty of attention” (Varian 1995).

We concentrate on working memory in this paper because it is limited to two separate stores of relatively small capacity (Baddeley 1992, 1998; Baddeley et al. 2001). Baddeley’s (ibid) terminology has been generally adopted as the standard; he refers to these separate resources as “visio-spatial” and “articulatory-loop” memory. While Chase and Simon (ibid) adopted Miller’s (1956) estimate for the size of each working memory store as about 7±2 items, more recently, the size of the visio-spatial store has been estimated as a maximum of 4 items (Zhang and Simon 1985; Gobet and Clarkson 2004; Gobet and Simon 2000). As they show, these limitations are additive; it is possible to simultaneously hold about 4 items in visio-spatial memory while holding approximately 7 items in articulatory-loop memory. One goal of C-RAM is to facilitate this process by allowing cognitive processing in individuals and teams to be “distributed”, and therefore increase task performance (more items can be “held” in memory). Distributed Cognition is an extension of the underlying human construct of Working Memory where we think of cognitive processing as being part of a wider “distributed” cognitive system (Hutchins 1991, 1995). Distributed cognition in relation to C-RAM suggests that the cognitive processing of a team would be distributed across a variety of cognitive subsystems, which would enable teams of individuals to share and disseminate information more effectively.
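As a rough quantitative illustration of the additive-capacity claim above, the sketch below treats the two stores as independent budgets. The 4-item and 7-item figures are the estimates cited in the text; treating them as hard per-store limits is a simplification for illustration only.

```python
VISIO_SPATIAL_CAPACITY = 4      # estimate cited above (Zhang and Simon 1985)
ARTICULATORY_LOOP_CAPACITY = 7  # Miller's (1956) 7 +/- 2 estimate

def fits_in_working_memory(visual_chunks: int, verbal_chunks: int) -> bool:
    """Additive stores: each has its own budget rather than sharing one pool."""
    return (visual_chunks <= VISIO_SPATIAL_CAPACITY
            and verbal_chunks <= ARTICULATORY_LOOP_CAPACITY)

print(fits_in_working_memory(4, 7))  # True: ~11 chunks held across both stores
print(fits_in_working_memory(6, 3))  # False: the visio-spatial store is overloaded
```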

2.3 Transactive memory systems

To achieve the benefits of collective recall, the individual group members require a system for encoding, storing, retrieving, and communicating with the group. This system has been called a transactive memory system or TMS (Wegner 1987) and supports the management of knowledge silos (Brandon and Hollingshead 2004). TMS includes the cognitive abilities of the individuals as well as meta-memory, that is, the beliefs that the members have about their memories. Thus, the members of a group have access to the collective memory by virtue of knowing which person knows which information. By having a shared awareness of who knows what information, cognitive load is reduced because each individual only has to remember “who knows what” in the team and not the information itself. Greater access to expertise can be achieved, and there is less redundancy of effort (Wegner et al. 1991). Mohammed and Dumville (2001) point out that developing a transactive memory system reduces the rehashing of shared information and allows for the pooling of unshared information, a result that is contrary to Stasser et al. (1995).
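The “who knows what” directory at the heart of a transactive memory system can be made concrete with a small sketch. The mapping below is hypothetical and mirrors the three intelligence roles used later in the experiment; it only illustrates the load reduction described above, in which each member stores a pointer to the expert rather than the expert’s information.

```python
# Hypothetical transactive-memory directory for a three-analyst team:
# each member remembers which teammate owns a knowledge silo,
# not the contents of the silo itself.
expertise_directory = {
    "satellite intelligence": "Analyst A",
    "human intelligence": "Analyst B",
    "additional intelligence": "Analyst C",
}

def who_knows(topic: str) -> str:
    """The TMS lookup: return the teammate to ask about a topic."""
    return expertise_directory.get(topic, "unknown -- ask the whole team")

print(who_knows("satellite intelligence"))  # Analyst A
```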

2.4 Team recognition primed decision making

In order to integrate these concepts into a process model, Hayne et al. (2002) proposed their Team Recognition Primed Decision Making model (Team RPD in Fig. 2). They adapted Klein’s (1993) model of individual decision-makers to capture how chunking structures in transactive memory systems might be utilized to compensate for cognitive limitations in a group decision-making situation. Klein’s model emphasizes situation assessment through pattern recognition (using recall from memory). Hayne et al. (2005) showed that teams perform essentially the same steps as individuals. Specifically, they assess the situation, share these situation assessments among members, and the individual team members select a response by adapting a strategy from their previous experience or by creating a new response (plan). Finally, they execute their plan and observe the results. Hayne et al.’s (ibid) empirical research has shown that when teams use a simple “chunk” tool to communicate and come to consensus about which pattern they face during situation assessment, they achieve better team processes and performance.

Note that situation assessment and response selection occur within the individual, whereas situation sharing, communication and execution occur among the individual team members (see Fig. 2). Individuals use an internal cognitive process to perform situation assessment, but must use an external process when collaborating with the team. The stimulating structures used in situation sharing may also trigger internal cognitive processes. Finally, when acting as a team executing the response, individuals make incremental actions that feed back to the team. This should lead to explicit awareness of a variety of constructs such as the notion of “Team”, “Task” and “Goals” (Salas and Fiore 2004).

Kaempf et al. (1996) found that individual experts spent most of their time scanning the environment and developing their situation assessments. Relatively little time was spent selecting and implementing responses. If the situation assessment task has the same relative importance for teams as for individuals, then the initial focus for team decision support should be directed towards the development of tools to support collective situation assessment. For individual members, these tools should be designed to reduce the cognitive effort required, attend to the highest priority tasks, and remember the most important features of the task environment. For the team, we suggest these tools should facilitate sharing of assessments through placement of public representations.

Public representations have been studied before in other contexts (including distributed cognition). Interestingly, Grassé (1959) coined the term stigmergy, referring to a class of mechanisms, or stimulating structures, that mediate animal-animal interactions. The concept has been used to explain the emergence, regulation, and control of collective activities of social insects (Susi and Ziemke 2001). Social insects exhibit a coordination paradox: they seem to be cooperating in an organized way. However, when looking at any individual insect, they appear to be working independently, as though they were not involved in any collective task. The explanation of the paradox provided by stigmergy is that the insects interact indirectly by placing stimulating structures in their environments. These stimulating structures trigger specific actions in other individuals (Theraulaz and Bonabeau 1999). Stigmergy appears to be the ultimate example of reduction of cognitive effort because social insects, having essentially no cognitive capability, are able to perform complex collaborative tasks.

Fig. 2 Team recognition-primed decision making (elements: Situation Assessment, Response Selection, Pattern Sharing, Execution; Individual vs. Team; Stimulating Structure / Cognitive Chunk)

We advocate that this concept can be applied to human teams (i.e., when a stimulating structure is placed in the external environment by an individual, other team members can interpret it and take appropriate action, without the need for specific communication or coordination). Stigmergy in this form is complementary to distributed cognition.

2.5 C-RAM

C-RAM is both a methodology and a software implementation that embeds the concepts from Team RPD, where teams will rapidly assess situations and share information patterns while making decisions. C-RAM follows a long line of group software research which has progressed somewhat independently along two parallel tracks: Group Decision Support Systems (GDSS) and Computer-Supported Cooperative Work (CSCW). DeSanctis and Gallupe (1987) defined GDSS as an “interactive computer-based system that facilitates the solution of unstructured problems by a set of decision-makers working together as a group”. Tasks commonly supported by GDSS systems include: brainstorming, idea organization, voting, total quality management, and communications. CSCW is defined by Ellis et al. (1991) as “computer-based systems that support two or more users engaged in a common task (or goal) and that provide an interface to a shared environment.” CSCW applications include: concurrent programming, shared video, real-time drawing and whiteboarding, collaborative writing, telepresence, and awareness. These technologies have also been called Computer Mediated Communication (Siegal et al. 1986) and Group Support Systems (Jessup and Valacich 1993). Pendergast and Hayne (1999) suggested using Groupware (Johnson-Lenz and Johnson-Lenz 1982), and extending it to include the dimensions of “communication (pushing or pulling information), collaboration (shared information leading to shared understanding), coordination (delegation of task, sequential sign-offs, etc.), and control (management of conflict)… groupware implies a certain level of control that might otherwise be imposed by the group participants themselves” (page 312).

There have been many CSCW whiteboard systems developed (for examples see Stefik et al. 1987; Ishii and Miyake 1991; Kobayashi and Ishii 1993; Hayne and Pendergast 1995; Hayne and Ram 1995; Roseman and Greenberg 1996). More recently, Keel (2007) has proposed putting cards on a whiteboard and having software agents make inferences about their placement; in C-RAM, however, human analysts make inferences about their placement and their assessments.

The visio-spatial component of C-RAM is a virtual “whiteboard” where evidence items can be manipulated and compared against other team members’ assessments and evidence organization. An easy way to conceptualize the whiteboard is to think of arranging note cards on a playing table. Cards can be added, stacked, overlapped, arranged by certain criteria, or removed from the table entirely. Similarly, each information item (here called a Decision Card, or DCard) can be arranged on the whiteboard in a similar fashion. For example, an analyst who specializes in satellite imagery can view relevant information and rate the importance of that information in regard to a particular hypothesis.

Analysts assess the value of a DCard by adjusting their level of belief, disbelief, and uncertainty that the information contributes to the current situation. The analyst can also rate the trust placed in the source of the information. The result is mapped to a DCODE impact/importance value (Fleming 2003, 2008), which is graphically depicted on the DCard (the red, green or yellow bars shown on the DCards in Fig. 1). DCODE assessments have previously been shown to provide superior decision-making capability with individuals in an information assessment task (Hayne and Smith 2007).
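To make the assessment-to-display mapping concrete, the sketch below shows one hypothetical way a (belief, disbelief, uncertainty) tuple plus a source-trust rating might be reduced to the red, green, or yellow bar shown on a DCard. The thresholds and the weighting by trust are illustrative assumptions only; the actual DCODE computation is defined in Fleming (2003, 2008).

```python
def dcode_color(belief: float, disbelief: float, uncertainty: float,
                trust: float) -> str:
    """Map an assessment tuple plus source trust to a DCard bar colour.

    Hypothetical rule: weight the net belief by trust in the source, then
    bucket it. The real DCODE mapping (Fleming 2003, 2008) differs.
    """
    net = (belief - disbelief) * trust  # trust assumed to lie in [0, 1]
    if net > 0.3:
        return "green"   # strongly supports the hypothesis
    if net < -0.3:
        return "red"     # strongly contradicts the hypothesis
    return "yellow"      # uncertain or low impact

print(dcode_color(belief=0.7, disbelief=0.1, uncertainty=0.2, trust=0.9))  # green
```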

The important difference between analysts using C-RAM to model their knowledge and analysts meeting face-to-face to discuss their knowledge is that C-RAM forces them to represent their knowledge in a stimulating structure (the DCard). Thus, team members are not constrained to merely discussing “common” knowledge, but can look at a team member’s whiteboard and focus on the knowledge that they do not hold in common, which is intended to mitigate the effect previously reported (Dennis 1996; Stasser et al. 1995; Stasser et al. 2000; Wittenbaum and Stasser 1996). This focus is further directed if the DCard is assessed using DCODE, because their colleague will know where to look in the knowledge space (or silo). We suggest that, together, these mechanisms act to self-synchronize the team and enhance collaboration and decision-making.

We investigate the effects of these C-RAM features on team performance and collaboration by utilizing an existing scenario developed for analysis of team performance by Warner et al. (2008). The scenario was integrated into C-RAM, and the effects on “team” intelligence of sharing whiteboards versus restricted access to each other’s whiteboards were studied. Where team members have full access to each other’s individual assessments represented by their “whiteboards”, we expect that overall performance on the task dictated by the scenario will be enhanced.

As such, we have developed the following hypotheses:

H1. Shared whiteboard access will lead to an increase in team communication in the intelligence task.

H2. Shared whiteboard access will lead to an increase in the volume of team processes required to gather information and create knowledge in the intelligence task.

H3. Shared whiteboard access will lead to an increase in team performance in the intelligence task.

H4. Increases in the number of team processes required to gather information and create knowledge will be related to increases in team performance in the intelligence task.

3 Methods

3.1 Apparatus and materials

The C-RAM system was designed and created by 21st Century Systems International in partnership with a major university. It is a bespoke software tool developed for the collection, fusion, sharing, and detection of patterns in information used by intelligence analysts to address hypotheses about given situations. As previously stated, the software enables analysts within specific domains to view, sort, rate and share information pertaining to those specific domains.

C-RAM can manage multiple whiteboards for multiple tasks. There are four types of whiteboards: private, shared, public, and complete:

• A private whiteboard is a whiteboard only viewable/editable by one analyst.

• A shared whiteboard is viewable/editable by multiple analysts. Shared whiteboards are created when an analyst invites one or more others into a new whiteboard. A private whiteboard can be copied into a shared workspace at any time.

• A public whiteboard is viewable by all team members and editable by the original owners who published it. A public whiteboard is created after the private or shared whiteboard has been vetted and accepted by the analyst(s) working on it, and is another separate copy of the private/shared whiteboard.

• A “complete” whiteboard is the agreed-upon overview of the hypotheses as a whole; it is usually vetted by the person responsible for the hypothesis and may be edited by any team member. Complete whiteboards may also be made viewable to those outside the team.

For this experiment, we restrict the whiteboards to just private and shared. The articulatory-loop component of C-RAM is a threaded discussion/chat tool linked to each whiteboard task. Users can carry on text discussions while observing and manipulating their whiteboard.
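A minimal data-model sketch of the whiteboard visibility rules described above follows. The class and field names are hypothetical (the paper does not describe C-RAM’s internals), and only the private and shared types were enabled in the experiment.

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"    # viewable/editable by one analyst only
    SHARED = "shared"      # viewable/editable by invited analysts
    PUBLIC = "public"      # viewable by all, editable by the publishing owners
    COMPLETE = "complete"  # agreed team overview, editable by any team member

@dataclass
class Whiteboard:
    owner: str
    visibility: Visibility
    members: set[str] = field(default_factory=set)  # analysts invited to share

    def can_view(self, analyst: str) -> bool:
        """Apply the visibility rules sketched from the description above."""
        if self.visibility is Visibility.PRIVATE:
            return analyst == self.owner
        if self.visibility is Visibility.SHARED:
            return analyst == self.owner or analyst in self.members
        return True  # public and complete boards are viewable team-wide
```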

This software runs in a browser and was viewed using Dell computers running Vista (IE 7), with two 21-inch Viewsonic monitors. The browser with C-RAM was full-screen on the left-hand monitor, with chat implemented on the right monitor to allow team members to communicate with each other while executing the task.

The task was developed by Warner et al. (2008) and is an event-based, multi-dimensional, multi-domain scenario with embedded information uncertainty and cognitive overload called “The Special Operations Reconnaissance Scenario (SOR): Intelligence Analysis and Mission Planning” (see Appendix 1). It was designed specifically to empirically investigate the processes of team collaboration.

The information from the scenario was uploaded into the C-RAM system to form the basis of the intelligence investigated by our analysts. Each one of the pieces of information described in the Appendix was converted to a DCard in C-RAM and put into the relevant team member’s whiteboard. DCards are “owned” by the role (or knowledge silo) they are initially assigned to, even though all DCards are available in the common data store. As shown in the Human-Intelligence Analyst’s whiteboard in Fig. 1, each DCard is a visio-spatial summary (reduced) representation of the underlying information item. Users double-click a DCard to open the source document in a viewer for reading. Analysts can add other analysts’ DCards to their whiteboard (remember that they start out with their “own” DCards), organize the DCards in their whiteboard, and can rate their own (or other analysts’) DCards using DCODE.

3.2 Design

An independent measures design was employed to allow us to make direct comparisons between a condition where analysts were able to use C-RAM to its fullest potential with regard to sharing and viewing another analyst’s intelligence and manipulations of that information, and a condition where they were isolated and unable to make comparisons to their team members. Due to the problem-solving nature of the task dictated by the pre-loaded scenario, it was envisaged that carry-over effects from prior exposure would have a significant effect on the task outcome. Therefore, a between-subjects design was preferred.

3.3 Participants

Participants were recruited from a major university’s participant pool. Recruitment took place according to regulations set by the University Institutional Review Board, Human Subjects Committee. Participants were compensated with course credit towards their degree. Forty-four male and female participants were recruited in total, all with normal or corrected-to-normal vision, ranging from 19 to 36 years in age. The majority of the participants were female (71%), the average age was 19.6 years, and participants had little previous experience with each other. No significant differences in participant demographics exist between treatments.

3.4 Procedure

The aim of the study was to create teams of analysts who would complete two tasks using the C-RAM system. Participants were randomly allocated to teams of three. On arrival at the Collaboration and Cognition laboratory, participants were randomly allocated to one of two team treatments: Team Condition A (Viewing Other WB), where full access was given to the C-RAM system and analysts could freely access other team members’ shared whiteboards (5 teams), or Team Condition B (No Viewing Other WB), where analysts were only able to view their own private whiteboards (6 teams). Two teams of two persons each and seven individuals who participated due to attendance shortfalls were not included in this analysis.

In both conditions, participants were further randomly assigned one of three possible roles: Satellite Intelligence, Human Intelligence, or Additional Intelligence. The relevant information cards were then made available to each of these roles. In both conditions, team members could at any time add additional intelligence cards from the complete inventory, including information from other analysts’ “pools” of intelligence. Participants were not “encouraged” to use C-RAM in any particular way (e.g. while they were told how to rate a DCard, they were not told that they should).

The laboratory was set up so that team members could not see or talk to each other. They were informed that to communicate with each other, they could utilize the shared whiteboard function (Condition A) and/or the chat function (Conditions A & B). A standardized set of instructions was read, introducing the experiment, specifically the scenario that they were encountering, along with C-RAM training.

After the training phases, participants were informed that they had 45 minutes for the Task 1 hypothesis: “Is Farah associated with al-Qaeda?” After completion of this phase, a team response was recorded and the second task began. Task 2 involved the team making a decision: “Locate Farah in a particular place, at a certain time.” Teams were allocated 20 minutes to complete Task 2. After this task was complete, team responses were recorded. Participants were then debriefed and dismissed.

3.5 Measures

Team performance Teams either provided an answer to the three questions/instructions associated with the two Task hypotheses or indicated that they did not know. Correct answers were scored 1 point, incorrect answers were scored 2 points, and “do not know” answers were scored 3 points. The team performance score was calculated by summing the team’s scores for the three questions and subtracting the sum from 10. This reverse coding was done to transform the performance score to a 7-point scale where 7 was the best performance (i.e., all three responses were correct) and 1 was the worst performance (i.e., the team did not know how to respond to all three requests for information).
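Stated compactly, with correct = 1, incorrect = 2, and “do not know” = 3, a team answering three questions scores between 3 and 9, so subtracting from 10 yields the reverse-coded 1–7 scale described above. A minimal worked sketch of that rule (the answer labels are placeholders):

```python
POINTS = {"correct": 1, "incorrect": 2, "dont_know": 3}

def team_performance(answers: list[str]) -> int:
    """Reverse-coded score: 7 = all three correct, 1 = all three 'do not know'."""
    return 10 - sum(POINTS[a] for a in answers)

print(team_performance(["correct", "correct", "correct"]))        # 7
print(team_performance(["correct", "incorrect", "dont_know"]))    # 4
print(team_performance(["dont_know", "dont_know", "dont_know"]))  # 1
```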

Team processes To assess the processes team members used to gather information and create knowledge, a log of all actions executed during interactions with C-RAM was recorded and time stamped. The logs were analyzed to gather the number of occurrences of the process events of interest. Specifically, information access is measured as (1) the number of source documents team members view from their own DCards, (2) the number of source documents team members view from others’ DCards, and (3) the number of DCards a team member adds to her/his whiteboard. Knowledge creation is measured as (1) the number of DCard movements in the whiteboard, (2) the number of ratings team members make to their own DCards, and (3) the number of ratings team members make to others’ DCards.
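These process measures are simple counts over the time-stamped action log. The sketch below shows how such counts might be derived; the event names and log format are hypothetical, since the paper does not document C-RAM’s log schema.

```python
from collections import Counter

# Hypothetical log entries: (timestamp_ms, actor, event_type, dcard_owner)
log = [
    (1000, "A", "view_source", "A"),
    (1450, "A", "view_source", "B"),
    (2100, "A", "move_dcard", "A"),
    (3300, "A", "rate_dcard", "A"),
    (4000, "A", "add_dcard", "B"),
]

def process_counts(log, analyst):
    """Tally one analyst's events, split by whether the DCard is their own."""
    counts = Counter()
    for _timestamp, actor, event, owner in log:
        if actor != analyst:
            continue
        whose = "own" if owner == analyst else "others"
        counts[f"{event}_{whose}"] += 1
    return counts

print(process_counts(log, "A"))
# e.g. view_source_own: 1, view_source_others: 1, move_dcard_own: 1, ...
```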

Controls For the regression analyses, we controlled for treatment condition (1 = view other WB and 2 = no view other WB) and the amount of previous experience the team members had with each other (1 = no previous experience with my teammates to 6 = worked on several projects with both teammates).

4 Results

Data were collated and analyzed in terms of overall team performance. To begin, we compared team processes and performance across conditions. We then regressed team performance onto the processes. The results of the analyses are described below.

4.1 Communication comparison

In Hypothesis 1, we predict that communication will increase when team members have shared whiteboard access. To test this hypothesis, analyses of the chat logs were conducted. Initial comparisons between treatments for the amount of chat generated within a team suggest that teams able to view each other’s whiteboards had more discussion, with a mean of 167.2 lines, whereas teams not viewing other whiteboards had a mean of 86.3 lines (t=2.57, p<.02). Thus, Hypothesis 1 is supported.

4.2 Team processes comparison

Our second hypothesis suggests that the volume of team processes will increase when team members have shared whiteboard access. To test this hypothesis, all user actions were logged (with millisecond timestamps) and analyzed. ANOVA results for differences between the means for some user actions were significant, albeit in the opposite direction of what was hypothesized. As seen in Table 1, Own DCard Moves (F=5.01, p<.0325), Number of DCards Added (F=5.35, p<.028), and Others’ DCards Rated (F=4.78, p<.035) occurred significantly more often in the cases where team members did not have access to their teammates’ whiteboards. Thus, Hypothesis 2 is not supported.

4.3 Team performance comparison

ANOVA was performed on the data to ascertain whether team performance was better when team members have shared access to each other’s whiteboards (Hypothesis 3). In comparing teams between treatments (viewing other team members’ whiteboards, or not), a significant difference between the two treatments was found (F=51.45, p<.0001). Specifically, teams that could view each other’s whiteboards achieved significantly higher performance (M=6.4, SD=.51) than teams without access (M=4.8, SD=.71), thereby supporting Hypothesis 3.

4.4 Regression results

Hypothesis 4 states that team performance will be positively related to the volume of team processes undertaken by the team to gather information and create knowledge. We ran two regression analyses to test the effects of information access (see Table 2) and knowledge creation (see Table 3) on team performance, respectively. The results indicate that rating one’s own DCards is positively related to team performance. Viewing one’s own source information, adding new DCards, and reorganizing one’s own DCards are all negatively related to team performance. Prior experience with team members positively affected performance. Thus, Hypothesis 4 received partial support.

5 Discussion and conclusions

The goals for this study were to evaluate the C-RAM system for managing multiple user interactions as users work with multiple information sources, both visually and textually, and in particular to evaluate the effectiveness of the whiteboard function within the system for sharing and disseminating information.

In summary, being able to view other analysts’ whiteboards leads to significantly higher performance on the SOR task. Every single team in this treatment was able to discover that Farah was associated with al-Qaeda. Furthermore, these teams often accurately determined Farah’s location at a given time. Teams unable to view each other’s whiteboards were able to do this less than half of the time. We suggest that the explicit partition of DCards to analyst whiteboards (knowledge silos) led to explicit support of transactive memory (Wegner 1987; Brandon and Hollingshead 2004). Because team members could look at their teammates’ whiteboards, they knew exactly “who had what” information.

Regression analysis also clearly showed that rating one’s own DCards has a positive effect on team performance. When others view your whiteboard, they see the DCODE ratings on the DCards and are able to quickly (cognitively efficiently) determine which DCards to focus on.

In terms of process measures, teams that could not view each other’s whiteboards were more likely to undertake information gathering and knowledge creation processes, contrary to our hypothesis. For example, team members more often added and moved DCards within their whiteboard. This follows from Team RPD because if “I can’t see your whiteboard (and your DCards), I will seek out more information when building situational awareness in order to accomplish the task.” In other words, the teams without shared access need to undertake more information gathering and knowledge creation processes than their counterparts who had shared access. But adding DCards to one’s whiteboard is cognitively expensive and difficult, since these DCards are more chunks to manage and remember (Chase and Simon 1973; Zhang and Simon 1985; Gobet and Clarkson 2004; Gobet and Simon 2000). The extra DCards have to be moved around an analyst’s existing DCards, perhaps disrupting their original organization. This takes even more cognitive resources away from the task. Clearly, our results show that such cognitive load may be eased when access is shared, possibly because information gathering and knowledge creation can be accomplished implicitly by looking at teammates’ workspaces.

Table 1 Process data comparisons

Process                 Viewing Other WBs (Mean)   No Viewing Other WBs (Mean)   Statistics
Own Whiteboard Views    13.1                       10.9                          F=1.19, p<.284
Own DCard Moves         116.5                      164.4                         F=5.01, p<.0325 *
Own Source Views        20.2                       18.7                          F=0.14, p<.707
Own DCards Rated        3.7                        4.3                           F=0.31, p<.579
DCards Added            10.1                       15.6                          F=5.35, p<.028 *
Others’ Source Views    21.0                       23.4                          F=0.54, p<.46
Others’ DCards Rated    1.5                        3.4                           F=4.78, p<.035 *

When these DCards are added to a whiteboard, individuals are more likely to rate the added DCard(s) than are team members who can view each other’s whiteboards. We suggest that analysts rated the DCards added to their whiteboard more often than their “own” DCards because they were attempting to communicate with each other using the DCODE public representation or stimulating structure. Rating the DCard may draw attention to it, i.e. stigmergy (Grassé 1959) and stimulating structures (Hayne et al. 2005; Theraulaz and Bonabeau 1999), thus leading to increased distributed cognition (Hutchins 1991, 1995).

Similarly, continually reorganizing the orientation of the DCards on the whiteboard distracts the individual’s attention from the task (Dolcos and McCarthy 2006). It is interesting to note, however, that viewing one’s source information is negatively correlated with performance, because this is not what would be expected. Knowing more about the information represented by the DCard should lead to better decision making. It is possible that continued re-reading of this content is related to the extra cards added to the whiteboard, because team members can’t hold all this information in working memory.

The use of chat was different between treatments. When viewing other analysts’ whiteboards, participants interacted more. Being able to see that other people didn’t have the same information on their whiteboards appears to prompt users to engage in discussion. We suggest that when analysts didn’t know what was on others’ whiteboards, they didn’t know what to share. This result is counter to Stasser et al. (1995), where experts did not share relevant information that was not in common. It is possible that by being able to view another’s whiteboard, the analyst is able to “know who knows what” and thus can detect what those other analysts don’t know, and share it.

Because the knowledge chunks that represent distributed cognition are embedded on an individual’s whiteboard, there is enough information to suggest that the knowledge available between team members is explicit. This enables individuals to confidently focus on their own silo. In other words, “I know enough about what you know to know what you know and to know that you don’t know what I know. This allows me to focus on what I know.”

Rummaging for the relevant bits of information about terrorist activities from the myriad of representations and channels is a daunting task for any team, much less an individual analyst. C-RAM is a powerful tool that facilitates team situation assessment through cognitive alignment of distributed and shared chunk representations.

Table 2 Information access regression results

                                     Standardized β   Statistics
Model                                                 F=22.0, p<.0001*; R²=0.80
Treatment Condition                  −0.66            t=−7.20, p<.0001*
Previous Experience with Teammates    0.20            t=2.27, p=.03*
Own Source Views                     −0.24            t=−2.47, p=.02*
Others’ Source Views                 −0.17            t=−1.36, p=.18
DCards Added                         −0.21            t=−1.71, p=.10*

Table 3 Knowledge creation regression results

                                     Standardized β   Statistics
Model                                                 F=19.47, p<.0001*; R²=0.78
Treatment Condition                  −0.73            t=−7.54, p<.0001*
Previous Experience with Teammates    0.23            t=2.33, p=.03*
Own DCard Moves                      −0.27            t=−2.73, p=.01*
Own DCards Rated                      0.17            t=1.7, p=.10*
Others’ DCards Rated                 −0.05            t=−0.54, p=.59

Future research might include some measures of basic cognitive performance, for example working memory capacity. We also suggest examining the organizational structure of individual whiteboards, because the way the DCards are represented may affect shared cognition (Cannon-Bowers and Salas 2001).

5.1 Limitations

The normal concerns regarding the use of students as subjects could be viewed as a limitation of the study (external validity). However, we feel that basic cognitive processes and limitations apply across all populations. Another possible weakness of this study is that the intelligence analysis experimental task may be considered too simple. But, by having 50 different pieces of information in either image or text form with differing levels of revealed information, and only chat communication, our pilot studies demonstrated that the task had enough complexity to be challenging within a two-hour time block. Thus, the task required effortful cognition of the sort that is typical of many naturalistic domains. Further, we are confident that we simulated an appropriate naturalistic decision making environment through the use of properly aligned incentives. The groups in this study were ad hoc, yet they exhibited spirited, cohesive, collective identity during de-briefing.

Acknowledgements This research is partially supported by Dr. Mike Letsky at the Office of Naval Research.

Appendix 1: The following text was taken from Warner et al. (2008) and describes the SOR

“In developing the SOR scenario an assessment was made of the types of cognitive tasks and decisions that were involved in intelligence analysis and mission planning. The assessment started with using Pirolli’s (2005) unclassified cognitive task analysis for intelligence analyst together with the advice Pirolli obtained from intelligence analysts at the Naval Postgraduate School. The results of this analysis were integrated with results from St. John et al. (2006) unclassified SLATE scenario. All this information was reviewed by Lt. Ford, an intelligence officer at the Mission Support Center, Naval Amphibious Base, Coronado. Updates were made to the types of tasks, information and decisions required by intelligence analyst and mission planners, which served as the foundation for the SOR scenario.” (page 8, ibid)

“All the information used for storyboarding was taken from unclassified open sources. In addition, all names were changed to reflect fictitious names along with dates. All photos were also changed, using photoshop, so that all pictures are fictitious. … The text of the SOR scenario was written around a story of “Denkapsa Farah”. The story starts May 26, 2006 where local intelligence indicates that an al-Qaeda element is reforming in the town of Disisabad in Eastern Afghanistan. This group may be attempting to strike a deal with a local, coalition-supported warlord, Denkapsa Farah. The overall instructions to the scenario problem solving team was: “Based on the intelligence provided, work together as quickly and accurately as possible as a team to:

(1) Determine if Farah has an association with al-Qaeda (Task 1—1.5 hour)

(2) Determine Farah location at a specific time (Task 2—30 minutes)” (page 9, ibid)

“The mission statement above provides the team members with the tasks they are to complete. The general background is a brief history of both the characters in the scenario and real events (such as the September 11 attacks) and people (Bin Laden). The other three sections of the scenario are Human Intelligence, Satellite Intelligence, and Additional Intelligence. Team members are required to share their information with their teammates to accomplish the task.

Human intelligence information One member of the team is assigned the Human Intelligence portion of the scenario. The information provided to this team member involves such intelligence as hand drawn maps, written notes, banking transactions, phone records, and informant information. There are 15 individual pieces of human intelligence.

Satellite intelligence information Another team member is responsible for the Satellite imagery in the scenario and has the satellite photos associated with each task. The photos depict buildings from a bird’s eye view as well as close-up with heavier detail and geographical information. There are 25 satellite images.

Additional intelligence information The third team member will receive additional intelligence from the scenario. All other pieces of information not included in the first two categories have been placed into the “additional intelligence” group (i.e., maps, photographs, open source information, and tapped phone conversations). There are 10 additional pieces of information.” (pages 10–11, ibid)

References

Baddeley, A. (1992). Working memory. Science, 255(5044), 556–559.

Baddeley, A. (1998). Recent developments in working memory. Current Opinion in Neurobiology, 8(2), 234–238.

Baddeley, A. D., & Hitch, G. J. L. (1974). Working memory. In G. A. Bower (Ed.), The psychology of learning and motivation: advances in research and theory, vol. 8 (pp. 47–89). New York: Academic.

Baddeley, A., Chincotta, D., & Adlam, A. (2001). Working memory and the control of action: evidence from task switching. Journal of Experimental Psychology: General, 130, 641–657.

Brandon, D., & Hollingshead, A. (2004). Transactive memory systems in organizations: matching tasks, expertise, and people. Organization Science, 15(6), 633–644.

Broadbent, D. E. (1958). Perception and communication. London: Pergamon.

Cannon-Bowers, J. A., & Salas, E. (2001). Reflections on shared cognition. Journal of Organizational Behavior, 22, 195–202.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55–81.

Chen, H., Reid, E., Sinai, J., Silke, A., & Ganor, B. (2008). Terrorism informatics: knowledge management and data mining for homeland security. New York, NY: Springer Science, p. xv.

Dennis, A. (1996). Information exchange and use in group decision making: you can lead a group to information but you can’t make it think. MIS Quarterly, 20(4), 433–455.

DeSanctis, G., & Gallupe, B. (1987). A foundation for the study of group decision support systems. Management Science, 33(5), 589–609.

Dolcos, F., & McCarthy, G. (2006). Brain systems mediating cognitive interference by emotional distraction. Journal of Neuroscience, 26(7), 2072–2079.

Ellis, J. (2009). Countering terrorism with knowledge. In H. Chen, E. Reid, J. Sinai, A. Silke, & B. Ganor (Eds.), Terrorism informatics. Springer.

Ellis, S., Gibbs, J., & Rein, G. (1991). GroupWare: some issues and experiences. Communications of the ACM, 34(1), 38–58.

Fleming, R. A. (2003). Information exchange and display in asynchronous C2 group decision making. SPAWAR Systems Center, San Diego; The 8th Intl C2 Research and Tech Symposium (ICCRTS).

Fleming, R. A. (2008). DCODE: a tool for knowledge transfer, conflict resolution and consensus-building in teams. In M. Letsky et al. (Eds.), Macrocognition in teams: theories and methodologies. UK: Ashgate.

Gobet, F., & Clarkson, G. (2004). Chunks in expert memory: evidence for the magical number four… or is it two? Memory, 12, 732–747.

Gobet, F., & Simon, H. (1998). Expert chess memory: revisiting the chunking hypothesis. Memory, 6, 225–255.

Gobet, F., & Simon, H. A. (2000). Five seconds or sixty? Presentation time in expert memory. Cognitive Science, 24(4), 651–682.

Grassé, P. (1959). La reconstruction du nid et les coordinations inter-individuelles chez bellicositermes natalensis et cubitermes sp. La théorie de la stigmergie: essai d’interprétation du comportement des termites constructeurs. Insectes Sociaux, 6(1), 41–81.

Hayne, S., & Pendergast, M. (1995). Experiences with object oriented group support software development. IBM Systems Journal, 34(1), 96–120.

Hayne, S., & Ram, S. (1995). Group database design: addressing the view modeling problem. Journal of Systems and Software, 28(2), 97–122.

Hayne, S., & Smith, C. A. P. (2007). Cognitively-based rapid assessment methodology (C-RAM) final report. Office of Naval Research, Arlington, VA, Tech. Rep. N00014-06-M-0223.

Hayne, S., Smith, C. A. P., & Turk, D. (2002). The effectiveness of groups recognizing patterns. International Journal of Human Computer Studies, 59(5), 523–543.

Hayne, S., Smith, C. A. P., & Vijayasarathy, L. (2005). The use of pattern-communication tools and team pattern recognition. IEEE Transactions on Professional Communication, 48(4), 377–390.

Hutchins, E. (1991). The social organization of distributed cognition. In L. Resnick, J. Levine, & S. Teasdale (Eds.), Perspectives on socially shared cognition (pp. 283–307). Washington, DC: American Psychological Association.

Hutchins, E. (1995). How a cockpit remembers its speeds. Cognitive Science, 19(3), 265–288.

Ishii, H., & Miyake, N. (1991). Toward an open shared workspace: computer and video fusion approach of team workstation. Communications of the ACM, 34(12), 36–54.

Jessup, L., & Valacich, J. (1993). Group support systems: a new frontier. New York: MacMillan.

Johnson-Lenz, P., & Johnson-Lenz, T. (1982). Groupware: the process and impacts of design choices. In E. Kerr & S. Hiltz (Eds.), Computer-mediated communication systems. New York: Academic Press.

Kaempf, G., Klein, G., Thordsen, M., & Wolf, S. (1996). Decision making in complex naval command-and-control environments. Human Factors, 38, 220–231.

Keel, P. E. (2007). EWall: a visual analytics environment for collaborative sense-making. Information Visualization, 6(1), 48–63.

Klein, G. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: models and methods. Norwood: Ablex.

Klein, G., Moon, B., & Hoffman, R. F. (2006a). Making sense of sensemaking I: alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73.

Klein, G., Moon, B., & Hoffman, R. F. (2006b). Making sense of sensemaking II: a macrocognitive model. IEEE Intelligent Systems, 21(5), 88–92.

Kobayashi, M., & Ishii, H. (1993). ClearBoard: a novel shared drawing medium that supports gaze awareness in remote collaboration. IEICE Transactions on Communications, 76(6), 609–624.

Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Experimental Psychology: Perception and Performance, 21(3), 451–468.

Mickolus, E. F. (2002). How do we know we’re winning the war against terrorists? Issues in measurement. Studies in Conflict & Terrorism, 25(3), 151–160.

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. The Psychological Review, 63, 81–97.

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22, 89–106.

Pendergast, M., & Hayne, S. (1999). Groupware and social networks: will life ever be the same again. Journal of Information and Software Technology, 41(6), 311–318.

Pirolli, P. (2005). Rational analyses of information foraging on the web. Cognitive Science, 29(3), 343–373.

Roseman, M., & Greenberg, S. (1996). Teamrooms: network places for collaboration. Proceedings of the ACM CSCW Conference, pp. 325–333.

Salas, E., & Fiore, S. M. (2004). Team cognition: understanding the factors that drive process and performance. Washington, DC: APA.

Schmid, A. (2004). Statistics on terrorism: the challenge of measuring trends in global terrorism. Forum on Crime and Society, 4(1/2), 49–69.

Siegal, J., Dubrovsky, V., Kiesler, S., & McGuire, T. (1986). Group processes in computer mediated communication. Organizational Behavior and Human Decision Processes, 37, 157–187.

Silke, A. (2004). An introduction to terrorism research. In A. Silke (Ed.), Research on terrorism: trends, achievements and failures (pp. 1–29). London: Frank Cass.

Simon, H. (1974). How big is a chunk? Science, 183, 482–488.

Stasser, G., Stewart, D. D., & Wittenbaum, G. M. (1995). Expert roles and information exchange during discussion: the importance of knowing who knows what. Journal of Experimental Social Psychology, 31, 244–265.

Stasser, G., Vaughan, S., & Stewart, D. (2000). Pooling unshared information: the benefits of knowing how access to information is distributed among group members. Organizational Behavior and Human Decision Processes, 82(1), 102–116.

St. John, M., Smallman, H. S., & Voigt, B. D. (2006). SLATE scenario one: Taliban headquarters. Unclassified technical report. San Diego: Pacific Science & Engineering Group, Inc.

Stefik, M., Foster, G., Bobrow, D. G., Kahn, K., Lanning, S., & Suchman, L. (1987). Beyond the chalkboard: computer support for collaboration and problem solving in meetings. Communications of the ACM, 30(1), 32–48.

Susi, T., & Ziemke, T. (2001). Social cognition, artefacts, and stigmergy: a cooperative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity. Journal of Cognitive Systems Research, 2(4), 273–290.

Theraulaz, G., & Bonabeau, E. (1999). A brief history of stigmergy. Artificial Life, 5(2), 97–116.

Treisman, A. M. (1969). Strategies and models of selective attention. Psychological Review, 76, 282–299.

Trujillo, H., & Jackson, B. (2008). Terrorism informatics: knowledge management and data mining for homeland security. In H. Chen, E. Reid, J. Sinai, A. Silke, & B. Ganor (Eds.). New York, NY: Springer Science, Chapter 9.

Varian, H. (1995). The information economy: how much will two bits be worth in the digital marketplace? Scientific American, 273(3), 200–201.

Warner, N., Burkman, L., & Biron, C. (2008). Special Operations Reconnaissance (SOR) scenario: intelligence analysis and mission planning. Office of Naval Research, Arlington, VA, Tech. Rep. NAWCADPAX/TM-2008/184.

Wegner, D. M. (1987). Transactive memory: a contemporary analysis of the group mind. In B. Mullen & G. R. Goethals (Eds.), Theories of group behavior (pp. 185–208). New York: Springer.

Wegner, D. M., Erber, R., & Raymond, P. (1991). Transactive memory in close relationships. Journal of Personality and Social Psychology, 61(6), 923–929.

Weick, K. (1995). Sensemaking in organizations. Thousand Oaks: Sage.

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman & R. Davies (Eds.), Varieties of attention (pp. 63–101). Orlando: Academic.

Wickens, C. D., & Liu, Y. (1988). Codes and modalities in multiple resources: a success and qualification. Human Factors, 30, 599–616.

Wittenbaum, G., & Stasser, G. (1996). Management of information in small groups. In J. Nye & A. Brower (Eds.), What’s social about social cognition: research on socially shared cognition in small groups (pp. 3–28). Thousand Oaks: Sage.

Yen, J., Fan, X., Sun, S., Hanratty, T., & Dumer, J. (2006). Agents with shared mental models for enhancing team decision makings. Decision Support Systems, 41(3), 634–653.

Zhang, G., & Simon, H. A. (1985). STM capacity for Chinese words and idioms: chunking and acoustical loop hypotheses. Memory and Cognition, 13, 193–201.

Stephen C. Hayne is a Professor of Computer Information Systems in the College of Business at Colorado State University. He received his Ph.D. from the University of Arizona (1990); his current research involves exploring collaboration and cognition in teams and groups, especially when under time pressure. He has received more than $3.1 million in grants from the National Science Foundation and the Office of Naval Research to continue this work and was awarded an IBM Faculty Fellowship in 2006. His papers have been published in major conferences and numerous journals such as Journal of Management Information Systems, Database, Journal of Information and Management, Journal of Computer Supported Collaborative Work, IBM Systems Journal, Electronic Markets and International Journal of Human Computer Studies. His research is based in the desire to use innovative technologies to solve real business problems.

Lucy J. Troup is an Assistant Professor in the Perceptual and Brain Science program in the Department of Psychology at Colorado State University. Prior to moving to the United States she was faculty in the Psychology Department at the University of the West of England, Bristol, UK. She received her PhD from the University of Plymouth, UK, in 1995. She has an active research program in human-computer interaction. Current projects include developing models of the human visual system, imaging the brain using event-related potentials, evaluating computer tools for intelligence agents, and funded research working with both the National Parks Service and the United States Department of Agriculture. Recent publications include Environment and Behavior, Journal of Applied Social Psychology, and The Journal of Environmental Psychology.

Sara A. McComb is an Associate Professor of Industrial and Systems Engineering and holds the Parsons Career Development Professorship in Engineering Management at Texas A&M University. Prior to joining Texas A&M, she served on the faculty of the Isenberg School of Management at the University of Massachusetts Amherst. She received her B.S.I.E. from GMI Engineering & Management Institute, her M.S.E.S. from Rensselaer Polytechnic Institute and her Ph.D. from Purdue University’s School of Industrial Engineering. McComb’s research interests include examining project teams, particularly their communication and cognitive processes. Her research has been funded by the National Science Foundation, the Office of Naval Research, and the Department of Defense, and published in journals such as the IEEE Transactions on Engineering Management, Human Factors, and the Journal of Engineering and Technology Management.


