Believable Robot Characters

Reid Simmons, Maxim Makatchev, Rachel Kirby, Min Kyung Lee, Imran Fanaswala, Brett Browning, Jodi Forlizzi, Majd Sakr

Believability of characters has been an objective in literature, theater, film, and animation. We argue that believable robot characters are important in human-robot interaction, as well. In particular, we contend that believable characters evoke users' social responses that, for some tasks, lead to more natural interactions and are associated with improved task performance. In a dialogue-capable robot, a key to such believability is the integration of a consistent story line, verbal and nonverbal behaviors, and sociocultural context. We describe our work in this area and present empirical results from three robot receptionist test beds that operate "in the wild."

Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

The way people interact with robots appears to be fundamentally different from how they interact with most other technologies. People tend to ascribe a level of intelligence and sociability to robots that influences their perceptions of how the interactions should proceed. If a robot, for its part, adheres to this ascription, then the interactions tend to be natural and engaging; if not, they can be discordant. Our goal is to develop autonomous robotic systems that can sustain such natural and engaging social interactions with untrained users.

Our approach is to develop believable robot characters. In this context, believable means an illusion of life. Bates (1994) writes, "There is a notion in the Arts of 'believable character.' It does not mean an honest or reliable character, but one that provides the illusion of life, and thus permits the audience's suspension of disbelief …. Traditional character animators are among those artists who have sought to create believable characters …." Think of animated characters such as the magic carpet in Disney's Aladdin or the teapot and candlestick in Beauty and the Beast — not people, in any sense, but engaging, lifelike characters.

Perhaps a more apt analogy, though, is to view robots as actors performing in a human environment (Hoffman 2011). As early as the 19th century, acting theories, such as Delsarte's method (Stebbins 1886), placed emphasis on external actions as a key to believability. For example, an actor's dialogue should be rich in verbal and nonverbal expression of emotions and personality traits.


While modern acting methods (Stanislavski 2008) instead focus on cohesive representations of underlying developments in the character, these are still manifested in a performer's surface actions. Thus, both traditions of acting emphasize the need for both verbal and nonverbal richness and continuity with character.

These ideas from animation and drama provide inspiration for our approach to developing believable characters. In general, a believable character should be cognizant of the people with whom it is interacting and exhibit behaviors consistent with the social norms of such interactions. Believability of robots, in addition to increasing a user's enjoyment while interacting, helps to make the interaction more natural by increasing predictability — the robot acts more like people expect. In this article, we describe our approaches to developing believable characters, focusing on both verbal and nonverbal behaviors. In particular, we present our use of dramatic structure with rich backstory and evolving story line, verbal and nonverbal social behaviors, and believable culturally specific characters.

To test our approach, we have developed several robot characters that operate long term at the Carnegie Mellon campuses in both Pittsburgh, Pennsylvania, and Doha, Qatar. The robots are similar in that they each play the role of a robot receptionist ("roboceptionist"), providing directions and general information about the university and environs. They each feature a cartoonlike, but expressive, three-dimensional graphical head and sensors that allow them to track and respond to people's actions. They all receive typed input and respond using text-to-speech with lip-syncing. They differ mainly in the characters themselves — including both male and female, human and machine, American and Arab. The differences are exhibited through a combination of facial features, voice, language used, nonverbal behaviors, and backstory.

Most research in human-robot interaction has involved laboratory experiments, where people and robots interact under relatively controlled conditions. While this is a valuable source of data, it may not be indicative of the types of interactions that people will have with deployed systems. An alternate approach, which we have followed for a number of years, is to place robots "in the wild," where people interact with them if, and when, they choose. While it is typically more difficult to evaluate such robots, due to a lack of ground truth measurement of user states, we believe that the interactions are much more natural and better capture the range of interactions that people will exhibit in actual settings. For instance, in interactions with our roboceptionists, we see many instances of foul language, personal questions, and marriage proposals — the types of interactions one is less likely to observe in laboratory settings, where people are conscious of being recorded. Having a robot publicly accessible also enables us to capture interactions with diverse user groups, such as support staff and visiting parents and children, that represent challenges to conventional routes of subject recruitment. In such uncontrolled settings, we typically need to log and analyze hundreds or thousands of interactions over weeks, or even months, to find significant results. To date, our approach has yielded insight into various aspects of interaction including what dialogue topics people typically choose, how social behaviors manifest themselves and affect the outcomes of interactions, how display of emotion and mood affect the way people interact, and how interactions differ across cultures.

The next section presents our robot test beds. We then describe our approach to designing believable characters, including how we incorporate dramatic structure and emotions into the dialogue, how we utilize verbal and nonverbal behaviors to increase believability, and how we develop believable ethnically specific characters. We then present an empirical account of how our design approach affects the experience of interacting with robot characters in the wild and finally provide conclusions and future work.

Robot Test Beds

To test our approach, we wanted to place robots in situations where people would encounter them often and have the option of deciding whether to interact with, or ignore, them. The hypothesis is that believable robot characters would prove to be engaging and would attract people to interact with, and exhibit social behaviors toward, the robots. To that end, we wanted the robots to have a role that would be familiar to people and one where they would have occasional need for the robots' assistance. Thus, we chose to develop robot receptionists that greet visitors, provide information, such as directions to people's offices, and talk about their "life."

The first roboceptionist, named Valerie, was deployed in November 2003 (Gockley et al. 2005). It is housed in a custom-built booth in a corridor near the main entrance of the Robotics Institute at Carnegie Mellon, Pittsburgh (figure 1). The robot has a 15-inch monitor that displays a graphical face, mounted on a Directed Perception pan-tilt head atop a (stationary) RWI B21r base. The robot perceives people using a SICK laser mounted underneath the ledge of the booth. People interact with the roboceptionist by typing to it, and it responds vocally using the Cepstral text-to-speech system.1 An auxiliary monitor over the keyboard shows people what they are typing and displays the IRB notification. Visitors affiliated with the university can also swipe their ID cards to identify themselves to the robot.

Figure 1. Valerie at Work in Her Booth.

Tank (figure 2) replaced Valerie in October 2005. The hardware and most of the software are the same as Valerie's, but we changed the face, voice, and the character's backstory and story line. In addition, the booth was decorated to reflect Tank's backstory (he worked for NASA and the CIA).

In 2008, Hala was deployed in the main atrium of the Carnegie Mellon campus in Doha, Qatar. Unlike the Pittsburgh roboceptionists, Hala is situated at a counter next to human receptionists and security personnel (figure 3). Hala can accept input in both Arabic and English, and responds in whichever language the user types, using the Acapela speech synthesis system.

The three robots all display an expressive graphical head, based on AT&T's SimpleFace. To date, the heads have been fairly simple and "cartoonlike" (figure 4a–c) to avoid raising undue expectations about the robot's capabilities. For the next version of Hala, however, we are shifting toward increased visual realism and adding visual cues of ethnicity (figure 4d).

For small motions, the head moves within the screen, while for larger motions the whole screen moves on the pan-tilt head. Phoneme outputs from the text-to-speech systems are used to automatically generate and synchronize lip movements with the speech audio. In addition, a scripting language was implemented that enables developers to define facial expressions that can be synchronized with speech.
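
The phoneme-driven lip-sync can be sketched simply. The following is a minimal illustration, not the authors' implementation: the phoneme names, the viseme table, and the (phoneme, start, duration) event format are all assumptions made for the example.

# A minimal sketch (not the roboceptionists' code) of driving lip-sync
# from text-to-speech phoneme events.
PHONEME_TO_VISEME = {  # hypothetical mapping to mouth-shape keyframes
    "AA": "open_wide", "IY": "smile_narrow", "UW": "round",
    "M": "closed", "F": "lower_lip_bite", "SIL": "rest",
}

def viseme_track(phoneme_events):
    """Convert (phoneme, start_sec, duration_sec) events into
    (time, viseme) keyframes for the animated face."""
    keyframes = []
    for phoneme, start, duration in phoneme_events:
        viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
        keyframes.append((start, viseme))
        # Return the mouth to rest at the end of each phoneme so
        # pauses do not leave the mouth frozen open.
        keyframes.append((start + duration, "rest"))
    return keyframes

# Example: "ma" spoken over 0.3 seconds.
print(viseme_track([("M", 0.0, 0.1), ("AA", 0.1, 0.2)]))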

The roboceptionists interact with people primarily in a question-answering mode, waiting for typed input (prompting if none is forthcoming). While signage helps guide people in interacting, they are largely left to themselves to discover what the roboceptionist knows and can talk about. In fact, the roboceptionists are fairly limited — they can tell what offices people are in (using an online university database), give directions around campus, answer questions about the weather around the world (by requesting and parsing free weather web services), provide the current date and time, and answer some specific questions about their locales. For instance, Tank can talk about Carnegie Mellon, Pittsburgh and, of course, the Steelers. In addition, each robot has a fairly rich backstory and evolving story line that it can speak about, which are described in the following section.

To handle input utterances, we adapted the AINE system, an open-source chatbot.2 AINE provides a language for designing template-based rules for parsing text. Some of the rules generate scripted output, while others invoke special procedures (such as the ones for looking up people's offices and handling weather requests).
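
AINE's actual rule language is not shown in the article; the sketch below only illustrates the general idea of template-based rules, in which some rules return scripted output while others invoke procedures. The patterns and the look_up_office helper are hypothetical.

# A minimal sketch of template-based dialogue rules, in the spirit of
# (but not identical to) the AINE rules described above.
import re

def look_up_office(name):
    # Placeholder for a query against the university directory.
    return f"{name.title()} is in an office I would look up here."

RULES = [
    # (pattern, responder) pairs; the first matching rule wins.
    (re.compile(r"where is (?P<name>[a-z ]+)'s office", re.I),
     lambda m: look_up_office(m.group("name"))),   # procedure call
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     lambda m: "Hello! Thanks for stopping by."),  # scripted output
]

FALLBACK = "Sorry. I could not get the information."

def respond(utterance):
    for pattern, responder in RULES:
        match = pattern.search(utterance)
        if match:
            return responder(match)
    return FALLBACK

print(respond("Where is Jane Smith's office?"))
print(respond("hi there"))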

In addition to responding to users' utterances, the roboceptionists engage in some spontaneous behaviors. For instance, they greet people who are passing by, using a learned classifier to distinguish people who are likely to stop from those who are not, and focusing on the former. If people are standing nearby, the robot will encourage them to approach and interact. When no one is interacting, the roboceptionist will periodically engage in a simulated phone call, along the lines of what a human receptionist might do.

Designing Believable Robot Characters

Our primary approach to designing believable characters is to focus on the content and behavior of the character. The goal is to develop synthetic characters that are memorable and cohesive. Inspired by dramaturgy and acting theories, such as those of Delsarte (Stebbins 1886) and Stanislavski (2008), we strive to have the character's look, voice, personality, use of language, nonverbal expressions, backstory, and evolving story line be unified and contribute to an overall engaging experience. Next, we elaborate on our design approach by considering three important aspects of character: backstory, social verbal and nonverbal behaviors, and expression of culture.

Backstory

Backstory is the history of a character, used to provide depth to the character (as it is incrementally revealed to users) and as a context for the character's actions. To create characters with consistent backstories, we have been collaborating with faculty and students from Carnegie Mellon's School of Drama. To begin, the drama personnel brainstorm several candidate characters and we jointly choose one that we believe would be both interesting and acceptable to our "audience": visitors and people in our buildings.

Once a character is chosen, the drama personnel develop a backstory and evolving story line (along with a three-dimensional graphical head that exemplifies that character, which we animate using our adaptation of SimpleFace). The backstory includes details about the character, including its age, family, living situation, past employment, and so on. The drama personnel write robot dialogue for each aspect of the backstory and the dialogues are entered into a content database.

The story line consists of 3–4 narrative threads that run throughout an academic year. The threads are scheduled out in advance, with different events occurring at different dates, each of the threads having a dramatic structure — building to a climax and resolution. Most of the story lines last the full year, but some end after only a few months, with others starting in their place. Examples of story lines that we have used include family problems, problems at the job (for example, thinking that the robot's supervisor is out to get him/her), pursuing a musical career, trying to lobby for robot rights, helping an Afghan refugee, and the difficulties of being a robot without arms. The drama personnel write dialogue for the character, describing both its pre- and postreaction to the events of the story lines. For instance, if the robot has a date scheduled for Saturday, the script might indicate that the robot can start talking about the date three days prior, at which point it might reveal that it is very excited and nervous. Then, after the date, different dialogue may be indicated, for instance if the date did not go well. Typically, a full year's worth of dialogue is written at one time and entered into the content database. In addition, specific nonverbal expressions can be designed and associated with pieces of dialogue to enhance their effect.

The content database associates dialogues (and expressions) with objects involved in the backstory and story lines, such as characters, buildings, pets, and nearby knickknacks. Each object has a list of attributes (for example, name, age, parent, job) and a list of associated events that involve that object. Each attribute is a time line, partitioned into a set of intervals over which a particular piece of dialogue holds. For instance, the "boyfriend" attribute of Valerie might have one value from November 2003 to December 11, 2003, then another value until March 15, 2004, then another value from March 15, 2004 until the "end of time." When someone asks Valerie "who is your boyfriend," the system looks up the "Valerie" object from the database, accesses the "boyfriend" attribute, searches the time line to find the interval that includes the current date and time, and then says the associated dialogue and displays any associated expressions. Similarly, one can ask temporal questions (for example, "who was your boyfriend last January") and questions with embedded references (for example, "what is your pet's name," which involves looking up the current "pet" attribute of the robot and then looking up the current "name" of that object).

Figure 2. Tank Attending to a User.


For consistency, static information, such as the robot's name, is also entered into the database as an attribute with just a single interval on the time line. A scripting language was developed to facilitate expressing the temporal aspects of the story lines and backstory, which are parsed and entered into the database each time the robot software is restarted.
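
A minimal sketch of the time-line lookup described above follows, assuming a simple interval representation; the dates, attribute values, and dialogue strings are invented for illustration and are not the actual content database.

# A minimal sketch of time-line attributes: each attribute is a list
# of (start, end, value) intervals, and lookups select the interval
# containing the query date.
from datetime import date

DB = {
    "valerie": {
        "boyfriend": [
            (date(2003, 11, 1), date(2003, 12, 11), "It's Hank."),
            (date(2003, 12, 11), date(2004, 3, 15), "We broke up."),
            (date(2004, 3, 15), date.max, "I'm seeing someone new."),
        ],
        # Static information gets a single interval spanning all time.
        "pet": [(date.min, date.max, "rudy")],
    },
    "rudy": {"name": [(date.min, date.max, "His name is Rudy.")]},
}

def lookup(obj, attribute, when):
    """Return the dialogue that holds for an attribute on a given date."""
    for start, end, value in DB[obj][attribute]:
        if start <= when < end:
            return value
    return None

# "Who is your boyfriend?" asked in January 2004:
print(lookup("valerie", "boyfriend", date(2004, 1, 10)))
# Embedded reference: "what is your pet's name" chains two lookups.
pet = lookup("valerie", "pet", date(2004, 1, 10))
print(lookup(pet, "name", date(2004, 1, 10)))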

Our experience has shown that much care is needed to present the desired characters, and that the character development must be holistic. One pertinent example is with Valerie — a mid-20s female character with mother and boyfriend issues and a strong desire to become a lounge singer. Valerie was conceived of as being naïve and slightly neurotic, but very pleasant. Unfortunately, one of her responses to not being able to parse user input was seen as snide, or even hostile: "Look, I can give you directions, or sing you a song, or give you the weather. But, if you want anything else, I can't help you." Since that phrase was said often (Valerie fails to understand about a quarter of user input) it tended to dominate people's perception of her, in a way contrary to Valerie's intended personality. For Tank, we made a concerted effort to have all such phrases reflect his character and personality.

Social Verbal and Nonverbal Behaviors

Exhibiting appropriate social behaviors is a large factor in the believability of characters. For robots in the wild, attracting interactors is an important goal, and we believe that socially appropriate behaviors can facilitate that. For instance, the roboceptionists greet passersby, either nodding at them or saying "good morning (afternoon or evening)," depending on the time of day. To avoid bothering people who are probably not interested in paying attention to the robot, we trained a classifier, using several weeks of collected laser range data, to predict how likely a person, at a given position and velocity, is to stop and interact with the robot. This classifier is run repeatedly as people are tracked by the laser, and when the confidence that a person will stop crosses a threshold, the robot greets that person.
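
As an illustration of this greet/ignore decision, the sketch below trains a logistic-regression classifier on a handful of made-up position and velocity features; the article does not specify the model class, the exact feature set, or the threshold, so all of those are assumptions here.

# A minimal sketch of predicting whether a tracked person will stop.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [distance_to_booth_m, speed_m_per_s, heading_toward_booth]
X = np.array([[1.0, 0.2, 1.0], [1.5, 0.4, 1.0],   # stopped to interact
              [3.0, 1.6, 0.0], [2.5, 1.4, 0.0]])  # walked past
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)

GREET_THRESHOLD = 0.6  # illustrative confidence threshold

def maybe_greet(track):
    """Called on every laser-tracker update for a person."""
    p_stop = model.predict_proba([track])[0][1]
    if p_stop > GREET_THRESHOLD:
        return "Good morning!"  # or a nod, depending on the time of day
    return None

print(maybe_greet([1.2, 0.3, 1.0]))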

Similarly, if a robot is idle and detects a person stopped nearby, but not in front of the keyboard, it will encourage the person to interact. If the person is at the keyboard, but not typing, it will briefly describe how to interact with the robot. If the person continues to ignore the robot's requests to interact, the robot will announce that it is ignoring the person, too.

Figure 3. Hala in an Atrium with Human Receptionist and Security Staff.

During the interaction, several strategies are used to encourage the user's social response. For example, the robot may augment the usual greetings with the phrase "Thanks for stopping by." We hypothesize that this primes the user to thank the robot more often. The robot also can indicate the effort it takes to answer the user's question, either by just pausing or saying, "Just a second, please. I'll look that up." The effects of these social strategies on the interactions are described in the evaluation section.

Nonverbal behaviors are just as important as dialogue to the design of believable characters. Facial expressions, gaze, and head posture all contribute to the overall effect of "liveness" and awareness of surroundings. We have developed a GUI that enables the drama personnel (and others) to design specific nonverbal expressions and associate them with phrases, so that when the dialogue is said the expressions are displayed.

In addition, we have implemented a categorical model of emotions (Gockley, Forlizzi, and Simmons 2006; Kirby, Forlizzi, and Simmons 2010) that enables the robot's facial expressions to be correlated both with the current interaction and the story lines. Developers can tag pieces of dialogue to indicate its emotional content. The emotional indicators are combined and used to change the expression of the robot. Specifically, we implemented a subset of the basic emotions presented in Ekman (1969): joy (happiness), sadness, disgust (frustration), and anger. The emotional expressions are based on Delsarte's code of facial expressions (Stebbins 1886), and their intensities are expressed over a continuous range with linear interpolation of key frames (figure 5). A web-based study demonstrated that people were able to "read" the emotional expressions fairly readily (Kirby, Forlizzi, and Simmons 2010). Emotions are associated primarily with interactions and are short-lived, lasting the duration of the associated pieces of dialogue. For instance, the robot displays frustration if it does not understand an input, and when the input is "I love you," it displays happiness while responding, "But, you don't even know me."

The robot also maintains a longer-lived mood, which is primarily associated with personal history and "life" events. Mood is represented with a valence (positive or negative) and an intensity. The mood associated with an event rises and falls exponentially over time, reaching a peak at the time the event is scheduled to occur in the story line. The contributions of multiple concurrent events sum to produce the overall mood. In our model, mood affects emotions, and vice versa. Emotions are linearly scaled by the intensity of the current mood, where emotions that have the same valence as the mood are increased in intensity, while emotions that differ in valence are decreased. Similarly, the occurrence of emotional events can modulate mood. In accordance with existing psychological models (Rook 2001), positive social exchanges increase positive moods, while negative exchanges decrease any mood. Thus, expressing admiration for the robot will increase its already positive mood, while swearing at the robot can "bring it down." This effect decays over time, however, returning the robot to its "baseline" mood, as indicated by current events in the story line, if no further emotional events occur. The evaluation section describes an experiment to test the effects of mood on people's interactions with the roboceptionist.
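
The mood dynamics described above can be sketched as follows; the exponential time constant, the scaling gain, and the event values are assumptions made for the illustration, not parameters reported in the article.

# A minimal sketch of the mood model: each story-line event contributes
# a signed intensity that rises and falls exponentially around its
# scheduled time, concurrent contributions sum, and emotion intensity
# is scaled by the current mood.
import math

def event_contribution(valence_intensity, event_time, now, tau=24.0):
    """Exponential rise before and decay after the event (times in hours)."""
    return valence_intensity * math.exp(-abs(now - event_time) / tau)

def mood(events, now):
    """Sum the contributions of all concurrent story-line events."""
    return sum(event_contribution(v, t, now) for v, t in events)

def scaled_emotion(emotion_intensity, emotion_valence, current_mood,
                   gain=0.5):
    """Amplify emotions that share the mood's valence, damp the rest."""
    same_valence = (emotion_valence >= 0) == (current_mood >= 0)
    factor = 1 + gain * abs(current_mood) * (1 if same_valence else -1)
    return max(0.0, emotion_intensity * factor)

# A date scheduled at t=72h (+0.8) and job trouble at t=30h (-0.4):
events = [(0.8, 72.0), (-0.4, 30.0)]
m = mood(events, now=48.0)
print(m, scaled_emotion(0.5, +1, m))  # joy is slightly amplified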

The robots perform several other nonverbal behaviors to increase their believability. Periodically (and stochastically), they engage in autonomic behaviors — blinking, breathing (displayed through flaring nostrils), and small head movements. Each adds a measure of liveness. The robots also use gaze to maintain engagement — while they focus primarily on the person at the keyboard, they will turn and nod at people who newly arrive and will occasionally thereafter turn to gaze at them. In addition, when someone is typing, the robots will periodically gaze down at the keyboard, to acknowledge that they are aware of the input.

Figure 4. Faces of Roboceptionist Characters: Valerie (a), Tank (b), Current Hala (c), and New Hala (d).

Expressing Culture

Advocates of the holistic design of animated agents argue that cultural, as well as individual, variability should be expressed in all of the "qualities that animate characters should possess" (Hayes-Roth, Maldonado, and Moraes 2002), including backstory, appearance, manner of speaking, and gesturing. In particular, projecting a cultural identity may increase believability through a phenomenon known as homophily — the tendency of individuals to associate disproportionally with similar others (Lazarsfeld and Merton 1954). While homophily is typically observed in interactions between humans, its effects are also evident in interactions between humans and onscreen agents (Nass, Isbister, and Lee 2000). Based on this, we believe that affinity toward robot characters can be increased through an expression of cultural cues, especially when they are congruent with the culture of the user.

Most existing robot characters with cultural identity, such as Ibn Sina (Mavridis and Hanson 2009) and Geminoids (Ishiguro 2005), express their ethnicity almost exclusively through appearance and choice of language. While we attempted to avoid ethnic cues when designing the appearances of the Pittsburgh-based roboceptionists,3 the new Hala (figure 4d) is decidedly Middle-Eastern in appearance and both versions of Hala support dialogue in Arabic and English.

To enhance cultural identity, we go beyond expressing ethnicity through appearance and language choice alone. Tank, for example, is aware of local colloquialisms (for example, "yinz" in Pittsburgh dialect is plural for "you") and Hala can handle some degree of code switching (for example, using the Arabic "inshallah" — "God willing" — in English dialogues). The roboceptionists' backstories support their identity as citizens of either Pittsburgh or Doha. Tank, for example, is an avid fan of the Pittsburgh Steelers ("I love the Steelers. A football team named after the most important material in the world is OK with me.") and once dated the Pittsburgh stadium's scoreboard; Hala is wary about the idea of driving in Doha's fast traffic.

In general, though, creating a believable character that projects a particular culture is quite challenging. The first difficulty lies with the term culture itself. When used to describe communities of people, it is as impossible to separate culture from language (Agar 1994) as it is to outline culture based solely on ethnicity or mother tongue (McPherson, Smith-Lovin, and Cook 2000). The more careful view of an individual's culture as a combination of dimensions that evolve with time and context and are viewed in a relation to an observer (Agar 2006) is alluring but implies a methodological difficulty in identifying culturally specific behaviors. While reported anthropological studies cover many world cultures to various extents, it is not always clear if the findings stand the test of time. It is also not unusual for an outsider's view of a particular community to be found offensive by the community members themselves (de Rosis, Pelachaud, and Poggi 2004).

Figure 5. Valerie's Emotional Range from Neutral to Angry.

To address these concerns more fully, we are developing a data-driven approach to identifying behaviors that express culture. First, we shortlist potential culturally specific behaviors, or rich points (Agar 1994), from anthropological accounts, ethnographies, and studies on second-language acquisition, among others. We then design stimuli that incorporate these rich points and evaluate perception of those stimuli by the members of the desired demographics. For instance, in the case of an Arabic character, we define the community of interest based on native language and country of residence, and we compare the perception of the stimuli of that community with a control group (native English speakers residing in the United States).

Our first such study, which recruited participants from both American English and Arabic language communities using Amazon's Mechanical Turk, evaluated perceived personality and naturalness of linguistic features of verbosity, hedging, syntactic and lexical alignment, and formality through fragments of dialogues (Makatchev and Simmons 2011a). The results have been quite informative. For example, we found that the speakers of American English, unlike the speakers of Arabic, find formal utterances unnatural in dialogue acts involving greetings, question-answer, and disagreement. Formal utterances also tended to be perceived as indicators of openness and conscientiousness by Arabic speakers in disagreements and apologies, respectively, but not by American English speakers. Apologies that included the hedging marker "I am not sure" were perceived as an indicator of agreeableness by American English speakers, but not by speakers of Arabic. Additional studies in both verbal and nonverbal behaviors, as well as transfer of the findings to our robot test beds, are in progress.

Evaluating Robot Characters in the Wild

One of the consequences of deploying a robot in the wild, as opposed to studying the interactions through controlled laboratory experiments, is that it becomes difficult to survey and interview users without disrupting some of the "wilderness" — the impression that the interaction is autonomous and unmonitored by humans. As a result, corpora of interactions with robots that operate in the wild typically have neither user demographic data nor the ground truth of user intentions and impressions. On the positive side, robots operating in the wild may encounter many more users than in a reasonably controlled experiment. Tank, for example, even after the novelty factor wore off, still averages 27.5 interactions per day (SE = 0.5), each averaging 9.5 total dialogue turns (SE = 0.1) (based on January 2009 to June 2011 data). The relatively large amount of collected data makes it feasible to analyze multiple features of interactions at once. Specifically, we demonstrate that mining interaction transcripts allows us to infer the degree of the user's social orientation toward the robot. The data-driven method also enables us to estimate the joint effects that multiple robot behaviors have on the interactions. We also show that embedding elicitation questions in the interaction itself is a viable way of obtaining ground truth for some aspects of the users' intentions.

Topics, Greetings, and Discourse Features

One assertion in the previous section is that the robots' backstories lead to more believable characters. Anecdotally, we know that many users frequent the robot to follow its stories, and some become quite emotionally attached. More quantitatively, we can analyze the topics of user interactions to indicate how people interact with the robot. Table 1, adopted from Lee and Makatchev (2009), shows the frequency of dialogue topics in interactions with Tank. These data were obtained by manually coding 197 interactions that occurred in 5 weekdays in March 2008 (2 coders, κ = 0.7). Interaction boundaries are defined by the user's approach and departure, according to the laser tracker.

While a large fraction of the interactions are seeking information on directions, weather, or time (42.7 percent), a comparable number are concerned with the robot's character itself (31.0 percent). Only about 12.7 percent of dialogues have more than one topic associated with them, and about 20.8 percent of all dialogues consist only of a greeting and an occasional farewell.

Little overlap (6.6 percent) between dialogues with information-seeking questions and those involving the robot's character suggests that there are (at least) two classes of users: those who are mainly interested in finding out information pertinent to their daily agendas and those who are curious about the robot.

Topic                          Fraction of Interactions (percent)
Location of a person/place     30.5
Weather                        11.2
Date and time                   2.5
Talking about the robot        31.0
Talking about the user          3.6
Greeting/farewell only         20.8
Gibberish                       9.1
Insults                         4.0
Others                          2.5

Table 1. Frequency of Topics.

Analysis of dialogue openings provides more support for this hypothesis. The presence of a greeting in the user's first turn appears to be a strong predictor of many features of the following dialogue (Makatchev, Lee, and Simmons 2009). Users who start with a greeting tend to be more persistent in response to the robot's nonunderstandings (77 percent versus 67 percent of users who did not greet), more than three times as likely to thank the robot for an answered question (25 percent versus 8 percent), and more likely to end their interactions with a farewell (20 percent versus 13 percent). The users who greeted the robot also perform better on the information-seeking task: 54 percent of them have all of their utterances successfully parsed (versus 44 percent) and 50 percent of them get their questions answered (versus 43 percent). On the other hand, the verbosity of the dialogue, excluding the greeting and farewell turns, is relatively unchanged, compared to users that did not greet.

Similar associations between dialogue openings and discourse features, such as lexical and syntactic alignment, clarification questions, reformulations, and repetitions, have been found in various HCI and HRI corpora by Fischer (see, for example, Fischer [2006]). Fischer explains these correlations by the user's preconceptions about computer agents and robots. In particular, those users who consider the robot as a tool will neither align nor produce various social behaviors, while those who consider the robot as a conversational partner will. Similarly, Lee, Kiesler, and Forlizzi (2010), based on an analysis of the roboceptionist data, suggest that users who have different mental models of the robots will apply different interaction scripts.

A similar analysis was performed on Hala's corpus, to see what differences may be observed between a Pittsburgh-based and Doha-based robot. Due to differences with respect to their knowledge base and dialogue rule coverage, a direct comparison between their corpora is difficult. Nevertheless, we conducted such a comparison (Makatchev et al. 2010), attempting to normalize Hala's corpus by focusing on those dialogues that were conducted entirely in English.

The comparison shows that Hala's dialogues last almost twice as long as Tank's (120 seconds versus 63 seconds) and on average contain one extra pair of utterances. The fraction of dialogues that start with a greeting is about the same for Hala and Tank (38.7 percent and 39.4 percent). Tank's answers receive thanks at a much higher rate than Hala's (12.9 percent versus 2.3 percent), perhaps in part because more of Hala's questions are of a personal nature (57 percent versus Tank's 31 percent), such as "Are you married?" Other possible reasons for the observed differences include distinct robot personae, perceived gender differences, differences in coverage of their knowledge bases, their immediate surroundings, and potential differences in demographics of their user communities (Fanaswala, Browning, and Sakr 2011).

Eliciting Social Behaviors through Dialogue Strategies

The results suggest that users who produce social dialogue acts are also more likely to get their questions answered by the robot. Thus, we would like to see whether we can develop robot interaction strategies that encourage users to behave more socially toward the robots, and whether by doing so we can influence task performance.

Toward this end, we have investigated the use of initiative, priming, and expression of effort to elicit users' social responses. Figure 6 depicts the dialogue structure in terms of these strategies and possible user responses.

Figure 6. The Temporal Flow of an Information-Seeking Interaction. (The robot greets a passerby, or does not; the user greets the robot, or does not; the robot primes the user for thanks, or does not; the user asks for directions or weather information; the robot expresses effort verbally or nonverbally; the user says thanks, or does not; the user says goodbye, or does not.)

Initiative: An interaction with the roboceptionist may begin in one of two ways. It can be initiated by the robot, when the robot's laser tracker detects a passerby, and consists of either a greeting, a head turn and nod, or a combination. Alternately, an interaction can be initiated by the user, by typing to the robot. Our analysis (Makatchev, Lee, and Simmons 2009) shows that users who engage in a dialog with the robot after being greeted first (with a verbal, nonverbal, or combined greeting) start their interactions with a greeting more often than users who were not proactively greeted (42 percent versus 37 percent). In addition, in support of our hypothesis, users who were greeted by the robot were also more likely to have their questions successfully parsed and answered (52 percent versus 42 percent).

Priming: In an attempt to prime the user to say "thanks" after the robot's answers, we evaluated having the robot preface its standard response to a user's greeting with the phrase "Thanks for stopping by." We found that such priming had a significant positive main effect on occurrences of social dialogue acts of thanking and farewells (Makatchev and Simmons 2011b). In particular, logit model selection with second-order interaction terms results in a model with the priming variable significant with p = 0.03 (odds ratio's 95 percent confidence interval is [1.06, 3.15]) for explaining occurrences of thanks and with p < 0.01 (odds ratio's 95 percent confidence interval is [1.16, 1.64]) for explaining occurrences of farewells.
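For readers unfamiliar with this kind of analysis, the sketch below shows how such an odds ratio can be obtained: fit a logistic regression of "user said thanks" on the priming condition and exponentiate the coefficient. The data are simulated and this is not the authors' analysis pipeline.

# A minimal sketch of estimating a priming effect as an odds ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
primed = rng.integers(0, 2, n)                 # robot said "Thanks for stopping by"
p_thanks = np.where(primed == 1, 0.25, 0.15)   # illustrative underlying rates
thanked = rng.binomial(1, p_thanks)

X = sm.add_constant(primed.astype(float))
fit = sm.Logit(thanked, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"odds ratio {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")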

Expression of Effort: We hypothesize that if the robot indicates that it has put effort into answering a user's question, the user may be more likely to provide a social response. To test this, we had Tank randomly precede its answers to weather and direction questions with either a silent half-second pause (a nonverbal expression of effort, or NVEE) or one of the following two phrases (verbal expressions of effort, or VEE): "Wait a second please. I'll look that up" or "I am looking it up. Please hold on." We found a small positive main effect of NVEE on the turnwise length of the interactions. There are, however, some significant interaction effects. For example, logistic regression on counts of occurrences of expression of effort indicates that users who greeted the robot were more likely to express thanks and farewells as the number of occurrences of NVEE increased (p < 0.01, odds coefficient's 95 percent confidence interval is [1.19, 3.08]). The VEEs did not produce this kind of effect. One possible explanation is that multiple occurrences of a VEE may be annoying. Direct comparison of NVEE and VEE shows two interesting, but weakly significant (0.05 < p < 0.1), interactions. First, VEE in combination with the robot's admission of inability to parse the user's utterance has more chance of being thanked by the user than NVEE co-occurring with such failures. Conversely, NVEE combined with a valid robot answer has more chance of being followed by a user's thanks and farewell than VEE combined with a valid answer.

In summary, initiative in greeting users correlates with having them exhibit social attitudes toward the robot. Priming with "Thanks for stopping by" succeeds in encouraging users toward social dialogue. Multiple nonverbal expressions of effort improve the rates of social dialogue acts for users who have already greeted the robot. All users, on average, produced more social dialogue acts in response to nonverbal expressions of effort combined with a valid answer or verbal expressions of effort combined with a failure to parse or fetch information.

Expressing Emotion and Mood

In a similar fashion, we would like to see how nonverbal expressions of emotion and mood affect users' social attitudes toward the robots. While previous research demonstrated that emotions displayed by robots can be recognized and interpreted by humans (Breazeal 2003), it was not known how people would react to a mood (that is, a consistent display of positive or negative emotion).

Gockley, Forlizzi, and Simmons (2006) tested the model of emotion and mood described previously. The study displayed the mood of Valerie (sad, happy, or neutral) as a combination of facial expressions and several nonverbal behaviors: In the negative (sad) condition, Valerie either looked away from the visitor or appeared to sigh. In the neutral condition, Valerie either smiled and performed a single head nod or briefly glanced away from the visitor. In the positive (happy) condition, Valerie either smiled and nodded (as in the neutral condition, but with a wider smile), or bounced her head from side to side in a seemingly happy, energetic motion.

During the nine-week-long study, each day the robot displayed either a negative, neutral, or positive mood. For consistency, the mood was coordinated with the robot's story line.

The analysis was performed separately for weeks with low and high visitor traffic. Dialogues during low-visitor traffic weeks had more user turns in the neutral condition than in either the positive or negative conditions (neutral M = 4.19, positive M = 3.49, negative M = 3.74; F[1, 1236] = 4.59, p = 0.03). Positive and negative mood conditions did not differ significantly.

During high-visitor traffic weeks, however, dialogues had more user turns in the positive and negative mood conditions than in the neutral condition (neutral M = 3.33, positive M = 3.92, negative M = 3.79; F[1, 1419] = 3.81, p = 0.05). Again, there were no significant differences between positive and negative mood conditions.

A possible explanation for these opposing trends is that high-visitor traffic weeks consist of more first-time visitors, who may find it more interesting to encounter a moody (sad or happy) robot. On the other hand, repeat users (more likely during low-traffic weeks) may perceive a higher degree of common ground in the moody (nonneutral) conditions, which may lead to more efficient interactions. More detailed analysis of the interactions would be necessary to give a conclusive answer on the effect of the nonverbal expression of the mood on the social perception of the robot. However, the observed effect of the robot's mood on the number of dialogue turns suggests that factors related to social perception of the robot can be influenced by the robot designers.

Pragmatic Analysis

Humans often produce utterances that convey intentions beyond those implied by the syntax, or even semantics. For example, some apparent questions do not seek information, but instead are intended as either assertions (rhetorical questions, such as "Who do you think you are?"), social rituals (phatic questions, such as "How are you?"), or tests of the receiver's knowledge (display questions, such as "Where is my office?"). If a robot character does not seem to understand the deeper, pragmatic meaning (meaning that depends on context), its degree of believability may be lessened in the estimation of its users.

Consider the fragment of a dialogue shown in figure 7. Clearly, the user has some knowledge of where Jane's office is (the robot's answer is actually incorrect). Why did the user ask this question to the robot in the first place? What was the user's intention? Is this the best possible response, for the robot to provide an answer referring to the semantic content of the query? While we do not yet have answers to these important questions, here we address a problem essential for development of display question detection methods: obtaining ground-truth display question annotations.

1 U: Jane Smith
2 R: Jane Smith is in 4001 Wean Hall. Would you like directions?
3 U: no she is not
4 R: OK
5 U: Smith is not where you say she is

Figure 7. A Fragment of a Dialogue Containing User Feedback. The person's name and room are modified for privacy, and the labels "U:" and "R:" are added to denote the user and robot turns, respectively.

The roboceptionist dialogues occasionally include fragments where users provide explicit feedback, or even answers to their own questions, after the robot fails to answer them properly. For example, dialogues such as the one in figure 7 suggest that users do ask display questions and that some users are willing to provide detailed feedback after the robot fails to produce a satisfactory answer. We attempt to exploit these tendencies by having the robot purposely fail to answer questions at random and, instead, present an elicitation question (but not more than once per dialogue). A fragment of such a dialogue is shown in figure 8. The subset of display questions that is labeled this way serves as a lower bound on the fraction of interactions that contain a display question among all the interactions with an elicitation question. This is just a lower bound because the user may choose not to answer the elicitation question even if he/she knows the answer, and the elicitation intervention may be applied to a question that is not a display question within a larger dialogue that does contain a display question.

1 U: where is hamburg hall
2 R: Sorry. I could not get the information. If you happen to know the answer to your question, can you tell it to me?
3 U: yes
4 R: Ok, what is the answer?
5 U: go out of the double doors behind you and walk to your left, the path will lead you to the back of hamburg hall.

Figure 8. A Dialogue Fragment Containing the Embedded Elicitation Question in Turns 2 and 4.

An experiment conducted over a period of 3 months shows that at least 16.7 percent (SE = 4.6 percent) of interactions that included an elicitation question contained a display question (Makatchev and Simmons 2010). Given the relatively large number of display questions, we believe that it is important to be able to reliably detect and handle such questions, in order to sustain believability. Future work will analyze the corpus to find ways of distinguishing between display and other types of questions.
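
The lower-bound figure is a simple proportion with its standard error. The sketch below reproduces numbers of the reported magnitude under assumed counts; the actual sample size is not given in the article.

# A minimal sketch of the lower-bound estimate: the fraction of
# elicitation-question interactions that revealed a display question,
# with the standard error of a proportion.
import math

def proportion_with_se(hits, n):
    p = hits / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p, se

# e.g., 11 confirmed display questions out of 66 elicitation dialogues
# (counts assumed for illustration):
p, se = proportion_with_se(11, 66)
print(f"at least {100 * p:.1f}% (SE = {100 * se:.1f}%)")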

Conclusions

When users encounter a robot capable of natural language dialogue, their expectations are raised not only with respect to the natural language understanding and generation capabilities, but also with regard to the intentionality of both verbal and nonverbal behaviors, the robot's autonomy, and its awareness of the sociocultural context. Failing to meet user expectations along any of these dimensions may result in less natural or even disrupted interactions.

We attempt to meet such expectations by creating believable characters that provide an illusion of life. We presented several approaches to creating believable robot characters, including having a backstory and dynamic story line, using nonverbal expressions of emotions, and incorporating social cues and sociocultural context into the robot's behaviors. Our results show that such a holistic approach to robot design results in robots that can sustain the interest of a user community over many years of deployment. We argue for the desirability of evaluating design choices "in the wild" and have shown that, in some cases, it is feasible to obtain ground-truth user intent without breaking the illusion of the robot operating unsupervised and unmonitored.


We also found that many users communicate with the robot characters using social dialogue, which can be considered as a measure of believability. Such users tend to perform better on the information-seeking task. The natural question arises: is there a causal link between social attitude and performance? We took a step toward answering this question by finding that dialogue strategies of priming and expression of effort tended to increase the social responses of users who had already greeted the robot. In addition, there is weaker evidence that verbal expression of effort can trigger social dialogue acts for all users (on average) when the robot fails to answer the question. It remains to be shown, however, whether such a manipulation of users' social responses affects their task performance.

Not all tasks would benefit from social interactions with a believable agent. For example, for certain stressful and demanding tasks, even human teams use simplified languages devoid of social cues. Nevertheless, for tasks where social interaction is key, such as when a receptionist establishes rapport with a visitor, we contend that believable robot characters have the potential to positively affect both perceptual and performance metrics of interactions.

Future work includes expansions along all of the outlined dimensions of believability, including verbal and nonverbal expression of personality and emotion, as well as recognition and generation of culturally appropriate behaviors. While we still have a long way to go in understanding exactly what factors affect character believability, we feel confident that our approach is heading in the right direction.

Acknowledgments

This publication was made possible by the support of an NPRP grant from the Qatar National Research Fund. Min Kyung Lee would like to acknowledge National Science Foundation grants IIS-0624275 and CNS-0709077 and the Kwanjeong Educational Foundation. The statements made herein are solely the responsibility of the authors.

The authors would like to acknowledge contributions of Ameer Ayman Abdulsalam, Amna Al-Zeyara, Hatem Alismail, Greg Armstrong, Nawal Behih, Frank Broz, Frédéric Delaunay, Andy Echenique, Wael Mahmoud Gazzawi, Junsung Kim, Nik Melchior, Marek Michalowski, Anne Mundell, Brennan Sellner, Suzanne Wertheim, and Victoria Yew. We are also grateful to Michael Agar, Justine Cassell, Michael Chemers, Sanako Mitsugi, Antonio Roque, Alan Schultz, Candace Sidner, Aaron Steinfeld, and Mark Thompson for their input at various stages of the research presented in this article.

Notes

1. See Cepstral LLC, Cepstral Text-to-Speech (cepstral.com).

2. See the Ainebot project home page: distro.ibiblio.org/pub/linux/distributions/amigolinux/download/ainebot.

3. However, the new Pittsburgh roboceptionist character (under development) will have an African-American appearance and a Southern accent.

References

Agar, M. 1994. Language Shock: Understanding the Culture of Conversation. New York: William Morrow.

Agar, M. 2006. Culture: Can You Take It Anywhere? Invited Lecture Presented at the Gevirtz Graduate School of Education, University of California at Santa Barbara. International Journal of Qualitative Methods 5(2): 1–12.

Bates, J. 1994. The Role of Emotion in Believable Agents. Communications of the ACM 37(7): 122–125.

Breazeal, C. 2003. Emotion and Sociable Humanoid Robots. International Journal of Human Computer Studies 59(1–2): 119–155.

de Rosis, F.; Pelachaud, C.; and Poggi, I. 2004. Transcultural Believability in Embodied Agents: A Matter of Consistent Adaptation. In Agent Culture: Human-Agent Interaction in a Multicultural World, ed. S. Payr and R. Trappl, 75–105. Mahwah, NJ: Lawrence Erlbaum Associates.

Ekman, P. 1969. Pan-Cultural Elements in Facial Displays of Emotion. Science 164(3875): 86–88.

Fanaswala, I.; Browning, B.; and Sakr, M. 2011. Interactional Disparities in English and Arabic Native Speakers with a Bi-Lingual Robot Receptionist. In Proceedings of the 6th International Conference on Human-Robot Interaction (HRI), 133–134. New York: Association for Computing Machinery.


Fischer, K. 2006. The Role of Users' Preconceptions in Talking to Computers and Robots. Paper presented at the Workshop on How People Talk to Computers, Robots, and Other Artificial Communication Partners, Delmenhorst, Germany, 21–23 April.

Gockley, R.; Bruce, A.; Forlizzi, J.; Michalowski, M.; Mundell, A.; Rosenthal, S.; Sellner, B.; Simmons, R.; Snipes, K.; Schultz, A. C.; and Wang, J. 2005. Designing Robots for Long-Term Social Interaction. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2199–2204. Piscataway, NJ: The Institute of Electrical and Electronics Engineers.

Gockley, R.; Forlizzi, J.; and Simmons, R. 2006. Interactions with a Moody Robot. In Proceedings of the ACM/IEEE International Conference on Human Robot Interaction, 186–193. New York: Association for Computing Machinery.

Hayes-Roth, B.; Maldonado, H.; and Moraes, M. 2002. Designing for Diversity: Multicultural Characters for a Multicultural World. Paper presented at IMAGINA, The European 3D Simulation and Virtual Technology Event, Monte Carlo, Monaco, 12–15 February.

Hoffman, G. 2011. On Stage: Robots as Performers. Paper presented at the RSS 2011 Workshop on Human-Robot Interaction: Perspectives and Contributions to Robotics from the Human Sciences, Los Angeles, CA, 1 July.

Ishiguro, H. 2005. Android Science: Toward a New Cross-Interdisciplinary Framework. Paper presented at Toward Social Mechanisms of Android Science: An ICCS/CogSci-2006 Long Symposium, Vancouver, BC, Canada, 26 July.

Kirby, R.; Forlizzi, J.; and Simmons, R. 2010. Affective Social Robots. Robotics and Autonomous Systems 58(3): 322–332.

Lazarsfeld, P., and Merton, R. 1954. Friendship as a Social Process: A Substantive and Methodological Analysis. In Freedom and Control in Modern Society, ed. M. Berger, T. Abel, and C. Page. New York: Van Nostrand.

Lee, M. K., and Makatchev, M. 2009. How Do People Talk with a Robot? An Analysis of Human-Robot Dialogues in the Real World. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, 3769–3774. New York: Association for Computing Machinery.

Lee, M. K.; Kiesler, S.; and Forlizzi, J. 2010. Receptionist or Information Kiosk: How Do People Talk with a Robot? In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work (CSCW), 31–40. New York: Association for Computing Machinery.

Makatchev, M., and Simmons, R. 2010. Do You Really Want to Know? Display Questions in Human-Robot Dialogues. In Dialog with Robots: Papers from the AAAI Fall Symposium, Technical Report FS-10-05. Menlo Park, CA: AAAI Press.

Makatchev, M., and Simmons, R. 2011a. Perception of Personality and Naturalness through Dialogues by Native Speakers of American English and Arabic. Paper presented at the 12th Annual SIGdial Meeting on Discourse and Dialogue, Portland, OR, 17–18 June.

Makatchev, M., and Simmons, R. 2011b. Using Initiative,




Reid Simmons is a research professor in the Robotics Institute at Carnegie Mellon University. He received his Ph.D. in artificial intelligence from MIT in 1987. His research focuses on developing reliable autonomous systems that can plan, act, and learn in complex, uncertain environments. He is currently investigating aspects of human-robot social interaction, both conversational and navigational; multirobot coordination, especially as it applies to assembly of large-scale structures; and assistive robotics to aid the elderly and people with disabilities.

Maxim Makatchev is a Ph.D. candidate in the Robotics Institute at Carnegie Mellon University. His research interests are in human-robot interaction and natural language dialogue.

Rachel Kirby (née Gockley) received her Ph.D. in robotics from Carnegie Mellon University in 2010. Her research addressed various aspects of social human-robot interaction, with a focus on computational models of human behavior. Her thesis, "Social Robot Navigation," presented a method for robots to navigate safely and comfortably around people by respecting human social conventions, such as personal space. Kirby is now a software engineer at Google, Inc.

Min Kyung Lee is a Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University. Her general research interests lie in the broad areas of human-computer interaction, human-robot interaction, and computer-supported cooperative work. Over the past four years, Lee has worked on several projects focused on designing the social behaviors of autonomous systems and their application to collaboration and to personal and assistive services, at Intel and Willow Garage.

Imran Fanaswala is a senior research programmer at Carnegie Mellon Qatar. His time is spent coding, mentoring, and mulling in the areas of human-robot interaction and cloud computing.

Brett Browning is a senior systems scientist in the Robotics Institute at Carnegie Mellon University. He is also associated with Carnegie Mellon Qatar, where he is a codirector of the Qri8 robotics lab, and with the National Robotics Engineering Center (NREC). He joined Carnegie Mellon in 2000 as a postdoctoral fellow and then joined the faculty in 2002. Browning received his Ph.D. in electrical engineering from the University of Queensland in 2000. His research interests focus on perception and autonomy for robots operating in industrial and off-road environments.

Jodi Forlizzi is an associate professor of design and human-computer interaction at Carnegie Mellon University in Pittsburgh, Pennsylvania. Forlizzi is an interaction designer who examines theories of experience, emotion, and social product use as they relate to interaction design. Her other research and practice center on notification systems, ranging from peripheral displays to embodied robots, with a special focus on the social behavior these systems evoke. One recent system is Snackbot, a robot that delivers snacks and encourages people to make healthy choices.

Majd F. Sakr is an associate teaching professor in computer science and the assistant dean for research at Carnegie Mellon University in Qatar. He is the cofounder of the Qatar Cloud Computing Center. In addition to working at Carnegie Mellon in Pittsburgh, he has held appointments at the American University of Science and Technology in Beirut and at the NEC Research Institute in Princeton, New Jersey. His current research interests include data-intensive scalable computing and cross-cultural, multilingual human-robot interaction and HCI. He holds a BS, an MS, and a Ph.D. in electrical engineering from the University of Pittsburgh.





