S.S. Ge et al. (Eds.): ICSR 2012, LNAI 7621, pp. 45–55, 2012. © Springer-Verlag Berlin Heidelberg 2012

Robot Social Intelligence

Mary-Anne Williams

Social Robotics Studio, University of Technology, Sydney, 2007 Sydney, Australia

[email protected]

Abstract. Robots are pervading human society today at an ever-accelerating rate, but in order to actualize their profound potential impact, robots will need cognitive capabilities that support the necessary social intelligence required to fluently engage with people and other robots. People are social agents, and robots must develop sufficient social intelligence to engage with them effectively. Despite their enormous potential, robots will not be accepted in society unless they exhibit social intelligence skills. They cannot work with people effectively if they ignore the limitations, needs, expectations and vulnerability of people working in and around their workspaces. People are limited social agents, i.e. they do not have unlimited cognitive, computational and physical capabilities. People have limited ability in perceiving, paying attention, reacting to stimuli, anticipating, and problem-solving. In addition, people are constrained by their morphology; it limits their physical strength, for example. People cannot be expected to and will not compensate for the social deficiencies of robots; hence widespread acceptance and integration of robots into society will only be achieved if robots possess sufficient social intelligence to communicate, interact and collaborate with people. In this paper we identify the key cognitive capabilities robots will require to achieve appropriate levels of social intelligence for safe and effective engagement with people. This work serves as a proto-blueprint that can inform the emerging roadmap and research agenda for the exciting and challenging new field of social robotics.

Keywords: Social Intelligence, Cognitive Capabilities, Artificial Intelligence, Autonomous Agents, Law and Ethics.

1 Introduction

Roboticists and social scientists may be seen as having little in common, but they agree on at least one issue: the new age of robots will have a profound impact on people and society. Robots are computer-controlled cyberphysical systems that perceive their environment using sensors and undertake physical action using actuators to effect change. Autonomous robots can interpret the sensor information they gather and undertake physical action without human intervention; autonomy can be graded in various ways [46]. Over the last thirty years computers have revolutionized society; it is not surprising that the impending robot revolution is expected to have an even more profound impact.

Robots can perceive stimuli that humans cannot detect; they can be physically stronger, less distracted and more tolerant of difficult conditions and repetitive routines. These characteristics, together with a lack of social intelligence, will present challenges to society as robots become more and more integrated into people's lives and as people increasingly interact and engage with robots.

Robot sensory, actuation and computational capabilities have improved dramatically since Shakey, the world's first general-purpose mobile robot that could plan and reason about its own actions, was developed at the Stanford Research Institute (SRI) in the late 1960s and early 1970s. However, the gap between the social intelligence of people and that of robots remains almost as wide today as it was for the first generation of robots.

As robot capabilities become increasingly impressive and robots increasingly pervasive in society, there is a pressing need to design and develop robots with social cognitive capabilities, social intelligence and effective social skills. Social intelligence allows social agents to act appropriately in social settings. Albrecht [1] defines social intelligence as the ability to get along well with others while winning their cooperation. He argues that social intelligence requires social awareness, sensitivity to the needs and interests of others, an attitude of generosity and consideration, and a set of practical skills for interacting successfully with others in any setting.

Human society is diverse and tolerates a wide variety of cultures, races and practices, but underlying human society are humans, who share similar bodies and similar cognitive capabilities, e.g. vision, language, pain, and intelligence. Despite cultural, racial and gender differences, people have similar morphologies, similar cognitive abilities and similar human experiences. People who lack social capabilities are seen to exhibit social disorders, e.g. autism, and antisocial personalities, e.g. sociopathy, psychopathy, and Axis II personality disorders. Dissocial personality disorder is classified by the World Health Organisation [53] as characterized by at least three of the following: (i) callous unconcern for the feelings of others; (ii) a gross and persistent attitude of irresponsibility and disregard for social norms, rules, and obligations; (iii) incapacity to maintain enduring relationships, though having no difficulty in establishing them; (iv) very low tolerance to frustration and a low threshold for discharge of aggression, including violence; (v) incapacity to experience guilt or to profit from experience, particularly punishment; (vi) being markedly prone to blame others or to offer plausible rationalizations for the behaviour that has brought the person into conflict with society.

Aberrant human behaviour is examined in psychopathology, the study of mental illness, mental distress, and abnormal or maladaptive behaviour, and is treated as a medical condition; psychopathy, for example, is a genetic subtype of antisocial personality disorder. Antisocial behaviour has a high cost in human society, as it tends to cause significant unwanted problems [48]. Humans suffering from mental disorders may be treated with compassion, but there is no role in human society for a robot exhibiting psychopathic or sociopathic behaviours. If robots are to be suitably integrated into human society, they will require social cognitive capabilities and social intelligence skills to work with people and with each other.

This paper identifies key capabilities autonomous robots will require in order to engage and collaborate with people safely and effectively in society. Section 2 identifies cognitive capabilities needed to achieve social intelligence. Section 3 highlights several key legal and ethical issues related to social robots.


2 Cognitive Capabilities for Social Intelligence

Social intelligence requires a high level of self-awareness, a sense of identity and awareness of others. People typically have a strong sense of identity, self-awareness, and awareness of others. In addition, people are motivated to act of their own volition with a sense of purpose. People are curious and driven to develop an understanding of themselves and other entities in their environment. It is unlikely that robots will achieve human-level cognition any time soon, but in order to attain a minimal level of social intelligence they will require a concept of self, an understanding of purposeful behaviour, and an ability to distinguish intentional behaviour from unintentional behaviour. Call and Tomasello [16] studied the understanding of intention in orangutans, chimpanzees and children; all three demonstrated that they understood the difference between accidental and intentional acts. One of the challenges in studying an agent's understanding of others is that observed phenomena like behaviours can sometimes be explained as simple stimulus-response learning, rather than requiring deep understanding.

A person can observe the existence and configuration of another person's body directly; however, all aspects of other people's minds must be inferred from observing their behaviour together with other information. Robots, on the other hand, can share sensory data, perceptions and software with each other directly. Interpreting another person's behaviour is a nontrivial task. Humans use introspection and experience with other people to help decipher it. Sometimes they are wrong and misinterpret other people's behaviour, but surprisingly often people interpret other people's behaviour correctly, particularly within a single culture. This skill turns out to be crucial for interaction and engagement in social settings. People use a range of strategies to engage and collaborate with others, e.g. language, where meaning is derived from commonly agreed vocabularies and grammars [3], gaze, and gestures.

Achieving social intelligence among agents that share similar morphologies and cognitive capabilities is easier than achieving it among heterogeneous agents. Social intelligence across nonhuman species is not prevalent in the animal kingdom: wolves, lions and chimpanzees can hunt collaboratively in packs, achieving joint attention and working together to catch their prey, but there is no evidence that lions hunt with chimpanzees or wolves [11]. People, on the other hand, can work with other species like horses, donkeys, elephants, eagles and dogs to achieve collaborative outcomes; however, the way people work with animals to achieve a goal is usually entirely different from the way they would work with other people, and much more limited in what it can achieve.

A theory of mind allows people to attribute thoughts, desires, and intentions to other agents, and to predict and explain their actions and behaviours. Knowing that other people have a theory of mind also assists people to communicate, as they know that their actions and behaviour can be interpreted in certain ways in certain contexts in a given culture. For example, a person could indicate pain by crying, or look at an object intently or point to it to direct other people's attention. Even though animals, and in particular mammals, have bodies (and brains) which are different to humans', people still try to imagine what animals are thinking and feeling. Animals do not laugh, for example, but they have behaviours that people interpret: when a dog whimpers a person might infer that it is in pain, and when a dog wags its tail a person might infer that the dog is happy and excited. External indicators of animal thought are an important factor in achieving empathy across species. Dogs wag their tails; humans clap hands, smile and frown. Although a robot could display a visual or audible action to indicate pleasure, dismay or fear, the capacity to make a display that is intuitive and easily interpreted by people may assist in the robot's acceptance by humans. A gesture or a low-impact sound like purring can often be more effective than a spoken linguistic response, which is typically more invasive and attention-demanding. For spoken responses, a robot that sounds like a child will evoke a different instinctive response from people than one whose voice sounds more like a strong and competent adult.

For the foreseeable future robot bodies will be entirely different to the human body and to those found in the biological world. As a result a robot's subjective experience is entirely different to that of people, and yet if humans are to communicate and collaborate with robots we must bridge this experience gap. If robots are to work with people in open and complex ways then they will require a theory of human mind; similarly, people will require a theory of robot mind.

Bridging the robot-human mind gap will be nontrivial. It will require that robots develop a robust social intelligence and a theory of mind [42]. Children under the age of three do not typically exhibit a theory of mind capability; it develops later and improves over time. One of the key tools we use to interpret other people's behaviour is introspection. When one sees another person reach for an object, one typically infers that the person wants to grasp the object. If a robot observes the same scenario and has the capability to retrieve the object, then it might try to assist, but there will always be exceptions and a need for robots to display commonsense. Robot designers must find ways to endow a robot with commonsense and social intelligence so that it will respond appropriately in social circumstances [51].

Robots and people need cognitive skills to explain and predict each other's behaviour. Empathy is related to a theory of mind, and so we can ask what kinds of empathy people should develop for robots and vice versa. Empathy can be effected as a kind of projection of experience: if agent X observes agent Y experiencing situation S, then X can predict how Y may be feeling, and importantly how Y may respond, by imagining itself in S. Recent neuro-ethological studies of animal behaviour [55] suggest that mammals may exhibit ethical or empathic abilities, but there is little evidence that insects, despite demonstrating sophisticated "social" skills in their colonies, empathise with each other.

A theory of mind is sometimes regarded as a byproduct of broader cognitive abilities of the human mind to register, monitor, and represent its own functioning. However, in this paper we take the simulation stance [25] on a theory of mind, which says that an agent recognizes its own mental states and, by simulation, ascribes mental states to others. Robots could develop this simulation theory of mind capability and use it to predict other agents' states by simulating their experience and responses.
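To make the simulation stance concrete, the sketch below shows one way a robot might reuse its own belief-forming machinery to ascribe mental states to another agent. It is a minimal illustration in Python; the class and function names, and the idea of replaying the other agent's observation history through the robot's own model, are assumptions for exposition, not a published implementation.

```python
# Hypothetical sketch of the simulation stance [25]: ascribe mental
# states to another agent by re-running the robot's own belief-forming
# model on that agent's observation history.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MentalState:
    beliefs: dict                 # proposition -> believed value
    goal: Optional[str] = None

class SimulationToM:
    def __init__(self, self_model: Callable[[list], MentalState]):
        # self_model(observations) -> MentalState is the robot's own
        # machinery for forming beliefs from what it perceives.
        self.self_model = self_model

    def ascribe(self, other_observations: list) -> MentalState:
        # "What would I believe if I had seen only what they saw?"
        return self.self_model(other_observations)

    def predict_response(self, other_observations: list,
                         policy: Callable[[MentalState], str]) -> str:
        # Predict the other agent's next action from the ascribed state,
        # using the robot's own action policy as a stand-in for theirs.
        return policy(self.ascribe(other_observations))
```

The key design choice is that no separate model of the other agent is built; the robot's own perception-to-belief pipeline is simply fed a restricted observation stream.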

Social robots will require capabilities which allow them to attribute knowledge and mental states to other cognitive agents like people, household pets, and other robots. The ability to simulate attention and intention, as well as the ability to imitate and learn, are key cognitive skills for social robots. Researchers have shown that people's ability to model other minds develops over their lifetime, and there is no reason to expect robots to have a mature theory of mind from the moment of deployment; they too need methods that allow them to develop their abilities over their lifetime.

Baron-Cohen [10, 11, 12] showed that infants respond to attention in others by 7-9 months of age and that this skill is a key step on the critical pathway to developing a theory of mind [40]. Understanding attention requires the insight that where a person is looking provides clues as to what may have their attention and/or what they may be thinking about. Many experiments confirm that people follow the gaze of others. In addition, people point to indicate objects of interest. Dogs can also read attention in other dogs [32] and in humans: dogs can correctly interpret human pointing, and hunting dogs are well known for pointing with their nose and tail. Other animals follow heads, not eyes, in interpreting gaze [56]. Humans are unique in having a highly visible light/white sclera; other animals have dark sclera, with rare exceptions, e.g. albino gorillas. This, together with an understanding of ourselves, makes it relatively easy to guess what another person may be looking at.
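A robot can exploit the same cue. As a minimal geometric sketch, assuming the robot already has an estimated eye position and gaze direction from some perception pipeline (both hypothetical inputs here), it can guess the attended object by finding the scene object closest in angle to the gaze ray:

```python
# Minimal gaze-following sketch: pick the scene object whose centre lies
# closest to the estimated gaze ray. The scene representation and all
# names are illustrative assumptions.

import numpy as np

def attended_object(eye_pos, gaze_dir, objects, max_angle_deg=10.0):
    """eye_pos: (3,) eye position; gaze_dir: (3,) gaze direction;
    objects: dict name -> (3,) object centre. Returns the most likely
    attended object, or None if nothing lies near the gaze ray."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_angle = None, np.deg2rad(max_angle_deg)
    for name, centre in objects.items():
        to_obj = centre - eye_pos
        dist = np.linalg.norm(to_obj)
        if dist == 0:
            continue
        # Angle between the gaze ray and the direction to the object.
        cosang = np.clip(np.dot(gaze_dir, to_obj / dist), -1.0, 1.0)
        angle = np.arccos(cosang)
        if angle < best_angle:
            best, best_angle = name, angle
    return best

# Example: a person at the origin looking roughly toward the cup.
scene = {"cup": np.array([1.0, 0.1, 0.0]), "book": np.array([0.0, 1.0, 0.0])}
print(attended_object(np.zeros(3), np.array([1.0, 0.0, 0.0]), scene))  # -> cup
```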

People direct and share attention to communicate and collaborate. They use joint attention [11] to cooperate and undertake joint tasks, and the act of pointing is a means to create joint attention. Social robots must acquire the ability to interpret and enact pointing behaviours. In addition to managing attention, robots will require capabilities to determine intentions so that they can assist, avoid and anticipate a person's next action [41]. Robots will require tools that allow them to model other agents' beliefs; this is particularly important when those beliefs are false or inconsistent with the robot's own. Various psychological tests have been used to assess people's ability to model other people's minds, and the following tests can be used to evaluate a robot's cognitive capabilities for social intelligence. Passing a test as-is would not demonstrate social intelligence, since a robot could easily be programmed to pass it; the key point is that social robots would need to demonstrate the required ability in a broad range of general settings.

The False-Belief Test: The robot is shown a basket and a box. A marble is placed in the basket in front of a person, who then leaves the room; while they are out of the room the marble is moved from the basket to the box. The robot is then asked where the person will look for the marble. The robot passes the task if it answers that the person will look in the basket, i.e. where they saw the marble placed. This test can be used to demonstrate that a robot is able to understand that a human's state of belief may differ from its own and is based on the human's personal experience. Typically children are unable to pass this test before the age of four, and even at age four the majority of autistic children are unable to pass it [5, 10, 12, 38].

The Appearance-Reality Test: A robot is asked what it believes to be inside a candy box; the robot will indicate candy. The robot is then shown that the box in fact contains pencils, and is asked what an agent/person who has not observed the contents will believe is in the box. If the robot answers candy, it passes the test. Normal children can pass this test at age four or five years, but autistic children typically cannot.
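Both tests reduce to tracking who witnessed which events. The toy sketch below, a hypothetical illustration rather than anyone's published system, keeps a separate belief store per agent and updates it only with events the agent was present to observe; as stressed above, passing such a scripted scenario would not by itself demonstrate social intelligence.

```python
# Toy perspective-tracking sketch for the two tests above: one belief
# store per agent, updated only by events that agent actually witnessed.

class BeliefTracker:
    def __init__(self):
        self.beliefs = {}          # agent -> {proposition: value}
        self.present = set()       # agents currently able to observe

    def add_agent(self, agent, present=True):
        self.beliefs[agent] = {}
        if present:
            self.present.add(agent)

    def leaves(self, agent):
        self.present.discard(agent)

    def returns(self, agent):
        self.present.add(agent)

    def event(self, proposition, value):
        # Only agents present at the time of the event update their beliefs.
        for agent in self.present:
            self.beliefs[agent][proposition] = value

    def believes(self, agent, proposition):
        return self.beliefs[agent].get(proposition)

# False-belief scenario:
t = BeliefTracker()
t.add_agent("robot"); t.add_agent("person")
t.event("marble_location", "basket")   # both see the marble placed
t.leaves("person")
t.event("marble_location", "box")      # only the robot sees the move
t.returns("person")
print(t.believes("person", "marble_location"))  # -> basket (false belief)
print(t.believes("robot", "marble_location"))   # -> box
```

The appearance-reality test follows the same pattern: the revealed pencils are an event witnessed by the robot but not by the absent agent.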

Social robots will need to be able to consider events and situations from another agent's perspective. Humans who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behaviour affects others, and have a difficult time with social reciprocity [5, 10, 12, 38]. Theory of mind deficits have been observed in people with autism spectrum disorders, people with schizophrenia, people with attention deficit disorder, persons under the influence of alcohol and narcotics, sleep-deprived persons, and persons experiencing severe emotional or physical pain [5]. Clearly social robots require a theory of mind; without one they will exhibit behaviours that, in people, would be regarded as unacceptable mental disabilities.

It is important to note that there has been some controversy over the interpretation of evidence purporting to show theory of mind ability, or inability, in animals. For example, Povinelli et al. [39] presented chimpanzees with the choice of two experimenters from which to request food: one who had seen where the food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting), did not know and could only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello [31] found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached.

Individual people experience a mind and assume other people also experience a mind. A theory of mind has been shown to be a crucial capability for cognitive agents to communicate effectively [10, 11, 12, 25]. Without a theory of mind people display forms of mental disability, and normal people find it cognitively exhausting and frustrating to work and engage with people who do not have one.

Reality is experienced and represented (perceived and conceived) by both people and robots; however, neither represents every aspect of its experience. The aspects of experience that are selected for representation are determined by morphology and grounding capabilities [45]. Robots and people have entirely different morphologies and information grounding capabilities; consequently their social intelligence is different. Social robots will require self-awareness, which Novianto and Williams [28, 49] maintain is the ability to focus attention on subjective experience, i.e. what is happening now. A robot that can recognise itself in a mirror without possessing a self-concept would not count as self-aware. The ability to anticipate is a crucial skill related to attention; it concerns awareness of future subjective experience, i.e. what is about to happen or what will happen next. The ability to anticipate other agents' behaviour, including what might gain their attention, will play a crucial role in determining how a robot should respond to other agents cohabiting a shared environment or cooperating to achieve joint tasks.

A basic social skill is to know when a "friend" or "superior" requires help and when one should offer help. In order for social robots to ask for assistance they must have an understanding and representation of their own capabilities and experience. They must know what they know, what they don't know, what they are doing, how they are doing it, and why they are doing it; and they must know what others know, what others don't know, what others are doing (and how and why), what interests others, and what will get others' attention (and why).

The cognitive experience architecture [46] provides a useful tool to explore robot design. It has four main components grounded in a robot's experience: morphology, understanding, motivation and governance. The cognitive experience architecture can be used to design and develop robotic systems that can make sense of their own experiences by themselves, guided by self-generated motivations and cognitive capabilities for self-control and understanding. Robots are physical entities and have a physical morphology that affords key cognitive capabilities that support the development of a sense of being and self-determination. A self-determined robot must be able to represent and make sense of its own experience, use its motivations to drive control mechanisms and strategies, and use awareness and attention to steer understanding for the purpose of responding and anticipating. A robot's body parts, degrees of freedom, sensors and actuators all play important roles in its morphology, which in turn determines the robot's sense of self and self-concept.

Robots experience themselves (via proprioception) and the world (via perception); proprioceptive and perceptual experience acquire information through internal and external sensors respectively. Representations of social experience are key to developing social intelligence; what is needed are mechanisms for robots to represent and use their own experience, rather than behaviours designed via human encoding of social experience. Robots need to be self-motivated and to pursue their own goals, such as seeking certain experiences depending on representations of needs, wants, and current and future states.

Motivation plays a crucial role in action selection in robots. It influences what the robot will pay attention to, which in turn affects what the agent is aware of and how it might respond to its experiences. It determines what information will be grounded in representations, and how, when and why. People's attention and awareness are influenced by their emotional state, goals, passion, persistence, and perseverance, all properties of motivation. At the very least, autonomous social robots will require simple motivations: to act safely, to not cause damage to themselves or others, and to not waste resources, including people's time.
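The sketch below is only an illustrative rendering of these ideas, not the published architecture of [46]: it names the four components as fields and shows one hypothetical way motivations could score candidate actions while governance rules veto unsafe or wasteful ones.

```python
# Illustrative rendering (not the architecture published in [46]) of the
# four components and of motivation-driven action selection.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Robot:
    morphology: Dict[str, str]                       # sensors/actuators available
    understanding: Dict[str, object] = field(default_factory=dict)   # grounded representations
    motivations: Dict[str, Callable[[dict], float]] = field(default_factory=dict)
    governance: List[Callable[[str], bool]] = field(default_factory=list)  # vetoes, e.g. safety rules

    def select_action(self, percepts: dict, candidates: List[str]) -> Optional[str]:
        # Motivations jointly score each candidate action in context,
        # which is also what steers attention in this toy rendering...
        scored = sorted(candidates,
                        key=lambda a: sum(m({**percepts, "action": a})
                                          for m in self.motivations.values()),
                        reverse=True)
        # ...and governance vetoes anything that violates a rule.
        for action in scored:
            if all(rule(action) for rule in self.governance):
                return action
        return None
```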

A robot can claim to achieve a degree of understanding if it can make sense of its experience; this requires cognitive capabilities for making representations. Representations possess affordances; they can afford action, behaviour, reaction, deliberation, decision-making, learning, description, explanation, prediction, anticipation and many other capabilities for social intelligence. Measuring affordances can provide a measure of the value/quality of representations.

The internal (e.g. body) and external worlds offer too much information to capture in representations of experience: humans do not represent every chemical reaction in their bodies, nor do they represent every event they witness in the outside world. Clearly, a robot should carefully select for representation the information that will generate the most value. We know from neuroscience that predictive feedback cycles are crucial in learning, and studies in neuropsychology suggest the value of comparing predictions with perceived information. Predictive feedback plays an important role in robotics too. There is an essential difference between prediction and anticipation: prediction forecasts the future, while anticipation concerns actions to take in the present for better outcomes in the future. Humans spend copious amounts of time thinking, rerunning old experiences and rehearsing future ones as they anticipate and prepare for future social encounters.
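As a minimal sketch of such a predictive feedback cycle, the loop below, with a hypothetical linear forward model and learning rate, predicts the next percept, compares it with what is actually sensed, and updates the model from the error; an anticipatory robot would additionally use the current error to choose an action now rather than merely forecast.

```python
# Minimal predict-compare-update sketch of a predictive feedback cycle.
# The linear forward model and update rule are illustrative assumptions,
# not a specific published method.

import numpy as np

def feedback_cycle(state, weights, perceive, steps=100, lr=0.01):
    """state: (n,) current percept; weights: (n, n) forward model;
    perceive: callable returning the next (n,) percept."""
    for _ in range(steps):
        prediction = weights @ state             # predict the next percept
        actual = perceive()                      # sense what actually happens
        error = actual - prediction              # compare prediction with reality
        weights += lr * np.outer(error, state)   # update the model from the error
        state = actual                           # anticipation would also act on `error` here
    return weights
```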

3 Legal and Ethical Considerations

Social robots require not only self-awareness but also social awareness. They must be able to detect and recognize other social agents, and possess a social radar that allows them to navigate social situations just as they navigate physical spaces. Social robots will be expected to behave legally and ethically. Science fiction writer Asimov [2] developed the following so-called Three Laws of Robotics. These laws provide a useful place to start designing a set of rules that could be used to govern social robot behaviour; a minimal rule filter is sketched after the list.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
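The rule filter mentioned above could, as a first approximation, treat each law as a predicate checked in priority order. In the hypothetical sketch below actions are toy dictionaries with hand-labelled fields, a loud simplification: as the next paragraph argues, the real difficulty is computing predicates like "harms a human" at all.

```python
# Hypothetical sketch of Asimov's laws as a priority-ordered action
# filter. Actions are toy dicts with hand-labelled fields; assessing
# harm in the real world is the open problem discussed below.

def permitted(action: dict) -> bool:
    # First Law: never injure a human or, through inaction, allow harm.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return False
    # Third Law: protect own existence, unless a human order (Second Law,
    # which outranks it) requires taking the risk.
    if action.get("endangers_self") and not action.get("ordered_by_human"):
        return False
    return True

# A human order that would harm someone is refused; a risky order is obeyed.
print(permitted({"ordered_by_human": True, "harms_human": True}))    # False
print(permitted({"ordered_by_human": True, "endangers_self": True})) # True
```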

If social robots are to respect and enact these laws and others like them, they will require significant social intelligence and the necessary underlying cognitive capabilities, including self-awareness, social awareness, empathy, and a theory of mind. For example, without the ability to empathize with humans, how will a robot determine what might harm them? Determining and assessing the nature and degree of harm is crucial to enacting Asimov's Laws. People are able to interpret and respect laws and codes of ethics by assessing other people's experience, often by putting themselves in other people's positions and circumstances. In human society people regularly observe and mimic other people's behaviour, and imagine what it would be like to be in other people's positions. In doing so, their ability to interpret, copy and imagine how other people feel and think is bounded by their own experiences. That is not to say that people cannot imagine entirely different experiences beyond their own subjective experience, but their experience is a limiting factor in those imaginings. A person who has experienced weightlessness can imagine what it would be like in the International Space Station much better than someone who has not; an accomplished flutist can imagine how another flutist may respond to a new piece of music in a way that someone who has never tried to make a sound using a flute, or indeed cannot read music, never could.

Furthermore, ordinary people in society are not trained as lawyers; instead they learn the meaning of harm and the notions of property and ownership in culturally based social settings. Introspection is a powerful mechanism for assessing harm, and since people share the same kind of body they are able to use introspection to put themselves in other people's position and to assess and explore the consequences.

Social robots must also be law abiding. They should not break the law of their own volition, nor undertake instructions from humans that would lead to laws being broken. Ryan Calo [47] provides a comprehensive treatise on the privacy implications of robots. He maintains that robots will impact human privacy in at least three ways: (i) they have a significant capacity for surveillance, (ii) they introduce new points of access to historically private spaces such as the home, and (iii) they trigger hardwired social responses in people that can threaten several of the values privacy protects.

4 Discussion

Robots need social intelligence in order to interact, engage and collaborate with people fluently, safely and effectively. Without social intelligence robots will exhibit unacceptable levels of social ineptitude and will not deliver their expected value to society. Robot social intelligence requires a theory of empathy and a theory of mind. Just like robots, pets and other nonhuman animals have different morphologies and cognitive capabilities to people, and yet they have been socialized and integrated into human society, albeit in limited ways. People are happy to have sociable animals in their homes, workplaces and public spaces; dogs in particular are often treated as trusted family members and used as workers on farms. Nonetheless, animals do not have the same cognitive capabilities or morphology as people, yet people happily make allowances and collaborate with them. Robots will need more social intelligence than animals if they are to achieve their promised value as trusted domestic servants, friends or co-workers. Furthermore, robots need social intelligence so that they will not exhibit social disabilities like autism, and will not develop unacceptable characteristics such as sociopathic or psychopathic tendencies.

References

1. Albrecht, K.: Social Intelligence: The New Science of Success. Wiley (2005)
2. Asimov, I.: I, Robot. Doubleday & Company, New York (1950)
3. Anderson, J.R.: Language, Memory, and Thought. Erlbaum, Hillsdale (1976)
4. Asch, S.E.: Forming impressions of personality. Journal of Abnormal & Social Psychology 41, 258–290 (1946)
5. Baker, J.: Social Skills Training for Children and Adolescents with Asperger Syndrome and Social-Communication Problems. Autism Asperger Publishing Company (2003)
6. Bandura, A.: Aggression: A Social Learning Analysis. Prentice-Hall (1973)
7. Bandura, A.: Social Foundations of Thought and Action: A Social Cognitive Theory. Prentice-Hall, Englewood Cliffs (1986)
8. Bandura, A., Walters, R.H.: Social Learning and Personality Development. Rinehart & Winston, New York (1963)
9. Bargh, J.A.: The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In: Wyer, R.S., Srull, T.K. (eds.) Handbook of Social Cognition, 2nd edn., vol. 1, pp. 1–40. Erlbaum, Hillsdale (1994)
10. Baron-Cohen, S.: Mindblindness: An Essay on Autism and Theory of Mind. MIT Press (1995)
11. Baron-Cohen, S.: Precursors to a theory of mind: Understanding attention in others. In: Whiten, A. (ed.) Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading, pp. 233–251. Basil Blackwell, Oxford (1991)
12. Baron-Cohen, S., Leslie, A.M., Frith, U.: Does the autistic child have a theory of mind? Cognition 21, 37–46 (1985)
13. Brass, M., et al.: Investigating action understanding: Inferential processes versus action simulation. Current Biology 17(24), 2117–2121 (2007)
14. Broom, M.E.: A note on the validity of a test of social intelligence. Journal of Applied Psychology 12, 426–428 (1928)
15. Byrne, R., Whiten, A. (eds.): Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Clarendon Press, Oxford (1988)
16. Call, J., Tomasello, M.: Distinguishing intentional from accidental actions in orangutans (Pongo pygmaeus), chimpanzees (Pan troglodytes), and human children (Homo sapiens). Journal of Comparative Psychology 112(2), 192–206 (1998)
17. Cantor, N., Fleeson, W.: Social intelligence and intelligent goal pursuit: A cognitive slice of motivation. In: Spaulding, W.D. (ed.) Integrative Views of Motivation, Cognition, and Emotion, pp. 125–180 (1994)
18. Cantor, N., Kihlstrom, J.F.: Social intelligence and cognitive assessments of personality. In: Wyer, R.S., Srull, T.K. (eds.) Advances in Social Cognition, vol. 2, pp. 1–59 (1989)
19. Cantor, N., Zirkel, S.: Personality, cognition, and purposive behavior. In: Pervin, L. (ed.) Handbook of Personality: Theory and Research, pp. 125–164. Guilford, New York (1990)
20. Carruthers, P.: Simulation and self-knowledge: A defence of the theory-theory. In: Carruthers, P., Smith, P.K. (eds.) Theories of Theories of Mind. Cambridge University Press (1996)
21. Huang, C.-M., Mutlu, B.: Robot Behavior Toolkit: Generating effective social behaviors for robots. In: International Human-Robot Interaction Conference (2012)
22. Conway, M.A.: Autobiographical Memory: An Introduction. Open University Press, Milton Keynes (1990)
23. Courtin, C.: The impact of sign language on the cognitive development of deaf children: The case of theories of mind. Cognition 77, 25–31 (2000)
24. Flavell, J.H., Ross, L.: Social and Cognitive Development: Frontiers and Possible Futures. Cambridge University Press (1981)
25. Gallese, V., Goldman, A.: Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences 2(12), 493–501 (1998)
26. Gallup, G.G.: Chimpanzees: Self-recognition. Science 167, 86–87 (1970)
27. Gallup, G.G.: Self-awareness and the evolution of social intelligence. Behavioural Processes 42, 239–247 (1998)
28. Gärdenfors, P.: How Homo Became Sapiens. MIT Press, Cambridge (2004)
29. Gärdenfors, P., Williams, M.-A.: Communication, planning and collaboration based on representations and simulations. In: Khlenthos, D., Schalley, A. (eds.) Language and Cognitive Structure. Benjamins (2007)
30. Gordon, R.M.: 'Radical' simulationism. In: Carruthers, P., Smith, P.K. (eds.) Theories of Theories of Mind. Cambridge University Press, Cambridge (1996)
31. Hare, B., Call, J., Tomasello, M.: Do chimpanzees know what conspecifics know and do not know? Animal Behaviour 61, 139–151 (2001)
32. Horowitz, A.: Attention to attention in domestic dog (Canis familiaris) dyadic play. Animal Cognition 12, 107–118 (2009)
33. Kahn, Kanda, Ishiguro, Gill, Ruckert, Shen: Do people hold a humanoid robot morally accountable for the harm it causes? In: Human-Robot Interaction Conference (2012)
34. Kaminski, J., Neumann, M., Bräuer, J., Call, J., Tomasello, M.: Dogs (Canis familiaris) communicate with humans to request but not to inform. Animal Behaviour 82(4), 651–658 (2011)
35. Kihlstrom, J., Cantor, N.: Social intelligence. In: Sternberg, R.J. (ed.) Handbook of Intelligence, 2nd edn., pp. 359–379. Cambridge University Press, Cambridge (2000)
36. Meltzoff, A.N.: Imitation as a mechanism of social cognition: Origins of empathy, theory of mind, and the representation of action. In: Goswami, U. (ed.) Handbook of Childhood Cognitive Development, pp. 6–25. Blackwell Publishers, Oxford (2002)
37. Pelphrey, K.A., et al.: Grasping the intentions of others: The perceived intentionality of an action influences activity in the superior temporal sulcus during social perception. Journal of Cognitive Neuroscience 16(10), 1706–1716 (2004)
38. Pettersson, H., Kaminski, J., Herrmann, E., Tomasello, M.: Understanding of human communicative motives in domestic dogs. Applied Animal Behaviour Science 133(3-4), 235–245 (2011)
39. Povinelli, D.J., Nelson, K.E., Boysen, S.T.: Inferences about guessing and knowing by chimpanzees (Pan troglodytes). Journal of Comparative Psychology 104(3), 203–210 (1990)
40. Premack, D., Woodruff, G.: Does the chimpanzee have a theory of mind? Behavioral & Brain Sciences 1, 515–526 (1978)
41. Huang, C.-M., Thomaz, A.L.: Joint attention in human-robot interaction. In: AAAI Fall Symposium on Dialog with Robots, Arlington, VA (2010)
42. Scassellati, B.: Theory of Mind for a Humanoid Robot (2004); Sechrest, L., Jackson, D.N.: Social intelligence and the accuracy of interpersonal predictions. Journal of Personality 29, 167–182 (1961)
43. Scheider, L., Grassmann, S., Kaminski, J., Tomasello, M.: Domestic dogs use contextual information and tone of voice when following a human pointing gesture. PLoS ONE 6(7) (2011)
44. Sommerville, J.A., Decety, J.: Weaving the fabric of social interaction: Articulating developmental psychology and cognitive neuroscience in the domain of motor cognition. Psychonomic Bulletin & Review 13(2), 179–200 (2006)
45. Williams, M.-A.: Representation = Grounded Information. In: Ho, T.-B., Zhou, Z.-H. (eds.) PRICAI 2008. LNCS (LNAI), vol. 5351, pp. 473–484. Springer, Heidelberg (2008)
46. Williams, M.-A.: Autonomy: Life and Being. In: Bi, Y., Williams, M.-A. (eds.) KSEM 2010. LNCS (LNAI), vol. 6291, pp. 137–147. Springer, Heidelberg (2010)
47. Calo, M.R.: Robots and privacy. In: Lin, P., Bekey, G., Abney, K. (eds.) Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press, Cambridge (2011)
48. Knapp, M., Romeo, R., Beecham, J.: Economic cost of autism in the UK. Autism 13(3), 317–336 (2009)
49. Novianto, R., Williams, M.-A.: The role of attention in robot self-awareness. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 1047–1053 (2009)
50. Pfeifer, R., Bongard, J.C.: How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press (2006)
51. Johnston, B., Williams, M.-A.: Autonomous learning of commonsense simulations. In: International Symposium on Logical Formalizations of Commonsense Reasoning, pp. 73–78 (2009)
52. Novianto, R., Johnston, B., Williams, M.-A.: Attention in the ASMO cognitive architecture. In: Proc. Bio-Inspired Cognitive Architectures Symposium, pp. 98–105. IOS Press (2010)
53. World Health Organisation: ICD-10 Clinical Descriptions and Diagnostic Guidelines: Disorders of Adult Personality and Behaviour (2010)

