
International Journal of Humanoid Robotics, © World Scientific Publishing Company

DESIGN FOR A ROBOTIC COMPANION

JAN KĘDZIERSKI, PAWEŁ KACZMAREK, MICHAŁ DZIERGWA, KRZYSZTOF TCHOŃ

Chair of Cybernetics and Robotics, Wrocław University of Technology, Wybrzeże Wyspiańskiego Street 27, 50-370 Wrocław, Poland,

email: {jan.kedzierski, pawel.m.kaczmarek, michal.dziergwa, krzysztof.tchon}@pwr.edu.pl

Received Day Month Year; Revised Day Month Year; Accepted Day Month Year

We can learn from the history of robotics that robots are getting closer to humans, both in the physical and in the social sense. The development line of robotics is marked by the triad industrial, assistive, and social robots, which leads from human-robot separation toward human-robot interaction. A social robot is a robot able to act autonomously and to interact with humans using social cues. A social robot that can assist a human for a longer period of time is called a robotic companion. This paper is devoted to the design and control issues of such a robotic companion, with reference to the robot FLASH designed at the Wroclaw University of Technology within the European project LIREC and currently developed by the authors. Two HRI experiments with FLASH demonstrate the human attitude toward FLASH. A trial testing of the robot's emotional system is described.

Keywords: Social robot; robotic companion; robot design; robot control; human-robot interaction.

1. Introduction

It is acknowledged that the era of robotics as a domain of science and technology began in 1960 with the development of Unimate, the first robotic arm, which was deployed in a General Motors plant in New Jersey.1 Industrial robots are programmable machines designed to tirelessly and thoughtlessly carry out routine manufacturing operations. Unable to communicate with the external world, these machines are completely dependent on their human programmers and, for safety reasons, are kept at a distance from humans. This separation can also be seen in many robotics applications other than robotized industrial plants. On the other hand, the growing demand for robotic applications and the increasing capabilities of robots allow them to get closer to humans. This is confirmed most convincingly by medical robotics. A robot for cardiac surgery following the movements of a surgeon's hands or a robot providing a remote diagnosis by abdomen palpation remains in the closest imaginable proximity to the human patient. Other similar examples include a therapeutic robot embraced by a patient suffering from Alzheimer's disease or


a robotic toy pet in the hands of a child. These examples, as well as many others, demonstrate that robots have irreversibly invaded the human private space.

The next step in the development of robotics, beyond getting closer, is getting more similar to humans. This similarity may have a double meaning: a robot could look or behave similarly to humans. There has been significant technological progress in designing anthropomorphic robots, resulting in a generation of android robots such as actroids, geminoids, etc., giving a human the illusion of meeting another human being. However, this illusion quickly disappears when trying to interact or communicate with such a creature: its behavior remains far behind what its appearance seems to promise. Such a discrepancy between the robot's appearance and behavior is frightening and repulsive, and makes humans disapprove of the robot. This phenomenon was described by M. Mori around 1970 and is known as the uncanny valley (shown in Figure 1).2

Fig. 1. Uncanny valley (based on M. Mori)2

The other meaning of similarity has a behavioral dimension. Since the most distinctive feature of humans is being social, we expect a robot to behave socially, to be a social or a sociable robot.3,4 A fundamental ingredient of sociality is the capability of interaction and communication with humans by human means and in a human way. This implies that a social robot should be capable of voice communication, of using gestures or facial and body expressions, of maintaining eye contact, etc. Thus, a social robot gets engaged in interaction with humans on the rational as well as on the emotional level. Social robots can be tolerated or even accepted by humans in their close vicinity; moreover, they may be welcome if their interactivity is sufficiently high and if they are able to provide useful services. If a robot can maintain interactivity and assistivity for a longer period of time, it is called a robotic companion. The term companion originates from the Latin word panis, meaning bread.


Therefore, a companion is someone with whom we are ready to

share bread during a journey. Such a companion should be interactive and assistive

for extended periods of time. Simultaneously, his appearance needs to be consistent

with his behavior, in order to avoid the uncanny valley phenomenon. A concise

characterization of a robotic companion would therefore be: lastingly interactive,

assistive, and consistent.

2. Robot FLASH

The robot FLASH (Flexible Lirec Autonomous Social Helper) was created within the EU FP7 IP LIREC (LIving with Robots and intEractive Companions).5 The project was realized in the years 2008-2012 by a multidisciplinary, international research team of ten partners, coordinated by Prof. P. W. McOwan from Queen Mary and Westfield College, University of London. The main objective of the project was to develop the theoretical foundations of robot-human companionship and to provide the technology for the design of robotic companions. One of the tasks of the LIREC partner from the Wroclaw University of Technology was building a prototype robotic companion. The design had to face a number of challenges, such as consistency of appearance and behavior, perception and interaction, emotion expression, and learning. The robot is shown in Figure 2.

Fig. 2. Robot FLASH: front and back view

FLASH served as a platform for the integration of diverse technologies developed in LIREC and for their experimental verification in social environments (HRI experiments). The final design of FLASH was preceded


by an experimental assessment of the robot's appearance and of the level of his acceptance by humans. Another important design assumption made for FLASH is modularity, both of hardware and of software. The final size, appearance, and functions (referred to as competencies) of FLASH reflect a balance between human expectations, the results of psychosocial experiments, and the possibilities offered by modern mechanical/electronic/computer technology.

FLASH consists of a two-wheel balancing platform equipped with an expressive head and a pair of arms with hands. The employment of a balancing platform, functionally similar to that of a Segway, has resulted in natural, fine and smooth robot movements, well perceived by humans. Furthermore, in contrast to many multi-wheeled platforms, a two-wheel balancing platform with a high-positioned center of gravity can easily move on rough ground and overcome slopes. The remaining components of the robot, i.e. the head and the arms fixed to the torso, are basically tasked with expressing emotions. These capabilities increase the acceptance of the robot by humans and lay the foundations for the establishment of a long-term human-robot relationship.

The motion control system of FLASH includes a balancing platform controller, seven two-axis controllers of the arm joints, four six-axis controllers of the hand joints, and a dedicated EMYS head controller. The core of the sensor system is constituted by a Kinect depth sensor, a laser scanner, and an RGB camera. The power supply is based on a 16 Ah, 42 V battery pack and a collection of DC/DC converters, allowing the robot to function uninterruptedly for 2-4 hours (depending on the intensity of his movements, gestures, etc.). The robot's heart is the on-board, multi-core PC computer. An overview of the hardware structure of the robot can be seen in Figure 3.

3. Design

In this section we describe in detail FLASH's basic components: the mobile platform, the arms and hands, and the head. The robot's mechanical construction and low-level controllers are covered in detail in Ref. 6. Broad and up-to-date information on the robot is available on FLASH's website.7

3.1. Balancing platform

The platform's design is modular. Its chassis has been built of lightweight aluminum profiles. The platform moves on a pair of pneumatic wheels, 32 cm in diameter. Above the platform's drives, two rows of controllers have been installed (dedicated to the platform and the arms), as well as the measuring systems and the power supply. The bottom part of the platform hosts the power supply of the whole robot. The wheels are actuated by a pair of brushed DC Maxon motors equipped with encoders. The platform's mechanical setup is shown in Figure 4.

The platform's control is achieved by means of a navigation competency running on the on-board PC computer.


Fig. 3. FLASH: Hardware overview

The competency uses data incoming from the platform itself and from the laser scanner. The low-level controller, based on an MPC555 microprocessor, is responsible for balancing the platform, realizing prescribed velocities, and generating control signals. The signals generated in the controller are sent to the power stage, which can also monitor the current state of the drives. The deviation of the platform from the vertical position (the tilt angle) is measured by an inertial measurement unit. The tilt angle is obtained by means of data fusion realized by a Kalman filter and fed into the balancing algorithm. Balancing is achieved using a linear controller based on a linear approximation of the platform's dynamics. One of the most important program modules is the communication module, compliant with the ARCOS system.


Fig. 4. Balancing platform

Fig. 5. WANDA: arm (left) and hand (right)

ARCOS is installed on mobile platforms such as the Pioneer 3-DX, P3-AT, PeopleBot or PowerBot produced by Adept MobileRobots.8 This module supplies information on the platform's motion, the battery state, as well as the sensor data. It also allows the configuration of basic robot parameters, e.g. the maximum velocity, acceleration, displacement, etc. The communication module makes it possible to control FLASH's platform with the help of the Aria or Player systems.8,9
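To make the preceding description concrete, the balancing loop can be sketched as a standard linear state-feedback law. The state vector, symbols and design method below are our own illustrative assumptions, not the exact controller implemented on the MPC555:

$$
\dot{x} = A\,x + B\,u, \qquad x = \begin{bmatrix}\theta & \dot{\theta} & v\end{bmatrix}^{\top}, \qquad u = -K\,x,
$$

where $\theta$ is the tilt angle estimated by the Kalman filter from the IMU data, $v$ is the platform velocity, $(A, B)$ is the linearization of the pendulum-on-wheels dynamics around the upright equilibrium, and the gain matrix $K$ may be chosen, e.g., by pole placement or LQR.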

3.2. Torso and upper limbs

Robot FLASH has been equipped with two arms, each with seven degrees of freedom (DOF), and dexterous four-finger hands called WANDA (Wrut hANDs for gesticulAtion). The kinematic structure of the arms and hands is presented in Figure 5. The arm's structure resembles that of the human arm: it has three DOF in the shoulder, an elbow with a single DOF, one DOF between the wrist and the elbow, and a wrist with two DOF. The shoulder joint is driven using a belt transmission. The


arms consist of tendon-driven Robolink joints manufactured by IGUS, which can be linked together using dedicated carbon fiber or aluminum tubes to create complex kinematic chains. The wrist construction is based on a modified ball joint whose rotation along the arm axis has been blocked, resulting in a joint with two DOF. The wrist joint is tendon-driven as well. The tendons connecting each joint to its corresponding motor are placed in Bowden cables. The tension of each tendon can be adjusted by means of a special screw. The arm joints are actuated by Maxon brushed DC motors mounted within the robot's torso.

Each hand consists of a total of four fingers, three of which (index, middle, and ring) are identical, while the fourth is an opposable thumb. The index, middle, and ring fingers have four rotating joints, of which two are coupled and represented by a single DOF. The thumb consists of three rotating joints. Overall, each hand is equipped with twelve degrees of freedom. Apart from the absence of the little finger, the kinematic structure of FLASH's hand is similar to that of a human hand. The WANDA hands are tendon-driven, except for the straightening of the fingers, which is achieved by small watch springs. The hand joints are actuated by servomotors mounted on a frame located in the robot's forearms. The hand elements and the motor frame are 3D-printed using MJM technology.

A central role in the arm/hand motion control system is played by the on-board PC computer running the Urbi framework, which implements the gesticulation competency. The computer uses the RS485 bus and the Dynamixel protocol to communicate with low-level, distributed motion controllers, which are able to control the motor position, velocity or torque. The main part of each controller is a TI Stellaris microcontroller. FLASH utilizes two types of motion controllers: a two-axis controller adapted to the power stages driving the arm motors, and a small, lightweight six-axis controller with integrated power stages, suitable for use with the hand servos.

3.3. Head

FLASH's head EMYS (EMotive headY System), shown in Figure 6, has eleven degrees of freedom (three in the neck, two in the lower and upper discs, two in the eyes and four in the eyelids). EMYS can express six basic emotions: surprise, disgust, fear, anger, joy and sadness (displayed in Figure 7). In order to facilitate maintaining the robot's balance, the head has been made of lightweight aluminum parts. The external head components have been made using 3D rapid prototyping SLS technology. Facial expressions of emotions are achieved by means of a pair of movable discs installed in the lower and the upper part of the head: the former imitates jaw movements, while the latter imitates movements of the eyebrows and wrinkling of the forehead. The eyelids can be closed and opened, and the eyes can be thrust out by several centimeters. All these movements considerably enhance the expressiveness of emotions.

The neck should enable smooth, moderately slow and natural movements in order for EMYS to follow a human face with his eyes, look around, nod, etc.


Fig. 6. EMYS: kinematic structure

Fig. 7. EMYS: basic emotions

To satisfy this requirement, four high-quality Dynamixel servomotors made by Robotis have been employed to actuate the neck. Two of them realize the tilt motion, one turns the head, and one is responsible for nodding. Closing the eyelids and rotating the eyes is actuated by Hitec micro servomotors. The eyes are thrust out using very rapid, active slide potentiometers. The head control module is based on an HC9S12A64 microcontroller.

4. Control

As previously stated, the control system of FLASH complies with the three-layer control architecture paradigm (see Figure 8).10,11 Its lowest layer provides the necessary hardware abstraction and integrates low-level motion controllers, sensor systems, and algorithms implemented as external software. The middle layer is responsible for the functions of the robot and the implementation of his competencies. It defines the set of tasks the robot will be able to perform. The highest layer may


incorporate a dedicated decision system, a finite-state machine, or a comprehensive system simulating human mind functionalities.

When adding a new component to the system, the programmer should take care to integrate it into the existing architecture with regard to its three layers. Low-level modules of the control system should provide a minimal set of features that allows the full capabilities of the devices or software they interface with to be utilized. Due to the flexibility of the control architecture, modules can span more than one layer. This feature makes it possible to avoid the artificial partitioning of some components. This happens most often in the case of components that use external libraries providing both low-level drivers and competencies that belong in the middle layer. For example, the OpenNI software can be used to retrieve data from an RGB-D sensor (lowest layer) and to provide data on silhouettes detected by the sensor (middle layer).

The Gostai Urbi development platform is used as the main tool for handling the various software modules.12 It integrates and provides communication between the two lowest layers of the architecture. This allows dynamic loading of modules and provides full control over their operation. Urbi also delivers urbiscript, a scripting language for use in robotics, oriented towards parallel and event-based programming. It serves as a tool for the management and synchronization of the various components of the control system. The urbiscript syntax is based on well-known programming languages, and urbiscript itself is integrated with C++ and many other languages such as Java, MATLAB or Python. Of particular interest is the orchestration mechanism built into Urbi, which handles, among others, the scheduling and parallelization of tasks. Thanks to this feature, all the activities of the robot can be synchronized with each other, e.g. the movements of joints during head and arm gesticulation, mouth movement with speech, tracking of objects detected in the camera image, etc. The programmer decides how the various tasks should be scheduled through the use of the appropriate urbiscript instruction separators.
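As a hedged illustration of these separators, consider the urbiscript fragment below. The competency names (robot.head.LookAtUser, robot.arm.right.Wave, robot.audio.Say, robot.head.Nod) are invented for this sketch and need not match FLASH's real API:

```urbiscript
// ';' = run and wait, ',' = launch in the background, '&' = compose in parallel.
robot.head.LookAtUser() & robot.arm.right.Wave();  // head and arm move at the same time
robot.audio.Say("Hello, I am FLASH!");             // speech starts only after both movements finish
robot.head.Nod(),                                  // nodding is fired off in the background
echo("greeting scenario running");                 // ...and execution continues immediately
```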

The Urbi engine, operating in the main thread, runs the low-level modules synchronously, using the data and functions that they provide. Modules which consume a significant amount of CPU time can be run asynchronously in separate threads. The thread that the Urbi engine runs in will then not be blocked, and the engine will be able to effectively perform other tasks in the background. Urbi is thread-safe, since it provides synchronization and extensive access control for tasks that are run in separate threads. It is possible to control access to resources at the level of modules, of instances created within the system, or of single functions. An example could be the detection of objects within an image. Urbi can run the time-consuming image processing in a separate thread, leaving the other operations (e.g. trajectory generation) unaffected. Components that use the results of the image processing will wait for the module to finish its operation. The above mechanism meets the criteria of a soft real-time system. Hardware robot drivers whose operation is critical (e.g. the balancing controller) are implemented on microprocessors using a lightweight hard real-time operating system.

The competencies of the robot and his specific behaviors are programmed using


urbiscript by loading instructions into the Urbi engine from a client application. Urbiscript possesses an important feature that allows the programmer to assign tags to pieces of code. This means that some tasks can be grouped and managed together, which in turn can be used to implement task prioritization. This mechanism helps to avoid conflicts that may occur during access to the physical components of the robot. With it, the programmer can stop and resume fragments of instructions at any time and also implement resource preemption. The process of generating facial expressions during speech can be used as an example. Generating a smile utilizes all the effectors installed in the robot's head. The movement of each drive is tagged. Speech has a higher priority, and therefore when the robot speaks, the tag encompassing jaw (or mouth) trajectory generation is stopped for the purpose of generating speech-related mouth movements. When the robot stops speaking, the operation of the trajectory generator resumes. Moreover, gesture generation is parameterized (with respect to duration, intensity, mode, etc.) so that the final form of a gesture can be adjusted to the current situation (e.g. depending on the emotional state of the robot).
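A hedged urbiscript sketch of this tag-based preemption is given below. Tag.new, freeze and unfreeze are standard urbiscript facilities, while the GenerateSmile and Say competencies are invented names used only for illustration:

```urbiscript
var mouthTag  = Tag.new("mouth");     // groups the jaw/mouth trajectory generator
var speechTag = Tag.new("speech");    // higher-priority speech behavior

mouthTag: robot.head.GenerateSmile(),           // smile generation keeps running in the background

speechTag: {
  mouthTag.freeze;                              // suspend the jaw trajectory while speaking
  robot.audio.Say("Hello, nice to meet you!");  // speech drives the mouth on its own
  mouthTag.unfreeze;                            // resume the frozen smile trajectory
},
```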

The designed control system enables accessing the robot hardware and competencies in a unified manner, using a tree structure called robot. It makes using the API more convenient and helps to maintain the modularity of the software. The elements of the robot structure have been grouped based on their role. Example groups include audio, video, ml (machine learning), body (platform control), arm, hand, head, dialogue, network and appraisal (see Figure 8). Thanks to this modularity, various robot components can be easily interchanged; e.g. the EMYS head can work just as well when mounted on a platform other than FLASH. The software also allows for quick disconnection of missing or faulty robot components.

More details on the control system can be found in the dissertation.13
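For illustration, the tree can be addressed from urbiscript roughly as follows; the group names come from the list above, while the member functions and slots are hypothetical:

```urbiscript
// Addressing competencies through the unified "robot" tree (member names are assumptions).
robot.body.GoTo(1.0, 0.5) & robot.head.ExpressEmotion("joy");  // platform and head groups in parallel
robot.audio.Say("I am on my way.");
echo(robot.appraisal.mood);                                    // read the current mood from the appraisal group
```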

4.1. Lowest layer

The lowest layer of the control system consists of dynamically loaded modules called UObjects, which are used to bind hardware or software components, such as actuators and sensors on the one hand and voice synthesis or face recognition algorithms on the other. Components with a UObject interface are supported by the urbiscript programming language, which is a part of the Urbi software.

Communication with the hardware level is achieved by means of two modules able to communicate through serial ports. One of them (UDynamixel) transfers data using the Dynamixel protocol, which enables controlling the actuators driving the arms, hands, and head. This module provides all the necessary functions, like velocity and position control with torque limiting. The other module (UAria) enables controlling the mobile platform via the ARIA protocol. It gives FLASH full compatibility with Adept MobileRobots products and offers support for popular laser scanners.

The next group of modules provides image processing capabilities on RGB and RGB-D data.


Fig. 8. FLASH: 3 layer control architecture

The picture from a camera can be accessed and processed by modules implementing OpenCV library functions. They provide image capture functions and camera settings (UCamera), basic image processing such as blurring, thresholding and morphological operations (UImageTool), algorithms for object detection, e.g. of human faces or certain body parts using a Haar classifier (UObjectDetector), color detection in HSV space (UColorDetector), and many more.
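A hedged example of chaining these modules from urbiscript is shown below; the exact slot and function names of UCamera and UObjectDetector are assumptions standing in for the real interfaces:

```urbiscript
// Grab a frame and look for a face (member names are illustrative, not the real module API).
var img  = robot.video.camera.image;                 // current frame delivered by UCamera
var face = robot.video.objectDetector.detect(img);   // Haar-based face detection (UObjectDetector)
if (face != nil)
  robot.head.LookAt(face);                           // hypothetical competency keeping eye contact
```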

RGB-D data from the Kinect sensor can be extracted with the OpenNI library (the UKinectOpenNI2 module) or the Kinect SDK (the UKinect module). The former makes it possible to measure the distance to certain elements of the image, detect a human silhouette, and provide information on the position of particular parts of the human body. It also implements very simple gesture recognition algorithms. The module based on the Kinect SDK provides the same functions as UKinectOpenNI2 and expands on them with 2D and 3D face tracking, microphone array support, speech recognition, and detection of voice direction.

The auditory modules are based on the SDL library and the Microsoft Speech Platform. The UPlayer module utilizes SDL to play pre-recorded .wav files. It enables the robot to play back different sounds and sentences recorded by external text-to-speech software. The URecog module uses the Microsoft Speech Platform to recognize speech recorded using an external microphone. The last module, USpeech, utilizes MSP for real-time speech synthesis.

Connection with the Internet is provided by UBrowser and UMail modules,


based on the POCO library. The first module implements the functions of a web browser and an RSS reader. It provides a wide variety of functions needed for extracting particular information from the Internet, like weather forecasts or news. UMail serves as an e-mail client with the ability to check and read mail and to send messages with various types of attachments (e.g. an image from the robot's camera or a voice message recorded by the Kinect).

Information gathered by the robot (from websites, e-mails or via the auditory modules) can be affectively assessed to extract its emotional meaning. All the functions necessary to achieve this goal are implemented by the UAnew, USentiWordNet and UWordNet modules. The first one utilizes the ANEW (Affective Norms for English Words) project, which is a database containing emotional ratings for a large number of English words.14,15 It can be used for evaluating a word or a set of words in terms of the feelings they are associated with. USentiWordNet is based on a project similar to ANEW, SentiWordNet.16 It is a lexical resource for opinion mining, assigning ratings to groups of semantic synonyms (synsets). UWordNet plays a different role than the two previous modules. It is an interface to WordNet, a large lexical database of English words in which nouns, verbs, adjectives and adverbs are grouped into synsets, each expressing a distinct concept. When a word cannot be assessed by the previous modules, UWordNet is used as a synonym dictionary to find the basic form of the word.
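The fallback between these three modules can be sketched in urbiscript as follows; the rate and baseForm functions are invented names standing in for the real module interfaces:

```urbiscript
// Hedged sketch of the word-appraisal chain (module member names are assumptions).
function appraise(word)
{
  var score = robot.appraisal.anew.rate(word);        // try the ANEW ratings first
  if (score == nil)
    score = robot.appraisal.sentiWordNet.rate(word);  // then the SentiWordNet synset ratings
  if (score == nil)
    // last resort: reduce the word to its base form with WordNet and rate that
    score = robot.appraisal.anew.rate(robot.appraisal.wordNet.baseForm(word));
  return score;
};
```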

4.2. Middle layer

The middle layer consists of all the functions necessary for the operation of the robot's competencies, as well as a system for managing those functions. It is important that all tasks are carried out synchronously. A gesture of the hand accompanied by a rotation of the platform should be performed in the appropriate time intervals. This is of particular importance when it comes to the generation of speech. The speech synthesizers used by the robot generate tags that indicate the mouth shape that should accompany the spoken sounds. The position of the robot's jaw must keep up with the uttered words. The purpose of the middle layer is to execute the commands of the highest layer and to implement adequate behaviors for the robot. A properly configured competency manager decides which robot components should be combined to achieve specific tasks/behaviors. This set of competencies should be parameterized in such a way as to make it fit any situation and configuration. Using the example of a speech generator, the competency parameters should include not only the text to be uttered, but also the length of the utterance, the volume and the tone of voice (which would change based on the emotional state of the robot). Such functions located in the middle layer can then be used by the software simulating the human mind. The richer the repertoire of available skills and expressive behaviors, the more interesting the scenarios that can be implemented.
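As an illustration of such parameterization, a call to a hypothetical speech competency might look as follows in urbiscript; the competency name, its parameter dictionary and the mood slot are all assumptions:

```urbiscript
// Emotion-dependent parameters passed along with the text to be uttered (all names illustrative).
var mood = robot.appraisal.mood;                          // assumed slot holding the current PAD mood
robot.audio.Say("You have nine new emails.",
                ["volume" => 0.5 + 0.4 * mood.pleasure,   // speak louder in a good mood
                 "rate"   => 1.0 + 0.3 * mood.arousal]);  // and faster when aroused
```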

Competencies can be implemented in a variety of ways. Typically, they are created as scripts written in the urbiscript language. These scripts can either rely solely


on urbiscript instructions or utilize functions delivered by UObjects. As mentioned before, in the case of external software, UObjects serve mainly as wrappers, and therefore the competencies delivered by a particular library are implemented as module functions accessible at the urbiscript level.

4.3. Highest layer

The highest layer of the control architecture hosts the robot's decision system. During short-term HRI, the role of this system is often played by finite-state machines. They are sufficient for creating interesting interaction scenarios, but after a couple of minutes people notice that the robot is repetitive and that previous events do not affect his behavior. During short-term studies, when the robot's behavior cannot be obtained in autonomous operation, the decision system can be assisted by a human operator. This approach is called the Wizard of Oz (WoZ). In order for FLASH to fulfill the requirements set for social robots (e.g. according to the definition given by Fong et al.), he should be equipped with some sort of affective mind.4 This mind should consist of a rational component, enabling the robot to plan his actions and achieve his goals, and an emotional component, which would simulate his emotions and produce reactive responses. The role of an emotional component in HRI is crucial. It influences the control system, changing the perceptions and goals based on simulated emotions. Emotions also provide reliable and non-repetitive reactions, and increase the credibility of a social robot's behaviors.

FLASH's control system is well suited to working with all the above-mentioned decision systems. Wizard of Oz studies can be performed with the help of the UJoystick module, which utilizes the SDL library to handle joysticks and pads used to remotely control the robot. Another helpful tool for this kind of operation is the Gostai Lab software, which allows the creation of remote control panels with access to the robot's sensory data, e.g. an image from the camera or a human silhouette detected by the Kinect. The creation of finite-state machines is also supported by Urbi software, namely Gostai Studio. It is a graphical user interface capable of defining the behavior of a robot as a set of nodes and transitions between them. Finite-state machines created in Gostai Studio served as FLASH's decision system in the experiments/trials described in the following section.
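Outside of Gostai Studio, a minimal FSM-style decision loop can also be written directly in urbiscript, as in the hedged sketch below; the perception flag and competency names are invented for illustration:

```urbiscript
// A tiny two-state decision loop; not Gostai Studio output, just an illustration.
var state = "idle";
while (true)
{
  if (state == "idle" && robot.video.userPresent)            // hypothetical perception flag
  {
    robot.audio.Say("Hello! Nice to see you.");
    state = "interacting";
  }
  else if (state == "interacting" && !robot.video.userPresent)
  {
    robot.head.ExpressEmotion("sadness");                    // hypothetical head competency
    state = "idle";
  };
  sleep(200ms);                                              // yield to the rest of the Urbi engine
},                                                           // ',' keeps the loop running in the background
```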

Robot behaviors programmed as finite-state machines can be enriched with an emotional component simulated in external software. Such a component cannot rival affective mind architectures, but it provides a wide variety of reliable and less repetitive behaviors. FLASH is adapted to working with two emotional systems: Wasabi and a dynamic PAD-based model of emotion. Both systems are based on dimensional theories of emotion, in which affective states are not only represented as discrete labels (like fear or anger), but as points or areas in a space equipped with a coordinate system. Emotions which can be directly represented in this space are called primary (basic). Some theories also introduce secondary emotions, which are a mixture of two or more basic ones. The most popular theory in this group is PAD, proposed by Mehrabian


and Russell, whose name is an abbreviation of three orthogonal coordinate axes:

Pleasure, Arousal, Dominance.17

The Wasabi emotional system was proposed by Becker-Asano.18 It is based on the PAD theory, but is extended with new components such as internal dynamics in emotion-mood-boredom coordinates, an implementation of secondary emotions (hope, relief and confirmed fears), and a direct way of interacting with the software engine (the assessment of events directly influences the emotion coordinate). In every simulation step, following the calculation of the internal dynamics, the state of the robot is mapped into PAD space. After each iteration, the system checks whether any of the primary emotions has occurred, which is possible when the current position in PAD space is close enough to the points/areas corresponding to one of the emotions.

The implementation of the dynamic PAD-based model of emotion is centered around the assumption that our emotional state is similar to the response of a dynamic object. Experience suggests that our emotions expire with time, so this dynamic object should be stable. The inputs of the emotional system are called attractors. Following these intuitions, the module implements the emotional system as an inertial first-, second- or third-order element with programmable time constants and gain. All input vectors are linearly transformed into the three-dimensional PAD space. The output of the module is the robot's mood, defined as the integral of all emotional impulses over time.
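One way to write down this description for the first-order case, using our own notation rather than anything taken from the actual implementation, is

$$
T\,\dot{\mathbf{e}}(t) + \mathbf{e}(t) = K\,\mathbf{a}(t), \qquad \mathbf{m}(t) = \int_{0}^{t} \mathbf{e}(\tau)\,\mathrm{d}\tau,
$$

where $\mathbf{a}(t)$ is the sum of the currently active attractor inputs mapped linearly into PAD space, $\mathbf{e}(t)$ is the response of the stable inertial element with time constant $T$ and gain $K$, and $\mathbf{m}(t)$ is the resulting mood. Higher-order variants replace the first-order element with second- or third-order dynamics.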

Perhaps the most advanced available affective mind is FAtiMA (FearNot! Affective Mind Architecture), based on the Ortony, Clore and Collins appraisal theory of emotions.19 This software was successfully integrated with FLASH's control system during an experiment regarding the migration of agents.20 By means of FAtiMA's action planner, FLASH autonomously achieved goals defined in the highest layer of his control system, using his competencies in an unknown environment.

5. HRI experiments

Several HRI experiments were performed, aimed at verifying the robot's appearance and behavior. Below we confine ourselves to two of them, one with EMYS and the other with FLASH. The first study was meant to examine how children recognize the robot's emotions. The second study was conducted to determine the factors which could impede interaction with FLASH. An emotion simulation trial is also described, which verifies the robot's emotional control system.

5.1. HRI with EMYS

The joint experiment of the Wrocław University of Technology and a group of psychologists from the University of Bamberg on the interaction of children with the robotic head EMYS was designed both to examine how the robot's emotional expressions affect the interaction and to assess whether the children are able to correctly decode the intended expressed emotions. It involved about 50 schoolchildren aged 8-11 years.


Fig. 9. HRI with EMYS

EMYS was programmed to operate autonomously and realize two scenarios. Each participant went through both of them. The first one relied on encouraging children to repeat facial expressions made by EMYS. In the second scenario the robot expressed various emotions and asked the children to show him a toy whose color corresponded to the expression. Four boxes with toys of different colors were available to the child. A box with green toys corresponded to joy, red to anger, blue to sadness, and a yellow toy was to be shown when the robot's expression didn't fit any of the three previous groups. EMYS was able to recognize the color of the toy and react accordingly, i.e. to praise the child if he/she chose the right toy or to inform the child that the choice was wrong. After each session the children watched the recorded interaction from the first game and were asked which emotions EMYS showed. Thus, the experimental procedure consisted of a mixture of affect description assessment ("repeating expressions") and affect matching assessment ("toy showing"). The duration of the interaction experiment with a single child was about 5-10 minutes. All sessions were recorded using two video cameras set at different angles. After the interaction the participants were interviewed and answered questions including personal information, how they perceived EMYS, and how they liked the interaction. From a psychological viewpoint, the study on children interacting with the robotic head EMYS served several different purposes. Firstly, the study investigated the emotional expressiveness of EMYS. The robotic head is able to show six different emotions (anger, sadness, surprise, joy, disgust, fear). The experiment examined whether those emotions could be recognized by schoolchildren and whether the recognition rates differ from the rates obtained when humans express them. Furthermore, the association of certain variables, like the engagement or personality of the children, with the recognition rates was investigated. Because of its design, EMYS' capability to display emotions is different compared to humans (as described by Ekman) and in certain areas limited (e.g. EMYS is not capable of raising mouth corners or wrinkling his nose).21 To diminish biases due to the method, we used two different


tasks in the study, which represent the two main methods in research on emotion recognition (affect description assessment and affect matching assessment).22 Secondly, the study examined the engagement of the children in the interaction with the autonomously behaving EMYS and the variables that impact engagement (which influences building a long-term relationship with EMYS). Variables possibly having an impact on engagement are, for example, the age, sex and personality of the child subject, the perceived personality of EMYS, the perceived emotionality of EMYS, prior experience with robots, and more. Additionally, it was investigated whether the recognition of EMYS' emotional expressions was relevant for the engagement of the child subjects. Children playing the toy game are shown in Figure 9.

Analysis of the study results confirms that the children coped well with emotion recognition. The "joy" and "disgust" expressions caused the most problems. Generally, the robot aroused positive emotions and the children felt safe with him. They recognized that his appearance, as well as his behaviors, were human-like. The experiment participants had no trouble pointing out example fields where the robot could be applied. Most often they indicated activities that they would like to be relieved of, such as doing homework, cleaning or walking the dog. Most of the children declared that they would like to meet the robot again. A detailed description and analysis of the results of this HRI experiment are available in Ref. 23.

5.2. HRI with FLASH

The study with FLASH was carried out with the help of market research specialists from Millward Brown SMG/KRC to ensure the highest quality of the gathered data. It was aimed at discovering the key features that affect human-robot interaction. This included both the physical appearance of the robot (uncovered mechanical and electronic elements, LEDs, wires, etc.) and the emotionality of the utterances (conveyed using facial expressions and hand gestures). To test the influence of these factors, three versions of the experiment were carried out. In the first one, the robot was emotional and covered with casings which hid its mechanical/electronic components. In the second part FLASH was still emotional, but the casings were removed. During the third stage the robot was covered, but devoid of emotional reactions. In total, a group of 143 people took part in the study over a period of 5 days. Two main tools were used to obtain data from this experiment. The first was a state-of-the-art mobile eye-tracker which allowed recording videos of the participants' gaze patterns. There are social studies providing information about the proper gaze distribution in human-human interaction.24,25 Every major deviation from these patterns in the gathered data was analyzed. To the authors' knowledge, there has been only one experiment using an eye-tracking device to investigate robot features.26 The respondents of the study in question did not interact with a physical robot but were only given a photo of a robot's face, and they were also using a stationary eye-tracking device. Secondly, the effects of the robot's emotionality (or lack thereof) were analyzed using questionnaires and in-depth interviews.


Fig. 10. Eye-tracking device (left) and a sample focus map (right)

These contained personal questions as well as questions regarding the course of the experiment and the robot himself. The study was based on a methodology refined and previously tested during a small-scale pilot study. The eye-tracking device and a sample result image (a gaze focus map) are shown in Figure 10.

The participants were chosen randomly from amongst people with no previous knowledge of FLASH, fitted with the eye-tracking device and then led into the room with the robot. Participants were then left alone with the robot, who provided them with the information needed to complete the task. The experiment was divided into two parts. For the first minute, the interaction was minimal: FLASH was introducing himself to the person, i.e. shaking hands, saying a few words about where and why he was created, followed by a short compliment on the test participant's clothing. After that, the 2-3 minute long main part (depending on the test participant's performance) commenced. The robot asked the person to take toys of different colors from a container and show them to him. He then made emotional comments (such as: "I hate this toy! Put it back", "They never let me play with this toy", "This toy is my favorite!", etc.) which were or were not enhanced by facial expressions and gesticulation (depending on the experiment version). After being shown six toys the robot turned itself off and the participant was taken to another room, where the questionnaires/interviews were carried out. The results of the study showed that interaction with FLASH follows the same general patterns as interaction with another human being. The main points of gaze focus were the head and the upper parts of the torso. The main deviations from this rule happened during the first phase of the interaction. Participants tended to look all over the robot's body, which is attributed to the fact that they needed some time to adjust to the robot, as well as to the low intensity of the interaction. Removing the robot's casings results in a change of perception: participants tend to look more at the bottom parts of the robot, which contain various controllers and LEDs. The neck of the robot as well as his forearms (which are unnaturally bulky) divert the gaze of participants regardless of the experiment version. The questionnaires suggest that the degree of emotionality that


the robot presents should depend on the application and the nature of the conveyed message. An emotional robot is viewed as more friendly and kind, which is useful for establishing and maintaining relations. A robot devoid of emotional reactions is more suited to conveying information and persuasion, as it is treated as more calm, reasonable and honest. Removing the robot's casings proved to increase social distance: FLASH was viewed as a mechanical contraption instead of as a partner, and was even deemed slightly dangerous by some of the participants. More details about this experiment can be found in Refs. 27 and 28.

5.3. Emotion simulation trial

The abundance of information and the universal access to the Internet have contributed to what can be described as an addiction to mass media. Effortless access to data has become the basis of a complex new system of social communication, influencing our intellect, emotions, and social behavior. This dependence on information could potentially be used to stimulate human-robot interaction. By definition, a social robot should be capable of generating behaviors (including methods of communicating information) that conform to his user's expectations, while at the same time staying in accordance with social norms. Therefore, he should communicate information with regard to its emotional character. This could have paramount implications for the process of forming a relation.

In order to evaluate the cooperation of the modules tasked with acquiring data from the Internet and the emotional appraisal, a trial scenario has been devised. EMYS' dynamic emotional system is affected by the aforementioned components.29 The connections between the various components are shown in Figure 11. The scenario is based on possible everyday activities that a human may perform together with his/her personal robot.

The dynamics of emotion in the presented example have been described as a first-order inertial element. According to the experiment scenario, a set of attractors (emotional system inputs) has been created. These attractors, along with the corresponding emotions, are: the user's appearance (happiness), the user's departure (sadness), an accurate assessment of a news item's nature (happiness), an inaccurate assessment of a news item's nature (anger), boredom, triggered every 1 second (boredom), ANEW/SentiWordNet appraisal (depending on the evaluation), and weather forecast appraisal (depending on the weather conditions). A sketch of how such attractors might be wired is given below.
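A hedged urbiscript sketch of wiring two of these attractors into the emotional system is shown below; the event name and the attract function are invented for illustration, and only the every (1s) construct mirrors the 1-second boredom trigger literally:

```urbiscript
// Illustrative attractor wiring (event and function names are assumptions).
at (robot.video.userAppeared?)                        // hypothetical event raised by the vision modules
  robot.appraisal.emotion.attract("happiness", 1.0),  // the user's appearance pulls the state toward happiness
every (1s)
  robot.appraisal.emotion.attract("boredom", 0.1),    // the boredom attractor fires every second
```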

The scenario began with the human subject sitting at the table, face to face with the robot. After detecting the user, FLASH greeted him/her and began tracking his/her face. The human then asked the robot to check his/her emails. Finding new messages positively stimulated the robot, and their lack made him sad. Next, the user asked for some news from the BBC website, which was then read to him/her in a way that reflected the emotional appraisal calculated for that particular piece of information. If the absolute value of the appraisal was greater than 0.1, the robot added verbal means of expression (e.g. "Oh no!", "Great!", "Excellent!", etc.).


Fig. 11. Structure of the affective information communication system

After every piece of news, FLASH asked whether his evaluation had been correct. In the first two cases the person contradicted the robot's opinion, and in the next two agreed with him. Towards the end of the scenario the robot was asked to check the weather and, after he answered, the user left. The PAD values obtained during the experiment are shown in Figure 12. The complete dialog list for the experiment is presented below:

User: Hello!
Robot: (t1, happy, low) Hi!
User: Check my e-mail please.
Robot: (t2, happy, mid) Great! You have 9 new emails.
User: Check some news please.
Robot: I found some news on BBC News channel.
Robot: (t3, content, low) Moscow increases its rhetoric against Ukraine's... (P=0.01)
Robot: It's a good news. Isn't it?
User: No, it's not!
Robot: (t4, angry, mid) Would you like to hear the next piece of news?
User: Yes, please!
Robot: (t5, content, high) Italy's Prime Minister Matteo Renzi sets out... (P=0.017)
Robot: It's a good news. Isn't it?
User: No, it's not!
Robot: (t6, angry, mid) Would you like to hear the next piece of news?
User: Yes, please!
Robot: (t7, sad, high) Oh no!


Robot: Barcelona pay 11.2M £ to Spain's authorities after being charged... (P=-0.12)
Robot: It's a sad news. Isn't it?
User: Yes, it is.
Robot: (t8, happy, mid) Would you like to hear the next piece of news?
User: Yes, please.
Robot: (t9, content, mid) A new department at the Vatican is created by... (P=0.05)
Robot: It's good news. Isn't it?
User: Yes, it is.
Robot: (t10, happy, mid) Would you like to hear the next piece of news?
User: No, thank you.
User: Check the weather please.
Robot: (t11, sad, mid) The weather is fair. The temperature is 5 degrees Celsius.
User: Bye.
Robot: (t12, sad, high) - user has left, the robot starts getting bored
Robot: (t13, bored, high) - the robot goes to sleep

Fig. 12. PAD values during the experiment

These trials were not intended to evaluate the emotionality of the robot's behavior from the psychological point of view. The main goal was to assess the proper operation of the emotional module and the validity of its integration with the existing decision system, as well as its usefulness in generating original, non-schematic interactions. Long-term experiments utilizing the emotional assessment described above are currently underway.

6. Conclusions

With reference to the robot FLASH, we have characterized the main design and control challenges of a robotic companion. The main mechanical components of the robot, which give him the means to express emotional states and communicate, have been presented. The proposed design allows for proper interaction with humans by giving the robot human-like communication modalities, while at the same time avoiding the uncanny valley problem. Two experiments have been conducted to verify the design assumptions, one with only the EMYS head and the other with the whole robot. The results show that both children and adults feel comfortable interacting with the robot and can easily recognize the emotions he expresses.

An implementation of a three-layer control architecture has been described, with the highest layer containing the robot's decision system, the middle layer realizing the robot's competencies, and the lowest layer providing the necessary hardware abstraction and access to external software. The two lower layers of FLASH's control system, based on Urbi


software, are open and modular: they can be easily extended with new components. On the lowest layer, new sensor drivers and algorithm implementations can be added as dynamically loaded modules, which can be combined into new robot competencies (functions and behaviors) on the middle layer. The highest layer can constitute a remote control, a finite-state machine (enriched with emotional modules) or an artificial affective mind, depending on the application. A trial was conducted to test the operation of the proposed emotion simulation system.

It has been argued that contemporary robotics technology is prepared to face these challenges with good prospects. Urgent design questions concern increasing the manual competencies of the robotic companion to make him more assistive to humans. Advancement of the control of the companion calls for the implementation of an affective mind, enabling the robot to interact with humans for a longer time. It may be expected that these problems will dominate social robotics in the near future.

Acknowledgements

This research was supported in part by Wroclaw University of Technology under a statutory grant (K. Tchoń, M. Dziergwa, P. Kaczmarek) and in part by grant no. 2012/05/N/ST7/01098 awarded by the National Science Centre of Poland (J. Kędzierski).

References

1. V. Kumar, 50 Years of Robotics, IEEE Robotics & Automation Magazine 17(3) (2010) p. 8.
2. M. Mori, K. F. MacDorman, N. Kageki, The Uncanny Valley, IEEE Robotics & Automation Magazine 19(2) (2012) pp. 98-100.
3. C. Breazeal, Designing Sociable Robots, Intelligent Robots and Autonomous Agents series (A Bradford Book, London, 2004), pp. 1-15.
4. T. Fong, I. Nourbakhsh, K. Dautenhahn, A Survey of Socially Interactive Robots, Robotics and Autonomous Systems 42(3-4) (2003) pp. 143-166.
5. LIREC: Project website. http://www.lirec.eu (2014).
6. J. Kędzierski, M. Janiak, Budowa robota społecznego FLASH (in Polish), Prace Naukowe - Politechnika Warszawska, Elektronika Vol. 2 (2012) pp. 681-694.
7. FLASH: Homepage. http://flash.ict.pwr.wroc.pl (2014).
8. Adept MobileRobots: Homepage. http://www.mobilerobots.com (2014).
9. Player/Stage: Project website. http://playerstage.sourceforge.net/ (2014).
10. R. Aylett et al., Updated integration architecture, LIREC Deliverable 9.4 (2010).
11. E. Gat, On Three-Layer Architectures, in Artificial Intelligence and Mobile Robots, American Association for Artificial Intelligence (MIT Press, Cambridge, Massachusetts, 1998), pp. 195-210.
12. Urbi: Project website. http://www.urbiforge.org (2014).
13. J. Kędzierski, System sterowania robota społecznego (in Polish), PhD thesis (Wrocław University of Technology, Wrocław, 2014).
14. M. Bradley, P. Lang, Affective Norms for English Words: Instruction Manual and Affective Ratings, Technical Report C-1 (The Center for Research in Psychophysiology, University of Florida, 1999).


15. A. Warriner, V. Kuperman, M. Brysbaert, Norms of valence, arousal, and dominance for 13,915 English lemmas, Behavior Research Methods 45(4) (2013) pp. 1191-1207.
16. S. Baccianella, A. Esuli, F. Sebastiani, SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining, in Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (European Language Resources Association, Valletta, 2010), pp. 2200-2204.
17. A. Mehrabian, J. Russell, An Approach to Environmental Psychology (The MIT Press, Cambridge, Massachusetts, 1974), p. 31.
18. C. Becker-Asano, WASABI: Affect Simulation for Agents with Believable Interactivity, PhD thesis (University of Bielefeld, Bielefeld, 2008).
19. J. Dias, A. Paiva, Feeling and Reasoning: A Computational Model for Emotional Characters, Progress in Artificial Intelligence 3808 (2005) pp. 127-140.
20. K. L. Koay, D. S. Syrdal, K. Dautenhahn, K. Arent, Ł. Małek, B. Kreczmer, Companion Migration - Initial Participants' Feedback from a Video-Based Prototyping Study, in Mixed Reality and Human-Robot Interaction, Intelligent Systems, Control and Automation: Science and Engineering Vol. 1010 (Springer Netherlands, 2011), pp. 133-151.
21. P. Ekman, W. Friesen, P. Ellsworth, What emotion categories or dimensions can observers judge from facial behavior?, in Emotion in the Human Face (Cambridge University Press, Cambridge, 1982) pp. 39-55.
22. A. Gross, B. Ballif, Children's understanding of emotion from facial expressions and situations: A review, Developmental Review 11(4) (1991) pp. 368-398.
23. J. Kędzierski et al., EMYS - Emotive Head of a Social Robot, International Journal of Social Robotics 5(2) (2013) pp. 237-249.
24. O. Chelnokova, B. Laeng, Three-dimensional information in face recognition: An eye-tracking study, Journal of Vision 11(13) (2011) pp. 1-15.
25. R. Bannerman et al., Orienting to threat: faster localization of fearful facial expressions and body postures revealed by saccadic eye movements, Proceedings of the Royal Society B 276 (2009) pp. 1635-1641.
26. E. Park, K. J. Kim, A. P. del Pobil, Facial Recognition Patterns of Children and Adults Looking at Robotic Faces, International Journal of Advanced Robotic Systems 9(28) (2012) pp. 1-8.
27. M. Dziergwa et al., Study of a Social Robot's Appearance Using Interviews and a Mobile Eye-Tracking Device, in Social Robotics, Lecture Notes in Computer Science Vol. 8239 (Springer International Publishing, 2013), pp. 170-179.
28. D. Frydecka, M. Zagdańska, Postrzeganie robota społecznego FLASH (in Polish), Report SPR No 1/2013 (Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, Wrocław, 2013).
29. J. Kędzierski et al., Afektywny system przekazu informacji dla robota społecznego (in Polish), in Postępy robotyki Vol. 1 (Oficyna Wydawnicza Politechniki Warszawskiej, Warszawa, 2014), pp. 197-212.


Jan Kędzierski received his M.Sc. and Ph.D. degrees from the Wrocław University of Technology (WRUT) in 2008 and 2014, respectively. Since 2008 he has been a Research Assistant at the Chair of Cybernetics and Robotics. He was a member of the WRUT team participating in the LIREC project. Currently, he is the leader of a project called "Social robot control for long-term human-robot interaction". This project is a part of a comprehensive research programme aimed at designing control algorithms of a social robot that would allow it to establish and maintain long-term human-robot interaction. His research also focuses on whether people can become emotionally attached to a robot. Currently, he is developing a uniform social robot control system oriented towards human-robot interactions. He is the author of 16 technical publications, proceedings, editorials and books.

Paweł Kaczmarek received his M.Sc. degree in Control Engineering and Robotics from the Wrocław University of Technology, Poland, in 2012. He is currently a Ph.D. student at the Chair of Cybernetics and Robotics at the same university. His research is focused on the perception and minds of social robots. He is particularly interested in enriching social robots' control systems with an emotional component and more complex action planners. His other professional interests are connected with low-level robot controllers and RGB-D perception.

Michał Dziergwa received his M.Sc. degree in Control Engineering and Robotics from the Wrocław University of Technology, Poland, in 2013. He is currently a Ph.D. student at the Chair of Cybernetics and Robotics at the same university. His interests include social robotics and HRI research. His work encompasses generating natural, emotional, communicative gestures for robots with regard to their social aspects.

Krzysztof Tchoń received his M.Sc., Ph.D. and D.Sc. degrees from the Wrocław University of Technology, Poland, in 1973, 1976 and 1986, respectively. From 1976 till 2014 he was appointed at the Institute of Engineering Cybernetics, working on mathematical system theory, geometric control, and robotics. In 1982-1983 he received a British Council postdoctoral scholarship and spent one year at the Control Theory Centre, University of Warwick, UK. From 1987 till 1996 he was Associate Professor, and until 2014 he was in charge of the Unit of Fundamental Cybernetics and Robotics at the Institute. In the years 1992-1993 he visited Ecole des Mines de Paris and Twente University of Technology. In 1996 he received the title of Professor of technical sciences, and in 1998 he became full Professor at the Wroclaw University of Technology. In 2006-2008 he was a recipient of the Professor Subsidy of the Polish Science Foundation. Since 2014 he has been in charge of the Chair of Cybernetics and Robotics


at the Electronics Faculty of the Wroclaw University of Technology. Krzysztof Tchoń is the author of nearly 200 technical publications, proceedings, editorials and books. His research interests include control systems, mathematical robotics, and social robotics. From 2008 till 2012 he was the leader of the Polish team involved in the European project LIREC, devoted to the technology of robotic companions. Krzysztof Tchoń has been in charge of the Scientific Committee of the Polish National Robotics Conferences (13 editions since 1985). He has supervised 13 Ph.D. graduates in robotics. He is an IEEE member (since 1994), a member of the Committee of Automation and Robotics of the Polish Academy of Sciences, a member of the Robot Companions for Citizens (RCC) Initiative, and a member of the European Network on Social Intelligence (SINTELNET).

