ICMI 2011
13th International Conference on Multimodal Interaction
Program Handbook
Alicante, Spain, 14-18 November 2011
Program at a Glance

Monday, November 14
08:30-09:00 Registration
09:00-10:15 Opening remarks + Keynote 1
10:15-10:45 Coffee break
10:45-12:25 Oral session 1 (4 papers)
12:25-14:00 Lunch
14:00-15:40 Special session 1 (5 papers)
15:40-16:00 Coffee break
16:00-18:00 Poster session (26 posters)
20:00 Welcome reception (18:45 bus departure from hotel)
Tuesday, November 15
09:15-10:15 Keynote 2
10:15-10:45 Coffee break
10:45-12:25 Oral session 2 (4 papers)
12:25-14:00 Lunch
14:00-15:40 Oral session 3 (4 papers)
15:40-16:00 Coffee break
16:00-17:00 Demonstrations (7), Exhibits & Doctoral Spotlight posters (10)
17:00-18:30 Special session 2 (4 papers)
20:00 Conference Banquet (19:40 meeting point at the hotel)
Wednesday, November 16
09:30-10:30 Keynote 3
10:30-11:00 Coffee break
11:00-12:40 Oral session 4 (4 papers)
12:40-14:00 Lunch
14:00-15:40 Oral session 5 (4 papers)
15:40-16:30 Town hall meeting
Thursday, November 17
▪ Workshop: Inferring cognitive and emotional states from multimodal behavioural measures
▪ Workshop: Affective Interaction in Natural Environments (AFFINE)
Friday, November 18
▪ Workshop: Multimodal Corpora for Machine Learning: Taking Stock and Road mapping the Future
3 / 38
General Information
E-Mail: [email protected]
Phones:
- Conference venue: +34 965 20 50 00
- Workshops venue: +34 965 14 53 33 / +34 965 14 59 52
- Technical secretary: +34 610 48 89 78
Wireless:
- Conference venue: free WiFi Internet access is available in public areas.
- Workshops venue: Connection instructions are on the Memory Stick.
Name badges: Conference attendees are required to wear their badges
while in the conference area and during social events in order to facilitate
identification of registered participants.
Lunch: Lunch is served in “Salón Postiguet” (see layout on page 33) and is
included in the conference fee. Please wear your badge.
Speakers and presentations: All oral presentations have been given a
20-minute slot (including equipment connection, introduction of the presenter,
and talk), with 5 additional minutes for questions.
All presenters need to verify that their presentation equipment works
properly by contacting their session chair during the coffee break preceding
the session.
Instructions for poster and demo presenters: Authors will have a panel
1.1 meters wide by 1.5 meters tall available for their poster.
The recommended poster size is A0, portrait size: width 841 mm (33.1
inches), height 1189 mm (46.8 inches).
4 / 38
Chairs’ Welcome

Welcome to Alicante and to the International Conference on Multimodal Interaction, ICMI 2011. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. It is the fusion of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, which for the last two years were held as a combined event under the name ICMI-MLMI. Starting with this thirteenth edition, the combined conference uses the new, shorter name.
This year we received the largest number of submissions in the history of ICMI/MLMI: 127 papers, 4 Special Session proposals, 10 Demonstration papers and 6 Workshop proposals. Of the 4 Special Session proposals, 2 were selected, including 7 papers. Out of the 120 regular papers submitted, 47 were accepted for oral or poster presentation, bringing the conference acceptance rate to 39%. The rate was higher for the Demonstration papers, of which 7 were accepted. In addition, the program includes three invited Keynote talks. Finally, from the 6 post-conference workshop proposals, 4 were selected, each centered on a specific hot topic of multimodal interaction.
The review process was organized using the PCS submission and review system, which ICMI has used in the past. To improve the quality of the accepted papers, the review process this year included, for the first time, a rebuttal step. The process was assisted by 15 Area Chairs (ACs), who helped the Program Chairs define the Program Committee. Papers were allocated to ACs in their areas of expertise according to the submitters' indications, and then checked for conflicts. The Program Chairs distributed the papers to members of the program committee and volunteer reviewers for comments. Once the reviews were submitted, the ACs provided meta-reviews for all papers, which were sent to the authors for rebuttal. After considering the authors' arguments, the papers' scores were collected and tabulated. All reviews and papers were then checked again by the Program Chairs, and papers with highly varying scores received an additional round of reviews. All papers and their reviews were finally discussed by the Program Chairs in a two-day remote meeting to decide on the list of accepted submissions.
The program was formed by grouping papers into this year's main topics of interest. Following the trend of previous ICMI-MLMI events and many other academic meetings, and to minimize paper consumption, we decided to distribute the conference proceedings on USB flash drives. This year we have selected 5 top-scoring papers as candidates for two awards: Outstanding Student Paper and Outstanding Paper. An anonymous committee appointed by the Program Chairs will select the two awarded papers. The nominated papers are marked with a special symbol in the conference program. The final award decisions will be announced at the conference banquet.
As in previous events, ICMI-2011 has been organized with the support of ACM and SIGCHI. In addition, despite the financial crisis, many sponsors have given
support to the event. A significant amount of funds has been provided by the Spanish "Ministerio de Ciencia e Innovación" (MICINN) and by several academic organizations of the Valencia Community: the "Universitat Politècnica de València" (UPV), the "Universidad de Alicante" (UA), the "Departamento de Sistemas Informáticos y Computación" (DSIC-UPV), the "Escola Tècnica Superior d'Enginyeria Informàtica" (ETSINF-UPV), the "Departamento de Lenguajes y Sistemas Informáticos" (LSI-UA) and the "Institut Universitari de Investigació Informàtica" (IUII-UA). The US National Science Foundation (NSF) has also generously provided travel and housing support for several students, helping to offset pressure on academic travel budgets. Two academic projects have also contributed to the conference organization: the Spanish "Multimodal Interaction in Pattern Recognition and Computer Vision" (MIPRCV) and the European "Social games for conflIct REsolution based on natural iNteraction" (SIREN). In addition, we thank the European network of excellence on "Pattern Analysis, Statistical Modeling, and Computational Learning" (PASCAL 2) for partially supporting travel expenses of keynote speakers and students, and the "Asociación Española de Reconocimiento de Formas y Análisis de Imágenes" (AERFAI) for supporting ICMI-2011 registration expenses for its members. Even in these difficult times, important companies affirmed their support for the multimodal interaction and interface research community by providing ICMI with financial support. These organizations deserve our warmest gratitude: Telefonica I+D, Microsoft Research and AT&T. Without the generous support of all these sponsors, this meeting would not have been possible.
The chairs would like to thank our colleagues on the conference organization committee for their tireless effort in bringing this meeting together: Xavier Anguera, Jorge Calera, Fernando De-la-Torre, Li Deng, Antonio J. Gallego, Ida Hui, José M. Iñesta, Alejandro Jaimes, Helen Meng, Nuria Oliver, Jose Oncina, Kazuhiro Otsuka, Stefanie Tellex, Alejandro H. Toselli, Jordi Vitrià and, in particular, our Local Organization Chair, Luisa Micó, for her great work. We are also indebted to the Area Chairs: Tilman Becker, Trevor Darrell, Karrie Karahalios, Antonio Krueger, Anton Nijholt, Jean-Marc Odobez, Yiannis Patras, Catherine Pelachaud, Filiberto Pla, Alex Potamianos, Francis Quek, Adriana Tapus, Alessandro Vinciarelli, Jie Yang and Massimo Zancanaro. Finally, our thanks go to all program committee members and volunteer reviewers who contributed their effort to the review process and made it possible to develop a high-quality technical program.
Last but not least, we would like to thank you: the authors and attendees. Thank you for your work and your time. We hope you find a meeting filled with new ideas, old colleagues and future collaborators!
Hervé Bourlard, General Chair, IDIAP, Switzerland
Thomas S. Huang, General Chair, University of Illinois, USA
Enrique Vidal, General Chair, Univ. Pol. València, Spain
Daniel Gatica-Perez, Program Chair, IDIAP, Switzerland
Louis-Philippe Morency, Program Chair, Univ. South. California, USA
Nicu Sebe, Program Chair, University of Trento, Italy
6 / 38
Social Events
Welcome Reception: Monday, November 14, 20:00h
Buses to the reception will depart from Hotel Meliá at 18:45h.
On the evening of Monday 14th the official reception will take
place at the Santa Barbara fortress. We will have a guided tour
through the castle for about 45 minutes. The fortress lies on top of
mount Benacantil (166m high) and offers splendid views over the
harbour, the city and the coastline. Built by the Muslims in the
9th century, it was redesigned around 1580.
Conference Banquet: Tuesday, November 15, 20:00h
19:40 - Meeting point at the hotel
The conference dinner will be held at the restaurant "Aldebaran" (Club de
regatas de Alicante) close to the marina of Alicante (see map on
page 32).
7 / 38
Organizing Committee

■ General Chairs
• Hervé Bourlard (Idiap Research Institute, Switzerland)
• Thomas S. Huang (University of Illinois, USA)
• Enrique Vidal (Universitat Politècnica de València, Spain)

■ Program Chairs
• Daniel Gatica-Perez (Idiap Research Institute, Switzerland)
• Louis-Philippe Morency (Univ. Southern California, USA)
• Nicu Sebe (University of Trento, Italy)

■ Demo Chairs
• Kazuhiro Otsuka (NTT Communication Science Labs, Japan)
• Jordi Vitrià (UB/CVC, Barcelona, Spain)

■ Workshop Chairs
• Fernando de la Torre (Carnegie Mellon University, USA)
• Alejandro Jaimes (Yahoo! Research, Barcelona, Spain)
■ Publication Chair: Jose Oncina (University of Alicante, Spain)
■ Student & Doctoral Spotlight Chairs
• Li Deng (Microsoft Research and Univ. of Washington)
• Stefanie Tellex (MIT CSAIL, USA)
■ Sponsorship Chair: Nuria Oliver (Telefónica I+D, Spain)
■ Publicity Chair: Helen Mei-Ling Meng (CUHK, Hong Kong)
■ Local Organization Chair: Luisa Micó (University of Alicante, Spain)
■ Treasurer: Jorge Calera (University of Alicante, Spain)
■ Local organizers
• Xavier Anguera (Telefónica I+D, Spain)
• Antonio Javier Gallego Sánchez (University of Alicante, Spain)
• Ida Hui (CUHK, Hong Kong)
• Jose Manuel Iñesta (University of Alicante, Spain)
• Alejandro Toselli (Universitat Politècnica de València, Spain)

■ Area Chairs
• Tilman Becker (DFKI)
• Trevor Darrell (UC Berkeley / ICSI)
• Karrie Karahalios (UIUC)
• Antonio Krueger (DFKI / Saarland University)
8 / 38
• Anton Nijholt (University of Twente)
• Jean-Marc Odobez (Idiap Research Institute)
• Yiannis Patras (Queen Mary University of London)
• Catherine Pelachaud (CNRS / TELECOM ParisTech)
• Filiberto Pla (University Jaume I)
• Alex Potamianos (Technical University of Crete)
• Francis Quek (Virginia Tech)
• Adriana Tapus (ENSTA-ParisTech)
• Alessandro Vinciarelli (University of Glasgow)
• Jie Yang (CMU)
• Massimo Zancanaro (FBK)
■ Advisory Board
• Samy Bengio (Google)
• Hervé Bourlard (IDIAP, Switzerland)
• Jean Carletta (University of Edinburgh)
• James L. Crowley (INRIA Grenoble Rhone-Alpes, France)
• Trevor Darrell (UCB/ICSI, USA)
• Sadaoki Furui (Tokyo Institute of Technology, Japan)
• Yuri Ivanov (MERL, USA)
• Kenji Mase (University of Nagoya, Japan)
• Sharon Oviatt (Incaa Designs, USA)
• Catherine Pelachaud (CNRS, France)
• Fabio Pianesi (FBK, Trento, Italy)
• Andrei Popescu-Belis (IDIAP, Switzerland)
• Alex Potamianos (TUC, Greece)
• Steve Renals (University of Edinburgh)
• Rainer Stiefelhagen (KIT & Fraunhofer IITB, Germany)
• Matthew Turk (UC Santa Barbara, USA)
• Wolfgang Wahlster (DFKI, Germany)
• Jie Yang (Carnegie Mellon University, USA)
■ Local Committee and volunteers
• Jehane Beldjelti (University of Alicante, Spain)
• José Francisco Bernabeu Briones (University of Alicante, Spain)
• Olimpia Mas Martínez (University of Alicante, Spain)
• Javier Navarrete Sanchez (University of Alicante, Spain)
• Tomás Pérez García (University of Alicante, Spain)
• Carlos Pérez Sancho (University of Alicante, Spain)
• Antonio Pertusa Ibáñez (University of Alicante, Spain)
• Pedro José Ponce de León Amador (University of Alicante, Spain)
• Juan Ramon Rico (University of Alicante, Spain)
• Javier Sober Mira (University of Alicante, Spain)
9 / 38
Program Committee

Amir Aly, Lisa Anthony, Audrey Girouard, Sileye Ba, Joan-Isaac Biel, Bruno Lepri, Harry Bunt, Carlos Busso, Nick Campbell, Jorge Cardoso, Ginevra Castellano, Joyce Chai, Gokul Chittaranjan, Mario Christoudias, Henriette Cramer, Giuseppe Di Fabbrizio, Trinh Minh Tri Do, Bruno Dumas, Patrick Ehlen, Elisa Ricci, Sibylle Enz, Gerard Bailly, Hatice Gunes, Joakim Gustafson, Hayrettin Gürkök, Norihiro Hagita, Nick Hawes, Hayley Hung, Dirk Heylen, Jeffrey Ho, Irene Kotsia, Dinesh Babu Jayagopi, Johannes Schoening, Michael Johnston, Kristiina Jokinen, Kostas Karpouzis, Taemie Kim, Simon King, Sander Koelstra, Denis Lalanne, Luis Leiva, Katrin Lohan, Fabien Lotte, Saturnino Luz, Markus Löckelt, Mathew Magimai Doss, Mannes Poel, Marc Cavazza, David Masip, Maxine Eskenazi, Chris McCool, David McGookin, Peter W. McOwan, Helen Meng, Michael Rohs, Yukiko Nakano, Femke Nijboer, Yoshimasa Ohmoto, Daniel Olguin Olguin, Oliver Brdiczka, Nuria Oliver, Tim Paek, Hari Parthasarathi, Patrick Olivier, Fabio Pianesi, Andrei Popescu-Belis, Ronald Poppe, Benjamin Poppinga, Gerasimos Potamianos, Thierry Pun, Qiong Liu, Antoine Raux, Norbert Reithinger, Steve Renals, Laurel D. Riek, Amir Sadeghipour, Albert Ali Salah, Maha Salem, Dairazalia Sanchez Cortes, Yohichi Sato, Gianluca Schiavo, Bjoern Schuller, Anirudh Sharma, Candace Sidner, Paris Smaragdis, Erin Solovey, Stephen Dunne, Stephen Fairclough, Rainer Stiefelhagen, Yasuyuki Sumi, Dag Sverre Syrdal, Kazuya Takeda, Adriana Tapus, Mariet Theune, Alejandro Héctor Toselli, Matthew Turk, Jan van Erp, Giovanna Varni, Radu-Daniel Vatavu, Gualtiero Volpe, Astrid von der Pütten, Michael Walters, Julie Rico Williamson, Wolfgang Hürst, Ming-Hsuan Yang, Chen Yu, Thorsten Zander, Liang-Guo Zhang
10 / 38
Invited Talk 1
Still Looking at People
David Forsyth
University of Illinois at Urbana-Champaign
Abstract:
There is a great need for programs that can describe what people are doing from video. Among other applications, such programs could be used to search for scenes in consumer video; in surveillance applications; to support the design of buildings and of public places; to screen humans for diseases; and to build enhanced human computer interfaces.
Building such programs is difficult, because it is hard to identify and track people in video sequences, because we have no canonical vocabulary for describing what people are doing, and because phenomena such as aspect and individual variation greatly affect the appearance of what people are doing. Recent work in kinematic tracking has produced methods that can report the kinematic configuration of the body automatically, and with moderate accuracy. While it is possible to build methods that use kinematic tracks to reason about the 3D configuration of the body, and from this the activities, such methods remain relatively inaccurate. However, they have the attraction that one can build models that are generative, and that allow activities to be assembled from a set of distinct spatial and temporal components. The models themselves are learned from labelled motion capture data and are assembled in a way that makes it possible to learn very complex finite automata without estimating large numbers of parameters. The advantage of such a model is that one can search videos for examples of activities specified with a simple query language, without possessing any example of the activity sought. In this case, aspect is dealt with by explicit 3D reasoning.
11 / 38
An alternative approach is to model the whole problem as k-way classification into a set of known classes. This approach is much more accurate at present, but has the difficulty that we don't really know what the classes should be in general. This is because we do not know how to describe activities. Recent work in object recognition on describing unfamiliar objects suggests that activities might be described in terms of attributes - properties that many activities share, that are easy to spot, and that are individually somewhat discriminative. Such a description would allow a useful response to an unfamiliar activity. I will sketch current progress on this agenda.
Bio:
David Forsyth is a full professor at U. Illinois at Urbana-Champaign, to which he moved from U.C. Berkeley, where he was also a full professor. He has published over 130 papers on computer vision, computer graphics and machine learning. He has served as program co-chair for IEEE Computer Vision and Pattern Recognition in 2000, general co-chair for CVPR 2006, program co-chair for the European Conference on Computer Vision 2008, and is a regular member of the program committee of all major international conferences on computer vision. He has served four years on the SIGGRAPH program committee, and is a regular reviewer for that conference. He has received best paper awards at the International Conference on Computer Vision and at the European Conference on Computer Vision. He received an IEEE technical achievement award for 2005 for his research and became an IEEE fellow in 2009. His recent textbook, "Computer Vision: A Modern Approach" (joint with J. Ponce and published by Prentice Hall) is now widely adopted as a course text (adoptions include MIT, U. Wisconsin-Madison, UIUC, Georgia Tech and U.C. Berkeley).
12 / 38
Invited Talk 2
Learning in and from humans: Recalibration makes (the) perfect sense
Marc O. Ernst
Bielefeld University
Abstract:
The brain receives information about the environment from all the
sensory modalities, including vision, touch and audition. To efficiently
interact with the environment, this information must eventually
converge in the brain in order to form a reliable and accurate
multimodal percept. This process is often complicated by the existence
of noise at every level of signal processing, which makes the sensory
information derived from the world imprecise and potentially
inaccurate. There are several ways in which the nervous system may
minimize the negative consequences of noise in terms of precision and
accuracy. Two key strategies are to combine redundant sensory estimates
and to utilize acquired knowledge about the statistical regularities of
different sensory signals. In this talk, I elaborate on how these strategies
may be used by the nervous system in order to obtain the best possible
estimates from noisy sensory signals, so that we can efficiently
interact with the environment. In particular, I will focus on the learning
aspects and how our perceptions are tuned to the statistical regularities
of an ever-changing environment.
13 / 38
Bio:
Marc Ernst is chair of the Cognitive
Neuroscience Department and member of
the CITEC cluster of Excellence at
Bielefeld University, Germany. He
received his Ph.D. from the Max Planck
Institute for Biological Cybernetics for investigations on human
visuomotor behavior. For this work he was awarded the Attempto-Prize
(2000) from the University of Tübingen and the Otto-Hahn-Medaille
(2001) from the Max Planck Society. After his Ph.D., he spent 2 years as a
research associate at the University of California, Berkeley, USA working
with Prof. Martin Banks on psychophysical experiments and
computational models investigating the integration of visual-haptic
information. In 2001, he returned to the Max Planck Institute and
became principal investigator of the Sensorimotor Lab in the Department
of Prof. Heinrich Bülthoff. In 2007 Marc Ernst then became leader of the
Max Planck Research Group on Human Multisensory Perception and
Action. In 2011 he then moved to Bielefeld.
The scientific interests of Marc Ernst are in human multisensory
perception, sensorimotor integration and human-machine interaction. Marc
Ernst has published over 50 papers and conference proceedings in high
profile journals including Nature, Science and Nature Neuroscience. He
was involved in several international collaborative grants, including
several European Projects. Furthermore, Marc Ernst was coordinating the
FP6 IST European Project CyberWalk, which developed an
omnidirectional treadmill in order to enable natural free walking in
Virtual Environments.
14 / 38
Invited Talk 3
The Sounds of Social Life:
Observing Humans in their Natural Habitat
Matthias R. Mehl
University of Arizona
Abstract:
This talk presents a novel methodology called the Electronically
Activated Recorder or EAR. The EAR is a portable audio recorder that
periodically records snippets of ambient sounds from participants’
momentary environments. In tracking moment-to-moment ambient
sounds, it yields acoustic logs of people’s days as they naturally unfold.
In sampling only a fraction of the time, it protects participants’ privacy.
As a naturalistic observation method, it provides an observer’s account
of daily life and is optimized for the assessment of audible aspects of
social environments, behaviors, and interactions. The talk discusses the
EAR method conceptually and methodologically and identifies three ways
in which it can enrich research in the social and behavioral sciences.
Specifically, it can (1) provide ecological, behavioral criteria that are
independent of self-report, (2) calibrate psychological effects against
frequencies of real-world behavior, and (3) help with the assessment of
subtle and habitual behaviors that evade self-report.
15 / 38
Bio:
Matthias Mehl is Associate Professor of
Psychology and an Adjunct Associate
Professor of Communication at the
University of Arizona. He received his
doctorate in social and personality
psychology from the University of Texas at
Austin. Over the last decade, he developed
the Electronically Activated Recorder (EAR)
as a novel methodology for the unobtrusive naturalistic observation of
daily life. He has given workshops and published numerous articles on
novel methods for studying daily life. Dr. Mehl is a founding member and
the current Vice President of the Society for Ambulatory Assessment and
co-editor of the Handbook of Research Methods for Studying Daily Life.
His research has been published in various high impact journals (incl.
Science, Psychological Science, Journal of Personality and Social
Psychology, Psychological Assessment, and Health Psychology) and has
been funded, among other sources, by the American Cancer Society and
the NIH (NCI, NCCAM).
16 / 38
Technical Program
Day 1: Monday, November 14
8:30 – 9:00 Registration
9:00 – 9:15 Opening Remarks
Enrique Vidal, ICMI 2011 General Co-Chair
9:15 – 10:15 Keynote
Still Looking at People
David Forsyth, University of Illinois at Urbana-Champaign
Session chair: Nicu Sebe
10:15 – 10:45 Coffee Break
10:45 – 12:25 Oral Session 1: Affect
Session chair: Andruid Kerne
Mining Multimodal Sequential Patterns: A Case Study on Affect Detection
Hector P. Martinez, Georgios N. Yannakakis

Crowdsourced Data Collection of Facial Responses
Daniel McDuff, Rana el Kaliouby, Rosalind Picard

A Systematic Discussion of Fusion Techniques for Multi-Modal Affect Recognition Tasks
Florian Lingenfelser, Johannes Wagner, Elisabeth Andre

Adaptive Facial Expression Recognition using Inter-Modal Top-Down Context
Ravi Kiran Sarvadevabhatla, Mitchel Benovoy, Victor Ng-Thow-Hing, Sam Musallam
12:25 – 14:00 Lunch Break
14:00 – 15:40 Special session 1: Multimodal Interaction: Brain-Computer Interfacing
Session chairs: Anton Nijholt and Robert Jacob

Brain-Computer Interaction: Can Multimodality Help?
Anton Nijholt, Brendan Allison, Rob Jacob
17 / 38
All sessions will take place in the Mediterranean room (see page 33)
Modality Switching and Performance in a Thought and Speech Controlled Computer Game
Hayrettin Gurkok, Gido Hakvoort, Mannes Poel

An Approach towards Human-Robot-Human Interaction using a Hybrid Brain-Computer Interface
Nils Hachmeister, Hannes Riechmann, Helge Ritter, Andrea Finke

Towards Multimodal Error Responses: A Passive BCI for the Detection of Auditory Errors
Thorsten Oliver Zander, David Marius Klippel, Reinhold Scherer

Pseudo-Haptics: From the Theoretical Foundations to Practical System Design Guidelines
Andreas Pusch, Anatole Lecuyer
15:40 – 16:00 Coffee Break
16:00 – 18:00 Poster session
6th Senses for Everyone! The Value of Multimodal Feedback in Handheld Navigation Aids
Martin Pielot, Benjamin Poppinga, Wilko Heuten, Susanne Boll

Adding Haptic Feedback to Touch Screens at the Right Time
Yi Yang, Yuru Zhang, Zhu Hou, Betty Lemaire-Semail

Robust User Context Analysis for Multimodal Interfaces
Prasenjit Dey, Muthuselvam Selvaraj, Bowon Lee

The Picture says it all! Multimodal Interactions and Interaction Metadata
Ramadevi Vennelakanti, Prasenjit Dey, Ankit Shekhawat, Phanindra Pisupati

Mudra: A Unified Multimodal Interaction Framework
Lode Hoste, Bruno Dumas, Beat Signer

Humans and Smart Environments: A Novel Multimodal Interaction Approach
Stefano Carrino, Alexandre Peclat, Elena Mugellini, Omar Abou Khaled, Rolf Ingold
18 / 38
Exploiting Petri-Net Structure for Activity Classification and User Instruction within an Industrial Setting
Simon Worgan, Ardhendu Behera, Anthony Cohn, David Hogg

Jerktilts: Using Accelerometers for Eight-Choice Selection on Mobile Devices
Mathias Baglioni, Eric Lecolinet, Yves Guiard

On Multimodal Interactive Machine Translation Using Speech Recognition
Vicent Alabau, Luis Rodriguez-Ruiz, Alberto Sanchis, Pascual Martinez-Gomez, Francisco Casacuberta

Multimodal Segmentation of Object Manipulation Sequences with Product Models
Alexandra Barchunova, Robert Haschke, Mathias Franzius, Helge Ritter

Could a Dialog Save Your Life? - Analyzing the Effects of Speech Interaction Strategies while Driving
Akos Vetek, Saija Lemmela

Decisions about Turns in Multiparty Conversation: From Perception to Action
Dan Bohus, Eric Horvitz

Evaluation of User Gestures in Multi-touch Interaction: a Case Study in Pair-programming
Alessandro Soro, Samuel Aldo Iacolina, Riccardo Scateni, Selene Uras

Towards Multimodal Sentiment Analysis: Harvesting Opinions from the Web
Louis-Philippe Morency, Rada Mihalcea, Payal Doshi

The Impact of Unwanted Multimodal Notifications
David Warnock, Marilyn McGee-Lennon, Stephen Brewster

Freeform Pen-Input as Evidence of Cognitive Load and Expertise
Natalie Ruiz, Ronnie Taib, Fang Chen
19 / 38
Acquisition of Dynamically Revealed Multimodal Targets
Teemu Ahmaniemi

Emotional Responses to Thermal Stimuli
Katri Salminen, Veikko Surakka, Jukka Raisamo, Jani Lylykangas, Kalle Makela, Johannes Pystynen, Roope Raisamo, Teemu Ahmaniemi

An Active Learning Scenario for Interactive Machine Translation
Jesus Gonzalez-Rubio, Daniel Ortiz-Martinez, Francisco Casacuberta

Move, and I Will Tell You Who You Are: Detecting Deceptive Roles in Low-Quality Data
Nimrod Raiman, Hayley Hung, Gwenn Englebienne

Multimodal Person Independent Recognition of Workload Related Biosignal Patterns
Jan-Philip Jarvis, Felix Putze, Tanja Schultz

Study of Different Interactive Editing Operations in an Assisted Transcription System
Veronica Romero, Alejandro H. Toselli, Enrique Vidal

Dynamic Perception-Production Oscillation Model in Human-Machine Communication
Igor Jauk, Ipke Wachsmuth, Petra Wagner

The Effect of Clothing on Thermal Feedback Perception
Martin Halvey, Yolanda Vazquez-Alvarez, Graham Wilson, Stephen Brewster

Comparing Multi-Touch Interaction Techniques for Manipulation of an Abstract Parameter Space
Sashikanth Damaraju, Andruid Kerne

A General Framework for Incremental Processing of Multimodal Inputs
Afshin Ameri E., Batu Akan, Baran Curuklu, Lars Asplund
20:00 Welcome reception: Santa Barbara's Castle
18:45 - Bus departure from hotel
20 / 38
Day 2: Tuesday, November 15

All sessions will take place in the Mediterranean room, except for the Demonstrations and Exhibits that will be held in Terra Lucis room (see page 33).
9:15 – 10:15 Keynote
Learning in and from humans: Recalibration makes (the) perfect sense
Marc Ernst, Bielefeld University
Session chair: Louis-Philippe Morency
10:15 – 10:45 Coffee Break
10:45 – 12:25 Oral Session 2: Social Interaction
Session chair: Fabio Pianesi
Detecting F-formations as Dominant Sets
Hayley Hung, Ben Krose

Toward Multimodal Situated Analysis
Chreston Miller, Francis Quek

Finding Audio-Visual Events in an Informal Social Gathering
Xavier Alameda-Pineda, Radu Horaud, Vasil Khalidov, Florence Forbes

Please, Tell Me About Yourself: Automatic Personality Assessment Using Short Self-Presentations
Ligia Maria Batrinca, Nadia Mana, Bruno Lepri, Fabio Pianesi, Nicu Sebe
12:25 – 14:00 Lunch Break
14:00 – 15:40 Oral Session 3: Gesture and Touch
Session chair: Jordi Vitrià
Gesture-Aware Remote Controls: Guidelines and Interaction Techniques
Gilles Bailly, Dong-Bach Vo, Eric Lecolinet, Yves Guiard
The Effect of Sampling Rate on the Performance of Template-based Gesture Recognizers
Radu-Daniel Vatavu
21 / 38
American Sign Language Recognition with the Kinect
Zahoor Zafrulla, Helene Brashear, Thad Starner, Harley Hamilton, Peter Presti

Perceived Physicality in Audio-Enhanced Force Input
Chi-Hsia Lai, Matti Niinimaki, Koray Tahiroglu, Johan Kildal, Teemu Ahmaniemi
15:40 – 16:00 Coffee Break
16:00 – 17:00 Demonstration Session and Doctoral Spotlight Posters

▪ Demonstrations: (Terra Lucis room, see map on page 33)
BeeParking: An Ambient Display to Induce Cooperative Parking Behavior
Silvia Gabrielli, Jesús Muñoz, Cristina Costa

Speech Interaction in a Multimodal Tool for Handwritten Text Transcription
Maria Jose Castro-Bleda, Salvador España-Boquera, David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar, Francisco Zamora-Martínez

Digital Pen in Mammography Patient Forms
Daniel Sonntag, Marcus Liwicki, Markus Weber

MozArt: A Multimodal Interface for Conceptual 3D Modeling
Anirudh Sharma, Sriganesh Madhvanath, Ankit Shekhawat, Mark Billinghurst

Query Refinement Suggestion in Multimodal Image Retrieval with Relevance Feedback
Luis A. Leiva, Mauricio Villegas, Roberto Paredes

A Multimodal Music Transcription Prototype
Tomás Pérez, Jose Manuel Iñesta, Pedro Ponce de León, Antonio Pertusa

Socially-assisted Multi-view Video Viewer
Kenji Mase, Kosuke Niwa, Takafumi Marutani
22 / 38
▪ Doctoral Spotlight Posters: (Mediterranean room, page 33)
Crowdsourced Data Collection of Facial Responses
Daniel McDuff, Rana el Kaliouby, Rosalind Picard

Toward Multimodal Situated Analysis
Chreston Miller, Francis Quek

American Sign Language Recognition with the Kinect
Zahoor Zafrulla, Helene Brashear, Thad Starner, Harley Hamilton, Peter Presti

Finding Audio-Visual Events in an Informal Social Gathering
Xavier Alameda-Pineda, Radu Horaud, Vasil Khalidov, Florence Forbes

Living With A Robot Companion. Empirical Study On The Interaction With An Artificial Health Advisor
Astrid M. von der Putten, Nicole C. Kramer, Sabrina C. Eimler

Perceived Physicality in Audio-Enhanced Force Input
Chi-Hsia Lai, Matti Niinimaki, Koray Tahiroglu, Johan Kildal, Teemu Ahmaniemi

Virtual Worlds and Active Learning for Human Detection
David Vazquez, Antonio M. Lopez, Francisco J. Marin, Daniel Ponsa

Multimodal Mobile Interactions: Usability Studies in Real World Settings
Julie Rico Williamson, Andrew Crossan, Stephen Brewster

Mining Multimodal Sequential Patterns: A Case Study on Affect Detection
Hector P. Martinez, Georgios N. Yannakakis

Please, Tell Me About Yourself: Automatic Personality Assessment Using Short Self-Presentations
Ligia Maria Batrinca, Nadia Mana, Bruno Lepri, Fabio Pianesi, Nicu Sebe
▪ Exhibits: (Terra Lucis room, see map on page 33)
RadSpeech: A Semantic Speech Dialogue System for the Radiologist
Daniel Sonntag, Christian Reuschling, Christian Husodo Schulz, Michael Sintek, Daniel Porta, Maha Ahmed Baker, Jochen Setz, Milos Kalab, Luis Galárraga, Matthias Hammon, Alexander Cavallaro, Sascha Seifert
SpeeG: Voice and Gesture-based Text Input
Lode Hoste, Bruno Dumas, Beat Signer
17:00 – 18:30 Special Session 2: Long-term Socially Perceptive and Interactive Robot Companions: Challenges and Future Perspectives
Session chairs: Ginevra Castellano and Bogdan Raducanu
Long-term Socially Perceptive and Interactive Robot Companions: Challenges and Future Perspectives
Ruth Aylett, Ginevra Castellano, Bogdan Raducanu, Ana Paiva, Marc Hanheide
Living With A Robot Companion. Empirical Study On The Interaction With An Artificial Health Advisor
Astrid M. von der Pütten, Nicole C. Krämer, Sabrina C. Eimler
Child-Robot Interaction in the Wild: Advice for the Aspiring Experimenter
Raquel Ros Espinoza, Marco Nalin, Rachel Wood, Paul Baxter, Rosemarijn Looije, Yiannis Demiris, Tony Belpaeme, Alessio Giusti, Clara Pozzi
Characterization of Coordination in an Imitation Task: Human Evaluation and Automatically Computable Cues
Emilie Delaherche, Mohamed Chetouani
20:00 Conference Banquet: Aldebarán restaurant (Club de Regatas de Alicante)
19:40 – Meeting point at the hotel
Day 3: Wednesday, November 16
All sessions will take place in the Mediterranean room (see page 33)
9:30 – 10:30 Keynote
The Sounds of Social Life: Observing Humans in their Natural Habitat
Matthias Mehl, University of Arizona
Session chair: Daniel Gatica-Perez
10:30 – 11:00 Coffee Break
11:00 – 12:40 Oral Session 4: Ubiquitous Interaction
Session chair: Kenji Mase
Smartphone Usage in the Wild: a Large-Scale Analysis of Applications and Context
Trinh Minh Tri Do, Jan Blom, Daniel Gatica-Perez
Multimodal Mobile Interactions: Usability Studies in Real World Settings
Julie Rico Williamson, Andrew Crossan, Stephen Brewster
Service-Oriented Autonomic Multimodal Interaction in a Pervasive Environment
Pierre-Alain Avouac, Philippe Lalanda, Laurence Nigay
Visual Based Order Picking - Evaluation of Graphical User-Interfaces for Order Picking Using Head-Mounted Displays
Hannes Baumann, Thad Starner, Hendrik Iben, Anna Lewandowski, Patrick Zschaler
12:40 – 14:00 Lunch Break
14:00 – 15:40 Oral Session 5: Virtual and Real Worlds
Session chair: Dan Bohus
Modeling and Interpretation of Multithreaded and Multimodal Dialogue
Gregor Mehlmann, Birgit Endrass, Elisabeth André
Virtual Worlds and Active Learning for Human Detection
David Vázquez, Antonio M. López, Francisco J. Marín, Daniel Ponsa
Making a Virtual Conversational Agent be Aware of the Addressee of Users' Utterances in Multi-user Conversation from Nonverbal Information
Hung-Hsuan Huang, Naoya Baba, Yukiko Nakano
Temporal Binding of Multimodal Controls for Dynamic Map Displays: A Systems Approach
Ellen Haas, Krishna Pillalamarri, Christopher Stachowiak, MaryAnne Fields
15:40 – 16:30 Town Hall Meeting
Workshops
Workshops will take place at Sede Ciudad de Alicante, Av. Ramón y Cajal 4, 03001 Alicante, Spain (see map on page 32)
Thursday, November 17
▶ Workshop: Inferring cognitive and emotional states from multimodal behavioural measures
9:30 – 10:30 Invited speakers
Using Passive Brain-Computer Interfaces for Cognitive Workload Assessment during Learning: A Novel Methodological Approach
Prof. Dr. Peter Gerjets (Knowledge Media Research Center, Tübingen)
Dr. Thorsten Zander (Max Planck Institute, Tübingen)
10:30 – 11:00 Coffee Break
11:00 – 13:00 Paper presentations (20 minutes each, including Q&A)
Inferring Cognitive States from Multimodal Measures in Information Science
Jacek Gwizdka and Michael J. Cole
Presenter: Jacek Gwizdka
Gesture Dynamics: Features Sensitive to Task Difficulty and Correlated with Physiological Sensors
Lisa Anthony, Patrick Carrington, Peng Chu, Christopher Kidd, Jianwei Lai, Andrew Sears
Presenter: Lisa Anthony
Classification of Cognitive Load from Task Performance & Multichannel Physiology during Affective Changes
Sazzad Hussain, Siyuan Chen, Rafael A. Calvo and Fang Chen
Presenter: Natalie Ruiz
Cognitive Load Measurement with Pen Orientation and Pressure
Kun Yu, Julien Epps and Fang Chen
Presenter: Fang Chen
13:00 – 14:30 Lunch Break
14:30 – 15:45 Invited papers (15 minutes each, including Q&A)
Facial Response to Video Content in Depression
Gordon McIntyre, Roland Goecke, Michael Breakspear and Gordon Parker
Fusing Utterance-Level Classifiers for Robust Intoxication Recognition from Speech
Felix Weninger and Björn Schuller
15:45 – 16:15 Afternoon break combined with informal discussion: Grand research challenges for multimodal cognitive and emotional inference
16:15 – 16:45 Work-in-Progress Papers (3 papers, 10 minutes each)
An Immersive View Control Method Using EMG Signals of Users' Eyelid Movements
Masaki Omata, Satoshi Kagoshima, Atsumi Imamiya, Xiaoyang Mao
Presenter: Masaki Omata
Handling Noise in Audio-Visual Emotion Recognition
Ntombikayise Banda and Peter Robinson
Presenter: Ntombikayise Banda
Assessment of the Emotional State by Psycho-physiological and Implicit Measurement
Michael A. Bedek, Ben Cowley, Paul Seitlinger, Martino Fantato, Simone Kopeinik, Dietrich Albert and Niklas Ravaja
Presenter: Michael Bedek
16:45 – 17:00 Concluding Remarks
▶ Workshop: Affective Interaction in Natural Environments (AFFINE)
9:00 – 9:30 Introduction, overview of the workshop, state of the art
9:30 – 10:30 Affect sensing (30 minutes each)
A GA-based Similarity Measurement and Feature Selection Method for Spontaneous Facial Expression Recognition
Shangfei Wang, Shan He
Affect Sensing in Multi-threaded Improvisational Dialogue
Li Zhang
10:30 – 11:00 Coffee Break
11:00 – 13:00 Body and hand gestures / mimicry (30 minutes each)
Full body expressivity analysis in 3D Natural Interaction: a comparative study
George Caridakis, Kostas Karpouzis
Analysis of Dominance in Small Music Ensemble
Donald Glowinski, Maurizio Mancini, Nadezhda Rukavishnikova
Emotion Communication via Copying Behaviour: A Case Study with the Greta Embodied Agent
Ginevra Castellano, Maurizio Mancini, Christopher Peters
Modeling Hidden Dynamics of Multimodal Mimicry Cues for Human Affect Recognition
Xiaofan Sun, Anton Nijholt, Maja Pantic
Friday, November 18
▶ Workshop: Multimodal Corpora for Machine Learning: Taking Stock and Roadmapping the Future
* Each talk should last 20 minutes
9:15 – 9:30 Welcome
9:30 – 10:30 Session 1
Automatic detection of motion sequences for motion analysis
Bernhard Brüning, Christian Schnier, Karola Pitsch, Sven Wachsmuth
Multimodal Corpora for an Automatic System Fostering Participants' Engagement in Informal Conversations around a Museum Café Table
Nadia Mana, Alessandro Cappelletti, Oliviero Stock, Massimo Zancanaro
10:30 – 11:00 Coffee Break
11:00 – 13:00 Session 2
Communicating vagueness by hands and face
Isabella Poggi, Laura Vincze
An audio visual corpus for emergent leader analysis
Dairazalia Sanchez-Cortes, Oya Aran, Daniel Gatica-Perez
A multilingual corpus for rich audio-visual scene description in a meeting-room environment
Taras Butko, Climent Nadeu, Asunción Moreno
Turn taking, Utterance Density, and Gaze Patterns as Cues to Conversational Activity
Kristiina Jokinen
13:00 – 14:30 Lunch Break
14:30 – 16:00 Session 3
Multimodal Annotations and Categorization for Political Debates
Brigitte Bigi, Cristel Portes, Agnès Steuckardt, Marion Tellier
A Study on Cultural Variations of Smile Based on Empirical Recordings of Chinese and Swedish First Encounters
Jia Lu, Jens Allwood, Elisabeth Ahlsén
A multimodal dance corpus for research into real-time interaction between humans in online virtual environments
Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Noel E. O'Connor, Petros Daras and Gaël Richard
16:00 – 16:30 Coffee Break
16:30 – 17:30 Session 4
Learning to classify the feedback function of head movements in a Danish Corpus of first encounters
Patrizia Paggio, Costanza Navarretta
The Ravel Data Set
Xavier Alameda-Pineda, Jordi Sanchez-Riera, Johannes Wienke, Vojtech Franc, Jan Cech, Kaustubh Kulkarni, Antoine Deleforge, Radu Horaud
Local map of the Conference Venue
Layout of the Conference Venue
1st floor of Meliá Hotel
Main Conference Hall
Demonstrations
Lunch
WC: bathrooms are next to the Wellness Center and in the hall next to the “Almirante” room.
ICMI 2011, Alicante