
Proceedings of the 2013 International Conference on Multimodal Interaction

Fullname: Proceedings of the 15th ACM International Conference on Multimodal Interaction
Editors: Julien Epps; Fang Chen; Sharon Oviatt; Kenji Mase; Andrew Sears; Kristiina Jokinen; Björn Schuller
Location: Sydney, Australia
Dates: 2013-Dec-09 to 2013-Dec-13
Publisher: ACM
Standard No: ISBN 978-1-4503-2129-7; hcibib: ICMI13
Papers: 100
Pages: 612
  1. Keynote 1
  2. Oral session 1: personality
  3. Oral session 2: communication
  4. Demo session 1
  5. Poster session 1
  6. Oral session 3: intelligent & multimodal interfaces
  7. Keynote 2
  8. Oral session 4: embodied interfaces
  9. Oral session 5: hand and body
  10. Demo session 2
  11. Poster session 2: doctoral spotlight
  12. Grand challenge overviews
  13. Keynote 3
  14. Oral session 6: AR, VR & mobile
  15. Oral session 7: eyes & body
  16. ChaLearn challenge and workshop on multi-modal gesture recognition
  17. Emotion recognition in the wild challenge and workshop
  18. Multimodal learning analytics challenge
  19. Workshop overview

Keynote 1

Behavior imaging and the study of autism, pp. 1-2
  James M. Rehg

Oral session 1: personality

On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions, pp. 3-10
  Ramanathan Subramanian; Yan Yan; Jacopo Staiano; Oswald Lanz; Nicu Sebe
One of a kind: inferring personality impressions in meetings, pp. 11-18
  Oya Aran; Daniel Gatica-Perez
Who is persuasive?: the role of perceived personality and communication modality in social multimedia, pp. 19-26
  Gelareh Mohammadi; Sunghyun Park; Kenji Sagae; Alessandro Vinciarelli; Louis-Philippe Morency
Going beyond traits: multimodal classification of personality states in the wild, pp. 27-34
  Kyriaki Kalimeri; Bruno Lepri; Fabio Pianesi

Oral session 2: communication

Implementation and evaluation of a multimodal addressee identification mechanism for multiparty conversation systems, pp. 35-42
  Yukiko I. Nakano; Naoya Baba; Hung-Hsuan Huang; Yuki Hayashi
Managing chaos: models of turn-taking in character-multichild interactions, pp. 43-50
  Iolanda Leite; Hannaneh Hajishirzi; Sean Andrist; Jill Lehman
Speaker-adaptive multimodal prediction model for listener responses, pp. 51-58
  Iwan de Kok; Dirk Heylen; Louis-Philippe Morency
User experiences of mobile audio conferencing with spatial audio, haptics and gestures, pp. 59-66
  Jussi Rantala; Sebastian Müller; Roope Raisamo; Katja Suhonen; Kaisa Väänänen-Vainio-Mattila; Vuokko Lantz

Demo session 1

A framework for multimodal data collection, visualization, annotation and learning, pp. 67-68
  Anne Loomis Thompson; Dan Bohus
Demonstration of sketch-thru-plan: a multimodal interface for command and control, pp. 69-70
  Philip R. Cohen; M. Cecelia Buchanan; Edward J. Kaiser; Michael Corrigan; Scott Lind; Matt Wesson
Robotic learning companions for early language development, pp. 71-72
  Jacqueline M. Kory; Sooyeon Jeong; Cynthia L. Breazeal
WikiTalk human-robot interactions, pp. 73-74
  Graham Wilcock; Kristiina Jokinen

Poster session 1

Saliency-guided 3D head pose estimation on 3D expression models, pp. 75-78
  Peng Liu; Michael Reale; Xing Zhang; Lijun Yin
Predicting next speaker and timing from gaze transition patterns in multi-party meetings, pp. 79-86
  Ryo Ishii; Kazuhiro Otsuka; Shiro Kumano; Masafumi Matsuda; Junji Yamato
A semi-automated system for accurate gaze coding in natural dyadic interactions, pp. 87-90
  Kenneth A. Funes Mora; Laurent Nguyen; Daniel Gatica-Perez; Jean-Marc Odobez
Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces, pp. 91-98
  Nanxiang Li; Carlos Busso
A gaze-based method for relating group involvement to individual engagement in multimodal multiparty dialogue, pp. 99-106
  Catharine Oertel; Giampiero Salvi
Leveraging the robot dialog state for visual focus of attention recognition, pp. 107-110
  Samira Sheikhi; Vasil Khalidov; David Klotz; Britta Wrede; Jean-Marc Odobez
CoWME: a general framework to evaluate cognitive workload during multimodal interaction, pp. 111-118
  Davide Maria Calandra; Antonio Caso; Francesco Cutugno; Antonio Origlia; Silvia Rossi
Hi YouTube!: personality impressions and verbal content in social video, pp. 119-126
  Joan-Isaac Biel; Vagia Tsiminaki; John Dines; Daniel Gatica-Perez
Cross-domain personality prediction: from video blogs to small group meetings, pp. 127-130
  Oya Aran; Daniel Gatica-Perez
Automatic detection of deceit in verbal communication, pp. 131-134
  Rada Mihalcea; Verónica Pérez-Rosas; Mihai Burzo
Audiovisual behavior descriptors for depression assessment, pp. 135-140
  Stefan Scherer; Giota Stratou; Louis-Philippe Morency
A Markov logic framework for recognizing complex events from multimodal data, pp. 141-148
  Young Chol Song; Henry Kautz; James Allen; Mary Swift; Yuncheng Li; Jiebo Luo; Ce Zhang
Interactive relevance search and modeling: support for expert-driven analysis of multimodal data, pp. 149-156
  Chreston Miller; Francis Quek; Louis-Philippe Morency
Predicting speech overlaps from speech tokens and co-occurring body behaviours in dyadic conversations, pp. 157-164
  Costanza Navarretta
Interaction analysis and joint attention tracking in augmented reality, pp. 165-172
  Alexander Neumann; Christian Schnier; Thomas Hermann; Karola Pitsch
Mo!Games: evaluating mobile gestures in the wild, pp. 173-180
  Julie R. Williamson; Stephen Brewster; Rama Vennelakanti
Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent, pp. 181-188
  Benjamin Inden; Zofia Malisz; Petra Wagner; Ipke Wachsmuth
Video analysis of approach-avoidance behaviors of teenagers speaking with virtual agents, pp. 189-196
  David Antonio Gómez Jáuregui; Léonor Philip; Céline Clavel; Stéphane Padovani; Mahin Bailly; Jean-Claude Martin
A dialogue system for multimodal human-robot interaction, pp. 197-204
  Lorenzo Lucignano; Francesco Cutugno; Silvia Rossi; Alberto Finzi
The zigzag paradigm: a new P300-based brain computer interface, pp. 205-212
  Qasem Obeidat; Tom Campbell; Jun Kong
SpeeG2: a speech- and gesture-based interface for efficient controller-free text input, pp. 213-220
  Lode Hoste; Beat Signer

Oral session 3: intelligent & multimodal interfaces

Interfaces for thinkers: computer input capabilities that support inferential reasoning, pp. 221-228
  Sharon Oviatt
Adaptive timeline interface to personal history data, pp. 229-236
  Antti Ajanki; Markus Koskela; Jorma Laaksonen; Samuel Kaski
Learning a sparse codebook of facial and body microexpressions for emotion recognition, pp. 237-244
  Yale Song; Louis-Philippe Morency; Randall Davis

Keynote 2

Giving interaction a hand: deep models of co-speech gesture in multimodal systems, pp. 245-246
  Stefan Kopp

Oral session 4: embodied interfaces

Five key challenges in end-user development for tangible and embodied interaction, pp. 247-254
  Daniel Tetteroo; Iris Soute; Panos Markopoulos
"How can I help you?": comparing engagement classification strategies for a robot bartender, pp. 255-262
  Mary Ellen Foster; Andre Gaschler; Manuel Giuliani
Comparing task-based and socially intelligent behaviour in a robot bartender, pp. 263-270
  Manuel Giuliani; Ronald P. A. Petrick; Mary Ellen Foster; Andre Gaschler; Amy Isard; Maria Pateraki; Markos Sigalas
A dynamic multimodal approach for assessing learners' interaction experience, pp. 271-278
  Imène Jraidi; Maher Chaouachi; Claude Frasson

Oral session 5: hand and body

Relative accuracy measures for stroke gestures, pp. 279-286
  Radu-Daniel Vatavu; Lisa Anthony; Jacob O. Wobbrock
LensGesture: augmenting mobile interactions with back-of-device finger gestures, pp. 287-294
  Xiang Xiao; Teng Han; Jingtao Wang
Aiding human discovery of handwriting recognition errors, pp. 295-302
  Ryan Stedman; Michael Terry; Edward Lank
Context-based conversational hand gesture classification in narrative interaction, pp. 303-310
  Shogo Okada; Mayumi Bono; Katsuya Takanashi; Yasuyuki Sumi; Katsumi Nitta

Demo session 2

A haptic touchscreen interface for mobile devices, pp. 311-312
  Jong-Uk Lee; Jeong-Mook Lim; Heesook Shin; Ki-Uk Kyung
A social interaction system for studying humor with the robot NAO, pp. 313-314
  Laurence Y. Devillers; Mariette Soury
TaSST: affective mediated touch, pp. 315-316
  Aduen Darriba Freriks; Dirk Heylen; Gijs Huisman
Talk ROILA to your Robot, pp. 317-318
  Omar Mubin; Joshua Henderson; Christoph Bartneck
NEMOHIFI: an affective HiFi agent, pp. 319-320
  Syaheerah Lebai Lutfi; Fernando Fernandez-Martinez; Jaime Lorenzo-Trueba; Roberto Barra-Chicote; Juan Manuel Montero

Poster session 2: doctoral spotlight

Persuasiveness in social multimedia: the role of communication modality and the challenge of crowdsourcing annotations, pp. 321-324
  Sunghyun Park
Towards a dynamic view of personality: multimodal classification of personality states in everyday situations, pp. 325-328
  Kyriaki Kalimeri
Designing effective multimodal behaviors for robots: a data-driven perspective, pp. 329-332
  Chien-Ming Huang
Controllable models of gaze behavior for virtual agents and humanlike robots, pp. 333-336
  Sean Andrist
The nature of the bots: how people respond to robots, virtual agents and humans as multimodal stimuli, pp. 337-340
  Jamy Li
Adaptive virtual rapport for embodied conversational agents, pp. 341-344
  Ivan Gris Sepulveda
3D head pose and gaze tracking and their application to diverse multimodal tasks, pp. 345-348
  Kenneth Alberto Funes Mora
Towards developing a model for group involvement and individual engagement, pp. 349-352
  Catharine Oertel
Gesture recognition using depth images, pp. 353-356
  Bin Liang
Modeling semantic aspects of gaze behavior while catalog browsing, pp. 357-360
  Erina Ishikawa
Computational behaviour modelling for autism diagnosis, pp. 361-364
  Shyam Sundar Rajagopalan

Grand challenge overviews

ChaLearn multi-modal gesture recognition 2013: grand challenge and workshop summary, pp. 365-368
  Sergio Escalera; Jordi Gonzàlez; Xavier Baró; Miguel Reyes; Isabelle Guyon; Vassilis Athitsos; Hugo Escalante; Leonid Sigal; Antonis Argyros; Cristian Sminchisescu; Richard Bowden; Stan Sclaroff
Emotion recognition in the wild challenge (EmotiW): challenge and workshop summary, pp. 371-372
  Abhinav Dhall; Roland Goecke; Jyoti Joshi; Michael Wagner; Tom Gedeon
ICMI 2013 grand challenge workshop on multimodal learning analytics, pp. 373-378
  Louis-Philippe Morency; Sharon Oviatt; Stefan Scherer; Nadir Weibel; Marcelo Worsley

Keynote 3

Hands and speech in space: multimodal interaction with augmented reality interfaces, pp. 379-380
  Mark Billinghurst

Oral session 6: AR, VR & mobile

Evaluating dual-view perceptual issues in handheld augmented reality: device vs. user perspective rendering, pp. 381-388
  Klen Copic Pucihar; Paul Coulton; Jason Alexander
MM+Space: n × 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces, pp. 389-396
  Kazuhiro Otsuka; Shiro Kumano; Ryo Ishii; Maja Zbogar; Junji Yamato
Investigating appropriate spatial relationship between user and AR character agent for communication using an AR WoZ system, pp. 397-404
  Reina Aramaki; Makoto Murakami
Inferring social activities with mobile sensor networks, pp. 405-412
  Trinh Minh Tri Do; Kyriaki Kalimeri; Bruno Lepri; Fabio Pianesi; Daniel Gatica-Perez

Oral session 7: eyes & body

Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration, pp. 413-420
  Ichiro Umata; Seiichi Yamamoto; Koki Ijuin; Masafumi Nishida
Predicting where we look from spatiotemporal gaps, pp. 421-428
  Ryo Yonetani; Hiroaki Kawashima; Takashi Matsuyama
Automatic multimodal descriptors of rhythmic body movement, pp. 429-436
  Marwa Mahmoud; Louis-Philippe Morency; Peter Robinson
Multimodal analysis of body communication cues in employment interviews, pp. 437-444
  Laurent Son Nguyen; Alvaro Marcos-Ramiro; Martha Marrón Romera; Daniel Gatica-Perez

ChaLearn challenge and workshop on multi-modal gesture recognition

Multi-modal gesture recognition challenge 2013: dataset and results, pp. 445-452
  Sergio Escalera; Jordi Gonzàlez; Xavier Baró; Miguel Reyes; Oscar Lopes; Isabelle Guyon; Vassilis Athitsos; Hugo Escalante
Fusing multi-modal features for gesture recognition, pp. 453-460
  Jiaxiang Wu; Jian Cheng; Chaoyang Zhao; Hanqing Lu
A multi modal approach to gesture recognition from audio and video data, pp. 461-466
  Immanuel Bayer; Thierry Silbermann
Online RGB-D gesture recognition with extreme learning machines, pp. 467-474
  Xi Chen; Markus Koskela
A multi-modal gesture recognition system using audio, video, and skeletal joint data, pp. 475-482
  Karthik Nandakumar; Kong Wah Wan; Siu Man Alice Chan; Wen Zheng Terence Ng; Jian Gang Wang; Wei Yun Yau
ChAirGest: a challenge for multimodal mid-air gesture recognition for close HCI, pp. 483-488
  Simon Ruffieux; Denis Lalanne; Elena Mugellini
Gesture spotting and recognition using salience detection and concatenated hidden Markov models, pp. 489-494
  Ying Yin; Randall Davis
Multi-modal social signal analysis for predicting agreement in conversation settings, pp. 495-502
  Víctor Ponce-López; Sergio Escalera; Xavier Baró
Multi-modal descriptors for multi-class hand pose recognition in human computer interaction systems, pp. 503-508
  Jordi Abella; Raúl Alcaide; Anna Sabaté; Joan Mas; Sergio Escalera; Jordi Gonzàlez; Coen Antens

Emotion recognition in the wild challenge and workshop

Emotion recognition in the wild challenge 2013, pp. 509-516
  Abhinav Dhall; Roland Goecke; Jyoti Joshi; Michael Wagner; Tom Gedeon
Multiple kernel learning for emotion recognition in the wild, pp. 517-524
  Karan Sikka; Karmen Dykstra; Suchitra Sathyanarayana; Gwen Littlewort; Marian Bartlett
Partial least squares regression on Grassmannian manifold for emotion recognition, pp. 525-530
  Mengyi Liu; Ruiping Wang; Zhiwu Huang; Shiguang Shan; Xilin Chen
Emotion recognition with boosted tree classifiers, pp. 531-534
  Matthew Day
Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP, pp. 535-542
  Timur R. Almaev; Anil Yüce; Alexandru Ghitulescu; Michel F. Valstar
Combining modality specific deep neural networks for emotion recognition in video, pp. 543-550
  Samira Ebrahimi Kanou; Christopher Pal; Xavier Bouthillier; Pierre Froumenty; Çaglar Gülçehre; Roland Memisevic; Pascal Vincent; Aaron Courville; Yoshua Bengio; Raul Chandias Ferrari; Mehdi Mirza; Sébastien Jean; Pierre-Luc Carrier; Yann Dauphin; Nicolas Boulanger-Lewandowski; Abhishek Aggarwal; Jeremie Zumer; Pascal Lamblin; Jean-Philippe Raymond; Guillaume Desjardins; Razvan Pascanu; David Warde-Farley; Atousa Torabi; Arjun Sharma; Emmanuel Bengio; Kishore Reddy Konda; Zhenzhou Wu
Multi classifier systems and forward backward feature selection algorithms to classify emotional coloured speech, pp. 551-556
  Sascha Meudt; Dimitri Zharkov; Markus Kächele; Friedhelm Schwenker
Emotion recognition using facial and audio features, pp. 557-564
  Tarun Krishna; Ayush Rai; Shubham Bansal; Shubham Khandelwal; Shubham Gupta; Dushyant Goel

Multimodal learning analytics challenge

Multimodal learning analytics: description of math data corpus for ICMI grand challenge workshop, pp. 563-568
  Sharon Oviatt; Adrienne Cohen; Nadir Weibel
Problem solving, domain expertise and learning: ground-truth performance results for math data corpus, pp. 569-574
  Sharon Oviatt
Automatic identification of experts and performance prediction in the multimodal math data corpus through analysis of speech interaction, pp. 575-582
  Saturnino Luz
Expertise estimation based on simple multimodal features, pp. 583-590
  Xavier Ochoa; Katherine Chiluiza; Gonzalo Méndez; Gonzalo Luzardo; Bruno Guamán; James Castells
Using micro-patterns of speech to predict the correctness of answers to mathematics problems: an exercise in multimodal learning analytics, pp. 591-598
  Kate Thompson
Written and multimodal representations as predictors of expertise and problem-solving success in mathematics, pp. 599-606
  Sharon Oviatt; Adrienne Cohen

Workshop overview

ERM4HCI 2013: the 1st workshop on emotion representation and modelling in human-computer-interaction-systems, pp. 607-608
  Kim Hartmann; Ronald Böck; Christian Becker-Asano; Jonathan Gratch; Björn Schuller; Klaus R. Scherer
Gazein'13: the 6th workshop on eye gaze in intelligent human machine interaction: gaze in multimodal interaction, pp. 609-610
  Roman Bednarik; Hung-Hsuan Huang; Yukiko Nakano; Kristiina Jokinen
Smart material interfaces: "another step to a material future", pp. 611-612
  Manuel Kretzer; Andrea Minuto; Anton Nijholt