
Proceedings of the 2012 International Conference on Multimodal Interfaces

Fullname: Proceedings of the 14th ACM International Conference on Multimodal Interaction
Editors: Louis-Philippe Morency; Dan Bohus; Hamid Aghajan; Justine Cassell; Anton Nijholt; Julien Epps
Location: Santa Monica, California
Dates: 2012-Oct-22 to 2012-Oct-26
Standard No: ISBN 978-1-4503-1467-1; hcibib: ICMI12
  1. Keynote 1
  2. Nonverbal / behaviour
  3. Affect
  4. Demo session 1
  5. Poster session
  6. Vision
  7. Keynote 2
  8. Special session: child-computer interaction
  9. Gestures
  10. Demo session 2
  11. Doctoral spotlight session
  12. Grand challenge overview
  13. Keynote 3
  14. Touch / taste
  15. Multimodal interaction
  16. Challenge 1: 2nd international audio/visual emotion challenge and workshop -- AVEC 2012
  17. Challenge 2: haptic voice recognition grand challenge
  18. Challenge 3: BCI grand challenge: brain-computer interfaces as intelligent sensors for enhancing human-computer interaction
  19. Workshop overview

Keynote 1

The co-operative, transformative organization of human action and knowledge, pp. 1-2
  Charles Goodwin

Nonverbal / behaviour

Two people walk into a bar: dynamic multi-party social interaction with a robot agent, pp. 3-10
  Mary Ellen Foster; Andre Gaschler; Manuel Giuliani; Amy Isard; Maria Pateraki; Ronald P. A. Petrick
Changes in verbal and nonverbal conversational behavior in long-term interaction, pp. 11-18
  Daniel Schulman; Timothy Bickmore
I already know your answer: using nonverbal behaviors to predict immediate outcomes in a dyadic negotiation, pp. 19-22
  Sunghyun Park; Jonathan Gratch; Louis-Philippe Morency
Modeling dominance effects on nonverbal behaviors using granger causality, pp. 23-26
  Kyriaki Kalimeri; Bruno Lepri; Oya Aran; Dinesh Babu Jayagopi; Daniel Gatica-Perez; Fabio Pianesi
Multimodal human behavior analysis: learning correlation and interaction across modalities, pp. 27-30
  Yale Song; Louis-Philippe Morency; Randall Davis

Affect

Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies, pp. 31-38
  Sidney D'Mello; Jacqueline Kory
Multimodal recognition of personality traits in human-computer collaborative tasks, pp. 39-46
  Ligia Batrinca; Bruno Lepri; Nadia Mana; Fabio Pianesi
Automatic detection of pain intensity, pp. 47-52
  Zakia Hammal; Jeffrey F. Cohn
FaceTube: predicting personality from facial expressions of emotion in online conversational video, pp. 53-56
  Joan-Isaac Biel; Lucía Teijeiro-Mosquera; Daniel Gatica-Perez

Demo session 1

The blue one to the left: enabling expressive user interaction in a multimodal interface for object selection in virtual 3d environments, pp. 57-58
  Pulkit Budhiraja; Sriganesh Madhvanath
Pixene: creating memories while sharing photos, pp. 59-60
  Ramadevi Vennelakanti; Sriganesh Madhvanath; Anbumani Subramanian; Ajith Sowndararajan; Arun David; Prasenjit Dey
Designing multiuser multimodal gestural interactions for the living room, pp. 61-62
  Sriganesh Madhvanath; Ramadevi Vennelakanti; Anbumani Subramanian; Ankit Shekhawat; Prasenjit Dey; Amit Rajan
Using explanations for runtime dialogue adaptation, pp. 63-64
  Florian Nothdurft; Frank Honold; Peter Kurzok
NeuroDialog: an EEG-enabled spoken dialog interface, pp. 65-66
  Seshadri Sridharan; Yun-Nung Chen; Kai-Min Chang; Alexander I. Rudnicky
Companion technology for multimodal interaction, pp. 67-68
  Frank Honold; Felix Schüssel; Florian Nothdurft; Peter Kurzok

Poster session

IrisTK: a statechart-based toolkit for multi-party face-to-face interaction, pp. 69-76
  Gabriel Skantze; Samer Al Moubayed
Estimating conversational dominance in multiparty interaction, pp. 77-84
  Yukiko Nakano; Yuki Fukuhara
Learning relevance from natural eye movements in pervasive interfaces, pp. 85-92
  Melih Kandemir; Samuel Kaski
Fishing or a Z?: investigating the effects of error on mimetic and alphabet device-based gesture interaction, pp. 93-100
  Abdallah El Ali; Johan Kildal; Vuokko Lantz
Structural and temporal inference search (STIS): pattern identification in multimodal data, pp. 101-108
  Chreston Miller; Louis-Philippe Morency; Francis Quek
Integrating word acquisition and referential grounding towards physical world interaction, pp. 109-116
  Rui Fang; Changsong Liu; Joyce Yue Chai
Effects of modality on virtual button motion and performance, pp. 117-124
  Adam Faeth; Chris Harding
Modeling multimodal integration with event logic charts, pp. 125-132
  Gregor Ulrich Mehlmann; Elisabeth André
Multimodal motion guidance: techniques for adaptive and dynamic feedback, pp. 133-140
  Christian Schönauer; Kenichiro Fukushi; Alex Olwal; Hannes Kaufmann; Ramesh Raskar
Multimodal detection of salient behaviors of approach-avoidance in dyadic interactions, pp. 141-144
  Bo Xiao; Panayiotis Georgiou; Brian Baucom; Shrikanth Narayanan
Multimodal analysis of the implicit affective channel in computer-mediated textual communication, pp. 145-152
  Joseph F. Grafsgaard; Robert M. Fulton; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
Towards sensing the influence of visual narratives on human affect, pp. 153-160
  Mihai Burzo; Daniel McDuff; Rada Mihalcea; Louis-Philippe Morency; Alexis Narvaez; Veronica Perez-Rosas
Integrating video and accelerometer signals for nocturnal epileptic seizure detection, pp. 161-164
  Kris Cuppens; Chih-Wei Chen; Kevin Bing-Yung Wong; Anouk Van de Vel; Lieven Lagae; Berten Ceulemans; Tinne Tuytelaars; Sabine Van Huffel; Bart Vanrumste; Hamid Aghajan
GeoGazemarks: providing gaze history for the orientation on small display maps, pp. 165-172
  Ioannis Giannopoulos; Peter Kiefer; Martin Raubal
Lost in navigation: evaluating a mobile map app for a fair, pp. 173-180
  Anders Bouwer; Frank Nack; Abdallah El Ali
An evaluation of game controllers and tablets as controllers for interactive tv applications, pp. 181-188
  Dale Cox; Justin Wolford; Carlos Jensen; Dedrie Beardsley
Towards multimodal deception detection -- step 1: building a collection of deceptive videos, pp. 189-192
  Rada Mihalcea; Mihai Burzo
A portable audio/video recorder for longitudinal study of child development, pp. 193-200
  Soroush Vosoughi; Matthew S. Goodwin; Bill Washabaugh; Deb Roy
Integrating PAMOCAT in the research cycle: linking motion capturing and conversation analysis, pp. 201-208
  Bernhard Brüning; Christian Schnier; Karola Pitsch; Sven Wachsmuth

Vision

Motion retrieval based on kinetic features in large motion database, pp. 209-216
  Tianyu Huang; Haiying Liu; Gangyi Ding
Vision-based handwriting recognition for unrestricted text input in mid-air, pp. 217-220
  Alexander Schick; Daniel Morlock; Christoph Amma; Tanja Schultz; Rainer Stiefelhagen
Investigating the midline effect for visual focus of attention recognition, pp. 221-224
  Samira Sheikhi; Jean-Marc Odobez
Let's have dinner together: evaluate the mediated co-dining experience, pp. 225-228
  Jun Wei; Adrian David Cheok; Ryohei Nakatsu

Keynote 2

Infusing the physical world into user interfaces, pp. 229-230
  Ivan Poupyrev

Special session: child-computer interaction

Child-computer interaction: ICMI 2012 special session, pp. 231-232
  Anton Nijholt
Knowledge gaps in hands-on tangible interaction research, pp. 233-240
  Alissa N. Antle
Evaluating artefacts with children: age and technology effects in the reporting of expected and experienced fun, pp. 241-248
  Janet C. Read
Measuring enjoyment of an interactive museum experience, pp. 249-256
  Elisabeth M. A. G. van Dijk; Andreas Lingnau; Hub Kockelkorn
Bifocal modeling: a study on the learning outcomes of comparing physical and computational models linked in real time, pp. 257-264
  Paulo Blikstein
Connecting play: understanding multimodal participation in virtual worlds, pp. 265-272
  Yasmin Kafai; Deborah Fields

Gestures

Gestures as point clouds: a $P recognizer for user interface prototypes, pp. 273-280
  Radu-Daniel Vatavu; Lisa Anthony; Jacob O. Wobbrock
Influencing gestural representation of eventualities: insights from ontology, pp. 281-288
  Magdalena Lis
Using self-context for multimodal detection of head nods in face-to-face interactions, pp. 289-292
  Laurent Nguyen; Jean-Marc Odobez; Daniel Gatica-Perez

Demo session 2

Multimodal multiparty social interaction with the furhat head, pp. 293-294
  Samer Al Moubayed; Gabriel Skantze; Jonas Beskow; Kalin Stefanov; Joakim Gustafson
An avatar-based help system for a grid computing web portal, pp. 295-296
  Helmut Lang; Florian Nothdurft
GamEMO: how physiological signals show your emotions and enhance your game experience, pp. 297-298
  Guillaume Chanel; Konstantina Kalogianni; Thierry Pun
Multimodal collaboration for crime scene investigation in mediated reality, pp. 299-300
  Dragos Datcu; Thomas Swart; Stephan Lukosch; Zoltan Rusak
PAMOCAT: linking motion capturing and conversation analysis, pp. 301-302
  Bernhard Andreas Brüning; Christian Schnier
Multimodal dialogue in mobile local search, pp. 303-304
  Patrick Ehlen; Michael Johnston

Doctoral spotlight session

Toward an argumentation-based dialogue framework for human-robot collaboration, pp. 305-308
  Mohammad Q. Azhar
Timing multimodal turn-taking for human-robot cooperation, pp. 309-312
  Crystal Chao
My automated conversation helper (MACH): helping people improve social skills, pp. 313-316
  Mohammed E. Hoque
A touch of affect: mediated social touch and affect, pp. 317-320
  Gijs Huisman
Depression analysis: a multimodal approach, pp. 321-324
  Jyoti Joshi
Design space for finger gestures with hand-held tablets, pp. 325-328
  Katrin Wolf
Multi-modal interfaces for control of assistive robotic devices, pp. 329-332
  Christopher Dale McMurrough
Space, speech, and gesture in human-robot interaction, pp. 333-336
  Ross Mead
Machine analysis and recognition of social contexts, pp. 337-340
  Maria O'Connor
Task-learning policies for collaborative task solving in human-robot interaction, pp. 341-344
  Hae Won Park
Simulating real danger?: validation of driving simulator test and psychological factors in brake response time to danger, pp. 345-348
  Daniele Ruscio
Virtual patients to teach cultural competency, pp. 349-352
  Raghavi Sakpal
Multimodal learning analytics: enabling the future of learning through multimodal data analysis and interfaces, pp. 353-356
  Marcelo Worsley
A hierarchical approach to continuous gesture analysis for natural multi-modal interaction, pp. 357-360
  Ying Yin

Grand challenge overview

AVEC 2012: the continuous audio/visual emotion challenge -- an introduction, pp. 361-362
  Björn Schuller; Michel Valstar; Roddy Cowie; Maja Pantic
ICMI'12 grand challenge: haptic voice recognition, pp. 363-370
  Khe Chai Sim; Shengdong Zhao; Kai Yu; Hank Liao
Audio-visual robot command recognition: D-META'12 grand challenge, pp. 371-378
  Jordi Sanchez-Riera; Xavier Alameda-Pineda; Radu Horaud
Brain computer interfaces as intelligent sensors for enhancing human-computer interaction, pp. 379-382
  Mannes Poel; Femke Nijboer; Egon L. van den Broek; Stephen Fairclough; Anton Nijholt

Keynote 3

Using psychophysical techniques to design and evaluate multimodal interfaces: psychophysics and interface design, pp. 383-384
  Roberta L. Klatzky

Touch / taste

Reproducing materials of virtual elements on touchscreens using supplemental thermal feedback, pp. 385-392
  Hendrik Richter; Doris Hausen; Sven Osterwald; Andreas Butz
Feeling it: the roles of stiffness, deformation range and feedback in the control of deformable ui, pp. 393-400
  Johan Kildal; Graham Wilson
Audible rendering of text documents controlled by multi-touch interaction, pp. 401-408
  Yasmine El-Glaly; Francis Quek; Tonya Smith-Jackson; Gurjot Dhillon
Taste/IP: the sensation of taste for digital communication, pp. 409-416
  Nimesha Ranasinghe; Adrian David Cheok; Ryohei Nakatsu

Multimodal interaction

Learning speaker, addressee and overlap detection models from multimodal streams, pp. 417-424
  Oriol Vinyals; Dan Bohus; Rich Caruana
Analysis of the correlation between the regularity of work behavior and stress indices based on longitudinal behavioral data, pp. 425-432
  Shogo Okada; Yusaku Sato; Yuki Kamiya; Keiji Yamada; Katsumi Nitta
Linking speaking and looking behavior patterns with group composition, perception, and performance, pp. 433-440
  Dineshbabu Jayagopi; Dairazalia Sanchez-Cortes; Kazuhiro Otsuka; Junji Yamato; Daniel Gatica-Perez
Semi-automatic generation of multimodal user interfaces for dialogue-based interactive systems, pp. 441-444
  Dominik Ertl; Hermann Kaindl
Designing multimodal reminders for the home: pairing content with presentation, pp. 445-448
  Julie R. Williamson; Marilyn McGee-Lennon; Stephen Brewster

Challenge 1: 2nd international audio/visual emotion challenge and workshop -- AVEC 2012

AVEC 2012: the continuous audio/visual emotion challenge, pp. 449-456
  Björn Schuller; Michel Valstar; Florian Eyben; Roddy Cowie; Maja Pantic
Facial emotion recognition with expression energy, pp. 457-464
  Albert C. Cruz; Bir Bhanu; Ninad Thakoor
Multiple classifier combination using reject options and Markov fusion networks, pp. 465-472
  Michael Glodek; Martin Schels; Günther Palm; Friedhelm Schwenker
Audio-visual emotion challenge 2012: a simple approach, pp. 473-476
  Laurens van der Maaten
Step-wise emotion recognition using concatenated-HMM, pp. 477-484
  Derya Ozkan; Stefan Scherer; Louis-Philippe Morency
Combining video, audio and lexical indicators of affect in spontaneous conversation via particle filtering, pp. 485-492
  Arman Savran; Houwei Cao; Miraj Shah; Ani Nenkova; Ragini Verma
A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection, pp. 493-500
  Catherine Soladié; Hanan Salam; Catherine Pelachaud; Nicolas Stoiber; Renaud Séguier
Robust continuous prediction of human emotions using multiscale dynamic cues, pp. 501-508
  Jérémie Nicolle; Vincent Rapp; Kévin Bailly; Lionel Prevost; Mohamed Chetouani
Elastic net for paralinguistic speech recognition, pp. 509-516
  Pouria Fewzee; Fakhri Karray
Improving generalisation and robustness of acoustic affect recognition, pp. 517-522
  Florian Eyben; Björn Schuller; Gerhard Rigoll
Preserving actual dynamic trend of emotion in dimensional speech emotion recognition, pp. 523-528
  Wenjing Han; Haifeng Li; Florian Eyben; Lin Ma; Jiayin Sun; Björn Schuller
Negative sentiment in scenarios elicit pupil dilation response: an auditory study, pp. 529-532
  Serdar Baltaci; Didem Gokcay

Challenge 2: haptic voice recognition grand challenge

Design and implementation of the note-taking style haptic voice recognition for mobile devices, pp. 533-538
  Seungwhan Moon; Khe Chai Sim
Development of the 2012 SJTU HVR system, pp. 539-544
  Hainan Xu; Yuchen Fan; Kai Yu
Improving mandarin predictive text input by augmenting pinyin initials with speech and tonal information, pp. 545-550
  Guangsen Wang; Bo Li; Shilin Liu; Xuancong Wang; Xiaoxuan Wang; Khe Chai Sim
LUI: lip in multimodal mobile GUI interaction, pp. 551-554
  Maryam Azh; Shengdong Zhao
Speak-as-you-swipe (SAYS): a multimodal interface combining speech and gesture keyboard synchronously for continuous mobile text entry, pp. 555-560
  Khe Chai Sim

Challenge 3: BCI grand challenge: brain-computer interfaces as intelligent sensors for enhancing human-computer interaction

Interpersonal biocybernetics: connecting through social psychophysiology, pp. 561-566
  Alan T. Pope; Chad L. Stephens
Adaptive EEG artifact rejection for cognitive games, pp. 567-570
  Olexiy Kyrgyzov; Antoine Souloumiac
Construction of the biocybernetic loop: a case study, pp. 571-578
  Stephen Fairclough; Kiel Gilleade
An interactive control strategy is more robust to non-optimal classification boundaries, pp. 579-586
  Virginia R. de Sa
Improving BCI performance after classification, pp. 587-594
  Danny Plass-Oude Bos; Hayrettin Gürkök; Boris Reuderink; Mannes Poel
Electroencephalographic detection of visual saliency of motion towards a practical brain-computer interface for video analysis, pp. 601-606
  Matthew Weiden; Deepak Khosla; Matthew Keegan

Workshop overview

Workshop on speech and gesture production in virtually and physically embodied conversational agents, pp. 607-608
  Ross Mead; Maha Salem
1st international workshop on multimodal learning analytics: extended abstract, pp. 609-610
  Stefan Scherer; Marcelo Worsley; Louis-Philippe Morency
4th workshop on eye gaze in intelligent human machine interaction: eye gaze and multimodality, pp. 611-612
  Yukiko I. Nakano; Kristiina Jokinen; Hung-Hsuan Huang
The 3rd international workshop on social behaviour in music: SBM2012, pp. 613-614
  Antonio Camurri; Donald Glowinski; Maurizio Mancini; Giovanna Varni; Gualtiero Volpe
Smart material interfaces: a material step to the future, pp. 615-616
  Anton Nijholt; Leonardo Giusti; Andrea Minuto; Patrizia Marti