
Proceedings of the 2008 International Conference on Multimodal Interfaces

Fullname: ICMI'08 Proceedings of the 10th International Conference on Multimodal Interfaces
Editors: Vassilis Digalakis; Alex Potamianos; Matthew Turk; Roberto Pieraccini; Yuri Ivanov
Location: Chania, Crete, Greece
Dates: 2008-Oct-20 to 2008-Oct-22
Publisher: ACM
Standard No: ISBN 1-60558-198-4, 978-1-60558-198-9; hcibib: ICMI08
Papers: 53
Pages: 312
Links: Conference Home Page
  1. Keynote
  2. Multimodal system evaluation (oral session)
  3. Special session on social signal processing (oral session)
  4. Multimodal systems I (poster session)
  5. Multimodal system design and tools (oral session)
  6. Multimodal interfaces I (oral session)
  7. Demo session
  8. Keynote
  9. Multimodal interfaces II (oral session)
  10. Multimodal modelling (oral session)
  11. Multimodal systems II (poster session)

Keynote

Natural interfaces in the field: the case of pen and paper pp. 1-2
  Phil Cohen

Multimodal system evaluation (oral session)

Manipulating trigonometric expressions encoded through electro-tactile signals pp. 3-8
  Tatiana G. Evreinova
Multimodal system evaluation using modality efficiency and synergy metrics pp. 9-16
  Manolis Perakakis; Alexandros Potamianos
Effectiveness and usability of an online help agent embodied as a talking head pp. 17-20
  Jérôme Simonin; Noëlle Carbonell; Danielle Pelé
Interaction techniques for the analysis of complex data on high-resolution displays pp. 21-28
  Chreston Miller; Ashley Robinson; Rongrong Wang; Pak Chung; Francis Quek

Special session on social signal processing (oral session)

Role recognition in multiparty recordings using social affiliation networks and discrete distributions pp. 29-36
  Sarah Favre; Hugues Salamin; John Dines; Alessandro Vinciarelli
Audiovisual laughter detection based on temporal features pp. 37-44
  Stavros Petridis; Maja Pantic
Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues pp. 45-52
  Dinesh Babu Jayagopi; Sileye Ba; Jean-Marc Odobez; Daniel Gatica-Perez
Multimodal recognition of personality traits in social interactions pp. 53-60
  Fabio Pianesi; Nadia Mana; Alessandro Cappelletti; Bruno Lepri; Massimo Zancanaro
Social signals, their function, and automatic analysis: a survey pp. 61-68
  Alessandro Vinciarelli; Maja Pantic; Hervé Bourlard; Alex Pentland

Multimodal systems I (poster session)

VoiceLabel: using speech to label mobile sensor data pp. 69-76
  Susumu Harada; Jonathan Lester; Kayur Patel; T. Scott Saponas; James Fogarty; James A. Landay; Jacob O. Wobbrock
The babbleTunes system: talk to your iPod! pp. 77-80
  Jan Schehl; Alexander Pfalzgraf; Norbert Pfleger; Jochen Steigner
Evaluating talking heads for smart home systems pp. 81-84
  Christine Kühnel; Benjamin Weiss; Ina Wechsung; Sascha Fagel; Sebastian Möller
Perception of dynamic audiotactile feedback to gesture input pp. 85-92
  Teemu Tuomas Ahmaniemi; Vuokko Lantz; Juha Marila
An integrative recognition method for speech and gestures pp. 93-96
  Madoka Miki; Chiyomi Miyajima; Takanori Nishino; Norihide Kitaoka; Kazuya Takeda
As go the feet...: on the estimation of attentional focus from stance pp. 97-104
  Francis Quek; Roger Ehrich; Thurmon Lockhart
Knowledge and data flow architecture for reference processing in multimodal dialog systems pp. 105-108
  Ali Choumane; Jacques Siroux
The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements pp. 109-116
  Elise Arnaud; Heidi Christensen; Yan-Chen Lu; Jon Barker; Vasil Khalidov; Miles Hansard; Bertrand Holveck; Hervé Mathieu; Ramya Narasimha; Elise Taillant; Florence Forbes; Radu Horaud
Towards a minimalist multimodal dialogue framework using recursive MVC pattern pp. 117-120
  Li Li; Wu Chou
Explorative studies on multimodal interaction in a PDA- and desktop-based scenario pp. 121-128
  Andreas Ratzka

Multimodal system design and tools (oral session)

Designing context-aware multimodal virtual environments pp. 129-136
  Lode Vanacken; Joan De Boeck; Chris Raymaekers; Karin Coninx
A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces pp. 137-140
  Phil Cohen; Colin Swindells; Sharon Oviatt; Alex Arthur
The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces pp. 141-148
  Alexander Gruenstein; Ian McGraw; Ibrahim Badr
A three-dimensional characterization space of software components for rapidly developing multimodal interfaces pp. 149-156
  Marcos Serrano; David Juras; Laurence Nigay

Multimodal interfaces I (oral session)

Crossmodal congruence: the look, feel and sound of touchscreen widgets pp. 157-164
  Eve Hoggan; Topi Kaaresoja; Pauli Laitinen; Stephen Brewster
MultiML: a general purpose representation language for multimodal human utterances pp. 165-172
  Manuel Giuliani; Alois Knoll
Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios pp. 173-180
  Michael Voit; Rainer Stiefelhagen
Context-based recognition during human interactions: automatic feature selection and encoding dictionary pp. 181-188
  Louis-Philippe Morency; Iwan de Kok; Jonathan Gratch

Demo session

AcceleSpell, a gestural interactive game to learn and practice finger spelling pp. 189-190
  José Luis Hernandez-Rebollar; Ethar Ibrahim Elsakay; José D. Alanís-Urquieta
A multi-modal spoken dialog system for interactive TV pp. 191-192
  Rajesh Balchandran; Mark E. Epstein; Gerasimos Potamianos; Ladislav Seredi
Multimodal slideshow: demonstration of the OpenInterface interaction development environment pp. 193-194
  David Juras; Laurence Nigay; Michael Ortega; Marcos Serrano
A browser-based multimodal interaction system pp. 195-196
  Kouichi Katsurada; Teruki Kirihata; Masashi Kudo; Junki Takada; Tsuneo Nitta
iGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations pp. 197-198
  Dominic W. Massaro; Miguel Á Carreira-Perpiñán; David J. Merrill; Cass Sterling; Stephanie Bigler; Elise Piazza; Marcus Perlman
Innovative interfaces in MonAMI: the reminder pp. 199-200
  Jonas Beskow; Jens Edlund; Teodore Gjermani; Björn Granström; Joakim Gustafson; Oskar Jonsson; Gabriel Skantze; Helena Tobiasson
PHANTOM prototype: exploring the potential for learning with multimodal features in dentistry pp. 201-202
  Jonathan Padilla San Diego; Alastair Barrow; Margaret Cox; William Harwin

Keynote

Audiovisual 3D rendering as a tool for multimodal interfaces pp. 203-204
  George Drettakis

Multimodal interfaces II (oral session)

Multimodal presentation and browsing of music pp. 205-208
  David Damm; Christian Fremerey; Frank Kurth; Meinard Müller; Michael Clausen
An audio-haptic interface based on auditory depth cues pp. 209-216
  Delphine Devallez; Federico Fontana; Davide Rocchesso
Detection and localization of 3D audio-visual objects using unsupervised clustering pp. 217-224
  Vasil Khalidov; Florence Forbes; Miles Hansard; Elise Arnaud; Radu Horaud
Robust gesture processing for multimodal interaction pp. 225-232
  Srinivas Bangalore; Michael Johnston

Multimodal modelling (oral session)

Investigating automatic dominance estimation in groups from visual attention and speaking activity pp. 233-236
  Hayley Hung; Dinesh Babu Jayagopi; Sileye Ba; Jean-Marc Odobez; Daniel Gatica-Perez
Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition pp. 237-240
  Mihai Gurban; Jean-Philippe Thiran; Thomas Drugman; Thierry Dutoit
A Fitts Law comparison of eye tracking and manual input in the selection of visual targets pp. 241-248
  Roel Vertegaal
A Wizard of Oz study for an AR multimodal interface pp. 249-256
  Minkyung Lee; Mark Billinghurst

Multimodal systems II (poster session)

A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization pp. 257-264
  Kazuhiro Otsuka; Shoko Araki; Kentaro Ishizuka; Masakiyo Fujimoto; Martin Heinrich; Junji Yamato
Designing and evaluating multimodal interaction for mobile contexts pp. 265-272
  Saija Lemmelä; Akos Vetek; Kaj Mäkelä; Dari Trendafilov
Automated sip detection in naturally-evoked video pp. 273-280
  Rana el Kaliouby; Mina Mikhail
Perception of low-amplitude haptic stimuli when biking pp. 281-284
  Toni Pakkanen; Jani Lylykangas; Jukka Raisamo; Roope Raisamo; Katri Salminen; Jussi Rantala; Veikko Surakka
TactiMote: a tactile remote control for navigating in long lists pp. 285-288
  Muhammad Tahir; Gilles Bailly; Eric Lecolinet; Gérard Mouret
The DIRAC AWEAR audio-visual platform for detection of unexpected and incongruent events pp. 289-292
  Jörn Anemüller; Jörg-Hendrik Bach; Barbara Caputo; Michal Havlena; Luo Jie; Hendrik Kayser; Bastian Leibe; Petr Motlicek; Tomas Pajdla; Misha Pavel; Akihiko Torii; Luc Van Gool; Alon Zweig; Hynek Hermansky
Smoothing human-robot speech interactions by using a blinking-light as subtle expression pp. 293-296
  Kotaro Funakoshi; Kazuki Kobayashi; Mikio Nakano; Seiji Yamada; Yasuhiko Kitamura; Hiroshi Tsujino
Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button pp. 297-304
  Emilia Koskinen; Topi Kaaresoja; Pauli Laitinen
Embodied conversational agents for voice-biometric interfaces pp. 305-312
  Álvaro Hernández-Trapote; Beatriz López-Mencía; David Díaz; Rubén Fernández-Pozo; Javier Caminero