
Proceedings of the 2005 International Conference on Multimodal Interfaces

Fullname: ICMI '05 Proceedings of the 7th International Conference on Multimodal Interfaces
Editors: Gianni Lazzari; Fabio Pianesi; James Crowley; Kenji Mase; Sharon Oviatt
Location: Trento, Italy
Dates: 2005-Oct-04 to 2005-Oct-06
Publisher: ACM
Standard No: ISBN 1-59593-028-0; hcibib: ICMI05
Papers: 47
Pages: 334
  1. Keynote
  2. Recognition and multimodal gesture patterns
  3. Posters
  4. Visual attention
  5. Keynote
  6. Semantics and dialog
  7. Recognizing communication patterns
  8. Keynote
  9. Affective interaction
  10. Posters
  11. Tangible interfaces and universal access

Keynote

The "puzzle" of sensory perception: putting together multisensory information, p. 1
  Marc O. Ernst

Recognition and multimodal gesture patterns

Integrating sketch and speech inputs using spatial information, pp. 2-9
  Bee-Wah Lee; Alvin W. Yeo
Distributed pointing for multimodal collaboration over sketched diagrams, pp. 10-17
  Paulo Barthelmess; Ed Kaiser; Xiao Huang; David Demirdjian
Contextual recognition of head gestures, pp. 18-24
  Louis-Philippe Morency; Candace Sidner; Christopher Lee; Trevor Darrell
Combining environmental cues & head gestures to interact with wearable devices, pp. 25-31
  M. Hanheide; C. Bauckhage; G. Sagerer

Posters

Automatic detection of interaction groups, pp. 32-36
  Oliver Brdiczka; Jérôme Maisonnasse; Patrick Reignier
Meeting room configuration and multiple camera calibration in meeting analysis, pp. 37-44
  Yingen Xiong; Francis Quek
A multimodal perceptual user interface for video-surveillance environments, pp. 45-52
  Giancarlo Iannizzotto; Carlo Costanzo; Francesco La Rosa; Pietro Lanzafame
Inferring body pose using speech content, pp. 53-60
  Sy Bor Wang; David Demirdjian
A joint particle filter for audio-visual speaker tracking, pp. 61-68
  Kai Nickel; Tobias Gehrig; Rainer Stiefelhagen; John McDonough
The connector: facilitating context-aware communication, pp. 69-75
  M. Danninger; G. Flaherty; K. Bernardin; H. K. Ekenel; T. Köhler; R. Malkin; R. Stiefelhagen; A. Waibel
A user interface framework for multimodal VR interactions, pp. 76-83
  Marc Erich Latoschik
Multimodal output specification / simulation platform, pp. 84-91
  Cyril Rousseau; Yacine Bellik; Frédéric Vernier
Migratory MultiModal interfaces in MultiDevice environments, pp. 92-99
  Silvia Berti; Fabio Paternò
Exploring multimodality in the laboratory and the field, pp. 100-107
  Lynne Baillie; Raimund Schatz

Visual attention

Understanding the effect of life-like interface agents through users' eye movements, pp. 108-115
  Helmut Prendinger; Chunling Ma; Jin Yingzi; Arturo Nakasone; Mitsuru Ishizuka
Analyzing and predicting focus of attention in remote collaborative tasks, pp. 116-123
  Jiazhi Ou; Lui Min Oh; Susan R. Fussell; Tal Blum; Jie Yang
Gaze-based selection of standard-size menu items, pp. 124-128
  Oleg Spakov; Darius Miniotas
Region extraction of a gaze object using the gaze point and view image sequences, pp. 129-136
  Norimichi Ukita; Tomohisa Ono; Masatsugu Kidode

Keynote

Interactive humanoids and androids as ideal interfaces for humans, p. 137
  Hiroshi Ishiguro

Semantics and dialog

Probabilistic grounding of situated speech using plan recognition and reference resolution, pp. 138-143
  Peter Gorniak; Deb Roy
Augmenting conversational dialogue by means of latent semantic googling, pp. 144-150
  Robin Senior; Roel Vertegaal
Human-style interaction with a robot for cooperative learning of scene objects, pp. 151-158
  Shuyin Li; Axel Haasch; Britta Wrede; Jannik Fritsch; Gerhard Sagerer
A look under the hood: design and development of the first SmartWeb system demonstrator, pp. 159-166
  Norbert Reithinger; Simon Bergweiler; Ralf Engel; Gerd Herzog; Norbert Pfleger; Massimo Romanelli; Daniel Sonntag

Recognizing communication patterns

Audio-visual cues distinguishing self- from system-directed speech in younger and older adults, pp. 167-174
  Rebecca Lunsford; Sharon Oviatt; Rachel Coulston
Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features, pp. 175-182
  Koen van Turnhout; Jacques Terken; Ilse Bakx; Berry Eggen
Multimodal multispeaker probabilistic tracking in meetings, pp. 183-190
  Daniel Gatica-Perez; Guillaume Lathoud; Jean-Marc Odobez; Iain McCowan
A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances, pp. 191-198
  Kazuhiro Otsuka; Yoshinao Takemae; Junji Yamato

Keynote

Socially aware computation and communication, p. 199
  Alex (Sandy) Pentland

Affective interaction

Synthetic characters as multichannel interfaces, pp. 200-207
  Elena Not; Koray Balci; Fabio Pianesi; Massimo Zancanaro
XfaceEd: authoring tool for embodied conversational agents, pp. 208-213
  Koray Balci
A first evaluation study of a database of kinetic facial expressions (DaFEx), pp. 214-221
  Alberto Battocchi; Fabio Pianesi; Dina Goren-Bar
Hapticat: exploration of affective touch, pp. 222-229
  Steve Yohanan; Mavis Chan; Jeremy Hopkins; Haibo Sun; Karon MacLean

Posters

Using observations of real designers at work to inform the development of a novel haptic modeling system, pp. 230-235
  Umberto Giraudo; Monica Bordegoni
A comparison of two methods of scaling on form perception via a haptic interface, pp. 236-243
  Mounia Ziat; Olivier Gapenne; John Stewart; Charles Lenay
An initial usability assessment for symbolic haptic rendering of music parameters, pp. 244-251
  Meghan Allen; Jennifer Gluck; Karon MacLean; Erwin Tang
Tangible user interfaces for 3D clipping plane interaction with volumetric data: a case study, pp. 252-258
  Wen Qi; Jean-Bernard Martens
A transformational approach for multimodal web user interfaces based on UsiXML, pp. 259-266
  Adrian Stanciulescu; Quentin Limbourg; Jean Vanderdonckt; Benjamin Michotte; Francisco Montero
A pattern mining method for interpretation of interaction, pp. 267-273
  Tomoyuki Morita; Yasushi Hirano; Yasuyuki Sumi; Shoji Kajita; Kenji Mase
A study of manual gesture-based selection for the PEMMI multimodal transport management interface, pp. 274-281
  Fang Chen; Eric Choi; Julien Epps; Serge Lichman; Natalie Ruiz; Yu Shi; Ronnie Taib; Mike Wu
Recognition of sign language subwords based on boosted hidden Markov models, pp. 282-287
  Liang-Guo Zhang; Xilin Chen; Chunli Wang; Yiqiang Chen; Wen Gao
Gesture-driven American sign language phraselator, pp. 288-292
  Jose L. Hernandez-Rebollar
Interactive vision to detect target objects for helper robots, pp. 293-300
  Altab Hossain; Rahmadi Kurnia; Akio Nakamura; Yoshinori Kuno

Tangible interfaces and universal access

The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids, pp. 301-308
  Melanie Baljko
Agent-based architecture for implementing multimodal learning environments for visually impaired children, pp. 309-316
  Rami Saarinen; Janne Järvi; Roope Raisamo; Jouni Salo
Perceiving ordinal data haptically under workload, pp. 317-324
  Anthony Tang; Peter McLachlan; Karen Lowe; Chalapati Rao Saka; Karon MacLean
Virtual tangible widgets: seamless universal interaction with personal sensing devices, pp. 325-332
  Eiji Tokunaga; Hiroaki Kimura; Nobuyuki Kobayashi; Tatsuo Nakajima