
Proceedings of the 2004 International Conference on Multimodal Interfaces

Fullname: ICMI'04 Proceedings of the 6th IEEE International Conference on Multimodal Interfaces
Editors: Rajeev Sharma; Trevor Darrell; Mary Harper; Gianni Lazzari; Matthew Turk
Location: State College, Pennsylvania, USA
Dates: 2004-Oct-13 to 2004-Oct-15
Publisher: ACM
Standard No: ISBN 1-58113-995-0
Papers: 69
Pages: 358
  1. Gaze
  2. Multimodal conversational agents
  3. Architecture
  4. Multimodal applications
  5. Multimodal communication
  6. Multimodal interaction
  7. Poster session 1
  8. Poster session 2
  9. Demo session 1
  10. Demo session 2
  11. Doctoral spotlight session

Gaze

Two-way eye contact between humans and robots BIBAKFull-Text 1-8
  Yoshinori Kuno; Arihiro Sakurai; Dai Miyauchi; Akio Nakamura
Another person's eye gaze as a cue in solving programming problems BIBAKFull-Text 9-15
  Randy Stein; Susan E. Brennan
EyePrint: support of document browsing with eye gaze trace BIBAKFull-Text 16-23
  Takehiko Ohno

Multimodal conversational agents

A framework for evaluating multimodal integration by humans and a role for embodied conversational agents BIBAKFull-Text 24-31
  Dominic W. Massaro
From conversational tooltips to grounded discourse: head pose tracking in interactive dialog systems BIBAKFull-Text 32-37
  Louis-Philippe Morency; Trevor Darrell
Evaluation of spoken multimodal conversation BIBAKFull-Text 38-45
  Niels Ole Bernsen; Laila Dybkjær
Multimodal transformed social interaction BIBAKFull-Text 46-52
  Matthew Turk; Jeremy Bailenson; Andrew Beall; Jim Blascovich; Rosanna Guadagno

Architecture

Multimodal interaction in an augmented reality scenario BIBAKFull-Text 53-60
  Gunther Heidemann; Ingo Bax; Holger Bekel
The ThreadMill architecture for stream-oriented human communication analysis applications BIBAKFull-Text 61-68
  Paulo Barthelmess; Clarence A. Ellis
TouchLight: an imaging touch screen and display for gesture-based interaction BIBAKFull-Text 69-76
  Andrew D. Wilson

Multimodal applications

Walking-pad: a step-in-place locomotion interface for virtual environments BIBAKFull-Text 77-81
  Laroussi Bouguila; Florian Evequoz; Michele Courant; Beat Hirsbrunner
Multimodal detection of human interaction events in a nursing home environment BIBAKFull-Text 82-89
  Datong Chen; Robert Malkin; Jie Yang
Elvis: situated speech and gesture understanding for a robotic chandelier BIBAKFull-Text 90-96
  Joshua Juster; Deb Roy

Multimodal communication

Towards integrated microplanning of language and iconic gesture for multimodal output BIBAKFull-Text 97-104
  Stefan Kopp; Paul Tepper; Justine Cassell
Exploiting prosodic structuring of coverbal gesticulation BIBAKFull-Text 105-112
  Sanshzar Kettebekov
Visual and linguistic information in gesture classification BIBAKFull-Text 113-120
  Jacob Eisenstein; Randall Davis
Multimodal model integration for sentence unit detection BIBAKFull-Text 121-128
  Mary P. Harper; Elizabeth Shriberg

Multimodal interaction

When do we interact multimodally?: cognitive load and multimodal communication patterns BIBAKFull-Text 129-136
  Sharon Oviatt; Rachel Coulston; Rebecca Lunsford
Bimodal HCI-related affect recognition BIBAKFull-Text 137-143
  Zhihong Zeng; Jilin Tu; Ming Liu; Tong Zhang; Nicholas Rizzolo; Zhenqiu Zhang; Thomas S. Huang; Dan Roth; Stephen Levinson
Identifying the addressee in human-human-robot interactions based on head pose and speech BIBAKFull-Text 144-151
  Michael Katzenmaier; Rainer Stiefelhagen; Tanja Schultz
Articulatory features for robust visual speech recognition BIBAKFull-Text 152-158
  Kate Saenko; Trevor Darrell; James R. Glass

Poster session 1

M/ORIS: a medical/operating room interaction system BIBAKFull-Text 159-166
  Sébastien Grange; Terrence Fong; Charles Baur
Modality fusion for graphic design applications BIBAKFull-Text 167-174
  André D. Milota
Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures BIBAKFull-Text 175-182
  Hartwig Holzapfel; Kai Nickel; Rainer Stiefelhagen
AROMA: ambient awareness through olfaction in a messaging application BIBAKFull-Text 183-190
  Adam Bodnar; Richard Corbett; Dmitry Nekrasovski
The virtual haptic back for palpatory training BIBAKFull-Text 191-197
  Robert L. Williams II; Mayank Srivastava; John N. Howell; Robert R. Conatser Jr.; David C. Eland; Janet M. Burns; Anthony G. Chila
A vision-based sign language recognition system using tied-mixture density HMM BIBAKFull-Text 198-204
  Liang-Guo Zhang; Yiqiang Chen; Gaolin Fang; Xilin Chen; Wen Gao
Analysis of emotion recognition using facial expressions, speech and multimodal information BIBAKFull-Text 205-211
  Carlos Busso; Zhigang Deng; Serdar Yildirim; Murtaza Bulut; Chul Min Lee; Abe Kazemzadeh; Sungbok Lee; Ulrich Neumann; Shrikanth Narayanan
Support for input adaptability in the ICON toolkit BIBAKFull-Text 212-219
  Pierre Dragicevic; Jean-Daniel Fekete
User walkthrough of multimodal access to multidimensional databases BIBAKFull-Text 220-226
  M. P. van Esch-Bussemakers; A. H. M. Cremers
Multimodal interaction under exerted conditions in a natural field setting BIBAKFull-Text 227-234
  Sanjeev Kumar; Philip R. Cohen; Rachel Coulston
A segment-based audio-visual speech recognizer: data collection, development, and initial experiments BIBAKFull-Text 235-242
  Timothy J. Hazen; Kate Saenko; Chia-Hao La; James R. Glass

Poster session 2

A model-based approach for real-time embedded multimodal systems in military aircrafts BIBAKFull-Text 243-250
  Rémi Bastide; David Navarre; Philippe Palanque; Amélie Schyn; Pierre Dragicevic
ICARE software components for rapidly developing multimodal interfaces BIBAKFull-Text 251-258
  Jullien Bouchet; Laurence Nigay; Thierry Ganille
MacVisSTA: a system for multimodal analysis BIBAKFull-Text 259-264
  R. Travis Rose; Francis Quek; Yang Shi
Context based multimodal fusion BIBAKFull-Text 265-272
  Norbert Pfleger
Emotional Chinese talking head system BIBAKFull-Text 273-280
  Jianhua Tao; Tieniu Tan
Experiences on haptic interfaces for visually impaired young children BIBAKFull-Text 281-288
  Saija Patomäki; Roope Raisamo; Jouni Salo; Virpi Pasto; Arto Hippula
Visual touchpad: a two-handed gestural input device BIBAKFull-Text 289-296
  Shahzad Malik; Joe Laszlo
An evaluation of virtual human technology in informational kiosks BIBAKFull-Text 297-302
  Curry Guinn; Rob Hubal
Software infrastructure for multi-modal virtual environments BIBAKFull-Text 303-308
  Brian Goldiez; Glenn Martin; Jason Daly; Donald Washburn; Todd Lazarus
GroupMedia: distributed multi-modal interfaces BIBAKFull-Text 309-316
  Anmol Madan; Ron Caneel; Alex Sandy Pentland

Demo session 1

Agent and library augmented shared knowledge areas (ALASKA) BIBAKFull-Text 317-318
  Eric R. Hamilton
MULTIFACE: multimodal content adaptations for heterogeneous devices BIBAKFull-Text 319-320
  Songsak Channarukul; Susan W. McRoy; Syed S. Ali
Command and control resource performance predictor (C2RP2) BIBKFull-Text 321-322
  Joseph M. Dalton; Ali Ahmad; Kay Stanney
A multi-modal architecture for cellular phones BIBKFull-Text 323-324
  Luca Nardelli; Marco Orlandi; Daniele Falavigna
'SlidingMap': introducing and evaluating a new modality for map interaction BIBAKFull-Text 325-326
  Matthias Merdes; Jochen Häußler; Matthias Jöst
Multimodal interaction for distributed collaboration BIBAKFull-Text 327-328
  Levent Bolelli; Guoray Cai; Hongmei Wang; Bita Mortazavi; Ingmar Rauschert; Sven Fuhrmann; Rajeev Sharma; Alan MacEachren

Demo session 2

A multimodal learning interface for sketch, speak and point creation of a schedule chart BIBAKFull-Text 329-330
  Ed Kaiser; David Demirdjian; Alexander Gruenstein; Xiaoguang Li; John Niekrasz; Matt Wesson; Sanjeev Kumar
Real-time audio-visual tracking for meeting analysis BIBAKFull-Text 331-332
  David Demirdjian; Kevin Wilson; Michael Siracusa; Trevor Darrell
Collaboration in parallel worlds BIBAKFull-Text 333-334
  Ashutosh Morde; Jun Hou; S. Kicha Ganapathy; Carlos Correa; Allan Krebs; Lawrence Rabiner
Segmentation and classification of meetings using multiple information streams BIBAKFull-Text 335-336
  Paul E. Rybski; Satanjeev Banerjee; Fernando de la Torre; Carlos Vallespi; Alexander I. Rudnicky; Manuela Veloso
A maximum entropy based approach for multimodal integration BIBAKFull-Text 337-338
  Péter Pál Boda
Multimodal interface platform for geographical information systems (GeoMIP) in crisis management BIBAKFull-Text 339-340
  Pyush Agrawal; Ingmar Rauschert; Keerati Inochanon; Levent Bolelli; Sven Fuhrmann; Isaac Brewer; Guoray Cai; Alan MacEachren; Rajeev Sharma

Doctoral spotlight session

Adaptations of multimodal content in dialog systems targeting heterogeneous devices BIBAKFull-Text 341
  Songsak Channarukul
Utilizing gestures to better understand dynamic structure of human communication BIBAKFull-Text 342
  Lei Chen
Multimodal programming for dyslexic students BIBAFull-Text 343
  Dale-Marie Wilson
Gestural cues for speech understanding BIBKFull-Text 344
  Jacob Eisenstein
Using language structure for adaptive multimodal language acquisition BIBAKFull-Text 345
  Rajesh Chandrasekaran
Private speech during multimodal human-computer interaction BIBKFull-Text 346
  Rebecca Lunsford
Projection augmented models: the effect of haptic feedback on subjective and objective human factors BIBKFull-Text 347
  Emily Bennett
Multimodal interface design for multimodal meeting content retrieval BIBAKFull-Text 348
  Agnes Lisowska
Determining efficient multimodal information-interaction spaces for C2 systems BIBAKFull-Text 349
  Leah M. Reeves
Using spatial warning signals to capture a driver's visual attention BIBAKFull-Text 350
  Cristy Ho
Multimodal interfaces and applications for visually impaired children BIBAKFull-Text 351
  Saija Patomäki
Multilayer architecture in sign language recognition system BIBAKFull-Text 352-353
  Feng Jiang; Hongxun Yao; Guilin Yao
Computer vision techniques and applications in human-computer interaction BIBAKFull-Text 354
  Erno Mäkinen
Multimodal response generation in GIS BIBAFull-Text 355
  Levent Bolelli
Adaptive multimodal recognition of voluntary and involuntary gestures of people with motor disabilities BIBKFull-Text 356
  Ingmar Rauschert