
Proceedings of the 2006 International Conference on Multimodal Interfaces

Fullname: ICMI'06 Proceedings of the 8th International Conference on Multimodal Interfaces
Editors: Francis Quek; Jie Yang; Dominic Massaro; Abeer Alwan; Timothy J. Hazen
Location: Banff, Alberta, Canada
Dates: 2006-Nov-02 to 2006-Nov-04
Publisher: ACM
Standard No: ISBN 1-59593-541-X; ACM DL: Table of Contents; hcibib: ICMI06
Papers: 60
Pages: 392
  1. Keynote
  2. Poster session 1
  3. Oral session 1: speech and gesture integration
  4. Oral session 2: perception and feedback
  5. Demonstration session
  6. Special poster session on human computing
  7. Oral session 3: language understanding and content analysis
  8. Oral session 4: collaborative systems and environments
  9. Special oral session on human computing
  10. Poster session 2
  11. Oral session 5: speech and dialogue systems
  12. Oral session 6: interfaces and usability
  13. Panel

Keynote

Weight, weight, don't tell me BIBA Full-Text 1
  Ted Warburton
Movement and music: designing gestural interfaces for computer-based musical instruments BIBA Full-Text 2
  Sile O'Modhrain
Mixing virtual and actual BIBA Full-Text 3
  Herbert H. Clark

Poster session 1

Collaborative multimodal photo annotation over digital paper BIBAK Full-Text 4-11
  Paulo Barthelmess; Edward Kaiser; Xiao Huang; David McGee; Philip Cohen
MyConnector: analysis of context cues to predict human availability for communication BIBAK Full-Text 12-19
  Maria Danninger; Tobias Kluge; Rainer Stiefelhagen
Human perception of intended addressee during computer-assisted meetings BIBAK Full-Text 20-27
  Rebecca Lunsford; Sharon Oviatt
Automatic detection of group functional roles in face to face interactions BIBAK Full-Text 28-34
  Massimo Zancanaro; Bruno Lepri; Fabio Pianesi
Speaker localization for microphone array-based ASR: the effects of accuracy on overlapping speech BIBAK Full-Text 35-38
  Hari Krishna Maganti; Daniel Gatica-Perez
Automatic speech recognition for webcasts: how good is good enough and what to do when it isn't BIBAK Full-Text 39-42
  Cosmin Munteanu; Gerald Penn; Ron Baecker; Yuecheng Zhang
Cross-modal coordination of expressive strength between voice and gesture for personified media BIBAK Full-Text 43-50
  Tomoko Yonezawa; Noriko Suzuki; Shinji Abe; Kenji Mase; Kiyoshi Kogure
VirtualHuman: dialogic and affective interaction with virtual characters BIBAK Full-Text 51-58
  Norbert Reithinger; Patrick Gebhard; Markus Löckelt; Alassane Ndiaye; Norbert Pfleger; Martin Klesen
From vocal to multimodal dialogue management BIBAK Full-Text 59-67
  Miroslav Melichar; Pavel Cenek
Human-Robot dialogue for joint construction tasks BIBAK Full-Text 68-71
  Mary Ellen Foster; Tomas By; Markus Rickert; Alois Knoll
roBlocks: a robotic construction kit for mathematics and science education BIBAK Full-Text 72-75
  Eric Schweikardt; Mark D. Gross

Oral session 1: speech and gesture integration

GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications BIBAK Full-Text 76-83
  Edward Tse; Saul Greenberg; Chia Shen
Co-Adaptation of audio-visual speech and gesture classifiers BIBAK Full-Text 84-91
  C. Mario Christoudias; Kate Saenko; Louis-Philippe Morency; Trevor Darrell
Towards the integration of shape-related information in 3-D gestures and speech BIBAK Full-Text 92-99
  Timo Sowa

Oral session 2: perception and feedback

Which one is better?: information navigation techniques for spatially aware handheld displays BIBAK Full-Text 100-107
  Michael Rohs; Georg Essl
Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis BIBAK Full-Text 108-117
  Jennifer L. Burke; Matthew S. Prewett; Ashley A. Gray; Liuquin Yang; Frederick R. B. Stilson; Michael D. Coovert; Linda R. Elliot; Elizabeth Redden
Multimodal estimation of user interruptibility for smart mobile telephones BIBAK Full-Text 118-125
  Robert Malkin; Datong Chen; Jie Yang; Alex Waibel

Demonstration session

Short message dictation on Symbian series 60 mobile phones BIBAK Full-Text 126-127
  E. Karpov; I. Kiss; J. Leppänen; J. Olsen; D. Oria; S. Sivadas; J. Tian
The NIST smart data flow system II multimodal data transport infrastructure BIBAK Full-Text 128
  Antoine Fillinger; Stéphane Degré; Imad Hamchi; Vincent Stanford
A contextual multimodal integrator BIBAK Full-Text 129-130
  Péter Pál Boda
Collaborative multimodal photo annotation over digital paper BIBAK Full-Text 131-132
  Paulo Barthelmess; Edward Kaiser; Xiao Huang; David McGee; Philip Cohen
CarDialer: multi-modal in-vehicle cellphone control application BIBAK Full-Text 133-134
  Vladimír Bergl; Martin Čmejrek; Martin Fanta; Martin Labský; Ladislav Seredi; Jan Sedivý; Lubos Ures
Gender and age estimation system robust to pose variations BIBAK Full-Text 135-136
  Erina Takikawa; Koichi Kinoshita; Shihong Lao; Masato Kawade
A fast and robust 3D head pose and gaze estimation system BIBAK Full-Text 137-138
  Koichi Kinoshita; Yong Ma; Shihong Lao; Masato Kawade

Special poster session on human computing

Audio-visual emotion recognition in adult attachment interview BIBAK Full-Text 139-145
  Zhihong Zeng; Yuxiao Hu; Yun Fu; Thomas S. Huang; Glenn I. Roisman; Zhen Wen
Modeling naturalistic affective states via facial and vocal expressions recognition BIBAK Full-Text 146-154
  George Caridakis; Lori Malatesta; Loic Kessous; Noam Amir; Amaryllis Raouzaiou; Kostas Karpouzis
A 'need to know' system for group classification BIBAK Full-Text 155-161
  Wen Dong; Jonathan Gips; Alex (Sandy) Pentland
Spontaneous vs. posed facial behavior: automatic analysis of brow actions BIBAK Full-Text 162-170
  Michel F. Valstar; Maja Pantic; Zara Ambadar; Jeffrey F. Cohn
Gaze-X: adaptive affective multimodal interface for single-user office scenarios BIBAK Full-Text 171-178
  Ludo Maat; Maja Pantic
Human computing, virtual humans and artificial imperfection BIBAK Full-Text 179-184
  Z. M. Ruttkay; D. Reidsma; A. Nijholt

Oral session 3: language understanding and content analysis

Using maximum entropy (ME) model to incorporate gesture cues for SU detection BIBAK Full-Text 185-192
  Lei Chen; Mary Harper; Zhongqiang Huang
Salience modeling based on non-verbal modalities for spoken language understanding BIBAK Full-Text 193-200
  Shaolin Qu; Joyce Y. Chai
EM detection of common origin of multi-modal cues BIBAK Full-Text 201-208
  A. K. Noulas; B. J. A. Kröse

Oral session 4: collaborative systems and environments

Prototyping novel collaborative multimodal systems: simulation, data collection and analysis tools for the next decade BIBAK Full-Text 209-216
  Alexander M. Arthur; Rebecca Lunsford; Matt Wesson; Sharon Oviatt
Combining audio and video to predict helpers' focus of attention in multiparty remote collaboration on physical tasks BIBAK Full-Text 217-224
  Jiazhi Ou; Yanxin Shi; Jeffrey Wong; Susan R. Fussell; Jie Yang
The role of psychological ownership and ownership markers in collaborative working environment BIBAK Full-Text 225-232
  QianYing Wang; Alberto Battocchi; Ilenia Graziola; Fabio Pianesi; Daniel Tomasini; Massimo Zancanaro; Clifford Nass

Special oral session on human computing

Foundations of human computing: facial expression and emotion BIBAK Full-Text 233-238
  Jeffrey F. Cohn
Human computing and machine understanding of human behavior: a survey BIBAK Full-Text 239-248
  Maja Pantic; Alex Pentland; Anton Nijholt; Thomas Huang
Computing human faces for human viewers: automated animation in photographs and paintings BIBA Full-Text 249-256
  Volker Blanz

Poster session 2

Detection and application of influence rankings in small group meetings BIBAK Full-Text 257-264
  Rutger Rienks; Dong Zhang; Daniel Gatica-Perez; Wilfried Post
Tracking the multi person wandering visual focus of attention BIBAK Full-Text 265-272
  Kevin Smith; Sileye O. Ba; Daniel Gatica-Perez; Jean-Marc Odobez
Toward open-microphone engagement for multiparty interactions BIBAK Full-Text 273-280
  Rebecca Lunsford; Sharon Oviatt; Alexander M. Arthur
Tracking head pose and focus of attention with multiple far-field cameras BIBAK Full-Text 281-286
  Michael Voit; Rainer Stiefelhagen
Recognizing gaze aversion gestures in embodied conversational discourse BIBAK Full-Text 287-294
  Louis-Philippe Morency; C. Mario Christoudias; Trevor Darrell
Explorations in sound for tilting-based interfaces BIBAK Full-Text 295-301
  Matthias Rath; Michael Rohs
Haptic phonemes: basic building blocks of haptic communication BIBAK Full-Text 302-309
  Mario Enriquez; Karon MacLean; Christian Chita
Toward haptic rendering for a virtual dissection BIBAK Full-Text 310-317
  Nasim Melony Vafai; Shahram Payandeh; John Dill
Embrace system for remote counseling BIBAK Full-Text 318-325
  Osamu Morikawa; Sayuri Hashimoto; Tsunetsugu Munakata; Junzo Okunaka
Enabling multimodal communications for enhancing the ability of learning for the visually impaired BIBAK Full-Text 326-332
  Francis Quek; David McNeill; Francisco Oliveira
The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback BIBAK Full-Text 333-338
  Matthew S. Prewett; Liuquin Yang; Frederick R. B. Stilson; Ashley A. Gray; Michael D. Coovert; Jennifer Burke; Elizabeth Redden; Linda R. Elliot

Oral session 5: speech and dialogue systems

Word graph based speech recognition error correction by handwriting input BIBAK Full-Text 339-346
  Peng Liu; Frank K. Soong
Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations BIBAK Full-Text 347-356
  Edward C. Kaiser
Multimodal fusion: a new hybrid strategy for dialogue systems BIBAK Full-Text 357-363
  Pilar Manchón Portillo; Guillermo Pérez García; Gabriel Amores Carredano

Oral session 6: interfaces and usability

Evaluating usability based on multimodal information: an empirical study BIBAK Full-Text 364-371
  Tao Lin; Atsumi Imamiya
A new approach to haptic augmentation of the GUI BIBAK Full-Text 372-379
  Thomas N. Smyth; Arthur E. Kirkpatrick
HMM-based synthesis of emotional facial expressions during speech in synthetic talking heads BIBAK Full-Text 380-387
  Nadia Mana; Fabio Pianesi

Panel

Embodiment and multimodality BIBAK Full-Text 388-390
  Francis Quek