
Proceedings of the 2007 International Conference on Multimodal Interfaces

Fullname: ICMI'07 Proceedings of the 9th International Conference on Multimodal Interfaces
Editors: Kenji Mase; Dominic Massaro; Kazuya Takeda; Deb Roy; Alexandros Potamianos
Location: Nagoya, Aichi, Japan
Dates: 2007-Nov-12 to 2007-Nov-15
Publisher: ACM
Standard No: ISBN 1-59593-817-6, 978-1-59593-817-6
Papers: 60
Pages: 388
  1. Keynote
  2. Oral session 1: spontaneous behavior 1
  3. Oral session 2: spontaneous behavior 2
  4. Poster session 1
  5. Oral session 3: cross-modality
  6. Poster session 2
  7. Oral session 4: meeting applications
  8. Poster session 3
  9. Oral session 5: interactive systems 1
  10. Oral session 6: interactive systems 2
  11. Workshops

Keynote

Interfacing life: a year in the life of a research lab BIBAKFull-Text 1
  Yuri Ivanov
The great challenge of multimodal interfaces towards symbiosis of human and robots BIBAKFull-Text 2
  Norihiro Hagita
Just in time learning: implementing principles of multimodal processing and learning for education BIBAKFull-Text 3-8
  Dominic W. Massaro

Oral session 1: spontaneous behavior 1

The painful face: pain expression recognition using active appearance models BIBAKFull-Text 9-14
  Ahmed Bilal Ashraf; Simon Lucey; Jeffrey F. Cohn; Tsuhan Chen; Zara Ambadar; Ken Prkachin; Patty Solomon; Barry J. Theobald
Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain BIBAKFull-Text 15-21
  Gwen C. Littlewort; Marian Stewart Bartlett; Kang Lee
Visual inference of human emotion and behaviour BIBAKFull-Text 22-29
  Shaogang Gong; Caifeng Shan; Tao Xiang

Oral session 2: spontaneous behavior 2

Audiovisual recognition of spontaneous interest within conversations BIBAKFull-Text 30-37
  Björn Schuller; Ronald Müller; Benedikt Hörnler; Anja Höthker; Hitoshi Konosu; Gerhard Rigoll
How to distinguish posed from spontaneous smiles using geometric features BIBAKFull-Text 38-45
  Michel F. Valstar; Hatice Gunes; Maja Pantic
Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder BIBAKFull-Text 46-53
  Rana el Kaliouby; Alea Teeters

Poster session 1

Statistical segmentation and recognition of fingertip trajectories for a gesture interface BIBAKFull-Text 54-57
  Kazuhiro Morimoto; Chiyomi Miyajima; Norihide Kitaoka; Katunobu Itou; Kazuya Takeda
A tactile language for intuitive human-robot communication BIBAKFull-Text 58-65
  Andreas J. Schmid; Martin Hoffmann; Heinz Woern
Simultaneous prediction of dialog acts and address types in three-party conversations BIBAKFull-Text 66-73
  Yosuke Matsusaka; Mika Enomoto; Yasuharu Den
Developing and analyzing intuitive modes for interactive object modeling BIBAKFull-Text 74-81
  Alexander Kasper; Regine Becher; Peter Steinhaus; Rüdiger Dillmann
Extraction of important interactions in medical interviews using nonverbal information BIBAKFull-Text 82-85
  Yuichi Sawamoto; Yuichi Koyama; Yasushi Hirano; Shoji Kajita; Kenji Mase; Kimiko Katsuyama; Kazunobu Yamauchi
Towards smart meeting: enabling technologies and a real-world application BIBAKFull-Text 86-93
  Zhiwen Yu; Motoyuki Ozeki; Yohsuke Fujii; Yuichi Nakamura
Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent BIBAKFull-Text 94-101
  Jacques Terken; Irene Joris; Linda De Valk
The effect of input mode on inactivity and interaction times of multimodal systems BIBAKFull-Text 102-109
  Manolis Perakakis; Alexandros Potamianos
Positional mapping: keyboard mapping based on characters writing positions for mobile devices BIBAKFull-Text 110-117
  Ye Kyaw Thu; Yoshiyori Urano
Five-key text input using rhythmic mappings BIBAKFull-Text 118-121
  Christine Szentgyorgyi; Edward Lank
Toward content-aware multimodal tagging of personal photo collections BIBAKFull-Text 122-125
  Paulo Barthelmess; Edward Kaiser; David R. McGee
A survey of affect recognition methods: audio, visual and spontaneous expressions BIBAKFull-Text 126-133
  Zhihong Zeng; Maja Pantic; Glenn I. Roisman; Thomas S. Huang
Real-time expression cloning using appearance models BIBAKFull-Text 134-139
  Barry-John Theobald; Iain A. Matthews; Jeffrey F. Cohn; Steven M. Boker
Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking BIBAKFull-Text 140-145
  Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe
Map navigation with mobile devices: virtual versus physical movement with and without visual context BIBAKFull-Text 146-153
  Michael Rohs; Johannes Schöning; Martin Raubal; Georg Essl; Antonio Krüger

Oral session 3: cross-modality

Can you talk or only touch-talk: a VoIP-based phone feature for quick, quiet, and private communication BIBAKFull-Text 154-161
  Maria Danninger; Leila Takayama; Qianying Wang; Courtney Schultz; Jörg Beringer; Paul Hofmann; Frankie James; Clifford Nass
Designing audio and tactile crossmodal icons for mobile devices BIBAKFull-Text 162-169
  Eve Hoggan; Stephen Brewster
A study on the scalability of non-preferred hand mode manipulation BIBAKFull-Text 170-177
  Jaime Ruiz; Edward Lank

Poster session 2

VoicePen: augmenting pen input with simultaneous non-linguistic vocalization BIBAKFull-Text 178-185
  Susumu Harada; T. Scott Saponas; James A. Landay
A large-scale behavior corpus including multi-angle video data for observing infants' long-term developmental processes BIBAKFull-Text 186-192
  Shinya Kiriyama; Goh Yamamoto; Naofumi Otani; Shogo Ishikawa; Yoichi Takebayashi
The MICOLE architecture: multimodal support for inclusion of visually impaired children BIBAKFull-Text 193-200
  Thomas Pietrzak; Benoît Martin; Isabelle Pecci; Rami Saarinen; Roope Raisamo; Janne Järvi
Interfaces for musical activities and interfaces for musicians are not the same: the case for CODES, a web-based environment for cooperative music prototyping BIBAKFull-Text 201-207
  Evandro Manara Miletto; Luciano Vargas Flores; Marcelo Soares Pimenta; Jérôme Rutily; Leonardo Santagada
TotalRecall: visualization and semi-automatic annotation of very large audio-visual corpora BIBAKFull-Text 208-215
  Rony Kubat; Philip DeCamp; Brandon Roy
Extensible middleware framework for multimodal interfaces in distributed environments BIBAKFull-Text 216-219
  Vitor Fernandes; Tiago Guerreiro; Bruno Araújo; Joaquim Jorge; João Pereira
Temporal filtering of visual speech for audio-visual speech recognition in acoustically and visually challenging environments BIBAKFull-Text 220-227
  Jong-Seok Lee; Cheol Hoon Park
Reciprocal attentive communication in remote meeting with a humanoid robot BIBAKFull-Text 228-235
  Tomoyuki Morita; Kenji Mase; Yasushi Hirano; Shoji Kajita
Password management using doodles BIBAKFull-Text 236-239
  Naveen Sundar Govindarajulu; Sriganesh Madhvanath
A computational model for spatial expression resolution BIBAKFull-Text 240-246
  Andrea Corradini
Disambiguating speech commands using physical context BIBAKFull-Text 247-254
  Katherine M. Everitt; Susumu Harada; Jeff Bilmes; James A. Landay

Oral session 4: meeting applications

Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances BIBAKFull-Text 255-262
  Kazuhiro Otsuka; Hiroshi Sawada; Junji Yamato
Influencing social dynamics in meetings through a peripheral display BIBAKFull-Text 263-270
  Janienke Sturm; Olga Houben-van Herwijnen; Anke Eyck; Jacques Terken
Using the influence model to recognize functional roles in meetings BIBAKFull-Text 271-278
  Wen Dong; Bruno Lepri; Alessandro Cappelletti; Alex Sandy Pentland; Fabio Pianesi; Massimo Zancanaro

Poster session 3

User impressions of a stuffed doll robot's facing direction in animation systems BIBAKFull-Text 279-284
  Hiroko Tochigi; Kazuhiko Shinozawa; Norihiro Hagita
Speech-driven embodied entrainment character system with hand motion input in mobile environment BIBAKFull-Text 285-290
  Kouzi Osaki; Tomio Watanabe; Michiya Yamamoto
Natural multimodal dialogue systems: a configurable dialogue and presentation strategies component BIBAKFull-Text 291-298
  Meriam Horchani; Benjamin Caron; Laurence Nigay; Franck Panaget
Modeling human interaction resources to support the design of wearable multimodal systems BIBAKFull-Text 299-306
  Tobias Klug; Max Mühlhäuser
Speech-filtered bubble ray: improving target acquisition on display walls BIBAKFull-Text 307-314
  Edward Tse; Mark Hancock; Saul Greenberg
Using pen input features as indices of cognitive load BIBAKFull-Text 315-318
  Natalie Ruiz; Ronnie Taib; Yu (David) Shi; Eric Choi; Fang Chen
Automated generation of non-verbal behavior for virtual embodied characters BIBAKFull-Text 319-322
  Werner Breitfuss; Helmut Prendinger; Mitsuru Ishizuka
Detecting communication errors from visual cues during the system's conversational turn BIBAKFull-Text 323-326
  Sy Bor Wang; David Demirdjian; Trevor Darrell
Multimodal interaction analysis in a smart house BIBAKFull-Text 327-334
  Pilar Manchón; Carmen del Solar; Gabriel Amores; Guillermo Pérez
A multi-modal mobile device for learning Japanese kanji characters through mnemonic stories BIBAKFull-Text 335-338
  Norman Lin; Shoji Kajita; Kenji Mase

Oral session 5: interactive systems 1

3D augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support BIBAKFull-Text 339-345
  Kia C. Ng; Tillman Weyde; Oliver Larkin; Kerstin Neubarth; Thijs Koerselman; Bee Ong
Interest estimation based on dynamic Bayesian networks for visual attentive presentation agents BIBAKFull-Text 346-349
  Boris Brandherm; Helmut Prendinger; Mitsuru Ishizuka
On-line multi-modal speaker diarization BIBAKFull-Text 350-357
  Athanasios Noulas; Ben J. A. Kröse

Oral session 6: interactive systems 2

Presentation sensei: a presentation training system using speech and image processing BIBAKFull-Text 358-365
  Kazutaka Kurihara; Masataka Goto; Jun Ogata; Yosuke Matsusaka; Takeo Igarashi
The world of mushrooms: human-computer interaction prototype systems for ambient intelligence BIBAKFull-Text 366-373
  Yasuhiro Minami; Minako Sawaki; Kohji Dohsaka; Ryuichiro Higashinaka; Kentaro Ishizuka; Hideki Isozaki; Tatsushi Matsubayashi; Masato Miyoshi; Atsushi Nakamura; Takanobu Oba; Hiroshi Sawada; Takeshi Yamada; Eisaku Maeda
Evaluation of haptically augmented touchscreen gui elements under cognitive load BIBAKFull-Text 374-381
  Rock Leung; Karon MacLean; Martin Bue Bertelsen; Mayukh Saubhasik

Workshops

Multimodal interfaces in semantic interaction BIBAKFull-Text 382
  Naoto Iwahashi; Mikio Nakano
Workshop on tagging, mining and retrieval of human related activity information BIBAKFull-Text 383-384
  Paulo Barthelmess; Edward Kaiser
Workshop on massive datasets BIBKFull-Text 385
  Christopher R. Wren; Yuri A. Ivanov