| Using Acoustic Landscapes for the Evaluation of Multimodal Mobile Applications | | BIBAK | Full-Text | 3-11 | |
| Wolfgang Beinhauer; Cornelia Hipp | |||
| Multimodal mobile applications are gaining momentum in the field of location based services for special purposes, among them navigation systems and tourist guides for pedestrians. In some cases, when visibility is limited or blind people require guidance, acoustic landmarks rather than visual landmarks are used for macro-navigation. Likewise, micro-navigation supported by pedestrian navigation systems must comply with the user's expectations. In this paper, we present an acoustic landscape that allows the emulation of arbitrary outdoor situations, dedicated to the evaluation of navigation systems. We present the evaluation capabilities and limitations of the laboratory as well as an example evaluation of a pedestrian navigation system that uses acoustic and haptic feedback. Keywords: Acoustic landscape; test bed; pedestrian navigation; haptic feedback | |||
| Modeling and Using Salience in Multimodal Interaction Systems | | BIBA | Full-Text | 12-18 | |
| Ali Choumane; Jacques Siroux | |||
| We are interested in input to human-machine multimodal interaction systems for geographical information search. In our context of study, the system offers the user the ability to use speech, gesture, and visual modes. The system displays a map on the screen, and the user asks the system about sites (hotels, campsites, etc.) by specifying a place to search. The referenced places are objects in the visual context such as cities, roads, and rivers. The system must determine the designated object to complete the process of understanding the user's request. In this context, we aim to improve the reference resolution process while taking ambiguous designations into account. In this paper, we focus on the modeling of the visual context. This modeling takes into account the notion of salience and its role both in designation and in the processing methods. | |||
| Exploring Multimodal Interaction in Collaborative Settings | | BIBAK | Full-Text | 19-28 | |
| Luís Duarte; Marco de Sá; Luís Carriço | |||
| This paper presents an initial study of a multimodal collaborative platform concerning user preferences and the adequacy of interaction techniques for a task. Truly collaborative interaction is missing from the majority of today's multi-user systems, as is support for impaired users. To overcome these obstacles, we provide an accessible platform for co-located collaborative environments which aims not only at improving the ways users interact within them but also at exploring novel interaction patterns. A brief study of a set of interaction techniques and tasks was conducted to assess which modalities are best suited to certain settings. We discuss the results drawn from this study, detail some related conclusions and present future work directions. Keywords: Collaboration; Multimodal Interaction; Accessibility | |||
| Towards a Multidimensional Approach for the Evaluation of Multimodal Application User Interfaces | | BIBAK | Full-Text | 29-38 | |
| José Eustáquio Rangel de Queiroz; Joseana M. Fechine; Ana E. V. Barbosa; Danilo de Sousa Ferreira | |||
| This paper focuses on a multidimensional approach for evaluating multimodal UIs based upon a set of well-known usability evaluation techniques. Each technique examines the problem from a different perspective, and the approach combines user opinion, standards conformity assessment, and user performance measurement. The presentation of the method and the analysis of the evaluation are supported by a discussion of the results obtained from applying the method to a case study involving a smartphone. The main objective of the study was to investigate the need for adapting a multidimensional evaluation approach developed for desktop applications to the context of multimodal devices and applications. Keywords: Multimodal user interface; usability evaluation techniques; multidimensional approach; multimodal application | |||
| Multimodal Shopping Lists | | BIBAK | Full-Text | 39-47 | |
| Jhilmil Jain; Riddhiman Ghosh; Mohamed Dekhil | |||
| In this paper we present a prototype for creating shopping lists using multiple input devices, such as desktops, smartphones, and landline or cell phones, and in multimodal formats such as structured text, audio, still images, video, unstructured text, and annotated media. The prototype was used by 10 participants in a two-week longitudinal study. The goal was to analyze the process that users go through in order to create and manage shopping-related projects. Based on these findings, we recommend desirable features for personal information management systems specifically designed for managing collaborative shopping lists. Keywords: shopping lists; images; text; video; audio; mobile; web | |||
| Value of Using Multimodal Data in HCI Methodologies | | BIBAK | Full-Text | 48-57 | |
| Jhilmil Jain | |||
| In this paper, I discuss two HCI methodologies, diary studies and affinity diagramming, that were carried out using multimodal data such as images, audio, video, and annotated media along with the traditional use of text. I discuss a software solution developed at HP Labs to conduct a multimodal diary study using three touch points: PCs, mobile devices, and any type of landline/cellular phone. This is followed by a discussion of how Microsoft StickySorter software was used to conduct multimodal affinity diagramming exercises. Keywords: multimodal diary study; multimodal affinity diagrams; audio; video; images; text; free-hand notes; annotated media | |||
| Effective Combination of Haptic, Auditory and Visual Information Feedback in Operation Feeling | | BIBAK | Full-Text | 58-65 | |
| Keiko Kasamatsu; Tadahiro Minami; Kazuki Izumi; Hideo Jinguh | |||
| This research designed a means of providing haptic feedback on a touch panel by producing a pen-shaped vibratory device. Task performance and the feeling of push touch with this device were examined through sensory evaluation. The psychological effects of differences in the feeling of push touch, and the acceptance of the device, were evaluated by providing haptic feedback and comparing it with visual and auditory feedback. The results show that not only the haptic sense alone but also the integration of two or more senses contributes to the feeling of operation. The improvement in the feeling of operation was confirmed to come from a clear impression and comprehensible feedback. The good operability produced by vibratory stimulation appears to be connected with sensibilities such as feelings of comfort and security. Keywords: feeling of push touch; haptic feedback; psychological effects | |||
| Multi-modal Interface in Multi-Display Environment for Multi-users | | BIBAK | Full-Text | 66-74 | |
| Yoshifumi Kitamura; Satoshi Sakurai; Tokuo Yamaguchi; Ryo Fukazawa; Yuichi Itoh; Fumio Kishino | |||
| Multi-display environments (MDEs) are becoming more and more common. By introducing multi-modal interaction techniques such as gaze, body/hand movement, and gestures, we established a sophisticated and intuitive interface for MDEs in which the displays are stitched together seamlessly and dynamically according to the users' viewpoints. Each user can interact with the multiple displays as if he or she were in front of an ordinary desktop GUI environment. Keywords: 3D user interfaces; CSCW; graphical user interfaces; perspective correction | |||
| Reliable Evaluation of Multimodal Dialogue Systems | | BIBAK | Full-Text | 75-83 | |
| Florian Metze; Ina Wechsung; Stefan Schaffer; Julia Seebode; Sebastian Möller | |||
| Usability evaluation is an indispensable issue during the development of new
interfaces and interaction paradigms [1]. Although a wide range of reliable
usability evaluation methods exists for graphical user interfaces, mature
methods are rarely available for speech-based interfaces [2]. When it comes to
multimodal interfaces, no standardized approach has so far been established. In
previous studies [3], it was shown that usability questionnaires initially
developed for unimodal systems may lead to unreliable results when applied to
multimodal systems. In the current study, we therefore used several data sources (direct and indirect measurements) to evaluate two unimodal versions and one multimodal version of an information system. We investigated to what extent the different data sources showed concordance across the three system versions. The aim was to examine whether, and under which conditions, common and widely used methods originally developed for graphical user interfaces are also appropriate for speech-based and multimodal intelligent interfaces. Keywords: usability evaluation methods; multimodal interfaces | |||
| Evaluation Proposal of a Framework for the Integration of Multimodal Interaction in 3D Worlds | | BIBA | Full-Text | 84-92 | |
| Héctor Olmedo-Rodríguez; David Escudero Mancebo; Valentín Cardeñoso-Payo | |||
| This paper describes a multimodal architecture to control 3D avatars with speech dialogs and mouse events. We briefly describe the scripting language used to specify the sequences and the components of the architecture supporting the system. Then we focus on the evaluation procedure proposed to test the system. The discussion of the evaluation results points to the future work to be accomplished. | |||
| Building a Practical Multimodal System with a Multimodal Fusion Module | | BIBAK | Full-Text | 93-102 | |
| Yong Sun; Yu (David) Shi; Fang Chen; Vera Chung | |||
| A multimodal system is a system equipped with a multimodal interface through which a user can interact with the system using his/her natural communication modalities, such as speech, gesture, eye gaze, etc. To understand a user's intention, multimodal input fusion, a critical component of a multimodal interface, integrates a user's multimodal inputs and finds their combined semantic interpretation. As powerful yet affordable input and output technologies such as speech recognition and eye tracking become available, it becomes possible to attach recognition technologies to existing applications through a multimodal input fusion module, and thus to build a practical multimodal system. This paper documents our experience of building a practical multimodal system with our multimodal input fusion technology. A pilot study was conducted with the multimodal system. By outlining observations from the pilot study, implications for multimodal interface design are laid out. Keywords: Multimodal system design; practical multimodal system; multimodal input fusion | |||
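To make the fusion step described in the entry above concrete, here is a minimal sketch (not the authors' implementation) of how a fusion module might merge a spoken command with a near-simultaneous pointing gesture. The event structure, time window, and slot names are assumptions made for illustration only.

```python
# Hypothetical time-window fusion: a spoken command with a missing referent
# ("delete this") is completed by the gesture event closest in time.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputEvent:
    modality: str          # "speech" or "gesture"
    timestamp: float       # seconds
    payload: dict          # recognizer output

FUSION_WINDOW_S = 1.5      # assumed maximum speech/gesture separation

def fuse(speech: InputEvent, gestures: list) -> Optional[dict]:
    """Return a combined semantic frame, or None if no gesture resolves the deixis."""
    frame = dict(speech.payload)            # e.g. {"action": "delete", "target": None}
    if frame.get("target") is not None:
        return frame                        # speech alone was unambiguous
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= FUSION_WINDOW_S]
    if not candidates:
        return None
    best = min(candidates, key=lambda g: abs(g.timestamp - speech.timestamp))
    frame["target"] = best.payload.get("object_id")
    return frame

if __name__ == "__main__":
    s = InputEvent("speech", 10.2, {"action": "delete", "target": None})
    g = InputEvent("gesture", 10.6, {"object_id": "file_42"})
    print(fuse(s, [g]))   # {'action': 'delete', 'target': 'file_42'}
```

Real fusion modules typically also weigh n-best recognition hypotheses and confidence scores, which are omitted from this sketch.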
| Modeling Multimodal Interaction for Performance Evaluation | | BIBAK | Full-Text | 103-112 | |
| Emile Verdurand; Gilles Coppin; Franck Poirier; Olivier Grisvard | |||
| When designing multimodal systems, the designer faces the problem of choosing modalities so as to optimize system usability. Based on modeling at a high level of abstraction, we propose an evaluation of this choice during the design phase, using a multi-criteria principle. The evaluation considers several points of view simultaneously, weighted according to the environment and the nature of the task. It relies on measures estimating the adequacy between the elements involved in the interaction. These measures arise from a fine-grained decomposition of the interaction modalities. Keywords: modeling; evaluation; modality; context adequacy; interaction language; multimodal interaction | |||
| Usability Evaluation of Multimodal Interfaces: Is the Whole the Sum of Its Parts? | | BIBA | Full-Text | 113-119 | |
| Ina Wechsung; Klaus-Peter Engelbrecht; Stefan Schaffer; Julia Seebode; Florian Metze; Sebastian Möller | |||
| Usability evaluation of multimodal systems is a complex issue. Multimodal systems provide multiple channels to communicate with the system. Thus, the single modalities as well as their combination have to be taken into account. This paper aims to investigate how ratings of single modalities relate to the ratings of their combination. Therefore a usability evaluation study was conducted testing an information system in two unimodal versions and one multimodal version. Multiple linear regression showed that for overall and global judgments ratings of the single modalities are very good predictors for the ratings of the multimodal system. For separate usability aspects (e.g. hedonic qualities) the prediction was less accurate. | |||
| An Open Source Framework for Real-Time, Incremental, Static and Dynamic Hand Gesture Learning and Recognition | | BIBAK | Full-Text | 123-130 | |
| Todd C. Alexander; Hassan S. Ahmed; Georgios C. Anagnostopoulos | |||
| Real-time learning and recognition of static and dynamic hand gestures makes it possible for computers to recognize hand gestures naturally. This creates endless possibilities in the way humans can interact with computers, allowing a human hand to be a peripheral by itself. The software framework developed provides a lightweight, robust, and practical application programming interface that furthers research in the area of human-computer interaction. Approaches that have proven successful in analogous areas such as speech and handwriting recognition were applied to static and dynamic hand gestures. A semi-supervised Fuzzy ARTMAP neural network was used for incremental online learning and recognition of static gestures, and Hidden Markov models were used for online recognition of dynamic gestures. A simple anticipatory method was implemented for determining when to update key frames, allowing the framework to work with dynamic backgrounds. Keywords: Motion detection; hand tracking; real-time gesture recognition; software framework; FAST corner detection; ART Neural Networks | |||
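As a rough illustration of the dynamic-gesture half of such a pipeline, the sketch below classifies a quantized observation sequence by scoring it against one Hidden Markov Model per gesture with the scaled forward algorithm. The toy models and symbol alphabet are invented for the example and are not taken from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete observation sequence.
    pi: initial state probabilities (N,), A: transitions (N, N), B: emissions (N, M)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

def classify_gesture(obs, models):
    """Return the gesture name whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

if __name__ == "__main__":
    # Two toy 2-state, 3-symbol gesture models with made-up parameters.
    swipe = (np.array([0.9, 0.1]),
             np.array([[0.7, 0.3],
                       [0.0, 1.0]]),
             np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.1, 0.8]]))
    circle = (np.array([0.5, 0.5]),
              np.array([[0.5, 0.5],
                        [0.5, 0.5]]),
              np.array([[0.1, 0.8, 0.1],
                        [0.1, 0.8, 0.1]]))
    print(classify_gesture([0, 0, 2, 2], {"swipe": swipe, "circle": circle}))  # -> swipe
```

In practice the observation symbols would come from quantized hand-motion features and the model parameters from training data rather than being fixed by hand.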
| Gesture-Controlled User Input to Complete Questionnaires on Wrist-Worn Watches | | BIBAK | Full-Text | 131-140 | |
| Oliver Amft; Roman Amstutz; Asim Smailagic; Daniel P. Siewiorek; Gerhard Tröster | |||
| The aim of this work was to investigate arm gestures as an alternative input modality for wrist-worn watches. In particular, we implemented a gesture recognition system and questionnaire interface in a watch prototype. We analyzed the wearers' effort and learning performance with the gesture interface and compared their performance to that of a classical button-based solution. Moreover, we evaluated the system's performance in spotting wearer gestures and its responsiveness. Our wearer study showed that the watch achieved a recognition accuracy of more than 90%. Completion times showed a clear decrease from 3 min in the first repetition to 1 min 49 sec in the last one. Similarly, the variance of completion times between wearers decreased over the repetitions. Completion time using the button interface was 36 sec. Ratings of physical and concentration effort decreased during the study. Our results confirm that the wearer's training state is reflected in completion time rather than in recognition performance. Keywords: gesture spotting; activity recognition; eWatch; user evaluation | |||
| UbiGesture: Customizing and Profiling Hand Gestures in Ubiquitous Environment | | BIBAK | Full-Text | 141-150 | |
| Ayman Atia; Shin Takahashi; Kazuo Misue; Jiro Tanaka | |||
| One of the main challenges of interaction in a ubiquitous environment is the use of hand gestures for interacting with day-to-day applications. This interaction may be negatively affected by changes in the user's position, the interaction device, or the level of social acceptance of a specific hand gesture. We present UbiGesture, an architecture for developers and users who frequently change locations while interacting in ubiquitous environments. The architecture enables applications to be operated using hand gestures. Ordinary users can customize their own hand gestures for interacting with computers in context-aware ubiquitous environments. UbiGesture is based on combining user preferences, location, input/output devices, applications, and hand gestures into one profile. A prototype application implementing UbiGesture is presented, followed by a preliminary subjective and objective evaluation of UbiGesture during interaction in different locations with different hand gesture profiles. Keywords: Ubiquitous environment; Hand gesture profiles; Context aware services | |||
| The Gestural Input System for Living Room Digital Devices | | BIBAK | Full-Text | 151-160 | |
| Wen-Shan Chang; Fong-Gong Wu | |||
| The focus of this research is to help users learn gesture symbols through semantic perception. With semantic perception as the basis, qualitative and quantitative analyses are conducted on digital homes, as exemplified by gesture symbols in the 3D space of the living room, in order to deduce design principles for different perceptive semantics. Samples for the case studies are constructed in accordance with these design principles, and inspections and assessments are conducted to demonstrate the accuracy and feasibility of the principles. The findings of this research serve as a reference for the design of interfaces and gesture recognition systems in such surroundings, and as a design standard for relevant future designs involving semantic perception and gesture symbols. Keywords: home audiovisual multimedia; gesture recognition; gesture symbol; cognition | |||
| Touchless Interaction-Novel Chances and Challenges | | BIBAK | Full-Text | 161-169 | |
| René de la Barré; Paul Chojecki; Ulrich Leiner; Lothar Mühlbach; Detlef Ruschin | |||
| Touchless or empty-handed gestural input has received considerable attention in recent years because of benefits such as removing the burden of physical contact with an interactive system and making the interaction pleasurable. What is often overlooked is that those particular forms of touchless interaction which employ genuine gestures -- defined as movements that have a meaning -- are in danger of suffering from the same drawbacks as command-based interfaces, which have been widely abandoned in favor of direct manipulation interfaces. Touchless direct manipulation, however, is about to reach maturity in certain application fields. In this paper we point out why and under which conditions this is going to happen, and how we are working to optimize such interfaces through user tests. Keywords: interaction; touchless; direct manipulation; gestures; hand tracking; user experience | |||
| Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions | | BIBA | Full-Text | 170-177 | |
| Jürgen Gast; Alexander Bannat; Tobias Rehrl; Gerhard Rigoll; Frank Wallhoff; Christoph Mayer; Bernd Radig | |||
| This paper presents a system for an additional input modality in a multimodal human-machine interaction scenario. In addition to other common input modalities, e.g. speech, we extract head gestures using image interpretation techniques based on machine learning algorithms, providing a nonverbal and familiar way of interacting with the system. Our experimental evaluation proves that the presented approach works in real time and reliably. | |||
| Interactive Demonstration of Pointing Gestures for Virtual Trainers | | BIBAK | Full-Text | 178-187 | |
| Yazhou Huang; Marcelo Kallmann | |||
| While interactive virtual humans are becoming widely used in education, training and the delivery of instructions, building the animations required for such interactive characters in a given scenario remains complex and time-consuming work. One of the key problems is that most systems controlling virtual humans are based mainly on pre-defined animations which have to be rebuilt by skilled animators specifically for each scenario. To improve this situation, this paper proposes a framework based on the direct demonstration of motions via a simplified and easy-to-wear set of motion capture sensors. The proposed system integrates motion segmentation, clustering and interactive motion blending in order to enable a seamless interface for programming motions by demonstration. Keywords: virtual humans; motion capture; interactive demonstration | |||
| Anthropometric Facial Emotion Recognition | | BIBAK | Full-Text | 188-197 | |
| Julia Jarkiewicz; Rafal Kocielnik; Krzysztof Marasek | |||
| The aim of this project is the detection, analysis and recognition of facial features. The system operates on grayscale images. For the analysis, a Haar-like face detector was used along with an anthropometric face model and a hybrid feature detection approach. The system localizes 17 characteristic points of the analyzed face and, based on their displacements, certain emotions can be recognized automatically. The system was tested on the publicly available Japanese Female Facial Expression (JAFFE) database with ca. 77% accuracy for 7 basic emotions using various classifiers. Thanks to its open structure, the system can cooperate well with any HCI system. Keywords: emotion recognition; facial expression detection; affective computing | |||
| Real-Time Face Tracking and Recognition Based on Particle Filtering and AdaBoosting Techniques | | BIBAK | Full-Text | 198-207 | |
| Chin-Shyurng Fahn; Ming-Jui Kuo; Kai-Yi Wang | |||
| In this paper, a real-time face tracking and recognition system based on particle filtering and AdaBoosting techniques is presented. For face tracking, we develop an effective particle filter to locate faces in image sequences. Because we also consider the hair color information of the human head, the particle filter keeps tracking even when the person's back is turned to the camera. We further adopt both motion and color cues as features to keep the influence of the background as low as possible. A new classification architecture trained with an AdaBoost algorithm is also proposed to achieve rapid face recognition. Compared to other machine learning schemes, the AdaBoost algorithm can update training samples to deal with a wide range of circumstances without incurring much computational cost. Experimental results reveal that the face tracking rate is more than 97% in general situations and 89% when the face suffers from temporary occlusion. As for face recognition, the accuracy rate is more than 90%; in addition, the execution efficiency of the system is very satisfactory, reaching at least 20 frames per second. Keywords: face tracking; face recognition; particle filter; AdaBoost algorithm | |||
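For context on the tracking scheme mentioned above, a generic particle-filter loop (predict, weight, resample) can be sketched as follows. The random-walk motion model and the Gaussian color likelihood are placeholders standing in for the authors' actual motion and color cues.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
    """One predict-weight-resample cycle for 2D position tracking.
    particles: (N, 2) candidate face centers, weights: (N,) normalized weights,
    likelihood: function mapping an (x, y) hypothesis to an observation score."""
    # 1. Predict: random-walk motion model (placeholder for a learned dynamic model).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Weight: score each hypothesis against the current frame (e.g. skin/hair color cues).
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # 3. Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    estimate = particles.mean(axis=0)          # tracked face position
    return particles, weights, estimate

if __name__ == "__main__":
    # Toy likelihood: a face-like color blob centered at (120, 80).
    target = np.array([120.0, 80.0])
    like = lambda p: np.exp(-np.sum((p - target) ** 2) / (2 * 20.0 ** 2))
    pts = rng.uniform(0, 200, (300, 2))
    w = np.full(300, 1 / 300)
    for _ in range(10):
        pts, w, est = particle_filter_step(pts, w, like)
    print(est)   # converges near [120, 80]
```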
| A Real-Time Hand Interaction System for Image Sensor Based Interface | | BIBAK | Full-Text | 208-215 | |
| SeIn Lee; Jonghoon Seo; Soon-Bum Lim; Yoon-Chul Choy; Tack-Don Han | |||
| Diverse sensors are available in ubiquitous computing, where resources are inherent in the environment. Among them, the image sensor acquires the necessary data with a camera and without any extra devices, which distinguishes it from other sensors. It can provide additional services and/or applications by using the location and ID of a code in the real-time image. Focusing on this, an intuitive interface operating method is suggested for ubiquitous computing environments that contain many image codes. A GUI using the image sensor was designed that supports real-time interactive operation between the user and the GUI without any additional button or device. This interface method recognizes the user's hand images in real time by learning them at a starting point. The method sets an interaction point and operates the GUI through previously defined hand gestures. We expect this study can be applied to augmented reality and to real-time interfaces driven by the user's hand. Keywords: HCI; Hand interface; Hand interaction; Augmented Reality | |||
| Gesture-Based Interface for Connection and Control of Multi-device in a Tabletop Display Environment | | BIBAK | Full-Text | 216-225 | |
| Hyunglae Lee; Heeseok Jeong; Joong-Ho Lee; Ki-Won Yeom; Ji-Hyung Park | |||
| In this paper, we propose a gesture-based interface for connection and
control of multiple devices in a ubiquitous computing environment. With simple
selection and pointing gestures, users can easily control connections between
multiple devices as well as manage information or data between them. Based on a
robust gesture recognition algorithm and a virtual network computing
technology, we implemented this concept in an intelligent meeting room
consisting of a tabletop, interactive wall displays, and other popular devices
such as laptops and mobile devices. We also performed a preliminary user study
in this environment, and the results show the usability of the suggested
interface. Keywords: Gesture-based interface; multi-device connection | |||
| Shadow Awareness: Bodily Expression Supporting System with Use of Artificial Shadow | | BIBAK | Full-Text | 226-235 | |
| Yoshiyuki Miwa; Shiroh Itai; Takabumi Watanabe; Koji Iida; Hiroko Nishi | |||
| As a means of supporting self-created bodily expression, a shadow media system was developed in which the shadow is artificially deformed into diverse forms and the deformed shadow is displayed on a screen without being separated from the user's own body. With this system, it was found that awareness was established inside the body via the deformed shadows; the system can therefore support the improvised creation of images. Furthermore, the subject-object inseparable interaction through shadows encouraged the co-creation of images with others. Accordingly, this media system was shown to be promising and effective for supporting co-creative expression, and it shows potential applicability as a creative archive technology for connecting with people of the past. Keywords: Bodily Expression; image; awareness; co-creation; shadow media | |||
| An Approach to Glove-Based Gesture Recognition | | BIBA | Full-Text | 236-245 | |
| Farid Parvini; Dennis McLeod; Cyrus Shahabi; Bahareh Navai; Baharak Zali; Shahram Ghandeharizadeh | |||
| Nowadays, computer interaction is mostly done using dedicated devices. But gestures are an easy means of expression between humans that could be used to communicate with computers in a more natural manner. Most current research on hand gesture recognition for human-computer interaction relies on either neural networks or Hidden Markov Models (HMMs). In this paper, we compare different approaches for gesture recognition and highlight the major advantages of each. We show that gesture recognition based on the bio-mechanical characteristics of the hand provides an intuitive approach with higher accuracy and lower complexity. | |||
| cfHMI: A Novel Contact-Free Human-Machine Interface | | BIBA | Full-Text | 246-254 | |
| Tobias Rehrl; Alexander Bannat; Jürgen Gast; Gerhard Rigoll; Frank Wallhoff | |||
| In this paper we present our approach for a new contact-free Human-Machine Interface (cfHMI). This cfHMI is designed for controlling applications -- instruction presentation, robot control -- in the so-called "Cognitive Factory Scenario", introduced in [1]. However, the interface can be applied in other environments and areas of application as well. Due to its generic approach, this low-cost contact-free interface can be easily adapted to several independent applications, featuring individual menu-structures, etc. In addition, the modular software architecture facilitates the upgrades and improvements of the different software modules embedded in the cfHMI. | |||
| Fly! Little Me: Localization of Body-Image within Reduced-Self | | BIBAK | Full-Text | 255-260 | |
| Tatsuya Saito; Masahiko Sato | |||
| In conventional interface design, manipulated objects are visually represented and the actor of manipulation is the user's physical body. Although there is bodily contact between the user and the virtual objects, these virtual objects are detached from the user's physical body and are usually operated as target objects through an interface device. We propose a new type of embodied interaction based on visual-somatosensory integration that evokes the localization of a user's body-image within a visual object on a screen. The major difference between conventional interaction and the proposed framework is whether the center of the user's body-image is localized within the screen or outside of it. When the user's body-image is outside of the screen, the manipulation of screen objects is transitive action, or target operation under a subject-object structure. In contrast, when the user's body-image is localized within a screen object, the operation of the object becomes intransitive action and the user operates the screen object as if moving a part of his or her own body. Although object manipulation as intransitive action has a history as long as interface design itself, it has not yet been exposed to thorough inspection. To qualitatively analyze intransitive manipulation and the effect of body-image localization, which we consider a core factor, we implemented an interactive system based on several proposed design principles and exhibited the system at a gallery open to the public in order to collect qualitative evidence of the effect. Keywords: Embodied Interface; Spatialized Display; Localization of Body-Image; Visual-somatosensory Integration. Categories and Subject Descriptors: H.5.2 [User Interfaces]: Evaluation/methodology; Interaction styles; User-centered design. General terms: Performance; Design; Experimentation; Security; Human Factors; Theory | |||
| New Interaction Concepts by Using the Wii Remote | | BIBAK | Full-Text | 261-270 | |
| Michael Schreiber; Margeritta von Wilamowitz-Moellendorff; Ralph Bruder | |||
| The interaction concept of the video game console Nintendo Wii has created a furor in the interface design community due to its intuitive interface: the Wii Remote. At the Institute of Ergonomics (IAD) of the Darmstadt University of Technology, several projects investigated the potential of interaction concepts with the Wii Remote, especially in non-gaming contexts. In a first study, an interactive whiteboard according to [1] was recreated, modified and evaluated. In this case, the Wii Remote is not the human-machine interface but the sensor that detects an infrared-emitting (IR) pencil. A survey with 15 subjects was conducted in which different IR pencils were evaluated. In a second study, the potential of gesture-based human-computer interaction with the Wii Remote according to [2] was evaluated using a multimedia software application. In a survey with 30 subjects, the Wii gesture interaction was compared to a standard remote control. Keywords: Wii Remote; Wii; gesture based interaction; interactive whiteboard | |||
| Wireless Data Glove for Gesture-Based Robotic Control | | BIBAK | Full-Text | 271-280 | |
| Nghia Xuan Tran; Hoa Van Phan; Vince V. Dinh; Jeffrey Ellen; Bryan Berg; Jason Lum; Eldridge Alcantara; Mike Bruch; Marion G. Ceruti; Charles Kao; Daniel Garcia; Sunny Fugate; LorRaine Duffy | |||
| A wireless data glove was developed to control a Talon robot. Sensors mounted on the glove send signals to a processing unit, worn on the user's forearm, that translates hand postures into data. An RF transceiver, also mounted on the user, transmits the encoded signals representing the hand postures and dynamic gestures to the robot via an RF link. Commands to control the robot's position, camera, claw, and arm include "activate mobility," "hold on," "point camera," and "grab object." Keywords: Communications; conveying intentions; distributed environment; gestures; human-computer interactions; human-robot interactions; military battlefield applications; non-verbal interface; robot; wireless data glove; wireless motion sensing | |||
| PALMbit-Silhouette: A User Interface by Superimposing Palm-Silhouette to Access Wall Displays | | BIBAK | Full-Text | 281-290 | |
| Goshiro Yamamoto; Huichuan Xu; Kazuto Ikeda; Kosuke Sato | |||
| In this paper, we propose a new type of user interface using a palm silhouette, which realizes intuitive interaction with ubiquitous displays that are located far from the user or where direct operation is hindered. In augmented reality based user interfaces, besides interfaces that allow users to operate virtual objects directly, interfaces that let users interact remotely from a short distance are necessary, especially in environments where multiple displays are shared in public areas. We propose two gesture-related functions using the palm-silhouette interface: a grasp-and-release operation and a wrist-rotating operation, which represent "selecting" and "adjustment" respectively. The usability of the proposed palm-silhouette interface was evaluated in an experiment comparing it with a conventional arrowhead pointer. We also derived a design rationale for realizing a rotary switch operation by utilizing a pseudo-haptic visual cue. Keywords: user interface; shadow; pseudo-haptic | |||
| Potential Limitations of Multi-touch Gesture Vocabulary: Differentiation, Adoption, Fatigue | | BIBAK | Full-Text | 291-300 | |
| Wendy Yee | |||
| The majority of gestural interactions in consumer electronics currently
represent "direct" gestures related to the direct manipulation of onscreen
objects. As gestural interactions extend beyond consumer electronics and become
more prevalent in productivity applications, these gestures will need to
address more abstract or "indirect" actions. This paper addresses some of the
usability concerns associated with indirect gestures and their potential
limitations for the typical end-user. In addition, it outlines a number of
considerations for the integration of abstract gestures with productivity
workspaces. Keywords: Interaction design; multi-touch; software design; gestures | |||
| A Multimodal Human-Robot-Interaction Scenario: Working Together with an Industrial Robot | | BIBA | Full-Text | 303-311 | |
| Alexander Bannat; Jürgen Gast; Tobias Rehrl; Wolfgang Rösel; Gerhard Rigoll; Frank Wallhoff | |||
| In this paper, we present a novel approach to multimodal interaction between humans and industrial robots. The application scenario is situated in a factory, where a human worker is supported by a robot to accomplish a given hybrid assembly scenario that covers manual and automated assembly steps. The robot acts as an assistant as well as a fully autonomous assembly unit. To interact with the presented system, the human can give commands via three different input modalities (speech, gaze and so-called soft buttons). | |||
| Robotic Home Assistant Care-O-bot® 3 Product Vision and Innovation Platform | | BIBAK | Full-Text | 312-320 | |
| Birgit Graf; Christopher Parlitz; Martin Hägele | |||
| The development of a mobile robot to assist people in their homes is a long-term goal of Fraunhofer IPA. In pursuit of this goal, three generations of the robotic home assistant "Care-O-bot®" have been developed so far. As a vision of a future household product, Care-O-bot® 3 is equipped with the latest industrial state-of-the-art hardware components. It offers all modern multimedia and interaction equipment as well as highly advanced sensors and control. It is able to navigate among humans, detect and grasp objects, and pass them safely to human users using its tray. Care-O-bot® 3 has been presented to the public on several occasions, where it distributed drinks to the visitors of trade fairs and events. Keywords: robotic home assistant; Care-O-bot; product vision; navigation; manipulation; object learning and detection; safe human-robot interaction; fetch and carry tasks | |||
| Designing Emotional and Interactive Behaviors for an Entertainment Robot | | BIBAK | Full-Text | 321-330 | |
| Yo Chan Kim; Hyuk Tae Kwon; Wan Chul Yoon; Jong Cheol Kim | |||
| In the process of developing an entertainment robot, Mon-e, we represented the robot's emotional and interactive behaviors in the form of scripts. A unified model was established to manage all the different scripts. We designed the personality profile to possess two dimensions of criteria for script selection. An emotion variable was introduced to create a variety of robot behaviors according to the context. Reinforcing mechanisms for the personality profile and the emotion variable were developed. Keywords: Human-Robot Interaction; Script-based Robot Behavior; Robot Personality; Robot Emotion; Service Robots | |||
| Emotions and Messages in Simple Robot Gestures | | BIBAK | Full-Text | 331-340 | |
| Jamy Li; Mark H. Chignell; Sachi Mizobuchi; Michiaki Yasumura | |||
| Understanding how people interpret robot gestures will aid the design of effective social robots. We examine the generation and interpretation of gestures in a simple social robot capable of head and arm movement in two studies. In the first study, four participants created gestures with corresponding messages and emotions based on 12 different scenarios provided to them. The resulting gestures were then shown in the second study to 12 participants, who judged which emotions and messages were being conveyed. Knowledge (present or absent) of the motivating scenario (context) for each gesture was manipulated as an experimental factor. Context was found to assist message understanding while providing only modest assistance to emotion recognition. While better than chance, both emotion recognition (22%) and message understanding (40%) accuracies were relatively low. The results obtained are discussed in terms of implied guidelines for designing gestures for social robots. Keywords: Human-Robot Interaction; Gestures; Social Robots; Emotion | |||
| Life with a Robot Companion: Video Analysis of 16-Days of Interaction with a Home Robot in a "Ubiquitous Home" Environment | | BIBAK | Full-Text | 341-350 | |
| Naoko Matsumoto; Hirotada Ueda; Tatsuya Yamazaki; Hajime Murai | |||
| This paper examines human-robot interaction within a home environment over the course of a 16-day experiment. Its purpose is to describe the human cognitive activities involving a symbiotic robot with a dialogue interface in domestic settings. The participants were a couple in their 60s. Analysis of the video data indicates that (a) the participants continuously used the robot's dialogue interface and persevered with it despite the performance of the voice recognition system, (b) the participants initially tested the robot's function as a tool and gradually came to regard the robot as a companion, and (c) fixed instruction expressions increased the use of the dialogue interface for the light controls, while the participants employed both the dialogue interface and the remote controller for the TV and light controls. Keywords: home robot; ubiquitous environment; dialogue interface; life experiment; video analysis | |||
| Impression Evaluation of a Conversational Robot Playing RAKUGO | | BIBA | Full-Text | 351-360 | |
| Akihiro Ogino; Noritaka Moriya; Park Seung-Joon; Hirotada Ueda | |||
| The purpose of this paper is to determine a strength and timing of robot gestures that people find comfortable. The paper describes the evaluation of individual impressions of RAKUGO, the traditional Japanese comic storytelling, as performed by a conversational robot. It shows that robot gestures resembling those of a professional RAKUGO performer give people smarter and more mature impressions than exaggerated gestures. In order to detect tendencies in impressions based on individual personality, the paper also describes the relation between an individual's impression of the robot's gestures and Transactional Analysis, a framework from psychology. We find a correlation between individuals' impressions of the gestures and their personality. | |||
| Performance Assessment of Swarm Robots | | BIBAK | Full-Text | 361-367 | |
| Ercan Öztemel; Cemalettin Kubat; Özer Uygun; Tuba Canvar; Tulay Korkusuz; Vinesh Raja; Anthony Soroka | |||
| Swarm intelligence is the emergent collective intelligence of groups of simple autonomous agents, i.e., autonomous subsystems that interact with their environment. This paper presents a performance evaluation system for swarm robots. The proposed model includes a set of performance assessment criteria and a performance assessment and monitoring system. The approach is developed for swarm robots designed for a health system responsible for delivery, guidance, monitoring, and recognition, as part of a project in the European 6th Framework Research Programme carried out by several European nations. Keywords: swarm robots; performance assessment | |||
| A Robotic Introducer Agent Based on Adaptive Embodied Entrainment Control | | BIBAK | Full-Text | 368-376 | |
| Mutsuo Sano; Kenzaburo Miyawaki; Ryohei Sasama; Tomoharu Yamaguchi; Keiji Yamada | |||
| The need for connections between individuals in real space is increasing, and robots are expected to play a role in fostering them. Here, we pursue research on robot design that expands and promotes group conversation. As a first stage, we propose a technique for smoothly advancing the conversation of a pair of people meeting for the first time. In particular, many studies have confirmed that nonverbal interactions are more important than verbal interactions for connecting individuals. We newly define communication activity based on embodied entrainment and propose a method to control the robot's behavior dynamically according to the state of communication. The active control method uses interaction timing learning that depends on nonverbal communication channels. Our mechanism selects an appropriate embodied robotic behavior by changing the communication strategy based on the state transitions of an introduction scene, and increases the communication activity measured from sensing data. The action timing is learned and controlled by a decision tree. As a result, the robotic agent could control communication situations in a manner similar to a human. In a questionnaire survey of 20 university students, 82% answered "yes" to the question of whether communication had increased, and 85% answered "yes" to the question of whether they could feel familiarity with the first-meeting introducer robot. The effectiveness of the proposed technique was thus verified. Finally, based on these results, the possibility of expanding the circle of communication to a group of N persons is discussed. Keywords: Embodied Entrainment; Nonverbal Communication; Robotic Introducer Agent; Group communication; Human Action Learning | |||
| Robot Helps Teachers for Education of the C Language Beginners | | BIBAK | Full-Text | 377-384 | |
| Haruaki Tamada; Akihiro Ogino; Hirotada Ueda | |||
| In this paper, we propose a learning support framework for teachers and learners that satisfies the following three requirements: (A) the system can be used without the users being conscious of it, (B) it supports the teacher in educating learners, and (C) it supports learners in solving assignments. Based on the proposed framework, we implemented a system using a robot that supports learners. Keywords: Robot; the C language; human/robot interaction | |||
| An Interactive Robot Butler | | BIBAK | Full-Text | 385-394 | |
| Yeow Kee Tan; Dilip Kumar Limbu; Ridong Jiang; Liyuan Li; Kah Eng Hoe; Xinguo Yu; Li Dong; Chern Yuen Wong; Haizhou Li | |||
| This paper describes a novel robotic butler developed by a multi-disciplinary team of researchers. The robotic butler is capable of detecting and tracking humans, recognizing hand gestures, serving beverages, conducting dialogue with guests about their interests and preferences, and providing specific information on the facilities in the Fusionopolis building and the various technologies used by the robot. The robot employs an event-driven dialogue management system (DMS) architecture, speech recognition, ultra-wideband, vision understanding and radio frequency identification. All of the components and agents integrated in the DMS architecture are modular and can be re-used by other applications. In this paper, we first describe the design concept and the architecture of the robotic butler. Secondly, we describe in detail the workings of the speech and vision technology, as this paper mainly focuses on the human-robot interaction aspects of the social robot. Lastly, the paper highlights some key challenges faced during the implementation of the speech and vision technology in the robot. Keywords: speech recognition; gesture recognition; robot and dialog management system | |||
| A Study on Fundamental Information Transmission Characteristics of an Air-Jet Driven Tactile Display | | BIBAK | Full-Text | 397-406 | |
| Takafumi Asao; Hiroaki Hayashi; Masayoshi Hayashi; Kentaro Kotani; Ken Horii | |||
| There are many people with impaired vision as well as impaired hearing. Tactile displays can be useful to such people for communicating by means of characters and shapes. Many devices for tactile displays, such as oscillators and electrocutaneous stimulators, have been developed. However, oscillators have two drawbacks: physical stress tends to build up in the actuators because of long-term exposure to oscillations, and they may transmit erroneous information because of unstable contact between the magnetic pins and the skin. Moreover, electrocutaneous stimulators cause discomfort to the user. In this study, we have developed a tactile information presentation technique that uses air-jet stimulation and tactile phantom sensations induced by a complex combination of tactile perceptions. The tactile display can transmit information to the skin without physical contact and is free from the restriction of pitch size. In this paper, we examine its fundamental information transmission characteristics. Keywords: Tactile Display; Air Jet Stimulation; Phantom Sensation | |||
| VersaPatch: A Low Cost 2.5D Capacitive Touch Sensor | | BIBAK | Full-Text | 407-416 | |
| Ray Bittner; Mike Sinclair | |||
| Capacitive input devices are becoming increasingly prevalent in consumer
devices. This paper presents the hardware and algorithms for the low cost
implementation of a capacitive 2.5D input device. The low cost and low power
consumption of the device make it suitable for use in portable devices such as
cellular phones. The electrical properties used are such that the pre-existing
snap dome technologies in such devices can be used as capacitive sensing
elements, further reducing the cost and size impact of the capacitive sensor. Keywords: 2D Pointing; Capacitive Sensing; Interface; Low Cost; Low Power | |||
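As background on how sensing elements like those in the entry above can yield a 2.5D report (x, y plus a pressure-like signal), one common approach is capacitance-weighted centroid interpolation. The sketch below only illustrates that general idea; the pad layout, thresholds, and units are hypothetical and are not the VersaPatch firmware.

```python
# Hypothetical centroid interpolation over a small grid of capacitive pads.
# pad_xy: physical center of each pad (mm); readings: baseline-subtracted counts.
def touch_centroid(pad_xy, readings, threshold=10):
    """Return (x, y, pressure_proxy) or None if no pad exceeds the touch threshold."""
    active = [(xy, r) for xy, r in zip(pad_xy, readings) if r > threshold]
    if not active:
        return None
    total = sum(r for _, r in active)
    x = sum(xy[0] * r for xy, r in active) / total
    y = sum(xy[1] * r for xy, r in active) / total
    return x, y, total   # total signal acts as a crude "z" estimate (the extra 0.5D)

if __name__ == "__main__":
    pads = [(0, 0), (5, 0), (10, 0), (0, 5), (5, 5), (10, 5)]
    counts = [2, 40, 90, 1, 25, 60]
    print(touch_centroid(pads, counts))   # position biased toward the right-hand pads
```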
| From Implicit to Touching Interaction by Identification Technologies: Towards Tagging Context | | BIBAK | Full-Text | 417-425 | |
| José Bravo; Ramón Hervás; Carmen Fuentes; Vladimir Villarreal; Gabriel Chavira; Salvador W. Nava; Jesús Fontecha; Gregorio Casero; Rocío Peña; Marcos Vergara | |||
| Intelligent environments need interactions capable of detecting users and providing them with good-quality contextual information. With this in mind, we adapt technologies for identifying and locating people in order to support their needs. However, it is necessary to analyze some important features in order to compare implicit interaction, which is closer to users and more natural, with a new interaction by contact. In this paper we present the adaptability of two technologies: Radio Frequency Identification (RFID) and Near Field Communication (NFC). With the first, the interaction is more appropriate within intelligent environments; with the second, the same RFID technology placed in mobile phones achieves some advantages that we consider an intermediate solution until the standardization of sensors arrives. Keywords: Ambient Intelligence; NFC; Touching Interaction; Location-based Services | |||
| VTouch: A Vision-Based Dual Finger Touched Inputs for Large Displays | | BIBAK | Full-Text | 426-434 | |
| Ching-Han Chen; Cun-Xian Nian | |||
| In this work, we present a complete architecture that implements a dual-touch function for large-sized displays. The architecture includes the hardware device and the software algorithm. The system has lower cost and smaller size than previous systems for large-sized displays, and the proposed approach eliminates the requirement for special hardware. The system can be plugged into a normal display to equip it with the dual-touch function. At the end of this paper, we present an experiment that demonstrates the system and shows its usability. Keywords: Human-computer interaction; dual touch screen; multi-touch screen; user interface | |||
| Overview of Meta-analyses Investigating Vibrotactile versus Visual Display Options | | BIBAK | Full-Text | 435-443 | |
| Linda R. Elliott; Michael D. Coovert; Elizabeth S. Redden | |||
| The literature is replete with studies that investigated the effectiveness
of vibrotactile displays; however, individual studies in this area often yield
discrepant findings that are difficult to synthesize. In this paper, we provide
an overview of a comprehensive review of the literature and meta-analyses that
organized studies to enable comparisons of visual and tactile presentations of
information, to yield information useful to researchers and designers. Over six
hundred studies were initially reviewed and coded along numerous criteria that
determined appropriateness for meta-analysis categories. Comparisons were made
between conditions that compared (a) adding a tactile cue to a baseline
condition, (b) a visual cue with a multimodal (visual and tactile)
presentation, and (c) a visual cue with a tactile cue. In addition, we further categorized these comparisons by the type of information conveyed, which ranged from simple alerts and single-direction cues to more complex tactile patterns representing spatial orientation or short communications. Keywords: Tactile; Visual; Display; Interface; Design; Military; Army; Attention
management; Spatial orientation; Navigation; Communication | |||
| Experimental Study about Effect of Thermal Information Presentation to Mouse | | BIBAK | Full-Text | 444-450 | |
| Shigeyoshi Iizuka; Sakae Yamamoto | |||
| Despite the many types of telecommunication systems that have been
developed, it can still be hard to convey various types of information
expressively to a remote partner. Our research focuses on the use of variations
in temperature to represent information expressively. We developed a mouse
device with thermal capabilities; the device becomes warmer or colder to a
user's palm when the user clicks "thermal" photographic images on a computer
screen. Each image has an associated thermal value. In this paper, we report
the results of an evaluation of the thermal performance of the device. We also
report results from a preliminary experiment that determines how the thermal expression affects a user's impression of images. Keywords: thermal information; thermal mouse; warm sense; cold sense; pair comparisons | |||
| Preliminary Study on Vibrotactile Messaging for Sharing Brief Information | | BIBAK | Full-Text | 451-460 | |
| Teruaki Ito | |||
| Ubiquitous computing is spreading as a post-desktop model of human-computer interaction, in which information processing is thoroughly integrated into everyday objects and activities. This study focuses on vibrotactile signals for sharing and feeling brief information among several users. The information here is limited to brief messages that do not include details but convey the rough idea of the content, so that the data size can be kept very small. This paper presents a prototype device for vibrotactile messaging called VIBRATO. It then shows experimental results for VIBRATO messages under different conditions and clarifies how to use VIBRATO effectively so that vibrotactile messages can be understood. The simple vibration alert used in the silent mode of mobile phones does not fully exploit the potential of vibration as a means of communication; this experiment shows how that potential can be exploited. Keywords: Vibrotactile message; brief information; ubiquitous computing; human interface | |||
| Orientation Responsive Touch Interaction | | BIBAK | Full-Text | 461-469 | |
| Jinwook Kim; Jong-gil Ahn; Heedong Ko | |||
| A novel touch-based interaction method that uses the orientation information of a touch region is proposed. To capture higher-dimensional touch information, including orientation as well as position, we develop robust algorithms to detect the contact shape and to estimate its orientation angle. We also suggest practical guidelines for using our method, derived from experiments considering various conditions, and show possible service scenarios such as aligning documents and controlling a media player. Keywords: Touch Interaction; Interaction techniques; Touch direction; Touch orientation; Tabletop; Media player controller | |||
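A standard way to obtain such an orientation angle from a detected contact region, offered here purely as an illustration rather than the authors' specific algorithm, is to take the principal axis of the contact pixels from their second-order moments:

```python
import numpy as np

def contact_orientation(mask):
    """Estimate the orientation (radians, relative to the x-axis) of a binary
    contact region from the principal axis of its pixel distribution."""
    ys, xs = np.nonzero(mask)                      # pixel coordinates of the blob
    x = xs - xs.mean()
    y = ys - ys.mean()
    # Second-order central moments of the region.
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    # Orientation of the major axis (standard image-moment formula).
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

if __name__ == "__main__":
    # Toy elongated contact: an ellipse-like blob tilted about 30 degrees.
    h, w = 100, 100
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(30)
    u = (xx - 50) * np.cos(theta) + (yy - 50) * np.sin(theta)
    v = -(xx - 50) * np.sin(theta) + (yy - 50) * np.cos(theta)
    blob = (u / 30) ** 2 + (v / 10) ** 2 <= 1.0
    print(np.rad2deg(contact_orientation(blob)))   # close to 30
```

A real touch pipeline would first segment and clean the contact region, but the moment computation itself is the same.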
| Representation of Velocity Information by Using Tactile Apparent Motion | | BIBAK | Full-Text | 470-478 | |
| Kentaro Kotani; Toru Yu; Takafumi Asao; Ken Horii | |||
| The objective of this study is to evaluate whether apparent motion can be an effective presentation method for velocity information. Ten subjects participated in an experiment in which they perceived apparent motion generated by air jets and controlled the speed of a moving object on a PC screen to express their perceived velocity. As a result, perceived velocity decreased significantly as the inter-stimulus onset interval (ISOI) increased. Perceived velocity changed with stimulus duration when the ISOI was between 30-70 ms and apparent motion was produced; duration affected perceived velocity only when apparent motion was provided to the subjects. In conclusion, apparent motion generated by air-jet stimuli can be used effectively to transmit precise velocity information tactually. Keywords: Apparent motion; Velocity; Tactile perception; Air-jet | |||
| Tactile Spatial Cognition by the Palm | | BIBAK | Full-Text | 479-485 | |
| Misa Grace Kwok | |||
| The purpose of this research is to determine the characteristics of tactile spatial cognition through basic experiments. Two experiments were conducted: 1) distinctiveness of geometrical figure recognition, and 2) haptic spatial recognition. The results revealed differences between visual spatial cognition and tactile spatial cognition: spatial information is recognized more accurately and more easily in the vertical direction than in the horizontal direction. Keywords: tactile; spatial recognition; palm; vertical direction | |||
| A Study on Effective Tactile Feeling of Control Panels for Electrical Appliances | | BIBAK | Full-Text | 486-495 | |
| Miwa Nakanishi; Yusaku Okada; Sakae Yamamoto | |||
| This study focuses on the fact that tactile factors, compared to visual
factors, have not been effectively applied to enhance the usability of control
panels. It also evaluates the effectiveness of allocating a rough/smooth
feeling to the surface of each button in a control panel according to its
operational function. The first experiment reveals relationships between some
of the impressions concerning the operation of electrical appliances and the
rough/smooth feeling when touching the surface of buttons. Moreover, it
provides specific information on what degree of roughness/smoothness should be
applied to what types of functional buttons. The second experiment demonstrates
that the usability of control panels can be enhanced by providing a
rough/smooth feeling to each button, considering suitability with respect to
operation impressions. In addition, results indicate that users may feel
discomfort when the rough/smooth feeling does not correspond to operation
impressions. Keywords: Tactile feeling; operation impression; control panel | |||
| Facilitating the Design of Vibration for Handheld Devices | | BIBAK | Full-Text | 496-502 | |
| Taezoon Park; Jihong Hwang; Wonil Hwang | |||
| Vibrations are actively used both to supplement multimodal interactions and to deliver information independently. In particular, vibration is adopted as feedback for touch-screen interfaces on cell phones because touch screens lack tactile feedback. However, how to design vibration effectively has not yet been investigated thoroughly, even though different types of vibration are used in different application areas. This study summarizes the characteristics of vibration, especially from the viewpoint of human perception, and reviews several current approaches to composing vibrations. Finally, a design for software tools that facilitate the creation of effective vibrations is presented. Keywords: Vibration; Haptics; Vibration composer | |||
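The abstract's notion of a "vibration composer" can be illustrated with a small sketch. The following Python snippet is only a hypothetical example of composing a vibration pattern from a carrier frequency, an amplitude envelope, and a burst rhythm; the function name and all parameter values are illustrative and do not come from the paper.

```python
# A hedged sketch of what a "vibration composer" might generate: a pattern
# described by carrier frequency, an amplitude envelope, and on/off bursts.
import numpy as np

def compose_vibration(freq_hz=175, burst_ms=(120, 80, 120), gap_ms=60, fs=8000):
    """Return one waveform made of sine bursts separated by silent gaps."""
    pieces = []
    for ms in burst_ms:
        t = np.arange(int(fs * ms / 1000)) / fs
        burst = np.sin(2 * np.pi * freq_hz * t)
        burst *= np.hanning(burst.size)              # smooth attack/decay envelope
        pieces += [burst, np.zeros(int(fs * gap_ms / 1000))]
    return np.concatenate(pieces)

waveform = compose_vibration()
print(waveform.shape)
```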
| Interaction Technique for a Pen-Based Interface Using Finger Motions | | BIBAK | Full-Text | 503-512 | |
| Yu Suzuki; Kazuo Misue; Jiro Tanaka | |||
| Our research goal is to improve stylus operability by utilizing the human
knowledge and skills applied when a user uses a pen. Such knowledge and skills
include, for example, the way a person holds a pen to apply a generous amount
of ink to draw a thick line with a brush pen. We propose a form of interaction,
Finger Action, which uses input operations applying such knowledge and skills.
Finger Action consists of five input operations: gripping, thumb tapping,
index-finger tapping, thumb rubbing, and index-finger rubbing. In this paper,
we describe Finger Action, a prototype pressure-sensitive stylus used to
realize Finger Action, an application of Finger Action, and an evaluation of
the practicality of Finger Action. Keywords: Pen-based Interface; Finger Motion; Pen Grip; Pressure Sensor | |||
| A Basic Study of Sensory Characteristics toward Interaction with a Box-Shaped Interface | | BIBAK | Full-Text | 513-522 | |
| Noriko Suzuki; Tosirou Kamiya; Shunsuke Yoshida; Sumio Yano | |||
| Our research focuses on the sensory characteristics of interacting with a
novel box-shaped interface device for facilitating transfer of a digital object
to another person. Such findings are important for constructing an
ultra-realistic communication system with shared reality. This paper presents
two kinds of pilot studies: (I) graspability of a box-shaped interface device
through controlling feedback timing from the device, and (II) the sense of
possessing a modality in information transfer between two devices. Both
psychological and behavioral evaluation results suggest that graspability
increases more from feedback of the device just after grasping it than from
that just before grasping it. Furthermore, psychological evaluation results
suggest that a touch-and-move method, i.e., the receiver of feedback changes
precisely from one user to the other after touching the two devices, increases
the sense of possessing a modality more than does a touch-and-copy method,
i.e., both users simultaneously receive feedback after touching. Keywords: Sensory characteristics; Box-shaped interface device; Ultra-realistic
communication system; Graspability; Sense of possessing a modality; Behavioral
evaluation; Psychological evaluation | |||
| TACTUS: A Hardware and Software Testbed for Research in Multi-Touch Interaction | | BIBAK | Full-Text | 523-532 | |
| Paul Varcholik; Joseph J. LaViola Jr.; Denise M. Nicholson | |||
| This paper presents the TACTUS Multi-Touch Research Testbed, a hardware and
software system for enabling research in multi-touch interaction. A detailed
discussion is provided on hardware construction, pitfalls, design options, and
software architecture to bridge the gaps in the existing literature and inform
the researcher on the practical requirements of a multi-touch research testbed.
This includes a comprehensive description of the vision-based image processing
pipeline, developed for the TACTUS software library, which makes surface
interactions available to multi-touch applications. Furthermore, the paper
explores the higher-level functionality and utility of the TACTUS software
library and how researchers can leverage the system to investigate multi-touch
interaction techniques. Keywords: Multi-Touch; HCI; Touch Screen; Testbed; API | |||
| Low Cost Flexible Wrist Touch UI Solution | | BIBAK | Full-Text | 533-541 | |
| Bin Wang; Chenguang Cai; Emilia Koskinen; Tang Zhenqi; Huayu Cao; Leon Xu; Antti O. Salo | |||
| A wrist device is an interesting and convenient means of interaction between a user and a mobile communication device. This paper presents a low-cost, flexible wrist-device user interaction solution based on flexible touch-screen technology. We focused mainly on two aspects: a user study to understand what requirements users have for a wrist UI, and how to realize the specific UI solutions that users demand. Based on user study reports from NRC Helsinki and Beijing, we adopted a 3x3 matrix resistive touch panel module and a two-color flexible LCD module as the user interaction hardware solution. Keywords: Wrist UI; resistance touch panel; accessory; user study | |||
| Grasping Interface with Photo Sensor for a Musical Instrument | | BIBAK | Full-Text | 542-547 | |
| Tomoyuki Yamaguchi; Shuji Hashimoto | |||
| This paper introduces a luminance-intensity interface driven by grasping force, and its application as a musical instrument. In traditional musical instruments, the relationship between the action and the generated sound is determined by the physical structure of the instrument, and the freedom of the musical performance is limited by that structure. We developed a handheld, ball-shaped interface: a photodiode embedded in a translucent rubber ball reacts to the performer's grasping force, which is detected as a change in luminance intensity. The performer can use the ball interface not only by grasping it directly but also by holding it up to the ambient light or shading it with the hands. The output of the interface is fed to a sound generator, and the relationship between the performer's action and the generated sound is determined by the instrument program, making the device a universal musical instrument. Keywords: Grasping interface; Illumination control; Ball-shaped interface; Musical performance | |||
| Ensemble SWLDA Classifiers for the P300 Speller | | BIBA | Full-Text | 551-557 | |
| Garett D. Johnson; Dean J. Krusienski | |||
| The P300 Speller has proven to be an effective paradigm for brain-computer interface (BCI) communication. Using this paradigm, studies have shown that a simple linear classifier can perform as well as more complex nonlinear classifiers. Several studies have examined methods such as Fisher's Linear Discriminant (FLD), Stepwise Linear Discriminant Analysis (SWLDA), and Support Vector Machines (SVM) for training a linear classifier in this context. Overall, the results indicate marginal performance differences between classifiers trained using these methods. It has been shown that, by using an ensemble of linear classifiers trained on independent data, performance can be further improved because this scheme can better compensate for response variability. The present study evaluates several offline implementations of ensemble SWLDA classifiers for the P300 speller and compares the results to a single SWLDA classifier for seven able-bodied subjects. | |||
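The ensemble idea described above can be sketched in a few lines. The following Python snippet is a hedged illustration, not the authors' implementation: SWLDA is replaced by plain least-squares linear classifiers, and the data, partition sizes, and feature dimensions are made up.

```python
# Minimal sketch: an ensemble of linear classifiers trained on independent
# data partitions, whose scores are averaged to rank P300-speller flashes.
import numpy as np

def train_linear(X, y):
    """Fit least-squares weights w so that X @ w approximates the labels y."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # add bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def ensemble_scores(partitions, X_test):
    """Average the scores of classifiers trained on independent partitions."""
    Xb = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    scores = [Xb @ train_linear(Xp, yp) for Xp, yp in partitions]
    return np.mean(scores, axis=0)

# Hypothetical usage: three partitions of feature/label pairs, one test set of 12 flashes.
rng = np.random.default_rng(0)
partitions = [(rng.normal(size=(200, 50)), rng.integers(0, 2, 200).astype(float))
              for _ in range(3)]
flash_scores = ensemble_scores(partitions, rng.normal(size=(12, 50)))
print("highest-scoring flash:", int(np.argmax(flash_scores)))
```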
| The I of BCIs: Next Generation Interfaces for Brain-Computer Interface Systems That Adapt to Individual Users | | BIBAK | Full-Text | 558-568 | |
| Brendan Z. Allison | |||
| Brain-computer interfaces (BCIs) have advanced rapidly in the last several
years, and can now provide many useful command and control features to a wide
variety of users -- if an expert is available to find, assemble, set up,
configure, and maintain the BCI. Developing BCI systems that are practical for
nonexperts remains a major challenge, and is the principal focus of the EU
BRAIN project and other work. This paper describes five challenges in BCI
interface development, and how they might be addressed with a hypothetical easy
BCI system called EZBCI. EZBCI requires a new interface that is natural,
intuitive, and easy to configure without expert help. Finally, two true
scenarios with severely disabled users highlight the impact that EZBCI would
have on users' lives. Keywords: Brain-computer interface; Brain-machine interface; BCI; BMI; expertise;
nonexpert; interface; adaptive; reliability; usability; flexibility; assistive
technology; smart homes; realworld; EZBCI | |||
| Mind-Mirror: EEG-Guided Image Evolution | | BIBAK | Full-Text | 569-578 | |
| Nima Bigdely Shamlo; Scott Makeig | |||
| We propose a brain-computer interface (BCI) system for evolving images in
real-time based on subject feedback derived from electroencephalography (EEG).
The goal of this system is to produce a picture best resembling a subject's
'imagined' image. This system evolves images using Compositional Pattern
Producing Networks (CPPNs) via the NeuroEvolution of Augmenting Topologies
(NEAT) genetic algorithm. Fitness values for NEAT-based evolution are derived
from a real-time EEG classifier as images are presented using rapid serial
visual presentation (RSVP). Here, we report the design and performance, for a
pilot training session, of a BCI system for real-time single-trial binary
classification of viewed images based on participant-specific brain response
signatures present in 128-channel EEG data. Selected training-session image
clips created by the image evolution algorithm were presented in 2-s bursts at
8/s. The subject indicated by subsequent button press whether or not each burst
included an image resembling two eyes. Approximately half the bursts included
such an image. Independent component analysis (ICA) was used to extract a set
of maximally independent EEG source time-courses and their 100
minimally-redundant low-dimensional informative features in the time and
time-frequency amplitude domains from the (94%) bursts followed by correct
manual responses. To estimate the likelihood that the post-image EEG contained
EEG 'flickers' of target recognition, we applied two Fisher discriminant
classifiers to the time and/or time-frequency features. The area under the
receiver operating characteristic (ROC) curve by tenfold cross-validation was
0.96 using time-domain features, 0.97 using time-frequency domain features, and
0.98 using both domain features. Keywords: human-computer interface (HCI); brain-computer interface (BCI); evolutionary
algorithms; genetic algorithms (GA); electroencephalography (EEG); independent
component analysis (ICA); rapid serial visual presentation (RSVP); genetic art;
evolutionary art | |||
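The final classification and evaluation step reported above (Fisher discriminant scores summarized by a cross-validated ROC curve) can be sketched as follows. This is an assumed, simplified pipeline using scikit-learn; the feature matrix, labels, and injected class difference are synthetic stand-ins for the paper's EEG-derived features.

```python
# Illustrative sketch only: a Fisher linear discriminant scored with tenfold
# cross-validation and summarized by the area under the ROC curve.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 100))      # stand-in for 100 EEG-derived features per burst
y = rng.integers(0, 2, size=400)     # 1 = burst contained a target-like image
X[y == 1] += 0.4                     # inject a weak class difference for the demo

clf = LinearDiscriminantAnalysis()   # Fisher discriminant
decision = cross_val_predict(clf, X, y, cv=10, method="decision_function")
print("cross-validated ROC AUC:", round(roc_auc_score(y, decision), 3))
```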
| BEXPLORER: Computer and Communication Control Using EEG | | BIBAK | Full-Text | 579-587 | |
| Mina Mikhail; Marian Abdel-Shahid; Mina Guirguis; Nadine Shehad; Baher Soliman; Khaled El-Ayat | |||
| Humans are able to communicate in many rich and complex ways, with each
other, or increasingly, with digital devices. A brain-computer interface (BCI)
is a direct neural interface and a communication pathway between the human
brain and an external device such as a computer or artificial limb. With such
an interface, a severely handicapped person, such as an Amyotrophic Lateral Sclerosis (ALS) patient with a severe muscle disorder, may still communicate and even control the environment relying solely on brain activity. The system
introduced detects EEG signals arising from various eye-blinking activities and
applies the results to control various popular computer applications. Using
BCI2000 as a platform, the system allows handicapped patients to communicate
with a computer and initiate various computer commands, send instant messages,
and even browse the web. The application also enables the user to communicate
using mobile phone SMS messaging. Keywords: BCI; ALS; electroencephalography (EEG); blink artifact; assistive
technology; Human computer interface (HCI); information transfer rate;
classification | |||
| Continuous Control Paradigms for Direct Brain Interfaces | | BIBAK | Full-Text | 588-595 | |
| Melody Moore Jackson; Rudolph L. Mappus IV; Evan Barba; Sadir Hussein; Girish R. Venkatesh; Chetna Shastry; Amichai Israeli | |||
| Direct Brain Interfaces (DBIs) offer great possibilities for people with
severe disabilities to communicate and control their environments. However,
many DBI systems implement discrete selection, such as choosing a letter from
an alphabet, which offers limited control over certain tasks. Continuous
control is important for applications such as driving a wheelchair or drawing
for creative expression. This paper describes two projects currently underway at the Georgia Tech BrainLab exploring continuous control interface paradigms: an EEG-based approach centered on responses from the visual cortex, and functional near-infrared (fNIR) imaging of the language center of the brain. Keywords: Direct Brain Interfaces; Brain Computer Interfaces; Continuous Control;
SSVEP; functional near Infrared imaging | |||
| Constructive Adaptive User Interfaces Based on Brain Waves | | BIBA | Full-Text | 596-605 | |
| Masayuki Numao; Takayuki Nishikawa; Toshihito Sugimoto; Satoshi Kurihara; Roberto S. Legaspi | |||
| We demonstrate a method to locate relations and constraints between a music score and its impressions, by which we show that machine learning techniques may provide a powerful tool for composing music and analyzing human feelings. We examine its generality by modifying some arrangements to provide the subjects with a specified impression. This demonstration introduces some user interfaces, which are capable of predicting feelings and creating new objects based on seed structures, such as spectrums and their transition for sounds that have been extracted and are perceived as favorable by the test subject. This paper proposes to define knowledge components for the seed structure. | |||
| Development of Symbiotic Brain-Machine Interfaces Using a Neurophysiology Cyberworkstation | | BIBAK | Full-Text | 606-615 | |
| Justin C. Sanchez; Renato J. O. Figueiredo; José A. B. Fortes; José Carlos Príncipe | |||
| We seek to develop a new generation of brain-machine interfaces (BMI) that
enable both the user and the computer to engage in a symbiotic relationship
where they must co-adapt to each other to solve goal-directed tasks. Such a
framework would allow the possibility of real-time understanding and modeling of
brain behavior and adaptation to a changing environment, a major departure from
either offline learning and static models or one-way adaptive models in
conventional BMIs. To achieve a symbiotic architecture requires a computing
infrastructure that can accommodate multiple neural systems, respond within the
processing deadlines of sensorimotor information, and can provide powerful
computational resources to design new modeling approaches. To address these
issues we present our ongoing work in the development of a neurophysiology
Cyberworkstation for BMI design. Keywords: Brain-Machine Interface; Co-Adaptive; Cyberworkstation | |||
| Sensor Modalities for Brain-Computer Interfacing | | BIBAK | Full-Text | 616-622 | |
| Gerwin Schalk | |||
| Many people have neuromuscular conditions or disorders that impair the
neural pathways that control muscles. Those most severely affected lose all
voluntary muscle control and hence lose the ability to communicate.
Brain-computer interfaces (BCIs) might be able to restore some communication or
control functions for these people by creating a new communication channel --
directly from the brain to an output device. Many studies over the past two
decades have shown that such BCI communication is possible and that it can
serve useful functions. This paper reviews the different sensor methodologies
that have been explored in these studies. Keywords: Brain-computer interface; BCI; Neural Engineering; Neural Prosthesis | |||
| A Novel Dry Electrode for Brain-Computer Interface | | BIBAK | Full-Text | 623-631 | |
| Eric W. Sellers; Peter J. Turner; William A. Sarnacki; Tobin McManus; Theresa M. Vaughan; Robert Matthews | |||
| A brain-computer interface is a device that uses signals recorded from the
brain to directly control a computer. In the last few years, P300-based
brain-computer interfaces (BCIs) have proven an effective and reliable means of
communication for people with severe motor disabilities such as amyotrophic
lateral sclerosis (ALS). Despite this fact, relatively few individuals have
benefited from currently available BCI technology. Independent BCI use requires
easily acquired, good-quality electroencephalographic (EEG) signals maintained
over long periods in less-than-ideal electrical environments. Conventional wet-sensor electrodes require careful application. Faulty or inadequate
preparation, noisy environments, or gel evaporation can result in poor signal
quality. Poor signal quality produces poor user performance, system downtime,
and user and caregiver frustration. This study demonstrates that a hybrid dry
electrode sensor array (HESA) performs as well as traditional wet electrodes
and may help propel BCI technology to a widely accepted alternative mode of
communication. Keywords: Brain-computer interface; P300 event-related potential; dry electrode;
amyotrophic lateral sclerosis | |||
| Effect of Mental Training on BCI Performance | | BIBA | Full-Text | 632-635 | |
| Lee-Fan Tan; Ashok Jansari; Shian-Ling Keng; Sing-Yau Goh | |||
| This paper reports initial findings from a randomized controlled trial conducted on 9 subjects to investigate the effect of two mental training programs (a mindfulness meditation and learning to play a guitar) on their BCI performance. After 4 weeks of intervention, results show that subjects who had undergone a program of mindfulness meditation improved their BCI performance scores significantly compared to a no-treatment control group. Subjects who were learning to play a guitar also improved their BCI performance scores but not as much as the meditation group. | |||
| The Research on EEG Coherence Around Central Area of Left Hemisphere According to Grab Movement of Right Hand | | BIBAK | Full-Text | 636-642 | |
| Min Cheol Whang; Jincheol Woo; Jonghwa Kim | |||
| This study aims to find significant EEG coherence for predicting movement. Eight students were asked to perform a visuo-motor task while their EMG at the flexor carpi radialis of the right hand and their EEG at C3 and at four orthogonal points 2.5 cm away from C3 were measured. EEG coherence between non-motor and motor areas and between motor areas was analyzed based on movement duration calculated from EMG activation, movement delay, and coherence delay. In the results, most participants showed significant coherence between sensory or frontal areas and the motor area, discriminating movement from non-movement. However, there were individual differences in the areas and frequency bands showing significant coherence. Keywords: EEG Coherence; EMG; hand grab movement; movement delay | |||
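Magnitude-squared coherence between two EEG channels, the core quantity analyzed above, can be computed with SciPy as in the sketch below. The sampling rate, signals, and frequency band are hypothetical and only illustrate the analysis step, not the authors' pipeline.

```python
# Minimal sketch: coherence between two EEG channels, e.g. C3 and a neighbour.
import numpy as np
from scipy.signal import coherence

fs = 256                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)       # common 10 Hz component
c3 = shared + 0.5 * np.random.randn(t.size)
neighbour = shared + 0.5 * np.random.randn(t.size)

freqs, coh = coherence(c3, neighbour, fs=fs, nperseg=fs)
alpha = (freqs >= 8) & (freqs <= 13)
print("mean alpha-band coherence:", round(coh[alpha].mean(), 2))
```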
| A Speech-Act Oriented Approach for User-Interactive Editing and Regulation Processes Applied in Written and Spoken Technical Texts | | BIBAK | Full-Text | 645-653 | |
| Christina Alexandris | |||
| A speech-act oriented approach for Controlled Language specifications is
presented for implementation in a user-interactive HCI system for the editing process and for the regulation of written and, subsequently, spoken technical texts for Modern Greek. Sublanguage-specific and sublanguage-independent parameters are used, targeting "Precision", "Directness" and "User-friendliness", based on the criteria of Moeller (2005) for the success and efficiency of spoken human-computer interaction at the Utterance Level, the Functional Level and the Satisfaction Level. Keywords: Controlled Language; Speech Act; Technical texts; Task-oriented dialog;
prosodic modeling | |||
| Interacting with a Music Conducting System | | BIBAK | Full-Text | 654-663 | |
| Carlos Rene Argueta; Ching-Ju Ko; Yi-Shin Chen | |||
| Music conducting is the art of directing musical ensembles with hand
gestures to personalize and diversify a musical piece. The ability to
successfully perform a musical piece demands intense training and coordination
from the conductor, but preparing a practice session is an expensive and
time-consuming task. Accordingly, there is a need for alternatives to provide
adequate training to conductors at all skill levels; virtual reality technology
holds promise for this application. The goal of this research was to study the
mechanics of music conducting and develop a system capable of closely
simulating the conducting experience. After extensive discussions with
professional and nonprofessional conductors, as well as extensive research on
music conducting material, we identified several key features of conducting. A
set of lightweight algorithms exploring those features was developed to enable
tempo control and instrument emphasis, two core components of conducting. By
using position/orientation sensors and data gloves as the interface for
human-computer interaction, we developed a functional version of the system.
Evaluating the algorithms in real-world scenarios gave us promising results;
most users of the final system expressed satisfaction with the virtual
experience. Keywords: music conducting; human-computer interaction; virtual reality; tempo
control; instrument emphasis | |||
| Hierarchical Structure: A Step for Jointly Designing Interactive Software Dialog and Task Model | | BIBAK | Full-Text | 664-673 | |
| Sybille Caffiau; Patrick Girard; Laurent Guittet; Dominique L. Scapin | |||
| In order to design interactive applications, the first step is usually the
definition of user needs. While performing this step, activities may be modeled
using task models. Some task model components express scheduling information
that describes the task dynamics. According to a model-based approach, the
dynamics of applications (i.e., the dialog) can be formalized using a dialog
model. Several approaches seek to exploit the task model information to perform
the dialog model. This paper aims to show that the use of the hierarchical
dialog model facilitates its design according to task model information during
the whole iterative design process. Keywords: task models; dialog; iterative design | |||
| Breaking of the Interaction Cycle: Independent Interpretation and Generation for Advanced Dialogue Management | | BIBAK | Full-Text | 674-683 | |
| David del Valle-Agudo; Francisco Javier Calle-Gómez; Dolores Cuadra Fernández; Jessica Rivero-Espinosa | |||
| This paper presents an architecture for Natural Interaction Systems that makes the interpretation and generation processes independent, in contrast to the traditional development of the interaction, typically called the Interaction Cycle. To this end, a multi-agent platform is applied, together with a joint-action Dialogue Manager based on the Threads Model [1] and an advanced Presentation Manager. This combination enables concurrent execution of all processes involved in the natural interaction; the capability of incorporating, at any moment, new goals from both user and system into the state of the interaction; and the organization of both processes depending on the need, opportunity and obligation to generate new utterances. Keywords: Natural Interaction Systems; Turn Taking; Grounding; Independent
Interpretation and Generation; Threads Model | |||
| SimulSort: Multivariate Data Exploration through an Enhanced Sorting Technique | | BIBAK | Full-Text | 684-693 | |
| Inkyoung Hur; Ji Soo Yi | |||
| Sorting is one of the well-understood and widely-used interaction
techniques. Sorting has been adopted in many software applications and supports
various cognitive tasks. However, when used in analyzing multi-attribute data
in a table, sorting appears to be limited. When a table is sorted by a column,
it rearranges the whole table, so the insights gained through the previous
sorting arrangements of another column are often difficult to retain. Thus,
this study proposed an alternative interaction technique, called "SimulSort."
By sorting all of the columns simultaneously, SimulSort helps users see an
overview of the data at a glance. Additional interaction techniques, such as
highlighting and zooming, were also employed to alleviate the drawbacks of
SimulSort. A within-subject controlled study with 15 participants was conducted
to compare SimulSort and the typical sorting feature. The results showed that typical sorting and SimulSort work with comparable efficiency and effectiveness
for most of the tasks. Sorting more effectively supports understanding
correlation and reading corresponding values, and SimulSort shows the potential
to more effectively support tasks that need multi-attribute analyses. The
implications of the results and planned future work are discussed as well. Keywords: Sort; SimulSort; information visualization; multi-attribute data analysis;
tabular information; decision support system | |||
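The core SimulSort idea, sorting every column of a table independently instead of rearranging whole rows, can be sketched as follows. The data and column indices are hypothetical; this is not the authors' implementation.

```python
# Minimal sketch: each column is sorted on its own, so every column shows its
# own ranking while the originating row id is kept for cross-referencing.
import numpy as np

data = np.array([[3.2, 70, 1200],
                 [4.8, 55,  900],
                 [4.1, 90, 1500]])        # rows = items, columns = attributes

order = np.argsort(-data, axis=0)         # descending rank per column
for col in range(data.shape[1]):
    ranked = [(int(row), data[row, col]) for row in order[:, col]]
    print(f"column {col} ranking:", ranked)
```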
| WeMe: Seamless Active and Passive Liquid Communication | | BIBAK | Full-Text | 694-700 | |
| Nicolas Masson; Wendy E. Mackay | |||
| WeMe is designed to help remote families stay in touch, providing peripheral
awareness of each other combined with the possibility of more direct
interaction. WeMe's ferrofluid bubbles move in response to ambient sounds, both
local and distant. As many as three family members can generate patterns
intentionally, by moving their hands around the surface. WeMe acts as a
stand-alone sculpture, a passive indicator of remote activity and a source of
shared interaction. Keywords: Ambient display; peripheral awareness; communication appliances; presence | |||
| Study of Feature Values for Subjective Classification of Music | | BIBAK | Full-Text | 701-709 | |
| Masashi Murakami; Toshikazu Kato | |||
| In this research, we analyze how sound and music relate to humans from the perspective of Kansei engineering: which features of sound humans pay attention to and how humans interpret sound. To this end, we divide the human signal processing of sound into four levels. At the physiological level, processing is governed by auditory characteristics; humans do not yet interpret the image of the sound, so there is no subjectivity at this stage. Using these auditory characteristics, we investigate features that are useful for analyzing sound and music. We consider the processing at the early stage of the auditory nervous system to extract changes in power, obtained by segmenting the sound signal into frequency bands and time intervals, together with their contrast, and we examine the features obtained by this extraction. Moreover, at the cognitive level, we analyze the correlation of these features with the words humans use to interpret sound subjectively. Based on these models, we develop a method for retrieving sounds and music that are similar, or that match the image expressed by a given subjective word. Keywords: Music; Hierarchical model of Kansei; Auditory characteristic | |||
| Development of Speech Input Method for Interactive VoiceWeb Systems | | BIBAK | Full-Text | 710-719 | |
| Ryuichi Nisimura; Jumpei Miyake; Hideki Kawahara; Toshio Irino | |||
| We have developed a speech input method called "w3voice" to build practical
and handy voice-enabled Web applications. It is constructed using a simple Java
applet and CGI programs comprising free software. On our website
(http://w3voice.jp/), we have released automatic speech recognition and spoken
dialogue applications that are suitable for practical use. The mechanism of
voice-based interaction is developed on the basis of raw audio signal
transmissions via the POST method and the redirection response of HTTP. The
system also aims at organizing a voice database collected from home and office
environments over the Internet. The purpose of the work is to observe actual
voice interactions, both human-machine and human-human. We have succeeded in
acquiring 8,412 inputs (47.9 inputs per day) captured by using normal PCs over
a period of seven months. The experiments confirmed the user-friendliness of
our system in human-machine dialogues with trial users. Keywords: Voice-enabled Web; Spoken interface; Voice collection | |||
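The transport mechanism described above (raw audio posted to a server-side CGI program, followed by an HTTP redirection to the result) might look roughly like the sketch below. The endpoint URL, form field name, and helper function are hypothetical; only the POST-plus-redirect pattern is taken from the abstract.

```python
# Hedged sketch of posting a recorded utterance to a CGI endpoint over HTTP.
import requests

def send_utterance(wav_path, endpoint="http://example.org/cgi-bin/w3voice.cgi"):
    with open(wav_path, "rb") as f:
        audio = f.read()
    # POST the raw audio; allow_redirects=True follows the HTTP redirect that
    # points the client at the recognition / dialogue result page.
    response = requests.post(endpoint,
                             files={"audio": ("utterance.wav", audio)},
                             allow_redirects=True)
    return response.url, response.status_code
```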
| Non-verbal Communication System Using Pictograms | | BIBA | Full-Text | 720-724 | |
| Makiko Okita; Yuki Nakaura; Hidetsugu Suto | |||
| A system is described that uses pictograms to support interactive non-verbal communication. While pictograms are typically used with objects, this system uses them for events as well. Moreover, whereas communication systems using pictograms are generally designed for people with disabilities, this system is designed for people in general. It can thus be used between a non-disabled person and a person with a disability, between an adult and a child, between a Japanese person and an American person, and so on. | |||
| Modeling Word Selection in Predictive Text Entry | | BIBAK | Full-Text | 725-734 | |
| Hamed H. Sad; Franck Poirier | |||
| Although word, or short phrase, selection from a list is an extensively used
task on different types of interfaces, there is no accepted model for its
execution time in the literature. We present a Keystroke Level Model (KLM)
adapted for this task on both desktop and handheld interfaces. The model is
built from the results of an empirical study for exploring the effect of the
list sorting and the number of words displayed to the user on the selection
time of a word at a specified position in the list. The resulting model is
integrated into an existing model for predicting the text entry speed of
word-disambiguation entry methods. The predicted text entry speed is then
compared to the published empirical results and to the theoretical results of
the model that doesn't take into account word selection search time. The error
between prediction and empirical results is effectively reduced using the
modified model. Keywords: User interface; modeling; word selection; text entry; evaluation | |||
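A KLM-style prediction that adds a list-selection term to ordinary keystroke time, in the spirit of the model described above, could look like the following sketch. All operator constants and the linear scan term are illustrative placeholders, not the values fitted in the paper.

```python
# Illustrative KLM-style estimate: keystrokes + mental act + visual scan of
# the candidate list down to the target position + selection of the entry.
K = 0.28       # seconds per keystroke (classic KLM estimate)
M = 1.35       # mental preparation operator
POINT = 1.10   # pointing/selection of a list entry

def predict_entry_time(num_keystrokes, list_position, per_item_scan=0.15):
    """Predicted time to enter a word via a prediction list (hypothetical constants)."""
    scan = per_item_scan * list_position          # search time grows with position
    return num_keystrokes * K + M + scan + POINT

print(round(predict_entry_time(num_keystrokes=4, list_position=3), 2), "s")
```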
| Using Pictographic Representation, Syntactic Information and Gestures in Text Entry | | BIBAK | Full-Text | 735-744 | |
| Hamed H. Sad; Franck Poirier | |||
| With the increasing popularity of touch screen mobile devices, it is
becoming increasingly important to design fast and reliable methods for text
input on such devices. In this work, we exploit the capabilities of those
devices and a specific language model to enhance the efficiency of text entry
tasks. We will distribute the roles between the user and the device in a way
that allocates the tasks to the side where they can be efficiently done. The
user is not a good processor of syntactic and memory retrieval operations but
she/he is a highly efficient processor for handling semantic and pattern
recognition operations. The reverse is true for computational devices. These
facts are exploited in two designs for the entry of common words which
represent a high percentage of our written and spoken materials. A common word
is typed in two or three clicks, with or without a gesture on a touch screen. Keywords: mobile text entry; pictographs; pen gestures; syntactic information | |||
| Embodied Sound Media Technology for the Enhancement of the Sound Presence | | BIBAK | Full-Text | 745-751 | |
| Kenji Suzuki | |||
| In this paper, the paradigms of Embodied Sound Media (ESM) technology are
described with several case studies. The ESM is designed to formalize a musical
sound-space based on the conversion of free human movement into sounds. This
technology includes the measurement of human motion, processing, acoustic
conversion and output. The first idea was to introduce direct and intuitive sound feedback within the context of not only embodied interaction between humans and devices but also social interaction among humans. The developed system is a kind of active aid for embodied performance that allows users to receive feedback on emotional stimuli in the form of sound surrounding them. Overviews of several devices developed in this scenario and the
potential applications to physical fitness, exercise, entertainment, assistive
technology and rehabilitation are also addressed. Keywords: Embodied sound media; wearable device; sound interface; sound conversion;
social-musical interaction | |||
| Compensate the Speech Recognition Delays for Accurate Speech-Based Cursor Position Control | | BIBAK | Full-Text | 752-760 | |
| Qiang Tong; Ziyun Wang | |||
| In this paper, we describe a back-compensate mechanism to improve the
precision of speech-based cursor control. Using this mechanism we can control
the cursor more easily to move to small on-screen targets during continuous
direction-based navigation despite the processing delays associated with speech
recognition. In comparison, using traditional speech-recognition systems, it is
difficult to move the cursor precisely to a desired position because of the
processing delays introduced by speech recognition. We also describe an
experiment in which we evaluated the two alternative solutions, one using the
traditional speech-based cursor control, and the other using the
back-compensate mechanism. We present the encouraging evaluation results at the
end of this paper and discuss future work. Keywords: Speech recognition; delays; navigation; mouse; cursor control | |||
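The back-compensation idea can be sketched as follows, under the assumption that the cursor moves at a known constant speed and that the recognition delay can be estimated: when the "stop" command is finally recognized, the overshoot accumulated during the delay is subtracted. Function and parameter names are hypothetical.

```python
# Minimal sketch of back-compensating cursor position for recognition delay.
def compensated_stop(position_px, speed_px_per_s, recognition_delay_s, direction=+1):
    """Return the cursor position corrected for speech-recognition delay."""
    overshoot = speed_px_per_s * recognition_delay_s
    return position_px - direction * overshoot

# Hypothetical numbers: cursor moving right at 120 px/s, 0.6 s recognition delay.
print(compensated_stop(position_px=480, speed_px_per_s=120, recognition_delay_s=0.6))
```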
| Effectiveness of the Text Display in Bilingual Presentation of JSL/JT for Emergency Information | | BIBA | Full-Text | 761-769 | |
| Shunichi Yonemura; Shin'ichiro Eitoku; Kazuo Kamata | |||
| This paper describes an experiment on the message transmission effectiveness achieved by adding Japanese text (JT) to a Japanese sign language (JSL) video. The transmission efficiency of information and the understanding of information are quantitatively measured. The assumed situation is that information about a vehicle accident is to be displayed in a railroad carriage to deaf passengers. Three presentation methods are examined: JT, JSL, and JT+JSL. We show that JT and JT+JSL yield high correct-answer rates, whereas JSL yields low rates. Furthermore, the subjects' impressions of the three methods show that they responded favorably to JT. | |||
| Specifying the Representation of Non-geometric Information in 3D Virtual Environments | | BIBAK | Full-Text | 773-782 | |
| Kaveh Bazargan; Gilles Falquet | |||
| In 3D virtual environments (3DVE), we need to know what an object looks like
(i.e. geometric information) and what the object is, what its properties and characteristics are, and how it relates to other objects (i.e. non-geometric
information). Several interactive presentation techniques have been devised to
incorporate non-geometric information into 3DVEs. The relevance of a technique
depends on the context. Therefore, the choice of an appropriate representation
technique cannot be made once and for all and must be adapted to the context. In
this paper, we first present a preliminary classification of representation
techniques for non-geometric information in 3DVE. Then we propose a formalism,
based on description logics, to describe the usability of a technique in a
given context. We show how these descriptions can be processed to select
appropriate techniques when automatically or semi-automatically generating a
3DVE. Keywords: Information-rich virtual environments; 3D interaction techniques; Human
computer interaction; Usability | |||
| Prompter "." Based Creating Thinking Support Communication System That Allows Hand-Drawing | | BIBAK | Full-Text | 783-790 | |
| Li Jen Chen; Jun Ohya; Shunichi Yonemura; Sven Forstmann; Yukio Tokunaga | |||
| Research into creative thinking-support tools and communication is commonly
focused on how to develop and share ideas between participants or with others.
In this paper, we propose a creative thinking support method that utilizes
randomly generated visual prompter (black circle) image patterns (VP-patterns)
and free hand-drawing and writing functions. Concepts and ideas of the research
have been explained together with the development of the systems (CSP1 and
CSP2). Experiments have been conducted in order to evaluate the potentials and
effectiveness of the system. From the results, a tendency of the system to inspire creative ideas in participants was observed. Keywords: Creative communication; visual stimuli; idea generation; creative thinking
support; self-expression; learn and develop creative ability | |||
| A Zoomable User Interface for Presenting Hierarchical Diagrams on Large Screens | | BIBA | Full-Text | 791-800 | |
| Christian Geiger; Holger Reckter; Roman Dumitrescu; Sascha Kahl; Jan Berssenbrügge | |||
| We present the design, implementation and initial evaluation of a zoomable interface dedicated to presenting a large hierarchical design model of a complex mechatronic system. The large hierarchical structure of the model is illustrated by means of a visual notation and consists of over 800 elements. An efficient presentation of this complex model is realized by means of a zoomable user interface that is rendered on a large Virtual Reality wall with a high resolution (3860 x 2160). We assume that this visualization set-up combined with dedicated interaction techniques for selection and navigation reduces the cognitive workload of a passive audience and supports the understanding of complex hierarchical structures. To validate this assumption we have designed a small experiment that compares the traditional visualization techniques PowerPoint and paper sheets with this new presentation form. | |||
| Phorigami: A Photo Browser Based on Meta-categorization and Origami Visualization | | BIBAK | Full-Text | 801-810 | |
| Shuo Hsiu Hsu; Pierre Cubaud; Sylvie Jumpertz | |||
| Phorigami is a photo browser whose meta-interface visualizes photos in groups according to an analysis of photo context. At the core of Phorigami, we propose a meta-categorization for regrouping photos. This categorization method encompasses the scope of current and expected recognition technologies. Two experiments based on manual classification tasks were conducted to study the pertinence of the proposed categorization method. We then outline our meta-interface, applying a different interaction technique to feature each photo group. Keywords: Categorization; digital photo collections; content management; interface
design | |||
| Sphere Anchored Map: A Visualization Technique for Bipartite Graphs in 3D | | BIBA | Full-Text | 811-820 | |
| Takao Ito; Kazuo Misue; Jiro Tanaka | |||
| Circular anchored maps have been proposed as a drawing technique to acquire knowledge from bipartite graphs, where nodes in one set are arranged on a circumference. However, the readability decreases when large-scale graphs are drawn. To maintain the readability in drawing large-scale graphs, we developed "sphere anchored maps," in which nodes in one set are arranged on a sphere. We describe the layout method for sphere anchored maps and the results of our user study. The results of our study revealed that more clusters of free nodes can be found using sphere anchored maps than using circular anchored maps. Thus, our maps have high readability, particularly around anchors. | |||
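One ingredient of such a layout, placing the anchor nodes of one node set roughly evenly over a sphere, can be sketched with a Fibonacci lattice as below. This is a generic placement technique chosen for illustration, not necessarily the authors' layout method.

```python
# Sketch: spread n anchor nodes approximately uniformly over a sphere surface.
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)           # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i                # golden-angle increments
    return np.column_stack([radius * np.sin(phi) * np.cos(theta),
                            radius * np.sin(phi) * np.sin(theta),
                            radius * np.cos(phi)])

anchors = fibonacci_sphere(50)          # 50 anchor nodes on the sphere surface
print(anchors.shape)
```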
| Motion Stroke-A Tablet-Based Interface for Motion Design Tool Using Drawing | | BIBAK | Full-Text | 821-829 | |
| Haruki Kouda; Ichiroh Kanaya; Kosuke Sato | |||
| Conventional animation tools burden users with complicated operations that require them to adjust too many variables to design a virtual model's 3D motion. In the proposed system, "Motion Stroke", users control these variables solely through 2D drawings on a tablet surface. The proposed interface allows users to create new motions through a trial-and-error process that is natural when drawing a picture on paper, offering a familiar way to design a virtual object's motion. We conducted several evaluations and confirmed that the proposed system allows users to explore their desired motions flexibly. Keywords: Motion design; Drawing; User interface | |||
| Tooling the Dynamic Behavior Models of Graphical DSLs | | BIBAK | Full-Text | 830-839 | |
| Tihamer Levendovszky; Tamás Mészáros | |||
| Domain-specific modeling is a powerful technique to describe complex systems
in a precise but still understandable way. Rapid creation of graphical
Domain-Specific Languages (DSLs) has been a focus of attention for many years. Research
efforts have proven that metamodeling is a promising way of defining the
abstract syntax of the language. It is also clear that DSLs can be developed to
describe the concrete syntax and the dynamic behavior. Previous research has
contributed a set of graphical DSLs to model the behavior ("animation") of
arbitrary graphical DSLs. This paper contributes practical techniques to
simplify our message handling method, automate the integration process, and
show where domain-specific model patterns can help to accelerate the simulation
modeling process. Keywords: Domain-Specific Modeling Languages; Metamodeling; Simulation | |||
| Pattern Recognition Strategies for Interactive Sketch Composition | | BIBAK | Full-Text | 840-849 | |
| Sébastien Macé; Éric Anquetil | |||
| The design of industrial pen-based systems for sketch composition and
interpretation is still a challenging problem when dealing with complex
domains. In this paper, we present a method that is based on an eager
interpretation of the user strokes, i.e. on an incremental process with visual
feedback to the user. We present the benefits of using such an interactive
approach in order to design efficient and robust sketch recognition. We focus
more specifically on the requirements for disturbing the user as little as possible. We show how we exploit pattern recognition strategies, for
instance the evaluation of adequacy measures thanks to the fuzzy set theory,
the exploitation of reject options, etc., to deal with each of these
difficulties. The DALI method that we present in this paper has been used to
design an industrial system for the composition of electrical sketches. We show
how the presented techniques are used in this system. Keywords: Hand-drawn shape recognition; pen-based interaction; visual languages and
grammars; fuzzy set theory; reject options | |||
| Specification of a Drawing Facility for Diagram Editors | | BIBA | Full-Text | 850-859 | |
| Sonja Maier; Mark Minas | |||
| The purpose of this paper is to give an overview of a drawing approach for the visualization of diagrams. The approach is tailored to editors for visual languages, which support structured editing as well as free-hand editing. In this approach, the editor developer visually specifies layout behavior. From this specification a drawing facility is generated. With the generated editor, the user may perform incremental diagram drawing at any time. When visualizing components, taking into account geometric dependencies between different components for layout computation is a challenging task. Therefore, we choose the visual languages Petri nets and GUI forms as running examples. Based on these examples, we show the applicability of our approach to graph-based and hierarchical visual languages. | |||
| A Basic Study on a Drawing-Learning Support System in the Networked Environment | | BIBAK | Full-Text | 860-868 | |
| Takashi Nagai; Mizue Kayama; Kazunori Itoh | |||
| The purpose of this study is to develop a support system in drawing-learning
within a networked environment. In this paper, we describe the results of
a potential assessment of our system. Two assessment approaches are presented. One examines the feasibility of a digital pen as a drawing tool. The other approach is
the effectiveness of the drawing-learning support in the networked environment,
based on the reuse of the learner's and/or expert's drawing process. The
drawing process model for supporting individual drawing-learning is also
discussed. Keywords: Drawing-learning; Learning Support System; On-line Class; Drawing Process;
Digital Pen | |||
| Benefit and Evaluation of Interactive 3D Process Data Visualization for the Presentation of Complex Problems | | BIBAK | Full-Text | 869-878 | |
| Dorothea Pantförder; Birgit Vogel-Heuser; Karin Schweizer | |||
| The increasing complexity of industrial plants and more intelligent
equipment technology lead to a growing amount of process data. The approach of
interactive 3D process data visualization and associated training concepts for
processes without a process model can assist operators in analyzing complex
processes. This paper describes a set of experiments that analyzed the benefits
of a 3D data presentation as part of an HMI in combination with different types
of operator training in process control. Keywords: 3D visualization; operator training | |||
| Modeling the Difficulty for Centering Rectangles in One and Two Dimensions | | BIBA | Full-Text | 879-888 | |
| Robert Pastel | |||
| Centering, positioning an object within specified bounds, is a common computer task, for example making selections using a standard mouse or on a touch screen using a finger. These experiments measured times for participants (n = 131) to position a rectangular cursor with various widths, p (10 px ≤ p ≤ 160 px), completely within rectangular targets with various widths, w, and tolerances, t = w-p (4 px ≤ t ≤ 160 px) in one and two dimensions. The analysis divides the movement time into two phases, the transport time and the centering time. Centering times are modeled well by 1/t. All models have high correlation, r² ≥ 0.95. | |||
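Fitting the reported centering-time model, a linear function of 1/t where t is the tolerance in pixels, is a one-line least-squares problem. The sketch below uses made-up data purely to show the fitting and the r² computation.

```python
# Minimal sketch: fit centering time ≈ a + b/t and report r².
import numpy as np

tolerance = np.array([4, 8, 16, 32, 64, 128, 160], dtype=float)   # t in px
center_ms = np.array([950, 620, 430, 330, 280, 255, 250], dtype=float)

b, a = np.polyfit(1.0 / tolerance, center_ms, deg=1)   # slope b, intercept a
pred = a + b / tolerance
r2 = 1 - np.sum((center_ms - pred) ** 2) / np.sum((center_ms - center_ms.mean()) ** 2)
print(f"a={a:.1f} ms, b={b:.1f} ms*px, r^2={r2:.3f}")
```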
| Composing Visual Syntax for Domain Specific Languages | | BIBA | Full-Text | 889-898 | |
| Luis Pedro; Matteo Risoldi; Didier Buchs; Bruno Barroca; Vasco Amaral | |||
| With the increasing interest in metamodeling techniques for Domain Specific
Modeling Languages (DSML) definition, there is a strong need to improve the
language modeling process. One of the problems to solve is language evolution.
Possible solutions include maximizing the reuse of metamodel patterns,
composing them to form new, more expressive DSMLs.
In this paper we improve the process of rapid prototyping of DSML graphical editors in meta-modeling tools, by defining composition rules for the graphical syntax layer. The goal is to provide formally defined operators to specify what happens to graphical mappings when their respective metamodels are composed. This improves reuse of Domain Specific Modeling Languages definitions and reduces development time. | |||
| The Effectiveness of Interactivity in Computer-Based Instructional Diagrams | | BIBA | Full-Text | 899-908 | |
| Lisa Whitman | |||
| This study investigates whether interaction between a student and instructional diagrams displayed on a computer can significantly improve understanding of the concepts the diagrams represent, compared with viewing animated or static instructional diagrams. Participants viewed interactive, animated, or static versions of multimedia tutorials that taught how a simple mechanical system (a lock) and a complex mechanical system (an automobile clutch) work. Participants were tested on recall and comprehension to determine which presentation style (static, animated, or interactive) has the greater impact on learning, and whether that impact is mediated by the complexity of the mechanical system. Participants who studied interactive multimedia presentations demonstrating how simple and complex mechanical systems work performed significantly better on comprehension tests for both systems than those who studied static or animated presentations. However, all participants performed similarly on recall tests. Research on the effectiveness of computer learning environments, and on how to optimize their potential for effective instruction through improved multimedia design, is important as computers are increasingly used for training and education. | |||