| Development of a High Definition Haptic Rendering for Stability and Fidelity | | BIBAK | Full-Text | 3-12 | |
| Katsuhito Akahane; Takeo Hamada; Takehiko Yamaguchi; Makoto Sato | |||
| In this study, we developed and evaluated a 10kHz high definition haptic
rendering system which could display at real-time video-rate (60Hz) for general
VR applications. Our proposal required both fidelity and stability in a
multi-rate system with a frequency ratio of approximately 160. Satisfying these
two criteria raised several problems. To achieve stability alone, we could use
a virtual coupling method to link a haptic display and a virtual object.
However, due to its low coupling impedance, this method is poorly suited to
realizing fidelity and quality of manipulation. Therefore, we developed a
multi-rate system with two-level up-sampling for both fidelity and stability of
haptic sensation. The first-level up-sampling achieved stability through the
virtual coupling, and the second level achieved fidelity through 10kHz haptic
rendering to compensate for the haptic quality lost in the
coupling process. We confirmed that, with our proposed system, we could achieve
both stability and fidelity of haptic rendering through a computer simulation
and a 6DOF haptic interface (SPIDAR-G) with a rigid object simulation engine. Keywords: Haptic interface; High definition haptic; SPIDAR | |||
| Designing a Better Morning: A Study on Large Scale Touch Interface Design | | BIBAK | Full-Text | 13-22 | |
| Onur Asan; Mark Omernick; Dain Peer; Enid N. H. Montague | |||
In this paper, we describe the design process of an individual prototype as
it relates to Large Scale Public Touch Interface (LSPTI) system design as a
whole, and examine ergonomic and usability concerns for LSPTI designs. The
design process includes inspirational design, contextual
design, storyboarding, paper prototyping, video prototyping and a user testing
study. We examined the design process at each stage and proposed improvements
for LSPTIs. Results indicate that the 'color-field' interaction methodology
might be a good alternative to traditional 'tabbed-hyperlink' interaction in
LSPTI implementations. Keywords: Intelligent space; interactive design; touch screen | |||
| Experimental Evaluations of Touch Interaction Considering Automotive Requirements | | BIBAK | Full-Text | 23-32 | |
| Andreas Haslbeck; Severina Popova; Michael Krause; Katrina Pecot; Jürgen Mayer; Klaus Bengler | |||
| Three different usability studies present evaluation methods for
cross-domain human-computer interaction. The first study compares different
input devices, such as a touch screen, a turn-push controller, and handwriting
recognition, with regard to human error probability, input speed, and
subjective usability assessment. The other experiments focused on typical automotive
issues: interruptibility and the influence of oscillations of the cockpit on
the interaction. Keywords: Touch; input medium; efficiency; effectiveness | |||
| More than Speed? An Empirical Study of Touchscreens and Body Awareness on an Object Manipulation Task | | BIBAK | Full-Text | 33-42 | |
| Rachelle Kristof Hippler; Dale S. Klopfer; Laura M. Leventhal; G. Michael Poor; Brandi A. Klein; Samuel D. Jaffee | |||
| Touchscreen interfaces do more than allow users to execute speedy
interactions. Three interfaces (touchscreen, mouse-drag, on-screen button) were
used in the service of performing an object manipulation task. Results showed
that planning time was shortest with touchscreens, that touchscreens allowed
users with high action knowledge to perform the task more efficiently, and that only
with touchscreens was the ability to rotate the object the same across all axes
of rotation. The concept of closeness is introduced to explain the potential
advantages of touchscreen interfaces. Keywords: Touchscreens; Reality based Interface Model; Cube Comparison Task; Mental
Rotation in Virtual Environments | |||
| TiMBA -- Tangible User Interface for Model Building and Analysis | | BIBAK | Full-Text | 43-52 | |
| Chih-Pin Hsiao; Brian R. Johnson | |||
| Designers in architectural studios, both in education and practice, have
worked to integrate digital and physical media ever since they began to utilize
digital tools in the design process [1]. Throughout the design process there
are significant benefits of working in the digital domain as well as benefits
of working physically; confronting architects with a difficult choice. We
believe emerging strategies for human-computer interaction such as tangible
user interfaces and computer vision techniques present new possibilities for
manipulating architectural designs. These technologies can help bridge between
the digital and physical worlds. In this paper, we discuss some of these
technologies, analyze several current design challenges, and present a
prototype that illustrates ways in which a broader approach to human-computer
interaction might resolve the problem. The ultimate goal of breaking down the
boundary between the digital and physical design platforms is to create a
unified domain of "continuous thought" for all design activities. Keywords: Tangible User Interfaces; Computer Vision; Architectural Design | |||
| Musical Skin: A Dynamic Interface for Musical Performance | | BIBAK | Full-Text | 53-61 | |
| Heng Jiang; Teng-Wen Chang; Cha-Lin Liu | |||
Compared to pop music, the audience for classical music has decreased
dramatically. One reason might be that communication between classical music
and its audience depends on expressive qualities such as timbre, rhythm, and
melody in the performance. The fine details of classical music, as well as the
emotion implied among the notes, remain implicit to the audience. We therefore
apply a new medium called dynamic skin to build an interface between performers
and audiences. This interface, called "Musical Skin", is implemented with a
dynamic skin design process based on the results of gesture analysis of
performers and audiences. Two skin systems of Musical Skin are implemented with
virtual visualization, actuators, and sensible spaces. The implementation is
tested using scenarios and interviews. Keywords: Dynamic skin; Sensing technology; Musical performance; Scenario | |||
| Analyzing User Behavior within a Haptic System | | BIBAK | Full-Text | 62-70 | |
| Steven L. Johnson; Yueqing Li; Chang Soo Nam; Takehiko Yamaguchi | |||
| Haptic technology has the potential to enhance education, especially for
those with severe visual impairments (those that are blind or who have low
vision), by presenting abstract concepts through the sense of touch. Despite
the advances in haptic research, little research has been conducted in the area
of haptic user behavior toward the establishment of haptic interface
development and design conventions. To advance haptic research closer to this
goal, this study examines haptic user behavior data collected from 9
participants utilizing a haptic learning system, the Heat Temperature Module.
ANOVA results showed that differences in the amount of haptic feedback result
in significant differences in user behavior, indicating that higher levels of
haptic friction feedback result in higher proportions of user interaction data.
Results also suggested that minimal thresholds of haptic friction feedback can
be established for a desired minimum proportion of user interaction data;
however, more research is needed to establish such thresholds. Keywords: Haptic User Interface; Thermal Device; User Behavior; Inclusive Design | |||
| Usability Testing of the Interaction of Novices with a Multi-touch Table in Semi Public Space | | BIBA | Full-Text | 71-80 | |
| Markus Jokisch; Thomas Bartoschek; Angela Schwering | |||
| Touch-sensitive devices are becoming more and more common. Many people use touch interaction, especially on handheld devices like iPhones or other mobile phones. But the question is, do people really understand the different gestures, i.e., do they know which gesture is the correct one for the intended action and do they know how to transfer the gestures to bigger devices and surfaces? This paper reports the results of usability tests which were carried out in semi-public space to explore people's ability to find gestures to navigate on a virtual globe. The globe is presented on a multi-touch table. Furthermore, the study investigated which additional gestures people use intuitively as compared to the ones which are implemented. | |||
| Niboshi for Slate Devices: A Japanese Input Method Using Multi-touch for Slate Devices | | BIBA | Full-Text | 81-89 | |
| Gimpei Kimioka; Buntarou Shizuki; Jiro Tanaka | |||
| We present Niboshi for slate devices, an input system that utilizes a multi-touch interface. Users hold the device with both hands and use both thumbs to input a character in this system. Niboshi for slate devices has four features that improve the performance of inputting text to slate devices: it has a multi-touch input, enables the device to be firmly held with both hands while text is input, can be used without visual confirmation of the input buttons, and has a large text display area with a small interface. The Niboshi system will enable users to type faster and requires less user attention to typing than existing methods. | |||
| An Investigation on Requirements for Co-located Group-Work Using Multitouch-, Pen-Based- and Tangible-Interaction | | BIBAK | Full-Text | 90-99 | |
| Karsten Nebe; Tobias Müller; Florian Klompmaker | |||
| Cooperation and coordination are crucial for solving many of our everyday tasks.
Even though many computerized tools exist, there is still a lack of effective
tools that support co-located group work. There are promising technologies that
can add to this, such as tabletop systems, multitouch, tangible and pen-based
interaction. There also exist general requirements and principles that aim to
support this kind of work. However, these requirements are relatively vague and
not focused on concrete usage scenarios. In this study, a user-centered
approach has been applied in order to develop a co-located group work system
based on those general requirements as well as on a real use case. The
requirements were transformed into concepts and a running prototype that was
evaluated with users. As a result, not only was the usability of the system
demonstrated, but a catalogue of even more specific requirements for co-located
group-work systems could also be derived. Keywords: multitouch; tangible interaction; collaboration; cooperation; co-located
group work; requirements; user centered design | |||
| Exploiting New Interaction Techniques for Disaster Control Management Using Multitouch-, Tangible- and Pen-Based-Interaction | | BIBAK | Full-Text | 100-109 | |
| Karsten Nebe; Florian Klompmaker; Helge Jung; Holger Fischer | |||
| This paper presents the process and results of a user-centered design
process that was applied in order to analyze how management processes in
disaster control can be optimized using new interaction techniques such as
multitouch, tangible, and pen-based interaction. The study was conducted in
cooperation with the German Federal Agency for Technical Relief, whose
statutory tasks include the provision of technical assistance at home and
humanitarian aid abroad. The major focus of this work is IT support for
coordination and management tasks. As a result, we introduce our prototype
application, the software and hardware requirements for it, and the interaction
design that was influenced by the outcome of the user-centered design process. Keywords: Interaction techniques; multitouch; tangible interaction; pen interaction;
disaster control management; THW; user centered design | |||
| Saving and Restoring Mechanisms for Tangible User Interfaces through Tangible Active Objects | | BIBAK | Full-Text | 110-118 | |
| Eckard Riedenklau; Thomas Hermann; Helge Ritter | |||
| In this paper we present a proof of concept for saving and restoring
mechanisms for Tangible User Interfaces (TUIs). We describe our actuated
Tangible Active Objects (TAOs) and explain the design which allows equal user
access to a dial-based fully tangible actuated menu metaphor. We present a new
application extending an existing TUI for interactive sonification of process
data with saving and restoring mechanisms and we outline another application
proposal for family therapists. Keywords: Human-Computer Interaction; Tangible User Interfaces; actuated Tangible
Objects; Tangible Active Objects; Save and Restore; Menus | |||
| Needle Insertion Simulator with Haptic Feedback | | BIBAK | Full-Text | 119-124 | |
| Seungjae Shin; Wanjoo Park; Hyunchul Cho; Se Hyung Park; Laehyun Kim | |||
| We introduce a novel injection simulator with haptic feedback which provides
a realistic physical experience to the medical user. Needle insertion requires
very dexterous hands-on skills and fast, appropriate responses to avoid
dangerous situations for patients. In order to train the injection operation,
the proposed injection simulator has been designed to generate delicate force
feedback that simulates needle penetration into various tissues such as skin,
muscle, and blood vessels. We developed and evaluated the proposed simulator
with medical doctors and found that the system offers very realistic haptic
feedback combined with dynamic visual feedback. Keywords: Needle insertion; Medical simulation; Haptic feedback | |||
| Measurement of Driver's Distraction for an Early Prove of Concepts in Automotive Industry at the Example of the Development of a Haptic Touchpad | | BIBAK | Full-Text | 125-132 | |
| Roland Spies; Andreas Blattner; Christian Lange; Martin Wohlfarter; Klaus Bengler; Werner Hamberger | |||
| This contribution shows how the user's behavior can be integrated into the
development process at a very early concept stage. This requires innovative
methodologies for objectifying human behavior, such as eye tracking and video
observation with the Dikablis/DLab environment in the Audi driving simulator.
A demonstrative example is the predevelopment of a touchpad with an adjustable
haptic surface as a concept idea for infotainment interaction with the Audi
MMI. First, an overview is given of the idea of capturing human behavior for
evaluating concept ideas at a very early stage of the development process and
of how it is realized with the Dikablis and DLab environment. The paper then
describes the concept idea of the innovative haptic touchpad control element,
the resulting research questions, and how these questions were addressed.
Finally, some example results are given. Keywords: Eye Tracking; haptic feedback; touchpad; interaction; driving simulator | |||
| A Tabletop-Based Real-World-Oriented Interface | | BIBAK | Full-Text | 133-139 | |
| Hiroshi Takeda; Hidetoshi Miyao; Minoru Maruyama; David K. Asano | |||
| In this paper, we propose a Tangible User Interface which enables users to
use applications on a PC desktop in the same way as a paper and pen on a desk
in the real world. Also, the proposed system is cheaper to implement and can be
easily set up anywhere. In using the proposed system, we found it easier to use
than normal application user interfaces. Keywords: Tangible user interface; DigitalDesk | |||
| What You Feel Is What I Do: A Study of Dynamic Haptic Interaction in Distributed Collaborative Virtual Environment | | BIBAK | Full-Text | 140-147 | |
| Sehat Ullah; Xianging Liu; Samir Otmane; Paul Richard; Malik Mallem | |||
| In this paper we present the concept of "What You Feel Is What I Do
(WYFIWID)". The concept is fundamentally based on a haptic guide that allows an
expert to control the hand of a remote trainee. When the haptic guide is
active, all movements of the expert's hand (via an input device) in 3D space
are haptically reproduced by the trainee's hand via a force feedback device. We
use the haptic guide to control the trainee's hand for writing alphabet letters
and drawing geometrical forms. Twenty subjects participated in experiments to
evaluate the approach. Keywords: Haptic guide; CVE; Virtual reality; Human performance | |||
| A Framework Interweaving Tangible Objects, Surfaces and Spaces | | BIBAK | Full-Text | 148-157 | |
| Andy Wu; Jayraj Jog; Sam Mendenhall; Ali Mazalek | |||
| In this paper, we will introduce the ROSS framework, an integrated
application development toolkit that extends across different tangible
platforms such as multi-user interactive tabletop displays, full-body
interaction spaces, RFID-tagged objects and smartphones with multiple sensors.
We will discuss how the structure of the ROSS framework is designed to
accommodate a broad range of tangible platform configurations and illustrate
its use on several prototype applications for digital media content interaction
within education and entertainment contexts. Keywords: tangible interaction; API; interactive surfaces | |||
| The Effect of Haptic Cues on Working Memory in 3D Menu Selection | | BIBAK | Full-Text | 158-166 | |
| Takehiko Yamaguchi; Damien Chamaret; Paul Richard | |||
| We investigated the effect of haptic cues on working memory in 3D menu
selection. We conducted a 3D menu selection task in two different conditions:
visual-only and visual-with-haptic. For the visual condition, participants were
instructed to select 3D menu items and memorize the order of selection. For the
visual-with-haptic condition, we applied a magnetic haptic effect to each 3D
menu item. Results showed that participants needed fewer trials to memorize the
selection sequence in the visual-with-haptic condition than in the visual-only
condition. Subjective data, collected from a questionnaire, indicated that the
visual-with-haptic condition was more suitable for selection and
memorization. Keywords: Virtual reality; haptic interaction; 3D menu; selection; learning | |||
| Face Recognition Using Local Graph Structure (LGS) | | BIBAK | Full-Text | 169-175 | |
| Eimad E. A. Abusham; Housam K. Bashir | |||
| In this paper, a novel algorithm for face recognition based on Local Graph
Structure (LGS) has been proposed. The features of local graph structures are
extracted from the texture in a local graph neighborhood and then forwarded to
the classifier for recognition. The idea of LGS comes from the dominating set
points of a graph of the image. Experimental results on ORL face database
images demonstrated the effectiveness of the proposed method. LGS is very
simple and fast, and can easily be applied as a preprocessing step in many
fields, such as biometrics, pattern recognition, and robotics. Keywords: Algorithm; Feature evaluation and selection; Pattern Recognition | |||
| Eye-gaze Detection by Image Analysis under Natural Light | | BIBAK | Full-Text | 176-184 | |
| Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama | |||
| We have developed an eye-gaze input system for people with severe physical
disabilities, such as amyotrophic lateral sclerosis (ALS). The system utilizes
a personal computer and a home video camera to detect eye-gaze under natural
light. Our practical eye-gaze input system is capable of classifying the
horizontal eye-gaze of users with a high degree of accuracy. However, it can
only detect three directions of vertical eye-gaze. If the detection resolution
in the vertical direction is increased, more indicators will be displayed on
the screen. To increase the resolution of vertical eye-gaze detection, we apply
a limbus tracking method, which is also the conventional method used for
horizontal eye-gaze detection. In this paper, we present a new eye-gaze
detection method by image analysis using the limbus tracking method. We also
report the experimental results of our new method. Keywords: Eye-gaze detection; Image analysis; Natural light; Limbus tracking method;
Welfare device | |||
| Multi-user Pointing and Gesture Interaction for Large Screen Using Infrared Emitters and Accelerometers | | BIBAK | Full-Text | 185-193 | |
| Leonardo Angelini; Maurizio Caon; Stefano Carrino; Omar Abou Khaled; Elena Mugellini | |||
| This paper presents PlusControl, a novel multi-user interaction system for
cooperative work with large screens. The system is designed for use with
economical deictic and control gestures in the air, and it allows the users
free mobility in the environment. PlusControl consists of lightweight worn
devices with infrared emitters and Bluetooth accelerometers. In this paper, the
architecture of the system is presented. A prototype has been developed in
order to test and evaluate the system's performance. Results show that PlusControl is a valuable
tool in cooperative scenarios. Keywords: human computer interaction; large screen; gesture recognition; visual
tracking; computer supported cooperative work; economic gestures | |||
| Gesture Identification Based on Zone Entry and Axis Crossing | | BIBA | Full-Text | 194-203 | |
| Ryosuke Aoki; Yutaka Karatsu; Masayuki Ihara; Atsuhiko Maeda; Minoru Kobayashi; Shingo Kagami | |||
| Hand gesture interfaces have been proposed as an alternative to the remote controller, and products with such interfaces have appeared in the market. We propose the vision-based unicursal gesture interface (VUGI) as an extension of our unicursal gesture interface (UGI) for TV remotes with touchpads. Since UGI allows users to select an item on a hierarchical menu comfortably, it is expected that VUGI will yield easy-to-use hierarchical menu selection. Moreover, gestures in the air such as VUGI offer an interface area that is larger than that provided by touchpads. Unfortunately, since the user loses track of his/her finger position, it is not easy to input commands continuously using VUGI. To solve this problem, we propose the dynamic detection zone and the detection axes. An experiment confirms that subjects can input VUGI commands continuously. | |||
| Attentive User Interface for Interaction within Virtual Reality Environments Based on Gaze Analysis | | BIBAK | Full-Text | 204-213 | |
| Florin Barbuceanu; Csaba Antonya; Mihai Duguleana; Zoltán Rusák | |||
| Eye movements can carry a rich set of information about someone's
intentions. For physically impaired people, gaze can be the only communication
channel they can use. People with severe disabilities are usually assisted by
helpers in everyday life activities, which over time can lead to the
development of an effective visual communication protocol between the helper
and the disabled person. This protocol allows them to communicate, to some
extent, only by glancing at one another. Starting from this premise, we propose
a new model of attentive user interface featuring some of the visual
comprehension abilities of a human helper. The purpose of this user interface
is to identify the user's intentions and thus assist him/her in achieving
simple interaction goals (i.e., object selection, task selection). The
attentive interface is implemented by way of statistical analysis of the user's
gaze data, based on a hidden Markov model. Keywords: gaze tracking; eye tracking; attentive user interface; hidden Markov model;
disabled people | |||
| A Low-Cost Natural User Interaction Based on a Camera Hand-Gestures Recognizer | | BIBAK | Full-Text | 214-221 | |
| Mohamed-Ikbel Boulabiar; Thomas Burger; Franck Poirier; Gilles Coppin | |||
| The search for new, simplified interaction techniques is mainly motivated by
the goal of improving communication with interactive devices. In this paper,
we present an interactive TV module capable of recognizing human gestures
through the low-cost PS3Eye camera. We recognize gestures by tracking human
skin blobs and analyzing the corresponding movements. The module provides a
means to control a TV in a ubiquitous computing environment. We also present a
new, free gesture icon library created to allow easy representation and
diagramming. Keywords: natural gesture interaction; low-cost gesture recognition; interactive TV
broadcast; ubiquitous computing | |||
| Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds | | BIBAK | Full-Text | 222-230 | |
| Francesco Carrino; Julien Tscherrig; Elena Mugellini; Omar Abou Khaled; Rolf Ingold | |||
| This paper presents a novel approach for multimodal interaction which
combines user mental activity (thoughts and emotions), user facial expressions
and user head movements. In order to avoid problems related to computer vision
(sensitivity to lighting changes, reliance on camera position, etc.), the
proposed approach doesn't make use of optical techniques. Furthermore, in order
to make human communication and control smooth, and avoid other environmental
artifacts, the used information is non-verbal. The head's movements (rotations)
are detected by a bi-axial gyroscope; the expressions and gaze are identified
by electromyography and electrooculography; the emotions and thoughts are
monitored by electroencephalography. In order to validate the proposed
approach, we developed an application where the user can navigate through a
virtual world using his head. We chose Google Street View as the virtual world.
The developed application was conceived for further integration with an
electric wheelchair in order to replace the virtual world with the real world. A first evaluation of
the system is provided. Keywords: Gesture recognition; Brain-Computer Interface; multimodality; navigation
through real and virtual worlds; human-computer interaction;
psycho-physiological signals | |||
| 3D-Position Estimation for Hand Gesture Interface Using a Single Camera | | BIBAK | Full-Text | 231-237 | |
| Seung-Hwan Choi; Ji-Hyeong Han; Jong-Hwan Kim | |||
| The hand gesture interface is a state-of-the-art technology for providing
better human-computer interaction. This paper proposes two methods to estimate
the 3D position of the hand for a hand gesture interface using a single camera.
By using the methods in an office environment, we show that the camera is not
restricted to a fixed position in front of the user and can be placed at any
position facing the user. Also, the reliability and usefulness of the proposed
methods are demonstrated by applying them to a mouse gesture recognition
software system. Keywords: Position Estimation; Hand Gesture Interface; Human Computer Interaction | |||
| Hand Gesture for Taking Self Portrait | | BIBAK | Full-Text | 238-247 | |
| Shaowei Chu; Jiro Tanaka | |||
| We present a new interaction technique enabling a user to manipulate a digital
camera when taking self-portrait pictures. The user can control camera
functions such as pan, tilt, and shutter using hand gestures. The camera
preview and GUIs are shown on a large display. We developed two interaction
techniques. The first is a hover button that triggers the camera's shutter. The
second is a cross-motion interface that controls pan and tilt. In this paper,
we explain the algorithms in detail and present a preliminary experiment
evaluating the speed and accuracy of our implementation. Finally, we discuss
promising applications of the proposed technique. Keywords: Hand gesture; self-portrait; human computer interaction; skin color; hand
detection; fingertip detection; optical-flow | |||
| Hidden-Markov-Model-Based Hand Gesture Recognition Techniques Used for a Human-Robot Interaction System | | BIBAK | Full-Text | 248-258 | |
| Chin-Shyurng Fahn; Keng-Yu Chu | |||
| In this paper, we present part of a human-robot interaction system that
recognizes meaningful gestures composed of continuous hand motions in real time
based on hidden Markov models. This system acting as an interface is used for
humans making various kinds of hand gestures to issue specific commands for
conducting robots. To accomplish this, we define four basic types of directive
gestures made by a single hand, which are moving upward, downward, leftward,
and rightward individually. They serve as fundamental conducting gestures.
Thus, if the other hand is incorporated into making gestures, there are at most
twenty-four kinds of compound gestures formed by combining the directive
gestures of both hands. At present, we prescribe eight kinds of compound
gestures employed in our developed human-robot interaction system, each of
which is assigned a motion or functional control command, including moving
forward, moving backward, turning left, turning right, stop, robot following,
robot waiting, and ready, so that users can easily operate an autonomous robot.
Experimental results reveal that our system achieves an average gesture
recognition rate of at least 96%. This result is very satisfactory and encouraging. Keywords: hand gesture recognition; hidden Markov model; human-robot interaction;
directive gesture; compound gesture | |||
| Manual and Accelerometer Analysis of Head Nodding Patterns in Goal-oriented Dialogues | | BIBAK | Full-Text | 259-267 | |
| Masashi Inoue; Toshio Irino; Nobuhiro Furuyama; Ryoko Hanada; Takako Ichinomiya; Hiroyasu Massaki | |||
| We studied communication patterns in face-to-face dialogues between people
for the purpose of identifying conversation features that can be exploited to
improve human-computer interactions. We chose to study the psychological
counseling setting as it provides good examples of task-oriented dialogues. The
dialogues between two participants, therapist and client, were video recorded.
The participants' head movements were measured by using head-mounted
accelerometers. The relationship between the dialogue process and head nodding
frequency was analyzed on the basis of manual annotations. The segments where
nods of the two participants correlated were identified on the basis of the
accelerometer data. Our analysis suggests that there are characteristic nodding
patterns in different dialogue stages. Keywords: Dialogue analysis; Face-to-face; Head nodding; Accelerometer | |||
| Facial Expression Recognition Using AAMICPF | | BIBAK | Full-Text | 268-274 | |
| Jun-Sung Lee; Chi-Min Oh; Chil-Woo Lee | |||
| Recently, much interest has been focused on facial expression recognition
research because of its importance in many application areas. In computer
vision, object recognition and state recognition are very important and
critical problems. A variety of approaches have been researched and proposed,
but the problems remain very difficult to solve. In this paper, we propose
using an Active Appearance Model (AAM) with a particle filter for a facial
expression recognition system. AAM is very sensitive to the initial shape, so
we improve accuracy using a particle filter whose particles are defined from
the initial state. Our system recognizes the facial expression using a
criterion expression vector for each expression. We find better results than
with the basic AAM, and a 10% improvement has been obtained with AAM-IC. Keywords: Facial expression recognition; Active Appearance Model; Particle Filter | |||
| Verification of Two Models of Ballistic Movements | | BIBAK | Full-Text | 275-284 | |
| Jui-Feng Lin; Colin G. Drury | |||
| The study of ballistic movement time and ballistic movement variability can
help us understand how our motor system works and further predict the
relationship of speed-accuracy tradeoffs while performing complex hand-control
movements. The purposes of this study were (1) to develop an experiment for
measuring ballistic movement time and variability and (2) to utilize the
measured data to test the application of the two models for predicting the two
types of ballistic movement data. In this preliminary study, four participants
conducted ballistic movements of specific amplitudes, by using a personal
computer, a drawing tablet and a self-developed experimental program. The
results showed that (1) the experiment successfully measured ballistic movement
time and two types of ballistic movement variability, (2) the two models
described the measured data well, and (3) a modified model was proposed to
better fit the variable error in the direction of the movement. Keywords: Fitts' law; aiming movement; ballistic movement; hand-control movement;
movement time; end-point variability | |||
| Gesture Based Automating Household Appliances | | BIBAK | Full-Text | 285-293 | |
| Wei Lun Ng; Ng Chee Kyun; Nor Kamariah Noordin; Borhanuddin Mohd Ali | |||
| Smart homes are a potential application that provides unobtrusive support for
the elderly or disabled and promotes independent living. To provide ubiquitous
service, a specially designed controller is needed. In this paper, a simple
gesture-based automation controller for various household appliances, ranging
from simple lighting to complex electronic devices, is introduced. The system
uses gesture-based recognition to read messages from the signer and sends
commands to the respective appliances through a household appliance sensing
system. A simple server has been constructed to run a simple deterministic
algorithm on the received messages, executing a matching exercise that in turn
triggers specific events. The proposed system offers a novel approach to smart
home control by utilizing gestures as a remote controller. This approach allows
users to flexibly and conveniently control multiple household appliances
with simple gestures. Keywords: Gesture; smart home; stand-alone server; flex sensor; deterministic
algorithm; remote controller | |||
| Upper Body Gesture Recognition for Human-Robot Interaction | | BIBA | Full-Text | 294-303 | |
| Chi-Min Oh; Md. Zahidul Islam; Jun-Sung Lee; Chil-Woo Lee; In-So Kweon | |||
| This paper proposes a vision-based human-robot interaction system for a mobile robot platform. A mobile robot first finds an interested person who wants to interact with it. Once it finds a subject, the robot stops in front of him or her and finally interprets her or his upper body gestures. We represent each gesture as a sequence of body poses and the robot recognizes four upper body gestures: "Idle", "I love you", "Hello left", and "Hello right". A key pose-based particle filter determines the pose sequence and key poses are sparsely collected from the pose space. A Pictorial Structure-based upper body model represents key poses and these key poses are used to build an efficient proposal distribution for the particle filtering. Thus, the particles are drawn from the key pose-based proposal distribution for the effective prediction of upper body pose. The Viterbi algorithm estimates the gesture probabilities with a hidden Markov model. The experimental results show the robustness of our upper body tracking and gesture recognition system. | |||
| Gaze-Directed Hands-Free Interface for Mobile Interaction | | BIBAK | Full-Text | 304-313 | |
| Gie-seo Park; Jong-gil Ahn; Gerard Jounghyun Kim | |||
| While mobile devices have allowed people to carry out various computing and
communication tasks everywhere, it has generally lacked the support for task
execution while the user is in motion. This is because the interaction schemes
of most mobile applications are centered around the device's visual display,
and when in motion (with important body parts, such as the head and hands,
moving), it is difficult for the user to recognize the visual output on the
small hand-carried display and respond with timely, proper input. In this
paper, we propose an interface which allows the user to interact with a mobile
device while in motion, without having to look at it or use their hands. More
specifically, the user interacts, by gaze and head motion gestures,
with an invisible virtual interface panel with the help of a head-worn gyro
sensor and aural feedback. Since the menu is one of the most prevailing methods
of interaction, we investigate and focus on the various forms of menu
presentation such as the layout and the number of comfortably selectable menu
items. With head motion, it turns out that a 4x2 or 3x3 grid menu is more effective.
The results of this study can be further extended for developing a more
sophisticated non-visual oriented mobile interface. Keywords: Mobile interface; Gaze; Head-controlled; Hands-free; Non-visual interface | |||
| Eye-Movement-Based Instantaneous Cognition Model for Non-verbal Smooth Closed Figures | | BIBAK | Full-Text | 314-322 | |
| Yuzo Takahashi; Shoko Koshi | |||
| This study attempts to perform a comprehensive investigation of non-verbal
instantaneous cognition of images through the "same-different" judgment
paradigm using non-verbal smooth closed figures, which are difficult to
memorize verbally, as materials for encoding experiments. The results suggested
that the instantaneous cognition of non-verbal smooth closed figures is
influenced by the contours' features (number of convex parts) and
inter-stimulus intervals. In addition, the percent-correct recognition results
suggested that the accuracy of the "same-different" judgment may be influenced
by differences between the points gazed at during memorizing and during
recognizing, and by factors involved in the visual search process during
recognition. The results may have implications for interaction design
guidelines for instruments that visualize a system state. Keywords: non-verbal information; cognition model; eye movements; same-different
judgment paradigm | |||
| VOSS -- A Voice Operated Suite for the Barbadian Vernacular | | BIBAK | Full-Text | 325-330 | |
| David Byer; Colin Depradine | |||
| Mobile devices are rapidly becoming the default communication device of
choice. The rapid advances being experienced in this area have resulted in
mobile devices undertaking many of the tasks once restricted to desktop
computers. One key area is that of voice recognition and synthesis. Advances in
this area have produced new voice-based applications such as visual voice mail
and voice activated search. The rise in popularity of these types of
applications has resulted in the incorporation of a variety of major languages,
ensuring a more global use of the technology. Keywords: Interfaces; mobile; Java; phone; Android; voice; speech; Windows Phone 7 | |||
| New Techniques for Merging Text Versions | | BIBAK | Full-Text | 331-340 | |
| Darius Dadgari; Wolfgang Stuerzlinger | |||
| Versioning helps users to keep track of different sets of edits on a
document. Version merging methods enable users to determine which parts of
which version they wish to include in the next or final version. We explored
several existing and two new methods (highlighting and overlay) in single and
multiple window settings. We present the results of our quantitative user
studies, which show that the new highlighting and overlay techniques are
preferred for version merging tasks. The results suggest that the most useful
methods are those which clearly and easily present information that is likely
important to the user, while simultaneously hiding less important information.
Also, multi-window version merging is preferred over single-window merging. Keywords: Versioning; version merging; document differentiation; text editing; text
manipulation | |||
| Modeling the Rhetoric of Human-Computer Interaction | | BIBAK | Full-Text | 341-350 | |
| Iris K. Howley; Carolyn Penstein Rosé | |||
| The emergence of potential new human-computer interaction styles enabled
through technological advancements in artificial intelligence, machine
learning, and computational linguistics makes it increasingly more important to
formalize and evaluate these innovative approaches. In this position paper, we
propose a multi-dimensional conversation analysis framework as a way to expose
and quantify the structure of a variety of new forms of human-computer
interaction. We argue that by leveraging sociolinguistic constructs referred to
as authoritativeness and heteroglossia, we can expose aspects of novel
interaction paradigms that must be evaluated in light of usability heuristics
so that we can approach the future of human-computer interaction in a way that
preserves the usability standards that have shaped the state-of-the-art that is
tried and true. Keywords: computational linguistics; dialogue analysis; usability heuristics | |||
| Recommendation System Based on Interaction with Multiple Agents for Users with Vague Intention | | BIBAK | Full-Text | 351-357 | |
| Itaru Kuramoto; Atsushi Yasuda; Mitsuru Minakuchi; Yoshihiro Tsujino | |||
| We propose an agent-based recommendation system interface for users with
vague intentions, based on interaction with multiple character agents that talk
with each other about their recommendations. This interface aims to help the
user clarify his/her intentions and/or latent opinions by hearing the agents'
conversation about recommendations. Whenever the user hits upon an opinion,
he/she can naturally join the conversation to obtain more suitable
recommendations. According to the results of an experimental evaluation, the
system with the proposed interface can introduce more recommendations, without
additional frustration, than conventional recommendation systems with a
single agent. Keywords: Vague intention; Character agent; Recommendation; Natural conversation | |||
| A Review of Personality in Voice-Based Man Machine Interaction | | BIBAK | Full-Text | 358-367 | |
| Florian Metze; Alan W. Black; Tim Polzehl | |||
| In this paper, we will discuss state-of-the-art techniques for
personality-aware user interfaces, and summarize recent work in automatically
recognizing and synthesizing speech with "personality". We present an overview
of personality "metrics", and show how they can be applied to the perception of
voices, not only the description of personally known individuals. We present
use cases for personality-aware speech input and/or output, and discuss
approaches at defining "personality" in this context. We take a
middle-of-the-road approach, i.e. we will not try to uncover all fundamental
aspects of personality in speech, but we'll also not aim for ad-hoc solutions
that serve a single purpose, for example to create a positive attitude in a
user, but do not generate transferable knowledge for other interfaces. Keywords: voice user interface; paralinguistic information; speech processing | |||
| Can Indicating Translation Accuracy Encourage People to Rectify Inaccurate Translations? | | BIBAK | Full-Text | 368-377 | |
| Mai Miyabe; Takashi Yoshino | |||
| The accuracy of machine translation affects how well people understand each
other when communicating. Translation repair can improve the accuracy of
translated sentences. Translation repair is typically only used when a user
thinks that his/her message is inaccurate. As a result, translation accuracy
suffers, because people's judgment in this regard is not always accurate. In
order to solve this problem, we propose a method that provides users with an
indication of the translation accuracy of their message. In this method, we
measure the accuracy of translated sentences using an automatic evaluation
method, providing users with three indicators: a percentage, a five-point
scale, and a three-point scale. We verified how well these indicators reduce
inaccurate judgments, and concluded the following: (1) the indicators did not
significantly affect the inaccurate judgments of users; (2) the indication
using a five-point scale obtained the highest evaluation, and that using a
percentage obtained the second highest evaluation. However, in this experiment,
the values we obtained from automatically evaluating translations were not
always accurate. We think that incorrect automatically evaluated values may
have led to some inaccurate judgments. If we improve the accuracy of an automatic
evaluation method, we believe that the indicators of translation accuracy can
reduce inaccurate judgments. In addition, the percentage indicator can
compensate for the shortcomings of the five-point scale. In other words, we
believe that users may judge translation accuracy more easily by using a
combination of these indicators. Keywords: multilingual communication; machine translation; back translation | |||
| Design of a Face-to-Face Multilingual Communication System for a Handheld Device in the Medical Field | | BIBAK | Full-Text | 378-386 | |
| Shun Ozaki; Takuo Matsunobe; Takashi Yoshino; Aguri Shigeno | |||
| In the medical field, a serious problem exists with regard to communication
between hospital staff and foreign patients. For example, medical translators
cannot provide support in cases in which round-the-clock support is required
during hospitalization. We propose the use of a multilingual communication
support system called the Petit Translator for communication between people
speaking different languages in the hospital setting. From the results of experiments performed in
such a setting, we found the following: (1) by clicking the conversation scene,
the interface can retrieve the parallel text more efficiently than the paper
media, and (2) when a questioner appropriately limits the type of reply for a
respondent, prompt conversation can occur. Keywords: parallel text; machine translation; handheld device; speech-to-speech
translation; medical communication | |||
| Computer Assistance in Bilingual Task-Oriented Human-Human Dialogues | | BIBAK | Full-Text | 387-395 | |
| Sven Schmeier; Matthias Rebel; Renlong Ai | |||
| In 2008, the percentage of people with a migration background in Germany had
already reached more than 15% (12 Million people). Among that 15%, the ratio of
seniors aged 50 years or older was 30% [1]. In most cases, their competence of
the German language is adequate for dealing with everyday situations. However,
in emergency or medical situations, their knowledge of German is sometimes not
sufficient to communicate with medical professionals, and vice versa. These
seniors are part of the main target group within the German Ministry of
Research and Education (BMBF) research project SmartSenior [2] and we have
developed a software system that assists multilingual doctor-patient
conversations to overcome language and cultural barriers. The main requirements
of such a system are robustness, accurate translations with respect to context
and mobility, adaptability to new languages and topics, and of course an
appropriate user interface. Furthermore, we have equipped the system with
additional information to convey cultural facts about different countries. In
this paper, we present the architecture and ideas behind the system as a whole
as well as related work in the area of computer aided translation and a first
evaluation of the system. Keywords: language barriers; human-human dialogue system; health care | |||
| Developing and Exploiting a Multilingual Grammar for Human-Computer Interaction | | BIBAK | Full-Text | 396-405 | |
| Xian Zhang; Rico Andrich; Dietmar Rösner | |||
| How to build a grammar that can accept as many user inputs as possible is
one of the central issues in human-computer interaction. In this paper, we
report on a corpus-based multilingual grammar that aims to parse naturally
occurring utterances frequently used by subjects in a domain-specific spoken
dialogue system. The goal is achieved by the following
approach: utterance classification, syntax analysis, and grammar formulation. Keywords: NLU; HCI; grammar; multilinguality; GF | |||
| Dancing Skin: An Interactive Device for Motion | | BIBAK | Full-Text | 409-416 | |
| Sheng-Han Chen; Teng-Wen Chang; Sheng-Cheng Shih | |||
| Dynamic skin, with its complex and dynamic characteristics, provides a
valuable interaction device for different contexts. The key factors are the
motion design and its corresponding structure/material. Starting with an
understanding of skin, and thus dynamic skin, we move to motion samples as case
studies for unfolding the design process of motion in dynamic skin. The problem
is to find a pattern of motion in dynamic skin: how the architectonic structure
can cause the surface to produce motion. We draw on various types of street
dance movement for the motion design. This systemic skin construction can serve
as a reference for building the basic structure of a folding-form-type skin and
its joints and for developing the motion it needs. It also provides dancers
with an interface through which they can interact with distant dancers over the
Internet, as a new way of manifesting and performing street dance. Keywords: Dynamic skin; Motion; Folding form type; Street dance | |||
| A Hybrid Brain-Computer Interface for Smart Home Control | | BIBAK | Full-Text | 417-426 | |
| Günter Edlinger; Clemens Holzner; Christoph Guger | |||
| Brain-computer interfaces (BCI) provide a new communication channel between
the human brain and a computer without using any muscle activities.
Applications of BCI systems comprise communication, restoration of movements or
environmental control. In this study, we propose a combined P300 and
steady-state visually evoked potential (SSVEP) based BCI system for ultimately
controlling a smart home environment. Firstly, a P300-based BCI system was
developed and tested in a virtual smart home environment, where it worked with
high accuracy and a high degree of freedom. Secondly, in order to initiate and
stop the operation of the P300 BCI, an SSVEP-based toggle switch was
implemented. Results indicate that a P300-based system is very well suited to
applications with several controllable devices where a discrete control command
is desired. An SSVEP-based system is more suitable if a continuous control
signal is needed and the number of commands is rather limited. The combination
of an SSVEP-based BCI as a toggle switch to initiate and stop the P300
selection yielded very high reliability and accuracy in all subjects. Keywords: Brain-Computer Interface; Smart Home; P300; SSVEP; electroencephalogram | |||
| Integrated Context-Aware and Cloud-Based Adaptive Home Screens for Android Phones | | BIBAK | Full-Text | 427-435 | |
| Tor-Morten Grønli; Jarle Hansen; Gheorghita Ghinea | |||
| The home screen in Android phones is a highly customizable user interface
where the users can add and remove widgets and icons for launching
applications. This customization is currently done on the mobile device itself
and will only create static content. Our work takes the concept of the Android
home screen [3] one step further and adds flexibility to the user interface by
making it context-aware and integrated with the cloud. Overall results
indicated that the users have a strong positive bias towards the application
and that the adaptation helped them to tailor the device to their needs by
using the different context aware mechanisms. Keywords: Android; cloud computing; user interface tailoring; context; context-aware;
mobile; ubiquitous computing; HCI | |||
| Evaluation of User Support of a Hemispherical Sub-display with GUI Pointing Functions | | BIBAK | Full-Text | 436-445 | |
| Shinichi Ike; Saya Yokoyama; Yuya Yamanishi; Naohisa Matsuuchi; Kazunori Shimamura; Takumi Yamaguchi; Haruya Shiba | |||
| In this paper, we discuss the effectiveness of a new human interface device
for PC user support. Recently, as the Internet utilization rate has increased
every year, the usage of PCs by elderly people has also increased in Japan.
However, the digital divide between elderly people and PC beginners has
widened. To eliminate this digital divide, we consider improving the users'
operability and visibility as our goal. We propose a new hemispherical
human-computer-interface device for PCs, which integrates a hemispherical
sub-display and a pointing device. Then we evaluate the interface device in
terms of the effectiveness of its operability and visibility. As seen from the
analyses of a subjective evaluation, our interface device received good
impressions from both elderly people and PC beginners. Keywords: PC User Support; Human-Computer-Interface Device; GUI Pointing Device;
Hemispherical Display | |||
| Uni-model Human System Interface Using sEMG | | BIBAK | Full-Text | 446-453 | |
| Srinivasan Jayaraman; Venkatesh Balasubramanian | |||
| Today's high-end computer systems contain technologies that only a few
individuals could have imagined a few years ago. However, conscious ergonomic
design of input devices is still lagging; for example, extensive usage of the
computer mouse results in various upper extremity musculoskeletal disorders.
This motivates the development of a Human System Interface (HSI) system that
acts as an alternative or replacement for the computer mouse, thereby helping
to avoid musculoskeletal disorders. The developed system can also act as an aid
for individuals with upper extremity disabilities. The above issue can be
addressed by developing a framework for an HSI that uses biological signals as
input. The objective of this paper is to develop a framework for an HSI system
using surface electromyogram signals for individuals with various degrees of
upper extremity disability. The framework involves data acquisition of muscle
activity, a translator algorithm that analyzes and translates the EMG into
control signals, and a platform-independent tool to provide mouse cursor
control. The developed HSI system is validated on applications such as web
browsing and simple arithmetic calculation with the help of a designed GUI
tool. Keywords: EMG; HSI; Computer Cursor Control | |||
| An Assistive Bi-modal User Interface Integrating Multi-channel Speech Recognition and Computer Vision | | BIBAK | Full-Text | 454-463 | |
| Alexey Karpov; Andrey Ronzhin; Irina S. Kipyatkova | |||
| In this paper, we present a bi-modal user interface aimed both for
assistance to persons without hands or with physical disabilities of
hands/arms, and for contactless HCI with able-bodied users as well. A human
being can manipulate a virtual mouse pointer by moving his/her head and
verbally communicate with a computer, giving speech commands instead of using computer input
devices. Speech is a very useful modality to reference objects and actions on
objects, whereas head pointing gesture/motion is a powerful modality to
indicate spatial locations. The bi-modal interface integrates a tri-lingual
system for multi-channel audio signal processing and automatic recognition of
voice commands in English, French and Russian as well as a vision-based head
detection/tracking system. It processes natural speech and head pointing
movements in parallel and fuses both informational streams in a united
multimodal command, where each modality transmits its own semantic information:
head position indicates 2D head/pointer coordinates, while speech signal yields
control commands. Testing of the bi-modal user interface and comparison with
contact-based pointing interfaces were carried out following the methodology of ISO 9241-9. Keywords: Multi-modal user interface; assistive technology; speech recognition;
computer vision; cognitive experiments | |||
| A Method of Multiple Odors Detection and Recognition | | BIBAK | Full-Text | 464-473 | |
| Dong-Kyu Kim; Yong-Wan Roh; Kwang-Seok Hong | |||
| In this paper, we propose a method to detect and recognize multiple odors,
and implement a multiple odor recognition system. Multiple odor recognition
technology has not yet been developed, since existing odor recognition
techniques, which have been researched and developed using component analysis
and pattern recognition techniques, deal only with single odors at a time.
Multiple odors represent a dynamic odor change from no odor to a single odor to
multiple odors, which is the most common situation in a real-world environment.
Therefore, techniques to sense and recognize dynamic odor changes are
necessary. To recognize multiple odors, the proposed method must include odor
data acquisition using a smell sensor array, odor detection using entropy,
feature extraction using Principal Component Analysis, recognition candidate
selection using Tree Search, and recognition using Euclidean Distance. To
verify the validity of this study, a performance evaluation was conducted using
a 132-odor database. As a result, the odor detection rate is approximately
95.83% and the odor recognition rate is approximately 88.97%. Keywords: Dynamic Odor Change; Multiple Odors; Odor Detection and recognition | |||
| Report on a Preliminary Study Using Breath Control and a Virtual Jogging Scenario as Biofeedback for Resilience Training | | BIBAK | Full-Text | 474-480 | |
| Jacquelyn Ford Morie; Eric Chance; J. Galen Buckwalter | |||
| Alternative methods of treating psychological stress are needed to treat
some veterans of recent military conflicts. The use of virtual world
technologies is one possible platform for treatment that is being explored by
the "Coming Home" project at the University of Southern California's Institute
for Creative Technologies (ICT). One of the novel ways ICT is attempting to
mitigate stress via virtual worlds is with a virtual jogging scenario, where
the movement of an avatar is controlled via rhythmic breathing into a standard
microphone. We present results from a preliminary study of 27 participants that
measured the mood and arousal effects produced by engaging in this scenario. Keywords: Breathing; Virtual World; Second Life; stress relief | |||
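To illustrate the kind of breath-driven control described in the abstract above (a speculative sketch, not the Coming Home implementation), the fragment below estimates breathing rhythm from microphone frame energy and maps it to an avatar speed; the sample rate, frame size, threshold, and speed mapping are all assumptions of ours.
```python
import numpy as np

FS = 16_000              # assumed microphone sample rate (Hz)
FRAME = 1_024            # samples per analysis frame
ENERGY_THRESHOLD = 0.02  # assumed RMS level that counts as an exhalation

def exhalation_frames(audio: np.ndarray) -> list:
    """Mark frames whose RMS energy exceeds the exhalation threshold."""
    frames = [audio[i:i + FRAME] for i in range(0, len(audio) - FRAME, FRAME)]
    return [float(np.sqrt(np.mean(f ** 2))) > ENERGY_THRESHOLD for f in frames]

def avatar_speed(audio: np.ndarray) -> float:
    """Map breathing rhythm to a jogging speed in [0, 1].

    Steady, rhythmic exhalations (several onsets per window) drive the
    avatar faster; silence or erratic input slows it down.
    """
    flags = exhalation_frames(audio)
    onsets = sum(1 for a, b in zip(flags, flags[1:]) if not a and b)
    window_s = len(flags) * FRAME / FS
    breaths_per_s = onsets / max(window_s, 1e-6)
    return float(np.clip(breaths_per_s / 0.5, 0.0, 1.0))  # 0.5 breaths/s -> full speed

# Toy usage: 5 seconds of synthetic audio with one burst of "breath" noise per second.
rng = np.random.default_rng(2)
audio = rng.standard_normal(FS * 5) * 0.005
for start in range(0, FS * 5, FS):
    audio[start:start + FS // 3] += rng.standard_normal(FS // 3) * 0.1
print(avatar_speed(audio))
```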
| Low Power Wireless EEG Headset for BCI Applications | | BIBAK | Full-Text | 481-490 | |
| Shrishail Patki; Bernard Grundlehner; Toru Nakada; Julien Penders | |||
| Miniaturized, low power and low noise circuits and systems are instrumental
in bringing EEG monitoring to the home environment. In this paper, we present a
miniaturized, low noise and low-power EEG wireless platform integrated into a
wearable headset. The wireless EEG headset achieves remote and wearable
monitoring of up to 8 EEG channels. The headset can be used with dry or gel
electrodes. The use of the headset as a brain computer interface is
demonstrated and evaluated. In particular, the capability of the system to
measure P300 complexes is quantified. Applications of this prototype are
foreseen in the clinical, lifestyle and entertainment domains. Keywords: Brain computer interface; EEG; headset; low power; wireless; wearable; ASIC | |||
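For readers unfamiliar with P300 measurement, the sketch below shows one common way such evoked responses are estimated, by averaging stimulus-locked EEG epochs; this is a generic illustration, not the authors' processing chain, and the sampling rate, epoch window, and array shapes are assumptions.
```python
import numpy as np

def p300_average(eeg: np.ndarray, stim_samples: list,
                 fs: int = 256, pre_s: float = 0.1, post_s: float = 0.6) -> np.ndarray:
    """Average stimulus-locked epochs of a single EEG channel.

    eeg          : 1-D array of EEG samples (one channel)
    stim_samples : sample indices at which target stimuli occurred
    Returns the averaged, baseline-corrected epoch; a P300 appears as a
    positive deflection roughly 300 ms after stimulus onset.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for s in stim_samples:
        if s - pre >= 0 and s + post <= len(eeg):
            epoch = eeg[s - pre:s + post]
            epochs.append(epoch - epoch[:pre].mean())  # baseline correction
    return np.mean(epochs, axis=0)

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
signal = rng.standard_normal(256 * 60)  # one minute of fake EEG at 256 Hz
stims = list(range(512, len(signal) - 512, 1024))
erp = p300_average(signal, stims)
print(erp.shape)  # roughly 0.7 s worth of samples
```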
| Virtual Mouse: A Low Cost Proximity-Based Gestural Pointing Device | | BIBAK | Full-Text | 491-499 | |
| Sheng Kai Tang; Wen Chieh Tseng; Wei Wen Luo; Kuo Chung Chiu; Sheng Ta Lin; Yen Ping Liu | |||
| Effectively addressing the portability of a computer mouse has motivated
researchers to generate diverse solutions. Eliminating the constraints of the mouse
form factor by adopting vision-based techniques has been recognized as an effective
approach. However, current solutions demand significant computing power and
require additional learning, which makes them impractical for industry. This
work presents the Virtual Mouse, a low-cost proximity-based pointing device,
consisting of 10 IR transceivers, a multiplexer, a microcontroller and pattern
recognition rules. With this embedded device on the side of a laptop computer,
a user can drive the cursor and activate related mouse events intuitively.
Preliminary testing results demonstrate the feasibility of the approach, and open
issues are reported for future improvement. Keywords: Proximity based device; IR sensor; pointing device; virtual mouse | |||
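A minimal sketch of the general idea (reading a multiplexed IR array and turning the activation pattern into cursor motion) is given below; the channel count matches the paper's ten transceivers, but the serial protocol, threshold, and mapping rule are our own assumptions, not the authors' recognition rules or firmware.
```python
import serial  # pyserial

N_SENSORS = 10   # ten IR transceivers, as in the paper
THRESHOLD = 120  # assumed activation level (0-255)

def read_frame(port: serial.Serial) -> list:
    """Read one frame of multiplexed IR readings (assumed 10 bytes per frame)."""
    return list(port.read(N_SENSORS))

def to_cursor_delta(frame: list) -> tuple:
    """Map the activation pattern to a cursor displacement.

    The weighted centroid of active sensors along the strip drives horizontal
    motion, and overall proximity drives vertical motion -- one plausible
    rule for illustration only.
    """
    active = [(i, v) for i, v in enumerate(frame) if v > THRESHOLD]
    if not active:
        return 0.0, 0.0
    total = sum(v for _, v in active)
    centroid = sum(i * v for i, v in active) / total
    dx = centroid - (N_SENSORS - 1) / 2   # offset from the strip center
    dy = total / (255 * N_SENSORS)        # closer finger -> larger value
    return dx, dy

if __name__ == "__main__":
    # Hypothetical device path and baud rate.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        while True:
            print(to_cursor_delta(read_frame(port)))
```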
| Innovative User Interfaces for Wearable Computers in Real Augmented Environment | | BIBAK | Full-Text | 500-509 | |
| Yun Zhou; Bertrand David; René Chalon | |||
| To be able to move freely in an environment, the user needs a wearable
configuration composed of a set of interaction devices that leaves at least one
hand free for interaction. Taking into account the location
(physical, geographical or logical) and the aimed activities of the user, the
interaction style and devices must be in appropriate relation with the context.
| In this paper, we present our design approach and a series of concrete
proposals for wearable user interfaces. Our research investigates innovative
environment-dependent and environment-independent interfaces. We describe these interfaces,
their configurations, real examples of use and the evaluation of selected
techniques. Keywords: One-hand interaction; wearable interface; augmented reality; pico projector;
context awareness; finger tracking | |||
| Influence of Prior Knowledge and Embodiment on Human-Agent Interaction | | BIBAK | Full-Text | 513-522 | |
| Yugo Hayashi; Victor V. Kryssanov; Kazuhisa Miwa; Hitoshi Ogawa | |||
| An experiment was conducted to capture characteristics of Human-Agent
Interactions in a collaborative environment. The goal was to explore the
following two issues: (1) Whether the user's emotional state is more stimulated
when the user has a human schema, as opposed to a computer agent schema, and
(2) Whether the user's emotional state is more stimulated when the user
interacts with a human-like ECA (Embodied Conversational Agent), as opposed to
a non human-like ECA or when there is no ECA. Results obtained in the
experiment suggest that: (a) participants with a human schema produce higher
ratings, compared to those with a computer agent schema, on the emotional
(interpersonal stress and affiliation emotion) scale of communication; (b) A
human-like interface is associated with higher ratings, compared to the cases
of a robot-like interface and a no ECA interface, on the emotional (e.g.,
interpersonal stress and affiliation emotion) scale of communication. Keywords: Embodied Conversational Agent; Human-Computer Interaction; User Interface | |||
| The Effect of Physical Embodiment of an Animal Robot on Affective Prosody Recognition | | BIBAK | Full-Text | 523-532 | |
| Myounghoon Jeon; Infantdani A. Rayan | |||
| Difficulty understanding or expressing affective prosody is a critical issue
for people with autism. This study was initiated with the question of how to
improve the emotional communication of children with autism through
technological aids. Researchers have encouraged the use of robots as new
intervention tools for children with autism, but no study has empirically
evaluated a robot against a traditional computer in such interventions. Against
this background, this study investigated the potential of an animal robot for
affective prosody recognition compared to a traditional PC simulator. For this
pilot study, however, only neurotypical students participated. Participants
recognized Ekman's basic emotions from both a dinosaur Robot, "Pleo" and a
virtual simulator of the Pleo. The physical Pleo showed more promising
recognition tendencies and was clearly favored over the virtual one. With this
promising result, we may be able to leverage the other advantages of the robot
in interventions for children with autism. Keywords: Affective prosody recognition; children with autism; animal robot | |||
| Older User-Computer Interaction on the Internet: How Conversational Agents Can Help | | BIBAK | Full-Text | 533-536 | |
| Wi-Suk Kwon; Veena Chattaraman; Soo In Shim; Hanan Alnizami; Juan E. Gilbert | |||
| Using a qualitative study employing a role-playing approach with human
agents, this study identifies the potential roles of conversational agents in
enhancing older users' computer interactions on the Internet in e-commerce
environments. Twenty-five participants aged 65 or older performed a given
shopping task with a human agent playing the role of a conversational agent.
The activity computer screens were video-recorded and the participant-agent
conversations were audio-recorded. Through navigation path analysis as well as
content analysis of the conversations, three major issues hindering older
users' Internet interaction are identified: (1) a lack of prior computer
knowledge, (2) a failure to locate information or buttons, and (3) confusion
about the meaning of information. The navigation path analysis also suggests
potential ways conversational agents may assist older users to optimize their
search strategies. Implications and suggestions for future studies are
discussed. Keywords: Conversational agent; older users; Internet; interaction | |||
| An Avatar-Based Help System for Web-Portals | | BIBAK | Full-Text | 537-546 | |
| Helmut Lang; Christian Mosch; Bastian Boegel; David Michel Benoit; Wolfgang Minker | |||
| In this paper we present an avatar-based help system for web-portals that
provides various kinds of user assistance. Along with helping users with
individual elements of a web page, it is also capable of offering step-by-step
guidance that supports users in completing specific tasks. Furthermore, users
can enter free-text questions in order to get additional information on related
topics. The avatar thus serves as a single point of reference whenever the user
feels the need for assistance. Unlike typical systems based on dedicated
help sections consisting of standalone HTML pages, help is instantly available
and displayed directly at the element the user is currently working on. Keywords: HCI; Computer Assisted Learning; Grid Computing | |||
| mediRobbi: An Interactive Companion for Pediatric Patients during Hospital Visit | | BIBAK | Full-Text | 547-556 | |
| Szu-Chia Lu; Nicole Blackwell; Ellen Yi-Luen Do | |||
| Young children often feel terribly anxious while visiting a doctor. We
designed mediRobbi, an interactive robotic companion, to help pediatric
patients feel more relaxed and comfortable in hospital visits. mediRobbi can
guide and accompany the pediatric patients through their medical procedures.
The sensors and servomotors enable mediRobbi to respond to its environmental
inputs and the reactions from young children as well. The ultimate goal of this
study is to transform an intimidating medical situation into a joyful adventure
game for the pediatric patients. Keywords: Pediatric care; robotics; Children behavior | |||
| Design of Shadows on the OHP Metaphor-Based Presentation Interface Which Visualizes a Presenter's Actions | | BIBAK | Full-Text | 557-564 | |
| Yuichi Murata; Kazutaka Kurihara; Toshio Mochizuki; Buntarou Shizuki; Jiro Tanaka | |||
| We describe the design of shadows for an overhead projector (OHP)
metaphor-based presentation interface that visualizes a presenter's actions. Our
interface works with graphics tablet devices. It superimposes a pen-shaped
shadow based on the position, altitude, and azimuth of the pen. A presenter can
easily point at the slide with the shadow, and the audience can observe the
presenter's actions through the shadow. We performed two presentations using a
prototype system and gathered feedback from them, and we decided on the design
of the shadows on the basis of that feedback. Keywords: presentation; graphics tablet devices; digital ink; shadow | |||
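As a rough illustration of what such a shadow computation might look like (our own sketch, not the paper's implementation), the fragment below turns a tablet pen's tip position, altitude, and azimuth into the endpoints of a pen-shaped shadow drawn on the slide; the pen length and the overhead-light projection rule are assumptions.
```python
import math

PEN_LENGTH_PX = 300  # assumed on-screen length of the virtual pen body

def pen_shadow(tip_x: float, tip_y: float,
               altitude_deg: float, azimuth_deg: float):
    """Return (tip, tail) screen coordinates of a pen-shaped shadow.

    The shadow starts at the pen tip and extends opposite to the pen's
    azimuth; a lower altitude (pen held flatter) yields a longer shadow,
    mimicking an overhead-projector light directly above the tablet.
    """
    length = PEN_LENGTH_PX * math.cos(math.radians(altitude_deg))
    az = math.radians(azimuth_deg)
    tail_x = tip_x - length * math.cos(az)
    tail_y = tip_y - length * math.sin(az)
    return (tip_x, tip_y), (tail_x, tail_y)

# Example: pen tip at (400, 300), held at 40 degrees altitude, pointing along +x.
print(pen_shadow(400, 300, altitude_deg=40, azimuth_deg=0))
```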
| Web-Based Nonverbal Communication Interface Using 3DAgents with Natural Gestures | | BIBAK | Full-Text | 565-574 | |
| Toshiya Naka; Toru Ishida | |||
| In this paper, we hypothesized that nonverbal communication using 3DAgents
with natural gestures has various advantages over traditional voice and video
communication alone, and we developed IMoTS (Interactive Motion Tracking
System) to verify this hypothesis. With this system, natural gestures for
3DAgents are captured easily through an interactive GUI from 2D video images in
which characteristic human behaviors appear, and these gestures are then
transmitted and reproduced as natural gestures of the 3DAgents. The
experimental results showed that the accuracy of captured gestures frequently
used in web communication was within the detectable limit. We also found that
human behaviors could be characterized by mathematical formulas and that some
of this information could be transmitted; in particular, personal traits such
as quirks and likeness had predominant effects on human impressions and
memories. Keywords: Computer graphics; Agent; Virtual reality; Nonverbal Communication | |||
| Taking Turns in Flying with a Virtual Wingman | | BIBAK | Full-Text | 575-584 | |
| Pim Nauts; Willem A. van Doesburg; Emiel Krahmer; Anita H. M. Cremers | |||
| In this study we investigate miscommunications in interactions between human
pilots and a virtual wingman, represented by our virtual agent Ashley. We made
an inventory of the type of problems that occur in such interactions using
recordings of Ashley in flight briefings with pilots and designed a perception
experiment to find evidence of human pilots providing cues on the occurrence of
miscommunications. In this experiment, stimuli taken from the recordings are
rated by naive participants on their successfulness. Results show that the largest
share of miscommunications concerns floor management. Participants are able to correctly
assess the success of interactions, thus indicating cues for such judgment are
present, though successful interactions are better recognized. Moreover, we see
stimulus modality (audio, visual or combined) does not influence the ability of
participants to judge the success of the interactions. From these results, we
present recommendations for further developing virtual wingmen. Keywords: Human-machine interaction; turn-taking; floor management; training;
simulation; embodied conversational agents; virtual humans | |||
| A Configuration Method of Visual Media by Using Characters of Audiences for Embodied Sport Cheering | | BIBAK | Full-Text | 585-592 | |
| Kentaro Okamoto; Michiya Yamamoto; Tomio Watanabe | |||
| In sports bars, where people watch live sports on TV, it is not possible to
experience the atmosphere of the stadium. In this study, we focus on the
importance of embodiment in sport cheering, and we develop a prototype of an
embodied cheering support system. A stadium-like atmosphere can be created by
arraying crowds of audience characters in a virtual stadium, and users can
perceive a sense of unity and excitement by cheering with embodied motions and
interacting with the audience characters. Keywords: Embodied media; embodied interaction; sports cheering | |||
| Introducing Animatronics to HCI: Extending Reality-Based Interaction | | BIBAK | Full-Text | 593-602 | |
| G. Michael Poor; Robert J. K. Jacob | |||
| As both software and hardware technologies have improved over the past two
decades, a number of interfaces have been developed by HCI researchers. As
these researchers began to explore the next generation of interaction styles,
it was inevitable that they would use a lifelike robot (or animatronic) as the
basis for interaction. However, the main use up to this
point for animatronic technology had been "edutainment." Only recently was
animatronic technology even considered for use as an interaction style. In this
research, various interaction styles (conventional GUI, AR, 3D graphics, and
introducing an animatronic user interface) were used to instruct users on a 3D
construction task which was constant across the various styles. From this
experiment the placement, if any, of animatronic technology in the
reality-based interaction framework will become more apparent. Keywords: Usability; Animatronics; Lifelike Robotics; Reality-Based Interaction;
Interaction Styles | |||
| Development of Embodied Visual Effects Which Expand the Presentation Motion of Emphasis and Indication | | BIBAK | Full-Text | 603-612 | |
| Yuya Takao; Michiya Yamamoto; Tomio Watanabe | |||
| Although visual presentation software typically has a pen function, it tends
to remain unused by most users. In this paper, we propose a concept of embodied
visual effects that expresses emphasis and indication of presentation motions
using a pen display. First, we measured the timing of presentation motions of
pen use achieved while in sitting and standing positions. Next, we evaluated
the timing of underlining and explanation through a synthesis analysis from the
viewpoint of the attendees. Then, on the basis of the results of our
measurements and evaluation, we developed several visual effects. These visual
effects, which express the embodied motions and control the embodied timing,
are implemented as system prototypes. Keywords: Human interaction; presentation; timing control; visual effect | |||
| Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction | | BIBAK | Full-Text | 613-622 | |
| Kaori Tanaka; Tatsunori Matsui; Kazuaki Kojima | |||
| Although humanlike robots and computer agents are fundamentally perceived as
familiar, highly similar external representations occasionally reduce their
familiarity. We experimentally investigated the relationship between the
similarity and the familiarity of multi-modal agents that had face and voice
representations; the results indicate that greater similarity of the agents
did not simply increase their familiarity. Our experimental results imply that
the external representation of computer agents for communicative interaction
should not be extremely similar to a human but only appropriately similar in
order to gain familiarity. Keywords: Multi-modal agent; face; voice; similarity; familiarity; uncanny valley | |||