
HCI International 2013: 15th International Conference on HCI, Part IV: Interaction Modalities and Techniques

Fullname: HCI International 2013: 15th International Conference on Human-Computer Interaction, Part IV: Interaction Modalities and Techniques
Editors: Masaaki Kurosu
Location: Las Vegas, Nevada
Dates: 2013-Jul-21 to 2013-Jul-26
Volume: 4
Publisher: Springer Berlin Heidelberg
Series: Lecture Notes in Computer Science 8007
Standard No: DOI: 10.1007/978-3-642-39330-3; hcibib: HCII13-4; ISBN: 978-3-642-39329-7 (print), 978-3-642-39330-3 (online)
Papers: 81
Pages: 768
Links: Online Proceedings | Conference Home Page
  1. HCII 2013-07-21 Volume 4
    1. Speech, Natural Language and Auditory Interfaces
    2. Gesture and Eye-Gaze Based Interaction
    3. Touch-Based Interaction
    4. Haptic Interaction
    5. Graphical User Interfaces and Visualisation

HCII 2013-07-21 Volume 4

Speech, Natural Language and Auditory Interfaces

Controlling Interaction in Multilingual Conversation BIBAKFull-Text 3-12
  Christina Alexandris
The present approach aims to provide a framework for facilitating multilingual interaction in online business meetings with an agenda, as well as in similar service-sector applications where interaction is less task-oriented. A basic problem to be addressed is controlling the topics covered during the interaction and the expression of opinion. In the proposed template-based approach, the system acts as a mediator that controls the dialog flow, within the modeled framework of the sublanguage-specific and pragmatically related design.
Keywords: Templates; Simple Interlinguas; Non Task-related Speech Acts; Skype; subtitles
Linguistic Processing of Implied Information and Connotative Features in Multilingual HCI Applications BIBAKFull-Text 13-22
  Christina Alexandris; Ioanna Malagardi
Implied information and connotative features may not always be easily detected or processed in multilingual Human-Computer Interaction Systems for the International Public, especially in applications related to the Service Sector. The proposed filter concerns the detection of implied information and connotative features in HCI applications processing online texts and may be compatible with Interlinguas including the signalization of connotative features, if necessary. The proposed approach combines features detected in the lexical and morpho-syntactic level, and in the prosodic and paralinguistic levels.
Keywords: Gricean Cooperativity Principle; online texts; Interlinguas; Morphology; prosodic and paralinguistic features
Investigating the Impact of Combining Speech and Earcons to Communicate Information in E-government Interfaces BIBAKFull-Text 23-31
  Dimitrios Rigas; Badr Almutairi
This research investigates the use of multimodal metaphors to communicate information in the interface of an e-government application, in order to reduce complexity in the visual communication by incorporating auditory stimuli. These issues are often neglected in the interfaces of e-government applications. This paper investigates the possibility of using multimodal metaphors to enhance usability and increase trust between the user and the application, using an empirical comparative study. The multimodal metaphors investigated include text, earcons and recorded speech. More specifically, the experiment aims to investigate usability in terms of efficiency, effectiveness and user satisfaction in the context of a multimodal e-government interface, as opposed to a typical text-with-graphics interface. The investigation was evaluated by 30 users and comprised two different versions of the experimental e-government tool's interface. The results demonstrated the usefulness of the tested metaphors in enhancing e-government usability and enabling users to attain better communication performance. In addition, empirically derived guidelines showed that the use of multimodal metaphors in an e-government system could contribute significantly to enhancing usability and increasing trust between a user and an e-government interface. These results provide a paradigm of a design framework for the use of multimodal metaphors in e-government interfaces.
Keywords: e-government; Recorded Speech; Earcons; Multimodal; Trust; HCI
Evaluation of WikiTalk -- User Studies of Human-Robot Interaction BIBAKFull-Text 32-42
  Dimitra Anastasiou; Kristiina Jokinen; Graham Wilcock
The paper concerns the evaluation of Nao WikiTalk, an application that enables a Nao robot to serve as a spoken open-domain knowledge access system. With Nao WikiTalk the robot can talk about any topic the user is interested in, using Wikipedia as its knowledge source. The robot suggests some topics to start with, and the user shifts to related topics by speaking their names after the robot mentions them. The user can also switch to a totally new topic by spelling its first few letters. As well as speaking, the robot uses gestures, nods and other multimodal signals to enable clear and rich interaction. The paper describes the setup of the user studies and reports on the evaluation of the application, based on various factors reported by the 12 users who participated. The study compared the users' expectations of the robot interaction with their actual experience of the interaction. We found that the users were impressed by the lively appearance and natural gesturing of the robot, although in many respects they had higher expectations regarding the robot's presentation capabilities. However, the results are positive enough to encourage research along these lines.
Keywords: Evaluation; multimodal human-robot interaction; gesturing; Wikipedia
Robust Multi-Modal Speech Recognition in Two Languages Utilizing Video and Distance Information from the Kinect BIBAKFull-Text 43-48
  Georgios Galatas; Gerasimos Potamianos; Fillia Makedon
We investigate the performance of our audio-visual speech recognition system in both English and Greek under the influence of audio noise. We present the architecture of our recently built system that utilizes information from three streams including 3-D distance measurements. The feature extraction approach used is based on the discrete cosine transform and linear discriminant analysis. Data fusion is employed using state-synchronous hidden Markov models. Our experiments were conducted on our recently collected database under a multi-speaker configuration and resulted in higher performance and robustness in comparison to an audio-only recognizer.
Keywords: Audio-visual automatic speech recognition; multi-sensory fusion; languages; linear discriminant analysis; depth information; Microsoft Kinect
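The state-synchronous fusion mentioned in this abstract is commonly written as a stream-weighted product of per-stream observation likelihoods. The following is the standard multi-stream HMM form only, not necessarily the authors' exact parameterization; the three streams A (audio), V (video) and D (distance/depth) are assumed from the abstract's mention of three streams:

b_j(\mathbf{o}_t) \;=\; \prod_{s \in \{A,\,V,\,D\}} \Bigl[\, b_{j,s}\bigl(\mathbf{o}_t^{(s)}\bigr) \Bigr]^{\lambda_s}, \qquad \sum_{s} \lambda_s = 1

Here b_{j,s} is the output density of HMM state j for stream s, \mathbf{o}_t^{(s)} is the stream-s feature vector at time t, and the stream weights \lambda_s control how much each modality influences the combined likelihood.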
The Ecological AUI (Auditory User Interface) Design and Evaluation of User Acceptance for Various Tasks on Smartphones BIBAKFull-Text 49-58
  Myounghoon Jeon; Ju-Hwan Lee
With the rapid development of the touch screen technology, some usability issues of smartphones have been reported [1]. To tackle those user experience issues, there has been research on the use of non-speech sounds on the mobile devices [e.g., 2, 3-7]. However, most of them have focused on a single specific task of the device. Given the varying functions of the smartphone, the present study designed plausibly integrated auditory cues for diverse functions and evaluated user acceptance levels from the ecological interface design perspective. Results showed that sophisticated auditory design could change users' preference and acceptance of the interface and the extent depended on usage contexts. Overall, participants gave significantly higher scores on the functional satisfaction and the fun scales in the sonically-enhanced smartphones than in the no-sound condition. The balanced sound design may free users from auditory pollution and allow them to use their devices more pleasantly.
Keywords: Auditory user interface; ecological user interface design; smartphones; user acceptance
Speech-Based Text Correction Patterns in Noisy Environment BIBAFull-Text 59-66
  Ladislav Kunc; Tomáš Macek; Martin Labský; Jan Kleindienst
We present a study focused on observing methods of dictation and error correction between humans in a noisy environment. The purpose of this study is to gain insight into natural communication patterns which can then be applied to human-machine interaction. We asked 10 subjects to conduct the standard Lane Change Test (LCT) while dictating messages to a human counterpart who had to note down the message texts. Both parties were located in separate rooms and communicated over Skype. Both were exposed to varying types and levels of noise, which made their communication difficult and forced the subjects to deal with misunderstandings. Dictation of both short and longer messages was tested. We observed how the subjects behaved and we analyzed their communication patterns. We identified and described more than 20 elementary observations related to communication techniques such as synchronization and grounding of parties, error checking and error correction. We also report frequencies of use for each communication pattern and provide basic characteristics of driving distraction during the test.
Multimodal Smart Interactive Presentation System BIBAKFull-Text 67-76
  Hoang-An Le; Khoi-Nguyen C. Mac; Truong-An Pham; Vinh-Tiep Nguyen; Minh-Triet Tran
The authors propose a system that allows presenters to control presentations in a natural way with their body gestures and vocal commands. Thus a presentation no longer strictly follows a rigid sequential structure but can be delivered in various flexible and content-adapted scenarios. Our proposed system fuses three interaction modules: gesture recognition with Kinect 3D skeletal data, key-concept detection by context analysis of natural speech, and small-scale hand gesture recognition with haptic data from smartphone sensors. Each module processes in real time, with accuracies of 95.0%, 91.2%, and 90.1%, respectively. The system uses events generated from the three modules to trigger pre-defined scenarios in a presentation and make the experience more exciting for audiences.
Keywords: Smart environment; presentation system; natural interaction; gesture recognition; speech recognition
Multimodal Mathematical Expressions Recognition: Case of Speech and Handwriting BIBAKFull-Text 77-86
  Sofiane Medjkoune; Harold Mouchere; Simon Petitrenaud; Christian Viard-Gaudin
In this work, we propose to combine two modalities, handwriting and speech, to build a mathematical expression recognition system. Based on two sub-systems which process each modality, we explore various fusion methods to resolve ambiguities which naturally occur independently. The results that are reported on the HAMEX bimodal database show an improvement with respect to a mono-modal based system.
Keywords: Multimodality; graphical languages; data fusion; handwriting; speech
'Realness' in Chatbots: Establishing Quantifiable Criteria BIBAKFull-Text 87-96
  Kellie Morrissey; Jurek Kirakowski
The aim of this research is to generate measurable evaluation criteria acceptable to chatbot users. Results of two studies are summarised. In the first, fourteen participants were asked to do a critical incident analysis of their transcriptions with an ELIZA-type chatbot. Results were content analysed, and yielded seven overall themes. In the second, these themes were made into statements of an attitude-like nature, and 20 participants chatted with five winning entrants in the 2011 Chatterbox Challenge and five which failed to place. Latent variable analysis reduced the themes to four, resulting in four subscales with strong reliability which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots.
Keywords: Chatbot; user-agent; intelligent assistant; naturalness; convincing; usability; evaluation; quantitative; questionnaire; Turing; Chatterbox
Grounding and Turn-Taking in Multimodal Multiparty Conversation BIBAKFull-Text 97-106
  David Novick; Iván Gris
This study explores the empirical basis for multimodal conversation control acts. Applying conversation analysis as an exploratory approach, we attempt to illuminate the control functions of paralinguistic behaviors in managing multiparty conversation. We contrast our multiparty analysis with an earlier dyadic analysis and, to the extent permitted by our small samples of the corpus, contrast (a) conversations where the conversants did or did not have an artifact, and (b) conversations in English among Americans with conversations in Spanish among Mexicans. Our analysis suggests that speakers tend not to use gaze shifts to cue nodding for grounding and that the presence of an artifact reduced listeners' gaze at the speaker. These observations remained relatively consistent across the two languages.
Keywords: Dialog; proxemics; gaze; turn-taking; multicultural; multiparty
Situated Multiparty Interaction between Humans and Agents BIBAFull-Text 107-116
  Aasish Pappu; Ming Sun; Seshadri Sridharan; Alex Rudnicky
A social agent such as a receptionist or an escort robot encounters challenges when communicating with people in open areas. The agent must know not to react to distracting acoustic and visual events and it needs to appropriately handle situations that include multiple humans, being able to focus on active interlocutors and appropriately shift attention based on the context. We describe a multiparty interaction agent that helps multiple users arrange a common activity. From the user study we conducted, we found that the agent can discriminate between active and inactive interlocutors well by using the skeletal and azimuth information. Participants found the addressee much clearer when an animated talking head was used.
Enhancing Human Computer Interaction with Episodic Memory in a Virtual Guide BIBAKFull-Text 117-125
  Felix Rabe; Ipke Wachsmuth
Have you ever found yourself in front of a computer and asking it aloud: "Why?" We have constructed a cognitively motivated episodic memory system that enables a virtual guide to respond to this question. The guide, a virtual agent based on a belief-desire-intention (BDI) architecture, is employed in a Virtual Reality (VR) scenario where he accompanies a human visitor on a tour through a city. In this paper we explain how the agent memorizes events and episodes according to an event-indexing model and how the interaction is enhanced by using these memories. We argue that due to the cognitively motivated nature of the event-indexing model, every interaction situation can be described, memorized, recalled and explained by the agent.
Keywords: Episodic Memory; Event Indexing; Virtual Guide
System of Generating Japanese Sound Symbolic Expressions Using Genetic Algorithm BIBAFull-Text 126-134
  Yuichiro Shimizu; Tetsuaki Nakamura; Maki Sakamoto
Japanese has a large number of sound symbolic words (onomatopoeia), which associate sounds with sensory experiences. According to previous studies, quantifying the relationship between phonemes and images makes it possible to predict the images evoked by onomatopoeia and to estimate their meanings. In this study, we applied this quantification method and developed a system for generating Japanese onomatopoeias using a genetic algorithm (GA). Our method uses 90 SD scales for expressing various impressions, and genes for the genetic algorithm that denote each phonological symbol in Japanese. Through the genetic algorithm, the system generates and proposes onomatopoeias appropriate for the impressions input by users. In an evaluation of our system, the impressions of the onomatopoeias generated by our method were similar to the impressions input to generate them.
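As a rough illustration of the generate-and-evaluate loop this abstract describes, a minimal GA sketch is shown below. The phoneme inventory, the per-phoneme impression contributions, the fitness function and all GA parameters are placeholders, not the authors' actual 90-scale model.

import random

# Hypothetical per-phoneme contributions to a few impression scales
# (the paper quantifies this relationship over 90 SD scales).
PHONEMES = ["ka", "sa", "ta", "ra", "pu", "yu", "ri", "n"]
PHONEME_IMAGE = {p: [random.uniform(-1, 1) for _ in range(3)] for p in PHONEMES}

def predict_impression(genes):
    """Predict an impression vector as the mean of per-phoneme contributions."""
    vecs = [PHONEME_IMAGE[g] for g in genes]
    return [sum(col) / len(genes) for col in zip(*vecs)]

def fitness(genes, target):
    """Higher is better: negative squared distance to the target impression."""
    pred = predict_impression(genes)
    return -sum((p - t) ** 2 for p, t in zip(pred, target))

def mutate(genes, rate=0.2):
    return [random.choice(PHONEMES) if random.random() < rate else g for g in genes]

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def generate_onomatopoeia(target, length=4, pop_size=30, generations=50):
    pop = [[random.choice(PHONEMES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return "".join(max(pop, key=lambda g: fitness(g, target)))

# Example: request an onomatopoeia matching a 3-dimensional target impression.
print(generate_onomatopoeia(target=[0.8, -0.2, 0.5]))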
A Knowledge Elicitation Study for Collaborative Dialogue Strategies Used to Handle Uncertainties in Speech Communication While Using GIS BIBAKFull-Text 135-144
  Hongmei Wang; Ava Gailliot; Douglas Hyden; Ryan Lietzenmayer
Existing speech-enabled Geographical Information Systems (GIS) need capabilities to handle the uncertainties that are inherent in natural language communication. The system must have an appropriate knowledge base for such capabilities so that it can effectively handle various uncertainty problems in speech communication. The goal of this study is to collect knowledge about how humans use collaborative dialogues to solve various uncertainty problems while using GIS. This paper describes a knowledge elicitation study that we designed and conducted toward this goal. The knowledge collected can be used to develop the knowledge base of a speech-enabled GIS or other speech-based information systems.
Keywords: GIS; Knowledge elicitation study; Uncertainties; Human-GIS Communication; Collaborative dialogue strategies

Gesture and Eye-Gaze Based Interaction

Context-Based Bounding Volume Morphing in Pointing Gesture Application BIBAKFull-Text 147-156
  Andreas Braun; Arthur Fischer; Alexander Marinc; Carsten Stocklöw; Martin Majewski
In the last few years the number of intelligent systems has been growing rapidly, and classical interaction devices like mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g. based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpretation of the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture we are able to compensate for the weaknesses of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of such a system in a qualitative evaluation.
Keywords: Multimodal Interaction; Speech Recognition; Goal-based Interaction; Gesture Recognition
Gesture vs. Gesticulation: A Test Protocol BIBAKFull-Text 157-166
  Francesco Carrino; Antonio Ridi; Rolf Ingold; Omar Abou Khaled; Elena Mugellini
In recent years, gesture recognition has gained increased attention in the Human-Computer Interaction community. However, gesture segmentation, which is one of the most challenging tasks in gesture recognition applications, is still an open issue. Gesture segmentation has two main objectives: first, detecting when a gesture begins and ends; second, recognizing whether a gesture is meant to be meaningful for the machine or is a non-command gesture (such as gesticulation). This paper proposes a novel test protocol for the evaluation of different techniques for separating command gestures from non-command gestures. Finally, we show how we adapted our test protocol to design a touchless, always-available interaction system, in which the user communicates directly with the computer through a wearable and "intimate" interface based on electromyographic signals.
Keywords: Gesture segmentation; gesture interaction; test protocol; muscle-computer interface; system evaluation and interaction
Functional Gestures for Human-Environment Interaction BIBAKFull-Text 167-176
  Stefano Carrino; Maurizio Caon; Omar Abou Khaled; Rolf Ingold; Elena Mugellini
In this paper, we describe an opportunistic model for human-environment interaction. Such a model is conceived to adapt the expressivity of a small lexicon of gestures through the use of generic functional gestures, lowering the cognitive load on the user and reducing the system complexity. An interactive entity is modeled as a finite-state machine. A functional gesture is defined as the semantic meaning of an event that triggers a state transition, not as the movement to be performed. An interaction scenario has been designed in order to evaluate the features of the proposed model and to investigate how its application can enhance post-WIMP human-environment interaction.
Keywords: natural interaction; functional gestures; pervasive computing; human-computer interaction
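The core idea in this abstract, an interactive entity as a finite-state machine whose transitions are triggered by the semantic meaning of a gesture rather than by a specific movement, can be sketched as follows. The lamp entity, its states and the event names are hypothetical examples, not taken from the paper.

class InteractiveEntity:
    """An interactive entity as a finite-state machine: a functional gesture
    is the semantic event that triggers a state transition, independent of
    the actual movement performed."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # (state, event) -> next state

    def on_gesture(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# Hypothetical lamp controlled by generic functional gestures.
lamp = InteractiveEntity("off", {("off", "activate"): "on",
                                 ("on", "deactivate"): "off",
                                 ("on", "increase"): "bright"})
print(lamp.on_gesture("activate"))   # -> "on"
print(lamp.on_gesture("increase"))   # -> "bright"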
A Dynamic Fitting Room Based on Microsoft Kinect and Augmented Reality Technologies BIBAKFull-Text 177-185
  Hsien-Tsung Chang; Yu-Wen Li; Huan-Ting Chen; Shih-Yi Feng; Tsung-Tien Chien
In recent years, more and more researchers have tried to bring Microsoft Kinect and Augmented Reality (AR) into everyday life. In this paper, we utilize both Kinect and AR to build a dynamic fitting room. We can automatically measure a user's clothing size in popular brands or different country standards. A user can use gestures to select clothes for fitting. Our proposed system dynamically projects a video of the user wearing the selected clothes, in accordance with the video captured by the Kinect. This system can be used in clothing stores, in e-commerce clothes shopping, and at home when you are unsure which clothes to wear. This can greatly reduce the time spent trying on clothes.
Keywords: Dynamic Fitting Room; Kinect; Augmented Reality
Gesture-Based Applications for Elderly People BIBAKFull-Text 186-195
  Weiqin Chen
According to the literature, normal ageing is associated with a decline in sensory, perceptual, motor and cognitive abilities. When designing applications for elderly people, it is crucial to take into consideration the decline in functions. For this purpose, gesture-based applications that allow for direct manipulations can be useful, as they provide natural and intuitive interactions. This paper examines gesture-based applications for the elderly and studies that have investigated these applications, and it identifies opportunities and challenges in designing such applications.
Keywords: Gesture; elderly; direct manipulation; accessibility
MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device BIBAKFull-Text 196-204
  Enkhbat Davaasuren; Jiro Tanaka
When people collaborate with multiple large screens, gesture interactions will be widely used. However, in conventional methods of gesture interaction, simultaneous interaction is difficult when there are multiple users. In this study we propose a method using a wearable mobile device which enables multi-user, hand-gesture-only interaction. In our system, the user wears a camera-equipped mobile device like a pendant and interacts with a large screen.
Keywords: Gesture; Gestural Interface; Large Screen; Mobile; Wearable Device; Multi-User
Head-Free, Remote Gaze Detection System Based on Pupil-Corneal Reflection Method with Using Two Video Cameras -- One-Point and Nonlinear Calibrations BIBAKFull-Text 205-214
  Yoshinobu Ebisawa; Kiyotaka Fukumoto
We developed a pupil-corneal reflection method-based gaze detection system, which allows head movements and achieves easy gaze calibration. The proposed gaze detection theory determines gaze points on a PC screen from the vector from the corneal reflection to the pupil center, the 3D pupil position, the positions of the two cameras, etc. In the gaze calibration procedure, after the user is asked to gaze at one specific calibration target at the center of the PC screen, the nonlinear characteristic of the eyes is automatically corrected while the user is using the gaze system. The experimental results show that the proposed calibration method improved the precision of gaze detection while browsing web pages. In addition, the average gaze error in visual angle is less than 0.6 degrees for the nine head positions.
Keywords: Gaze detection; Gaze calibration; Head movement; Pupil
Design and Usability Analysis of Gesture-Based Control for Common Desktop Tasks BIBAKFull-Text 215-224
  Farzin Farhadi-Niaki; S. Ali Etemad; Ali Arya
We have designed and implemented a vision-based system capable of interacting with users' natural arm and finger gestures. Using depth-based vision has reduced the effect of ambient disturbances such as noise and lighting conditions. Various arm and finger gestures are designed, and a system capable of detecting and classifying gestures is developed and implemented. Finally, the gesture recognition routine is linked to a simplified desktop for usability and human factors studies. Several factors such as precision, efficiency, ease-of-use, pleasure, fatigue, naturalness, and overall satisfaction are investigated in detail. Through different simple and complex tasks, it is concluded that finger-based inputs are superior to arm-based ones in the long run. Furthermore, it is shown that arm gestures cause more fatigue and appear less natural than finger gestures. However, factors such as time, overall satisfaction, and ease of use were not affected by selecting one over the other.
Keywords: Usability study; human factors; arm/finger gestures; WIMP
Study of Eye-Glance Input Interface BIBAKFull-Text 225-234
  Dekun Gao; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Optical measurement devices for eye movements are generally expensive, and it is often necessary to restrict users' head movements when various eye-gaze input interfaces are used. Previously, we proposed a novel eye-gesture input interface that utilized electrooculography amplified via AC coupling and that does not require a head-mounted display [1]. Instead, combinations of eye-gaze displacement directions were used as the selection criteria. When used, this interface showed a success rate of approximately 97.2%, but it was necessary for the user to declare his or her intention to perform an eye gesture by blinking or pressing an enter key. In this paper, we propose a novel eye-glance input interface that can consistently recognize glance behavior without a prior declaration, and provide a decision algorithm that we believe is suitable for eye-glance input on small screens such as those of smartphones. In experiments using our improved eye-glance input interface, we achieved a detection rate of approximately 93% and a direction determination success rate of approximately 79.3%. A smartphone screen design for use with the eye-glance input interface is also proposed.
Keywords: Eye gesture; eye-glance; AC-EOG; smartphone; Screen design
Multi-User Interaction with Shadows BIBAFull-Text 235-242
  Tomomi Gotoh; Takahiro Kida; Munehiro Takimoto; Yasushi Kambayashi
Recent mobile devices such as smartphones exhibit performance as good as desktop PCs, and can be used more intuitively than PCs by using fingers. On the other hand, the drawback of such a device is its small size. Its display is just big enough for a single user, but is too small for multi-user interaction. In order to overcome this drawback, research on projecting the display with a handheld projector has expanded. Most of these studies, however, do not allow users to manipulate the projected image in a direct manner. In this paper, we propose operations on projected images through shadows. We can create a shadow by shading the light of the projector with a finger. The shadow can easily be scaled by adjusting the distance between the finger and the projector. Also, since the shadow makes good contrast with the white light of the projector, it can easily be recognized through a camera. Using these properties of the shadow, we have implemented a series of operations required on the desktop, as well as file transfer as a basic multi-user interaction. We show that users can perform these operations intuitively with the shadows of two fingertips, as if they were handling a tablet PC through multi-touch.
Intent Capturing through Multimodal Inputs BIBAKFull-Text 243-251
  Weimin Guo; Cheng Cheng; Mingkai Cheng; Yonghan Jiang; Honglin Tang
Virtual manufacturing environments need complex and accurate 3D human-computer interaction. One main problem of current virtual environments (VEs) is the heavy cognitive and motor-operational load placed on users. This paper investigates multimodal intent delivery and intent inferring in virtual environments. An eye-gaze modality is added to a virtual assembly system. Typical intents expressed by the dual-hand and eye-gaze modalities are designed. The reliability and accuracy of the eye-gaze modality are examined through experiments. The experiments showed that multimodal cooperation of eye gaze and hands has great potential to enhance the naturalness and efficiency of human-computer interaction (HCI).
Keywords: Eye tracking; multimodal input; virtual environment; human-computer interaction; virtual assembly; intent
Robust Hand Tracking in Realtime Using a Single Head-Mounted RGB Camera BIBAFull-Text 252-261
  Jan Hendrik Hammer; Jürgen Beyerer
In this paper, novel 2D hand-tracking algorithms used in a system for hand gesture interaction are presented. New types of head-mounted Augmented-Reality devices offer the possibility to visualize digital content in the user's field of view. Hand gestures are an intuitive modality for interacting with these head-mounted devices. Generally, the recognition of hand gestures consists of two main steps: the first is hand tracking and the second is gesture recognition. This paper concentrates on the first step, hand tracking. For wearing comfort, these glasses-like systems use only a single camera to capture the user's field of view. Therefore, new algorithms for hand tracking without depth data are presented and compared to state-of-the-art algorithms, using a thorough evaluation methodology for comparing trajectories.
Multimodal Feedback in First Encounter Interactions BIBAKFull-Text 262-271
  Kristiina Jokinen
Human interactions are predominantly conducted via verbal communication which allows presentation of sophisticated propositional content. However, much of the interpretation of the utterances and the speaker's attitudes are conveyed using multimodal cues such as facial expressions, hand gestures, head movements and body posture. This paper reports some observations on multimodal communication and feedback giving activity in first encounter interactions, and discusses how head, hand, and body movements are used in conversational interactions as means of visual interaction management, i.e. unobtrusive ways to control the interaction and construct shared understanding among the interlocutors. The observations and results contribute to the models for coordinating communication in human-human conversations as well as in interactions between humans and intelligent situated agents.
Keywords: multimodal interaction; feedback; nodding; head movements
Keyboard Clawing: Input Method by Clawing Key Tops BIBAKFull-Text 272-280
  Toshifumi Kurosawa; Buntarou Shizuki; Jiro Tanaka
We present a directional and quantitative input method by clawing key tops, Keyboard Clawing. The method allows a user to input a direction and quantity at the same time without moving his/her hands much from the keyboard's home position. As a result, the user can seamlessly continue typing before and after inputting the direction and quantity. We found that clawing direction is classified using clawing sounds with an accuracy of 68.2% and that our method can be used to input rough quantity.
Keywords: keyboard; acoustic sensing; gesture; input method
Finger Controller: Natural User Interaction Using Finger Gestures BIBAKFull-Text 281-290
  Unseok Lee; Jiro Tanaka
We present a new natural user interaction technique using finger gesture recognition and finger identification with Kinect depth data. We developed gesture-based drawing, multi-touch, and 3D-space mapping interactions, and implemented three types of interfaces using these interactions: air-drawing, image manipulation and video manipulation. In this paper, we explain the finger gesture recognition method, the finger identification method and the natural user interactions in detail. We report a preliminary experiment evaluating the accuracy of finger identification and finger gesture recognition, and a user questionnaire evaluating interaction satisfaction. Finally, we discuss the results of the evaluation and our contributions.
Keywords: NUI; Human Computer Interaction; Finger Gesture Recognition; Finger Identification
A Method for Single Hand Fist Gesture Input to Enhance Human Computer Interaction BIBAFull-Text 291-300
  Tao Ma; William Wee; Chia Yung Han; Xuefu Zhou
The detection and tracking of hand gestures in general has been widely explored, yet the fist gesture in particular has been neglected. Methods for processing the fist gesture would allow a more natural user experience in human-machine interaction (HMI); however, this requires a deeper understanding of fist kinematics. For the purpose of achieving a grasping-moving-rotating activity with a single hand (SH-GMR), the extraction of fist rotation is necessary. In this paper, a feature-based Fist Rotation Detector (FRD) is proposed to bring more flexibility to interactions involving hand manipulation in the virtual world. In comparison with other candidate methods, edge-based methods are shown to be an appropriate way to tackle the detection. We find a set of "fist lines" that can be easily extracted and used reliably to determine the fist rotation. The proposed FRD is described in detail as a two-step approach: fist shape segmentation and a fist rotation angle retrieval process. A comparison with manually measured ground truth data shows that the method is robust and accurate. A virtual reality application using hand gesture control with the FRD shows that hand gesture interaction is enhanced by the SH-GMR.
Kinect©, as Interaction Device with a Tiled Display BIBAFull-Text 301-311
  Amilcar Meneses Viveros; Erika Hernández Rubio
The use of high-resolution tiled displays has become popular in the scientific community. User interaction with these devices depends on the hardware configuration and the software in use. The variety of hardware configurations and software generates various types of execution and interaction modes on the tiled display, and this diversity has resulted in the lack of a standard for human-computer interaction. This paper shows the results of interaction between users and a tiled display using the Kinect©. The results help us find improvements in the hardware configurations of these display arrays and in application design, and move toward standards for user-defined motion gestures.
Study on Cursor Shape Suitable for Eye-gaze Input System BIBAKFull-Text 312-319
  Atsuo Murata; Raku Uetsugi; Takehito Hayami
The aim of this study was to identify the cursor shape suitable for eye-gaze interfaces. The conventional arrow shape was, irrespective of the number of targets in the display, not suitable for an eye-gaze input system from the perspective of task completion time, number of errors, and subjective rating of usability. It is recommended that the cursor shape of an eye-gaze input system be a cross or an ellipse. When the distance between targets is wider, the ellipse type is preferable.
Keywords: cursor shape; speed; accuracy; eye-gaze input system
Study on Character Input Methods Using Eye-gaze Input Interface BIBAKFull-Text 320-329
  Atsuo Murata; Kazuya Hayashi; Makoto Moriwaka; Takehito Hayami
Four character input methods for an eye-gaze input interface were compared from the viewpoints of input speed, input accuracy, and subjective ratings of ease of input and fatigue. The four input methods were (1) I-QGSM (vertical), (2) I-QGSM (circle), (3) an eye-fixation method, and (4) screen buttons. While the eye-fixation method (3) led to faster input, the I-QGSM (vertical) led to fewer errors. In conclusion, it is difficult to develop a character input method that satisfies both speed and accuracy.
Keywords: Character input; eye-gaze input system; I-QGSM; eye-fixation; speed; accuracy
Proposal of Estimation Method of Stable Fixation Points for Eye-gaze Input Interface BIBAKFull-Text 330-339
  Atsuo Murata; Takehito Hayami; Keita Ochi
As almost all existing eye-gaze input devices suffer from fine and frequent shaking of fixation points, an effective and stable method for estimating fixation points is proposed, so that the stabilized fixation points enable users to point easily even at smaller targets. The estimation algorithm is based on an image processing technique (Hough transformation). An experiment was carried out to verify the effectiveness of an eye-gaze input system that made use of the proposed fixation point estimation method. On both evaluation measures, the proposed method was found to assure more stable cursor movement than the traditional, commercial method.
Keywords: Eye-gaze input; fixation point; stabilization; task completion time; pointing error
Modeling Situation-Dependent Nonverbal Expressions for a Pair of Embodied Agent in a Dialogue Based on Conversations in TV Programs BIBAKFull-Text 340-347
  Keita Okuuchi; Koh Kakusho; Takatsugu Kojima; Daisuke Katagami
A mathematical model for controlling the nonverbal expressions of a pair of embodied agents designed to present various information through their dialogue is discussed. The nonverbal expressions of a human during conversation with others depend on those of the other participants as well as on the situation of the conversation. The proposed model represents the relationship between the nonverbal expressions of a pair of embodied agents in different conversational situations by a constraint function, so that the nonverbal expression of each agent reproduces, by minimizing the function, the characteristics of nonverbal expressions observed in human conversation in various situations in TV programs.
Keywords: Embodied agent; Human-agent interaction; Nonverbal expression
Research on a Large Digital Desktop Integrated in a Traditional Environment for Informal Collaboration BIBAKFull-Text 348-357
  Mariano Perez Pelaez; Ryo Suzuki; Ikuro Choh
We are building a digital desktop system designed to support the tasks that are usually performed around a traditional desktop. Tabletop platforms are not new environments, especially as a research topic, but most existing systems try to adapt the computer work style or only serve as platforms for experimenting with new features. In contrast, our targets are to support the traditional work flow around desktops, without forcing users to modify their methods, and to build the system as a complete tool for everyday tasks. We want to provide a usable environment with computer-supported features for raising productivity and enhancing the user experience. To do this, we carried out a field study of traditional desktop activities, and with this knowledge we designed new tools and features that fit users' real needs and environment.
Keywords: Natural interface; interaction design; workgroup support; collaborative environment
Using Kinect for 2D and 3D Pointing Tasks: Performance Evaluation BIBAKFull-Text 358-367
  Alexandros Pino; Evangelos Tzemis; Nikolaos Ioannou; Georgios Kouroupetroglou
We present a study to comparatively evaluate the performance of computer-based 2D and 3D pointing tasks. In our experiments, based on the ISO 9241-9 standard methodology, a Microsoft Kinect device and a mouse were used by seven participants. For the 3D experiments we introduced a novel experiment layout, supplementing the ISO standard. We examine the pointing devices' conformance to Fitts' law and we measure a number of extra parameters that describe the cursor movement trajectories more accurately. Throughput, measured in bits per second, is the most important performance measure. For the 2D tasks using the Microsoft Kinect, throughput is almost 39% lower than using the mouse, Target Re-Entry is 10 times higher, and the Missed Clicks count is almost 50% higher. However, for the 3D tasks the mouse has 9% lower throughput than the Kinect, while Target Re-Entry and Missed Clicks are almost identical. Our results are also compared to older studies, and we finally show that the Kinect, operated by the user's hand and voice, is a suitable and effective input method for pointing and clicking, especially in 3D tasks.
Keywords: Fitts' law; 3D pointing; ISO 9241-9; Microsoft Kinect; Gesture User Interface
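For reference, the ISO 9241-9 style effective throughput used in the comparison above is typically computed as shown below. This is a generic sketch of the standard formula; the paper's exact per-participant and per-condition averaging scheme is not reproduced here, and the trial data in the example are made up.

import math
import statistics

def effective_throughput(distances, movement_times, endpoint_errors):
    """Effective throughput in bits/s for one condition (ISO 9241-9 style).

    distances: nominal target distances D
    movement_times: movement times MT in seconds
    endpoint_errors: signed endpoint deviations along the task axis
    """
    # Effective width from the spread of selection endpoints (96% of hits).
    w_e = 4.133 * statistics.stdev(endpoint_errors)
    d_mean = statistics.mean(distances)
    mt_mean = statistics.mean(movement_times)
    id_e = math.log2(d_mean / w_e + 1)   # effective index of difficulty (bits)
    return id_e / mt_mean                # throughput (bits per second)

# Toy example with made-up trial data.
print(effective_throughput([300] * 5, [0.9, 1.0, 1.1, 0.95, 1.05],
                           [-12.0, 5.0, 8.0, -3.0, 10.0]))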
Conditions of Applications, Situations and Functions Applicable to Gesture Interface BIBAKFull-Text 368-377
  Taebeum Ryu; Jaehong Lee; Myung Hwan Yun; Ji Hyoun Lim
Although there have been many studies on developing new gesture-based devices and gesture interfaces, little is known about which applications, situations and functions are applicable to gesture interfaces. This study developed a hierarchy of the conditions of applications (devices), situations and functions which are applicable to gesture interfaces. We searched about 120 papers relevant to designing and applying gesture interfaces and vocabularies to find the conditions of applications, situations and functions under which gestures are applicable. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy of them was developed to evaluate the applicability of applications, situations and functions to gesture interfaces. In summary, 10, 10 and 6 conditions of applications, situations and functions, respectively, were identified. In addition, the hierarchy of gesture-applicable conditions of applications, situations and functions was developed based on the semantic similarity, ordering, and serial or parallel relationships among them.
Keywords: Gesture interface; Applicability; Gesture application; Situation; Functions
Communication Analysis of Remote Collaboration System with Arm Scaling Function BIBAKFull-Text 378-387
  Nobuchika Sakata; Tomoyuki Kobayashi; Shogo Nishida
This research focuses on remote collaboration in which a local worker works with real objects under the guidance of a remote instructor. In this research area, there are systems which combine a ProCam unit, consisting of a camera and a projector in the work environment, with a tabletop system, consisting of a display, a depth sensor and a camera in the remote instructor's environment. As a functional enhancement, there is a system that uses a method for scaling the instructor's embodiment. That system makes it possible for the instructor to give instructions smoothly even for small objects, and it affected task completion time in a user study that involved placing block clusters smaller than the size of fingers. We first re-analyzed the video of the previous experiment, then identified the problems the previous work could not solve, and propose solutions to them.
Keywords: Remote collaboration; Scaling Method and Video Analysis
Two Handed Mid-Air Gestural HCI: Point + Command BIBAKFull-Text 388-397
  Matthias Schwaller; Simon Brunner; Denis Lalanne
This paper presents work aimed at developing and evaluating various two-handed mid-air gestures to operate a computer accurately and with little effort. The main idea driving the design of these gestures is that one hand is used for pointing, and the other hand for four standard commands: selection, drag & drop, rotation and zoom. Two chosen gesture vocabularies are compared in a user evaluation. The paper further presents a novel evaluation methodology and the application developed to evaluate the four commands first separately and then together. In our user evaluation, we found significant differences for the rotation and zooming gestures. The iconic gesture vocabulary had better performance and was better rated by the users than the technological gesture vocabulary.
Keywords: Gestural interfaces; Two-hand interaction; User evaluation
Experimental Study Toward Modeling of the Uncanny Valley Based on Eye Movements on Human/Non-human Faces BIBAKFull-Text 398-407
  Yoshimasa Tawatsuji; Kazuaki Kojima; Tatsunori Matsui
In the research field of human-agent interaction, it is a crucial issue to clarify the effect of agent appearance on human impressions; the uncanny valley is one crucial topic. We hypothesize that people can perceive a humanlike agent as human at an early stage of interaction even if they finally notice it is non-human, and that such contradictory perceptions are related to the uncanny valley. We conducted an experiment where participants were asked to judge whether faces presented on a PC monitor were human or not. The faces were a doll, a CG-modeled human image fairly similar to a real human, an android robot, another image highly similar to a real human, and a person. Participants' eye movements were recorded while they watched the faces, and changes in how the faces were observed were studied. The results indicate that eye data did not initially differ between the person and the fairly similar CG image, whereas differences emerged after several seconds. We then propose a model of the uncanny valley based on the dual pathway of emotion.
Keywords: The uncanny valley; eye movements; dual pathway of emotion; humanlike agent
Multi-party Human-Machine Interaction Using a Smart Multimodal Digital Signage BIBAKFull-Text 408-415
  Tony Tung; Randy Gomez; Tatsuya Kawahara; Takashi Matsuyama
In this paper, we present a novel multimodal system designed for smooth multi-party human-machine interaction. HCI for multiple users is challenging because simultaneous actions and reactions have to be consistent. Here, the proposed system consists of a digital signage or large display equipped with multiple sensing devices: a 19-channel microphone array, 6 HD video cameras (3 are placed on top and 3 on the bottom of the display), and two depth sensors. The display can show various contents, similar to a poster presentation, or multiple windows (e.g., web browsers, photos, etc.). On the other hand, multiple users positioned in front of the panel can freely interact using voice or gesture while looking at the displayed contents, without wearing any particular device (such as motion capture sensors or head mounted devices). Acoustic and visual information processing are performed jointly using state-of-the-art techniques to obtain individual speech and gaze direction. Hence displayed contents can be adapted to users' interests.
Keywords: multi-party; human-machine interaction; digital signage; multimodal system
A Remote Pointing Technique Using Pull-out BIBAKFull-Text 416-426
  Takuto Yoshikawa; Yuusaku Mita; Takuro Kuribara; Buntarou Shizuki; Jiro Tanaka
Reaching objects displayed on the opposite side of a large multi-touch tabletop with hands is difficult. This forces users to move around the tabletop. We present a remote pointing technique we call HandyPointing. This technique uses pull-out, a bimanual multi-touch gesture. The gesture allows users to both translate the cursor position and change control-display (C-D) ratio dynamically. We conducted one experiment to measure the quantitative performance of our technique, and another to study how users selectively use the technique and touch input (i.e., tap and drag).
Keywords: bimanual interaction; multi-touch; gesture; tabletop
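The key mechanism in this abstract, translating the remote cursor while dynamically changing the control-display (C-D) ratio, can be sketched as below. The abstract does not detail how HandyPointing derives the gain from the pull-out gesture, so the mapping from pull-out distance to gain is an assumption for illustration only.

def update_cursor(cursor, finger_delta, pull_out_distance,
                  min_gain=0.2, max_gain=3.0, scale=200.0):
    """Move the remote cursor by the pointing hand's motion, scaled by a
    C-D gain controlled by the other hand (hypothetical mapping: the wider
    the pull-out, the larger the gain, clamped to [min_gain, max_gain])."""
    gain = min(max(pull_out_distance / scale, min_gain), max_gain)
    return (cursor[0] + finger_delta[0] * gain,
            cursor[1] + finger_delta[1] * gain)

# Example: a small finger motion with a wide pull-out produces a large cursor jump.
print(update_cursor((100.0, 100.0), (5.0, -3.0), pull_out_distance=600.0))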

Touch-Based Interaction

Human Centered Design Approach to Integrate Touch Screen in Future Aircraft Cockpits BIBAKFull-Text 429-438
  Jérôme Barbé; Marion Wolff; Régis Mollard
This research aimed at developing new types of human-machine interaction for future Airbus aircraft cockpits. Touch interaction needs to be studied because it brings some advantages for pilots. However, it is necessary to redefine the pilot's workspace to optimize touch interaction according to pilot population characteristics and human physical capabilities. This paper presents the touch interaction area model and the tactile assessment carried out to validate our hypothesis, leading to rules and guidelines for cockpit layout and HMI designers.
Keywords: Human Centered Design; interaction design; anthropometry; touch screen interaction; guidelines
Evaluating Devices and Navigation Tools in 3D Environments BIBAKFull-Text 439-448
  Marcela Câmara; Priscilla Fonseca de Abreu Braz; Ingrid Monteiro; Alberto Raposo; Simone Diniz Junqueira Barbosa
3D environments have been used in many applications. Besides the keyboard and mouse, best suited for desktop environments, other devices have emerged for specific use in immersive environments. The lack of standardization in the use and control mapping of these devices makes the design task more challenging. We performed an exploratory study involving beginner and advanced users of three devices in 3D environments: keyboard-mouse, Wiimote and Flystick. Navigation in this kind of environment is done through three tools: Fly, Examine and Walk. The study results showed how interaction in virtual reality environments is affected by the navigation mechanism, the device, and the user's previous experience. The results may be used to inform the future design of virtual reality environments.
Keywords: 3D environments; evaluation; navigation tools; user experience
Computational Cognitive Modeling of Touch and Gesture on Mobile Multitouch Devices: Applications and Challenges for Existing Theory BIBAKFull-Text 449-455
  Kristen K. Greene; Franklin P. Tamborello; Ross J. Micheals
As technology continues to evolve, so too must our modeling and simulation techniques. While formal engineering models of cognitive and perceptual-motor processes are well-developed and extensively validated in the traditional desktop computing environment, their application in the new mobile computing environment is far less mature. ACT-Touch, an extension of the ACT-R 6 (Adaptive Control of Thought-Rational) cognitive architecture, seeks to enable new methods for modeling touch and gesture in today's mobile computing environment. The current objective, the addition of new ACT-R interaction command vocabulary, is a critical first step to support modeling users' multitouch gestural inputs with greater fidelity and precision. Immediate practical application and validation challenges are discussed, along with a proposed path forward for the larger modeling community to better measure, understand, and predict human performance in today's increasingly complex interaction landscape.
Keywords: ACT-R; ACT-Touch; cognitive architectures; touch and gesture; computational cognitive modeling; modeling and simulation; movement vocabulary; gestural input; mobile handheld devices; multitouch tablets; model validation; Fitts' Law
A Page Navigation Technique for Overlooking Content in a Digital Magazine BIBAKFull-Text 456-461
  Yuichiro Kinoshita; Masayuki Sugiyama; Kentaro Go
Although electronic book readers have become popular in recent years, the page navigation techniques used in these readers are not necessarily appropriate for all kinds of books. In this study, an observation experiment is conducted to investigate how people read paper-based magazines. Based on the findings of the experiment, the authors propose new page navigation techniques specialized for digital magazines. The techniques adopt the operation of flipping through the pages. A user study confirms that the techniques are useful for overlooking content in a digital magazine and help readers find articles that match their interests.
Keywords: Digital book; electronic book reader; overlooking content; page navigation; turning pages
Effect of Unresponsive Time for User's Touch Action of Selecting an Icon on the Video Mirror Interface BIBAKFull-Text 462-468
  Kazuyoshi Murata; Masatsugu Hattori; Yu Shibuya
Contactless input methods based on body motion allow users to control computer systems easily and enjoyably. We focus on the "video mirror interface" as an example of these methods. A user of the video mirror interface can operate the computer system by selecting virtual objects on a screen with his/her hand. However, if a selection operation is completed as soon as the user touches the virtual object, erroneous selections will frequently occur. Therefore, it is necessary to insert a certain period of unresponsiveness after a user's touch action to prevent selection errors. We evaluate the effects of this unresponsive time when selecting a virtual object in a video mirror interface. The result of an experimental evaluation indicates that an acceptable range for the unresponsive time is 0.3 to 0.5 s.
Keywords: Video mirror interface; unresponsive time; touch action
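One plausible reading of the unresponsive-time mechanism described above is that a touch only commits a selection once contact with the same object has persisted past the unresponsive window. The abstract does not specify the exact rule, so the commit-after-dwell logic below is an assumption, shown only to make the idea concrete.

class UnresponsiveSelector:
    """Commit a selection only after the touch has persisted on the same
    object for `unresponsive_time` seconds (the study found 0.3-0.5 s
    acceptable). Interpretation of the mechanism is assumed, not specified."""

    def __init__(self, unresponsive_time=0.4):
        self.unresponsive_time = unresponsive_time
        self.touch_start = None
        self.current_target = None

    def on_touch(self, target, timestamp):
        if target != self.current_target:
            # New object touched: restart the unresponsive window.
            self.current_target = target
            self.touch_start = timestamp
            return None
        if timestamp - self.touch_start >= self.unresponsive_time:
            selected = target
            self.current_target, self.touch_start = None, None
            return selected   # selection committed
        return None           # still within the unresponsive window

    def on_release(self):
        self.current_target = None
        self.touch_start = None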
Evaluation of a Soft-Surfaced Multi-touch Interface BIBAKFull-Text 469-478
  Anna Noguchi; Toshifumi Kurosawa; Ayaka Suzuki; Yuichiro Sakamoto; Tatsuhito Oe; Takuto Yoshikawa; Buntarou Shizuki; Jiro Tanaka
"WrinkleSurface", which we developed by attaching a gel sheet to a FTIR-based touchscreen, enables a user to perform novel touch motions such as Push, Thrust, and Twist_CW (clockwise), and Twist_CCW (counterclockwise). Our research is focused on the evaluation of this soft-surfaced multi-touch interface. Specifically, to examine how a user can input our novel input methods precisely, we evaluated the user's performance of each method by two to nine levels of target acquisition task. As a result, we found some points to be improved in our recognition algorithm in order to increase the success rate of Push and Thrust. In addition, a user can input Twist before the level of six because the success rate of Twist was high up to that level.
Keywords: Touchscreen; tabletop; haptic interface; FTIR; tangential force sensing; pressure sensing
Recognition of Multi-touch Drawn Sketches BIBAKFull-Text 479-490
  Michael Schmidt; Gerhard Weber
We present concepts and possible realizations for the classification of multi-touch drawn sketches. A gesture classifier is modified and integrated into a sketching tool. The applied routines are highly scalable and provide the possibility of domain-independent sketching. Classification rates are feasible without exploiting the full potential of the scheme. We demonstrate that the classifier is capable of identifying common basic primitives and gestures as well as complex drawings. Users define sketches via templates in their individual style and link them to constructed primitives. A pilot evaluation is conducted, and results regarding users' sketching techniques and classification rates are discussed.
Keywords: Sketch; recognition; classifier; survey; gestures; multi-touch
A Web Browsing Method on Handheld Touch Screen Devices for Preventing from Tapping Unintended Links BIBAKFull-Text 491-496
  Yu Shibuya; Hikaru Kawakatsu; Kazuyoshi Murata
In recent years, it has become common to browse Web pages on mobile devices such as smartphones. However, users sometimes tap the wrong link when they scroll or zoom web pages, because of the relatively small display area of mobile devices and the sensitivity of the touch screen. In such cases, it is necessary to stop loading the page or to go back to the previous page after the page has changed. Such unintended operations are likely to increase the total browsing time and the user's frustration. In this study, we aimed to prevent users from tapping unintended links, for effective web browsing on touch-screen mobile devices. The proposed method has two operation modes: a tapping mode and a non-tapping mode. In the tapping mode, users can only tap links and change the mode. In the non-tapping mode, users can swipe, pinch, and change the mode, but cannot tap any links. Furthermore, the mode change operation, for which we adopt the Bezel Swipe, is intuitive and efficient.
   The results of the experimental evaluation showed that the rate of tapping unintended links with the proposed method was lower than with the conventional method. However, the task completion time with the proposed method was longer than with the conventional method.
Keywords: Mobile interaction; web browsing; unintentional tap; Bezel Swipe
Real Time Mono-vision Based Customizable Virtual Keyboard Using Finger Tip Speed Analysis BIBAKFull-Text 497-505
  Sumit Srivastava; Ramesh Chandra Tripathi
User interfaces are a growing field of research around the world, specifically for PDAs, mobile phones, tablets and other such gadgets. Among the many challenges involved are adaptability, size, cost and ease of use. This paper presents a novel mono-vision based touch-and-type method on a customizable keyboard drawn, printed or projected on a surface. The idea is to let the user decide the size, orientation, language as well as the position of the keys: a fully user-customized keyboard. The proposed system also handles keyboards on uneven surfaces. The implementation of the proposed real-time mono-vision based customizable virtual keyboard system yields accurate results. The paper builds on the key idea that the fingertip intended to type must be moving fastest relative to the other fingers until it hits the surface.
Keywords: Virtual Keyboard; Image Processing; Single camera; mono vision; Edge Detection; Quadrilateral extraction; Character Recognition; Hand Segmentation; Fingertip extraction; Customizable keyboard
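The speed heuristic stated at the end of the abstract (the typing fingertip moves fastest just before it hits the surface) can be illustrated in a few lines of Python. This is only a sketch under that assumption; the paper's actual detection pipeline, thresholds, and variable names are not reproduced here.

import numpy as np

def typing_fingertip(prev_tips, curr_tips, dt):
    """Return the index and speed of the fastest-moving fingertip.

    prev_tips and curr_tips are (N, 2) arrays of fingertip image
    coordinates from two consecutive frames; dt is the frame interval.
    """
    speeds = np.linalg.norm(np.asarray(curr_tips, dtype=float)
                            - np.asarray(prev_tips, dtype=float), axis=1) / dt
    return int(np.argmax(speeds)), float(speeds.max())

In practice the hit itself would still need to be confirmed, for example by detecting the fingertip's contact with the keyboard surface, before emitting a key event.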
Human Factor Research of User Interface for 3D Display BIBAKFull-Text 506-512
  Chih-Hung Ting; Teng-Yao Tsai; Yi-Pai Huang; Wen-Jun Zeng; Ming-Hui Lin
A user interface that lets the observer interact with a 3D image is discussed. The appropriate touching range and the suitable size of the 3D image are related to its depth (disparity). According to the experimental results, when the disparity of the 3D image is large, the image must be larger so that the observer can precisely judge whether the fingertip is touching the 3D image or not.
Keywords: User Interface; Interaction with 3D image; Appropriate Touching Range; Suitable Size of 3D Image
Collaborative Smart Virtual Keyboard with Word Predicting Function BIBAKFull-Text 513-522
  Chau Thai Truong; Duy-Hung Nguyen-Huynh; Minh-Triet Tran; Anh-Duc Duong
The authors propose a table-top with virtual keyboards that allows multiple users to work in a collaborative environment. The proposed system has two main modules: a virtual keyboard system with touch event detection from the depth data of a Kinect, and a word predicting module based on a Hidden Markov Model and a Trie data structure. The system can replace physical keyboards, improve the accuracy of a virtual keyboard, and increase users' typing speed. Our experimental results show that the system achieves an accuracy of 94.416% with the virtual keyboard, saves 11-22% of keystrokes, and corrects 89.02% of typing mistakes (a minimal Trie-based prediction sketch follows the keywords).
Keywords: table top; virtual keyboard; word prediction; 3D interaction
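The word-predicting module is described as combining a Hidden Markov Model with a Trie. The Python sketch below shows only the Trie part: words are stored by prefix and the most frequent completions are returned. It is a minimal illustration, not the authors' implementation, and the HMM re-ranking step is omitted.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0  # how many times a word ends at this node

class WordPredictor:
    """Prefix-based word completion using a trie."""

    def __init__(self):
        self.root = TrieNode()

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.count += 1

    def predict(self, prefix, k=3):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Collect all completions under the prefix, most frequent first.
        completions, stack = [], [(node, prefix)]
        while stack:
            n, word = stack.pop()
            if n.count:
                completions.append((n.count, word))
            stack.extend((child, word + ch) for ch, child in n.children.items())
        return [w for _, w in sorted(completions, reverse=True)[:k]]

For example, after add("keyboard"), add("keyboard"), and add("keystroke"), calling predict("key") would list "keyboard" before "keystroke".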
The Implementation of Multi-touch Table to Support the Military Decision Making through Critical Success Factors (CSFs) BIBAKFull-Text 523-529
  Norshahriah Wahab; Halimah Badioze Zaman
In this paper, we present the implementation of a Multi-touch Table (MTT) to support military decision making. Multi-touch table technology is essential for effective and efficient outcomes, especially in the Malaysian Army environment, and the decision-making process is integral to successful performance on the battlefield. The military decision-making process emphasizes timely decisions, shared understanding of the commander's intent among the staff, and clear responsibilities for the commander and staff. The crux of this paper is therefore how to optimize the military decision-making process through the Critical Success Factors (CSFs) identified in a preliminary study. By adopting the CSFs, concepts, ideas and arguments can be brainstormed clearly and effectively around the Multi-touch Table, which offers further advantages in visualizing, organizing and manipulating data and information among military officers. Adopting the CSF elements also promotes communication between commander and staff in activities that involve visualizing the battle-space, describing that visualization to subordinates and staff, directing action in terms of the battlefield operating system, and leading the unit to mission accomplishment. The paper also presents findings from a series of questionnaires and interviews with Subject Matter Experts (SMEs) in the domain of military decision making; the preliminary study indicated the criticality of the CSF elements in supporting the military decision-making process. One major dilemma in the planning and execution of military decisions is that the Commanding Officer (CO) must rely fully on the subordinate officers' coordination ability and must understand the consequences of each Course of Action (COA) suggested by subordinate officers. The Multi-touch Table benefits this process as a medium supporting discussion and brainstorming between the Commanding Officer and the subordinate staff: decision makers can refer to a shared display at the same time from different orientations. Multi-touch tables are interactive tables that are becoming affordable in common places such as offices, universities and homes, and the technology offers possibilities such as task engagement, face-to-face communication, social interaction dynamics and simultaneous input. In a nutshell, an appropriate medium such as the Multi-touch Table has a positive impact on the military decision-making process, and adopting the Critical Success Factors (CSFs) can bring considerable advantages, specifically in the planning and execution of military decisions.
Keywords: Multi-Touch Table; Military Decision Making; Critical Success Factors (CSFs); Command and Control (C2)
Design of a Visual Query Language for Geographic Information System on a Touch Screen BIBAKFull-Text 530-539
  Siju Wu; Samir Otmane; Guillaume Moreau; Myriam Servières
This paper presents two spatial query methods for a Geographic Information System (GIS) that runs on a touch screen. On conventional GIS interfaces, SQL is used to construct spatial queries. However, keyboard typing proves inefficient on touch screens, and SQL is not an easy language to learn, especially for GIS novices. To simplify query construction, we first designed a map-interaction-based query method (MIBQM). This method allows users to make simple queries by selecting the necessary layers, features and query operators directly on the interface. To allow users to construct complex queries, a sketch-drawing-based query method (SDBQM) is proposed, in which spatial query concepts are represented by sketches of symbolic graphical objects. Spatial and non-spatial conditions can be added to describe query concepts more precisely. An evaluation comparing SQL and MIBQM found that, for simple queries, MIBQM takes less time and proves more user-friendly.
Keywords: GIS; Touchable Interface; Visual Query Language; Spatial Query
Target Orientation Effects on Movement Time in Rapid Aiming Tasks BIBAKFull-Text 540-548
  Yugang Zhang; Bifeng Song; Wensheng Min
An attempt was made to investigate the effect of target orientation on pointing performance. An experiment was conducted in which 10 subjects performed three-dimensional aiming tasks under manipulations of target orientation, distance to target and direction to target. Results show that target orientation significantly affects the duration of three-dimensional movements; consequently, the conventional movement model did not satisfactorily explain the variance in the observed movement times. The conventional model was therefore extended by incorporating an orientation parameter. The modified model fitted the data better than the conventional model in terms of the r² between measured movement times and the values predicted by the model fit (the conventional model is recalled after the keywords below).
Keywords: Human movement; Pointing performance; Fitts' Law; Index of difficulty; Target orientation
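The "conventional movement model" referred to above is Fitts' law; in its widely used Shannon formulation the movement time is

  MT = a + b \log_2\left(\frac{D}{W} + 1\right)

where D is the distance to the target, W its width, and a, b are empirically fitted constants. One plausible form of the extension described in the abstract would add an orientation-dependent term, for example

  MT = a + b \log_2\left(\frac{D}{W} + 1\right) + c\, g(\theta)

with \theta the target orientation and g an orientation function fitted to the data. This second expression is only an illustration, since the paper's exact modified model is not given in the abstract.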

Haptic Interaction

Comparison of Enhanced Visual and Haptic Features in a Virtual Reality-Based Haptic Simulation BIBAKFull-Text 551-560
  Michael Clamann; Wenqi Ma; David B. Kaber
An experiment was conducted to compare the learning effects of motor skill training using three types of virtual reality simulations. Training and testing were presented using virtual reality (VR) and standardized forms of existing psychomotor tests, respectively. The VR training simulations included haptic, visual, and combined haptic and visual assistance designed to accelerate training. A comparison of performance test results before and after training revealed that the conditions providing haptic assistance yielded lower fine motor skill scores than the visual-only aiding condition. Similarly, training in the visual condition resulted in comparatively lower cognitive skill scores. The present investigation with healthy subjects was designed as part of an ongoing research effort to inform the design of VR simulations for the rehabilitation of motor skills in patients with a history of mTBI.
Keywords: haptics; virtual reality; rehabilitation
Influence of Haptic Feedback on a Pointing Task in a Haptically Enhanced 3D Virtual Environment BIBAKFull-Text 561-567
  Brendan Corbett; Takehiko Yamaguchi; Shijing Liu; Lixiao Huang; Sangwoo Bahn; Chang S. Nam
To better understand the value of haptic feedback, human performance and preference were explored in a pointing-style task in a three-dimensional virtual environment. Vibration and a haptic attractive force were selected as two simple forms of feedback, each with two levels, and were compared with a no-feedback condition to better understand how human performance changes under these conditions. The study included 8 undergraduate students. A Novint Falcon haptic controller was used in a simulated three-dimensional virtual environment. Analysis was conducted on how each type of feedback affects users' movement time (MT). The results showed that vibration was perceived negatively and had a slight negative impact on performance, whereas the haptic attractive force significantly improved performance and was strongly preferred by subjects.
Keywords: Haptic; assistive technology; virtual environments; human performance; force feedback; vibration; assistive feedback
Design of a Wearable Haptic Vest as a Supportive Tool for Navigation BIBAKFull-Text 568-577
  Anak Agung Gede Dharma; Takuma Oami; Yuhki Obata; Li Yan; Kiyoshi Tomimatsu
We propose an alternative way to present haptic feedback in ubiquitous computing. We developed a haptic vest that can display detailed haptic feedback using a 5x12 array of vibrotactile actuators. We conducted preliminary user testing on 34 stimuli (in four different directions) to measure the effectiveness of various vibrotactile patterns, and found that each stimulus within a given direction has different properties in terms of apprehensibility and comfort.
Keywords: Wearable computing; haptic rendering; haptic perception
Mapping Texture Phase Diagram of Artificial Haptic Stimuli Generated by Vibrotactile Actuators BIBAKFull-Text 578-586
  Anak Agung Gede Dharma; Kiyoshi Tomimatsu
We propose a method of classifying the tactile sensations elicited by artificial haptic stimuli using Japanese onomatopoeias/adjectives. The method classifies adjectives based on users' subjective perception and plots the basic components of the artificial haptic stimuli. The paper also compares tactile sensations perceived from artificial haptic stimuli with those from genuine physical materials.
Keywords: Touch perception; artificial haptic stimuli; Japanese onomatopoeia; Principal Component Analysis
Preliminary Design of Haptic Icons from Users BIBAKFull-Text 587-593
  Wonil Hwang; Dongsoo Kim
Haptic icons are useful for blind people as well as sighted people to perceive information from their environments. Much effort has therefore been put into designing usable haptic icons, but little progress has been made so far in terms of variety and intuitiveness. The purpose of this study is to investigate how to match vibrotactile stimuli with representational information or abstract concepts in order to design a variety of intuitive haptic icons. We employed a bi-directional approach, asking users about the associations between representational information/abstract concepts and perceived vibrotactile stimuli. Two-stage experiments were conducted with forty participants, collecting verbal descriptions corresponding to each of 36 vibrotactile stimuli and drawings of vibration corresponding to each of 27 items of representational information/abstract concepts in the context of human-computer interaction. We conclude that the associations users described in these experiments provide a foundation for designing more intuitive haptic icons in sufficient variety.
Keywords: Haptic icons; vibrotactile stimuli; representative information; abstract concepts; intuitiveness
Assessing the Effectiveness of Vibrotactile Feedback on a 2D Navigation Task BIBAKFull-Text 594-600
  Wooram Jeon; Yueqing Li; Sangwoo Bahn; Chang S. Nam
The effects of vibrotactile parameters were investigated in a 2D navigation task. Participants performed a simple navigation task, reproducing directional information presented by a series of vibrotactile stimuli with different levels of amplitude and frequency. Task completion time and degree of annoyance were measured. The results demonstrated that both frequency and amplitude had a significant effect on the responses, and interaction effects between the two parameters were also found. It was concluded that user performance and comfort are significantly affected by frequency and amplitude. The results give some insight into the design of navigation information presented by vibrotactile displays for visually impaired people. Further studies with visually impaired participants and with other vibrotactile parameters are recommended.
Keywords: Tactile display; vibrotactile; haptic; navigation
Magnetic Field Based Near Surface Haptic and Pointing Interface BIBAKFull-Text 601-609
  Kasun Karunanayaka; Sanath Siriwardana; Chamari Edirisinghe; Ryohei Nakatsu; Ponnampalam Gopalakrishnakone
The magnetic field based Near Surface Haptic and Pointing Interface is a new type of pointing interface that provides mouse interactions, haptic feedback and other enhanced features. It can also be configured as a haptic display on which users can feel basic geometrical shapes in the GUI by moving a finger over the device surface. These functionalities are attained by tracking the 3D position of a neodymium magnet with a grid of Hall effect sensors and by generating like-polarity haptic feedback with an electromagnet array (an illustrative position-estimation sketch follows the keywords).
Keywords: Pointing interface; haptic mouse; near surface haptic feedback; tactile display
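The abstract states that the magnet's 3D position is tracked with a grid of Hall effect sensors. One simple way to picture the planar part of such tracking is to take the centroid of the sensor positions weighted by the magnitude of each sensor reading, as in the Python sketch below; this is an illustrative assumption, not the authors' algorithm, and estimating the height above the surface would additionally require a model of the magnet's field.

import numpy as np

def estimate_magnet_xy(readings, sensor_xy):
    """Estimate the (x, y) position of a magnet above a Hall-sensor grid.

    readings  : 1-D array of field magnitudes, one per sensor
    sensor_xy : (N, 2) array of the sensors' positions on the grid
    """
    w = np.abs(np.asarray(readings, dtype=float))
    w = w / (w.sum() or 1.0)                        # normalize; guard against all-zero readings
    return w @ np.asarray(sensor_xy, dtype=float)   # weighted centroid of sensor positions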
Use of Reference Frame in Haptic Virtual Environments: Implications for Users with Visual Impairments BIBAKFull-Text 610-617
  Ja Young Lee; Sangwoo Bahn; Chang S. Nam
A reference frame is key to explaining the relationship between two objects. This paper focuses on the orientation parameter of a reference frame in the use of projective spatial terms, and on its use by visually impaired participants exploring a haptic virtual environment with a haptic device. A total of nine visually impaired participants between 12 and 17 years of age took part in this study. After exploring the 3D virtual environment with a haptic device, participants answered questions about the frame they had utilized. Overall, the results indicated that participants used the relative frame of reference slightly more than the intrinsic frame of reference. This inclination was especially clear when both the target object and the reference object were on the horizontal plane; only when objects were on the horizontal plane but intrinsically vertical to the reference object was the intrinsic frame of reference preferred. We also found evidence that participants used a reflective subtype of the relative frame, and that vertically aligned objects were easier to perceive with the relative reference frame. We conclude that the virtual environment and haptic input influenced the results by separating the user from the computer and allowing only one point of contact. The results of this study can thus be applied to the development and assessment of assistive technology for people with visual impairment, especially regarding how spatial information is communicated between the system and the user.
Keywords: Reference frame; relative frame; intrinsic frame; projective spatial terms; visual impairments
Behavioral Characteristics of Users with Visual Impairment in Haptically Enhanced Virtual Environments BIBAKFull-Text 618-625
  Shijing Liu; Sangwoo Bahn; Heesun Choi; Chang S. Nam
This study investigated the behavioral characteristics of users with visual impairments and tested the effects of factors concerning the layout of virtual environments (VEs). Various three-dimensional (3D) VEs were simulated with two factors: number of objects and layout type (random, symmetric). Using a Novint Falcon haptic device, users with visual impairments were asked to complete an object recognition task in 3D VEs with different numbers of objects and layouts. The characteristics of their movements (speed, applied force, location, direction, etc.) were recorded, and participants rated the perceived difficulty after each trial. Analysis of the recorded movements and difficulty ratings showed that 1) the number of objects in the 3D VE had a significant impact on visually impaired users' behavior; 2) the layout did not significantly influence their movement; 3) an increased number of objects made the task more difficult; and 4) visualized results implied that different users had significantly different behavioral preferences in the same 3D VE. The results of this study are expected to improve behavioral understanding of users with visual impairments and to guide the development of assistive technology for them.
Keywords: Haptic; 3D virtual environment; behavioral pattern

Graphical User Interfaces and Visualisation

A Situation Awareness Assistant for Human Deep Space Exploration BIBAKFull-Text 629-636
  Guy Andre Boy; Donald Platt
This paper presents the development and testing of a Virtual Camera (VC) system to improve astronaut and mission-operations situation awareness while exploring other planetary bodies. In this embodiment, the VC is implemented as a tablet-based computer system used to navigate through an interactive database application. We claim that the advanced interaction media capability of the VC can improve situation awareness as the distribution of human space exploration roles changes in deep space exploration. The VC is being developed and tested for usability and for its capability to improve situation awareness. The work completed thus far, what remains to complete the project, and the planned testing are described.
Keywords: Situation Awareness (SA); Augmented Reality; Human-Computer Interaction (HCI); Tablet Computing; Usability Testing; Space Exploration
My-World-in-My-Tablet: An Architecture for People with Physical Impairment BIBAKFull-Text 637-647
  Mario Caruso; Febo Cincotti; Francesco Leotta; Massimo Mecella; Angela Riccio; Francesca Schettini; Luca Simione; Tiziana Catarci
Mobile computing, coupled with advanced input interfaces such as Brain Computer Interfaces (BCIs) and smart spaces, can improve the quality of life of persons with disabilities. In this paper, we describe the architecture and prototype of an assistive system that allows users to express themselves and partially preserve their independence in controlling electrical devices at home. Even in the absence of muscular function, the proposed system still provides the user with some communication and control capabilities by relying on non-invasive BCIs. Experiments show how the fully software realization of the system guarantees effective use with BCIs.
Keywords: Brain Computer Interfaces (BCIs); tablet; home appliances; communication capabilities; software architecture
AHPM as a Proposal to Improve Interaction with Air Traffic Controllers BIBAKFull-Text 648-657
  Leonardo L. B. V. Cruciol; Li Weigang
Air Traffic Management (ATM) involves a complex decision-making process with several entities, little time to analyze risky situations, and many attributes to verify before taking an action. A Decision Support System (DSS) is therefore a valuable way for air traffic controllers to achieve better results in their work, and a well-implemented DSS must provide simple Human-Computer Interaction (HCI). Even if a system provides all the functionality a specialist needs, it must also meet expectations on other requirements: a correct answer that arrives late or is hard to find effectively becomes a wrong or useless answer. The proposed Air Holding Problem Module (AHPM) has one sub-module responsible for forecasting airspace scenarios and another responsible for supporting the decision-making process through interaction with the air traffic controller. The controller can thus interact with the system and carry out activities faster and better informed, using a simple screen that contains the necessary knowledge. The AHPM achieved a good level of human-computer interaction because the interaction is very simple and all the information needed for a thorough analysis is presented on a single, cleanly and objectively organized screen.
Keywords: Human-Computer Interaction; Decision Support System; Air Traffic Management; Artificial Intelligence
Decision Space Visualization: Lessons Learned and Design Principles BIBAKFull-Text 658-667
  Jill L. Drury; Mark S. Pfaff; Gary L. Klein; Yikun Liu
While the situation space consists of facts about what is currently happening, the decision space consists of analytical information that supports comparing the relative desirability of one decision option versus another. We have focused on new approaches to display decision space information that aids cognition and confidence. As a result of our earlier empirical work, we have developed a set of principles for visualizing decision space information. This paper describes those principles and illustrates their use.
Keywords: Decision space; situation space; option awareness; situation awareness; cognitive engineering; design principles; visualization
The Language of Motion: A Taxonomy for Interface BIBAKFull-Text 668-677
  Elaine Froehlich; Brian Lucid; Heather Shaw
This project presents a taxonomic tool for designing with motion. Paul Klee dedicated his life to the study and teaching of motion. "I should like to create an order from feeling and, going still further, from motion."[1] The natural state of interaction with digitized information includes motion. Our human brains have evolved physiological systems and organic structures that respond instinctively, tuned to organic motion. This human bias toward organic, natural motion presents opportunities for the use of motion in interfaces. Using motion in computing devices inspired by the natural world will create deeper, more emotionally engaging experiences. This study focuses on understanding the basic elements of motion in order to use it as a component in the design of digital interfaces. It presents a taxonomy of motion with the goal of describing fundamental qualities of motion used in the 2-dimensional, framed space of a screen: screen position, direction, principles, attributes and the resulting behaviors that can be created using them. The documentation presented defines a language for motion in interface. The taxonomy was built on discrete gestural motion videos taken from nature. The video segments are limited to short motions that show a complete but definable idea. The videos tend to be a few seconds in length though a few of them take several seconds to complete their motion idea.
Keywords: Dynamic media; motion design; motion; interface; screen area; direction; principles; attributes; behavior; taxonomy
Adaptive Consoles for Supervisory Control of Multiple Unmanned Aerial Vehicles BIBAKFull-Text 678-687
  Christian Fuchs; Sérgio Ferreira; João Sousa; Gil Gonçalves
With the increase of complex operational scenarios involving multiple unmanned aerial vehicles (UAVs), concerns about rising operator workload and reduced situational awareness have become paramount for safeguarding operational security and objective completion. These challenges can be tackled by altering the autonomy levels of the vehicles; this paper, however, explores how they can also be mitigated by changing the way information is presented to the human operator. Relying on an established framework that supports operational scenarios with multiple UAVs, a series of display alterations was made to existing operation consoles. After test sessions in a simulated environment with human participants holding different levels of operational certification, the feedback and results were distilled and analysed. Operator feedback demonstrated an overwhelming preference for the developed consoles, and the results showed an improvement in situation awareness as well as a reduction in workload.
Keywords: Operator; Situational Awareness; UAS; UAV; Workload; Command and Control; Interface
A Web-Based Interface for a System That Designs Sensor Networks BIBAKFull-Text 688-697
  Lawrence J. Henschen; J. C. Lee
We describe the approach taken in the design of the interface for a system that helps application engineers who are not trained in computer science or engineering to design sensor networks. We cite various taxonomies from the sensor network literature that guided the design of the interface. We then describe the overall structure of the system to set the context for how the human interacts with it. We present examples of the kind of data required to design a sensor network and describe how our interface collects that information. We note at many points in the presentation that a deep understanding of the application's data allows for the design of an appropriate interface.
Keywords: Sensor networks; automated design; HCI
An Interaction Concept for Public Displays and Mobile Devices in Public Transport BIBAKFull-Text 698-705
  Romina Kühn; Diana Lemme; Thomas Schlegel
Public displays are increasingly finding their way into public space and offer a wide range of information to the user. Currently, most of these displays merely present information, without any opportunity to explore or interact with it. Technical advances in this field are, however, opening up more and more interaction possibilities in different domains. This work presents interaction opportunities between public displays and users with mobile devices in the field of public transport. As a basis for understanding the usage and benefits of public displays, it is also necessary to take a closer look at the different types of displays in the public domain.
Keywords: Interaction concept; mobile interaction; public display; public transport
Study of Interaction Concepts in 3D Virtual Environment BIBAKFull-Text 706-711
  Vera Oblaender; Maximilian Eibl
This paper describes what can be understood by interaction techniques and interaction concepts. In this work we focus in particular on second-screen applications. The research investigates how to design interaction concepts with a tablet as a second screen that is remotely connected to a virtual environment on a primary screen. The interactions examined in this research cover selection, manipulation and navigation aspects.
Keywords: Human computer interaction; second screen; manipulation; navigation on virtual environment in virtual reality; interaction technique; interaction concept; gestures
Undo/Redo by Trajectory BIBAKFull-Text 712-721
  Tatsuhito Oe; Buntarou Shizuki; Jiro Tanaka
We have developed a trajectory-based undo/redo interface. Using the interface, a user traces the trajectories of actions shown on a display; as a result, undo/redo operations can be performed rapidly by selecting a target state from the history. In this paper, we describe the interaction techniques, the implementation, and advanced usages of the interface (a minimal history sketch follows the keywords).
Keywords: undo/redo; trajectory; history; tracing; desktop interface; gui
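Trajectory-based undo/redo still needs an ordinary history underneath: selecting a point on a drawn trajectory amounts to jumping the history cursor to the corresponding action. The Python sketch below models only that history side under the assumption of a linear history; hit-testing the on-screen trajectories and the visual design are not modeled, and all names are invented for this example.

class TrajectoryHistory:
    """A linear action history whose cursor can jump to any past or future state."""

    def __init__(self):
        self.actions = []   # list of (action, trajectory_point) pairs
        self.cursor = 0     # number of actions currently applied

    def do(self, action, point):
        del self.actions[self.cursor:]      # discard the redoable tail
        self.actions.append((action, point))
        self.cursor = len(self.actions)

    def jump_to(self, target_index):
        """Undo or redo until the cursor reaches target_index, e.g. the
        action whose trajectory point the user selected on the display."""
        undone, redone = [], []
        while self.cursor > target_index:
            self.cursor -= 1
            undone.append(self.actions[self.cursor][0])
        while self.cursor < target_index:
            redone.append(self.actions[self.cursor][0])
            self.cursor += 1
        return undone, redone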
Multi-layer Control and Graphical Feature Editing Using Server-Side Rendering on Ajax-GIS BIBAKFull-Text 722-729
  Takeo Sakairi; Takashi Tamada; Katsuyuki Kamei; Yukio Goto
This paper presents methods for multi-layer control and graphical feature editing using server-side rendering on Ajax-GIS. Ajax-GIS uses divided raster image files called "tiles" in order to keep handling lightweight. We propose that multi-layer control be realized by merging transparent tiled images in the server application according to the client application's requests. Furthermore, we propose a graphical feature editing protocol in which edits to a feature, such as moving vertices or changing colors, are sent from the client and an updated image is sent back. In an evaluation experiment on actual map data, we confirmed the effectiveness of these methods compared with conventional methods (a small tile-merging sketch follows the keywords).
Keywords: Ajax-GIS; Server-Side Rendering; Multi-layer Control; Graphical Feature Editing
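Merging transparent layer tiles on the server, as proposed above, amounts to alpha-compositing the requested layers' tiles in drawing order and returning a single image. The Python sketch below uses Pillow to illustrate this under the assumption that every layer's tile is an RGBA image of the same size; the request protocol described in the paper is omitted.

from PIL import Image

def merge_layer_tiles(tile_paths):
    """Composite one tile per requested layer into a single RGBA tile.

    tile_paths is an ordered list of image files, bottom layer first;
    at least one path is assumed.
    """
    tiles = [Image.open(p).convert("RGBA") for p in tile_paths]
    merged = tiles[0]
    for tile in tiles[1:]:
        merged = Image.alpha_composite(merged, tile)  # later layers are drawn on top
    return merged

The merged tile would then be encoded (for example as PNG) and returned to the Ajax client in place of separate per-layer tiles.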
A Method for Discussing Musical Expression between Music Ensemble Players Using a Web-Based System BIBAFull-Text 730-739
  Takehiko Sakamoto; Shin Takahashi; Jiro Tanaka
Music ensemble players discuss the musical expression of the piece they perform and determine how to play each note in the score, such as the length and dynamics of tones or phrases, down to every detail of the music. This paper introduces our system, which supports discussion about musical expression on the web. The system enables users to write comments, draw symbols, and link videos on the part of the score they are discussing. We also conducted an informal usability study to evaluate the usefulness of the system.
A Study on Document Retrieval System Based on Visualization to Manage OCR Documents BIBAFull-Text 740-749
  Kazuki Tamura; Tomohiro Yoshikawa; Takeshi Furuhashi
Recently, the digitization of paper-based documents has advanced rapidly with the spread of scanners. However, tagging or sorting a huge number of scanned documents one by one is difficult in terms of time and effort. A system that automatically extracts features from the text in the documents, made available by OCR, and then classifies or retrieves the documents would therefore be useful. LDA, one of the most popular topic models, is a well-known method for extracting the features of each document and the relationships between documents. However, it has been reported that the performance of LDA declines with poor OCR recognition. This paper considers the case of applying LDA to Japanese OCR documents and studies a method to improve the performance of topic inference. The paper defines the reliability of recognized words using N-grams and proposes a weighted LDA method based on this reliability. The adequacy of the word reliability measure is confirmed in a preliminary experiment detecting misrecognized words, and an experiment classifying practical OCR documents is then carried out. The experimental results show that the proposed method improves classification performance compared with the conventional methods (an illustrative reliability-weighting sketch follows below).
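The reliability-weighted LDA idea can be pictured as follows: each OCR-recognized word receives a reliability score, and that score weights the word's contribution to the term counts used by topic inference. The Python sketch below uses the average character-bigram frequency in a reference corpus as the score; this is only an illustration of the general idea, since the paper's N-gram definition and the exact way the weights enter LDA are not given in the abstract.

from collections import Counter

def word_reliability(word, bigram_counts, total_bigrams):
    """Score an OCR-recognized word by its character-bigram plausibility.

    bigram_counts is a Counter of character bigrams from a reference
    corpus and total_bigrams their total count; unseen bigrams score 0.
    """
    bigrams = [word[i:i + 2] for i in range(len(word) - 1)]
    if not bigrams:
        return 1.0
    return sum(bigram_counts[b] / total_bigrams for b in bigrams) / len(bigrams)

def weighted_term_counts(words, bigram_counts, total_bigrams):
    """Build term counts in which likely misrecognized words are down-weighted."""
    counts = Counter()
    for w in words:
        counts[w] += word_reliability(w, bigram_counts, total_bigrams)
    return counts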
Audio-Visual Documentation Method for Digital Storytelling for a Multimedia Art Project BIBAKFull-Text 750-758
  Chui Yin Wong; Chee Weng Khong; Kimberly Chu; Muhammad Asyraf Mhd Pauzi; Man Leong Wong
In this paper, we describe an interactive multimedia art project, namely FaceGrid, using mosaic photography art concept for digital storytelling. Inspired by mosaic photography and a montage concept, FaceGrid was produced by using many small image tiles that were woven and stitched together to form the pixel art design pattern. FaceGrid documents the different ways of living and lifestyles of ordinary folks in a multi-cultural and diverse ethnic society in Malaysia. We use audio-visual documentation methods (photography and film-documentary techniques) to record, capture and archive the different facets of lives and user stories by ordinary people. We then transform those slices of life via digital storytelling technique into an interactive multimedia art project.
Keywords: Audio-visual documentation method; digital storytelling; multimedia art; ordinary folks