
Proceedings of the 2013 International Conference on Advances in Computer-Human Interactions

Fullname: Proceedings of the Sixth International Conference on Advances in Computer-Human Interactions
Editors: Leslie Miller
Location: Nice, France
Dates: 2013-Feb-24 to 2013-Mar-01
Publisher: IARIA
Standard No: ISBN: 978-1-61208-250-9
Papers: 79
Pages: 495
  1. Design and Evaluation I
  2. Design and Evaluation II
  3. Design and Evaluation III
  4. Human-robot Interaction I
  5. Human-robot Interaction II
  6. Applications
  7. Agents and Human Interaction
  8. Education
  9. Usability and Universal Accessibility I
  10. Usability and Universal Accessibility II
  11. Applications in Medicine
  12. Haptic Interfaces I
  13. Haptic Interfaces II
  14. Social Aspects of Human-computer Interaction
  15. User Modeling and User Focus I
  16. User Modeling and User Focus II
  17. Interfaces I
  18. Interfaces II
  19. Interaction Devices

Design and Evaluation I

Investigating Players' Affective States in an Interactive Environment (pp. 1-6)
  Uttam Kokil
The objective of this paper is to examine player experience from a Human-Computer Interaction design perspective, whereby usability, aesthetics, and hedonic components can be investigated in an interactive domain. Mihaly Csikszentmihalyi's concept of flow has been applied to the Component of User Experience (CUE) model to measure the user experience (UX) of products, while other instruments, such as the Presence Involvement Flow Framework (ver. 2) (PIFF) and the Gameflow and Game Experience Questionnaires, evaluate game enjoyment. So far, the CUE model has been applied only in simulated user-testing situations. It therefore becomes important to gauge the potential of the CUE model in an interactive game scenario, given that it is composed of components such as perceived usability, perceived aesthetics and emotional responses. The aim of this study is to conduct a comparative analysis of two user experience models, the CUE model and PIFF, examining players' emotional responses in four conditions (low and high usability, low and high aesthetic value) when they play two categories of touchscreen computer game, "Hard Fun" and "Easy Fun". This research investigates how two independent variables of a game user interface design, usability (low and high) and visual aesthetics (low and high), affect the dependent variables: player experience, task performance and emotional responses (enjoyment).
Keywords: user experience; visual aesthetics; computer games; game usability
Emergent Design System Using Computer-Human Interactions and Serendipity (pp. 7-12)
  Akira Kito; Yuki Mizumachi; Koichiro Sato; Yoshiyuki Matsuoka
This paper describes a basic study on an emergent design system in which serendipity arises from the interaction between computer and human. Serendipity is the ability to unexpectedly make interesting or valuable discoveries. The possibility of generating new design ideas increases if we can utilize serendipity. We therefore propose an emergent design system that produces serendipity by using a form-organizing phenomenon seen in nature and three-dimensional, clay-like modeling. We then perform elementary experiments with designers. The system increases the chances of gaining inspiration and making unexpected discoveries in the process of deriving ideas. As a result, we show the possibility of generating new design ideas with this system.
Keywords: emergence; design system; interaction; serendipity
Sensory Evaluation Method to Create Pictograms Based on Multiplex Sign Languages (pp. 13-16)
  Naotsune Hosono; Hiromitsu Inoue; Yuji Nagashima; Yutaka Tomita
This paper discusses a method to create pictograms based on multiple local sign languages, applying the concept of "Context of Use" to dialogue together with Multivariate Analysis (MVA). Since pictograms are universal communication tools, human centred design (HCD) and context analysis with the Persona model are applied. The experiments consist of three steps. The first step is to find the similarity of a selected word among seven local sign languages -- American, British, French, Spanish, Japanese, Korean and Chinese -- by means of MVA. The second step is for a pictogram designer to create a new common pictogram based on the first step's result. The final step is to validate the newly created pictogram by MVA. Within the HCD cycle, the pictogram designer uses this method to summarize the expression of several local sign languages. The acquired experience is to be included in a pictogram design guideline for universal communication contexts such as emergency and traveling situations. With the proposed method, the relationship between selected words and local sign languages is initially explained through sensory evaluation by the subjects. The pictograms and icons produced in this experiment are currently implemented on modern tablet computers with touch-panel displays, considering computer-human interactions.
Keywords: Context of Use; Human Centred Design; Pictogram; Universal Communication; Sensory Evaluation
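[Editor's note] The abstract above does not specify which MVA technique was used; as one plausible illustration, classical multidimensional scaling over subjects' pairwise dissimilarity ratings would reveal which sign languages express a word similarly. A minimal sketch, with placeholder data and standard language abbreviations:

```python
# Illustrative sketch only (not the authors' pipeline): MDS on a matrix of
# mean dissimilarity ratings between seven local sign languages for one word.
import numpy as np
from sklearn.manifold import MDS

languages = ["ASL", "BSL", "LSF", "LSE", "JSL", "KSL", "CSL"]

# Placeholder dissimilarities (0 = identical sign, 1 = unrelated), standing
# in for averaged sensory-evaluation ratings from subjects.
rng = np.random.default_rng(0)
upper = rng.uniform(0.1, 1.0, size=(7, 7))
D = np.triu(upper, 1)
D = D + D.T  # symmetric with zero diagonal, as a dissimilarity matrix must be

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for lang, (x, y) in zip(languages, coords):
    print(f"{lang}: ({x:+.2f}, {y:+.2f})")  # nearby points = similar signs
```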
Accessibility and Augmented Reality into Heritage Site Interpretation -- A Pilot Experience with Visitors at the Monument Lonja de la Seda in Valencia (Spain) (pp. 17-21)
  Marina Puyuelo Cazorla; José Luís Higón Calvet; Mónica Val Fiel; Lola Merino Sanjuán
This paper summarizes a pilot experiment with Augmented Reality (AR) at an architectural monument declared a World Heritage Site by UNESCO in 1996. The project aims to increase accessibility to the site and provide users -- in this case, visitors to the place -- with an intuitive experience of this technology. The experience and the application are proposed as a complement to the visit in the real environment, emphasizing the site as a context for situated learning that contributes to knowledge. Comprehensive visits to world heritage sites endow them with extraordinary cultural and social value. Augmented reality was chosen to enrich the visit experience by providing elements that can be visualized and manipulated directly by users in the real environment of the building itself, without the need for excessively invasive equipment in this historical and artistic context. The application is proposed firstly to resolve perceptual issues due to poor lighting, the distance of many details and restricted access to some areas, and secondly to explain some of the more complex construction aspects. This pilot experiment aims to establish initial contact with visitors in order to obtain relevant information for modifications and adjustments that improve the components designed and developed for this site.
Keywords: augmented-reality; accessibility; inclusive-design; heritage-sites; interactivity

Design and Evaluation II

A Deported View Concept for Touch Interaction (pp. 22-27)
  Alexandre Alapetite; Rune Fogh; Henning Boje Andersen; Ali Gürcan Özkil
Following the paradigm shift in which physical controls are replaced by touch-enabled surfaces, we report on an experimental evaluation of a user interface concept that allows touchscreen-based panels to be manipulated partially blindly (aircraft, cars). The proposed multi-touch interaction strategy -- involving visual front-view feedback to the user from a copy of the peripheral panel being manipulated -- compares favourably against trackballs or head-down interactions.
Keywords: HCI; Tactile interaction; Touch; Blind; Visual attention; Cockpit; In-vehicle systems
Effect of non-Unified Interaction Design of in-car Applications on Driving Performance, Situational Awareness and Task Performance (pp. 28-34)
  Julia Manner; Christopher Kohl; Michael Schermann; Helmut Krcmar
It is commonly understood that human-computer interaction (HCI) systems should have a unified design. However, ensuring a unified interaction design (UID) is a cost-intensive and time-consuming venture. The automotive industry in particular struggles with rising costs and time-to-market pressure, as drivers want to stay connected and informed while driving. We therefore investigated the effect of non-unified interaction design (NUID). We report on a simulator study with 44 participants in which we studied the effect of a NUID within an automotive HCI system consisting of five in-car applications. We measured the effect on driving performance, task performance and situational awareness while tasks were carried out. We found no significant effect of interaction design unification. We offer an explanation based on the HCI and cognitive ergonomics literature.
Keywords: interaction design; in-car applications; cognitive load; multi-tasking; multiple-tasks; task complexity
Studying Depth in a 3D User Interface by a Paper Prototype as a Part of the Mixed Methods Evaluation Procedure. Early Phase User Experience Study (pp. 35-40)
  Leena Arhippainen; Minna Pakanen; Seamus Hickey
A principal characteristic of three-dimensional user interfaces is that they contain information along a third axis. Visually, this information is presented as being placed further away from the screen, i.e., as having depth. A consequence is that information can be occluded. Determining the optimal number of depth levels for specifically sized icons is important in the design of 3D user interfaces. This paper investigates the depth placement of objects in a three-dimensional user interface on a tablet device at an early stage of the development process. We present a mixed-methods evaluation with a paper prototype, focusing on users' subjective experiences. Users were presented with concepts of different depth levels, with and without 3D objects. The findings indicate that users preferred depth levels 3-5. We recommend designing 3D UIs with controllable depth, starting with a few depth levels and increasing them automatically based on the number of 3D objects. It is also important to give the user the possibility to customize depth levels when needed. This paper provides user preference information on depth for 3D UI designers and developers, especially in the context of a touch screen tablet device.
Keywords: 3D UI; depth; touch screen tablet; paper prototype; user experience
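[Editor's note] The recommendation above lends itself to a simple heuristic. A minimal sketch of one reading of it (the object-per-level threshold is an assumption, not from the paper):

```python
# Start with few depth planes, grow with the number of 3D objects within the
# 3-5 band users preferred, and let an explicit user customization override.
def depth_levels(n_objects: int, per_level: int = 8,
                 user_override: int | None = None,
                 lo: int = 3, hi: int = 5) -> int:
    """Return how many depth planes the 3D UI should use."""
    if user_override is not None:          # user customization wins
        return max(1, user_override)
    needed = -(-n_objects // per_level)    # ceiling division: planes needed
    return min(hi, max(lo, needed))        # clamp to the preferred 3-5 band

assert depth_levels(10) == 3   # few objects -> fewest preferred planes
assert depth_levels(40) == 5   # many objects -> capped at 5
assert depth_levels(40, user_override=7) == 7
```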
Studying Four 3D GUI Metaphors in Virtual Environment in Tablet Context. Visual Design and Early Phase User Experience Evaluation (pp. 41-46)
  Minna Pakanen; Leena Arhippainen; Seamus Hickey
In this paper, we focus on the possibility of having a personal 3D GUI inside a virtual environment on a tablet device. We describe the visual design process and user experience evaluation of four 3D GUIs in a virtual environment. A user evaluation was conducted using a structured pair evaluation procedure, in which we adapted a concept walkthrough method with non-functional, visually high-quality prototypes. We found that participants would like to have a personal 3D GUI in a virtual environment. However, the visual design of the 3D GUI should create a secure and private feeling for them. Participants also did not want the GUI to occlude too much of the background. Visual indication is also needed when a user transfers items from the personal GUI to the virtual environment, and for showing the user's active position between the GUI and the virtual environment. We also point out other issues for interaction and visual designers.
Keywords: visual design; user experience; 3D GUI; touch screen tablet device; HCI

Design and Evaluation III

Towards a 3D User Interface in a Tablet Device Context. An Iterative Design and Evaluation Process (pp. 47-52)
  Leena Arhippainen; Minna Pakanen; Seamus Hickey
This paper presents an early-phase design and evaluation process towards a three-dimensional user interface on a touch screen mobile device with service multitasking use cases. Our Service Fusion concept is the outcome of this iterative concept design and evaluation process. In this paper, we briefly present four 3D user interface concepts evaluated with users and experts. Two of these concepts are also implemented on a touch screen mobile device. We also present user experience findings and how they have supported our early-phase design process. The Service Fusion demonstration of 3D multitasking shows multiple services running in a 3D city model, which allows users to purchase movie tickets, listen to music and find local bars and stores through drag-and-drop gestural controls. Evaluation findings indicate that people are interested in service multitasking and in the drag-and-drop interaction model in 3D user interfaces in the tablet device context. However, the 3D space context affects how important and useful users perceive services to be.
Keywords: 3D UI; concept design; user experience; touch screen mobile device; tablet
Subjective Usability of Speech, Touch and Gesture in a Heterogeneous Multi-Display Environment (pp. 53-56)
  Arnoud P. J. de Jong; Susanne Tak; Alexander Toet; Sven Schultz; Jan Pieter Wijbenga; Jan van Erp
Several interaction techniques have been proposed to enable transfer of information between different displays in heterogeneous multi-display environments. However, it is not clear whether subjective user preference for these different techniques depends on the nature of the displays between which information is transferred. We explore subjective usability of speech, touch and gesture for moving information between various displays in a heterogeneous multi-display environment, consisting of a multi-touch table, a wall-mounted display and a smartphone. We find that subjective user evaluation of the various interaction techniques depends on the combination of displays being used. This implies that the type of display combination should be taken into consideration when designing interaction techniques for the transfer of items between displays in a heterogeneous multi-display environment. Also, gesture based interactions were judged more acceptable when they involved holding a mobile phone, probably since this provided a cue explaining the action.
Keywords: large display; multi-display environment; multi-touch table; smartphone; speech; gestures
Dynamic Gesture Recognition Based on Fuzzy Neural Network Classifier (pp. 57-61)
  Ching-Han Chen; Kirk Chang; Nai-Yuan Liu; Gimmy Su
This paper presents a dynamic gesture recognition method based on the combination of fuzzy features of dynamic gesture track changes with a fuzzy neural network inference system. The method first classifies dynamic gestures coarsely into circular and linear gestures, and then classifies them more narrowly into up, down, left, right, clockwise, and counter-clockwise gestures. These six dynamic gestures, which are commonly used for IP-TV control, were the recognition targets of our dynamic gesture recognition system. The results show that the method has good recognition performance and fault tolerance, and is applicable to real gesture-controlled human-computer interaction environments.
Keywords: gesture recognition, fuzzy system, neural network
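[Editor's note] The coarse-to-fine scheme above can be illustrated with crisp geometric rules standing in for the paper's fuzzy inference (whose parameters are not given in the abstract). A minimal sketch:

```python
# Coarse stage: circular vs. linear track; fine stage: one of six gestures.
import math

def classify_track(pts):
    """pts: list of (x, y) trajectory samples in image coordinates."""
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    dx = pts[-1][0] - pts[0][0]
    dy = pts[-1][1] - pts[0][1]
    # Coarse stage: a circular track ends near where it started.
    if math.hypot(dx, dy) < 0.3 * path:
        # Fine stage, circular branch: the sign of the shoelace area gives
        # the turning direction (y grows downward in image coordinates).
        area2 = sum(x0 * y1 - x1 * y0
                    for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
        return "clockwise" if area2 > 0 else "counter-clockwise"
    # Fine stage, linear branch: dominant net displacement gives direction.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_track([(0, 0), (1, 0), (2, 0), (3, 0)]))             # right
print(classify_track([(0, 0), (1, 1), (0, 2), (-1, 1), (0, -0.1)])) # clockwise
```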
BrainBrush, a Multimodal Application for Creative Expressivity (pp. 62-67)
  Bram van de Laar; Ivo Brugman; Femke Nijboer; Mannes Poel; Anton Nijholt
We combined the new developments of multimodal Brain-Computer Interfaces (BCI) and wireless EEG headsets with art by creating BrainBrush. Users can paint on a virtual canvas by moving their heads, blinking their eyes and performing selections using a P300 BCI. A qualitative evaluation (n=13) was conducted: a questionnaire was administered and structured interviews were held to assess the usability and user experience of the system. Most participants were able to achieve good control over the modalities and were able to express themselves creatively. The user experience of the modalities varied. Head movement was rated most positively, with eye blinks coming in second. Users were less positive about the BCI because of its low reliability and the higher relative cost of an error. Even though the reliability of the BCI was low, it was considered to have added value: using the BCI was considered fun and interesting.
Keywords: Creative Expression, Brain-Computer Interface, Multimodal Interaction, P300

Human-robot Interaction I

Multimodal Human-Robot Interactions: the Neurorehabilitation of Severe Autistic Children (pp. 68-82)
  Irini Giannopulu
In the context of human-robot interactions, we studied quantitatively and qualitatively the interaction between autistic children and a mobile toy robot during free, spontaneous game play. A range of cognitive nonverbal criteria including eye contact, touch, manipulation, and posture were analysed, first in a dyadic interaction and then in a triadic interaction. Once the cognitive state between the child and the robot was established, the child interacted with a third person, displaying positive emotion. Both the dyadic and triadic interactions of autistic children with a mobile toy robot suggest that, in an ecological situation such as free, spontaneous game play, the robot could be used as a neural mediator to improve children's brain activity.
Keywords: multimodal interactions; severe autism; mobile toy robot; spontaneous free game play; neural mediator
What Should a Robot do for you? -- Evaluating the Needs of the Elderly in the UK (pp. 83-88)
  Hagen Lehmann; Dag Sverre Syrdal; Kerstin Dautenhahn; GertJan Gelderblom; Sandra Bedaf; Farshid Amirabdollahian
The increasing interest in the use of robotic assistive technologies in elderly care in the UK makes it necessary for roboticists to evaluate the needs, problems and demands of possible end-users of such technologies. Users of these technologies can be divided into three groups: informal caregivers (family members and friends), formal caregivers (medical staff, social workers, home help), and the elderly themselves. In this paper we present the results of a series of focus groups conducted between March and May 2012. We used the metaplan method to evaluate the opinions and needs of each of the three potential user groups mentioned above. From these discussions we extracted a variety of problem dimensions and their interconnections in order to understand where in everyday life assistive technology could help and is needed most.
Keywords: elderly care; evaluation of needs; robotic assistive technology
Investigating Child-Robot Tactile Interactions: A Taxonomical Classification of Tactile Behaviour of Children with Autism Towards a Humanoid Robot (pp. 89-94)
  Ben Robins; Farshid Amirabdollahian; Kerstin Dautenhahn
The work presented in this paper is part of our investigation in the ROBOSKIN project. One key research activity in the project was to explore the tactile interactions of children with autism with the humanoid robot KASPAR, in order to develop methods and mechanisms to support robot-assisted therapy for children with autism. This article presents a detailed taxonomical classification of the tactile interactions of 14 children with autism with the humanoid robot KASPAR. Our quantitative analysis confirms results from the literature highlighting the great variety of autistic children's interaction capabilities.
Keywords: assistive technology; human-robot interaction; autism therapy; robot assisted therapy
Resource-Efficient Methods for Feasibility Studies of Scenarios for Long-Term HRI Studies (pp. 95-100)
  Nate Derbinsky; Wan Ching Ho; Ismael Duque; Joe Saunders; Kerstin Dautenhahn
Long-term HRI studies can be costly: first in terms of researcher time, hardware/software development time, data collection, data analysis, trial preparation, trial execution and robot time, and subsequently in terms of funding for robotics and other equipment. Methods that reduce such costs by using resource-efficient feasibility studies to analyze study methods and propose outcomes, debug code associated with data collection and analysis, and sanity-check human-robot interactions by simulating, predicting and generating feasible scenarios would therefore be welcome. This paper proposes such methods, provides details of their physical implementation in practice, and presents data from a preliminary study.
Keywords: feasibility studies; experimental methods
Person Identification using Skeleton Information from Kinect (pp. 101-108)
  Aniruddha Sinha; Kingshuk Chakravarty; Brojeshwar Bhowmick
In the recent past, the need for ubiquitous people identification has increased with the proliferation of human-robot interaction systems. In this paper we propose a methodology for recognizing persons from Kinect skeleton data. First, a half gait cycle is detected automatically, and features are then calculated for every gait cycle. Among the new features proposed in this paper, two relate to the areas of the upper and lower body parts, and twelve relate to the distances between the upper-body centroid and the centroids derived from different joints of the upper and lower limbs. Feature selection and classification are performed with a connectionist system using an Adaptive Neural Network (ANN). The recognition accuracy of the proposed method is compared with the earlier methods of Arian et al. and Pries et al. Experimental results indicate that the proposed approach of simultaneous feature selection and classification achieves better recognition accuracy than the earlier reported ones.
Keywords: Person identification; gait recognition; adaptive artificial neural network (ANN); Kinect; connectionist system
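[Editor's note] A minimal sketch of the centroid-distance feature family described above, reconstructed from the abstract (joint names and the joint subset are assumptions; the paper's area features are omitted here):

```python
# From Kinect skeleton frames of one half gait cycle, compute distances from
# the upper-body centroid to per-joint centroids of the limbs.
import numpy as np

UPPER = ["head", "shoulder_c", "shoulder_l", "shoulder_r", "spine"]
LIMB_JOINTS = ["elbow_l", "elbow_r", "wrist_l", "wrist_r",
               "knee_l", "knee_r", "ankle_l", "ankle_r"]  # 8 of the paper's 12

def gait_features(frames):
    """frames: list of {joint_name: (x, y, z)} dicts for one half gait cycle."""
    # Per-joint centroid (mean position) over the cycle.
    mean = {j: np.mean([f[j] for f in frames], axis=0)
            for j in set(UPPER) | set(LIMB_JOINTS)}
    upper_centroid = np.mean([mean[j] for j in UPPER], axis=0)
    # One distance feature per limb joint.
    return np.array([float(np.linalg.norm(mean[j] - upper_centroid))
                     for j in LIMB_JOINTS])

# One synthetic frame repeated, just to show the call shape.
frame = {j: (0.0, 1.0, 2.0) for j in UPPER + LIMB_JOINTS}
print(gait_features([frame, frame]).shape)   # -> (8,)
```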

Human-robot Interaction II

Robot Learning Rules of Games by Extraction of Intrinsic Properties (pp. 109-116)
  Grégoire Pointeau; Maxime Petit; Peter Ford Dominey
A major open problem in human-robot interaction remains: how can robots learn from nontechnical humans? Such learning requires that the robot can observe behavior and extract the sine qua non conditions under which particular actions can be produced. The observed behavior can be either the robot's own explorative behavior or the behavior of humans that it observes. In either case, the only additional information should come from the human, stating whether the observed behavior is legal or not. Such learning may mimic the way infants learn through interaction with their caregivers. In the current research we implement a learning capability based on these principles of extracting rules from observed behavior, using "Human-Robot" or "Human-Human" interaction. We test the system using three games: in the first, the robot must copy a pattern formed by the human; in the second, the robot must perform the mirror action of the human; in the third, the robot must learn the legal moves of Tic Tac Toe. Interestingly, while the robot can learn these rules, it does not necessarily learn the rules of strategy, which likely require additional learning mechanisms.
Keywords: learning machine; robotics; iCub; Reactable; human-robot interaction; human feedback
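[Editor's note] A toy sketch in the spirit of the rule extraction described above (the paper's actual representation is not detailed in the abstract): candidate preconditions survive only if they hold for every observed move labeled legal and fail for every move labeled illegal.

```python
# Tic Tac Toe: board is a 9-tuple of ' ', 'X', 'O'; a move is (player, cell).
CANDIDATES = {
    "target cell is empty": lambda board, player, cell: board[cell] == " ",
    "target cell holds own mark": lambda board, player, cell: board[cell] == player,
    "cell is a corner": lambda board, player, cell: cell in (0, 2, 6, 8),
}

def extract_rules(observations):
    """observations: list of (board, player, cell, legal?) tuples."""
    rules = []
    for name, pred in CANDIDATES.items():
        holds_for_legal = all(pred(b, p, c)
                              for b, p, c, legal in observations if legal)
        fails_for_illegal = all(not pred(b, p, c)
                                for b, p, c, legal in observations if not legal)
        if holds_for_legal and fails_for_illegal:
            rules.append(name)   # a sine qua non condition for a legal move
    return rules

empty = (" ",) * 9
obs = [
    (empty, "X", 4, True),                       # legal: cell empty
    (("X",) + (" ",) * 8, "O", 0, False),        # illegal: cell occupied
    (("X", "O") + (" ",) * 7, "X", 1, False),    # illegal: cell occupied
]
print(extract_rules(obs))   # -> ['target cell is empty']
```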
"Where is Your Nose?" -- Developing Body Awareness Skills Among Children With Autism Using a Humanoid Robot BIBAKFull-Text 117-122
  Sandra Costa; Hagen Lehmann; Ben Robins; Kerstin Dautenhahn; Filomena Soares
This article describes an exploratory study in which children with autism interact with KASPAR, a humanoid robot equipped with tactile sensors able to distinguish a gentle from a harsh touch and to respond accordingly. The study investigated a novel scenario for robot-assisted play, namely increasing body awareness through tasks that taught the children to identify human body parts. Based on our analysis of the children's behaviours while interacting with KASPAR, our results show that the children looked at the experimenter for longer periods of time, and considerable interest in touching the robot was observed. The results also show that the robot can be considered a tool for prolonging the children's attention span, acting as a social mediator in the interaction between child and experimenter. The results are primarily based on the analysis of video data of the interaction. Overall, this first study of teaching children with autism about body parts using a humanoid robot highlighted issues of scenario development, data collection and data analysis that will inform future studies.
Keywords: Assistive Technologies; Socially Assistive Robots; Human-Robot Interaction; Body Awareness
An Interactive Game with a Robot: Peoples' Perceptions of Robot Faces and a Gesture-Based User Interface (pp. 123-128)
  Michael Walters; Samuel Marcos; Dag Sverre Syrdal; Kerstin Dautenhahn
This paper presents findings from an HRI user study which investigated participants' perceptions and experiences playing a simple version of the classic game stone-paper-scissors with a humanoid robot. Participants experienced the robot displaying one of four different robot faces and interacted with the robot using a gesture-based interface. Findings from the study indicated that the effects of the different robot faces were inter-related with participants' gender and ratings of overall enjoyment of the game experience. The usability and effectiveness of the gesture-based interface were rated positively overall by participants, though the use of a separate display for the game interface seems to have distracted participants' attention from the robot's face.
Keywords: HRI, Human-Robot-Interaction, Gesture-Based User Interface, Interactive Game Robot
Robust Perception of an Interaction Partner Using Depth Information (pp. 129-134)
  Salah Saleh; Anne Kickton; Jochen Hirth; Karsten Berns
Social interactive robots require sophisticated perception abilities to behave and interact in a natural way. The proper perception of their human interaction partners plays a crucial role. The reduction of the false positive rate for human detection is very important for increasing the natural interaction abilities. This paper presents a combined method using RGB data as well as depth information to find humans in the robot's surrounding. To track a person over time a Kalman filter is applied, which also reduces the processing time. Furthermore, a head pose estimation on the basis of Support Vector Machines is integrated, which can be used to perceive nonverbal expressive cues like nodding. The proposed method is tested in various experiments.
Keywords: social robots; perception; human-robot interaction
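[Editor's note] The abstract above names a Kalman filter for tracking the detected person but gives no state model or tuning; below is a minimal constant-velocity sketch of the general technique, with all noise parameters assumed.

```python
# Constant-velocity Kalman filter over a person's (x, z) floor position.
import numpy as np

dt = 1 / 30.0                          # camera frame period (assumed)
F = np.array([[1, 0, dt, 0],           # state: [x, z, vx, vz]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],            # the detector observes position only
              [0, 1, 0, 0]])
Q = 1e-3 * np.eye(4)                   # process noise (assumed)
R = 1e-2 * np.eye(2)                   # measurement noise (assumed)

x = np.zeros(4)                        # initial state
P = np.eye(4)                          # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle; z = measured (x, z) from the detector."""
    x = F @ x                                   # predict state
    P = F @ P @ F.T + Q                         # predict covariance
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y                               # update state
    P = (np.eye(4) - K @ H) @ P                 # update covariance
    return x, P

for z in [np.array([0.0, 2.0]), np.array([0.02, 1.98]), np.array([0.05, 1.95])]:
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "velocity:", x[2:])
```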
Gesture Recognition for Humanoid Assisted Interactive Sign Language Tutoring (pp. 135-140)
  Bekir Sitki Ertuğrul; Cemal Gurpinar; Hasan Kivrak; Ajla Kulaglic; Hatice Kose
This work is part of ongoing research on sign language tutoring through imitation-based turn-taking and interaction games (iSign) with humanoid robots and children with communication impairments. The paper focuses on the extension of the game, mainly for children with autism. Autism Spectrum Disorder (ASD) involves communication impairments, limited social interaction, and limited imagination. Many such children show interest in robots and find them engaging, and robots can facilitate social interaction between the child and the teacher. In this work, a Nao H25 humanoid robot assisted the human teacher in teaching some signs and basic upper-torso actions, which were observed and imitated by the participants. A Kinect camera-based system was used to recognize the signs and other actions, and the robot gave visual and auditory feedback to the participants based on their performance.
Keywords: Human-robot interaction; autism; imitation games; sign language

Applications

Knowledge-driven User Activity Recognition for a Smart House. Development and Validation of a Generic and Low-Cost, Resource-Efficient System (pp. 141-146)
  Ismael Duque; Kerstin Dautenhahn; Kheng Lee Koay; Ian Willcock; Bruce Christianson
Our core interest is the development of autonomous and socially interactive robots that may support elderly users at home as part of a smart home, i.e., a home equipped with a sensor network that may detect activities of daily living such as preparing food in the kitchen, having a meal in the living room, watching television, etc. The current paper presents the design and implementation of a low-cost, resource-efficient activity recognition system that can detect user activities without the need to collect a large dataset to train the system. Based on common-sense knowledge of activities of daily living, we generated a set of rules defining user activities in a home setting. These rules can be edited and adapted easily to accommodate different environments and daily routines. The approach has been validated empirically with a pilot study in the University of Hertfordshire Robot House. The paper presents results from a study with 14 participants performing different daily life activities in the house. The results are promising, and future work will include the integration of this system in a Smart House used for Human-Robot Interaction studies. This may help develop context-aware robot companions capable of making better decisions to support users in their daily activities.
Keywords: Activity Recognition; Smart Houses; Context-Aware
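[Editor's note] A minimal sketch of the knowledge-driven, editable rule set the abstract describes (sensor names and rules are illustrative assumptions, not the Robot House's actual configuration):

```python
# An activity fires when all of its sensor conditions hold, so no training
# dataset is required; rules can be edited to fit other homes and routines.
RULES = {
    "preparing food":  {"kitchen_presence": True, "fridge_door": "open"},
    "having a meal":   {"livingroom_presence": True, "chair_pressure": True},
    "watching TV":     {"livingroom_presence": True, "tv_power": "on"},
}

def recognize(sensor_state: dict) -> list[str]:
    """Return every activity whose conditions are all satisfied."""
    return [activity for activity, conds in RULES.items()
            if all(sensor_state.get(k) == v for k, v in conds.items())]

state = {"livingroom_presence": True, "tv_power": "on", "chair_pressure": False}
print(recognize(state))   # -> ['watching TV']
```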
Comparison of Simultaneous Measurement of Lens Accommodation and Convergence in Viewing Natural and Stereoscopic Visual Target (pp. 147-150)
  Tomoki Shiomi; Takehito Kojima; Keita Uemoto; Masaru Miyao
Recent advances have been made in 3D technology. However, the influence of stereoscopic vision on human sight remains insufficiently understood. The public has come to understand that lens accommodation and convergence are mismatched during stereoscopic vision, and that this is the main reason for the visual fatigue caused by viewing 3D images. The aim of this study is to compare the fixation distances of accommodation and convergence when viewing real objects and 3D video clips. The real objects and 3D video clips performed the same movements, and we measured accommodation and convergence in subjects who watched both. From the results of this experiment, we found no discrepancy in viewing either 3D video clips or real objects. We therefore argue that the symptoms experienced in stereoscopic viewing may not be due to a discrepancy between lens accommodation and convergence.
Keywords: accommodation, convergence, simultaneous measurement, stereoscopic vision
Study of a FCMAC ANN for Implementation in the Modeling of an Active Control Transtibial Prosthesis (pp. 151-156)
  Jose Alberto Alves Andrade; Lourdes Mattos Brasil; Everaldo Henrique Diniz; Keli Cristina Vieira Siqueira Borges; Jeann Feitosa Figueiredo; Rita de Cassia Silva
This article presents topics developed by the authors on a model of posture and behavior control for an Active Transtibial Prosthesis, i.e., a prosthesis for individuals with amputation below the knee. We intend to use a neuro-fuzzy ANN, specifically of the FCMAC type, as the basis of this control. Such an ANN can memorize a region of operation, so that for inputs similar to those stored in memory, known outputs can be generated. We present an early version of this work, whose application was the modeling of the inverse kinematics of a leg in the sagittal plane.
Keywords: Active Transtibial Prosthesis, Control, Artificial Intelligence, ANN Neuro-Fuzzy
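[Editor's note] For readers unfamiliar with the target problem, this is the standard analytic two-link inverse kinematics of a leg in the sagittal plane, which an FCMAC-type network could be trained to reproduce; the link lengths are assumptions.

```python
# Two-link planar IK: hip and knee angles placing the ankle at (x, y) in the
# hip's frame. theta2 = acos((d^2 - L1^2 - L2^2) / (2 L1 L2)).
import math

L1, L2 = 0.45, 0.43   # thigh and shank lengths in metres (assumed)

def leg_ik(x, y):
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    knee = math.acos(c2)                       # knee flexion (one branch)
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

hip, knee = leg_ik(0.2, -0.7)
print(f"hip {math.degrees(hip):.1f} deg, knee {math.degrees(knee):.1f} deg")
```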
The Iterative Design and Evaluation Approach for a Socially-aware Search and Retrieval Application for Digital Archiving (pp. 157-161)
  Dimitris Spiliotopoulos; Ruben Bouwmeester; Dominik Frey; Georgios Kouroupetroglou; Pepi Stavropoulou
Designing user interfaces involves several iterations of usability design and evaluation, as well as incremental functionality integration and testing. This paper reports on the methodological approach for the design and implementation of an application used for the search and retrieval of socially-aware digital content. It presents the archivist view of professional media organizations and the specific requirements for successful retrieval of content. The volume of content derived from social media analysis is enormous, and appropriate actions need to be taken to avoid irrelevant and/or repeated social information in the displayed results, as well as over-information. The archivists' feedback reveals how humans handle social information presented as metadata alongside the archived raw content, and how this drives the design of a dedicated search and retrieval application.
Keywords: search and retrieval user interfaces; social network information; archiving; preservation; user interface design; usability

Agents and Human Interaction

Effect of Agent Embodiment on the Elder User Enjoyment of a Game (pp. 162-167)
  Jérémy Wrobel; Ya Huei Wu; Hélène Kerhervé; Laila Kamali; Anne Sophie Rigaud; Céline Jost; Brigitte Le Pévédic; Dominique Duhaut
This paper presents a study that compared elderly users' enjoyment of a trivia game in three conditions: participants playing the game with a laptop PC, a robot, or a virtual agent. Statistical analysis did not show any significant difference among the three devices in user enjoyment, while qualitative analysis revealed a preference for the laptop PC condition, followed by the robot and the virtual agent. The elderly participants concentrated on task performance rather than on interaction with the systems. They preferred the laptop PC condition mainly because fewer interface elements distracted them from performing the task proposed by the game. Further, the robot was preferred to the virtual agent because of its physical presence. Some issues with the experiment design are raised, and directions for future research are suggested to gain more insight into the effects of agent embodiment on human-agent interaction.
Keywords: Embodied agent; human-robot interaction; user enjoyment
Augmenting Remote Trading Card Play with Virtual Characters used in Animation and Game Stories -- Towards Persuasive and Ambient Transmedia Storytelling -- (pp. 168-177)
  Mizuki Sakamoto; Todorka Alexandrova; Tatsuo Nakajima
Using well-known virtual characters is a promising approach to enhancing information services, since such characters easily provoke people's empathetic feelings, and it is easy for people to recall the leitmotif of the characters' fictional stories. In Japan, it has recently become popular to use famous virtual characters from animations and games in various services, and this has even become a main business activity for some companies. Our daily life in the real world consists of various social activities, and virtual characters offer the possibility of enhancing these activities: current social activities might be gamified by replacing unknown people with our favorite virtual characters, or might be augmented by the characters' stories. In this paper, we present the Augmented Trading Card Game, which enhances remote trading card game play with virtual characters from the fictional stories of popular animations and games. We report our observations of how players use the system and what their feelings and impressions of the game are. We believe the results are useful for considering how to use empathetic virtual characters, and the fictional stories in which they appear, in real-world activities for future information services. We also discuss how our approach can be extended to design a new type of transmedia storytelling, by considering the Augmented Trading Card Game as one form of transmedia storytelling.
Keywords: Empathetic virtual characters; Game design; Augmented reality; Trading card game; Ideological metaphor; Animation and game stories; Physical tangibility; Transmedia storytelling
An Interactive Agent Supporting First Meet Based on Adaptive Entrainment Control (pp. 178-183)
  Tatsuya Hayamizu; Mutsuo Sano; Kenzaburo Miyawaki; Kentarou Mukai
This paper describes an agent that can facilitate first-meeting communications. In this situation, a communication mediator is important because people can feel stress and an inability to talk comfortably. Our agent reduces this stress by using embodied entrainment and promoting communication. Previous research into embodied entrainment has discussed appropriate back-channel feedback, but communication studies have been limited. We propose an embodied entrainment control system that recognizes the state of the communication and adapts to each situation with effective nonverbal communication. In this way, our agent mediates a balanced, two-way conversation. Our experiments with the agent confirmed its effectiveness across various social skill levels. We demonstrate that the embodied entrainment of our agent in first meetings benefits people who have low social skills, thereby verifying the efficacy of our agent.
Keywords: Embodied Entrainment; Nonverbal Communication; Introducer Agent; Group Communication Introduction; Social Skills
The Virtual Counselor -- Automated Character Animation for Ambient Assisted Living (pp. 184-187)
  Sascha Fagel; Martin Morandell; Christopher Mayer; Andreas Hilbert
We present a system for the automated animation of text or voice messages suitable for Ambient Assisted Living user interfaces. Input to the system can be text, a pre-recorded speech file, or a speech signal captured directly from the microphone. Speech animation parameters are calculated by a co-articulation model, either from the voice audio or -- in the case of text input -- from the phone chain extracted in the Text-To-Speech processing step. An animation script that layers body movements and speech animation is generated. This script is rendered and converted into an H.264 video by a computer game engine. The system is being developed for use in care services for elderly users within a European research project.
Keywords: automatic character animation; embodied conversational agent; ambient assisted living; multimodal user interfaces; audiovisual speech synthesis

Education

AlgoPath's New Interface Helps You Find Your Way Through Common Algorithmic Mistakes (pp. 188-193)
  Estelle Perrin; Sébastien Linck
This paper presents the new interface of our serious game AlgoPath and its related interactions. AlgoPath helps students learn algorithmics. The virtual world represented in AlgoPath is built around the business of road construction and the people running along these roads: the objects students interact with are 3D figures, houses (huts and suburban houses), boxes, a crane, a concrete mixer and a bus station. This paper shows that AlgoPath helps students avoid common mistakes made while learning algorithmics. The entire interface is dedicated to helping them conceptualize and understand the rules of algorithmics and programming. Whenever possible, AlgoPath reminds students of these rules and corrects their mistakes.
Keywords: 3D-based training; education; algorithmic; ludic teaching
A Three-Dimensional Interactive Simulated-Globe System Application in Education (pp. 194-198)
  Wei-Kai Liou; Chun-Yen Chang
This study proposes an innovative three-dimensional (3D) interactive simulated-globe system. The instrument includes a data processing unit, a wireless control unit, an image capturing unit, a laser emission unit, and 3D hemispheric imaging. The 3D hemispheric imaging displays the output image from the data processing unit, and the laser emission unit projects a laser spot onto that image. The spherical coordinates of the laser spot, detected by the data processing unit through the image capturing unit, are converted into plane coordinates by a coordinate-converter operation. Through image acquisition, calibration, and internal-to-external coordinate conversion, the cursor output by the data processing unit onto the 3D hemispheric body is guided to move synchronously with the laser spot. Combined with wireless control technology, the system can synchronously drive, control and interact with the software image on the 3D hemispheric body. This allows general planetary software, such as Google Earth (Mars, Moon), normally used on flat panel displays or projections, to be shown on the 3D hemispheric body; the laser spot and the wireless control unit then synchronously control the cursor and software on the 3D hemispheric imaging for a variety of interactions, bringing an interactive 3D globe system to any classroom or astronomical museum for formal and social education.
Keywords: three-dimensional; interactive; spherical coordinates; internal coordinates; external coordinate
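[Editor's note] The abstract does not specify the projection used in its spherical-to-plane conversion step; assuming an equirectangular mapping, the conversion reduces to a few lines:

```python
# Map the detected laser spot's (longitude, latitude) on the hemisphere to
# pixel coordinates in the source image fed to the cursor of globe software.
def spherical_to_plane(lon_deg, lat_deg, width_px, height_px):
    u = (lon_deg + 180.0) / 360.0          # 0..1 left-to-right
    v = (90.0 - lat_deg) / 180.0           # 0..1 top-to-bottom
    return round(u * (width_px - 1)), round(v * (height_px - 1))

# Hypothetical spot at 121.5 deg E, 25 deg N on a 1024x512 source image.
print(spherical_to_plane(121.5, 25.0, 1024, 512))
```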
Virtual Simulation of the Construction Activity: Bridge Deck Composed of Precast Beams (pp. 199-203)
  Luís Viana; Alcinia Zita Sampaio
Several construction processes are applied in the execution of bridge or overpass decks. A 4D (3D + time) geometric model in a Virtual Reality environment, which simulates the construction of a bridge deck composed of precast beams, was implemented. The model allows viewing of, and interaction with, the various steps and the main elements involved in the construction process. In order to develop the virtual model, the components of the construction, the steps inherent in the process and its follow-up, and the type and mode of operation of the required equipment were first examined in detail. Based on this study, 3D geometric models of the different elements that make up the site were created, and a schedule simulating the construction activity in an interactive mode was established. As the model is interactive, it gives the user access to different stages of the construction process, allowing different views in time and space throughout the development of the construction work and thereby supporting understanding of this construction method. Since the model is didactic in character, it can be used to support the training of students and professionals in the field of bridge construction.
Keywords: Bridge construction; interaction; simulation; virtual reality
The iPad in a Classroom: A Cool Personal Item or Simply an Educational Tool? (pp. 204-209)
  Andrea Gasparini; Alma Leora Culén
In this paper, we discuss the dual role of the iPad among teenage high school students using the tablet as a 1-1 (one tablet per student) educational tool. On one hand, the iPad is a personal, mobile and cool piece of technology. On the other hand, it is a piece of technology provided by the school and given to students as an anytime, anyplace mobile educational tool. Our goal is to understand the space between use situations related to school work and those that are private and personal. After ten months of observing the use of the iPad, we conclude that the teenage participants in our study treat the iPad as a purely educational tool.
Keywords: cool; identity; iPad; education; learning; techno-cools
Architecture of an Intelligent Tutoring System Applied to the Breast Cancer Based on Ontology, Artificial Neural Networks and Expert Systems (pp. 210-214)
  Henrique P. Maffon; Jairo S. Melo; Tamara A. Morais; Patrycia B. Klavdianos; Lourdes M. Brasil; Thiago L. Amaral; Gloria M. Curilem
This paper presents an Intelligent Tutoring System (ITS) applied to the teaching of breast anatomy and pathology, more specifically breast cancer, the type of cancer that kills the most women worldwide. The paper aims to elucidate the importance of using such systems, elicit requirements for their development, and design an ITS architecture. Through Artificial Intelligence techniques, this ITS can acquire its user's profile and define teaching methodologies to build interactive and dynamic environments, including the use of Virtual Reality. The architecture of this ITS consists of four modules: the Tutor Module (Artificial Neural Network), the Student Module (Expert System), the Domain Module (Ontology) and the Interface Module (Adaptive Hypermedia Systems). The ITS provides didactic help to students and health professionals in understanding the explanations and practical applications needed in this domain of knowledge.
Keywords: Intelligent Tutoring System; Expert System; Artificial Neural Network; Ontology

Usability and Universal Accessibility I

CyPhy-UI: Cyber-Physical User Interaction Paradigm to Control Networked Appliances with Augmented Reality (pp. 215-221)
  Kenya Sato; Naoya Sakamoto; Shinya Mihara; Hideki Shimada
Many kinds of networked home appliances, connected through standardized control functions, have recently appeared and continue to increase in number. Because a general infrared remote control provides only one-way communication, from the remote to a specific appliance, it cannot receive signals from the appliance, making it impossible to gather an appliance's information with such a remote. With a WiFi controller, on the other hand, it is difficult to address a specific appliance, because users can operate all of their appliances simultaneously. In this paper, to control networked appliances with a smartphone or tablet computer as a WiFi controller, we propose a new interface paradigm called cyber-physical user interaction, which creates a virtual (cyber) space and sends commands to, and receives responses from, networked (physical) appliances through that space using augmented-reality (AR) technology. With this paradigm, which enables interconnectivity among appliances from various vendors, it is possible to provide users with uniform and intuitive operation of home appliances. In addition, we implement and evaluate an Embodied Visualization with Augmented-Reality for Networked Systems (EVANS) that controls a system of home appliances and sensor devices through the cyber-physical user interaction (CyPhy-UI) paradigm, using a web camera to retrieve information from the real-world environment and a touch-screen display to show the AR visualization and the user interaction components that capture user input.
Keywords: appliance; control; cyber-physical; user interface; augmented reality
Luminance Contrast Influences Reaction Time in Young and Older Adults (pp. 222-227)
  Patrick J. Grabowski; Andrea H. Mason
Age-specific design principles for three dimensional virtual environment systems are sparse. Given that sensorimotor control systems change across the lifespan, understanding age differences in motor performance within virtual environments is crucial to designing effective, usable interfaces. This paper investigates the effect of luminance contrast level on reaction time to a visual stimulus in both young and senior adults. Results indicate that young adults have faster reaction times than seniors, but both groups improved reaction times with increasing luminance contrast of the target. Young adults improved at lower levels of contrast than seniors. Implications for age-specific design of virtual environments are discussed.
Keywords: virtual environment; aging; motor control; reach to grasp; luminance contrast
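[Editor's note] As background for the manipulated variable above, two standard definitions of luminance contrast are shown below; the abstract does not say which definition the study used.

```python
def michelson(l_max: float, l_min: float) -> float:
    """Michelson contrast, typically used for gratings/symmetric patterns."""
    return (l_max - l_min) / (l_max + l_min)

def weber(l_target: float, l_background: float) -> float:
    """Weber contrast, typically used for a target on a uniform background."""
    return (l_target - l_background) / l_background

print(michelson(120.0, 40.0))   # luminances in cd/m^2 -> 0.5
print(weber(60.0, 40.0))        # -> 0.5
```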
Networked Visibility: The case of smart card ticket information (pp. 228-233)
  Maja van der Velden; Alma Leora Culén; Jo Herstad; Abdulhakeem Atif
This paper concerns the replacement of paper tickets with smart card tickets for public transportation. By contrasting the visibility of ticket information to users of paper tickets and smart card tickets, this paper describes the move from local information on paper tickets to distributed information on smart cards. Using the concept of 'networked visibility', this paper argues that this move has resulted in less informed travelers and more informed providers. In order to restore the accessibility of ticket information to users, we present one possible solution, ARTick, a mobile phone smart card reader app. ARTick shows that smart cards can make complete ticket information visible to the user, whenever and wherever this information is needed.
Keywords: Information Visibility; Networked Visibility; Smart Cards; Ticket Information; Mobile Phones
TV Applications for the Elderly: Assessing the Acceptance of Adaptation and Multimodality (pp. 234-242)
  José Coelho; Pradipta Biswas; Tiago Guerreiro; Gokçen Aslan; Carlos Duarte; Pat Langdon
Current TV applications present countless challenges to elderly users, and these are likely to increase, acting as a vehicle of exclusion. The GUIDE project aims at improving the elderly experience with present TV applications by deploying interfaces adapted to users' abilities and preferences. We do so by building the interface on a user model, providing new ways to interact with the TV (multimodality), and tailoring the UI to the users' abilities, needs and preferences. In this paper, we assess concepts of the GUIDE framework, with particular focus on the User Initialization Application (UIA), an interactive application able to build the aforementioned user model. We report an evaluation with 40 older users from two countries (UK and Spain). Results show that the UIA is able to create adequate profiles and that users perceive the adaptations positively. Further, novel ways of interacting with the TV were also successfully evaluated, as users tended to experiment with and rate positively most alternatives, particularly speech and tablet interaction.
Keywords: accessible applications, elderly, multimodal, simulation, GUIDE
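[Editor's note] A minimal sketch of the general idea of a user model driving UI adaptation; the profile fields, thresholds and adaptations below are illustrative assumptions, not the GUIDE framework's API.

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    visual_acuity: float    # 0 (none) .. 1 (normal), e.g., from UIA-style tests
    hearing: float          # 0 .. 1
    motor_precision: float  # 0 .. 1, e.g., pointing accuracy

def adapt_ui(m: UserModel) -> dict:
    ui = {"font_scale": 1.0, "modalities": ["remote"]}
    if m.visual_acuity < 0.6:
        ui["font_scale"] = 1.6             # enlarge text and buttons
    if m.hearing > 0.4:
        ui["modalities"].append("speech")  # offer speech input/output
    if m.motor_precision > 0.5:
        ui["modalities"].append("tablet")  # fine pointing is feasible
    return ui

print(adapt_ui(UserModel(visual_acuity=0.5, hearing=0.8, motor_precision=0.7)))
```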
Identifying Cross-Platform and Cross-Modality Interaction Problems in e-Learning Environments (pp. 243-249)
  André da Silva; Fernanda Freire; Heloísa da Rocha
Web applications and sites are designed for keyboard and mouse as input devices and a medium-resolution screen as the output device. Mobile devices, such as smartphones and tablets, have enough computational power to render Web pages, allowing users to browse the Internet. However, their main interaction style is touch, which was not usually considered in Web application design. Changing the platform or interaction style can lead to interaction problems. To study these problems, we investigated the use of TelEduc, an e-Learning environment designed for the Internet and for keyboard-and-mouse use, on two touchscreen devices: a smartphone and a tablet. Some of the problems found are usability problems unrelated to platform or modality, but others are related to the change of platform or modality.
Keywords: Mobile devices and services; Interfaces, interactions and systems for distance education; Interface evaluation; Usability testing and evaluation

Usability and Universal Accessibility II

Applying Commercial Digital Games to Promote Upper Extremity Movement Functions for Stroke Patients (pp. 250-255)
  Lan-Ling Huang; Chang-Franw Lee; Mei-Hsiang Chen
The objective of this study is to evaluate the effectiveness of, usability of and satisfaction with conventional devices, the Nintendo Wii and XaviX for upper extremity rehabilitation of patients in Taiwan, and to derive guidelines for improving the design of such devices. Twelve stroke patients were divided into three groups: (1) conventional, (2) Wii and (3) XaviX. Eight senior occupational therapists were interviewed about usage problems and additional needs related to these devices. The results show that the Wii and XaviX can be equivalent to conventional rehabilitation devices for improving upper extremity motor functions. All patients in this clinical trial were satisfied with using the digital gaming devices for rehabilitation. The suggestions for improving the design of game devices are as follows. For the software interface: (a) the difficulty and response-time levels of the games need adjustment; (b) a means of recording movement data and game scores in each session is needed; and (c) the games need a Chinese version of the software interface. For the hardware design: (a) the hand controller must be interchangeable between users; (b) the controller should be adjustable to fit patients' different hand dimensions; (c) the game and controller movements need to be designed to correspond to real-life activities; and (d) the controller's operation needs to be simplified. These guidelines are intended to inform design improvements of such devices.
Keywords: effectiveness; usability assessment; commercial digital game devices; stroke; upper extremity rehabilitation
Evaluating the Interaction of Users with Low Vision in a Multimodal Environment (pp. 256-262)
  Clodis Boscarioli; Marcio Seiji Oyamada; Jorge Bidarra; Marcelo Fudo Rech
This paper presents the PlatMult environment, a multimodal platform that aims to provide accessibility in interactive kiosks for users with low vision and elderly users. The PlatMult solution is composed of a screen magnifier (visual stimulus), a screen reader (auditory stimulus), and motor feedback (tactile feedback). The evaluation of the interaction with users with low vision is described, focusing on usability and accessibility aspects. The paper also discusses the potential of the platform for social and digital inclusion. We conclude that PlatMult, with its integrated features, helps users to access and use information systems in a suitable way.
Keywords: PlatMult environment; low vision; usability; accessibility
Bimanual Performance in Unpredictable Virtual Environments: A Lifespan Study (pp. 263-268)
  Andrea Mason; Drew Rutherford; Patrick Grabowski
Interaction and interface design for the young and the elderly has become an important research topic. The purpose of the research described here is to characterize motor performance in virtual environments across the lifespan. Participants between the ages of 7 and 90 years simultaneously reached to pick up two objects with their right and left hands in a desktop virtual environment. On random trials, objects were unexpectedly moved to new locations. Results indicated that older adults used different movement strategies in the virtual environment when compared to results from natural environment experiments. Further, children and older adults responded to perturbation conditions with different movement time and hand coupling strategies than young and middle-aged adults. These results suggest that age and task-specific design is necessary to ensure general access and optimal performance in virtual environments.
Keywords: virtual environment; aging; motor control; bimanual reach to grasp
Usability Analysis of Children's iPad Electronic Picture books (pp. 269-273)
  Pei-shiuan Tsai; Manlai You
The main purpose of this research is to understand the current state of design and development of iPad electronic picture books and to analyze their usability. The researchers used ranking lists, search and browsing in the App Store to survey a large number of electronic picture books. In the final stage, we screened six different representative iPad electronic picture books and analyzed their usability. We selected by purposive sampling 15 adults (eight teachers and seven mothers) and six six-year-old children (3 boys and 3 girls) who had experience using an iPad. The subjects first browsed the six electronic picture books; the 15 adults then filled out questionnaires, while the six children were interviewed and their operations observed to understand their preferences and use of the products. The research found the following commonalities in the design of iPad electronic picture books: a) they are dominated by a page-based style; b) they focus on linear narrative development; c) most interactive designs of story content involve tapping objects on the screen; d) they provide different language and audio versions; e) most adopt limited animation, using zoom-in, zoom-out and transition techniques. The recommendations for future publishers and designers are: a) increase the interactivity of the story content; b) add traditional Chinese subtitles and narration; c) integrate picture book platforms.
Keywords: Usability; e-Picture Book; iPad Picture book
Evaluating the Impact of Spatial Ability in Virtual and Real World Environments BIBAKFull-Text 274-279
  Georgi Batinov; Kofi Whitney; Les Miller; Sarah Nusser; Bryan Stanfill; Kathleen Ashenfelter
Survey agencies in the United States continue to move many map-based surveys from paper to handheld computers. With large, highly diverse workforces, it is necessary to test software with a diverse population. The present work examines the performance of participants grouped by their level of spatial visualization. The participants were tested either in the field or in a fully immersive virtual environment. The methodology of the study is explained. The performance of the participants in the two environments is modeled with least squares regression. Results of the study are presented and discussed.
Keywords: map-based survey, virtual reality, spatial ability

Applications in Medicine

Software Lifecycle Activities to Improve Security Into Medical Device Applications BIBAKFull-Text 280-285
  Diogo Rispoli; Vinicius Rispoli; Paula Fernandes; Lourdes Brasil
This work proposes a methodology that incorporates activities to improve security into the medical software development lifecycle. The methodology uses assessment techniques and methods, applied to each phase of the software lifecycle, that address security concerns and help to improve software quality. As a result, a partial analysis using the proposed methodology was performed on medical software at the development stage to help close the gap between its safety and security requirements.
Keywords: information security; security software; hackers; medical software lifecycle; security risks
"Handreha": A new Hand and Wrist Haptic Device for Hemiplegic Children BIBAKFull-Text 286-292
  Mohamed Bouri; Charles Baur; Reymond Clavel; Christopher John Newman; Milan Zedka
This paper presents the development of a new haptic device for hand rehabilitation for hemiplegic children. "Handreha" has been developed at EPFL thanks to the interest of the "Neurology and Pediatric NeuroRehabilitation" Service of the CHUV (Centre Hospitalier Universitaire du Canton de Vaud). The novelty of this device is that it is a 3 degrees-of-freedom desktop system, supporting pronation/supination, flexion/extension and grasping hand movements, and that, in its current state, it is dedicated entirely to children. The kinematics and construction aspects of this desktop device are presented, and its advantages are discussed to point out the benefits of this structure. Control and force feedback aspects combined with virtual reality are also presented. A prototype of the "Handreha" has been realized, and its performance is discussed. The first evaluations with hemiplegic children show that the mechanical design of the device fits the targeted specifications.
Keywords: Hand Rehabilitation; wrist; hemiplegic; children; force feedback; control; virtual reality
Fundamental Study to Consider for Evaluation of A Welfare Device BIBAKFull-Text 293-298
  Hiroaki Inoue; Shunji Shimizu; Noboru Takahashi; Hiroyuki Nara; Takeshi Tsuruga; Fumikazu Miwakeichi; Nobuhide Hirai; Senichiro Kikuchi; Eiju Watanabe; Satoshi Kato
Japan, like many other countries, has recently become an aging society, and a wide variety of welfare devices and systems have been developed. However, such devices are typically evaluated only for stability, strength, and partial operability, so their usefulness is insufficiently assessed. Evaluating usefulness requires considering the interaction between the human and the welfare device. In this paper, we measure the load of sitting and standing movements using EMG (electromyography) and 3D motion capture, with the goal of establishing an objective evaluation method, which we regard as necessary for developing useful welfare devices. We also examined the possibility of assessing load and fatigue by measuring brain activity with NIRS (near-infrared spectroscopy), since measuring load and fatigue is very important for developing user-friendly welfare devices. The idea of universal design is widespread in welfare devices and systems, and measurement therefore requires verification across all generations; however, we measured younger subjects as a first step. We expected significant differences to be observable in younger subjects because they have full physical function, and we consider younger subjects an appropriate benchmark for creating the evaluation method.
Keywords: Evaluation; Movement; Exercise; 3D Motion Capture; NIRS; EMG; Care; Welfare Technology; Useful welfare device evaluation; Evaluation method

Haptic Interfaces I

Haptic Manipulation of Objects on Multitouch Screens: Effects of Screen Elevation, Inclination and Task Requirements on Posture and Fatigue BIBAKFull-Text 299-302
  Samantha Scotland; Shwetarupalika Das; Thomas Armstrong; Bernard Martin
Manufacturers of measuring instruments wish to apply multi-touch screen interfaces to follow the current trend in consumer products. This study is a first approach to determining the influence of screen location and orientation on upper limb movements and posture. Neck and wrist posture, as well as finger movements and subjective perception of discomfort, were evaluated in simple tasks simulating the placement and scaling of objects using a multi-touch interface placed on a computer screen of the same size. The results show that wrist and neck postures are generally affected by an interaction between screen height and inclination. They also suggest that the precision requirements associated with measuring instruments may not be compatible with a simple transfer of manipulation methods from consumer products.
Keywords: Natural gestures; finger movements; wrist posture; discomfort
Sliding Raised-Dots Perceptual Characteristics BIBAKFull-Text 303-308
  Yoshihiko Nomura; Syed Muammar Najib Syed Yusoh; Kazuki Iwabu; Ryota Sakamoto
The authors have studied a new mode of cutaneous sensation characteristics on finger pads, aiming toward an accurate physical-line-presenting computer-human interface. A series of psychophysical experiments on motion perception characteristics was carried out with raised dots sliding on finger pads. It was found that perception operates in two modes: (1) a dot-counting mode and (2) a raised-dots sliding-speed perceptual mode. The first mode worked with long dot spacing and low sliding speeds, and showed accuracy as high as proprioception-based fingertip motion perception. The second mode worked with short spacing or high speeds, and performed poorly, similar to perception of a sliding flat surface.
Keywords: cutaneous sensation; fingerpad; sliding; raised dots; counting; speed
1 DOF Tabletop Haptic Mouse for Shape Recognition of 3D Virtual Objects BIBAKFull-Text 309-314
  Hiroshi Suzuki; Hiroaki Yano; Hiroo Iwata
In this paper, we propose a 1 degree-of-freedom (DOF) mouse-shaped haptic device for shape perception of 3D virtual objects. Reducing the DOF of a haptic device brings advantages such as miniaturization, solidity, and weight reduction. A 1 DOF haptic device consisting of a built-in optical shaft encoder, two position sensors, and a motor has been developed. This device is easier to integrate with a tabletop system than multiple-DOF haptic devices. We propose haptic algorithms that are effective for a 1 DOF haptic device, and two types of pointing environments with a multi-touch overlay. Experiments were conducted to evaluate the effectiveness of the proposed system, and the elements required to make the system functional were clarified.
Keywords: 1 DOF; Mouse Device; Haptic; Image Display; Direct Pointing; Multi-touch Overlay
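The abstract above does not detail the authors' haptic algorithms, so the following is only a generic sketch of how a 1 DOF device of this kind typically renders contact: a spring-damper "virtual wall" loop in Python. The gains, function names, and 1 kHz loop structure are illustrative assumptions, not the paper's implementation.

    # Minimal 1-DOF impedance-style haptic loop (illustrative sketch only).
    # A virtual wall at x = 0 is rendered with a spring-damper model:
    # f = -k*x - b*v while the handle penetrates the wall.

    K_WALL = 500.0   # virtual stiffness [N/m] (hypothetical value)
    B_WALL = 2.0     # virtual damping [N*s/m] (hypothetical value)

    def wall_force(x: float, v: float) -> float:
        """Feedback force for penetration depth x (m) and velocity v (m/s)."""
        if x >= 0.0:          # outside the virtual wall: no force
            return 0.0
        return -K_WALL * x - B_WALL * v   # push the handle back out

    # On a real device this runs at roughly 1 kHz: read encoder ->
    # estimate velocity -> compute force -> command motor torque.
    def haptic_step(read_position, estimate_velocity, command_force):
        x = read_position()              # from the optical shaft encoder
        v = estimate_velocity(x)         # e.g., filtered finite difference
        command_force(wall_force(x, v))  # to the motor driver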
Usability Study of Static/Dynamic Gestures and Haptic Input as Interfaces to 3D Games BIBAKFull-Text 315-323
  Farzin Farhadi-Niaki; Jesse Gerroir; Ali Arya; S. Ali Etemad; Robert Laganière; Pierre Payeur; Robert Biddle
In this paper, the quality of users' interaction with a 3D game using different modalities is studied. Three interaction methods with a 3D virtual environment are considered: a haptic 3D mouse, natural static gestures (postures), and natural dynamic (kinetic) gestures. Through a comprehensive user experiment we compared the pre-defined natural gestures to each other and to a haptic interface designed for the same game. The experiments analyze precision (error), efficiency (time), ease-of-use, pleasantness, fatigue, naturalness, mobility, and overall satisfaction as evaluation criteria. We also used user-selected ranks of importance as weight values for the evaluation criteria to measure overall satisfaction. Finally, our user experiment presents a learning curve for each of the three input methods which, along with the other findings, can be a good source for further research in the field of natural multimodal Human-Computer Interaction.
Keywords: Usability study, static/dynamic gestures, haptics, 3D game, human factors
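The weighted overall-satisfaction measure mentioned above lends itself to a one-line formula: a weighted mean of per-criterion ratings using each user's importance ranks as weights. The sketch below illustrates this with the criteria named in the abstract; the paper's exact aggregation is not specified, so this form, and all numbers, are assumptions.

    # Hypothetical weighted-satisfaction score, in the spirit of the abstract.
    criteria = ["precision", "efficiency", "ease_of_use", "pleasantness",
                "fatigue", "naturalness", "mobility"]

    def overall_satisfaction(ratings, weights):
        """Weighted mean of per-criterion ratings; weights need not be normalized."""
        total_w = sum(weights[c] for c in criteria)
        return sum(ratings[c] * weights[c] for c in criteria) / total_w

    # Example: 1-5 ratings, 1-7 user-selected importance ranks (invented data).
    ratings = {"precision": 4, "efficiency": 3, "ease_of_use": 5,
               "pleasantness": 4, "fatigue": 2, "naturalness": 5, "mobility": 3}
    weights = {"precision": 7, "efficiency": 6, "ease_of_use": 5,
               "pleasantness": 3, "fatigue": 4, "naturalness": 2, "mobility": 1}
    print(overall_satisfaction(ratings, weights))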

Haptic Interfaces II

Interactive Dynamic Simulations with Co-Located Maglev Haptic and 3D Graphic Display BIBAKFull-Text 324-329
  Peter Berkelman; Sebastian Bozlee; Muneaki Miyasaka
We have developed a system which combines realtime dynamic simulations, 3D display, and magnetic levitation to provide high-fidelity co-located haptic and graphic interaction. Haptic interaction is generated by a horizontal array of cylindrical coils which act in combination to produce arbitrary forces and torques, in any direction, on magnets fixed to an instrument handle held by the user, according to the position and orientation sensed by a motion tracking sensor and the dynamics of a realtime physical simulation. Co-located graphics are provided by a thin flat screen placed directly above the coil array, so that the 3D display of virtual objects shares the same volume as the motion range of the handheld instrument. Shuttered glasses and a head tracking system preserve the alignment of the displayed environment and the interaction handle according to the user's head position. Interactive demonstration environments include rigid bodies with solid contacts, suspended mass-spring-damper assemblies, and deformable surfaces.
Keywords: haptics, interactive systems, simulations
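The actuation principle described above, many coils jointly producing an arbitrary force and torque on handle magnets, is commonly posed as a linear allocation problem. The sketch below illustrates that generic formulation with a least-squares solve; the actuation matrix here is a random stand-in for a calibrated, pose-dependent model, not the authors' controller.

    import numpy as np

    # Illustrative coil-current allocation (not the authors' implementation).
    # Assume forces/torques are approximately linear in coil currents for a
    # given handle pose: wrench = A(pose) @ currents, with A a 6 x n matrix.
    def coil_currents(A: np.ndarray, wrench: np.ndarray) -> np.ndarray:
        """Least-squares currents for a desired wrench [Fx,Fy,Fz,Tx,Ty,Tz]."""
        currents, *_ = np.linalg.lstsq(A, wrench, rcond=None)
        return currents

    # Example with a hypothetical 6-coil array:
    A = np.random.default_rng(0).normal(size=(6, 6))    # stand-in for a model
    wrench = np.array([0.0, 0.0, 1.5, 0.0, 0.01, 0.0])  # levitate + small torque
    print(coil_currents(A, wrench))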
Haptic System for Eyes Free and Hands Free Pedestrian Navigation BIBAKFull-Text 330-335
  Nehla Ghouaiel; Jean-Marc Cieutat; Jean-Pierre Jessel
Until now, Augmented Reality has mainly been associated with visual augmentation, often reduced to superimposing a virtual object onto the real world. We present in this document a vibro-tactile system called HaptiNav, which illustrates the concept of Haptic Augmented Reality. We use haptic feedback to send users information about their direction, thus enabling them to reach their destination. To do so, we use a turn-by-turn metaphor which consists of dividing the route into many reference points. To assess the performance of the HaptiNav system, we carried out an experimental study comparing it to both the Google Maps Audio and Pocket Navigator systems. The results show no significant difference between HaptiNav and Google Maps Audio in terms of performance, physical load, and time. However, statistical analysis of mental load, frustration, and effort highlights the advantages of HaptiNav over the two other systems. In the light of the results obtained, we present possible improvements to HaptiNav and describe its second prototype at the end of this paper.
Keywords: haptic navigation; augmented reality; mobile computing; human computer interaction
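As an illustration of the turn-by-turn metaphor (assumed logic, not HaptiNav's code), the sketch below computes the bearing from the user's position to the next reference point and maps the heading error to a left/right/ahead vibration cue.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing to the next waypoint, degrees from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360.0

    def vibration_cue(heading_deg, target_bearing_deg, tolerance=15.0):
        """Map heading error to a tactile cue: 'ahead', 'left', or 'right'."""
        err = (target_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
        if abs(err) <= tolerance:
            return "ahead"                       # e.g., short confirming pulse
        return "right" if err > 0 else "left"    # e.g., vibrate right/left actuator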
Haptic Mouse -- Enabling Near Surface Haptics in Pointing Interfaces BIBAKFull-Text 336-341
  Kasun Karunanayaka; Sanath Siriwardana; Chamari Edirisinghe; Ryohei Nakatsu; Ponnampalam Gopalakrishnakone
In this study, we introduce an innovative pointing interface for computers, which provides mouse functionality with near-surface haptics. It can also be configured as a haptic display, in which users can feel the basic geometrical shapes of the GUI by moving a finger over the device surface. These functionalities are attained by tracking the 3D position of a neodymium magnet using a grid of Hall effect sensors, and by generating like-polarity haptic feedback using an electromagnet array. Whereas in previous haptic mouse implementations haptic sensations were felt only on top of the buttons, this interface brings haptic sensations into the 3D space above the device.
Keywords: pointing interface; near surface haptic feedback; tactile display; tangible user interface
Fundamental Study to Consider for Advanced Interface in Grasping Movement BIBAKFull-Text 342-347
  Shunji Shimizu; Hiroaki Inoue; Noboru Takahashi
The analysis of human grasping movement is important for developing methodologies for controlling robots and for understanding human motion programs. In analyzing human grasping movement, it is advantageous to classify movements. In previous papers, classifications of grasping patterns were proposed according to posture, but no unified view has yet been reached among these classifications. The measured quantities in grasping have included only the posture of the hand and force and its distribution, and few classifications have been based on grasping force and its distribution. This paper first analyzes the effect of visual information on grasping movements, and then attempts to classify grasping movements broadly according to their purpose. As elements of the purposes of grasping movements, we considered movements that require attention, snapping, or adjustment of the wrist, and movements that do not require any special action to achieve their purpose. Secondly, we focus on tactile information used to predict movement when movement is limited. Finally, we discuss the relation between human brain activity and grasping movement in cognitive tasks.
Keywords: human hand; grasping force; grasping pattern; brain activity; NIRS

Social Aspects of Human-computer Interaction

Analyzing the Effects of Virtualizing and Augmenting Trading Card Game based on the Player's Personality BIBAKFull-Text 348-357
  Mizuki Sakamoto; Todorka Alexandrova; Tatsuo Nakajima
In this paper, we focus on the Trading Card Game (TCG), which can be played in two versions: one played with paper-based cards, and the other played on a computer. We discuss the reality and enjoyment lost when playing the computer-based TCG compared to the paper-based one. To analyze the virtuality in the computer game, we propose a scenario-based analysis grounded in the player's personality. We believe that the personality analysis described in the paper is useful for analyzing human social relationships in various games and social media, and that it clarifies, for each type of personality, the possible obstacles and reasons for dissatisfaction in using social media and playing computer-based games. Our study also claims that computer technologies can solve the problems caused by the virtuality introduced in a game by using augmented reality techniques. We present the Augmented Trading Card Game (Augmented TCG) and describe the results of experiments with it that show how some problems and pitfalls of remote trading card game play on the Internet could be alleviated. We believe that the case study and the scenario analysis given in this paper will be useful for improving current realizations of games.
Keywords: Virtualization and augmentation; Personality; Game design; Augmented reality; Scenarios-based analysis
Influence of Relationship between Game Player and Remote Player on Emotion BIBAKFull-Text 358-363
  Masashi Okubo; Tsubasa Yamashita; Mamiko Sakata
The main trend in video game playing is the move from competitive to collaborative styles. Additionally, the game system is no longer confined to a stand-alone video game machine, but is opened to other players through the Internet, so the competitors and cooperators in the video game's world are not only programs but also human beings. Over the Internet, players compete for a goal with their competitors in the competitive style of game, and pursue their goal together with their partners in the collaborative style. Because the game player cannot predict a partner's or enemy's behavior, the interest of the game depends not only on the game itself but also on the partners' and competitors' behaviors. In this paper, we investigate the influence of a partner or competitor on the performance and state of mind of the game player. Another aspect of game playing is discussed by Mihaly Csikszentmihalyi, who identifies 'Flow', in which a person is fully immersed in a feeling of energized focus, as a central experience of enjoyment. It is said that a person can feel Flow when they recognize that their skill is just sufficient to accomplish the task. We investigate the game player's state of mind based on Flow theory. The results of our experiments show that the player tends to feel good when the performance of their partner or enemy is almost as high as the player's own skill. These experiments were performed with participants playing various video games; however, we think that the results may be applied to the many and varied systems which support human motivation.
Keywords: social game; video game; flow theory; collaboration; competition
Reducing the User Burden of Identity Management: A Prototype Based Case Study for a Social-Media Payment Application BIBAKFull-Text 364-370
  Till Halbach Røssvoll; Lothar Fritsch
Payment applications inside social media that deal with privacy- and security-sensitive content require, besides trust in the involved parties such as financial institutions and providers of electronic identities, in particular the trust of the users. The e-Me project focuses on this trust and aims at providing multimodal, adaptive authentication and authorization methods for social media that are usable for all users. In an integrated social-payment application connected to online banking, an OpenID provider has been developed by means of inclusive identity-management methods. The provider is used for both social-media access control and the embedded payment service. This work describes the design decisions and the eventual design of the prototypes, with considerations concerning both e-inclusion and information security and privacy.
Keywords: Trust, security, privacy, identity management; e-inclusion, accessibility, usability, universal design; social media/networking applications

User Modeling and User Focus I

Heads Up: Using Cognitive Mapping to Develop a Baseline Description for Urban Visualization BIBAKFull-Text 371-376
  Ginette Wessel; Elizabeth Unruh; Eric Sauda
Kevin Lynch's work on urban legibility has taken on new importance as the delivery of information about cities has shifted largely to mobile computing devices. This study extends his work with the aim of quantifying the number and type of elements that constitute a competent cognitive map of a city. We conducted a user study of 109 student sketch maps of Chicago that test the frequency and nature of the elements identified by Lynch (path, edge, district, node and landmark), their interrelationship and the effect of gender, prior experience and scale. We find that (1) participants identify two distinct urban scales, one at the neighborhood level and the other citywide, (2) competent cognitive maps involve relatively small numbers of elements: 15 (+/-7), (3) the selection of elements for the sketch map may include any of the elements identified by Lynch, but the frequency of landmarks and districts is negatively correlated, (4) participants recall significantly more districts and nodes at the citywide level, and (5) in addition to Lynch's identification of physical landmarks, participants also identify landmarks by function; such functional landmarks are more frequent at the neighborhood level.
Keywords: urban legibility, cognitive mapping, urban visualization
User Support System for Designing Decisional Database BIBAKFull-Text 377-382
  Fatma Abdelhédi; Gilles Zurfluh
The design of a multidimensional schema is usually performed by a specialist (a computer scientist), who determines the facts and axes of analysis according to a data-driven, requirement-driven, or hybrid approach. Such an approach assumes that the decision maker expresses analysis needs, more or less formally, and communicates them to the computer scientist. We propose that the multidimensional schema be designed by the decision maker himself, following a hybrid approach. Through an assistance process that successively displays intermediate schemas derived from the sources, the decision maker gradually builds his multidimensional schema, determining the measures studied, the analysis dimensions, and the hierarchies within dimensions. A software tool named SelfStar based on this principle has been developed and validated with decision makers.
Keywords: Multidimensional model; design process; decisional database; decision-makers' requirements; data-source
Information Needs Of Chinese Mobile Internet Users: A User Study BIBAKFull-Text 383-388
  Yanxia Yang; Grace Deng
This paper investigates the information needs of Chinese mobile Internet users in their fast-paced environment. Mobile Internet users who have grown up in such an environment have different interests, ways of using the Internet, and information needs. To obtain a better understanding of Chinese mobile Internet user behavior, a web survey was conducted in Xi'an, China, following a previous ethnographic study. With the pervasiveness of smartphones, people from different age groups have joined the mobile Internet market, and the results of this study illustrate the varying user behaviors on the mobile Internet across age groups. The study revealed that in the mobile Internet era, Chinese users are more concerned with information relating to safety and love/belonging needs, advancing towards the next level in Maslow's Hierarchy of Needs.
Keywords: Internet; mobile Internet; interests; user behavior; web survey; mobile applications
Emotion Recognition using Autonomic Nervous System Responses BIBAKFull-Text 389-394
  Byoung-Jun Park; Eun-Hye Jang; Sang-Hyeob Kim; Chul Huh; Myoung-Ae Chung; Jin-Hun Sohn
In recent HCI research, emotion recognition is one of the core processes for implementing emotional intelligence, and many studies use physiological signals to recognize human emotions. The purpose of this study is to recognize emotions using the autonomic nervous system responses induced by three different emotions (boredom, pain, and surprise). The three emotional states are evoked by emotional stimuli; physiological signals (EDA, ECG, PPG, and SKT) are measured as reactions to the stimuli; and 27 features are extracted from these signals for emotion recognition. The stimuli, audio-visual film clips captured from movies, documentaries, and TV shows, were tested for appropriateness and effectiveness, with appropriateness ratings of 86%, 97.3%, and 94.1% for boredom, pain, and surprise, respectively, and effectiveness ratings of 5.23 for happiness, 4.96 for pain, and 6.12 for surprise (7-point Likert scale). For recognition of the three emotions, we propose a fuzzy c-means clustering-based neural network using the physiological signals. The proposed model consists of three layers: input, hidden, and output. The fuzzy c-means clustering method, two types of polynomials, and a linear combination function are used as kernel functions in the input, hidden, and output layers of the network, respectively. To evaluate the emotion-recognition performance of the proposed model, we use 10-fold cross validation; a comparative analysis shows that the proposed model exhibits higher accuracy than other models in the literature.
Keywords: emotion; recognition; stimuli; physiological signal; autonomic nervous system responses; neural networks
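The model above builds on fuzzy c-means clustering in its input layer. As a self-contained illustration of that clustering step only (not the authors' full three-layer model), here is a minimal fuzzy c-means implementation in NumPy; the 27-feature input and three clusters mirror the abstract, while the initialization and iteration count are arbitrary.

    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means: returns (cluster centers, memberships)."""
        rng = np.random.default_rng(seed)
        n = len(X)
        U = rng.random((c, n))
        U /= U.sum(axis=0)                      # memberships sum to 1 per sample
        for _ in range(iters):
            Um = U ** m                          # fuzzified memberships
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            # distance of every sample to every center, (c, n)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
            U /= U.sum(axis=0)
        return centers, U

    # Placeholder data shaped like the abstract's features: 27 per trial.
    X = np.random.default_rng(1).normal(size=(200, 27))
    centers, U = fuzzy_c_means(X, c=3)           # three emotion clusters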
Classification of Human Emotions from Physiological signals using Machine Learning Algorithms BIBAKFull-Text 395-400
  Eun-Hye Jang; Byoung-Jun Park; Sang-Hyeob Kim; Myoung-Ae Chung; Mi-Sook Park; Jin-Hun Sohn
Emotion recognition is one of the key steps towards emotional intelligence in advanced human-machine interaction. Recently, emotion recognition from physiological signals has been performed with various machine learning algorithms, as physiological signals are important for the emotion recognition abilities of human-computer systems. The purpose of this study is to classify three different emotional states (boredom, pain, and surprise) from physiological signals using several machine learning algorithms, and to identify the optimal algorithm for classifying these emotions. 217 subjects participated in the experiment. Emotional stimuli designed to induce the three emotions were presented to subjects, and physiological signals were measured for 1 minute as a baseline and for 1-1.5 minutes during the emotional states. The signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 parameters were extracted. For classification of the three emotions, the machine learning algorithms Decision tree, k-NN (k-nearest neighbor), LDA (linear discriminant analysis), and SVM (support vector machine) were applied to the difference values obtained by subtracting the baseline from the emotional-state parameters. Classification accuracy was 74.9% with LDA; with Decision tree, accuracy for recognizing all emotions was 67.8%; for k-NN and SVM, classification accuracy was 62.0%. The results show that LDA is the best of these algorithms for classifying pain, surprise, and boredom. This offers a better chance of recognizing emotions other than the basic human emotions, and supports a more accurate and deeper understanding of emotional interactions between man and machine based on physiological signals.
Keywords: pain; surprise; boredom; physiological signals; machine learning algorithm
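For readers who want to reproduce the general shape of this comparison, the sketch below runs the four named classifiers on baseline-subtracted features using scikit-learn. The data are random placeholders, the 10-fold cross-validation is an assumption (the abstract does not state the validation scheme), and feature extraction from the raw signals is outside the sketch's scope.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    # Placeholder arrays shaped like the abstract's data: 217 trials x 27 features.
    rng = np.random.default_rng(0)
    X_emotion = rng.normal(size=(217, 27))
    X_baseline = rng.normal(size=(217, 27))
    y = rng.choice(["boredom", "pain", "surprise"], size=217)

    X = X_emotion - X_baseline   # baseline-subtracted features, as in the abstract

    models = {
        "DecisionTree": DecisionTreeClassifier(random_state=0),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": SVC(kernel="rbf"),
    }
    for name, clf in models.items():
        acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=10)
        print(f"{name}: mean accuracy {acc.mean():.3f}")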

User Modeling and User Focus II

Filling the User Skill Gap Using HCI Techniques to Implement Experimental Protocol on Driving Simulators BIBAKFull-Text 401-406
  Ghasan Bhatti; Roland Bremond; Jean-Pierre Jessel; Nguyen-thong Dang; Fabrice Vienne; Guillaume Millet
Programming activities are performed not only by programmers but also by end-users in order to support their primary goals in different domains and applications. End-users do not have formal training in programming, so interaction environments and systems are needed that account for user skills. The objective of our work is to fill the gap between users' skills and the goals they want to achieve using driving simulators. This paper presents the results of research in which we propose a solution for the primary users of a driving simulator to design and implement experimental protocols. Using a user-centered design (UCD) approach, we conducted a user survey and proposed a solution in which the driving simulator interface is divided into three sub-interfaces based on the users' skills: an Experiment Builder (for non-technical persons), a Template Builder (for technical persons), and an Experiment Interface (for any user, to run an experiment). A prototype based on this concept was developed and feedback was collected from end-users. Our results indicate that, with our proposed design, users can implement an experimental protocol without programming skills.
Keywords: Experimental protocol; User-Centered Design; HCI Techniques; Scenario modeling; Driving simulators
Instrumentation and Features Selection Using a Realistic Car Simulator in Order to Perform Efficient Single-User Drunkenness Analysis BIBAKFull-Text 407-412
  Audrey Robinel; Didier Puzenat
We instrumented a car simulator by gathering low-level data and fed the data to an artificial neural network in order to estimate blood alcohol content (BAC). The results depend on the quality of data extraction and processing, and also on the selected inputs. We explain our data extraction and processing methodology, and how we used it to generate reliable and comparable features. Finally, we describe the performance of individual features and how they combine. In the end, the prototype was able to accurately estimate the BAC of a subject after being trained with driving samples from that subject at various BAC values.
Keywords: Blood Alcohol Content; Driving; Interface; Artificial Neural Networks; Intelligent systems; Machine learning; Instrumentation
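The abstract does not specify the network architecture, so the following regression sketch stands in for the idea only: simulator-derived features in, per-subject BAC estimate out. All data, feature names, and hyperparameters are placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Placeholder single-user data: rows are driving samples, columns are
    # hypothetical features (e.g., steering variance, lane deviation, ...).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 8))
    bac = rng.uniform(0.0, 1.2, size=300)   # g/L, hypothetical training labels

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(X[:250], bac[:250])            # train on earlier samples
    print(model.predict(X[250:255]))         # estimate BAC for new samples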
Adaptive Simulation of Monitoring Behavior: The Adaptive Information Expectancy Model BIBAKFull-Text 413-419
  Bertram Wortelen; Andreas Lüdtke
Human attention is a fundamental but limited resource. Especially when performing safety-critical tasks, a suitable distribution of attention is essential for safe operation; for example, changes in task-relevant information have to be recognized in time in order to react adequately. This paper presents the Adaptive Information Expectancy (AIE) model, which simulates the scheduling of attention within cognitive architectures and can be used for model-based evaluations of interactive human-machine systems. Results of a first evaluation study based on a simple laboratory monitoring task are shown. An overview of the AIE model is given, and it is shown how it was integrated in the Cognitive Architecture for Safety Critical Task Simulation (CASCaS). A formal model of the laboratory task was developed and then simulated using CASCaS. Several aspects of the AIE model are evaluated on the basis of simulations of this agent in two main steps. The first step compares the agent's behavior with results from the studies conducted by Senders, comparing two alternative AIE model variants to participants' behavior. The second step explores the parameter sensitivity and convergence behavior of the model.
Keywords: event expectancy; cognitive model; attention allocation; monitoring behavior
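The AIE model itself is defined within CASCaS and is not reproduced here. As a toy illustration of expectancy-driven attention allocation in the spirit of Senders' monitoring studies, the sketch below samples fixations in proportion to each information source's learned event rate times its value; all names and numbers are invented.

    import random

    # Toy expectancy-weighted attention scheduler (illustrative; not the AIE model).
    sources = {
        "instrument_A": {"rate": 2.0, "value": 1.0},   # events/sec (assumed)
        "instrument_B": {"rate": 0.5, "value": 3.0},
        "instrument_C": {"rate": 1.0, "value": 1.0},
    }

    def next_fixation():
        """Pick the next gaze target with probability ~ rate * value."""
        weights = {s: p["rate"] * p["value"] for s, p in sources.items()}
        names, w = zip(*weights.items())
        return random.choices(names, weights=w, k=1)[0]

    def update_rate(source, observed_events, dt, alpha=0.1):
        """Adapt the expectancy online from observed event frequency."""
        est = observed_events / dt
        sources[source]["rate"] += alpha * (est - sources[source]["rate"])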
Shape Modeling: From Linear Anthropometry to Surface Model BIBAKFull-Text 420-425
  Ameersing Luximon; Huang Chao
Due to technological innovations, there is a shift from linear anthropometric measures to surface models. This is essential since body surface information is needed in medical, archaeological, forensic, and many other disciplines. As a result, there is a need to shift from linear anthropometry tables to surface anthropometry databases. This study provides a general modelling technique to convert linear anthropometry to complex surface anthropometry using a recursive regression equations technique (RRET) and a scaling technique. To build the model, scanned data of a 3D body part for a selected population is needed. After scanning, the parts are aligned along a reference axis before cross-sections perpendicular to that axis are extracted. Using the cross-sections, a 'standard' part is computed by an averaging method. The 'standard' part provides the shape information of a given population and can thus be stored in a database for shape prediction. Using the recursive regression equations technique, regression equations are constructed based on selected anthropometric measures of each cross-section. During 3D shape prediction, only a few anthropometric measures are used, and the anthropometric measures of all other cross-sections are predicted recursively using the RRET equations. Eventually, the predicted measures are used to scale the 'standard' part to generate a predicted 3D shape of the selected part. Previous studies published by our team on foot shape modelling have given accuracies of about 2 mm. Thus, depending on the application, this technique can generate 3D shape from anthropometry and can be applied to reconstructive surgery, forensics, anthropology, design, psychology, and other fields.
Keywords: anthropometric; surface anthropometry; recursive regression equation
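As a rough sketch of the recursive step in RRET (not the authors' equations), the code below fits one linear regression per adjacent pair of cross-sections on a scan database, then predicts all remaining section measures from a single measured one; "girth" is a stand-in for whichever anthropometric measure is chosen per section.

    import numpy as np

    def fit_section_models(girths):
        """girths: (n_subjects, n_sections); one linear model per section pair."""
        models = []
        for i in range(girths.shape[1] - 1):
            a, b = np.polyfit(girths[:, i], girths[:, i + 1], 1)  # g[i+1] ~ a*g[i] + b
            models.append((a, b))
        return models

    def predict_recursively(g0, models):
        """Predict every section girth from the first, each feeding the next."""
        g = [g0]
        for a, b in models:
            g.append(a * g[-1] + b)
        return np.array(g)

    # The predicted girths then scale the 'standard' part section by section:
    # scale[i] = predicted_girth[i] / standard_girth[i].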

Interfaces I

A Modular Interface Design to Indicate a Robot's Social Capabilities BIBAKFull-Text 426-432
  Frank Hegel
This paper presents an intersection of human-like appearance, product design, and information design in order to systematically manipulate a robot's conceptual user interface design. The social robot 'Flobi' appears as an iconic cartoon-like character to mediate between users and application scenarios. Flobi's interface design consists of three visual dimensions that choreograph user expectations of the robot's capabilities, traits, and competences. First, the robot has dynamic facial features to display various emotional expressions. Second, the structural head design consists of exchangeable modular parts which are magnetically connected; through this modular design, Flobi's visual features (e.g., hairstyle, facial features) can easily be altered to create various characters. Third, different clothing will prospectively be used to signal the robot's social roles. All three dimensions of visual features are highly likely to affect the evaluation of Flobi's traits and capabilities.
Keywords: Social Robots, Industrial Design, Human Factors
Automatic Discrimination of Voluntary and Spontaneous Eyeblinks. Use of the blink as a switch interface BIBAKFull-Text 433-439
  Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Hironobu Sato; Shoichi Ohi
This paper proposes a method for the automatic detection and discrimination of eyeblinks for use with a human-computer interface. Blinks are cued by a sound signal, and when an eyeblink is detected its waveform is acquired from the change in the subject's eye aperture area. We compared voluntary and spontaneous blink parameters obtained in experiments and found that subjects' trends in the important feature parameters could be sorted into three types. As a result, automatic discrimination of voluntary and spontaneous eye blinking can be expected to be realized.
Keywords: Computer interface, Automatic discrimination, Voluntary eye blink, Spontaneous eye blink
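The discriminating feature parameters are the paper's contribution and are not given in the abstract; the toy sketch below only illustrates the pipeline shape, extracting a duration and an amplitude from an eye-aperture waveform and thresholding them. The thresholds, the normalized-aperture units, and the assumption that voluntary blinks are longer and deeper are illustrative only.

    # Toy voluntary-vs-spontaneous blink discriminator (illustrative thresholds).
    def blink_features(waveform, dt):
        """Duration (s) and depth of an eye-aperture dip.

        waveform: eye aperture area over one blink, normalized to [0, 1];
        dt: sampling interval in seconds.
        """
        baseline = max(waveform)
        depth = baseline - min(waveform)
        # samples below half depth approximate the closure duration
        below = [v for v in waveform if v < baseline - 0.5 * depth]
        return len(below) * dt, depth

    def classify(duration, depth, dur_thresh=0.25, depth_thresh=0.8):
        # Assumption: voluntary blinks tend to last longer and close more fully.
        if duration > dur_thresh and depth > depth_thresh:
            return "voluntary"
        return "spontaneous"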
Cursor Control Trace: Another look into eye-gaze, hand, and eye-hand pointing techniques BIBAKFull-Text 440-443
  Ricardo Sol; Mon-Chu Chen; José Carlos Marques
We analyzed cursor control traces with respect to three cursor control methods: eye-gaze, hand, and eye-hand. First, we examined the mechanism that allows users to control cursor positions with their hands and/or eyes. Second, we conducted an experiment in which subjects performed a searching, pointing, and selection task in three different conditions (eye-only, hand-only, and eye-hand). Third, we further studied the cursor traces and analyzed the moments when users switch between eye-gaze control and hand control. Although different from a simpler Fitts' pointing task, our results mostly corroborate previous work. In addition, the cursor trace analysis further shows why eye-hand is more efficient, and how users progress from an inefficient pointing behavior to an optimal one.
Keywords: pointing; accuracy; gaze; eye; tracking
A Hybrid Tracking Solution to Enhance Natural Interaction in Marker-based Augmented Reality Applications BIBAKFull-Text 444-453
  Rafael Radkowski; James Oliver
In this paper, a method for enhanced natural interaction in Augmented Reality (AR) applications is presented. AR applications are interactive applications designed to facilitate the handling of virtual objects, represented by a physical proxy object. Ideally, interaction should be natural, in that the user should not notice its technical realization. Interaction capability relies on tracking technology, which enables spatial registration of virtual objects in 3D; markers are a common solution for this. However, the marker must stay in the line of sight of a video camera, and in highly interactive applications the user's hands regularly cover the markers, making the virtual object disappear. This paper describes a hybrid tracking solution which combines marker-based tracking and optical flow-based tracking. The optical flow-based tracking supports the marker-based tracking: it seeks corners of the marker to keep track of them, and if no markers are visible, it extrapolates the position of the tracked object. Thus, the virtual object remains visible. A prototype implementation and an example application show the feasibility of the solution.
Keywords: augmented reality; hybrid tracking; interaction
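A minimal sketch of the described fallback logic using OpenCV's pyramidal Lucas-Kanade tracker: while the marker detector succeeds, its corners re-anchor the track; when the marker is occluded, optical flow extrapolates the corner positions. Here detect_marker() is a placeholder for any detector returning four corner points, not the authors' implementation.

    import cv2

    def track(frames, detect_marker):
        """Yield marker corners per frame, bridging occlusions with optical flow."""
        prev_gray, corners = None, None
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            detected = detect_marker(gray)          # (4,1,2) float32 array or None
            if detected is not None:
                corners = detected                  # marker visible: re-anchor
            elif corners is not None and prev_gray is not None:
                # Marker occluded (e.g., by the user's hand): follow the corners.
                corners, status, _ = cv2.calcOpticalFlowPyrLK(
                    prev_gray, gray, corners, None)
                if status.min() == 0:               # a corner was lost entirely
                    corners = None
            prev_gray = gray
            yield corners                           # pose estimation happens downstream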

Interfaces II

Evaluating Multi-Modal Eye Gaze Interaction for Moving Object Selection BIBAKFull-Text 454-459
  Jutta Hild; Elke Müller; Edmund Klaus; Elisabeth Peinsipp-Byma; Jürgen Beyerer
Moving object selection is a frequently occurring interaction task in expert video analysis, where video analysts currently use the mouse as the state-of-the-art input device. As the selection of moving objects is more complex than the selection of static objects, particularly when objects move fast and unpredictably, using the mouse is less efficient than usual and induces more manual stress. In this contribution, multi-modal gaze-based interaction is proposed as an alternative interaction technique for moving object selection. In an experiment using an abstract moving-circle scenario, two gaze-based interaction techniques, gaze + key press and liberal MAGIC pointing, are compared to mouse interaction. Evaluation of both user performance and user satisfaction shows that at least the gaze + key press technique is a promising interaction alternative for moving object selection.
Keywords: moving object selection; eye gaze interaction; multi-modal interaction; experiment; video analysis
Touch-Screens and Elderly users: A Perfect Match? BIBAKFull-Text 460-465
  Alma Leora Culén; Tone Bratteteig
This paper discusses some challenges in the use of touch-based screens by elderly adults. We focus primarily on touch-based interactions with personal artifacts such as smart phones and tablets, or touch-screens embedded in the home environment. We conducted several small studies as a prequel to a larger study of a "smart home" package designed and deployed in "care residences" for the elderly. We report findings from these studies and extend them into a more general discussion of the use of touch interfaces by the elderly. We discuss challenges related to the diversity of the elderly as a user group, and to progressive changes due to aging and their effects on the use of touch-screens. How can technology support mastery of daily life tasks through touch-based solutions that are at the same time easy to use, given that supporting mastery usually requires some level of skill? How should other modes of interaction be chosen when touch is not enough? Elderly people constitute a challenging and vulnerable user group that we want to strengthen and empower in the spirit of participatory design.
Keywords: Touch Interfaces; Design Method; Elderly Users; Multimodal Interactions
Basic Study for New Assistive System Based on Brain Activity during Car Driving BIBAKFull-Text 466-471
  Shunji Shimizu; Hiroaki Inoue; Hiroyuki Nara; Noboru Takahashi; Fumikazu Miwakeichi; Nobuhide Hirai; Senichiro Kikuchi; Eiju Watanabe; Satoshi Kato
The final purpose of this research is to contribute to the development of assistive robots and apparatus. As the population grows older, there is a pressing need to develop new systems which assist with, or take over, car and wheelchair driving for the elderly. In developing such a system, it is important to examine behaviors as well as spatial recognition. Therefore, experiments examining human spatial perception, especially right and left recognition, were performed during car driving using NIRS. Previous research documented significant differences in the dorsolateral prefrontal cortex of the left hemisphere during virtual and actual driving tasks. In this paper, brain activity during car driving was measured, and a detailed analysis of segmental brain activity was performed on the basis of subjects' motions. We report the relationship between brain activity and movement related to perception during driving.
Keywords: brain information processing during driving task; spatial cognitive task; determining direction; NIRS

Interaction Devices

Proposal of an Automobile Driving Interface Using Gesture Operation for Disabled People BIBAKFull-Text 472-478
  Yoshitoshi Murata; Kazuhiro Yoshida; Kazuhiro Suzuki; Daisuke Takahashi
A steering operation interface that uses gesture operation has been designed for disabled people. It incorporates both non-linear and semi-automatic steering control. Experiments using a gyro sensor and a driving simulator demonstrated that the driving operation is close to that achieved with conventional steering wheel operation. Sufficient practice with the proposed interface should therefore enable a user to achieve steering control closer to that achieved with a steering wheel.
Keywords: automobile driving interface, disabled people, gyro sensor, gesture operation, body part operation
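The abstract mentions non-linear steering control driven by a gyro-sensed gesture but not the control law itself; the sketch below shows one plausible shape for such a mapping, with an exponent that softens the response near neutral for fine control and saturates toward full lock at large tilts. The range and gain constants are invented.

    # Illustrative non-linear gesture-to-steering mapping (assumed curve,
    # not the paper's control law).
    MAX_STEER_DEG = 540.0     # hypothetical steering range (lock to lock / 2)
    GAIN = 2.0                # exponent > 1 flattens the response near zero

    def steering_angle(tilt_deg, max_tilt_deg=45.0):
        """Map a gyro-sensed body-part tilt to a steering angle."""
        x = max(-1.0, min(1.0, tilt_deg / max_tilt_deg))   # normalize tilt
        sign = 1.0 if x >= 0 else -1.0
        return MAX_STEER_DEG * (abs(x) ** GAIN) * sign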
The European MobileSage Project -- Situated Adaptive Guidance for the Mobile Elderly BIBAKFull-Text 479-482
  Till Halbach Røssvoll
MobileSage is an AAL project whose goal is to provide elderly people in particular with relevant and useful multimodal guidance on demand, depending on the context, in a personalized, adaptive, accessible, and usable manner. The project aims to increase the independence of elderly people, especially in the home environment and during travel. This work presents the project and its two main deliverables, discusses related research, summarizes preliminary results, and gives an outlook on anticipated results.
Keywords: AAL, mobile, application, assistance, help on demand, personalization, adaptive, accessible, usable, multimodal
e-Learning Environment with Multimodal Interaction: A proposal to improve the usability, accessibility and learnability of e-learning environments BIBAKFull-Text 483-487
  André da Silva; Heloísa da Rocha
Human-Computer Interaction is being challenged by the use of many modalities to interact with an application. The interfaces of e-Learning environments are being exposed to this diversity of modalities, but they are designed to be used with a limited set; the impact is that users have interaction problems caused by cross-modality. e-Learning environments need to evolve to allow users to interact through a broader range of interaction styles. One solution is to adopt Multimodal Interaction concepts, improving the usability and accessibility of the e-Learning environment and making it possible to embrace better learning contexts, a property that we define as learnability.
Keywords: Human-Computer Interaction; Interaction Styles; Multimodal Interaction; Electronic Learning Environment
A.M.B.E.R. Shark-Fin: An Unobtrusive Affective Mouse BIBAKFull-Text 488-495
  Thomas Christy; Ludmila I. Kuncheva
Analysing, measuring, recognising and exploiting emotion is an attractive agenda in designing computer games. The current devices for computing physiological modalities are usually awkward to wear or handle. Here we propose a Shark-fin Mouse which streams three signals in real time: pulse, electrodermal activity (EDA, also known as galvanic skin response or GSR) and skin temperature. All sensors are embedded into a fully functional computer mouse and positioned so as to ensure maximum robustness of the signals. No restriction of mouse operation is imposed apart from the user having to place the tip of their middle finger into the Shark-fin hub. Boundary tests and experiments with a simple bespoke computer game demonstrate that the Shark-fin Mouse is capable of streaming clean and useful signals.
Keywords: Interaction device; Affective gaming; Physiological sensors; Biometric feedback; Emotion
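The paper above concerns the sensing hardware; as an illustration of what a consumer of the streamed pulse signal might do with it, the sketch below estimates heart rate by threshold-crossing peak detection. The sampling rate, signal conditioning, and function names are assumptions, not part of the Shark-fin Mouse's software.

    import numpy as np

    def heart_rate_bpm(pulse, fs=100.0):
        """Estimate BPM from a pulse waveform via simple peak detection."""
        x = np.asarray(pulse, dtype=float)
        x -= x.mean()                            # remove DC offset
        thresh = 0.5 * x.max()
        # rising threshold crossings approximate one peak per beat
        crossings = np.flatnonzero((x[:-1] < thresh) & (x[1:] >= thresh))
        if len(crossings) < 2:
            return float("nan")                  # not enough beats in the window
        ibi = np.diff(crossings) / fs            # inter-beat intervals, seconds
        return 60.0 / np.median(ibi)             # median is robust to missed beats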