
VAMR 2013: 5th International Conference on Virtual, Augmented and Mixed Reality, Part I: Designing and Developing Augmented and Virtual Environments

Fullname: VAMR 2013: 5th International Conference on Virtual, Augmented and Mixed Reality, Part I: Designing and Developing Augmented and Virtual Environments
Note: Volume 18 of HCI International 2013
Editors: Randall Shumaker
Location: Las Vegas, Nevada
Dates: 2013-Jul-21 to 2013-Jul-26
Volume: 1
Publisher: Springer Berlin Heidelberg
Series: Lecture Notes in Computer Science 8021
Standard No: DOI: 10.1007/978-3-642-39405-8; hcibib: VAMR13-1; ISBN: 978-3-642-39404-1 (print), 978-3-642-39405-8 (online)
Papers: 43
Pages: 399
Links: Online Proceedings | Conference Webpage
  1. VAMR 2013-07-21 Volume 1
    1. Developing Augmented and Virtual Environments
    2. Interaction in Augmented and Virtual Environments
    3. Human-Robot Interaction in Virtual Environments
    4. Presence and Tele-presence

VAMR 2013-07-21 Volume 1

Developing Augmented and Virtual Environments

Passive Viewpoints in a Collaborative Immersive Environment BIBAKFull-Text 3-12
  Sarah Coburn; Lisa Rebenitsch; Charles Owen
Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.
Keywords: virtual reality; passive participation; immersion; perception
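
The entry above examines the distortion experienced by a passive viewer who sees the image rendered for the tracked user's position. A minimal sketch of one way to quantify that distortion, as the angular difference between the directions from which the active and passive viewers see the same scene point; the positions below are illustrative assumptions, not values from the paper.

  import numpy as np

  def angular_error_deg(scene_point, active_eye, passive_eye):
      """Angle between the directions under which the two viewers see the same point."""
      v_active = np.asarray(scene_point) - np.asarray(active_eye)
      v_passive = np.asarray(scene_point) - np.asarray(passive_eye)
      cosang = np.dot(v_active, v_passive) / (np.linalg.norm(v_active) * np.linalg.norm(v_passive))
      return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

  # Illustrative setup: a point 1 m behind the powerwall, active user centred,
  # passive viewer standing 0.8 m to the side.
  print(angular_error_deg(scene_point=(0.0, 1.6, -1.0),
                          active_eye=(0.0, 1.6, 2.0),
                          passive_eye=(0.8, 1.6, 2.0)))
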
Virtual Reality Based Interactive Conceptual Simulations BIBAKFull-Text 13-22
  Holger Graf; André Stork
This paper presents a new approach for the design and realization of a Virtual Reality (VR) based engineering front end that enables engineers to combine post-processing tasks and finite element methods for linear static analyses at interactive rates. "What-if scenarios" have become a widespread methodology in the CAE domain: designers and engineers interact with the virtual mock-up, change boundary conditions (BC), vary geometry or BCs, and simulate and analyze the impact on the CAE mock-up. The potential of VR for post-processing engineering data has inspired ideas to deploy it for interactive investigations at the conceptual stage. While this is a valid hypothesis, many challenges and problems remain due to the nature of the "change'n play" paradigm imposed by conceptual simulations, as well as the lack of accurate, interactive FEM procedures. Interactive conceptual simulations (ICS) require new FEM approaches in order to expose the benefit of VR-based front ends.
Keywords: Computer Aided Engineering; Interactive Conceptual Simulations; VR environments for engineering
Enhancing Metric Perception with RGB-D Camera BIBAKFull-Text 23-31
  Daiki Handa; Hirotake Ishii; Hiroshi Shimoda
Metric measurement of the environment plays a fundamental role in tasks such as interior design and plant maintenance. Conventional methods for these tasks suffer from high development cost or instability. We propose a mobile metric perception enhancement system that focuses on interactivity through user locomotion. The proposed system overlays geometric annotations in real time on a tablet device. The annotations are generated from an RGB-D camera on a per-frame basis, alleviating the object recognition problem by effectively utilizing the processing power of the human. We show a few illustrative cases in which the system is tested, and discuss the correctness of the annotations.
Keywords: Augmented Reality; Augmented Human; Mobile Device; RGB-D Camera; Geometric Annotation; Per-frame Processing
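
The entry above generates geometric annotations from an RGB-D camera on a per-frame basis. A minimal sketch of the underlying metric computation (back-projecting two picked depth pixels through a pinhole camera model and reporting the distance between them); the intrinsics and picked pixels are assumptions for illustration, not values from the paper.

  import numpy as np

  # Illustrative pinhole intrinsics (fx, fy, cx, cy); real values come from the RGB-D camera.
  FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

  def backproject(u, v, depth_m):
      """Convert a pixel (u, v) with depth in metres to a 3D point in camera coordinates."""
      x = (u - CX) * depth_m / FX
      y = (v - CY) * depth_m / FY
      return np.array([x, y, depth_m])

  def metric_distance(p1, d1, p2, d2):
      """Metric distance between two picked pixels, given their depths (single frame)."""
      a = backproject(p1[0], p1[1], d1)
      b = backproject(p2[0], p2[1], d2)
      return float(np.linalg.norm(a - b))

  # Example: two corners of a desk picked on screen, 1.2 m and 1.4 m away.
  print(metric_distance((100, 240), 1.2, (540, 240), 1.4))
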
Painting Alive: Handheld Augmented Reality System for Large Targets BIBAKFull-Text 32-38
  Jae-In Hwang; Min-Hyuk Sung; Ig-Jae Kim; Sang Chul Ahn; Hyoung-Gon Kim; Heedong Ko
This paper presents a handheld augmented reality (AR) system and an authoring method that bring large targets to life. General augmented reality tools are not designed for large targets, but only for targets of a size that fits within the screen. We therefore designed and built a vision-based AR system and an authoring method that can handle targets much larger than the view frustum.
Keywords: augmented reality
VWSocialLab: Prototype Virtual World (VW) Toolkit for Social and Behavioral Science Experimental Set-Up and Control BIBAKFull-Text 39-48
  Lana Jaff; Austen Hayes; Amy Ulinski Banic
There are benefits for social and behavioral researchers in conducting studies in online virtual worlds. However, learning scripting typically takes additional time, or money to hire a consultant. We propose a prototype Virtual World Toolkit to help researchers design, set up, and run experiments in virtual worlds with little coding or scripting experience needed. We explored three types of prototype designs, focusing on a traditional interface, and report pilot results. We also present results of an initial expert user study of our toolkit to determine its learnability, usability, and feasibility for conducting experiments. Results suggest that our toolkit requires little training and provides sufficient capabilities for a basic experiment. The toolkit received positive feedback from a number of expert users, who considered it a promising first version that lays the foundation for future improvements. This toolkit prototype contributes to enhancing researchers' capabilities in conducting social/behavioral studies in virtual worlds and will hopefully empower social and behavioral researchers by providing a toolkit prototype that requires less time, effort, and cost to set up stimulus-response types of human subject studies in virtual worlds.
Keywords: Virtual Humans; Online Virtual Worlds; Virtual Environments; Social Science; Psychology; Behavioral Science; Human Experiments; Toolkit; Evaluation; Prototype Experimental Testbed
Controlling and Filtering Information Density with Spatial Interaction Techniques via Handheld Augmented Reality BIBAKFull-Text 49-57
  Jens Keil; Michael Zoellner; Timo Engelke; Folker Wientapper; Michael Schmitt
In our paper we propose a method for contextual information filtering based on the user's movement and location, in order to enable intuitive usage of an "internet of things" via augmented reality (AR) without information overload. Similar to Charles and Ray Eames' "Powers of Ten" and Jef Raskin's "Zooming Interface", we display seamless information layers as the user simply moves around a Greek statue or a miniature model of an Ariane-5 space rocket. To do so, we employ concepts of camera- and motion-based interaction techniques and use the metaphors of "investigation" and "exploration" to control the way augmented and visually superimposed elements are presented, in order to mediate information in an enhanced and engaging manner with aspects of digital storytelling techniques.
Keywords: Adaptive and personalized interfaces; Human Centered Design; Information visualization; Interaction design; New Technology and its Usefulness
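
The entry above filters information layers based on the user's movement and distance to the exhibit, in the spirit of "Powers of Ten"-style zooming. A minimal sketch of such distance-gated layer selection; the thresholds and layer names are hypothetical.

  # Hypothetical distance thresholds (metres) -> information layer, coarse to fine.
  LAYERS = [
      (3.0, "overview"),     # far away: only the object's name and silhouette
      (1.5, "structure"),    # medium range: major components and labels
      (0.5, "detail"),       # close up: part annotations, materials, dates
      (0.0, "inspection"),   # very close: full metadata and storytelling elements
  ]

  def select_layer(user_pos, object_pos):
      """Pick the information layer to superimpose, based on the user's distance."""
      dx, dy, dz = (u - o for u, o in zip(user_pos, object_pos))
      dist = (dx * dx + dy * dy + dz * dz) ** 0.5
      for threshold, layer in LAYERS:
          if dist >= threshold:
              return layer
      return LAYERS[-1][1]

  print(select_layer((2.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # -> "structure"
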
Development of Multiview Image Generation Simulator for Depth Map Quantization BIBAKFull-Text 58-64
  Minyoung Kim; Ki-Young Seo; Seokhwan Kim; Kyoung Shin Park; Yongjoo Cho
This study presents a novel multiview image generation simulator system based on the Depth Image-Based Rendering (DIBR) technique. The system supports both actual photographs and computer graphics scenes. It also provides simple plug-ins for pre-processing of the depth map and for post-processing hole-filling algorithms. We intended this system as a platform for conducting various experiments, such as varying the number of cameras or the depth map precision. In this paper, we explain the design and development of the simulator and give a brief comparative evaluation of linear and non-linear depth quantization methods for computer graphics 3D scenes. The results showed that the non-linear depth quantization method performed better at 7- to 3-bit depth levels.
Keywords: Depth Image Based Rendering; Multiview System; Depth Map Quantization; Hole-Filling
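
The entry above compares linear and non-linear depth map quantization for DIBR-based multiview generation. The sketch below contrasts a linear mapping with an inverse-depth (disparity-like) non-linear mapping; the specific non-linear law is an assumption and not necessarily the one evaluated in the paper.

  import numpy as np

  def quantize_linear(depth, z_near, z_far, bits):
      """Linear quantization of depth to 2**bits levels."""
      levels = 2 ** bits - 1
      t = np.clip((depth - z_near) / (z_far - z_near), 0.0, 1.0)
      return np.round(t * levels).astype(np.uint16)

  def quantize_nonlinear(depth, z_near, z_far, bits):
      """Non-linear (inverse-depth) quantization: finer steps near the camera."""
      levels = 2 ** bits - 1
      t = (1.0 / depth - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
      return np.round(np.clip(t, 0.0, 1.0) * levels).astype(np.uint16)

  depth = np.linspace(0.5, 20.0, 5)               # metres
  print(quantize_linear(depth, 0.5, 20.0, 3))     # even steps over the whole range
  print(quantize_nonlinear(depth, 0.5, 20.0, 3))  # more levels spent on near depths
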
Authoring System Using Panoramas of Real World BIBAKFull-Text 65-72
  Hee Jae Kim; Jong Weon Lee
A panorama is a wide-angle view of the real world. Panoramas provide users with real-world information as a component of map services. Recently, researchers have tried to augment additional information on panoramas to extend their usefulness. However, existing research and applications provide users an inconsistent experience by augmenting information on a single panorama. To solve this inconsistency, we present an authoring system that helps users create content on panoramas. Users create content by augmenting virtual information on panoramas using the authoring system, which propagates virtual information augmented on one panorama to neighboring panoramas. The resulting content provides users consistent viewing experiences. Users can experience the content on their desktop, or they can view it on a smartphone display at locations near where the panoramas were captured.
Keywords: Panoramas; Authoring; Augmenting; Consistent Experience
Integrated Platform for an Augmented Environment with Heterogeneous Multimodal Displays BIBAKFull-Text 73-78
  Jaedong Lee; Sangyong Lee; Gerard Jounghyun Kim
With the recent advances and ubiquity of display systems, one may configure an augmented space with a variety of displays, such as 3D monitors, projectors, mobile devices, holographic displays, and even non-visual displays such as speakers and haptic devices. In this paper, we present a software support platform for representing and executing a dynamic augmented 3D scene with heterogeneous display systems. We extend the conventional scene graph so that a variety of modal display renderings (beyond just visual projection) can be supported. The execution environment supports multi-threading of the rendering processes for the multiple display systems and their synchronization. As the multiple, heterogeneous displays that represent a particular set of objects in the augmented environment are scattered throughout that environment, an additional perception-based spatial calibration method is also proposed.
Keywords: Augmented space; Extended scene graph; Multiple displays; Calibration; Floating image display
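
The entry above extends a conventional scene graph so that a single augmented scene can be rendered to heterogeneous displays (visual, audio, haptic). A minimal sketch of what per-modality render hooks on scene-graph nodes could look like; the node design and names are hypothetical, not the authors' API.

  class Node:
      """Scene-graph node carrying per-modality render hooks (hypothetical design)."""
      def __init__(self, name, transform=None):
          self.name = name
          self.transform = transform or [0.0, 0.0, 0.0]
          self.children = []
          self.renderers = {}          # modality -> callable(node)

      def attach(self, modality, renderer):
          self.renderers[modality] = renderer

      def render(self, modality):
          """Traverse the graph, invoking only the renderers for the requested modality."""
          hook = self.renderers.get(modality)
          if hook:
              hook(self)
          for child in self.children:
              child.render(modality)

  # Example: one object represented on a projector and on a speaker.
  statue = Node("statue")
  statue.attach("visual", lambda n: print(f"project {n.name} at {n.transform}"))
  statue.attach("audio",  lambda n: print(f"play ambience for {n.name}"))
  root = Node("root")
  root.children.append(statue)
  for modality in ("visual", "audio"):   # each display thread would call its own modality
      root.render(modality)
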
Optimal Design of a Haptic Device for a Particular Task in a Virtual Environment BIBAKFull-Text 79-85
  Jose San Martin; Loic Corenthy; Luis Pastor; Marcos Garcia
When creating a virtual-reality-based training environment that integrates one or several haptic devices, often the first choice to make is which device to use. This paper introduces an algorithm that allows us, for a particular task to be simulated in a virtual environment, to find key data for the design of an appropriate haptic device, or to select one in order to obtain optimum performance for that environment and that particular task.
Keywords: Virtual Reality; Haptics workspace; Manipulability; Optimal designing
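
The entry above searches for key design data of a haptic device for a given simulated task, with manipulability listed among the keywords. The sketch below evaluates Yoshikawa's manipulability measure, w = sqrt(det(J J^T)), over a few task poses of a planar two-link mechanism; the kinematics and poses are illustrative stand-ins, not the authors' device model.

  import numpy as np

  def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
      """Planar two-link Jacobian (illustrative stand-in for a haptic device's kinematics)."""
      s1, c1 = np.sin(q1), np.cos(q1)
      s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
      return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                       [ l1 * c1 + l2 * c12,  l2 * c12]])

  def manipulability(J):
      """Yoshikawa's measure: sqrt(det(J J^T)); larger means better conditioned motion."""
      return float(np.sqrt(np.linalg.det(J @ J.T)))

  # Sample joint configurations that cover the simulated task and keep the worst case.
  task_poses = [(0.3, 1.2), (0.6, 0.9), (1.0, 0.7)]
  scores = [manipulability(jacobian_2link(q1, q2)) for q1, q2 in task_poses]
  print(min(scores), max(scores))   # the minimum drives the device/placement choice
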
Real-Time Dynamic Lighting Control of an AR Model Based on a Data-Glove with Accelerometers and NI-DAQ BIBAKFull-Text 86-93
  Alex Rodiera Clarens; Isidro Navarro
The lighting of models displayed in Augmented Reality (AR) is one of the most studied techniques and is in constant development. Dynamic control of lighting by the user can improve the transmission of the information displayed and enhance understanding of the project or model presented. This project shows the development of dynamic lighting control based on a data-glove with accelerometers and an A/D NI-DAQ converter. The device transmits (wired or wirelessly) a signal to the AR software, simulating the keystrokes equivalent to the lighting control commands of the model. The system shows how quickly and easily the lighting of a model can be controlled in real time following user movements, generating great expectations for the transmission of information and dynamism in AR.
Keywords: Real-time lighting; NI-DAQ; Accelerometers; Xbee; Data-glove; augmented reality
Ultra Low Cost Eye Gaze Tracking for Virtual Environments BIBAKFull-Text 94-102
  Matthew Swarts; Jin Noh
In this paper we present an ultra-low-cost eye gaze tracker specifically aimed at studying visual attention in 3D virtual environments. We capture the camera view and the user's eye gaze for each frame and project vectors back into the environment to visualize where and what subjects view over time. Additionally, we show one method of calculating accuracy in 3D space by creating vectors from the stored data and projecting them onto a fixed sphere. The ratio of hits to non-hits provides a measure of the 3D sensitivity of the setup.
Keywords: Low Cost; Eye Tracking; Virtual Environments
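
The entry above measures 3D accuracy by projecting stored gaze vectors onto a fixed sphere and taking the ratio of hits to non-hits. A minimal sketch of that ray-sphere test and hit ratio; the sphere placement and sample gaze data are assumptions.

  import numpy as np

  def ray_hits_sphere(origin, direction, center, radius):
      """True if the gaze ray intersects the sphere (standard quadratic test)."""
      d = direction / np.linalg.norm(direction)
      oc = origin - center
      b = 2.0 * np.dot(d, oc)
      c = np.dot(oc, oc) - radius * radius
      disc = b * b - 4.0 * c
      return disc >= 0.0 and (-b + np.sqrt(disc)) >= 0.0   # intersection in front of the eye

  def hit_ratio(samples, center, radius):
      """samples: list of (camera_position, gaze_direction) recorded per frame."""
      hits = sum(ray_hits_sphere(np.asarray(o, float), np.asarray(g, float), center, radius)
                 for o, g in samples)
      return hits / len(samples)

  # Illustrative data: viewer at the origin looking roughly at a sphere 2 m ahead.
  sphere_c, sphere_r = np.array([0.0, 0.0, 2.0]), 0.15
  samples = [((0, 0, 0), (0.01, 0.0, 1.0)), ((0, 0, 0), (0.3, 0.0, 1.0))]
  print(hit_ratio(samples, sphere_c, sphere_r))   # 0.5 for these two samples
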
Real-Time Stereo Rendering Technique for Virtual Reality System Based on the Interactions with Human View and Hand Gestures BIBAKFull-Text 103-110
  Viet Tran Hoang; Anh Nguyen Hoang; Dongho Kim
This paper proposes methods for building a virtual reality system with stereo vision using simple and widely used 3D stereoscopic displays. We are motivated not only by 3D stereo display but also by realistically rendered scenes that pop out of the screen, forming an interactive system for human manipulation of virtual objects. The user of the system can observe the objects in the scene in 3D stereoscopy and manipulate them directly using hand gestures. We present a technique to render the 3D scene out of the screen and use a Kinect device to track the user's hand movements and to render the objects according to the user's view.
Keywords: virtual reality; stereoscopy; real-time rendering; head tracking
Information Management for Multiple Entities in a Remote Sensor Environment BIBAKFull-Text 111-117
  Peter Venero; Allen Rowe; Thomas Corretta; James Boyer
Current remotely piloted aircraft (RPA) operations typically have one sensor operator dedicated to a single sensor, but this may change in the future. To maintain a clear line of sight, the operator must know which sensor to switch to, especially for a moving target. We researched whether using augmented reality and presenting obstruction information helped operators maintain good situational awareness of sensor-target relationships. This study had two independent variables: predictive interface (three levels -- none, predictive only, and predictive with rays) and interface configuration (two levels -- with and without dedicated sensor screens). The results of this study showed that the predictive interface did not increase the operators' performance; however, their performance did increase when we added the dedicated screens.
Keywords: augmented reality; sensor management; RPA; control station

Interaction in Augmented and Virtual Environments

Tactile Apparent Motion Presented from Seat Pan Facilitates Racing Experience BIBAKFull-Text 121-128
  Tomohiro Amemiya; Koichi Hirota; Yasushi Ikei
When moving through the world, humans receive a variety of sensory cues involved in self-motion. In this study, we clarified whether a tactile flow created by a matrix of vibrators in a seat pan simultaneously presented with a car-racing computer game enhances the perceived forward velocity of self-motion. The experimental results show that the forward velocity of self-motion is significantly overestimated for rapid tactile flows and underestimated for slow ones, compared with only optical flow or non-motion vibrotactile stimulation conditions.
Keywords: Tactile flow; Optic flow; Multisensory integration
Predicting Navigation Performance with Psychophysiological Responses to Threat in a Virtual Environment BIBAKFull-Text 129-138
  Christopher G. Courtney; Michael E. Dawson; Albert A. Rizzo; Brian J. Arizmendi; Thomas D. Parsons
The present study examined the physiological responses collected during a route-learning and subsequent navigation task in a novel virtual environment. Additionally, participants were subjected to varying levels of environmental threat during the route-learning phase of the experiment to assess the impact of threat on consolidating route and survey knowledge of the directed path through the virtual environment. Physiological response measures were then utilized to develop multiple linear regression (MLR) and artificial neural network (ANN) models for prediction of performance on the navigation task. Comparisons of predictive abilities between the developed models were performed to determine optimal model parameters. The ANN models were determined to better predict navigation performance based on psychophysiological responses gleaned during the initial tour through the city. The selected models were able to predict navigation performance with better than 80% accuracy. Applications of the models toward improved human-computer interaction and psychophysiologically-based adaptive systems are discussed.
Keywords: Psychophysiology; Threat; Simulation; Navigation; Route-Learning
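
The entry above compares multiple linear regression (MLR) and artificial neural network (ANN) models for predicting navigation performance from psychophysiological responses. A generic sketch of that model comparison using scikit-learn on synthetic stand-in data; the feature set, network size, and data are assumptions, not the authors' materials.

  import numpy as np
  from sklearn.linear_model import LinearRegression
  from sklearn.neural_network import MLPRegressor
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import r2_score

  rng = np.random.default_rng(0)
  # Stand-in data: rows = participants, columns = physiological features
  # (e.g., skin conductance, heart rate) recorded during the route-learning tour.
  X = rng.normal(size=(80, 4))
  y = 0.6 * X[:, 0] - 0.3 * X[:, 2] ** 2 + rng.normal(scale=0.2, size=80)  # nonlinear target

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

  mlr = LinearRegression().fit(X_tr, y_tr)
  ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)

  print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
  print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))
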
A Study of Navigation and Selection Techniques in Virtual Environments Using Microsoft Kinect® BIBAKFull-Text 139-148
  Peter Dam; Priscilla Braz; Alberto Raposo
This work proposes and studies several navigation and selection techniques in virtual environments using Microsoft Kinect®. This device was chosen because it allows the user to interact with the system without need of hand-held devices or having a device attached to the body. This way we intend to increase the degree of virtual presence and, possibly, reduce the distance between the virtual world and the real world. Through these techniques we strive to allow the user to move and interact with objects in the virtual world in a way similar to how s/he would do so in the real physical world. For this work three navigation and three selection techniques were implemented. A series of tests were undertaken to evaluate aspects such as ease of use, mental effort, time spent to complete tasks, fluidity of navigation, amongst other factors for each proposed technique and the combination of them.
Keywords: 3D Interaction; Virtual Reality; Gesture Recognition; HCI
Legibility of Letters in Reality, 2D and 3D Projection BIBAKFull-Text 149-158
  Elisabeth Dittrich; Stefan Brandenburg; Boris Beckmann-Dobrev
Virtual prototypes are essential for engineers to understand the complex structures and arrangements of mechatronic products like automobiles. Currently, Virtual Environments (VEs) are used for visual analysis of and interaction with virtual models. In the coming years more supplementary information will be integrated into VEs, complementing the 3D model: names of single parts, corresponding materials, or masses. However, up to now there has been little explicit research on the psychological effects of additional text visualization in VEs. For example, it is unclear whether textual information can be visualized as it is on paper prints or on 2D displays. The current study empirically compares these different output media to derive rules for the visualization of text in 3D Virtual Environments. Results show that textual information has to be slightly enlarged for the 3D Virtual Environment. In addition, subjects performed better in conditions with projected textual information compared to real text.
Keywords: 2D and 3D text; Virtual Environments; legibility of letters; information visualization
The Visual, the Auditory and the Haptic -- A User Study on Combining Modalities in Virtual Worlds BIBAKFull-Text 159-168
  Julia Fröhlich; Ipke Wachsmuth
In order to make a step further towards understanding the impact of multi-modal stimuli in Virtual Reality we conducted a user study with 80 participants performing tasks in a virtual pit environment. Participants were divided into four groups, each presented a different combination of multi-sensory stimuli. Those included real-time 3D graphics, audio stimuli (ambient, static and event sounds), and haptics consisting of wind and tactile feedback when touching objects. A presence questionnaire was used to evaluate subjectively reported presence on the one hand, and on the other physiological sensors were used to measure heart rate and skin conductance as an objective measure. Results strongly indicate that an increase of modalities does not automatically result in an increase of presence.
Keywords: Presence; User Study; Multi-modal Feedback; Virtual Reality
Spatial Augmented Reality on Person: Exploring the Most Personal Medium BIBAKFull-Text 169-174
  Adrian S. Johnson; Yu Sun
Spatial Augmented Reality (SAR) allows users to collaborate without the need for see-through screens or head-mounted displays. We explore natural on-person interfaces using SAR. Spatial Augmented Reality on Person (SARP) leverages self-based psychological effects such as Self-Referential Encoding (SRE) and ownership by intertwining augmented body interactions with the self. Applications based on SARP could provide powerful tools in education, health awareness, and medical visualization. The goal of this paper is to explore the benefits and limitations of generating ownership and SRE using the SARP technique. We implement a hardware platform which provides a Spatial Augmented Game Environment to allow SARP experimentation. We test a STEM educational game entitled 'Augmented Anatomy', designed for our proposed platform, with experts and a student population in the US and China. Results indicate that learning anatomy on-self does appear correlated with increased interest in STEM and is rated more engaging, effective, and fun than textbook-only teaching of anatomical structures.
Keywords: spatial augmented reality; self-referential encoding; education
Parameter Comparison of Assessing Visual Fatigue Induced by Stereoscopic Video Services BIBAKFull-Text 175-183
  Kimiko Kawashima; Jun Okamoto; Kazuo Ishikawa; Kazuno Negishi
A number of three-dimensional (3D) video services have already been rolled out over IPTV. In 3D video services there are concerns about visual fatigue, so evaluation of the visual fatigue induced by video compression and delivery factors is necessary to guarantee the safety of 3D video services. To develop an assessment method for visual fatigue, we conducted evaluation experiments designed for 3D videos in which the quality of the left and right frames differs due to encoding. We explain the results of our visual fatigue evaluation experiments, that is, the results for specific parameters of biomedical visual fatigue assessment methods.
Keywords: 3D video; quality assessment; visual fatigue; encoding
Human Adaptation, Plasticity and Learning for a New Sensory-Motor World in Virtual Reality BIBAKFull-Text 184-191
  Michiteru Kitazaki
Human perception and action adaptively change depending on everyday experiences of, or exposures to, sensory information in changing environments. I aimed to understand how our perception-action system adapts and changes in modified virtual-reality (VR) environments, and investigated visuo-motor adaptation of position constancy in a VR environment, visual and vestibular postural control after 7-day adaptation to modified sensory stimulation, and learning of event-related cortical potentials during motor imagery for application to a brain-machine interface. I found that the human perception system, the perception-action coordination system, and the underlying neural systems can change to adapt to a new environment, taking into account the quantitative sensory-motor relationship, the reliability of information, and the learning required with real-time feedback. These findings may contribute to developing an adaptive VR system in the future, one that can change adaptively and cooperatively with human perceptual adaptation and neural plasticity.
Keywords: Adaptation; Plasticity; Position constancy; Galvanic vestibular stimulation; ERD/ERS
An Asymmetric Bimanual Gestural Interface for Immersive Virtual Environments BIBAFull-Text 192-201
  Julien-Charles Lévesque; Denis Laurendeau; Marielle Mokhtari
In this paper, a 3D bimanual gestural interface using data gloves is presented. We build upon past contributions on gestural interfaces and bimanual interactions to create an efficient and intuitive gestural interface that can be used in a wide variety of immersive virtual environments. Based on real world bimanual interactions, the proposed interface uses the hands in an asymmetric style, with the left hand providing the mode of interaction and the right hand acting on a finer level of detail. To validate the efficiency of this interface design, a comparative study between the proposed two-handed interface and a one-handed variant was conducted on a group of right-handed users. The results of the experiment support the bimanual interface as more efficient than the unimanual one. It is expected that this interface and the conclusions drawn from the experiments will be useful as a guide for efficient design of future bimanual gestural interfaces.
A New Approach for Indoor Navigation Using Semantic Web Technologies and Augmented Reality BIBAKFull-Text 202-210
  Tamás Matuszka; Gergo Gombos; Attila Kiss
Indoor navigation is an important research topic nowadays. The complexity of larger buildings, supermarkets, museums, etc. makes it necessary to use applications that facilitate orientation. While tried and tested solutions already exist for outdoor navigation, few reliable ones are available for indoor navigation. In this paper we investigate possible technologies for indoor navigation. Then, we present a general, cost-effective system as a solution. This system uses the advantages of the semantic web both to store data and to compute possible paths. Furthermore, it uses Augmented Reality techniques and a map view to provide interaction with users. We made a prototype based on a client-server architecture. The server runs in a cloud and provides the appropriate data to the client, which can be a smartphone or a tablet running the Android operating system.
Keywords: Indoor Navigation; Augmented Reality; Semantic Web; Ontology; Mobile Application
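
The entry above stores building knowledge with semantic web technologies and computes possible paths on the server. The sketch below illustrates the path computation over connectivity facts expressed as triples, using plain Dijkstra; the room names, distances, and triple layout are invented for illustration.

  import heapq
  from collections import defaultdict

  # Connectivity facts in (subject, predicate, object, weight) form, as they might be
  # retrieved from an RDF store; rooms and corridor lengths are purely illustrative.
  triples = [
      ("Entrance", "connectedTo", "Corridor1", 5.0),
      ("Corridor1", "connectedTo", "RoomA", 3.0),
      ("Corridor1", "connectedTo", "Corridor2", 8.0),
      ("Corridor2", "connectedTo", "RoomB", 2.0),
  ]

  graph = defaultdict(list)
  for s, _, o, w in triples:            # connectivity is symmetric for walking
      graph[s].append((o, w))
      graph[o].append((s, w))

  def shortest_path(start, goal):
      """Dijkstra over the room graph; returns the node sequence for AR guidance."""
      queue, seen = [(0.0, start, [start])], set()
      while queue:
          cost, node, path = heapq.heappop(queue)
          if node == goal:
              return cost, path
          if node in seen:
              continue
          seen.add(node)
          for nxt, w in graph[node]:
              if nxt not in seen:
                  heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
      return float("inf"), []

  print(shortest_path("Entrance", "RoomB"))   # (15.0, ['Entrance', 'Corridor1', 'Corridor2', 'RoomB'])
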
Assessing Engagement in Simulation-Based Training Systems for Virtual Kinesic Cue Detection Training BIBAKFull-Text 211-220
  Eric Ortiz; Crystal Maraj; Julie Salcedo; Stephanie Lackey; Irwin Hudson
Combat Profiling techniques strengthen a Warfighter's ability to quickly react to situations within the operational environment based upon observable behavioral identifiers. One significant domain-specific skill researched is kinesics, or the study of body language. A Warfighter's ability to distinguish kinesic cues can greatly aid in the detection of possible threatening activities or individuals with harmful intent. This paper describes a research effort assessing the effectiveness of kinesic cue depiction within Simulation-Based Training (SBT) systems and the impact of engagement levels upon trainee performance. For this experiment, live training content served as the foundation for scenarios generated using Bohemia Interactive's Virtual Battlespace 2 (VBS2). Training content was presented on a standard desktop computer or within a physically immersive Virtual Environment (VE). Results suggest that the utilization of a highly immersive VE is not critical to achieve optimal performance during familiarization training of kinesic cue detection. While there was not a significant difference in engagement between conditions, the data showed evidence to suggest decreased levels of engagement by participants using the immersive VE. Further analysis revealed that temporal dissociation, which was significantly lower in the immersive VE condition, was a predictor of simulation engagement. In one respect, this indicates that standard desktop systems are suited for transitioning existing kinesic familiarization training content from the classroom to a personal computer. However, interpretation of the results requires operational context that suggests the capabilities of high-fidelity immersive VEs are not fully utilized by existing training methodologies. Thus, this research serves as an illustration of technology advancements compelling the SBT community to evolve training methods in order to fully benefit from emerging technologies.
Keywords: Kinesic cues; Engagement; Simulation-Based Training
Development of Knife-Shaped Interaction Device Providing Virtual Tactile Sensation BIBAKFull-Text 221-230
  Azusa Toda; Kazuki Tanaka; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura
We have been developing "ToolDevice," a set of devices to help novice users perform various operations in a mixed reality (MR) space. ToolDevice imitates the familiar shapes, tactile sensations, and operational feedback sounds of hand tools used in everyday life. For example, we developed BrushDevice, KnifeDevice, TweezersDevice, and HammerDevice. Currently, KnifeDevice's force feedback is insufficient. This paper proposes a tactile feedback model for cutting a virtual object utilizing two vibration motors and the principle of phantom sensation. We built a prototype to implement the proposed feedback model, and confirmed the usability of our model through an experiment. Finally, we redesigned KnifeDevice and implemented the tactile sensation on the basis of the results of the experiment.
Keywords: Mixed Reality; ToolDevice; phantom sensation; tactile sensation
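
The entry above drives two vibration motors to produce a phantom (funneled) tactile sensation between them while cutting a virtual object. A common way to place the phantom location is to split the drive amplitudes according to the contact position along the blade; the energy-based split below is an assumed interpolation law, not necessarily the model proposed in the paper.

  def funneling_amplitudes(position, intensity=1.0):
      """
      position: 0.0 = contact at the motor near the knife tip, 1.0 = at the handle motor.
      Returns (tip_motor, handle_motor) drive amplitudes producing a phantom sensation
      located between the two physical motors (energy-based interpolation, assumed).
      """
      p = min(max(position, 0.0), 1.0)
      a_tip = intensity * (1.0 - p) ** 0.5
      a_handle = intensity * p ** 0.5
      return a_tip, a_handle

  # Cutting progress along a virtual object, mapped to the blade contact point.
  for pos in (0.0, 0.25, 0.5, 1.0):
      print(pos, funneling_amplitudes(pos))
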
GUI Design Solution for a Monocular, See-through Head-Mounted Display Based on Users' Eye Movement Characteristics BIBAKFull-Text 231-240
  Takahiro Uchiyama; Kazuhiro Tanuma; Yusuke Fukuda; Miwa Nakanishi
A monocular, see-through head-mounted display (HMD) enables users to view digital images superimposed on the real world. Because they are hands-free and see-through, HMDs are expected to be introduced in the industry as task support tools. In this study, we investigate how the characteristics of users' eye movements and work performance are affected by different brightness levels of images viewed with an HMD as the first step to establish a content design guideline for see-through HMDs. From the results, we propose specific cases based on the users' preferences for the brightness level of the image contents depending on the use of the HMD and the work environment. In one case, the users prefer low brightness levels, and in the other case, they prefer high brightness levels.
Keywords: Monocular; see-through head-mounted display; characteristics of users' eye movements; brightness of images
Visual, Vibrotactile, and Force Feedback of Collisions in Virtual Environments: Effects on Performance, Mental Workload and Spatial Orientation BIBAKFull-Text 241-250
  Bernhard Weber; Mikel Sagardia; Thomas Hulin; Carsten Preusche
In a laboratory study with N = 42 participants (thirty novices and twelve virtual reality (VR) specialists), we evaluated different variants of collision feedback in a virtual environment. Individuals had to perform several object manipulations (peg-in-hole, narrow passage) in a virtual assembly scenario with three different collision feedback modalities (visual vs. vibrotactile vs. force feedback) and two different task complexities (small vs. large peg or wide vs. narrow passage, respectively). The feedback modalities were evaluated in terms of assembly performance (completion time, movement precision) and subjective user ratings. Altogether, results indicate that high resolution force feedback provided by a robotic arm as input device is superior in terms of movement precision, mental workload, and spatial orientation compared to vibrotactile and visual feedback systems.
Keywords: Virtual environments; virtual prototyping; virtual assembly; haptic feedback; sensory substitution; usability; user study
Note: Best paper award

Human-Robot Interaction in Virtual Environments

What Will You Do Next? A Cognitive Model for Understanding Others' Intentions Based on Shared Representations BIBAKFull-Text 253-266
  Haris Dindo; Antonio Chella
Goal-directed action selection is the problem of deciding what to do next in order to progress towards goal achievement. This problem is computationally more complex in joint action settings, where two or more agents coordinate their actions in space and time to bring about a common goal: actions performed by one agent influence the action possibilities of the other agents, and ultimately the goal achievement. While humans apparently effortlessly engage in complex joint actions, a number of questions remain to be solved to achieve similar performance in artificial agents: How do agents represent and understand actions being performed by others? How does this understanding influence the choice of an agent's own future actions? How is the interaction process biased by prior information about the task? What is the role of more abstract cues such as others' beliefs or intentions?
   In the last few years, researchers in computational neuroscience have begun investigating how control-theoretic models of individual motor control can be extended to explain various complex social phenomena, including action and intention understanding, imitation and joint action. The two cornerstones of control-theoretic models of motor control are the goal-directed nature of action and a widespread use of internal modeling. Indeed, when the control-theoretic view is applied to the realm of social interactions, it is assumed that inverse and forward internal models used in individual action planning and control are re-enacted in simulation in order to understand others' actions and to infer their intentions. This motor simulation view of social cognition has been adopted to explain a number of advanced mindreading abilities such as action, intention, and belief recognition, often in contrast with more classical cognitive theories -- derived from rationality principles and conceptual theories of others' minds -- that emphasize the dichotomy between action and perception.
   Here we embrace the idea that implementing mindreading abilities is a necessary step towards a more natural collaboration between humans and robots in joint tasks. To collaborate efficiently, agents need to continuously estimate their teammates' proximal goals and distal intentions in order to choose what to do next. We present a probabilistic hierarchical architecture for joint action which takes inspiration from the idea of motor simulation above. The architecture models the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals, intentions and beliefs) at two deeply intertwined levels: at the lowest level, the same circuitry used to execute my own actions is re-enacted in simulation to infer and predict (proximal) actions performed by my interaction partner, while the highest level encodes more abstract task representations which govern each agent's observable behavior. Here we assume that the decision of what to do next can be taken by knowing 1) what the current task is and 2) what my teammate is currently doing. While these could be inferred via a costly (and inaccurate) process of inverting the generative model above, given the observed data, we show how our organization facilitates such an inferential process by allowing agents to share a subset of hidden variables, alleviating the need for complex inferential processes, such as explicit task allocation or sophisticated communication strategies.
Keywords: joint action; motor simulation; shared representations; human-robot collaboration
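
The entry above infers a partner's hidden goals and intentions from observed movements within a probabilistic hierarchical architecture. A minimal sketch of the core inference step, a Bayesian update of a belief over the teammate's goal given an observed action; the goals, actions, and likelihoods are invented for illustration.

  # Hypothetical goals and action likelihoods P(action | goal); numbers are illustrative.
  LIKELIHOOD = {
      "build_tower": {"reach_block": 0.7, "reach_tool": 0.2, "idle": 0.1},
      "fetch_tool":  {"reach_block": 0.1, "reach_tool": 0.8, "idle": 0.1},
  }

  def update_belief(prior, observed_action):
      """One Bayesian step: posterior(goal) is proportional to P(action | goal) * prior(goal)."""
      posterior = {g: LIKELIHOOD[g][observed_action] * p for g, p in prior.items()}
      total = sum(posterior.values())
      return {g: v / total for g, v in posterior.items()}

  belief = {"build_tower": 0.5, "fetch_tool": 0.5}            # uniform prior over intentions
  for action in ["reach_tool", "reach_tool", "reach_block"]:  # observed partner movements
      belief = update_belief(belief, action)
      print(action, belief)
  # The robot would choose its next action given the most probable partner goal.
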
Toward Task-Based Mental Models of Human-Robot Teaming: A Bayesian Approach BIBAFull-Text 267-276
  Michael A. Goodrich; Daqing Yi
We consider a set of team-based information tasks, meaning that the team's goals are to choose behaviors that provide or enhance information available to the team. These information tasks occur across a region of space and must be performed for a period of time. We present a Bayesian model for (a) how information flows in the world and (b) how information is altered in the world by the location and perceptions of both humans and robots. Building from this model, we specify the requirements for a robot's computational mental model of the task and the human teammate, including the need to understand where and how the human processes information in the world. The robot can use this mental model to select its behaviors to support the team objective, subject to a set of mission constraints.
Assessing Interfaces Supporting Situational Awareness in Multi-agent Command and Control Tasks BIBAKFull-Text 277-284
  Donald Kalar; Collin Green
Here, we describe our efforts to uncover design principles for multi-agent supervision, command, and control by using real-time strategy (RTS) video games as a source of data and an experimental platform. Previously, we have argued that RTS games are an appropriate analog for multi-agent command and control [3] and that publicly-available data from gaming tournaments can be mined and analyzed to investigate human performance in such tasks [5]. We outline additional results produced by mining public game data and describe our first foray into using RTS games as an experimental platform where game actions (e.g., clicks, commands) are logged and integrated with eye-tracking data (e.g., saccades, fixations) to provide a more complete picture of human performance and a means to assess user interfaces for multi-agent command and control. We discuss the potential for this method to inform UI design and analysis for these and other tasks.
Keywords: Situation Awareness; Automation; RTS; Gaze Tracking; User Interfaces
Cognitive Models of Decision Making Processes for Human-Robot Interaction BIBAKFull-Text 285-294
  Christian Lebiere; Florian Jentsch; Scott Ososky
A fundamental aspect of human-robot interaction is the ability to generate expectations for the decisions of one's teammate(s) in order to coordinate plans of actions. Cognitive models provide a promising approach by allowing both a robot to model a human teammate's decision process as well as by modeling the process through which a human develops expectations regarding its robot partner's actions. We describe a general cognitive model developed using the ACT-R cognitive architecture that can apply to any situation that could be formalized using decision trees expressed in the form of instructions for the model to execute. The model is composed of three general components: instructions on how to perform the task, situational knowledge, and past decision instances. The model is trained using decision instances from a human expert, and its performance is compared to that of the expert.
Keywords: Human-robot interaction; shared mental models; cognitive modeling; cognitive architectures; ACT-R; decision trees
Human Considerations in the Application of Cognitive Decision Models for HRI BIBAKFull-Text 295-303
  Scott Ososky; Florian Jentsch; Elizabeth Phillips
In order for autonomous robots to succeed as useful teammates for humans, it is necessary to examine the lens through which human users view, understand, and predict robotic behavior and abilities. To study this further, we conducted an experiment in which participants viewed video segments of a robot in a task-oriented environment and were asked to explain what the robot was doing and what it would likely do next. Results showed that participants' perceived knowledge of the robot increased with additional exposures over time; however, participant responses to open-ended questions about the robot's behavior and functions remained divergent over multiple scenarios. A discussion of the implications of apparent differences in human interpretation and prediction of robotic behavior and functionality is presented.
Keywords: human-robot interaction; mental models; perception of behavior
Computational Mechanisms for Mental Models in Human-Robot Interaction BIBAFull-Text 304-312
  Matthias Scheutz
Mental models play an important and sometimes critical role in human-human interactions, in particular in the context of human team tasks where humans need to interact with each other to achieve common goals. In this paper, we describe some of the challenges involved in developing general computational mechanisms for mental models and their applications in the context of human-robot interactions in mixed-initiative tasks.
Increasing Robot Autonomy Effectively Using the Science of Teams BIBAKFull-Text 313-320
  David Schuster; Florian Jentsch
Even as future robots grow in intelligence and autonomy, they may continue to face uncertainty in their decision making and sensing. A critical issue, then, is designing future robots so that humans can work with them collaboratively, thereby creating effective human-robot teams. Operators of robot systems can mitigate the problems of robot uncertainty by maintaining awareness of the relevant elements within the mission and their interrelationships, a cognitive state known as situation awareness (SA). However, as evidenced in other complex systems, such as aircraft, this is a difficult task for humans. In this paper, we consider how application of the science of human teaming, specifically task design and task interdependence in human teams, can be applied to human-robot teams and how it may improve human-robot interaction by maximizing situation awareness and performance of the human team member.
Keywords: Human-robot interaction; system design; situation awareness
Cybernetic Teams: Towards the Implementation of Team Heuristics in HRI BIBAKFull-Text 321-330
  Travis J. Wiltshire; Dustin C. Smith; Joseph R. Keebler
This paper examines a future embedded with "cybernetic teams": teams of physical, biological, social, cognitive, and technological components; namely, humans and robots that communicate, coordinate, and cooperate as teammates to perform work. For such teams to be realized, we submit that these robots must be physically embodied, autonomous, intelligent, and interactive. As such, we argue that use of increasingly social robots is essential for shifting the perception of robots as tools to robots as teammates and these robots are the type best suited for cybernetic teams. Building from these concepts, we attempt to articulate and adapt team heuristics from research in human teams to this context. In sum, research and technical efforts in this area are still quite novel and thus warranted to shape the teams of the future.
Keywords: Human-robot interaction; team heuristics; cybernetic teams; social robots

Presence and Tele-presence

Embodiment and Embodied Cognition BIBAKFull-Text 333-342
  Mark R. Costa; Sung Yeun Kim; Frank Biocca
Progressive embodiment and the subsequent enhancement of presence have been important goals of VR researchers and designers for some time (Biocca, 1997). Consequently, researchers frequently explore the relationship between increasing embodiment and presence yet rarely emphasize the ties between their work and other work on embodiment. More specifically, we argue that experiments manipulating or implementing visual scale, avatar customization, sensory enrichment, and haptic feedback, to name a few examples, all have embodiment as their independent variable. However, very few studies explicitly frame their work as an exploration of embodiment. In this paper we will leverage the field of Embodied Cognition to help clarify the concept of embodiment.
Keywords: human-computer interaction; presence; embodied cognition; virtual reality
DigiLog Space Generator for Tele-Collaboration in an Augmented Reality Environment BIBAKFull-Text 343-350
  Kyungwon Gil; Taejin Ha; Woontack Woo
Tele-collaboration can allow users to connect with a partner or their family in a remote place. Generally, tele-collaboration is performed in front of a camera and screen. Due to their fixed positions, these systems have limitations for users who are moving. This paper proposes an augmented-reality-based DigiLog Space Generator. It can generate a space of interest and combine remote spaces in real time while allowing user movement. Our system uses a reference object to calculate the scale and coordinates of the space. The scale and coordinates are saved in a database (DB) and used for realistic combination of spaces. The DigiLog Space Generator is applicable to many AR applications. We discuss the experiences and limitations of our system, and future research is also described.
Keywords: Augmented Reality; Tele-collaboration; Human-Computer Interaction
Onomatopoeia Expressions for Intuitive Understanding of Remote Office Situation BIBAKFull-Text 351-358
  Kyota Higa; Masumi Ishikawa; Toshiyuki Nomura
This paper proposes a system for intuitive understanding of a remote office situation using onomatopoeia expressions. An onomatopoeia (imitative word) is a word that imitates a sound or movement. This system detects office events such as "conversation" or "human movement" from the audio and video signals of a remote office, and converts them to onomatopoeia text. The onomatopoeia text is superimposed on the office image and sent to the remote office. By using onomatopoeia expressions, an office event such as "conversation" or "human movement" can be compactly expressed as just one word. Thus, people can instantly understand the remote office situation without having to watch the video for a while. Subjective experimental results show that ease of event understanding is statistically significantly improved by the onomatopoeia expressions compared to the video, at the 99% confidence level. We have developed a prototype system with two cameras and eight microphones, and exhibited it at an ultra-realistic communications forum in Japan. In the exhibition, the concept of this system was favorably received by visitors.
Keywords: onomatopoeia; audio/video signal; remote office situation; collaborative work
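
The entry above detects office events from audio and video and converts them to onomatopoeia text superimposed on the office image. A toy sketch of that mapping from a detected audio event to a short imitative word; the detector, thresholds, and word table are assumptions (the actual system presumably uses Japanese onomatopoeia and richer detection).

  import numpy as np

  # Hypothetical mapping from detected office events to onomatopoeia-style words.
  WORDS = {
      ("conversation", "quiet"): "murmur murmur",
      ("conversation", "loud"):  "chatter chatter",
  }

  def audio_event(samples, threshold=0.05):
      """Very rough event detector: frame RMS above the threshold counts as conversation."""
      rms = float(np.sqrt(np.mean(np.square(samples))))
      if rms < threshold:
          return None
      return ("conversation", "loud" if rms > 3 * threshold else "quiet")

  frame = 0.1 * np.sin(np.linspace(0, 200 * np.pi, 16000))   # stand-in microphone frame
  event = audio_event(frame)
  if event:
      print(WORDS[event])   # text to superimpose on the remote office image
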
Enhancing Social Presence in Augmented Reality-Based Telecommunication System BIBAKFull-Text 359-367
  Jea In Kim; Taejin Ha; Woontack Woo; Chung-Kon Shi
The main contribution of this paper is to examine a new application of augmented reality from a telecommunication point of view. We then present the case that the concept of social presence is an important cue for developing a telecommunication system based on augmented reality technology. The evaluation was conducted with 32 participants. According to the questionnaire results, the augmented-reality-based telecommunication system was rated better than a 2D-display-based telecommunication system. To develop our concept further, we should closely analyze communication patterns and improve our augmented-reality-based communication system.
Keywords: Telecommunication; Augmented Reality; Social Presence
How Fiction Informed the Development of Telepresence and Teleoperation BIBAKFull-Text 368-377
  Gordon M. Mair
This paper shows that many telepresence and teleoperation innovations and patents actually had their precursors in fiction and that this led the way for technological developments. This suggests justification for those companies that have invested, or are considering investing, in funding science fiction writers to provide future scenarios for their respective products and industries. The research leading to this conclusion has involved a search of patents, technical and scientific publications, and fictional works. The paper is mainly concerned with telepresence and teleoperation but aspects of virtual reality are included where the technological and literary concepts are relevant.
Keywords: Virtual reality; presence; teleoperation; science-fiction; telepresence history
High Presence Communication between the Earth and International Space Station BIBAKFull-Text 378-387
  Tetsuro Ogi; Yoshisuke Tateyama; Yosuke Kubota
In this study, in order to realize high-presence communication with an astronaut staying on the ISS, experiments on remote communication using the technologies of 2D/3D conversion, an immersive dome display, and space sharing among multiple sites were conducted. Biological information such as the electrocardiogram, thermal images, and eye movement was measured to evaluate the sense of presence, and users tended to feel a high presence sensation when experiencing the high-resolution three-dimensional stereo image. From these results, we can conclude that high-presence communication between the earth and the ISS was realized.
Keywords: Tele-immersion; High Presence Sensation; Biological Information; 2D/3D Conversion; International Space Station
Effects of Visual Fidelity on Biometric Cue Detection in Virtual Combat Profiling Training BIBAKFull-Text 388-396
  Julie Salcedo; Crystal Maraj; Stephanie Lackey; Eric Ortiz; Irwin Hudson; Joy Martinez
Combat Profiling involves observation of humans and the environment to identify behavioral anomalies signifying the presence of a potential threat. Desires to expand accessibility to Combat Profiling training motivate the training community to investigate Virtual Environments (VEs). VE design recommendations will benefit efforts to translate Combat Profiling training methods to virtual platforms. Visual aspects of virtual environments may significantly impact observational and perceptual training objectives. This experiment compared the effects of high and low fidelity virtual characters for biometric cue detection training on participant performance and perceptions. Results suggest that high fidelity virtual characters promote positive training perceptions and self-efficacy, but do not significantly impact overall performance.
Keywords: Biometric Cue Detection; Visual Fidelity; Virtual Training