
VAMR 2015: 7th International Conference on Virtual, Augmented and Mixed Reality

Fullname: VAMR 2015: 7th International Conference on Virtual, Augmented and Mixed Reality
Note: Volume 11 of HCI International 2015
Editors: Randall Shumaker; Stephanie Lackey
Location: Los Angeles, California
Dates: 2015-Aug-02 to 2015-Aug-07
Publisher: Springer International Publishing
Series: Lecture Notes in Computer Science 9179
Standard No: DOI: 10.1007/978-3-319-21067-4; hcibib: VAMR15; ISBN: 978-3-319-21066-7 (print), 978-3-319-21067-4 (online)
Papers: 54
Pages: 546
Links: Online Proceedings | Conference Website
  1. User Experience in Virtual and Augmented Environments
  2. Developing Virtual and Augmented Environments
  3. Agents and Robots in Virtual Environments
  4. VR for Learning and Training
  5. VR in Health and Culture
  6. Industrial and Military Applications

User Experience in Virtual and Augmented Environments

Design of the Augmented Reality Based Training System to Promote Spatial Visualization Ability for Older Adults BIBAKFull-Text 3-12
  Kuo-Ping Chang; Chien-Hsu Chen
In this paper, we present the design of a spatial visualization training system implemented with augmented reality (AR). Spatial visualization is the ability to mentally transform complex stimuli in space. This ability declines with age, causing spatial difficulties in everyday life. Based on the fact that an AR interface can reduce cognitive load and provide correct spatial information, we design an AR spatial visualization training system for older adults. The system consists of a manual controller and a visualization training task. In designing the manual controller, a think-aloud experiment is used to derive intuitive manipulations, and morphological analysis is used to select the most elderly-friendly controller. In designing the training task, a new visualization training task is derived from an analysis of spatial training factors. Finally, the system is implemented with Qualcomm AR in Unity3D using the Vuforia portal, completing the AR-based spatial visualization training system.
Keywords: Augmented reality; Spatial visualization ability; Elderly
The Effectiveness of Virtual Reality for Studying Human Behavior in Fire BIBAKFull-Text 13-21
  Xinxin Feng; Rongzhen Cui; Jiabao Zhao
In this study, a virtual environment simulating fire conditions was designed and implemented to support research on human behavior under anxiety. The results gathered from this experimental platform were compared with data from a real fire condition to verify the validity of the information the virtual platform provides. The correlation coefficient is 0.9958, which indicates that the simulation system is highly practical for research on human behavior under stress. We conclude that a virtual environment based on a CAVE display system is suitable for simulating fire conditions.
Keywords: Fire condition; Virtual reality; Environmental stress
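As an illustrative aside (not taken from the paper itself), the validity check described in the abstract amounts to computing a Pearson correlation between measurements from the virtual platform and from a real fire condition. The sketch below uses hypothetical data; the function name and values are illustrative only:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: evacuation times (seconds) for five scenarios,
# measured once in the virtual platform and once in a real fire drill.
virtual = [42.0, 55.0, 61.0, 70.0, 88.0]
real = [41.0, 56.0, 60.0, 71.0, 89.0]
r = pearson_r(virtual, real)  # close to 1: simulator tracks reality well
```

A coefficient near 1, like the 0.9958 the paper reports, indicates that the virtual platform reproduces the pattern of the real-world measurements closely.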
Pilot Study for Telepresence with 3D-Model in Mixed Reality BIBAKFull-Text 22-29
  Sungchul Jung; Charles E. Hughes
In this paper we present the results of an experiment investigating a participant's sense of presence by examining the correlation between visual information and physical actions in a mixed reality environment. There have been many approaches to measure presence in a virtual reality environment, such as the "Pit" experiment, a physiological presence experiment that used a person's fear of heights to test body ownership. The studies reported in these prior works were conducted to measure the extent to which a person feels physical presence in virtual worlds [1-3]. Here, we focus on situational plausibility and place illusion in mixed reality, where real and virtual content coexist [4]. Generally, the phenomenon we are studying is called telepresence: an aroused sensation of 'being together in the same real location' between users [5].
Keywords: Telepresence; Mixed reality; Situational plausibility; Place illusion; Co-presence
Synthetic Evaluation Method of Electronic Visual Display Terminal Visual Performance Based on the Letter Search Task BIBAKFull-Text 30-38
  Wei Liu; Weixu Cai; Borui Cui; Muxuan Wang
Today the electronic visual display terminal (VDT) plays an indispensable role in people's lives, so its visual performance is very important. To evaluate the visual performance of VDTs from the user-experience perspective more accurately and reliably, we synthesize three commonly used visual performance assessment techniques, i.e. the main-task measure, the physiological measure and subjective evaluation, and establish a system for evaluating VDT visual performance. We use a pseudo-text letter search task as the main task, analyze and extract suitable evaluation indicators, and calculate the weights of these indicators with the entropy method. Finally, a quantified visual performance evaluation value is obtained for each VDT. The experimental results show that this evaluation value is generally consistent with the subjective scores. The evaluation system combines the main current methods of visual performance evaluation and exploits their advantages; its results can be quantified more directly and clearly. It provides a reference for visual performance evaluation from the perspective of user experience.
Keywords: Visual performance; VDT; Visual fatigue; Synthetic evaluation; Letter search
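As an illustrative aside (not from the paper), the entropy method the abstract mentions is a standard way to weight evaluation indicators: indicators whose values vary more across alternatives carry more information and receive larger weights. A minimal sketch, with hypothetical indicator names and scores:

```python
import math

def entropy_weights(matrix):
    """Entropy-method weights for the columns (indicators) of a score matrix.
    Rows are alternatives (here: VDTs), columns are evaluation indicators.
    Assumes all scores are positive and scaled so that larger is better."""
    n = len(matrix)
    m = len(matrix[0])
    k = 1.0 / math.log(n)
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        ps = [v / total for v in col]           # column proportions
        e = -k * sum(p * math.log(p) for p in ps if p > 0)  # entropy in [0, 1]
        divergences.append(1.0 - e)             # more variation -> more weight
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical scores for three displays on three indicators
# (task accuracy, physiological comfort, subjective rating).
scores = [
    [0.90, 0.70, 0.80],
    [0.85, 0.95, 0.60],
    [0.60, 0.75, 0.90],
]
w = entropy_weights(scores)  # weights sum to 1
overall = [sum(wj * row[j] for j, wj in enumerate(w)) for row in scores]
```

An indicator with identical scores across all alternatives gets weight zero, since it cannot distinguish the VDTs being compared.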
Subjective Usability Evaluation Criteria of Augmented Reality Applications BIBAKFull-Text 39-48
  Valéria Farinazzo Martins; Tereza Gonçalves Kirner; Claudio Kirner
This paper presents an extensive list of attributes for measuring the usability of Augmented Reality applications. These attributes were collected through a systematic review of journal and conference papers published worldwide over the last five years, examining aspects such as main application areas, most frequently used attributes, AR environments, and number of papers per year. We used the most relevant studies in the literature to organize and categorize the main usability attributes discussed and used in Augmented Reality. Finally, we propose a set of questions on these Augmented Reality usability attributes, based on established questionnaires and on the authors' own evaluation experience.
Keywords: Usability evaluation; Augmented reality; Usability testing
Spatial Mapping of Physical and Virtual Spaces as an Extension of Natural Mapping: Relevance for Interaction Design and User Experience BIBAKFull-Text 49-59
  Daniel Pietschmann; Peter Ohler
Natural user interfaces are designed to be intuitive and quick to learn. Through natural mapping, they draw on users' previous knowledge or skills by employing spatial analogies, cultural standards or biological effects. Virtual environments with high interaction fidelity also use rich spatial information in addition to natural mapping, e.g. stereoscopy or head-tracking. However, an additional factor for naturalism is the relationship between perceived interaction spaces: we propose to examine the spatial mapping of the perceived physical and virtual spaces as an extension of natural mapping. As with natural mapping, a high degree of spatial mapping achieved through an isomorphic mapping should result in more intuitive interactions, reducing the mental workload required. However, the benefits of spatial mapping for user experience and task performance are only evident for complex spatial tasks; consequently, many tasks do not benefit from complex spatial information (e.g. stereoscopy or head-tracking).
Keywords: Natural mapping; User experience; Mental models; Spatial mapping
The Impact of Time Pressure on Spatial Ability in Virtual Reality BIBAKFull-Text 60-66
  Hua Qin; Bole Liu; Dingding Wang
The aim of this study is to explore the influence of time pressure on the spatial distances perceived by participants in virtual reality. The results show no significant difference in participants' distance estimates with or without time pressure. However, there is a significant difference between estimates of short and long distances in the virtual environment, and significant differences also appear between the horizontal and vertical directions, with long-distance estimates showing more errors than short-distance ones.
Keywords: Time pressure; Horizontal direction; Vertical direction
Research on the Visual Comfort for Small Spaces in Different Illuminance Environments BIBAKFull-Text 67-73
  Linghua Ran; Xin Zhang; Hua Qin; Taijie Liu
This paper reports an ergonomic experiment on users' visual comfort, simulating five external illuminance environments: night, kitchen, living room, common supermarket and high-end market. It studied the comfortable and acceptable illuminance levels for a three-door refrigerator with ceiling lamps, both empty and filled with items. The experiment involved 40 subjects. Results showed that with ceiling lamps there was no significant difference between the empty and filled conditions; however, there were differences across the five external lighting environments. Based on the experimental data, the paper establishes a regression model describing the relationship between the external lighting environment and the comfortable internal illuminance levels for three-door refrigerators. The results provide reference data for optimizing the lighting design of three-door refrigerators.
Keywords: Visual comfort; Lighting environment; Illuminance; Mathematical model
Analysis of Sociocultural Constructs Applicable to Blue Force Teams: Increasing Fidelity from Pencil and Paper and Video Tests to Virtual Environments BIBAKFull-Text 74-80
  David Scribner; Pete Grazaitis; Asi Animashaun; Jock Grynovicki; Lauren Reinerman-Jones
The limited understanding of sociocultural factors and the role they play in a military context is increasingly recognized as a shortcoming in both military training and decision-making tools for commanders in the field. We begin by discussing sociocultural theory, its development, and its history. Next, we discuss approaches to collecting sociocultural factors from small-scale friendly force leaders and subordinates. Then, we describe and discuss the utility of Situational Judgment Tests (SJTs) for eliciting sociocultural values in decision-making, and how those tests may be translated into more enriched, life-like scenarios in virtual environments. Positive and negative attributes of each approach and viable resources to support their use are discussed.
Keywords: Sociocultural; Culture; Decision-making; Military; Data collection tools; Virtual environments; Situational judgment test
Influence of Highlighting Words Beneath Icon on Performance of Visual Search in Tablet Computer BIBAKFull-Text 81-87
  Li Wang; Liezhong Ge; Ting Jiang; Hongyan Liu; Hongting Li; Xinkui Hu; Hanling Zheng
This study compares the influence of different kinds of highlighted words beneath icons on visual search performance on tablet interfaces, using the response times and accuracy rates of participants completing a search task. The results indicate that highlighting the words below icons can improve target-icon search performance: when the icons are gray, colored words and flickering words are more effective for visual search; when the icons are colorful, flickering words are more effective.
Keywords: Highlighting words; Icon color; Visual searching performance
Applying Tangible Augmented Reality in Usability Evaluation BIBAKFull-Text 88-94
  Xiaotian Zhang; Young Mi Choi
Feedback from users is an invaluable part of the product design process. Prototypes of varying levels of detail are frequently used to solicit this feedback on the physical and user-experience attributes of a product. Accurate feedback is most useful in the early steps of the design process, where changes are easier to make. At the same time, highly detailed prototypes that allow accurate feedback are generally not available until much later in the process, after many decisions have already been made. The goal of this study is to investigate the use of Tangible Augmented Reality for usability testing of products with physical interface elements. The results will be compared to those of more traditional usability testing methods to determine whether they are similar. Similar results would indicate that accurate usability testing may be possible with Tangible Augmented Reality, allowing earlier evaluation of product concepts.
Keywords: Tangible augmented reality; Design process

Developing Virtual and Augmented Environments

Fact and Fiction Merge in Telepresence and Teleoperation BIBAKFull-Text 97-107
  Gordon M. Mair
This paper examines current trends in commercially available products and research related to telepresence and teleoperation systems. It also compares them with some aspects of speculative fiction directly related to these systems and their human interface. The presentation of the results in the form of a parallel timeline highlights the research gaps that remain to be filled in order to obtain the ideal telepresence system in which the technological mediation becomes as transparent as in fictional representations.
Keywords: Teleoperation; Virtual reality; Augmented reality; Science-fiction; Human computer symbiosis
Delta Global Illumination for Mixed Reality BIBAFull-Text 108-118
  Maik Thöner; Arjan Kuijper
The focus in Mixed Reality applications is merging objects from different realities into a new, visibly homogeneous scene. Achieving this requires, next to spatial registration, a plausible illumination of the objects. While shadows and direct illumination can give objects a realistic look, adding indirect illumination results in a seamless integration in which objects appear as part of the scene instead of glued-on patches. Mixed Reality systems find application in entertainment areas such as movies and games, as well as in prototype presentations and the visualization of planned or damaged constructions. We propose an algorithm based on Voxel Cone Tracing that provides Global Illumination for Mixed Reality, enabling diffuse as well as specular lighting and easy-to-compute soft shadows.
Registration System Errors Perception in Augmented Reality Based on RGB-D Cameras BIBAKFull-Text 119-129
  Daniel M. Tokunaga; Cléber G. Corrêa; Fernanda M. Bernardo; João Bernardes; Edith Ranzini; Fátima L. S. Nunes; Romero Tori
One of the main objectives of augmented reality (AR) is to merge virtual information fully into the real world. However, various problems in the computational processes can directly affect user perception. Although several works investigate how rendering or interaction issues are perceived by the user, little has been studied about how spatial registration problems affect user perception in AR systems, even though registration is one of the central problems of AR. In this work, we study how system errors of a three-point RANSAC pose-estimation algorithm based on RGB-D cameras affect user perception, by applying psychophysical tests. With these user tests, we address how depth-map and feature-matching noise, among other issues, can affect the perception of object registration.
Keywords: 3D registering error; Augmented reality; Pose estimation; User perception
Local 3D Pose Estimation of Feature Points Based on RGB-D Information for Object Based Augmented Reality BIBAKFull-Text 130-141
  Daniel M. Tokunaga; Ricardo Nakamura; João Bernardes; Edith Ranzini; Romero Tori
We describe a novel approach for locally estimating the pose of matched feature points observed with RGB-D cameras, applied to locally planar object pose estimation with the RANSAC method for augmented reality systems. Conventionally, object pose estimation with RGB-D cameras is achieved by correlating observed 3D points, captured through feature-point matching, with known 3D points of the object. In such methods, however, features are simplified to single 3D points, losing information about the feature and its neighborhood surface. Our approach, based on local 3D pose estimation of locally planar feature points, brings richer information to the 3D pose estimation of planar, rigid or deformable objects. This information enables pose estimation that is more stable across RANSAC settings than conventional three-point RANSAC methods.
Keywords: Computer vision; Pose estimation; Feature points
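As an illustrative aside (not the authors' algorithm), the RANSAC scheme underlying both pose-estimation papers above follows one generic loop: sample a minimal set of correspondences, hypothesize a model, and keep the model with the most inliers. The sketch below shows that loop for robust 2D line fitting, which is far simpler than three-point pose estimation but structurally the same; all data and names are hypothetical:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC sketch: robustly fit y = a*x + b to 2D points.
    Sample a minimal set, hypothesize a model, count inliers, keep the best.
    Three-point pose estimation from RGB-D correspondences uses the same loop
    with a rigid transform as the model."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Hypothetical data: points on y = 2x + 1 plus two gross outliers,
# standing in for correct vs. mismatched feature correspondences.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
(a, b), inliers = ransac_line(pts)  # recovers the line despite the outliers
```

The stability question the second paper studies corresponds to how sensitive such a loop is to its settings (iteration count and inlier tolerance) when the data are noisy.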
Towards a Structured Selection of Game Engines for Virtual Environments BIBAKFull-Text 142-152
  Martin Westhoven; Thomas Alexander
Developing and maintaining a virtual reality engine requires considerable effort, so it is common today to use existing solutions originating from the entertainment sector. This is often a compromise: because of their different background, such engines fulfill individual requirements only in part. The decision for a specific engine can have a large effect on the effort required to implement one's own functionality, and the number of existing engines further complicates the decision. To enable comprehensible and replicable decision making, we propose a structured selection process. In a multi-step approach, the requirements and comparison criteria are first identified and analyzed; pre-filtering then selects a feasible number of engines, which are compared in detail.
Keywords: Game engines; Virtual environments; Serious gaming
Evaluation and Fair Comparison of Human Tracking Methods with PTZ Cameras BIBAFull-Text 153-161
  Alparslan Yildiz; Noriko Takemura; Yoshio Iwai; Kosuke Sato
Evaluation and comparison of methods, repeatability of experiments, and availability of data are the dynamics driving science forward. In computer vision, a database with ground-truth information enables fair comparison and facilitates rapid improvement of methods in a particular topic. As a high-level discipline, Human-Computer Interaction (HCI) builds on numerous computer vision building blocks, including eye-gaze localization, human localization, action recognition, and behavior analysis, mostly using active systems employing lasers, projectors, infrared scanners, pan-tilt-zoom cameras, and various other active sensors.
   In this research, we focus on fair comparison of human tracking methods with active (PTZ) cameras. Although there are databases on human tracking, no specific database is available for active (pan-tilt-zoom) camera human tracking, because active-camera experiments are not repeatable: camera views depend on previous decisions made by the system. Here, we address this problem of systematic evaluation of active-camera tracking methods and present a survey of their performances.

Agents and Robots in Virtual Environments

Experimental Environments for Dismounted Human-Robot Multimodal Communications BIBAKFull-Text 165-173
  Julian Abich IV; Daniel J. Barber; Lauren Reinerman-Jones
The goal of multimodal communication (MMC) is to facilitate the conveyance of information through various modalities, such as the auditory, visual, and tactile channels. MMC has become a major focus for enabling human-robot teaming, but the technological state of robot capabilities often limits research and development. Currently, robots often serve a single role and are not equipped to interact dynamically with human team members. Before that functionality is developed, however, it is important to understand what robot capability is needed for effective collaboration. Through simulation, MMC input and output devices can be systematically evaluated under controlled conditions to better understand how to apply MMC with respect to users' abilities and preferences, and to assess communication hardware and software functionality. An experiment is presented and discussed to illustrate this approach.
Keywords: Interactive simulation; Multimodal communication; Human-robot interaction; Dismounted soldiers
Displays for Effective Human-Agent Teaming: The Role of Information Availability and Attention Management BIBAKFull-Text 174-185
  Maia B. Cook; Cory A. Rieth; Mary K. Ngo
In military and industrial systems, visual displays form a critical link between humans and agents. Through visual displays, human operators monitor indications of agent status to detect issues and proactively manage emerging problems. As operators manage increasingly more agents, conveying status becomes an especially complex visualization challenge. To effectively manage operator attention and support proactivity, careful consideration must be given to what information to provide access to and how best to assign visual salience to that information. Here, we systematically analyze and empirically evaluate the effectiveness of standard and novel status indicator formats in supporting proactive monitoring of multiple agents. The results reveal shortfalls of standard formats fielded in today's control systems, inadequacies of those formats for future multi-agent monitoring, and benefits of novel formats. For application, we provide guidance for status format design and use, mitigations for improving inadequate formats, and inspiration for creating novel formats for improved monitoring.
Keywords: Evaluation methods and techniques; Information visualization; Intelligent and agent systems
Exploring the Implications of Virtual Human Research for Human-Robot Teams BIBAFull-Text 186-196
  Jonathan Gratch; Susan Hill; Louis-Philippe Morency; David Pynadath; David Traum
This article briefly explores potential synergies between the fields of virtual human and human-robot interaction research. We consider challenges in advancing the effectiveness of human-robot teams and make recommendations for enhancing it by facilitating synergies between robotics and virtual human research.
Animation Guidelines for Believable Embodied Conversational Agent Gestures BIBAKFull-Text 197-205
  Ivan Gris; Diego A. Rivera; David Novick
In animating embodied conversational agents (ECAs), run-time blending of animations can provide a large library of movements that increases the appearance of naturalness while decreasing the number of animations to be developed. This approach avoids the need to develop a costly full library of possible animations in advance of use. Our principal scientific contribution is the development of a model for gesture constraints that enables blended animations to represent naturalistic movement. Rather than creating over-detailed, fine-grained procedural animations or hundreds of motion-captured animation files, animators can include sets of their own animations for agents, blend them, and easily reuse animations, while constraining the ECA to use motions that would occur and transition naturally.
Keywords: Embodied conversational agents; Animation; Usability
A Mark-Up Language and Interpreter for Interactive Scenes for Embodied Conversational Agents BIBAKFull-Text 206-215
  David Novick; Mario Gutierrez; Ivan Gris; Diego A. Rivera
Our research seeks to provide embodied conversational agents (ECAs) with behaviors that enable them to build and maintain rapport with human users. To conduct this research, we need to build agents and systems that can maintain high levels of engagement with humans over multiple interaction sessions. These sessions can potentially extend to longer periods of time to examine long-term effects of the virtual agent's behaviors. Our current ECA interacts with humans in a game called "Survival on Jungle Island." Throughout this game, users interact with our agent across several scenes. Each scene is composed of a collection of speech input, speech output, gesture input, gesture output, scenery, triggers, and decision points. Our prior system was developed with procedural code, which did not lend itself to rapid extension to new game scenes. So to enable effective authoring of the scenes for the "Jungle" game, we adopted a declarative approach. We developed ECA middleware that parses, interprets, and executes XML files that define the scenes. This paper presents the XML coding scheme and its implementation and describes the functional back-end enabled by the scene scripts.
Keywords: Embodied conversational agents; Scene; Interpreter; Parser
Displays for Effective Human-Agent Teaming: Evaluating Attention Management with Computational Models BIBAKFull-Text 216-227
  Cory A. Rieth; Maia B. Cook; Mary K. Ngo
In information-dense work domains, the effectiveness of display formats in drawing attention to task-relevant information is critical. In this paper, we demonstrate a method to evaluate this capability for on-screen indicators used to proactively monitor multiple automated agents. To estimate the effectiveness of indicator formats in drawing attention to emerging problems, we compared the visual salience of indicators, as measured by computational models, to task-relevant attributes needed during proactive monitoring. The results revealed that standard formats generally do not draw attention to the information needed to identify emerging problems in multi-indicator displays, and validated the success of formats designed to more closely map task-relevant information to visual salience. We additionally report an extended saliency-based monitoring model to predict task performance from saliency and discuss implications for broader design and application.
Keywords: Information visualization; Intelligent and agent systems; Evaluation methods and techniques
Intelligent Agents for Virtual Simulation of Human-Robot Interaction BIBAKFull-Text 228-239
  Ning Wang; David V. Pynadath; K. V. Unnikrishnan; Santosh Shankar; Chirag Merchant
To study how robots can work better with humans as a team, we have designed an agent-based online testbed that supports virtual simulation of domain-independent human-robot interaction. The simulation is implemented as an online game where humans and virtual robots work together in simulated scenarios. This testbed allows researchers to carry out human-robot interaction studies and gain better understanding of, for example, how a robot's communication can improve human-robot team performance by fostering better trust relationships among humans and their robot teammates. In this paper, we discuss the requirements, challenges and the design of such human-robot simulation. We illustrate its operation with an example human-robot joint reconnaissance task.
Keywords: Human robot interaction; Intelligent virtual agent; Social simulation

VR for Learning and Training

GlassClass: Exploring the Design, Implementation, and Acceptance of Google Glass in the Classroom BIBAKFull-Text 243-250
  Dave A. Berque; James T. Newman
Google Glass is worn like a pair of eyeglasses and is operated through a small screen, touchpad, and microphone. A variety of Augmented Reality and Mixed Reality Glassware applications are available for Glass. However, due to the size and position of the screen, it is hard for onlookers to discern what the user is doing while using these applications. Additionally, the user can surreptitiously take pictures and record videos of nearby people and things, resulting in privacy concerns. We hypothesized that use of Glassware in a specific domain, where onlookers were apprised of the use of the Glassware, would be better accepted than the more generic use of Glassware. This paper reports on our design, implementation and evaluation of several Glass applications to enhance communication between teachers and students in the classroom and presents results from a study that suggests that students accept the use of Glassware in this environment.
Keywords: Augmented reality; Google glass; Glassware; Educational applications of glassware; Wearable computing
Augmented Reality Training of Military Tasks: Reactions from Subject Matter Experts BIBAKFull-Text 251-262
  Roberto Champney; Stephanie J. Lackey; Kay Stanney; Stephanie Quinn
The purpose of this research effort was to understand the training utility of augmented reality and simulation-based training capabilities in an outdoor field environment. Specifically, this research focused on evaluating the training efficacy of the Augmented Immersive Team Training (AITT) system, a portable augmented reality training solution that targets Forward Observer (FO) tasks associated with a Call for Fire (CFF) mission. The assessment focused on evaluating training utility, satisfaction, usability, simulator sickness, presence, immersion and appropriateness of the fidelity cues provided by the AITT system. Data were gathered via questionnaires. The results of this study provided insight for formative evolution of the AITT system design and may have implications to other similar technologies.
Keywords: Augmented reality; Training; Learning; Immersive virtual reality; Wearable technology; Mixed reality; Training systems
Training Effectiveness Evaluation: Call for Fire Trainer -- Augmented Virtuality (CFFT-AV) BIBAKFull-Text 263-272
  Gino Fragomeni; Stephanie J. Lackey; Roberto Champney; Julie Nanette Salcedo; Stephen Serge
As emerging technologies continue to modernize battlefield systems, Mixed Reality (MR) training has increasingly been proposed as a lower-cost, more time-effective alternative to live training. However, there is minimal empirical data demonstrating the effectiveness of MR training, which leaders require to make informed decisions about training device acquisition. To assist in the decision-making process for future training system acquisition, a Training Effectiveness Evaluation (TEE) is being conducted by the U.S. Army Research Laboratory (ARL) Human Research and Engineering Directorate (HRED), Simulation and Training Technology Center (STTC) on the Call for Fire Trainer -- Augmented Virtuality (CFFT-AV). This paper describes the methodology of the TEE with regard to the effectiveness of AV as a platform within the Call for Fire (CFF) task domain and how AV technologies and methods can impact CFF training.
Keywords: Augmented virtuality; Simulation-Based training; Joint forward observer
Design and Analysis of the Learning Process Management System BIBAKFull-Text 273-279
  Songfeng Gao; Ziqi Wang
In this paper, we put forward a learning process management system. Integrating the monitoring and feedback functions that are widely ignored in other current teaching platforms, the system focuses on students' learning process and digitizes users' daily learning, from platform login to the recording of learning habits and methods. It presents users' learning situation in graphical form, providing objective and effective data for both teaching and learning. The system aims to help students establish good study habits and learning methods.
Keywords: Learning process; Teaching platform; Process management; Process feedback; .net platform
Applying Research in the Cognitive Sciences to the Design and Delivery of Instruction in Virtual Reality Learning Environments BIBAKFull-Text 280-291
  Martin S. Goodwin; Travis Wiltshire; Stephen M. Fiore
Current approaches to the design and delivery of instruction in virtual reality learning environments (VRLEs) draw heavily from traditional instructional strategies and design practices. This is problematic given that these strategies and practices were developed for learning contexts lacking the dynamic nature and capabilities of technology-rich, immersive learning environments. This directly affects the instructional efficacy of VRLEs by creating a dichotomy between the learning interface, which emphasizes knowledge as object, and the learning environment, which can emphasize knowledge as action. Drawing from theory and research in the cognitive sciences on embodied and enactive cognition, we present an instructional strategy that addresses this dichotomy by incorporating techniques and design practices that are better aligned with the learning dynamics provided by VRLEs.
Keywords: Virtual reality learning environments; Simulation-based training; Instructional design; Embodied cognition; Enactive cognition
Virtual Approach to Psychomotor Skills Training: Manipulating the Appearance of Avatars to Influence Learning BIBAKFull-Text 292-299
  Irwin Hudson; Karla Badillo-Urquiola
Using avatars as virtual instructors is becoming increasingly popular in the military domain due to emerging advances in distributive technologies (e.g., the internet and virtual worlds). The use of virtual environments and avatars is a viable means of achieving enhancements in psychomotor skill development. Although prior research has investigated the benefits of implementing virtual agents in learning environments, there is limited research examining the impact an avatar's physical appearance has on training. The purpose of this paper is to examine the fundamental applications of three types of virtual avatars (generic, highly recognizable subject matter expert (SME), and doppelganger) and to provide recommendations for future psychomotor skills training. A case study assesses the benefits of applying this virtual approach to physical therapy. Finally, this research seeks to expand the knowledge base of several training domains, such as military, rehabilitation, and high-performance athletic training.
Keywords: Agent; Avatar; Doppelganger; Physical therapy; Psychomotor learning; Virtual environments; Virtual reality
Squad Overmatch: Using Virtual Technology to Enhance Live Training Environments BIBAFull-Text 300-308
  Patrick M. Ogden; Terry N. Wollert; Paul Butler; Julie N. Salcedo
The application of virtual augmentation to the U.S. Army's training continuum may reduce Post-Traumatic Stress (PTS) and suicides by increasing Soldiers' resilience and cognitive skills at the squad level pre-deployment. This may be accomplished through current programs of record with technological injections, thereby enhancing the training experience and improving involvement and retention. Virtual platforms also invite more skill and task repetitions at a much lower cost and reduced risk of injury.
   In support of the squad as a decisive force, MG Brown, 2011 Commander at the Maneuver Center of Excellence, conducted a study to identify the critical aspects of U.S. Army training support needed to prepare squads to see first and act first. Focusing on the training devices utilized at the squad level, the concept was to build an enhanced training environment that would make our squads more resilient, efficient, and effective through improvements in human performance. This was demonstrated through virtual insertions into current programs of record spanning the gaming, virtual, and live continuums.
   Using data from Walter Reed Medical Center and the Federal Law Enforcement Training Center on stressors and stress exposure training, the study assessed where such exposures could be inserted during the current U.S. Army training cycle. Leveraging standard U.S. Army battle drills, a series of scenarios was developed incorporating the most detrimental stressors, including loss of a comrade, defensive and unintentional civilian casualties, and witnessing a death. Soldiers experienced a gradual increase of knowledge and stress through a base scenario in the gaming environment and two subsequent scenarios in the virtual and live environments. Each scenario built upon the previous one and was driven by standard U.S. Army platoon-level mission planning and intelligence injects. The live scenarios used virtual targets and interactive avatars, live actors, and battlefield effects to enhance the training environment.
Leveraging Stress and Intrinsic Motivation to Assess Scaffolding During Simulation-Based Training BIBAKFull-Text 309-320
  Julie Nanette Salcedo; Stephanie J. Lackey; Karla A. Badillo-Urquiola
Instructional designers in the Simulation-Based Training (SBT) community are becoming increasingly interested in incorporating scaffolding strategies into the SBT pedagogical paradigm. Scaffolding models of instruction involve the adaptation of instructional delivery methods or content so that the learner may gradually acquire the knowledge or skill until mastery and independence are achieved [1, 2]. One goal of incorporating scaffolding models into SBT is to bridge the gap between trainees' immediate knowledge and skill and their potential level of understanding when provided with scaffolded support. This gap represents an optimal level of learning often referred to as the Zone of Proximal Development (ZPD). ZPD may be maintained dynamically through the adjustment of instructional support and challenge levels [3]. Theoretically, for ZPD to be achieved, the training experience should be neither too easy nor too difficult. A challenge in implementing scaffolding in SBT and assessing its effectiveness is the lack of metrics to measure a trainee's ZPD. Therefore, this study investigates the use of stress and intrinsic motivation metrics, via the Dundee Stress State Questionnaire (DSSQ) and the Intrinsic Motivation Inventory (IMI), to assess the level of challenge elicited by selected instructional strategies in SBT for behavior cue analysis. Participants completed pre-test, training, practice, and post-test scenarios in one of three conditions: a Control condition and two instructional strategy conditions, Massed Exposure and Highlighting. Participants reported their stress using the DSSQ after each training and practice scenario and their overall intrinsic motivation using the IMI at the end of all scenarios. Stress and intrinsic motivation levels were compared between conditions. Ultimately, the results indicate that the Massed Exposure strategy may be preferable for maintaining ZPD during SBT for behavior cue analysis.
Keywords: Simulation-based training; Instructional strategies; Instructional design; Scaffolding; Stress; Motivation
Working the Modes: Understanding the Value of Multiple Modalities of Technologies for Learning and Training Success BIBAKFull-Text 321-328
  Eileen Smith; Ron Tarr; Cali Fidopiastis; Michael Carney
Technology for learning has great potential to decrease training time and to impart complex knowledge to the learner. However, a single technology may not provide the complete learning experience. We discuss this issue using fielded simulation-based training for fire-rescue incident command. The first priority is properly defining the training material; the second is assessing the efficacy of the training through scenario-based critique. The immersive nature of the incident command simulation allowed learners of all ages and backgrounds to experience the realism of a fire command post. Newer immersive technologies are discussed that will support transfer of training and provide seamless integration into real-world settings. Finally, we advocate for the development of direct brain measures of the learning process within operational environments. In this way, instructional design becomes a truly brain-based approach, and the supporting technology for learning delivery can be selected more precisely for the learning purpose.
Keywords: Learning; Training; Modeling and simulation; Education; Virtual reality; Psychophysiological metrics; Transfer; Mastery; Systems design; Immersion; Mixed reality; Assessment
Augmenting Reality in Sensor Based Training Games BIBAKFull-Text 329-336
  Peter A. Smith
Building an Augmented Reality experience has traditionally been limited by the use of physical markers and by GPS capabilities that are hampered indoors. Physical markers are intrusive in an environment that is dual use between an AR and a more traditional experience, making them a less-than-popular choice for physical locations. GPS solves many of these problems outdoors; unfortunately, it cannot be capitalized on in an indoor setting, where interference from the building means the fidelity of the location data cannot be guaranteed. A recent technology, the low-energy Bluetooth transmitter, allows devices to determine their proximity to the transmitter. These devices can be configured and installed discreetly in a physical location to power AR experiences, and they also open up new opportunities to augment, extend, push, and track a user's experience.
Keywords: iBeacon; BLE; Augmented reality; Location based training
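The BLE proximity sensing described in the abstract above typically rests on converting a beacon's received signal strength (RSSI) into an approximate distance and then binning it into coarse zones. The sketch below is a hedged illustration, not code from the paper: the `tx_power` calibration value (RSSI at one metre), the path-loss exponent `n`, and the zone thresholds are all assumptions for illustration.

```python
def estimate_distance(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance in metres from a measured RSSI using the
    log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def proximity_zone(distance: float) -> str:
    """Map an estimated distance to the coarse zones iBeacon-style APIs report."""
    if distance < 0.5:
        return "immediate"
    if distance < 4.0:
        return "near"
    return "far"
```

In practice RSSI is noisy indoors, so implementations usually smooth readings over a window before zoning; the coarse zones, rather than raw distances, are what an indoor AR experience would key off.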
A Serious-Game Framework to Improve Physician/Nurse Communication BIBAKFull-Text 337-348
  Marjorie Zielke; Susan Houston; Mary Elizabeth Mancini; Gary Hardee; Louann Cole; Djakhangir Zakhidov; Ute Fischer; Timothy Lewis
This paper focuses on a serious-game framework for a dialogue-driven game called GLIMPSE (A Game to Learn Important Communications Methods for Patient Safety Enhancement). The eight essential components of the framework include: recommended communication behavior; accurate translation; narrative-driven, role-playing episodes that allow practice in different challenging situations; perspective sharing mechanisms; a design paradigm that accommodates time challenges of participants; motivational gameplay rewards; feedback/assessment mechanisms; and curriculum. The paper explores how the framework was developed as well as implementation challenges, lessons learned and opportunities for future research.
Keywords: Dashboards; Interprofessional communication; Narrative systems; Patient safety; Perspective sharing; Persuasive technology; Physician/nurse communication; Role-playing; SBAR; Serious games; Serious game framework; Team-based communication; Learning portals

VR in Health and Culture

Low Cost Hand-Tracking Devices to Design Customized Medical Devices BIBAKFull-Text 351-360
  Giorgio Colombo; Giancarlo Facoetti; Caterina Rizzi; Andrea Vitali
This paper concerns the development of a Natural User Interface (NUI) for lower limb prosthesis design. The proposed solution exploits the Leap Motion device to emulate traditional design tasks manually performed by the prosthetist. We first illustrate why hand-tracking devices can be adopted to design the socket of a lower limb prosthesis using virtual prototyping tools. Then, we introduce the developed NUI and its features, mainly with regard to ergonomics and ease of use. Finally, we illustrate preliminary tests and the results achieved so far.
Keywords: Augmented interaction; Hand-tracking devices; SMA
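As context for the hand-tracking interaction described above, here is a minimal sketch (not the authors' implementation) of a basic NUI primitive that Leap-Motion-style trackers make easy: detecting a pinch from thumb and index fingertip positions, which an application could bind to a grab-and-shape tool. The 30 mm threshold and all names are assumptions for illustration.

```python
import math

def is_pinch(thumb_tip, index_tip, threshold_mm: float = 30.0) -> bool:
    """Return True when thumb and index fingertips (x, y, z in mm)
    are close enough together to count as a pinch gesture."""
    return math.dist(thumb_tip, index_tip) < threshold_mm
```

A real tracker streams fingertip positions per frame; an application would debounce this boolean over a few frames before triggering a design action.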
Effect of 3D Projection Mapping Art: Digital Surrealism BIBAKFull-Text 361-367
  Soyoung Jung; Frank Biocca; Daeun Lee
This study examines the effect of spatialized projection mapping, also known as spatial augmented reality or three-dimensional projection mapping, compared to projection on a flat screen. To isolate the effect of this modality, other variables, such as sound effects and other content, were limited: the stimuli had little representational meaning, consisting of moving geometric patterns. The results show that spatialized projection mapping was evaluated more positively and elicited greater spatial presence.
Keywords: Augmented reality; Spatialized projection mapping; Three dimensional projection mapping; Psychological effect; Spatial memory
Human Factors and Interaction Strategies in Three-Dimensional Virtual Environments to Support the Development of Digital Interactive Therapeutic Toy: A Systematic Review BIBAKFull-Text 368-378
  Eunice P. dos Santos Nunes; Eduardo M. Lemos; Cristiano Maciel; Clodoaldo Nunes
A therapeutic toy is used in the hospital environment to explain to children the procedures they will undergo. For this, health professionals usually use physical materials such as dolls and hospital accessories. However, there is the possibility of exploring Augmented and Virtual Reality systems to develop a therapeutic toy in a digital and interactive way. The main goal of this paper is to present the results of a Systematic Review (SR) that seeks to identify whether Three-Dimensional Virtual Environments (3D VEs) have been used to assist hospitalized children, which interaction strategies have been used, and which human factors have been explored. The results allowed the researchers to formulate hypotheses based on the human factors and interaction strategies identified, and to specify a Preliminary Reference Model for the development of a Digital Interactive Therapeutic Toy.
Keywords: Three-dimensional virtual environments; Human factors; Interaction strategies; Hospitalized children
Development and Evaluation of an Easy-to-Use Stereoscopic Ability Test to Assess the Individual Ability to Process Stereoscopic Media BIBAKFull-Text 379-387
  Daniel Pietschmann; Benny Liebold; Peter Ohler; Georg Valtin
With the rise of 3D cinema in recent years, 3D stereoscopic images have quickly conquered the entertainment industry. As a consequence, many scholars from different research disciplines study the effects of stereoscopy on user experience, task performance, or naturalism. However, parts of the population suffer from stereoblindness and are unable to process stereo images. For scientific studies, it is important to assess stereoblindness to avoid bias in the gathered data. Several clinical tests are available to measure deficiencies in stereo vision, but they often require special equipment and a trained investigator. We developed an easy to use and economic Stereoscopic Ability Test (SAT) that can be used directly within the intended experimental environment. Initial evaluation data for the test and guidelines for the test application are discussed.
Keywords: Stereoscopic vision; Psychology; Experimental; Diagnostics
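A common building block for stereo-vision tests of this kind is the random-dot stereogram: two dot fields identical except for a horizontally shifted central patch, which appears to float in depth only for observers with functional stereopsis. The sketch below illustrates the principle with assumed dimensions and disparity; it is not the SAT's actual stimulus generation.

```python
import random

def random_dot_pair(width=20, height=10, patch=(5, 5, 10, 8), disparity=2, seed=0):
    """Build a left/right random-dot image pair (0/1 grids). The patch
    (x0, y0, x1, y1) is copied into the right image shifted left by
    `disparity` pixels, creating binocular disparity over that region."""
    rng = random.Random(seed)
    left = [[rng.randint(0, 1) for _ in range(width)] for _ in range(height)]
    right = [row[:] for row in left]
    x0, y0, x1, y1 = patch
    for y in range(y0, y1):
        for x in range(x0, x1):
            right[y][x - disparity] = left[y][x]  # shifted copy of the patch
    return left, right
```

Presented dichoptically (one image per eye), an observer who cannot fuse the pair sees only noise; reporting the floating patch's position is the pass criterion in tests built on this principle.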
The Virtual Meditative Walk: An Immersive Virtual Environment for Pain Self-modulation Through Mindfulness-Based Stress Reduction Meditation BIBAKFull-Text 388-397
  Xin Tong; Diane Gromala; Amber Choo; Ashfaq Amin; Chris Shaw
One in five people in North America experience chronic pain. The primary non-pharmacological approach to treat chronic pain is to 'manage' pain by practices like Mindfulness-based Stress Reduction (MBSR) Meditation. Previous research shows the potential of mindfulness meditation to help foster patients' emotional wellbeing and pain self-modulation. Thus, the Virtual Reality (VR) system named "Virtual Meditative Walk" (VMW) was developed to help patients direct their attention inward through mindfulness meditation, which incorporates biofeedback sensors, an immersive virtual environment, and stereoscopic sound. It was specifically designed to help patients to learn MBSR meditation by providing real-time feedback, and to provide further training reinforcement. VMW enables patients to manage their chronic pain by providing real-time immersive visual signals and sonic feedback, which are mapped to their physiological biofeedback data. In the proof-of-concept study, this combination of immersive VR and MBSR meditation pain self-modulation technique proved to be effective for managing chronic pain.
Keywords: Virtual reality; Chronic pain; Mindfulness-based stress reduction meditation; Immersive environment
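The biofeedback mapping the abstract describes can be illustrated with a minimal sketch: a physiological signal (here, respiration rate) mapped linearly onto a visual parameter such as fog density, so that calmer breathing clears the virtual scene. The signal choice, ranges, and fog metaphor are assumptions for illustration, not details of VMW.

```python
def fog_density(breaths_per_min: float, calm: float = 6.0, stressed: float = 20.0) -> float:
    """Map respiration rate onto a [0, 1] fog density: slow, meditative
    breathing yields a clear scene (0.0); rapid breathing yields dense
    fog (1.0). Values outside the range are clamped."""
    t = (breaths_per_min - calm) / (stressed - calm)
    return max(0.0, min(1.0, t))
```

In a real-time loop, the returned density would be fed to the renderer each frame, closing the feedback loop between the patient's physiology and the environment.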
Digital Archiving of Takigi Noh Based on Reflectance Analysis BIBAKFull-Text 398-408
  Wataru Wakita; Shiro Tanaka; Kohei Furukawa; Kozaburo Hachimura; Hiromi T. Tanaka
We propose real-time bidirectional texture function (BTF) and image-based lighting (IBL) rendering of Takigi Noh based on reflectance analysis. First, we measured a sample of a Noh costume with an omnidirectional anisotropic reflectance measurement system called the Optical Gyro Measuring Machine (OGM), modeled the BTF of the Noh costume based on multi-illuminated High Dynamic Range (HDR) image analysis, and modeled the Noh stage in 3D from archival records. Second, we captured motion data of a Noh player and modeled a Noh player wearing a costume; to achieve real-time rendering, we modeled the Noh costume with a mass-spring-damper model. Finally, we modeled an animated ambient map based on Perlin's improved noise to achieve real-time dynamic lighting from the Takigi fire, and we calculated the optical reflection via IBL and the deformation of the Noh costume.
Keywords: Real-time rendering; BTF; Takigi noh; Reflectance analysis; Digital museum
Multimodal Digital Taste Experience with D'Licious Vessel BIBAKFull-Text 409-418
  Liangkun Yan; Barry Chew; Jie Sun; Li-An Chiu; Nimesha Ranasinghe; Ellen Yi-Luen Do
Increasingly, people are replacing soft drinks with natural fruit juices, since soft drinks usually contain excessive sugar and little nutrition. This paper introduces a multimodal digital taste control system, 'D'Licious Vessel', and the respective prototypes. The goal is to provide a digital solution to health concerns regarding the overuse of sugar in our daily drinks by decreasing natural sourness. The system applies gentle electrical signals to a person's tongue to trigger different taste sensations and improve the taste of drinks digitally, without the consumption of actual chemicals. We conducted user studies in a public setting to collect experimental data and to evaluate the system's effectiveness in improving the taste of lemon juice. During the study, participants were provided with lemon juice and asked to compare the taste while drinking with different settings of the taste-stimulation prototype. Their opinions on the different prototype designs are recorded and discussed.
Keywords: Flavor; Digital taste; Multimodal interaction; User interfaces; Virtual reality

Industrial and Military Applications

Assessing Performance Using Kinesic Behavior Cues in a Game-Based Training Environment BIBAKFull-Text 421-428
  Karla A. Badillo-Urquiola; Crystal S. Maraj
Warfighters are trained in Behavior Cue Analysis to detect anomalies in their environment across several domains. This research highlights the Kinesics domain for Behavior Cue Analysis training. As efforts to transition from live, classroom-based training to distributed virtual environment training continue, investigating instructional gaming strategies that elicit improved performance and user perception becomes progressively important. Applying gaming strategies (e.g., goals, competition, and feedback) to Simulation-Based Training offers a novel approach to delivering the core curriculum for Behavior Cue Analysis. This paper examines two game-based strategies (excessive positive feedback and competition) to determine the difference in performance scores (detection and classification accuracy). The results showed no significant difference in performance; however, insight was gained on the significance of excessive positive feedback. Consequently, the paper considers the application of game-based strategies for training behavior cues, as well as discussing the limitations and alternatives for future research.
Keywords: Behavior cue detection; Game-based training; Gaming strategies; Kinesics; Performance
The Virtual Dressing Room: A Usability and User Experience Study BIBAKFull-Text 429-437
  Michael B. Holte; Yi Gao; Eva Petersson Brooks
This paper presents the design and evaluation of a usability and user experience test of a virtual dressing room. First, we motivate and introduce our recently developed prototype of a virtual dressing room. Next, we present the research and test design, grounded in related usability and user experience studies, and describe the experimental setup and the execution of the designed test. Finally, we report the results and discuss them with respect to user-centered design and development of a virtual dressing room.
Keywords: Human-computer interaction; Usability; User experience; Virtual reality; Augmented reality; Computer graphics; Computer vision; Pose estimation; Gesture recognition; 3D imaging; 3D scanning and textile industry
Occlusion Management in Augmented Reality Systems for Machine-Tools BIBAKFull-Text 438-446
  Claudia Gheorghe; Didier Rizzotti; François Tièche; Francesco Carrino; Omar Abou Khaled; Elena Mugellini
Nowadays, augmented reality systems must be as realistic as possible. A major issue for such systems is presenting the augmentations as an integrated part of the environment: sometimes the virtual parts must be placed partially or even totally hidden behind a real object. This is known as the "occlusion problem". In this paper we present a pragmatic solution for managing occlusion, based on prior knowledge of the position and shape of the real objects, in two particular scenarios. The design and a qualitative evaluation of the developed system are detailed.
Keywords: Augmented reality; Occlusion; 3D tracking; 3D modeling
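Prior-knowledge occlusion handling of the kind the abstract describes is commonly implemented by rendering the known real objects as invisible "phantom" geometry that contributes only depth, so virtual fragments behind them are discarded and the camera image shows through. The toy per-pixel version of that depth test below is a hedged illustration with assumed names, not the paper's system.

```python
def composite(virtual_depth, real_depth, virtual_color, camera_feed):
    """Per-pixel occlusion test: keep the virtual pixel only where the
    virtual fragment is closer to the camera than the registered real
    object ("phantom") at that pixel; otherwise keep the live video."""
    return [vc if vd < rd else cc
            for vd, rd, vc, cc in zip(virtual_depth, real_depth,
                                      virtual_color, camera_feed)]
```

On a GPU this is simply the standard depth test: the phantom mesh is drawn first with color writes disabled, after which normal rendering of the virtual objects yields correct partial occlusion for free.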
Human-Computer Collaboration in Adaptive Supervisory Control and Function Allocation of Autonomous System Teams BIBAKFull-Text 447-456
  Robert S. Gutzwiller; Douglas S. Lange; John Reeder; Rob L. Morris; Olinda Rodas
The foundation for a collaborative, man-machine system for adaptive performance of tasks in a multiple, heterogeneous unmanned system teaming environment is discussed. An autonomics system is proposed to monitor missions and overall system attributes, including those of the operator, autonomy, states of the world, and the mission. These variables are compared within a model of the global system, and strategies that re-allocate tasks can be executed based on a mission-health perspective (such as relieving an overloaded user by taking over incoming tasks). Operators still have control over the allocation via a task manager, which also provides a function allocation interface, and accomplishes an initial attempt at transparency. We plan to learn about configurations of function allocation from human-in-the-loop experiments, using machine learning and operator feedback. Integrating autonomics, machine learning, and operator feedback is expected to improve collaboration, transparency, and human-machine performance.
Keywords: Autonomics; Autonomous systems; Supervisory control; Task models
ARTiSt -- An Augmented Reality Testbed for Intelligent Technical Systems BIBAFull-Text 457-469
  Bassem Hassan; Jörg Stöcklein; Jan Berssenbrügge
This paper describes a simulation and visualization environment called ARTiSt (Augmented Reality Testbed for intelligent technical Systems), which serves as a tool for developing extension modules for the miniature robot BeBot. It allows developers to simulate, visualize, analyze, and optimize new simulated components alongside existing, real system components. In ARTiSt, real BeBots are combined with virtual prototypes of a lifter module and a transporter module, which are attached on top of the real BeBot. The simulation of the virtual components and the management of the real BeBots are realized with MATLAB/Simulink. Parameters important for simulating the real BeBots, such as real-world position and rotation, are determined using an augmented reality tracking system: a camera installed on top of the testbed continuously captures the testbed and determines the real-world transformation of the BeBots. The calculated transformations are the basis for further pathfinding within the simulation in MATLAB/Simulink.
Evaluation of a Vehicle Exterior's Sportiness Under Real vs. Virtual Conditions BIBAKFull-Text 470-479
  Max Hoermann; Maximilian Schwalm
To identify customers' preferences, original equipment manufacturers (OEMs) in the automotive industry conduct so-called car clinics. Contemporary car clinics, however, are expensive because real prototypes are required.
   Car clinics with virtual models would help to solve this problem. To this end, an empirical study was designed and conducted to address the question of whether the perceived overall sportiness of a vehicle's exterior yields the same results for virtual as for real vehicle models. Because no standardized instrument was available, we developed a questionnaire designed to capture a vehicle exterior's sportiness through six underlying factors.
   Results revealed that the assessments of real and virtual vehicle exteriors correlate highly but do not match exactly.
Keywords: Virtual reality; Car clinic; Vehicle exterior; Questionnaire; Sportiness
Theoretical Foundations for Developing Cybersecurity Training BIBAKFull-Text 480-487
  Eric C. Ortiz; Lauren Reinerman-Jones
Cybersecurity refers to the detection, anticipation, and prevention of damage, attack, or unauthorized access to computer technologies and peripherals, including the monitoring of networks, programs, applications, and personnel. Cybersecurity can be viewed from either an offensive or a defensive posture, involving maintaining and proactively assessing security vulnerabilities. In 2013, Edward Snowden used his position as an infrastructure analyst to leak thousands of top-secret classified documents detailing the U.S. Government's global covert surveillance and eavesdropping undertakings to the public. This incident identified the human threat as a contributing factor and highlighted several weaknesses in the present state of U.S. cybersecurity affairs. To strengthen cyber defenses, a solid theoretical research foundation regarding cyber vulnerabilities is warranted. Building upon that foundation, training and experimentation can provide insight into current cybersecurity training methods and how they can be transitioned into and implemented in future training regimens.
Keywords: Cybersecurity; Human component; Virtual and gaming environments
Investigation of Visual Features for Augmented Reality Assembly Assistance BIBAFull-Text 488-498
  Rafael Radkowski
The overall goal of this research is to investigate the effectiveness of augmented reality (AR) assembly assistance in relation to the difficulty of a particular assembly task, since advantages of AR, such as time and error reduction, have not been consistently reported in the literature. The research aims to identify additional factors that affect the design of virtual instructions for AR applications. This paper discusses a proposed classification of visual features for AR assembly-assistance applications. The classification suggests visual features for different assembly activities and distinguishes significant from less significant parts; it represents the theoretical framework for this research. A user study was conducted to verify the suggested visual features. The results are not significant and do not support the classification; however, observations made during the study point to additional factors.
Evaluation of Autonomous Approaches Using Virtual Environments BIBAFull-Text 499-512
  Katharina Stahl; Jörg Stöcklein; Sijia Li
In this paper, we address the challenging problem of evaluating autonomous research approaches, using the example of an online anomaly-detection framework for dynamical real-time systems. We propose a virtual test environment conceptualized around the specific evaluation requirements. The architecture comprises all system parts required for evaluation: the operating system implementing the anomaly-detection framework, reconfigurable autonomous applications, an execution platform device for the operating system and its applications, and the device's environment. We demonstrate our concepts with our miniature robot BeBot, which acts as a virtual prototype (VP) executing autonomous applications. With an interactive module, the virtual environment (VE) offers full control over the environment and the VP, so that different levels of hardware implementation can be used for evaluation and failures can be injected at runtime. Our architecture makes it possible to determine clear system boundaries between the perception, decision-making, and execution functions, which is essential for evaluating autonomous approaches. We define evaluation scenarios to show the effectiveness of each part of our approach and to illustrate the power of virtual test environments for evaluating approaches such as the one presented here.
Appraisal of Augmented Reality Technologies for Supporting Industrial Design Practices BIBAKFull-Text 513-523
  Basak Topal; Bahar Sener
Having become widespread and easily accessible with the rapid advancements in technology, augmented reality (AR) offers potential uses for industrial designers, especially for design students. Some design stages, during which traditional tools and methods are used, may not fully communicate the total experience that a product offers. AR can provide a digital layer in which designers can present information and make their presentations more interactive. With this aim in mind, design practice fieldwork with three progressive studies was conducted. The results show that AR can be utilized mainly in presentation and prototyping stages of a design process, to show details such as audiovisual feedback and digital interfaces. With further developments, AR has potential use for several other design activities.
Keywords: Augmented reality; Industrial design education; Design process; Design activities
Advancing Interagency Collaboration Through Constructive Simulation: Results from the 2015 Federal Consortium of Virtual Worlds Workshop BIBAKFull-Text 524-534
  Barbara Truman; David Metcalf
Immersive 3D conferences are becoming viable with OpenSimulator, an open-source platform. Planning an immersive conference with the very software on which the conference's success depends strengthens the community of users of that platform. This paper describes three conference events held from 2013 to 2015 involving an emerging consortium of leading virtual-world developers and researchers. The technological success of immersive conferences holds promise for government and military agencies facing training requirements under fiscal restrictions. A workshop conducted during the writing of this paper established the inaugural immersive workshop for the Federal Consortium of Virtual Worlds, sponsored by the US Army and Avacon Incorporated, a non-profit organization that produces conference events.
Keywords: Education; Military; Distributed environments; Virtual worlds
Study on the Design Characteristics of Head Mounted Displays (HMD) for Use in Guided Repair and Maintenance BIBAKFull-Text 535-543
  Tao Yang; Young Mi Choi
Head-Mounted Displays (HMDs) are believed to be extremely useful in industrial applications. However, few studies have discussed the impact of different HMD design characteristics on task performance. This study aims to find out how different display positions of HMDs affect the performance of workers carrying out guided repair and maintenance tasks. A set of car maintenance and repair tasks will be performed with the guidance of HMDs in three display locations (above-eye, eye-centered, and below-eye) and with a traditional paper manual. Time and errors will be measured and discussed, as will the human-factors implications. Designers and engineers may leverage the findings to develop next-generation HMDs that improve workers' effectiveness, efficiency, and satisfaction.
Keywords: Head mounted display; Guided repair and maintenance; Wearable computer