
Proceedings of the 2013 ACM Symposium on Virtual Reality Software and Technology

Fullname: Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology
Editors: Nadia Magnenat Thalmann; Enhua Wu; Susumu Tachi; Daniel Thalmann; Luciana Nedel; Weiwei Xu
Location: Singapore, Singapore
Dates: 2013-Oct-06 to 2013-Oct-09
Publisher: ACM
Standard No: ISBN: 978-1-4503-2379-6; ACM DL: Table of Contents; hcibib: VRST13
Papers: 40
Pages: 268
Links: Conference Website
  1. Medical VR and virtual rehabilitation
  2. Human factors
  3. 3D interaction
  4. Navigation
  5. Avatars and robots in telepresence
  6. Simulation
  7. Rendering
  8. VR display technologies
  9. Poster abstracts

Medical VR and virtual rehabilitation

Pattern-based real-time feedback for a temporal bone simulator BIBAFull-Text 7-16
  Yun Zhou; James Bailey; Ioanna Ioannou; Sudanthi Wijewickrema; Stephen O'Leary; Gregor Kennedy
Delivering automated real-time performance feedback in simulated surgical environments is an important and challenging task. We propose a framework based on patterns to evaluate surgical performance and provide feedback during simulated ear (temporal bone) surgery in a 3D virtual environment. Temporal bone surgery is composed of a number of stages with distinct aims and surgical techniques. To provide context-appropriate feedback we must be able to identify each stage, recognise when feedback is to be provided, and determine the nature of that feedback. To achieve these aims, we train pattern-based models using data recorded by a temporal bone simulator. We create one model to predict the current stage of the procedure and separate stage-specific models to provide human-friendly feedback within each stage. We use 27 temporal bone simulation runs conducted by 7 expert ear surgeons and 6 trainees to train and evaluate our models. The results of our evaluation show that the proposed system identifies the stage of the procedure correctly and provides constructive feedback to assist surgical trainees in improving their technique.
Towards hand-eye coordination training in virtual knee arthroscopy BIBAFull-Text 17-26
  Shahzad Rasool; Alexei Sourin; Pingjun Xia; Bin Weng; Fareed Kagda
Minimally invasive arthroscopic surgery has replaced many common orthopaedic surgery procedures on joints. However, it requires surgeons to acquire very different motor skills for using special miniature pencil-like instruments and cameras inserted through small incisions in the body while observing the surgical field on a video monitor. Training in virtual reality is becoming an alternative to traditional surgical training based on either real patients or increasingly hard-to-procure cadavers. In this paper we propose solutions for simulating in virtual environments a few basic arthroscopic procedures, including insertion of the arthroscopic camera, positioning of the instrument in front of it, and using scissors and graspers. Our approach is based on both full 3D simulation with haptic interaction and image-based visualization with haptic interaction.
Robust and high-fidelity guidewire simulation with applications in percutaneous coronary intervention system BIBAFull-Text 27-30
  Yurun Mao; Fei Hou; Shuai Li; Aimin Hao; Mingjing Ai; Hong Qin
Real-time and realistic physics-based simulation of deformable objects is of great value to medical intervention, training, and planning in virtual environments. This paper advocates a virtual-reality (VR) approach to minimally-invasive surgery/therapy (e.g., percutaneous coronary intervention) in medical procedures. In particular, we devise a robust and accurate physics-based modeling and simulation algorithm for guidewire interaction with blood vessels. We also showcase a VR-based prototype system for simulating percutaneous coronary intervention and mimicking the intervention therapy, which affords the utility of flexible, slender guidewires to advance diagnostic or therapeutic catheters into a patient's vascular anatomy, supporting various real-world interaction tasks. The slender body of the guidewire is modeled using the well-known Cosserat theory of elastic rods. We derive the equations of motion for guidewires with continuous energies and integrate them with an implicit Euler solver, which guarantees robustness and stability. Our approach's originality is primarily founded upon its power, flexibility, and versatility when interacting with the surrounding environment, including novel strategies that hybridize geometry and physics, material variability, dynamic sampling, constraint handling and energy-driven physical responses. Our experimental results have shown that this prototype system is both stable and efficient with real-time performance. In the long run, our algorithm and system are expected to contribute to interactive VR-based procedure training and treatment planning.
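   The abstract does not reproduce the discrete update; as a generic illustration (not the authors' exact formulation), an implicit Euler step for an elastic rod with node positions $x$, velocities $v$, mass matrix $M$ and elastic energy $E$ takes the form

      v_{t+h} = v_t + h\,M^{-1}\big(-\nabla E(x_{t+h}) + f_{\mathrm{ext}}\big), \qquad x_{t+h} = x_t + h\,v_{t+h},

   where the unknowns appear on the right-hand side, so each step is solved as a system (e.g., after linearising $\nabla E$); evaluating the elastic forces at the end of the step is what provides the robustness and stability claimed above.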
Emotion-enabled haptic-based serious game for post stroke rehabilitation BIBAFull-Text 31-34
  Xiyuan Hou; Olga Sourina
In this paper, we propose and develop a novel adaptive haptic-based serious game for post-stroke rehabilitation. Real-time monitoring of the patient's emotions, based on the electroencephalogram (EEG), is used as an additional game control. A subject-dependent algorithm recognizing negative and positive emotions from EEG is integrated. Force feedback is proposed and implemented in the game. The proposed EEG-enabled haptic-based serious game could help to promote rehabilitation of patients with motor deficits after stroke. Such games could be used by patients for post-stroke rehabilitation even at home, without a nurse present.
A two-arm coordination model for phantom limb pain rehabilitation BIBAFull-Text 35-38
  Eray Molla; Ronan Boulic
Following limb loss, patients usually continue having sensations on their missing limbs as if they were still present. A significant proportion of these sensations are painful and are referred to as Phantom Limb Pain (PLP). Previous research has shown that providing the patient with visual feedback of a limb in place of the missing one in Virtual Reality (VR) can reduce PLP. In this paper we introduce a model to coordinate the arms, allowing a much broader range of reach tasks to be exercised in order to alleviate PLP more efficiently. Our Two-Arm Coordination Model (TACM) synthesizes the missing limb pose from the instantaneous variations of the intact opposite limb for a given reach task. Moreover, we propose a setup that makes use of a virtual mirror to enhance the full-body awareness of the patient in the virtual space.

Human factors

A methodology to assess the acceptability of human-robot collaboration using virtual reality BIBAFull-Text 39-48
  Vincent Weistroffer; Alexis Paljic; Lucile Callebert; Philippe Fuchs
Robots are becoming more and more present in our everyday life: they are already used for domestic tasks, for companionship activities, and soon they will be used to assist humans and collaborate with them in their work. Human-robot collaboration has already been studied in industry, for ergonomics and efficiency purposes, but more from a safety than from an acceptability point of view. In this work, we focused on how people perceive robots in a collaboration task and we proposed to use virtual reality as a simulation environment to test different parameters, by making users collaborate with virtual robots. A simple use case was implemented to compare different robot appearances and different robot movements. Questionnaires and physiological measures were used to assess the acceptability level of each condition in a user study. The results showed that the perception of robot movements depended on robot appearance and that a more anthropomorphic robot, both in its appearance and movements, was not necessarily better accepted by the users in a collaboration task. Finally, this preliminary use case was also an opportunity to confirm the relevance of using such a methodology -- based on virtual reality, questionnaires and physiological measures -- for future studies.
Usability benchmarks for motion tracking systems BIBAFull-Text 49-58
  Jean-Luc Lugrin; Dennis Wiebusch; Marc Erich Latoschik; Alexander Strehler
Precise, accurate, fast, and low-latency motion tracking is a core requirement of real-time human-computer interfaces. The choices of tracking systems for a particular set of 3D interaction techniques are manifold. Hence, guidance in this task is greatly beneficial. In this paper, we propose to establish a set of canonical and simple game-based benchmarks for a potentially standardised comparison of tracking systems. The benchmarks focus on usability scores given reoccurring interaction tasks without requiring potentially missing, incomplete, or complex latency or accuracy raw measurements. Our first two benchmarks evaluate three tracking systems regarding motion-parallax and 3D object manipulation techniques. Our usability comparisons confirmed an expected advantage of low-latency/high-accuracy systems, while they also demonstrated that certain tracking systems perform better than suggested by previous measurements of their raw performances. This indicates that our approach provides an adequate replacement and improvement over the pure comparison of technical specifications. We believe our benchmarks could benefit the research community by facilitating a usability-based comparison of motion tracking systems.
The effects of VEs on mobility impaired users: presence, gait, and physiological response BIBAFull-Text 59-68
  Rongkai Guo; Gayani Samaraweera; John Quarles
We are investigating if/how Mobility Impaired (MI) persons and healthy persons respond differently to Virtual Environments (VE). Previous research on healthy users has investigated a VE's effects on presence, gait (i.e., walking patterns), and physiological responses (e.g., heart rate). However, almost all of the previous research studies have been conducted only with healthy persons. Thus, very little is known about how MI persons respond to a VE physiologically, how a VE will affect their gait, or how their sense of presence may differ from healthy persons. To begin investigating this, we designed a VE that included a range of multimodal feedback to induce a strong sense of presence and was novel to the participants. Using this VE, we conducted a study with two different populations: 8 MI persons and 8 healthy persons. The healthy participants were of similar demographics (e.g., age, weight, height) to the MI participants. The MI population was symptomatically homogeneous (e.g., they all walked with canes) and no participants had cognitive impairment. This is one of the first studies to investigate how a VE can affect MI users' gait, physiological response, and presence.
Can we use a brain-computer interface and manipulate a mouse at the same time? BIBAFull-Text 69-72
  Jonathan Mercier-Ganady; Émilie Loup-Escande; Laurent George; Colomban Busson; Maud Marchal; Anatole Lécuyer
Brain-Computer Interfaces (BCI) introduce a novel way of interacting with real and virtual environments by directly exploiting cerebral activity. However, in most setups using a BCI, the user is explicitly asked to remain as motionless as possible, since muscular activity is commonly admitted to add noise and artifacts to brain electrical signals. Thus, as of today, users have rarely been allowed to use other classical input devices, such as mice or joysticks, simultaneously with BCI-based interaction. In this paper, we present an experimental study on the influence of manipulating an input device such as a standard computer mouse on the performance of a BCI system. We have designed a 2-class BCI which relies on Alpha brainwaves to discriminate between focused and relaxed mental activities. The study uses a simple virtual environment inspired by the well-known Pac-Man videogame and based on BCI and mouse controls. The control of mental activity enables the player to eat pellets in a simple 2D virtual maze. Different levels of motor activity achieved with the mouse are progressively introduced in the gameplay: 1) no motor activity (control condition), 2) a semi-automatic motor activity, and 3) a highly-demanding motor activity. As expected, the BCI performance was found to slightly decrease in the presence of motor activity. However, we found that the BCI could still be successfully used in all conditions, and that relaxed versus focused mental activities could still be significantly discriminated even in the presence of a highly-demanding mouse manipulation. These promising results pave the way to future experimental studies with more complex mental and motor activities, but also to novel 3D interaction paradigms that could mix BCI and other input devices for virtual reality and videogame applications.
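   A minimal sketch (hypothetical, not the authors' code) of the kind of 2-class discrimination described above: the power of one EEG channel in the alpha band (8-12 Hz) is estimated and thresholded into "relaxed" versus "focused".

      import numpy as np
      from scipy.signal import welch

      def alpha_power(eeg, fs):
          """Mean power spectral density of a single EEG channel in the 8-12 Hz alpha band."""
          freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
          band = (freqs >= 8) & (freqs <= 12)
          return psd[band].mean()

      def classify(eeg, fs, threshold):
          """High alpha power is associated with relaxation, low alpha power with focus.
          The threshold would be calibrated per subject (illustrative only)."""
          return "relaxed" if alpha_power(eeg, fs) > threshold else "focused"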
Impact of graphical fidelity on physiological responses in virtual environments BIBAFull-Text 73-76
  Vivianette Ocasio-De Jesús; Andrew Kennedy; David Whittinghill
Higher-quality computer graphics in interactive applications such as virtual reality and games are generally assumed to create a more immersive experience for the end user. In this study we examined this assumption by testing to what degree graphical fidelity was associated with physiological arousal as measured by a galvanic skin response (GSR) sensor. Thirty-six subjects played two different video games at the highest and lowest graphical quality settings while their GSR activity was measured. No significant difference in GSR associated with graphical quality was observed. We conclude that, for applications in which an emotional response is desired, increased graphical quality alone does not predict a physiological arousal response.

3D interaction

Facetons: face primitives with adaptive bounds for building 3D architectural models in virtual environment BIBAFull-Text 77-82
  Naoki Sasaki; Hsiang-Ting Chen; Daisuke Sakamoto; Takeo Igarashi
We present faceton, a geometric modeling primitive designed for building architectural models using a six degrees of freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With simple drag-and-drop and group interaction with facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling in VE, where users have to manually specify vertices and edges, which could be far away. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment.
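   As a hypothetical illustration of the underlying geometry (not the authors' adaptive bounding algorithm), the corner vertex shared by three facetons -- each an oriented point p with normal n defining the plane n . x = n . p -- is the solution of a 3x3 linear system:

      import numpy as np

      def faceton_corner(points, normals):
          """Intersect the three planes defined by three oriented points (facetons).
          Each plane is n . x = n . p; stacking the three gives N x = d."""
          N = np.asarray(normals, dtype=float)                        # 3x3 matrix of plane normals
          d = np.einsum("ij,ij->i", N, np.asarray(points, dtype=float))
          return np.linalg.solve(N, d)                                # fails if the planes are (near-)parallel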
Interacting with danger in an immersive environment: issues on cognitive load and risk perception BIBAFull-Text 83-92
  Vitor A. M. Jorge; Wilson J. Sarmiento; Anderson Maciel; Luciana Nedel; César A. Collazos; Frederico Faria; Jackson Oliveira
Any human-computer interface imposes a certain level of cognitive load on the user's task. Analogously, the task itself also imposes different levels of cognitive load. It is common sense in 3D user interfaces research that a higher number of degrees of freedom increases the interface's cognitive load. If the cognitive load is significant, it might compromise user performance and undermine the evaluation of user skills in a virtual environment. In this paper, we propose an assessment of two immersive VR interfaces with varying degrees of freedom in two VR tasks: risk perception and basic object selection. We examine the effectiveness of both interfaces in these two different tasks. Results show that the number of degrees of freedom does not significantly affect a basic selection task, but it affects the risk perception task in an unexpected way.
ShoeSoleSense: proof of concept for a wearable foot interface for virtual and real environments BIBAFull-Text 93-96
  Denys J. C. Matthies; Franz Müller; Christoph Anthes; Dieter Kranzlmüller
ShoeSoleSense is a proof-of-concept, novel body-worn interface -- an insole that enables location-independent, hands-free interaction through the feet. Forgoing hand or finger interaction is especially beneficial when the user is engaged in real-world tasks. In virtual environments such as safety training applications, movement is often controlled via finger input, which is not very suitable. To enable a more intuitive interaction, alternative control concepts utilize gesture control, which is usually tracked by statically installed cameras in CAVE-like installations. Since tracking coverage is limited, problems may also occur. The introduced prototype provides a novel control concept for virtual reality as well as real-life applications. Demonstrated functions include movement control in a virtual reality installation, such as moving straight, turning and jumping. Furthermore, the prototype provides additional feedback by heating up the feet and vibrating in dedicated areas on the surface of the insole.
Bubble bee, an alternative to arrow for pointing out directions BIBAFull-Text 97-100
  Jonathan Wonner; Jérôme Grosjean; Antonio Capobianco; Dominique Bechmann
We present Bubble Bee -- an extension for the 3D bubble cursor in Virtual Environments (VEs). This technique provides an alternative to arrows for pointing out a direction in a 3D scene.
   Bubble Bee is based on a ring concept. A circular ring in 3D appears like an ellipse, according to its orientation. This orientation is easy to infer by comparing the minor radius which varies with the view angle, to the reference major radius which is constant and equal to the radius of the ring. Bubble Bee is a sphere with several rings oriented towards the same direction. The rings give a natural axis to the sphere. A color gradient sets the direction of this axis.
   We compared the performance of Bubble Bee and a 3D arrow through an experiment. The participants were asked to indicate which object was pointed at by each of the two competing techniques. No significant differences in decision time were found, while Bubble Bee was shown to be nearly as accurate as a 3D arrow.
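   The orientation cue the rings provide can be made explicit (illustrative only, not the authors' implementation): a circular ring of radius R tilted away from the viewer projects to an ellipse whose minor radius shrinks to R cos(theta), while the major radius stays R, so the tilt can be read off as theta = arccos(minor / major).

      import numpy as np

      def ring_tilt_degrees(minor_radius, ring_radius):
          """Tilt of a circular ring away from the view direction, inferred from the
          projected (elliptical) minor radius; the major radius equals the true ring radius."""
          ratio = np.clip(minor_radius / ring_radius, -1.0, 1.0)
          return np.degrees(np.arccos(ratio))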

Navigation

Scalable optical tracking for navigating large virtual environments using spatially encoded markers BIBAFull-Text 101-110
  Steven Maesen; Patrik Goorts; Philippe Bekaert
In this paper we present a novel approach for tracking the movement of a user in a large indoor environment. Many studies show that natural walking in virtual environments increases the feeling of immersion by the users. However, most tracking systems suffer from a limited working area or are expensive to scale up to a reasonable size for navigation.
   Our system is designed to be easily scalable both in working area and number of simultaneous users using inexpensive off-the-shelf components. To accomplish this, the system determines the 6 DOF pose using passive LED strips, mounted to the ceiling, which are spatially encoded using De Bruijn codes. A camera mounted to the head of the user records these patterns. The camera can determine its own pose independently, so no restriction on the number of tracked objects is required. The system is accurate to a few millimeters in location and less than a degree in orientation. The accuracy of the tracker is furthermore independent of the size of the working area which makes it scalable to enormous installations. To provide a realistic feeling of immersion, the system is developed to be real-time and is only limited by the framerate of the camera, currently at 60Hz.
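   An illustrative sketch of the spatial-encoding idea (not the authors' implementation; alphabet size and window length are hypothetical): a De Bruijn sequence B(k, n) guarantees that every length-n window of symbols occurs exactly once, so observing any n consecutive markers identifies an absolute position along the strip.

      def de_bruijn(k, n):
          """B(k, n): cyclic sequence over k symbols in which every length-n word
          occurs exactly once (standard Lyndon-word construction)."""
          a = [0] * k * n
          seq = []
          def db(t, p):
              if t > n:
                  if n % p == 0:
                      seq.extend(a[1:p + 1])
              else:
                  a[t] = a[t - p]
                  db(t + 1, p)
                  for j in range(a[t - p] + 1, k):
                      a[t] = j
                      db(t + 1, t)
          db(1, 1)
          return seq

      def locate(window, seq, n):
          """Absolute position of an observed length-n window along the (linearised) code strip."""
          s = seq + seq[:n - 1]                     # unwrap the cyclic sequence
          for i in range(len(seq)):
              if s[i:i + n] == list(window):
                  return i
          raise ValueError("window not found -- marker misread")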
6DoF navigation in virtual worlds: comparison of joystick-based and head-controlled paradigms BIBAFull-Text 111-114
  Weiya Chen; Anthony Plancoulaine; Nicolas Férey; Damien Touraine; Julien Nelson; Patrick Bourdot
6DoF navigation in a virtual world can usually be implemented by two types of navigation techniques: joystick-based input devices and steering metaphors based on movements of the user's body, e.g. head-controlled paradigms. These two different types of 6DoF navigation techniques provide users with the same level of control, but the latter introduces the user's physical movements in the navigation, which we believe will improve the navigation experience in immersive virtual environments. In this paper, we compare these two types of 6DoF navigation techniques in an immersive context, through an experiment using both objective and subjective measurements to assess user performance, the occurrence of cybersickness symptoms and the level of presence, when using either of these navigation paradigms.
Tangible windows for a free exploration of wide 3D virtual environment BIBAFull-Text 115-118
  Florimond Guéniat; Julien Christophe; Yoren Gaffary; Adrien Girard; Mehdi Ammi
Exploring virtual environments with immersive metaphors is still largely unexplored, with the exception of costly CAVE installations. This question is gaining importance in many fields, such as fluid mechanics, where space- and time-resolved datasets are becoming increasingly common. For that reason, we present an interaction design study of a window exploration metaphor for large 3D virtual environments. The metaphor is based on the use of a tablet as a tangible and movable window on a virtual environment. Rotations in the environment are mapped, without external trackers, onto the rotations of the tablet. Our design is inspired by fluid mechanics issues, but is built keeping generalizability in mind. The study shows that mapping three rotational degrees of freedom onto the corresponding three real degrees of freedom of space improves transparency, the efficiency of data exploration, and the spatial awareness of users.
Robust prediction of auditory step feedback for forward walking BIBAFull-Text 119-122
  Markus Zank; Thomas Nescher; Andreas Kunz
Virtual reality systems supporting real walking as a navigation interface usually lack auditory step feedback, although this could give additional information to the user e.g. about the ground he is walking on. In order to add matching auditory step feedback to virtual environments, we propose a calibration-free and easy to use system that can predict the occurrence time of stepping sounds based on human gait data.
   Our system is based on the timing of reliably occurring characteristic events in the gait cycle which are detected using foot mounted accelerometers and gyroscopes. This approach not only allows us to detect but to predict the time of an upcoming step sound in realtime. Based on data gathered in an experiment, we compare different suitable events that allow a tradeoff between the maximum precision of the prediction and the maximum time by which the sound can be predicted.
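   A hypothetical sketch of the prediction idea (the concrete gait events and lead times are the paper's contribution and are not reproduced here): once a characteristic event is detected in the inertial data, the step sound is scheduled ahead of time using the event-to-contact delay observed over previous steps.

      class StepSoundPredictor:
          """Schedule auditory step feedback ahead of ground contact (illustrative only).
          gyro_threshold: angular rate (rad/s) marking the chosen characteristic gait event."""

          def __init__(self, gyro_threshold, initial_delay=0.12):
              self.gyro_threshold = gyro_threshold
              self.delay = initial_delay            # seconds between event and heel strike (assumed)
              self._last_event_time = None

          def on_gyro_sample(self, t, angular_rate, play_sound_at):
              """Predict and schedule the upcoming step sound when the event fires."""
              if angular_rate > self.gyro_threshold and self._last_event_time is None:
                  self._last_event_time = t
                  play_sound_at(t + self.delay)     # predicted time of ground contact

          def on_heel_strike(self, t):
              """Called when contact is actually detected; refines the delay estimate online."""
              if self._last_event_time is not None:
                  observed = t - self._last_event_time
                  self.delay = 0.9 * self.delay + 0.1 * observed
                  self._last_event_time = None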

Avatars and robots in telepresence

Persuading people in a remote destination to sing by beaming there BIBAFull-Text 123-132
  Pierre Bourdin; Josep Maria Tomàs Sanahuja; Carlota Crusafon Moya; Patrick Haggard; Mel Slater
We built a Collaborative Virtual Environment (CVE) allowing one person, the 'visitor', to be digitally transported to a remote destination to interact with local people there. This included full body tracking, vibrotactile feedback and voice. This allowed interactions in the same CVE between multiple people situated in different remote physical locations. This system was used for an experiment to study whether the conveyance of touch has an impact on the willingness of participants embodied in the CVE to sing in public.
   In the first experimental condition, the experimenter virtually touched the avatar of the participants on the shoulder, producing vibrotactile feedback. In another condition using the identical physical setup, the vibrotactile displays were not activated, so that participants would not feel the touch. Our hypothesis was that the tactile touch condition would produce a greater likelihood of compliance with the request to sing. In a second part we examined the hypothesis that people might be more willing to sing (execute an embarrassing task) in a CVE, because of the anonymity provided by virtual reality. Hence we carried out a similar study in physical reality.
   The results suggest that the tactile intervention had no effect on the sensations of body ownership, presence or the behaviours of the participants, in spite of the finding that the sensation of touch itself was effectively realised. Moreover we found an overall similarity in responses between the VR and real conditions.
Human-virtual human interaction by upper body gesture understanding BIBAFull-Text 133-142
  Yang Xiao; Junsong Yuan; Daniel Thalmann
In this paper, a novel human-virtual human interaction system is proposed. This system allows a real human to communicate with a virtual human using natural body language. Meanwhile, the virtual human is capable of understanding the meaning of human upper body gestures and reacting with its own personality by means of body action, facial expression and verbal language simultaneously. In total, 11 human upper body gestures with and without human-object interaction are currently involved in the system. They can be characterized by human head, hand and arm posture. In our system implementation, the wearable Immersion CyberGlove II is used to capture the hand posture and the vision-based Microsoft Kinect captures the head and arm posture. This is a new sensor solution for human-gesture capture, and can be regarded as the most important contribution of this paper. Based on the posture data from the CyberGlove II and the Kinect, an effective and real-time human gesture recognition algorithm is also proposed. To verify the effectiveness of the gesture recognition method, we build a human gesture sample dataset. Additionally, the experiments demonstrate that our algorithm can recognize human gestures with high accuracy in real time.
AMITIES: avatar-mediated interactive training and individualized experience system BIBAFull-Text 143-152
  Arjun Nagendran; Remo Pillat; Adam Kavanaugh; Greg Welch; Charles Hughes
This paper presents an architecture to control avatars and virtual characters in remote interaction environments. A human-in-the-loop (interactor) metaphor provides remote control of multiple virtual characters, with support for multiple interactors and multiple observers. Custom animation blending routines and a gesture-based interface provide interactors with an intuitive digital puppetry paradigm. This paradigm reduces the cognitive and physical loads on the interactor while supporting natural bi-directional conversation between a user and the virtual characters or avatar counterparts. A multi-server-client architecture, based on a low-demand network protocol, connects the user environment, interactor station(s) and observer station(s). The associated system affords the delivery of personalized experiences that adapt to the actions and interactions of individual users, while staying true to each virtual character's personality and backstory. This approach has been used to create experiences designed for training, education, rehabilitation, remote presence and other related applications.
Multi-party interaction with a virtual character and a human-like robot BIBAFull-Text 153-156
  Zerrin Yumak; Nadia Magnenat-Thalmann
Research on interactive virtual characters and social robots focuses mainly on one-to-one interactions, while the concept of multi-party interaction is rather less explored. As we are developing these characters to be helpful to us in our daily lives as guides, companions, assistants or receptionists, they should be aware of the existence of multiple people, address their requirements in a natural way, and act according to social rules and norms. In contrast with previous work, we are interested in multi-party and multi-modal interactions between 3D virtual characters, real humans and social robots. This means that any of these participants can interact with each other. In this paper we present our on-going work, provide a discussion on multi-party interaction, describe the overall system architecture and mention our future work.
A multimodal person-following system for telepresence applications BIBAFull-Text 157-164
  Wee Ching Pang; Gerald Seet; Xiling Yao
This paper presents the design and implementation of a multimodal person-following system for a mobile telepresence robot. A color histogram matching and position matching algorithm was developed for a person-recognition function using Kinect sensors. Robot motion was controlled by adjusting its velocity according to the human's position in relation to the robot. The robot was able to follow the targeted person in various person-following modes, such as the back-following mode, the side-by-side accompaniment mode, as well as the front-guiding mode. An obstacle avoidance function was also implemented using the virtual potential field algorithm.
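   A hypothetical sketch of the velocity-adjustment idea for the back-following mode (gains, speed limit and desired distance are illustrative, not taken from the paper):

      def follow_velocity(distance, bearing, desired_distance=1.2,
                          k_linear=0.8, k_angular=1.5, v_max=0.7):
          """Proportional controller: drive toward the tracked person while keeping a
          fixed gap, and turn so the person stays centred in the sensor's view.
          distance in metres, bearing in radians (0 = straight ahead)."""
          v = max(-v_max, min(v_max, k_linear * (distance - desired_distance)))
          w = k_angular * bearing
          return v, w                               # linear (m/s) and angular (rad/s) velocity commands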
Supporting interoperability and presence awareness in collaborative mixed reality environments BIBAFull-Text 165-174
  Oyewole Oyekoya; Ran Stone; William Steptoe; Laith Alkurdi; Stefan Klare; Angelika Peer; Tim Weyrich; Benjamin Cohen; Franco Tecchia; Anthony Steed
In the BEAMING project we have been extending the scope of collaborative mixed reality to include the representation of users in multiple modalities, including augmented reality, situated displays and robots. A single user (a visitor) uses a high-end virtual reality system (the transporter) to be virtually teleported to a real remote location (the destination). The visitor may be tracked in several ways including emotion and motion capture. We reconstruct the destination and the people within it (the locals). In achieving this scenario, BEAMING has integrated many heterogeneous systems. In this paper, we describe the design and key implementation choices in the Beaming Scene Service (BSS), which allows the various processes to coordinate their behaviour. The core of the system is a light-weight shared object repository that allows loose coupling between processes with very different requirements (e.g. embedded control systems through to mobile apps). The system was also extended to support the notion of presence awareness. We demonstrate two complex applications built with the BSS.
Real time whole body motion mapping for avatars and robots BIBAFull-Text 175-178
  Bernhard Spanlang; Xavi Navarro; Jean-Marie Normand; Sameer Kishore; Rodrigo Pizarro; Mel Slater
We describe a system that allows for controlling different robots and avatars from a real time motion stream. The underlying problem is that motion data from tracking systems is usually represented differently to the motion data required to drive an avatar or a robot: there may be different joints, motion may be represented by absolute joint positions and rotations or by a root position, bone lengths and relative rotations in the skeletal hierarchy. Our system resolves these issues by remapping in real time the tracked motion so that the avatar or robot performs motions that are visually close to those of the tracked person. The mapping can also be reconfigured interactively at run-time. We demonstrate the effectiveness of our system by case studies in which a tracked person is embodied as an avatar in immersive virtual reality or as a robot in a remote location. We show this with a variety of tracking systems, humanoid avatars and robots.
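   An illustrative sketch of the remapping problem described above (the joint name map and offset correction are hypothetical, not the authors' implementation): each tracked joint rotation is transferred to the target skeleton through a per-joint correspondence and a fixed offset accounting for differing rest poses.

      import numpy as np

      def remap_pose(tracked_rotations, joint_map, bind_offsets):
          """Map tracked joint rotations (3x3 matrices) onto an avatar/robot skeleton.
          joint_map:    {target_joint: source_joint} name correspondence
          bind_offsets: {target_joint: 3x3 matrix} correcting for different rest poses"""
          target_pose = {}
          for target, source in joint_map.items():
              R_src = np.asarray(tracked_rotations[source])
              target_pose[target] = bind_offsets[target] @ R_src
          return target_pose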

Simulation

A pattern-based modeling framework for simulating human-like pedestrian steering behaviors BIBAFull-Text 179-188
  Nan Hu; Michael Harold Lees; Suiping Zhou
In this paper, we propose a new approach to modeling natural steering behaviors of virtual humans. We suspect that a small number of steering strategies are sufficient for generating typical pedestrian behaviors observed in daily-life situations. Through these limited strategies we show that complex steering behaviors are generated by executing appropriate steering strategies at the appropriate time. In our model, decisions on the selection, scheduling and execution of steering strategies in a given situation are based on the matching results between the currently perceived spatial-temporal patterns and the prototypical cases in an agent's experience base. From a modeler's point of view, our approach is intuitive to use. Our model is carefully evaluated through a three-stage validation process, using experimental studies on basic test scenarios, model comparisons under standard but more complex test scenarios, and sensitivity analysis on key model parameters. Experimental results show that our model is able to generate results that reflect the collective efficiency of crowd dynamics and is in agreement with existing literature on pedestrian studies.
Parallel cities BIBAFull-Text 189-192
  Yuichiro Takeuchi
In this paper we introduce the concept of Parallel Cities -- live, 3D simulations of actual cities kept "true to life" using real-time information from their real-world counterparts. A key enabler of the concept is the ever-growing tide of SNS (e.g., Twitter) updates, which can be aggregated and analyzed to obtain a rough, but macroscopically accurate, picture of the current state of a city. Metaphorically, Parallel Cities will allow users to instantly "open windows" to any place in the world, providing an attractive enhancement to existing online mapping services. The paper will offer a concise discussion of the technical details of the concept, centered around descriptions of a working prototype that targets the city of Tokyo, Japan.

Rendering

Refurbish a single user 3D application into a multi-user distributed service: a case study BIBAFull-Text 193-200
  Niels A. Nijdam; Yvain Tisserand; Nadia Magnenat-Thalmann
Through a multitude of different devices, such as phones, tablets and desktop systems, we are able to exchange data across the world, independently of location, time and the device used. Almost by default, applications are extended with networking capabilities, either deployed locally on the client device and connecting to a server or, as the trend is now, fully hosted on the Internet as a service (cloud services). However, many 3D applications are still restricted to a single platform, as it is costly to develop, maintain, adapt and support multiple platforms (in software as well as in hardware dependencies). Therefore, the applications that we now see available on a variety of devices are either single-platform, single-user, non-real-time collaborative, or graphically undemanding. By using an adaptive remote rendering approach it is feasible to take advantage of these new devices and provide means for old and new 3D-oriented applications to be used in collaborative environments. In this paper, we look at the conversion of a single-user 3D application into a multi-user service. We analyse the requirements for adapting the software for integration into the "Herd framework", which offers remote rendering to end devices and exposes a single application instance to multiple users. To optimize each instance of the application for different devices, the user interface representation is handled dynamically using a device profile, which also covers different input techniques.
Perceptual radiometric compensation for inter-reflection in immersive projection environment BIBAFull-Text 201-208
  Yuqi Li; Qingshu Yuan; Dongming Lu
We present a fast perceptual radiometric compensation method for inter-reflection in immersive projection environments. Radiometric compensation is the inverse process of light transport. As the light transport process can be described by a matrix-vector multiplication equation, radiometric compensation for inter-reflection can be achieved by solving that equation for the projected-image vector, which requires a matrix inversion. As the dimensions of the matrix are equivalent to the resolution of the images, such a matrix inversion is both time and storage consuming. Unlike previous methods, our method adopts a projector-camera system to simulate the inversion, and treats the compensation as a non-linear optimization problem formulated from the full light transport matrix and a non-linear color space conversion. To make the physical multiplication simulation more practical, the method adjusts the range of the projector-camera system adaptively and reduces the high-frequency errors caused by clipping and measurement error to make the compensated results smoother. We implement an immersive projection display prototype. The experiments show that our method achieves better results compared with the previous method.
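   In matrix form (a standard description of the problem, not the specific optimisation used here), the forward light transport and its naive inverse read

      c = T\,p + e, \qquad p = T^{-1}\,(c_{\mathrm{desired}} - e),

   where p is the projector image stacked into a vector, c the captured image, T the light transport matrix (including inter-reflection), and e the ambient contribution. Since T has one row and one column per pixel, inverting it directly is prohibitive in time and storage, which is what motivates the simulation-based, non-linear optimisation above.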
Distorted shadow mapping BIBAFull-Text 209-214
  Nixiang Jia; Dening Luo; Yanci Zhang
In this paper, a novel algorithm named Distorted Shadow Maps (DSMs) is proposed to generate high-quality hard shadows in real-time. The method focuses on addressing the shadow aliasing caused by different sample distribution between light and camera space. Inspired by the fact that such aliasing occurs in the depth-discontinuous regions of shadow map, in DSMs, a sample redistribution mechanism is designed to enlarge the geometric shadow silhouette regions by shrinking the regions that are completely in light or in shadows. Consequently, more texels in the shadow map are covered by the geometric silhouettes, indicating that silhouettes get more samples. The experimental results show that the jagged edges of hard shadows are reduced by the DSMs algorithm.
New iterative ray-traced collision detection algorithm for GPU architectures BIBAFull-Text 215-218
  François Lehericey; Valérie Gouranton; Bruno Arnaldi
We present IRTCD, a novel Iterative Ray-Traced Collision Detection algorithm that exploits spatial and temporal coherency. Our approach uses any existing standard ray-tracing algorithm, and we propose an iterative algorithm that updates the previous time step's results at a lower cost with some approximations. Applied to rigid bodies, our iterative algorithm accelerates collision detection by up to 33 times compared to non-iterative algorithms on GPU.
Video inlays: a system for user-friendly matchmove BIBAFull-Text 219-222
  Dmitry Rudoy; Lihi Zelnik-Manor
Digital editing technology is highly popular as it makes it easy to change photos and add artificial objects to them. Conversely, video editing is still challenging and mainly left to professionals. Even basic video manipulations involve complicated software tools that are typically not adopted by the amateur user. In this paper we propose a system that allows an amateur user to perform a basic matchmove by adding an inlay to a video. Our system does not require any previous experience and relies on simple user interaction. We allow adding 3D objects and volumetric textures to virtually any video. We demonstrate the method's applicability on a variety of videos downloaded from the web.

VR display technologies

Drilling into complex 3D models with Gimlenses BIBAFull-Text 223-230
  Cyprien Pindat; Emmanuel Pietriga; Oliver Chapuis; Claude Puech
Complex 3D virtual scenes such as CAD models of airplanes and representations of the human body are notoriously hard to visualize. Those models are made of many parts, pieces and layers of varying size, that partially occlude or even fully surround one another. We introduce Gimlenses, a multi-view, detail-in-context visualization technique that enables users to navigate complex 3D models by interactively drilling holes into their outer layers to reveal objects that are buried, possibly deep, into the scene. Those holes get constantly adjusted so as to guarantee the visibility of objects of interest from the parent view. Gimlenses can be cascaded and constrained with respect to one another, providing synchronized, complementary viewpoints on the scene. Gimlenses enable users to quickly identify elements of interest, get detailed views of those elements, relate them, and put them in a broader spatial context.
Color correction for optical see-through displays using display color profiles BIBAFull-Text 231-240
  Srikanth Kirshnamachari Sridharan; Juan David Hincapié-Ramos; David R. Flatla; Pourang Irani
In optical see-through displays, light coming from background objects mixes with the light originating from the display, causing what is known as the color blending problem. Color blending negatively affects the usability of such displays as it impacts the legibility and color encodings of digital content. Color correction aims at reducing the impact of color blending by finding an alternative display color which, once mixed with the background, results in the color originally intended.
   In this paper we model color blending based on two distortions induced by the optical see-through display. The render distortion explains how the display renders colors. The material distortion explains how background colors are changed by the display material. We show the render distortion has a higher impact on color blending and propose binned-profiles (BP) -- descriptors of how a display renders colors -- to address it. Results show that color blending predictions using BP have a low error rate -- within nine just noticeable differences (JND) in the worst case. We introduce a color correction algorithm based on predictions using BP and measure its correction capacity. Results show light display colors can be better corrected for all backgrounds. For high intensity backgrounds light colors in the neutral and CyanBlue regions perform better. Finally, we elaborate on the applicability, design and hardware implications of our approach.
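   A simplified additive sketch of the blending and correction idea (ignoring the binned render profile and the non-linear conversions the paper actually uses; the transmission factor is an assumption):

      import numpy as np

      def blended(display_rgb, background_rgb, material_transmission):
          """Color reaching the eye on an optical see-through display: displayed light
          plus the background attenuated by the display material."""
          return np.asarray(display_rgb) + material_transmission * np.asarray(background_rgb)

      def corrected_display(intended_rgb, background_rgb, material_transmission):
          """Display color that, once blended with the background, approximates the intended color.
          The clipping step shows why light colors over bright backgrounds cannot be fully corrected."""
          d = np.asarray(intended_rgb) - material_transmission * np.asarray(background_rgb)
          return np.clip(d, 0.0, 1.0)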
The impact of display bezels on stereoscopic vision for tiled displays BIBAFull-Text 241-250
  Jürgen Grüninger; Jens Krüger
In recent years high-resolution tiled display systems have gained significant attention in scientific and information visualization of large-scale data. Modern tiled display setups are based on either video projectors or LCD screens. While LCD screens are the preferred solution for monoscopic setups, stereoscopic displays almost exclusively consist of some kind of video projection. This is because projections can significantly reduce gaps between tiles, while LCD screens require a bezel around the panel. Projection setups, however, suffer from a number of maintenance issues that are avoided by LCD screens. For example, projector alignment is a very time-consuming task that needs to be repeated at intervals, and different aging states of lamps and filters cause color inconsistencies. The growing availability of inexpensive stereoscopic LCDs for television and gaming allows one to build high-resolution stereoscopic tiled display walls with the same dimensions and resolution as projection systems at a fraction of the cost, while avoiding the aforementioned issues. The only drawback is the increased gap size between tiles.
   In this paper, we investigate the effects of bezels on stereo perception with three surveys and show that smaller LCD bezels and larger displays significantly increase stereo perception on display wall systems. We also show that the bezel color is not very important and that bezels can negatively affect adaptation times to the stereoscopic effect but improve task completion times. Finally, we present guidelines for the setup of tiled stereoscopic display wall systems.
Seamless stitching of stereo images for generating infinite panoramas BIBAFull-Text 251-258
  Tao Yan; Zhe Huang; Rynson W. H. Lau; Yun Xu
A stereo infinite panorama is a panoramic image that may be infinitely extended by continuously stitching together stereo images that depict similar scenes but are taken from different geographic locations. It can be used to create interesting walkthrough environments. An important issue underlying this application is to seamlessly stitch two stereo images together. Although many methods have been proposed for stitching 2D images, they may not work well on stereo images, due to the difficulty in ensuring disparity consistency. In this paper, we propose a novel method to stitch two stereo images seamlessly. We first apply the graph cut algorithm to compute a seam for stitching, with a novel disparity-aware energy function to both ensure disparity continuity and suppress visual artifacts around the seam. We then apply a modified warping-based disparity scaling algorithm to suppress the seam in the depth domain. Experiments show that our stitching method is capable of producing high quality stereo infinite panoramas.
Improved pre-warping for wide angle, head mounted displays BIBAFull-Text 259-262
  Daniel Pohl; Gregory S. Johnson; Timo Bolkart
High-quality head mounted displays are becoming available in the consumer space. These displays provide an immersive gaming experience by filling the wearer's field of view. To achieve immersion with low cost, a commodity display panel is placed a short distance in front of each eye, and wide-angle optics are used to bring the image into focus. However, these optics introduce spatial and chromatic distortion into the image seen by the viewer. As a result, the images to be displayed must be pre-warped to cancel this distortion. This correction can be performed by warping the image in a post-processing step, by warping the scene geometry before rendering, or by modeling corrective optics in the virtual camera.
   Here, we examine the image quality and performance of several correction methods. Though image warping with a bilinear filter is common [Antonov et al. 2013], we find that bicubic filtering yields improved image quality with minimal performance impact. We also propose a new method for correcting chromatic distortion by warping the image using distortion meshes, and we propose a method for correcting spatial and chromatic distortion accurately in-camera.
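   An illustrative radial pre-warp of the kind such optics require (the distortion model and coefficients are assumptions, not taken from the paper; per-channel coefficients give the chromatic correction):

      import numpy as np

      def prewarp(uv, k1, k2):
          """Radially pre-distort normalised image coordinates (origin at the lens centre)
          so that the HMD optics' opposite distortion cancels out. uv has shape (..., 2)."""
          r2 = np.sum(uv ** 2, axis=-1, keepdims=True)
          return uv * (1.0 + k1 * r2 + k2 * r2 ** 2)

      def prewarp_chromatic(uv, coeffs):
          """Apply a separate radial pre-warp per color channel to counter chromatic aberration.
          coeffs: {'r': (k1, k2), 'g': (k1, k2), 'b': (k1, k2)} -- illustrative values only."""
          return {channel: prewarp(uv, k1, k2) for channel, (k1, k2) in coeffs.items()}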
Intercept tags: enhancing intercept-based systems BIBAFull-Text 263-266
  David J. Zielinski; Regis Kopper; Ryan P. McMahan; Wenjie Lu; Silvia Ferrari
In some virtual reality (VR) systems, OpenGL intercept methods are used to capture and render a desktop application's OpenGL calls within an immersive display. These systems often suffer from lower frame rates due to network bandwidth limitations, implementation of the intercept routine, and in some cases, the intercepted application's frame rate. To mitigate these issues and to enhance intercept-based systems in other ways, we present intercept tags, which are OpenGL geometries that are interpreted instead of rendered. We have identified and developed several uses for intercept tags, including hand-off interactions, display techniques, and visual enhancements. To demonstrate the value of intercept tags, we conducted a user study to compare a simple virtual hand technique implemented with and without intercept tags. Our results show that intercept tags significantly improve user performance and experience.

Poster abstracts

Are virtual patients effective to train diagnostic skills?: a study with bulimia nervosa virtual patients BIBAFull-Text 267
  Jose Gutierrez-Maldonado; Marta Ferrer-Garcia
Differential diagnosis is carried out early during the diagnostic interview, and this process requires a series of abilities that must be developed through sound training. The use of virtual reality in interactive simulations with Virtual Patients (VPs) enables students to learn by doing, through first-person experience, without interaction with real patients. VPs are interactive computer simulations of patient encounters used in health care education for learning and assessment. They typically include interactive features for illness history taking, explorations, tests, and features for suggesting diagnosis and treatment plans [Fors, Muntean, Botezatu and Zary, 2009]. VPs have been shown to have a great educational value especially for training clinical reasoning [Cook and Triola, 2009] and differential diagnosis [Peñaloza-Salazar et al. 2011]. The present study tested an application for teaching healthcare professionals the skills required to perform the differential diagnosis of bulimia nervosa.
Cue-elicited craving for food in virtual reality BIBAFull-Text 268
  Marta Ferrer-Garcia; Jose Gutierrez-Maldonado
This study explores the use of virtual reality technology as an alternative to in vivo exposure in cue-exposure therapy for bingeing behavior, and assesses the ability of different virtual environments to elicit craving for food in a non-clinical sample. Previous research has indicated that craving for food can be elicited by exposure to food cues [Ferriday and Brunstrom 2011; Sobik, Hutchinson and Craighead 2005]. Given that craving for food is considered a trigger of bingeing, cue-exposure therapy with response prevention of bingeing may be effective in extinguishing the craving response in patients with eating disorders and obesity. However, the application of the in vivo cue-exposure technique in the therapist's office faces logistical difficulties and is hampered by a lack of ecological validity [Koskina, Campbell and Schmidt 2013]. The use of virtual reality (VR) technology may overcome the difficulties described. Nevertheless, before VR-based cue exposure can be used for therapeutic purposes, the ability of VR scenarios to elicit craving responses in participants must be assessed. This is the objective of the present study.