Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology

Fullname: Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology
Editors: Rynson Lau; Dinesh Manocha; Taku Komura; Aditi Majumder; Weiwei Xu
Location: Edinburgh, Scotland, United Kingdom
Dates: 2014-Nov-11 to 2014-Nov-13
Publisher: ACM
Standard No: ISBN 978-1-4503-3253-8
Papers: 44
Pages: 242
  1. Device and interface
  2. Graphics
  3. Tracking and recognition
  4. Character animation
  5. User study and data analysis
  6. Perception
  7. Poster abstracts

Device and interface

DigiTap: an eyes-free VR/AR symbolic input device, pp. 9-18
  Manuel Prätorius; Dimitar Valkov; Ulrich Burgbacher; Klaus Hinrichs
In this paper we present DigiTap -- a wrist-worn device specially designed for symbolic input in virtual and augmented reality (VR/AR) environments. DigiTap is able to robustly sense thumb-to-finger taps on the four fingertips and the eight minor knuckles. These taps are detected by an accelerometer, which triggers the capture of an image sequence with a small wrist-mounted camera. The tap position is then extracted from the images with low computational effort by an image processing pipeline. The device is thus very energy efficient and could potentially be integrated into a smartwatch-like device, allowing unobtrusive, always-available, eyes-free input. To demonstrate the feasibility of our approach, we conducted an initial user study with our prototype device. The study evaluated the suitability of the twelve tapping locations and identified the most prominent sources of error. Our prototype system correctly classified 92% of the input locations.
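   A minimal sketch of the capture-on-tap loop described above, assuming hypothetical accelerometer and camera interfaces and a hypothetical classify_tap_location function; the threshold and burst length are illustrative guesses, not the authors' values:

      ACCEL_THRESHOLD_G = 2.5  # assumed trigger level, in g

      def sense_taps(accelerometer, camera, classify_tap_location):
          """Yield tap locations; the camera runs only after a tap spike."""
          while True:
              sample = accelerometer.read()        # latest magnitude, in g
              if abs(sample) < ACCEL_THRESHOLD_G:
                  continue                         # no tap: camera stays idle
              burst = [camera.capture() for _ in range(5)]  # short sequence
              location = classify_tap_location(burst)  # 0..11, or None
              if location is not None:
                  yield location   # one of 4 fingertips or 8 minor knuckles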
Robust 6-DOF immersive navigation using commodity hardware, pp. 19-22
  L. Carozza; F. Bosché; M. Abdel-Wahab
In this paper we present a novel visual-inertial 6-DOF localization approach that can be directly integrated into a wearable immersive system for simulation and training. In this context, while CAVE environments typically require a complex and expensive set-up, our approach relies on visual and inertial information provided by commodity hardware, i.e. a consumer monocular camera and an Inertial Measurement Unit (IMU).
   We propose a novel robust pipeline based on state-of-the-art image-based localization and sensor fusion approaches. A loosely-coupled sensor fusion approach, which makes use of robust orientation information from the IMU, is employed to cope with failures in visual tracking (e.g. due to fast camera motion) and to limit motion jitter. Fast and smooth re-localization recovers the position after visual tracking outages and guarantees continued operation. The 6-DOF information is then used to render VR content consistently on a stereoscopic HMD. The proposed system, demonstrated in the context of construction, runs at 30 fps on a standard PC and requires a very limited set-up for its intended application.
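   The loosely-coupled fallback logic can be illustrated with a short sketch (all interfaces here are hypothetical, and the blending factor is an assumption, not the authors' tuning):

      def fuse_pose(visual_localizer, imu, last_position):
          """Return (position, orientation) for the current frame."""
          orientation = imu.orientation()     # robust even under fast motion
          fix = visual_localizer.try_localize()  # position, or None on failure
          if fix is None:
              return last_position, orientation  # hold position, trust IMU
          alpha = 0.8                          # damp jitter on re-lock
          position = [alpha * f + (1 - alpha) * p
                      for f, p in zip(fix, last_position)]
          return position, orientation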
Navigating immersive virtual environments through a foot controller, pp. 23-26
  Marcello Carrozzino; Giovanni Avveduto; Franco Tecchia; Pavel Gurevich; Benjamin Cohen
This paper introduces our ongoing work on the Foot Controller Device as an interface for navigation in Immersive Virtual Environments. The Foot Controller is a special, Arduino-based control mat equipped with an array of pressure sensors, able to function as a touch surface for the feet. What makes it special is that sensor cells can be accessed individually, allowing for sophisticated control algorithms based on pressure distribution. The low-level software provides a set of recognition features which can be used to implement intuitive navigation metaphors. In this paper we introduce the device and its operating modes, present its use as a navigation interface, and discuss the results of a preliminary pilot user study aimed at evaluating its usability.
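   As an example of what per-cell access enables, a centre-of-pressure computation over the sensor grid (a sketch; the grid layout is an assumption and this is not necessarily among the mat's actual recognition features):

      import numpy as np

      def centre_of_pressure(cells):
          """cells: 2-D array of per-cell pressure readings."""
          total = cells.sum()
          if total == 0:
              return None                      # nobody standing on the mat
          ys, xs = np.indices(cells.shape)
          return ((xs * cells).sum() / total,  # x of pressure centroid
                  (ys * cells).sum() / total)  # y of pressure centroid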
AnyHaptics: a haptic plug-in for existing interactive 3D graphics applications, pp. 27-30
  Deok-Jae Song; Jinah Park
In this paper, we present a haptic plug-in system that operates on existing, already-developed interactive 3D graphics applications. Many high-quality graphics applications are developed without haptic features in mind; our system adds haptic interaction in a simple manner to enhance the experience. The proposed system consists of two separate modules: one captures the depth map from the graphics pipeline of the target application, while the other calculates the haptic feedback force based on the captured depth map. We address efficient and effective ways of capturing and transferring depth map data for haptic rendering and present a working solution.

Graphics

Multiphase surface tracking with explicit contouring, pp. 31-40
  Xiaosheng Li; Xiaowei He; Xuehui Liu; Baoquan Liu; Enhua Wu
We introduce a novel framework for tracking multiphase interfaces with an explicit contouring technique. In our framework, an unsigned distance function and an additional indicator function are used to represent the multiphase system. Our method maintains the explicit polygonal meshes that define the multiphase interfaces. At each step, the distance and indicator functions are updated via semi-Lagrangian path tracing from the meshes of the previous step. Interface surfaces are then reconstructed by polygonization procedures with precomputed stencils and further smoothed with a feature-preserving non-manifold smoothing algorithm to maintain mesh quality. Our method is easy to implement and to incorporate into multiphase simulations such as immiscible fluids, crystal grain growth and geometric flows. We demonstrate our method with several level set tests, including advection and propagation, and couple it to existing fluid simulators. The results show that our approach is stable, flexible, and effective for tracking multiphase interfaces.
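   The semi-Lagrangian update at the core of the method can be sketched in one dimension (grid layout, boundary handling and interpolation order are assumptions for illustration, not the paper's discretization):

      import numpy as np

      def semi_lagrangian_step(phi, velocity, dt, dx):
          """Advect a sampled field phi by tracing characteristics back."""
          n = phi.shape[0]
          x = np.arange(n) * dx                # grid sample positions
          departure = x - velocity * dt        # backward particle trace
          idx = np.clip(departure / dx, 0, n - 1)
          lo = np.floor(idx).astype(int)
          hi = np.minimum(lo + 1, n - 1)
          w = idx - lo                         # linear interpolation weight
          return (1 - w) * phi[lo] + w * phi[hi]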
Model topology change with correspondence using electrostatics, pp. 41-44
  Peter Sandilands; Taku Komura
This paper introduces a method for finding a dense correspondence between objects of varying topology or connectivity by using a proxy genus-zero mesh together with the technique of Blended Intrinsic Maps. Harmonic space parameterisation is used to create a closed, genus-zero shape that approximates the geometry of the original object. This allows noisy or topologically different representations of objects to be mapped to one another, with seams in the mapping falling in generally hidden concave areas and tunnels. The paper presents example mappings between objects with and without holes, as well as objects that consist of a number of disconnected segments.
Robust random dot markers: towards augmented unprepared maps with pure geographic features, pp. 45-54
  Liming Yang; Jean-Marie Normand; Guillaume Moreau
Augmented maps have many important applications. However, no mature registration method exists to associate unprepared maps with a Geographical Information System (GIS) database, which would allow simulation results or route displays to be superimposed on a paper map. In this paper, we propose a method called Robust Random Dot Markers (RRDM) that can robustly track coplanar random dot patterns and can be used to address this problem. RRDM is based on the same idea as the Random Dot Markers (RDM) of [Uchiyama and Saito 2011] and can serve as a fiducial marker as well as a texture-independent "natural marker". We conduct a series of experiments and show that RRDM is more robust than RDM in terms of jitter, perspective distortion, and under- and over-detection of dots in the pattern. As an example of a "natural marker", we show that RRDM can successfully register unprepared printed maps using only pure geographic features, i.e. road intersection coordinates, which we retrieve from a GIS. Our method does not suffer from the drawbacks of traditional "feature-point" based registration methods, which rely mainly on textures; textures may change from one map to another.
Third person view and guidance for more natural motor behaviour in immersive basketball playing, pp. 55-64
  Alexandra Covaci; Anne-Hélène Olivier; Franck Multon
The use of Virtual Reality (VR) in sports training is now widely studied with the perspective of transferring motor skills learned in virtual environments (VEs) to real practice. However, precision motor tasks that require high accuracy have rarely been studied in the context of VEs, especially on Large Screen Image Display (LSID) platforms. An example of such a motor task is the basketball free throw, where the player has to throw a ball into a 46 cm wide basket placed 4.2 m away. In order to determine the best VE training conditions for this type of skill, we proposed and compared three training paradigms combining different user perspectives, first-person (1PP) and third-person (3PP), with and without visual guidance. We analysed the performance of eleven amateur subjects who performed series of free throws in a real and in an immersive 1:1 scale environment under the proposed conditions. The results show that ball speed at the moment of release in 1PP was significantly lower than in the real world, supporting the hypothesis that distance is underestimated in large screen VEs. However, ball speed in the 3PP condition was closer to the real condition, especially when combined with guidance feedback. Moreover, when guidance information was provided, the subjects released the ball at a higher, closer-to-optimal position (5-7% higher compared to no-guidance conditions). These findings contribute to a better understanding of the impact of visual feedback on the motor performance of users who wish to train motor skills using immersive environments, and can be used by exergame designers who wish to develop coaching systems that transfer motor skills learned in VEs to real practice.

Tracking and recognition

A hand posture recognition system utilizing frequency difference of infrared light, pp. 65-68
  Soonchan Park; Moonwook Ryu; Ju Yong Chang; Jiyoung Park
Hand gestures are among the most effective means of interaction between humans, and between humans and computers. However, currently available depth cameras do not provide sufficient resolution and precision to recognize hand postures at a distance (>2 m). Previous research has tried to overcome this limitation by combining depth and color information, but performance remained unstable because color information is inherently affected by visible-light conditions. In this paper, we introduce a hardware system and an algorithm that recognize the hand postures of a distant user while guaranteeing performance even in the dark. Specifically, by utilizing infrared (IR) lights and their frequency difference, our system simultaneously gathers a depth map from a Kinect and a high-resolution IR image of the scene from an additional IR camera without any interference. The system analyzes the IR image of the hand using histograms of oriented gradients and a support vector machine. In addition, the recognition system compensates for the hand-position estimation errors unavoidable in any hand detection algorithm. In experiments on real-time data, the proposed system classified seven different hand postures with an average precision of 92.17%, and precision was maintained in the dark (<5 lux) at an average of 93.28%.
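   The recognition stage maps naturally onto standard components; a sketch with scikit-image and scikit-learn (crop size, HOG parameters and kernel choice are assumptions, not the authors' configuration):

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import SVC

      def hog_features(ir_hand_crops):
          """One HOG descriptor per cropped IR hand image."""
          return np.array([hog(img, orientations=9,
                               pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2))
                           for img in ir_hand_crops])

      clf = SVC(kernel="rbf")
      # clf.fit(hog_features(train_crops), train_labels)  # 7 posture labels
      # posture = clf.predict(hog_features([test_crop]))[0]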
Illumination independent marker tracking using cross-ratio invariance, pp. 69-72
  Vincent Agnus; Stéphane Nicolau; Luc Soler
Marker tracking is used in numerous applications. Depending on the context and its constraints, tracking accuracy can be a crucial component of the application. In this paper, we first highlight that tracking accuracy depends on illumination, which is not controlled in most applications. In particular, we show how corner detection can shift by several pixels when the light power or background context changes, even if the camera and the marker are static in the scene. We then propose a method, based on cross-ratio invariance, that re-estimates the extracted corners so that the cross ratio computed from them matches that of the marker model. Finally, we show on real data that our approach improves tracking accuracy, particularly along the camera depth axis, by up to several millimeters depending on the marker depth.
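   The invariant itself is simple: for four collinear points, the cross ratio is unchanged by any projective transformation, so the value computed from detected corners can be checked against the marker model. A one-dimensional sketch:

      def cross_ratio(a, b, c, d):
          """Cross ratio (AC * BD) / (BC * AD) of collinear coordinates."""
          return ((c - a) * (d - b)) / ((c - b) * (d - a))

   If the detected corners' cross ratio deviates from the model's value, the corner positions are re-estimated until the two agree.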
I'm in VR!: using your own hands in a fully immersive MR system, pp. 73-76
  Franco Tecchia; Giovanni Avveduto; Raffaello Brondi; Marcello Carrozzino; Massimo Bergamasco; Leila Alem
This paper presents a novel fully immersive Mixed Reality system that we have recently developed, in which the user walks freely in a life-size virtual scenario wearing an HMD and can see and use her/his own body when interacting with objects. This form of natural interaction is possible because the user's hands and body are captured in real time by an RGBD camera mounted on the HMD. The system thus has, in real time, a textured geometric mesh of the hands and body (as seen from the user's own perspective) that can be rendered like any other polygonal model in the scene. Our hypothesis is that presenting users with an egocentric view of the virtual environment "populated" by their own bodies develops a very strong feeling of presence.
Accelerating vision-based 3D indoor localization by distributing image processing over space and time, pp. 77-86
  Doohee Yun; Hyunseok Chang; T. V. Lakshman
In a vision-based 3D indoor localization system, localizing the user's device at a high frame rate is important for supporting real-time augmented reality applications. However, vision-based 3D localization typically involves 2D keypoint detection and 2D-3D matching, which are in general too computationally intensive to be carried out at a high frame rate (e.g., 30 fps) on commodity hardware such as laptops or smartphones. In order to reduce the per-frame computation time for 3D localization, we present a new method that distributes the required computation over space and time by splitting the video frame region into multiple sub-blocks and processing only one sub-block, in a rotating sequence, at each video frame. The proposed method is general enough to be applied to any keypoint detection and 2D-3D matching scheme. We apply the method in a prototype 3D indoor localization system and evaluate its performance in a 120 m long indoor hallway environment using 5,200 video frames of 640x480 (VGA) resolution and a commodity laptop. When SIFT-based keypoint detection is used, our method reduces the average and maximum computation time per frame by factors of 10 and 7 respectively, with a marginal increase in positioning error (e.g., 0.17 m). This improvement raises the frame processing rate from 3.2 fps to 23.3 fps.
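   A sketch of the rotating sub-block schedule (the block grid and the detector are placeholders; the actual split and detector are whatever the localization pipeline uses):

      def process_stream(frames, detect_keypoints, blocks_x=4, blocks_y=2):
          """Run keypoint detection on one sub-block per incoming frame."""
          latest = {}                          # (bx, by) -> keypoints
          for i, frame in enumerate(frames):
              h, w = frame.shape[:2]
              bx = i % blocks_x                # rotate through the grid
              by = (i // blocks_x) % blocks_y
              x0, y0 = bx * w // blocks_x, by * h // blocks_y
              sub = frame[y0:y0 + h // blocks_y, x0:x0 + w // blocks_x]
              latest[(bx, by)] = detect_keypoints(sub, offset=(x0, y0))
              yield latest                     # accumulated keypoint map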
User-perspective augmented reality magic lens from gradients, pp. 87-96
  Domagoj Baricevic; Tobias Höllerer; Pradeep Sen; Matthew Turk
In this paper we present a new approach to creating a geometrically-correct user-perspective magic lens and a prototype device implementing the approach. Our prototype uses just standard color cameras, with no active depth sensing. We achieve this by pairing a recent gradient domain image-based rendering method with a novel semi-dense stereo matching algorithm inspired by PatchMatch. Our stereo algorithm is simple but fast and accurate within its search area. The resulting system is a real-time magic lens that displays the correct user perspective with a high-quality rendering, despite the lack of a dense disparity map.

Character animation

A multi-resolution approach for adapting close character interaction, pp. 97-106
  Edmond S. L. Ho; He Wang; Taku Komura
Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multi-resolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing back the movements at run-time. Our approach is highly parallelizable and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications, including motion editing systems for animators and motion retargeting systems for humanoid robots.
Data-driven sequential goal selection model for multi-agent simulation, pp. 107-116
  Wenxi Liu; Zhe Huang; Rynson W. H. Lau; Dinesh Manocha
With recent advances in distributed virtual worlds, online users have access to larger and more immersive virtual environments. However, the number of users in a virtual world is sometimes not large enough to make it feel realistic. In this paper, we present a crowd simulation algorithm that allows a large number of virtual agents to navigate the virtual world autonomously by sequentially selecting goals. Our approach is based on a sequential goal selection model (SGS) that can learn goal-selection patterns from synthetic sequences. We demonstrate our algorithm's simulation results in complex scenarios containing more than 20 goals.
Posture reconstruction using Kinect with a probabilistic model, pp. 117-125
  Liuyang Zhou; Zhiguang Liu; Howard Leung; Hubert P. H. Shum
Depth-image-based 3D posture estimation hardware such as the Kinect has made interactive applications more popular. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and the self-occluding actions performed by the user. While previous research has shown that data-driven methods can be used to reconstruct correct postures, they usually require a large posture database, which greatly limits their usability on systems with constrained hardware such as game consoles. To solve this problem, we present a new probabilistic framework to enhance the accuracy of postures captured live by the Kinect. We adopt a Gaussian Process model as a prior to leverage position data obtained from the Kinect and from a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the observed Kinect input when its tracking is good, we embed joint reliability into the optimization framework. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion, which is beneficial for real-time posture-based applications such as motion-based gaming and sport training.
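   The optimization the abstract outlines combines a reliability-weighted data term with a temporal-consistency term; a sketch of such a per-frame cost (joint layout and weighting are assumptions, and the Gaussian Process prior is elided):

      import numpy as np

      def posture_cost(x, kinect_obs, reliability, prev, prev2, w_t=0.5):
          """x, kinect_obs, prev, prev2: (joints, 3); reliability: (joints,)."""
          data = np.sum(reliability * np.sum((x - kinect_obs) ** 2, axis=1))
          accel = (x - prev) - (prev - prev2)  # change in joint velocity
          return data + w_t * np.sum(accel ** 2)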
Towards real-time credible and scalable agent-based simulations of autonomous pedestrians navigation, pp. 127-136
  Patrick Simo Kanmeugne; Aurélie Beynier
In this paper, we focus on real-time simulation of autonomous pedestrian navigation. We introduce a Macroscopic-Influenced Microscopic (MIM) approach which aims at reducing the gap between microscopic and macroscopic approaches by providing credible walking paths for a potentially highly congested crowd of autonomous pedestrians. Our approach originates from a least-effort formulation of the navigation task, which allows us to consistently account for congestion at every level of decision. We use the multi-agent paradigm and describe pedestrians as autonomous, situated agents who plan dynamically for energy-efficient paths and interact with each other through the environment. The navigable space is considered as a set of contiguous resources that agents use to build their paths. We emulate the dynamic path computation for each agent with an evolutionary search algorithm, specially designed to be executed in real time, individually and autonomously. We have compared an implementation of our approach with the ORCA model on low-density and high-density scenarios, and obtained promising results in terms of credibility and scalability. We believe that the ORCA model and other microscopic models could easily be extended to embrace our approach, thus providing richer simulations of potentially highly congested crowds of autonomous pedestrians.

User study and data analysis

Simulator sickness and presence using HMDs: comparing use of a game controller and a position estimation system, pp. 137-140
  Gerard Llorach; Alun Evans; Josep Blat
Consumer-grade head-mounted displays (HMDs) such as the Oculus Rift have recently become increasingly available for Virtual Reality. Their high degree of immersion and presence usually provokes amazement when first experienced. Nevertheless, HMDs have also been reported to cause adverse reactions such as simulator sickness. As their impact grows, it is important to understand such side effects. This paper presents the results of a relatively large-scale user experiment comparing navigation with a conventional game controller against positioning in the virtual world based on the signal of the Oculus Rift DK1's internal Inertial Measurement Unit (IMU). We show that simulator sickness is significantly reduced when using the position estimation system rather than the more traditional game controller for navigation. However, the sense of presence was not enhanced by the possibility of 'real walking'. We also show the impact of other factors, such as prior experience or motion history, and discuss the results.
Desktop virtual reality for emergency preparedness: user evaluation of an aircraft ditching experience under different fear arousal conditions, pp. 141-150
  Luca Chittaro; Fabio Buttussi; Nicola Zangrando
Virtual Reality (VR), in the form of 3D interactive simulations of emergency scenarios, is increasingly used for emergency preparedness training. This paper advances knowledge about different aspects of such virtual emergency experiences, showing that: (i) the designs we propose are effective in improving the emergency preparedness of ordinary citizens, with aviation safety as a relevant case study; (ii) changing specific visual and auditory features is effective in creating emotionally different versions of the same experience, increasing the level of fear aroused in users; and (iii) the protection-motivation role of fear highlighted by psychological studies of traditional media applies to desktop VR too.
Profiling and benchmarking event- and message-passing-based asynchronous realtime interactive systems, pp. 151-159
  Stephan Rehfeld; Henrik Tramberend; Marc Erich Latoschik
This article describes a set of metrics for message-passing-based asynchronous Realtime Interactive Systems (RIS). Current trends in concurrent RISs are analyzed, several profiling tools are outlined, and common metrics are identified. A set of nine metrics is presented in a unified and formalized way. The implementation of a profiler that measures and calculates these metrics is illustrated, and the implementations of an instrumentation tool and a visualization tool are described. A case study shows how this approach proved beneficial during the latency optimization of an actual system.
Performance improvement using data tags for handheld spatial augmented reality, pp. 161-165
  Andrew Irlitti; Stewart Von Itzstein; Ross T. Smith; Bruce H. Thomas
Mobile devices such as some recent phones are now fitted with projection capabilities that support Spatial Augmented Reality (SAR), and these require investigation to uncover new interaction possibilities. This paper presents a study measuring user performance in a search-and-select task using a tracked handheld projector and data tags, a 3D physical cue used to mark the location of hidden SAR information. The experiment required participants to search for virtual symbols presented on two 5 ft, multi-sided control panels. Two methods of presenting AR information were employed: SAR alone, and SAR with the inclusion of physical cues indicating the location of the information. The results showed that attaching data tags, compared to virtual content alone, lowered the overall task completion time and reduced handheld projector movement. Subjectively, participants also preferred the combination of virtual data with data tags across both task variations.
A usability scale for handheld augmented reality, pp. 167-176
  Marc Ericson C. Santos; Takafumi Taketomi; Christian Sandor; Jarkko Polvi; Goshiro Yamamoto; Hirokazu Kato
Handheld augmented reality (HAR) applications must be carefully designed and improved based on user feedback to sustain commercial use. However, no standard questionnaire considers perceptual and ergonomic issues found in HAR. We address this issue by creating a HAR Usability Scale (HARUS).
   To create HARUS, we performed a systematic literature review to enumerate user-reported issues in HAR applications. Based on these issues, we created a questionnaire measuring manipulability -- the ease of handling the HAR system -- and comprehensibility -- the ease of understanding the information presented by HAR. We then provide evidence of the validity and reliability of the HARUS questionnaire by applying it in three experiments. The results show that HARUS consistently correlates with other subjective and objective measures of usability, supporting its concurrent validity. Moreover, HARUS obtained a good Cronbach's alpha in all three experiments, demonstrating internal consistency.
   HARUS, along with its decomposition into individual manipulability and comprehensibility scores, is an evaluation tool that researchers and professionals can use to analyze their HAR applications. With such a tool, they can gain quality feedback from users to improve their HAR applications towards commercial success.
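   Cronbach's alpha, the internal-consistency statistic reported for HARUS, is straightforward to compute from a respondents-by-items score matrix:

      import numpy as np

      def cronbach_alpha(scores):
          """scores: rows are respondents, columns are questionnaire items."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                        # number of items
          item_variance = scores.var(axis=0, ddof=1).sum()
          total_variance = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_variance / total_variance)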

Perception

Threefolded motion perception during immersive walkthroughs, pp. 177-185
  Gerd Bruder; Frank Steinicke
Locomotion is one of the most fundamental processes in the real world, and its consideration in immersive virtual environments (IVEs) is of major importance for many application domains requiring immersive walkthroughs. From a simple physics perspective, such self-motion can be defined by three components: speed, distance, and time. Determining motions in the frame of reference of a human observer imposes a significant challenge on the perceptual processes in the human brain, and the resulting speed, distance, and time percepts are not always veridical. In previous work in the area of IVEs, these components were evaluated in separate experiments, i.e., using largely different hardware, software and protocols.
   In this paper we analyze the perception of the three components of locomotion during immersive walkthroughs using the same setup and similar protocols. We conducted experiments in an Oculus Rift head-mounted display (HMD) environment which showed that subjects largely underestimated virtual distances, slightly underestimated virtual speed, and slightly overestimated elapsed time.
The influence of step frequency on the range of perceptually natural visual walking speeds during walking-in-place and treadmill locomotion, pp. 187-190
  Niels Christian Nilsson; Stefania Serafin; Rolf Nordahl
Walking-In-Place (WIP) techniques make relatively natural walking experiences possible within immersive virtual environments when the physical interaction space is limited in size. In order to facilitate such experiences, it is necessary to establish a natural connection between steps in place and virtual walking speeds. This paper details a study investigating the effects of movement type (treadmill walking and WIP) and step frequency (1.4, 1.8 and 2.2 steps per second) on the range of perceptually natural visual walking speeds. The results suggest statistically significant main effects of both movement type and step frequency, but no significant interaction between the two variables.
Displaying shapes with various types of surfaces using visuo-haptic interaction, pp. 191-196
  Yuki Ban; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
In this paper, we propose a visuo-haptic system for displaying shapes with curved, edged, and inclined surfaces, using a simple transmutative physical device and the effects of visuo-haptic interaction. We aim to construct a perception-based shape display system that provides users with the sensation of touching virtual objects of varying shapes using only a simple mechanism. We have previously confirmed that the perception of primitive shape features such as curvature and angle can be modified by displacing the image of the user's hand on the monitor, as if s/he were touching the displayed shape while actually touching a different one. In this study, we constructed a method that merges these findings to display a greater variety of shapes, including angular ones. We built a transmutative device for the user to touch; it does not undergo significant transformation, but its surface can be bumped slightly in and out, displaying shapes with various angles, lengths and curvatures. Experimental trials confirmed that our method for displaying each primitive shape also works as designed when the findings are combined to display more complex objects using this slightly transforming device.
In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation, pp. 197-205
  Steffen Gauglitz; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer
Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.

Poster abstracts

A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality, pp. 207-208
  Ali Samini; Karljohan Lundin Palmerius
Video see-through Augmented Reality (V-AR) displays a video feed overlaid with information co-registered with the displayed objects. In this paper we consider the type of V-AR that is based on a hand-held device with a fixed camera. In most V-AR applications, the view displayed on the screen is completely determined by the orientation of the camera, i.e., device-perspective rendering: the screen displays what the camera sees. The alternative is to use the relative pose of the user's view and the camera, i.e., user-perspective rendering. In this paper we present an approach to user-perspective V-AR using 3D projective geometry. The view is adjusted to the user's perspective and rendered on the screen, making it an augmented window. We created and tested a running prototype based on our method.
A portable interface for tangible exploration of volumetric data, pp. 209-210
  Paul Issartel; Florimond Guéniat; Mehdi Ammi
Exploration of volumetric data is an essential task in many scientific fields. However, the use of standard devices, such as the 2D mouse, leads to suboptimal interaction mappings. Several VR systems provide better interaction capabilities, but they remain dedicated and expensive solutions. In this work, we propose an interface that combines tangible tools and a handheld device. This configuration allows natural, full 6-DOF interaction in a convenient, fully portable and affordable system. This paper presents our design choices for the interface and the associated tangible exploration techniques.
A projection-based mixed-reality display for exterior and interior of a building diorama, pp. 211-212
  Ming Zhang; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta
This paper proposes an interactive display system that shows both the exterior and the interior structure of a building diorama using a projection-based Mixed-Reality (MR) technique, which is useful for understanding complex constructions and the spatial relationships between outside and inside. Users can hold and move the diorama model with their hands and body motion, observing the model from their preferred viewpoint. Our system obtains both the user's information (viewpoint and gesture) and the diorama model's information (pose) in 3D space using two RGB-D cameras. A CG image corresponding to the user's viewpoint and gesture and the pose of the diorama is rendered by a dual-rendering algorithm in real time. As a result, the generated CG image is projected onto the diorama to realize the MR display. We confirm the effectiveness of our proposed method with a pilot system.
A view from the hill: where cross reality meets virtual worlds, p. 213
  C. J. Davies; Alan Miller; Colin Allison
We present the cross reality [Lifton 2007] system 'Mirrorshades', which enables a user to be present and aware of both a virtual reality environment and the real world at the same time. In so doing, it addresses the challenge of the vacancy problem by lightening the cognitive load needed to switch between realities and to navigate the virtual environment. We present a case study in the context of a cultural heritage application, wherein users are able to compare a reconstruction of an important 15th-century chapel with its present-day instantiation whilst walking through them.
Braiding hair by braid theory, pp. 215-216
  Gaoxiang Zeng; Taku Komura
In this paper, we propose a system based on braid theory that helps users generate customized hair braids, a function lacking in most existing hair design software. Our user interface for braid design is built upon braid theory, a subarea of knot theory in mathematics. The user designs braid patterns using braid indices, and specifies the amount of hair for each braid as well as the area of the head over which the braid is to be made. The system then automatically braids the hair of the character and generates a realistic image of the designed hair style. Theoretically, our system can produce arbitrary kinds of braids. It can also judge whether two braids are equivalent by making use of the transition rules of braid indices, which helps register designed braids in the database. The system is implemented as a Maya plugin and can be used in combination with various functions, including physical simulation, hair rendering and hair texturing. Our user study shows that our toolkit is easy to use for novice as well as experienced users.
Cost based estimation of intended locomotion targets using human locomotion models, pp. 217-218
  Markus Zank; Andreas Kunz
This paper presents a novel approach to estimating a person's intended locomotion target. The estimation is based on models of human locomotion and on the finding that locomotion is planned such that the movement minimizes a certain cost function. Using such a cost function, we calculate the expected cost of a path to each of a number of possible targets, and estimate the person's intended target from changes in these expected costs relative to the cost of the path walked so far.
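   A sketch of the idea under simplifying assumptions (a caller-supplied path_cost stands in for the paper's locomotion cost model, and the remaining cost is approximated by a direct path):

      def most_plausible_target(observed_path, targets, path_cost):
          """Score targets by the detour the walk so far implies for each."""
          start, current = observed_path[0], observed_path[-1]
          spent = path_cost(observed_path)
          def detour(t):
              # near zero when the observed path heads efficiently for t
              return spent + path_cost([current, t]) - path_cost([start, t])
          return min(targets, key=detour)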
Dissection of hybrid soft tissue models using position-based dynamics, pp. 219-220
  Junjun Pan; Junxuan Bai; Xin Zhao; Aimin Hao; Hong Qin
This paper describes an interactive dissection approach for hybrid soft tissue models governed by position-based dynamics. Our framework makes use of a hybrid geometric model comprising both surface and volumetric meshes. A fine triangular surface mesh represents the exterior structure of the soft tissue models, while the interior structure is constructed from coarser tetrahedral meshes, which also serve as the physical models in the dynamic simulation. The reduced interior detail effectively lowers the computational cost of deformation and geometric subdivision during dissection. For physical deformation, we design and implement a position-based dynamics approach that supports topology modification and enforces a volume-preserving constraint. Experimental results show that this hybrid dissection method affords real-time, robust cutting simulation without sacrificing realistic visual performance.
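   Position-based dynamics advances particles and then projects constraints directly on positions; a single distance-constraint projection shows the basic project-and-correct pattern that a volume-preserving constraint also follows (masses and stiffness here are illustrative, not the paper's values):

      import numpy as np

      def project_distance(p1, p2, rest_length, w1=1.0, w2=1.0, k=1.0):
          """Move two particles so their separation approaches rest_length."""
          d = p2 - p1
          dist = np.linalg.norm(d)
          if dist < 1e-9:
              return p1, p2                       # degenerate; leave as-is
          corr = k * (dist - rest_length) * d / dist
          p1 = p1 + (w1 / (w1 + w2)) * corr       # w1, w2: inverse masses
          p2 = p2 - (w2 / (w1 + w2)) * corr
          return p1, p2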
Dual sensor filtering for robust tracking of head-mounted displays, pp. 221-222
  Nicholas T. Swafford; Bastiaan J. Boom; Kartic Subr; David Sinclair; Darren Cosker; Kenny Mitchell
We present a low-cost solution for yaw drift in head-mounted display systems that performs better than current commercial solutions and provides a wide capture area for pose tracking. Our method applies an extended Kalman filter to combine marker tracking data from an overhead camera with onboard head-mounted display accelerometer readings. To achieve low latency, we accelerate marker tracking with color blob localisation and perform this computation on the camera server, which only transmits essential pose data over WiFi for an unencumbered virtual reality system.
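   The paper fuses the two sources with an extended Kalman filter; the drift-correction principle can be shown more simply with a complementary filter that pulls the fast onboard yaw estimate toward the slower, absolute camera estimate (gain and interfaces are assumptions):

      def fused_yaw(yaw, onboard_rate, dt, camera_yaw=None, gain=0.02):
          """Integrate the onboard rate; correct with marker tracking."""
          yaw += onboard_rate * dt         # fast update, accumulates drift
          if camera_yaw is not None:       # overhead marker fix available
              yaw += gain * (camera_yaw - yaw)  # slow pull, cancels drift
          return yaw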
ExpanD: a stereoscopic expanding technique for compound graphs, pp. 223-224
  Ragaad AlTarawneh; Shah Rukh Humayoun; Achim Ebert
In this work, we present a new technique, called ExpanD, for expanding compound nodes on stereoscopic platforms. Stereoscopic depth is used to encode the structural relationships in compound graphs via the containment representation, while explicit edges represent the adjacency relations between graph nodes. Parent-child relations are shown by rendering each child level in a plane closer to the viewer than its parent node. The technique provides a novel interactive operation for expanding or contracting nodes in order to align graph nodes in 3D space with minimal occlusion. Different visual cues are used together to encode other data aspects, e.g., color for node status, shape for node type, and stereoscopic depth for the hierarchical relations between nodes.
FingerOscillation: clutch-free techniques for 3D object translation, rotation and scale, pp. 225-226
  Siju Wu; Amine Chellali; Samir Otmane
In this paper, we present three freehand interaction techniques for 3D content manipulation: FingerShake for object translation, FingerRotate for object rotation, and FingerSwing for object scaling. These three techniques instantiate a more generic concept which we call FingerOscillation. The main contribution is interacting with the machine through finger oscillation movements. We introduce the design and implementation of these techniques.
FishEyA: live broadcasting around 360 degrees, pp. 227-228
  Enrique Canessa; Livio Tenze
Our project aims to build a low-cost prototype system for cognitive studies around live 360-degree vision. The final goal is an original broadcasting channel that can transmit and cover, in real time, a panoramic view at a distance and with minimal computation. The first phase of the project, named FishEyA, is to develop software optimized to run on mini-computers like the Raspberry Pi, with a light GUI to easily configure the 360° visual field and activate the streaming signal.
Natural 7DoF navigation & interaction in 3D geovisualisations, pp. 229-230
  Simon Stannus; Arko Lucieer; Wai-Tat Fu
Geospatial data is becoming increasingly important for many uses, from expert scientific domains to social media interaction. Much of this data is three-dimensional in nature, and in recent years virtual globe software like Google Earth has made visualising such data much easier. However, there has not been an equivalent improvement in the way users interact with the visualisations. Recognising that navigating a virtual world encompassing data at scales across many orders of magnitude requires a 7-dimensional representation of the view state, we have implemented our AeroSpace 7DoF interaction method [Stannus et al. 2014] in a geovisualisation testbed built around NASA's World Wind virtual globe software. Our approach expands the two-point touch method [Hancock et al.] to 3D, with pinches replacing touches. It surpasses previous two-point work [Wang et al. 2011] [Song et al.] [Schultheis et al. 2012] by allowing direct, bimanual, integral, simultaneous navigation with index-finger-moderated roll around the ambiguous axis, though the virtual hands were offset indirectly to mitigate occlusion. Our system was implemented in an immersive environment with three large stereo screens for display and an ARTrack3 system for tracking the user's head (via retroreflective markers on stereoscopic glasses) and hands. Custom gloves were used to report pinch and pointing gesture contact states via Bluetooth and to provide 5DoF tracking via markers on the index fingers.
On the benefits of stereo graphics in virtual obstacle avoidance tasks, pp. 231-232
  J. Andreas Bærentzen; Rasmus Stenholt
In virtual reality, stereo graphics is a very common way of increasing the level of perceptual realism in the visual part of the experience. However, stereo graphics comes at a cost, both in technical terms and from a user perspective. In this paper, we present the preliminary results of an experiment testing whether stereo makes any quantifiable, statistically significant difference in the ability to avoid collisions with virtual obstacles while navigating a 3-D space under constant acceleration. Our results indicate that, for this particular application scenario, stereo does provide a significant benefit in terms of the amount of time that participants were able to avoid obstacles.
Optimum design of haptic seat for driving simulator, pp. 233-234
  Osama Halabi; Mariam Ba Hameish; Latefa Al-Naimi; Amna Al-Kaabi
This work aims to design and develop an optimal vibrotactile seat that provides a high level of satisfaction to the driver. The main critical design parameters were considered, and experiments were conducted to investigate the appropriate values of voltage, frequency, and amplitude for the developed haptic seat.
Poxels: polygonal voxel environment rendering, pp. 235-236
  Mark Miller; Andrew Cumming; Kevin Chalmers; Benjamin Kenwright; Kenny Mitchell
We present efficient rendering of opaque, sparse voxel environments, with data amplified in local graphics memory via stream-out from a geometry shader to a cached vertex buffer pool. We show that our Poxel rendering primitive aligns with optimized rasterization hardware and thus yields higher visual quality than ray-casting methods. Lossless run-length encoding of occlusion-culled voxels and coordinate quantization further reduce host data transfers.
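   Run-length encoding over a linearised voxel occupancy stream is the lossless compression step named above; a minimal sketch:

      def rle_encode(voxels):
          """Collapse a sequence of voxel values into [value, count] runs."""
          runs = []
          for v in voxels:
              if runs and runs[-1][0] == v:
                  runs[-1][1] += 1                 # extend the current run
              else:
                  runs.append([v, 1])              # start a new run
          return runs

      def rle_decode(runs):
          return [v for v, count in runs for _ in range(count)]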
Synchronized AR environment for multiple users using animation markers, pp. 237-238
  Hirotake Yamazoe; Tomoko Yonezawa
In this paper, we propose an AR environment in which multiple users can synchronously share and view AR content, based on animation markers that carry time-sequence information. Many AR systems have been proposed, some of which consider simultaneous use by multiple users. However, these systems require a synchronization mechanism among the users' devices for simultaneous display, and such mechanisms complicate the systems. We therefore propose an animation marker that can itself transmit temporal (frame) information, yielding a synchronized AR environment among multiple users.
The collaborative design platform protocol: a protocol for a mixed reality installation for improved incorporation of laypeople in architecture, pp. 239-240
  Tibor Goldschwendt; Christoph Anthes; Gerhard Schubert; Dieter Kranzlmüller; Frank Petzold
We present the conceptual design and implementation of the Collaborative Design Platform Protocol (CDPP), a communication protocol that synchronises virtual worlds between two mixed reality peers. The CDPP is applied to connect the Collaborative Design Platform (CDP), a design tool which supports the architectural design process at an early stage, with an immersive Cave Automatic Virtual Environment (CAVE) display, where the design is visualised at life size. This creates a prototype that enables a cost-efficient and easily comprehensible presentation of the early-stage design, and thus significantly simplifies the incorporation of laypeople into the early design process. By this means, the creative capabilities of laypeople are exploited to a greater extent.
Virtualized welding: a new paradigm for tele-operated welding, pp. 241-242
  Bo Fu; Yukang Liu; Yuming Zhang; Ruigang Yang
We present a new mixed reality system that supports tele-operation of a welding robot. We create a 3D mockup of the welding pieces and use projector-based displays to visualize the welding process directly on the mockup. Multiple cameras capture both the welding environment and the operator's motion. The welder can therefore monitor and control the welding process as if the welding were happening on the mockup, which provides proper spatial and 3D cues. We evaluated our system on a number of control tasks, and the results show the effectiveness of our system compared to traditional alternatives.