
Proceedings of the 2013 Virtual Reality International Conference

Fullname: Proceedings of the Virtual Reality International Conference: Laval Virtual
Editors: Simon Richir
Location: Laval, France
Dates: 2013-Mar-20 to 2013-Mar-22
Publisher: ACM
Standard No: ISBN: 978-1-4503-1875-4; ACM DL: Table of Contents; hcibib: VRIC13
Papers: 31
Links: Conference Website
  1. Sharing live user experience: how new mixed reality technologies and networks support real-time interactions
  2. VR, serious game and interactive storytelling based training/education
  3. Mobile immersion and augmented reality
  4. A new kind of art
  5. ReVolution session
  6. The progress and uncertainties of human-robot relationships

Sharing live user experience: how new mixed reality technologies and networks support real-time interactions

SmurVEbox: a smart multi-user real-time virtual environment for generating character animations BIBAFull-Text 1
  Rüdiger Beimler; Gerd Bruder; Frank Steinicke
Animating virtual characters is a complex task, which requires professional animators and performers, expensive motion capture systems, or considerable amounts of time to generate convincing results. In this paper we introduce the SmurVEbox, a cost-effective animation system that encompasses many important aspects of animating virtual characters by providing a novel shared user experience. SmurVEbox is a collaborative environment for generating character animations in real time, which has the potential to enhance the computer animation process. Our setup allows animators and performers to cooperate on the same virtual animation sequence in real time. Performers are able to communicate with the animator in the real space while simultaneously perceiving the effects of their actions on the virtual character in the virtual space. The animator can refine the actions of a performer in real time so that both work on the same animation of a virtual character. We describe the setup and present a simple application.
Prototyping natural interactions in virtual studio environments by demonstration: combining spatial mapping with gesture following BIBAFull-Text 2
  Dionysios Marinos; Björn Wöldecke; Chris Geiger
A virtual studio enables the real-time combination of people or other real objects with computer-generated environments. In this paper, a rapid prototyping strategy for natural real-time interactions in such virtual environments is introduced. The strategy, called ANID (Authoring Natural Interactions by Demonstration), focuses on tracking an actor and manipulating control parameters as a way of steering interactions. ANID has two main aspects. The first is the authoring of spatial relationships between the actor and the virtual environment. The second is the use of gesture following to synchronize animations or event sequences inside the virtual environment with the corresponding actor's movements. Both aspects allow for a programming-by-demonstration approach, enabling developers to rapidly create the desired interactions by providing examples directly inside the blue or green box of the virtual studio. The methods and tools supporting the strategy are presented. A test case has been developed to demonstrate the applicability of the strategy, in which an interactive virtual robotic arm was created that can be controlled by hand movements in a natural manner. The advantages and shortcomings of ANID are discussed based on this test case.
Three-dimensional monitoring of weightlifting for computer assisted training BIBAFull-Text 3
  Anargyros Chatzitofis; Nicholas Vretos; Dimitrios Zarpalas; Petros Daras
This paper investigates the use of 3D information in the context of sports training. More specifically, a weightlifting athlete's attempt is monitored, helping coaches to train athletes as they prefer. Our aim is to detect, collect and extract all useful data that give insight into the body technique during the weightlifting attempt, and to make the necessary calculations. A Kinect sensor is used for tracking the weightlifter and collecting depth data during the attempt. Afterwards, the data are processed so that useful information is extracted. In addition, after any attempt, the data can be saved and loaded at any time. 2D and 3D graphs are used to illustrate the relevant information. Finally, two attempts can be loaded in parallel for comparison between two different athletes, or between a new and a previously recorded attempt, making it possible to match and correct different techniques.
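   The abstract above gives no implementation details; the following is a minimal illustrative sketch, not the authors' code, of the kind of calculation such a system could perform: estimating bar height and vertical bar velocity from tracked Kinect joint positions. The joint names, the 30 Hz frame rate and the wrist-based bar approximation are assumptions introduced here for illustration.

    # Hypothetical sketch: bar kinematics from Kinect skeleton data (not the authors' method).
    import numpy as np

    FRAME_RATE_HZ = 30.0  # assumed Kinect depth-stream rate

    def bar_height(frames):
        """Approximate bar height per frame as the mean y-coordinate of both wrists.
        frames: sequence of dicts mapping joint name -> (x, y, z) in metres."""
        return np.array([(f["wrist_left"][1] + f["wrist_right"][1]) / 2.0 for f in frames])

    def vertical_velocity(heights, frame_rate=FRAME_RATE_HZ):
        """Finite-difference estimate of vertical bar velocity in m/s."""
        return np.gradient(heights, 1.0 / frame_rate)

    def summarize_attempt(frames):
        """Simple technique indicators a coach might compare across two attempts."""
        heights = bar_height(frames)
        velocity = vertical_velocity(heights)
        return {"max_height_m": float(heights.max()),
                "peak_velocity_m_s": float(velocity.max()),
                "duration_s": len(heights) / FRAME_RATE_HZ}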
Augmented sport: exploring collective user experience BIBAFull-Text 4
  Marc Pallot; Remy Eynard; Benjamin Poussard; Olivier Christmann; Simon Richir
This paper explores existing theories, frameworks and models for handling collective user experience in the context of Distributed Interactive Multimedia Environments (DIME) and, more specifically, Augmented Sport applications. Besides discussing previous experimental work in the domain of Augmented Sport, we introduce Future Media Internet (FMI) technologies in relation to Mixed Reality (MR) platforms, User Experience (UX), Quality of Service (QoS) and Quality of Experience (QoE) within 3D Tele-Immersive Environments that are part of the broad DIME domain. Finally, we present the 3D LIVE project's QoS-UX-QoE approach and model, which will be applied in experiments across three use cases (skiing, jogging and golfing) to anticipate potential user adoption.

VR, serious game and interactive storytelling based training/education

Virtual reality for skin exploration BIBAFull-Text 5
  Marie-Danielle Vazquez-Duchêne; Olga Freis; Alain Denis; Christophe Mion; Christine Jeanmaire; Solène Mine; Gilles Pauly
Background: We have developed and set up the SkinExplorer™ platform, a new tool to exploit and rebuild serial confocal images into 3D numerical models [1, 2]. Acquisitions using confocal microscopy allow visualizing cutaneous components such as elastic fibers, melanocytes and keratinocytes. These diverse data sources are combined to create numerical 3D volume models with high-quality visualization.
   Objective: To create a Virtual Reality (VR) experience that communicates and changes the perception of skin structures through virtualization.
   Methods: The use of the ART TRACKPACK system and the ART SMARTTRACK device allows us to exploit new sensory images for the volumetric rendering of the 3D skin models.
   Results: We increase the perception and understanding of the organization of skin components.
   Conclusion: The SkinExplorer™ platform seems to be a promising system for exploring the skin.
Low-cost simulation of robotic surgery BIBAFull-Text 6
  Kasper Grande; Rasmus Steen Jensen; Martin Kraus; Martin Kibsgaard
The high expenses associated with acquiring and maintaining robotic surgical equipment for minimally invasive surgery entail that training on this equipment is also expensive. Virtual reality (VR) training simulators can reduce the time spent training on the real equipment; however, current simulators are also quite expensive. Therefore, we propose a low-cost simulation of minimally invasive surgery and evaluate its feasibility.
   Using off-the-shelf hardware and a commercial game engine, a prototype simulation was developed and evaluated against the use of a surgical robot. The participants of the evaluation were given a similar exercise to perform with both the robot and the simulation. Participants rated the usefulness of the simulation in preparing them for the surgical robot at an average of 3.1 on a scale of 1 to 5. The low-cost game controllers used in the prototype proved to effectively simulate the controls of the surgical robot.
   Another test was carried out to determine the benefits of various stereoscopic displays for this simulation. This test did not show significant improvements compared to a regular display but indicated that a mobile stereoscopic display might be the most suitable option for a low-cost simulation of robotic surgery.
The effect of guided and free navigation on spatial memory in mixed reality BIBAFull-Text 7
  Alberto Betella; Enrique Martínez Bueno; Ulysses Bernardet; Paul F. M. J. Verschure
The role of active and passive navigation strategies in human spatial cognition is not well understood. One problem in addressing this question is that combining free movement with controlled stimulus conditions in navigation tasks is difficult to achieve. We have constructed a unique mixed reality space that answers this challenge. In our experiment we expose human subjects to a virtual house in which they can navigate following two different protocols: guided or free navigation. We want to assess how the navigation mode affects spatial memory. Our results show that participants assigned to the guided navigation condition displayed higher spatial memory performance than those assigned to the free navigation paradigm.
The ghost in the shell paradigm for virtual agents and users in collaborative virtual environments for training BIBAFull-Text 8
  Thomas Lopez; Valérie Gouranton; Florian Nouviale; Bruno Arnaldi
In Collaborative Virtual Environments for Training (CVET), different roles need to be played by actors, i.e. virtual agents or users. We introduce in this paper a new entity, the Shell, which aims at abstracting an actor from its embodiment in the virtual world. Using this entity, users and virtual agents are able to collaborate in the same manner during the training. In addition to controlling the embodiment, the Shell gathers and carries knowledge and provides interaction inputs. This knowledge and these inputs can be accessed and used homogeneously by both users and virtual agents to help them perform the procedure. In this paper, we detail the knowledge mechanism of the Shell, as this knowledge is a crucial element for both collaboration and learning in the CVET context. Furthermore, we validate the proposed model by presenting an operational implementation in an existing CVET and discuss its possible usages.
HUMANS: a HUman Models based Artificial eNvironments software platform BIBAFull-Text 9
  Vincent Lanquepin; Kevin Carpentier; Domitile Lourdeaux; Margaux Lhommet; Camille Barot; Kahina Amokrane
Taking human factors into account in training simulations enables these systems to address issues such as coactivity and management training. However, systems which use virtual reality technologies are usually designed to immerse the users in a perfectly realistic virtual environment, focusing only on technical gestures and prescribed procedures. Therefore, they can only tackle situations with little complexity, where the user's activity is highly constrained; otherwise they cannot ensure the pedagogic control and the relevance of the simulation. The HUMANS (HUman Models based Artificial eNvironments Software) platform is a generic framework designed to build tailor-made virtual environments which can be adapted to different application cases, technological configurations or pedagogical strategies. This suite rests upon the integration of multiple explicit models (domain, activity and risk models). In order to build ecologically valid virtual environments, these models represent not only the prescribed activity but also the situated knowledge of operators about their tasks, including deviations from the procedures. Moreover, rather than a fixed world populated only by reactive characters, they are used to build a dynamic world populated with autonomous characters. These models can be used both by domain and procedure experts and by computer experts. They serve two purposes: to monitor learners' actions, detecting errors and compromises, and to generate virtual characters' behaviours.

Mobile immersion and augmented reality

DrillSample: precise selection in dense handheld augmented reality environments BIBAFull-Text 10
  Annette Mossel; Benjamin Venditti; Hannes Kaufmann
One of the primary tasks in a dense mobile augmented reality (AR) environment is to ensure precise selection of an object, even if it is occluded or highly similar to surrounding virtual scene objects. Existing interaction techniques for mobile AR usually use the multi-touch capabilities of the device for object selection. However, single-touch input is imprecise, and existing two-handed selection techniques that increase selection accuracy do not apply to one-handed handheld AR environments. To address the requirements of accurate selection in a one-handed dense handheld AR environment, we present the novel selection technique DrillSample. It requires only single-touch input for selection and preserves the full original spatial context of the selected objects. This allows disambiguation and selection of strongly occluded objects or of objects with high similarity in visual appearance. In a comprehensive user study, we compare two existing selection techniques with DrillSample to explore performance, usability and accuracy. The results of the study indicate that DrillSample achieves significant performance increases in terms of speed and accuracy. Since existing selection techniques are designed for virtual environments (VEs), we furthermore provide a first approach towards a foundation for exploring 3D selection techniques in dense handheld AR.
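   As a rough illustration of the underlying idea (not the authors' implementation), the sketch below casts a single pick ray from the touch point and keeps every object it intersects, ordered front to back, so that occluded candidates remain available for a second disambiguation step; scene objects are approximated by bounding spheres purely for simplicity.

    # Hypothetical sketch of a DrillSample-style first selection step (not the authors' code).
    import numpy as np

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the ray parameter t of the nearest forward intersection, or None."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c  # direction is unit length, so the quadratic 'a' term is 1
        if disc < 0.0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t >= 0.0 else None

    def pick_candidates(origin, direction, objects):
        """objects: list of (name, center, radius) tuples. Returns hit names sorted front to back."""
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        hits = []
        for name, center, radius in objects:
            t = ray_sphere_hit(origin, direction, np.asarray(center, dtype=float), radius)
            if t is not None:
                hits.append((t, name))
        return [name for _, name in sorted(hits)]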
Reducing the SLAM drift error propagation using sparse but accurate 3D models for augmented reality applications BIBAFull-Text 11
  Maxime Boucher; Fakhr-Eddine Ababsa; Malik Mallem
SLAM is the generic name given to the class of methods that allow a mobile system to incrementally build a 3D representation of an environment while simultaneously using this map to localize itself within that environment. Though the field is quite mature, several scientific problems remain open, particularly the reduction of drift. Drift is inherent to SLAM, since the task is fundamentally incremental and errors in model estimation are cumulative. In this paper we suggest taking advantage of sparse but accurate knowledge of the environment to periodically reinitialize the system, thus stopping the drift. As this may be of interest in an augmented reality context, we show that this knowledge can be propagated to past estimations through bundle adjustment and present three different strategies to perform this propagation. Experiments carried out in an urban environment are described and demonstrate the efficiency of our approach.
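   The following is a conceptual sketch of the general drift-reset idea described above, not the authors' pipeline or their three propagation strategies: whenever enough landmarks from the sparse, accurate model are matched in the current image, the camera pose is re-estimated from those 2D-3D correspondences, capping the accumulated drift. The helper names and the use of OpenCV's PnP solver are assumptions made here for illustration.

    # Conceptual sketch of periodic pose re-anchoring against a sparse, accurate model
    # (hypothetical helpers; not the authors' pipeline).
    import numpy as np
    import cv2  # OpenCV, used here only for its PnP pose solver

    def reanchor_pose(model_points_3d, image_points_2d, camera_matrix):
        """Re-estimate the camera pose from 2D-3D matches against the accurate model."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(model_points_3d, dtype=np.float32),
            np.asarray(image_points_2d, dtype=np.float32),
            camera_matrix, None)
        return (rvec, tvec) if ok else None

    def track(frames, camera_matrix, match_landmarks, incremental_step, min_matches=6):
        """Incremental tracking whose drift is reset whenever known landmarks are matched."""
        pose = None
        for frame in frames:
            pose = incremental_step(pose, frame)      # drifting incremental SLAM estimate
            pts3d, pts2d = match_landmarks(frame)     # matches against the sparse model
            if len(pts3d) >= min_matches:
                anchored = reanchor_pose(pts3d, pts2d, camera_matrix)
                if anchored is not None:
                    pose = anchored                   # accumulated drift is reset here
        return pose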
3DTouch and HOMER-S: intuitive manipulation techniques for one-handed handheld augmented reality BIBAFull-Text 12
  Annette Mossel; Benjamin Venditti; Hannes Kaufmann
Existing interaction techniques for mobile AR often use the multi-touch capabilities of the device's display for object selection and manipulation. To provide full 3D manipulation by touch in an integral way, existing approaches use complex multi-finger and hand gestures. However, they are difficult or impossible to use in one-handed handheld AR scenarios, and their usage requires prior knowledge. Furthermore, a handheld's touch screen offers only two dimensions for interaction and limits manipulation to the physical screen size. To overcome these problems, we present two novel intuitive six degree-of-freedom (6DOF) manipulation techniques, 3DTouch and HOMER-S. While 3DTouch uses only simple touch gestures and decomposes the degrees of freedom, HOMER-S provides full 6DOF and is decoupled from screen input to overcome physical limitations. In a comprehensive user study, we explore performance, usability and accuracy of both techniques. To this end, we compare 3DTouch with HOMER-S in four different scenarios with varying transformation requirements. Our results reveal both techniques to be intuitive for translating and rotating objects. HOMER-S lacks accuracy compared to 3DTouch but achieves significant performance increases in terms of speed for transformations addressing all 6DOF.
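   To make the notion of decomposing the degrees of freedom concrete, here is a minimal hypothetical sketch, not the paper's implementation, in which a one-finger drag is mapped to translation in a view-aligned plane and a mode switch redirects the same drag to depth translation or to rotation about a single axis; the mode names and scale factors are assumptions.

    # Hypothetical sketch of decomposing 6DOF manipulation into single-touch drags
    # (mode names and scale factors are assumptions, not the paper's design).
    from dataclasses import dataclass

    TRANSLATE_XY, TRANSLATE_Z, ROTATE_Y = range(3)
    PIXELS_TO_METRES = 0.002    # assumed screen-to-world translation scale
    PIXELS_TO_DEGREES = 0.25    # assumed drag-to-rotation scale

    @dataclass
    class Transform:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0
        yaw: float = 0.0   # rotation about the vertical axis, in degrees

    def apply_drag(transform, dx_px, dy_px, mode):
        """Apply one touch drag (pixel deltas) to the object transform in the current mode."""
        if mode == TRANSLATE_XY:
            transform.x += dx_px * PIXELS_TO_METRES
            transform.y -= dy_px * PIXELS_TO_METRES   # screen y grows downwards
        elif mode == TRANSLATE_Z:
            transform.z += dy_px * PIXELS_TO_METRES
        elif mode == ROTATE_Y:
            transform.yaw += dx_px * PIXELS_TO_DEGREES
        return transform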
Layered shadow: multiplexing invisible shadow using infrared lights with different wavelengths BIBAFull-Text 13
  Saki Sakaguchi; Takuma Tanaka; Mitsunori Matsushita
This paper proposes a multiplexing invisible shadow system named "Layered Shadow." The proposed system uses infrared lights, each of which radiates a certain wavelength of infrared light, and an object to which two different types of IR filters are attached. Directing the light toward the object causes the object's shadow to appear; the shape of the object then appears to change according to the wavelength of the radiated infrared light. With this system, a user is expected to attain a different viewpoint on shadows.
Mixed reality with multimodal head-mounted pico projector BIBAFull-Text 14
  Antti Sand; Ismo Rakkolainen
Many kinds of displays can be used for augmented reality (AR). The multimodal head-mounted pico projector is a concept that has been little explored for AR. It opens new possibilities for wearable displays. In this paper we present our proof-of-concept prototype of a multimodal head-mounted pico projector. Our main contributions are the display concept and some usage examples for it.
Leveraging technology to become a better lawyer BIBAFull-Text 15
  John Niman
In this paper, I describe the ways in which lawyers can leverage technology to perform their profession more effectively.

A new kind of art

Ideas about VR&AR as a new genre in fine arts BIBAFull-Text 16
  Suzanne Beer; Judith Guez
Virtual and augmented reality is a specific part of computer arts. Computer art is a broad term which subsumes several branches that have found their own artistic expression, such as software art, net art and algorithmic art. Does a VR&AR art exist? VR&AR come from an artistic deflection of a technology. The combination of immersion, interactivity and virtual worlds gives the opportunity for specific expressions of artistic themes: the mixing of illusion and truth, the blurring of borders between virtuality and reality and between imaginary and realistic representations, the questioning of nature, the world, the relationship between humans and their power, contemporary techniques and modes of perception, and, typically, the uncanny valley limit.
On visual features and artistic digital images BIBAFull-Text 17
  Everardo Reyes-Garcia
Techniques and tools to create and manipulate digital images have always attracted the attention of artists. Hardware and software are not only tools but also environments for expressiveness: creators and users generate and interact with images; artistic styles are recreated; spaces are reconstructed; static images mix with animated images; 2D images blend with 3D models. Moreover, digitally produced images can also be printed as physical objects using rapid prototyping technologies. Within this context, artistic practices remind us of their relationship with crafts and skills. Artists not only produce images and confront us with ideas about our society and the world, but they also invent materials, discover surfaces, combine techniques and define workflows. In this article we adopt visual features as the materials of artistic digital images. More precisely, we try to describe the main characteristics of visual features as well as some of their implications. In the last part we present our current work on motion structures as an experimental art form directly inspired by visual features.
Scale based model for the psychology of crowds into virtual environments BIBAFull-Text 18
  Fabien Tschirhart
In this paper, we study a way to simulate crowds for virtual environments such as those used in the cinema and video game industries. Depending on its context, a crowd can adopt different behaviors: it can be ambulant, aggressive or expressive, or, in an emergency situation, panic and flee. The main idea is to build upon observations and assumptions made in crowd psychology studies in order to formulate a model able to reproduce crowd behavior as closely as possible, depending on the context. To do so, we consider the use of a microscopic model based on a scale concept: the Scale Based Model (SBM). This model builds crowds of different natures from the psychological profiles of the individuals, crowds that will then have an influence on their own members. We also address the question of implementing the model in a real-time environment.
Chance and complexity: stochastic and generative processes in art and creativity BIBAFull-Text 19
  Alan Dorin
This paper examines the recurrence of stochastic processes as mechanisms to drive and enhance human creativity throughout the history of art. From prehistory up until the present day, random events and technologically instantiated generative processes have operated in concert, extending the scope for the production of aesthetic objects of all kinds. In the last half-century of computational art, chance has played alongside generative computer programs -- a trend that looks set to continue. A range of works is explored here, highlighting the interaction between chance and dynamic processes to generate complex representations, virtual spaces and aesthetic artefacts. With this approach, the paper argues, chance and dynamics have the potential to continue as dominant creative forces into the future of art.
Hybridization between brain waves and painting BIBAFull-Text 20
  Huang Yiyuan
During the past decades, many new digital technologies have been adopted in the field of art, leading to a variety of new art forms. As a result, the boundary between fiction and reality becomes more and more blurred. This paper introduces a new technology that captures brain waves and, through this technology, investigates the interaction between the human brain and the traditional arts, including Chinese philosophy, traditional Chinese medical science and traditional Chinese painting. Through this research, an interaction in which consciousness is used to control digital Chinese ink painting (Figure 1) was carried out. Furthermore, the concept of Qi in ancient Chinese philosophy, namely the harmony between human and nature, was introduced as the spirit of this work, so that the concept could be deeply understood.
Conduit d'Aération: writing and performing a narrative hypertext BIBAFull-Text 21
  Lucile Haute; Alexandra Saemmer; Odile Farge
Conduit d'Aération is a narrative hypertext loosely based on a true story. We conceive this narrative hypertext as an augmented interactive fiction that will be edited as a touchpad application, and exhibited as a participatory installation in festivals. In this paper we present the theoretical issues of the project and circumscribe some key concepts of the work in progress.
Virtual stage sets in live performing arts (from the spectator to the spect-actor) BIBAFull-Text 22
  Farah Jdid; Simon Richir; Alain Lioret
This paper studies the added value of VR to the art of stage setting through examples and experiments. It also analyzes the link between the audience and digital sets (VR, AR), from an aesthetic point of view as well as a practical one. It is important for us to succeed in creating a theatrical set which fosters the spectator's presence, so that the spectator becomes a live performing spectator and thus develops a stronger link with the setting in both the short and the long run.

ReVolution session

Serious dietary education system for changing food preferences "food practice shooter" BIBAFull-Text 23
  Takayuki Kosaka; Takuya Iwamoto
We propose a serious dietary education system for changing food preferences through the act of eating. Eating and chewing food, together with smiling, is required to clear the game. The system is a shooting game about food preferences: a user has to eat foods that most children dislike, for example tomatoes, carrots and bell peppers, to beat monsters that appear on the screen. The number of bullets is determined by the number of chews (mastication), and the bullets are charged into a gun device by smiling. We aim to change food preferences with this system.
AquaTop display BIBAFull-Text 24
  Yasushi Matoba; Yoichi Takahashi; Taro Tokui; Shin Phuong; Hideki Koike
AquaTop display is a projection system that uses white water as a screen surface. This system allows the user's limbs to freely move through, under and over the projection surface. Using the unique characteristics of fluid, we propose new interaction methods specific to the projection medium, water. Our system uses a depth camera to detect input on and over the water surface, allowing interactions such as protruding fingers out from under the water surface and scooping up the water with both hands. This type of interaction is not possible with conventional impenetrable, rigid, flat surfaces. For example, by floating one's limbs on the water surface, it is also possible to fuse one's body with the displayed objects for further augmented interaction by 'becoming one' with the screen.
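   A minimal sketch of how depth-camera input over a liquid surface might be detected (an assumption for illustration, not the authors' implementation): the resting water surface is calibrated once, and pixels that later appear markedly closer to the overhead depth camera than that baseline are treated as fingers or hands on or above the surface.

    # Hypothetical sketch of detecting hands above a calibrated water surface with a depth camera
    # (threshold values and function names are assumptions, not the authors' implementation).
    import numpy as np

    def calibrate_surface(depth_frames):
        """Per-pixel median depth (mm) over a few frames of the undisturbed water surface."""
        return np.median(np.stack(depth_frames), axis=0)

    def detect_protrusions(depth_frame, surface_depth, threshold_mm=15, min_pixels=40):
        """Boolean mask of pixels protruding above the calibrated surface."""
        valid = depth_frame > 0                                  # 0 means no depth reading
        above = valid & (depth_frame < surface_depth - threshold_mm)
        # Very small blobs are likely ripples or sensor noise; keep the mask
        # only if it is large enough to plausibly be a finger or a hand.
        return above if above.sum() >= min_pixels else np.zeros_like(above)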
Resolution of sleep deprivation problems using ZZZoo Pillows BIBAFull-Text 25
  Shunsuke Yanaka; Motoki Ishida; Takayuki Kosaka; Motofumi Hattori; Hisashi Sato
Sleep is essential to human life. However, many people suffer from sleep deprivation, which adversely affects mental and physical health and results in poor productivity and even accidents. In this study, we propose the "ZZZoo Pillows" to help resolve sleep deprivation. This system comprises a huggable pillow with a built-in balloon supplied with air so that it replicates the breathing motion of a human chest. Furthermore, warm water circulating within the huggable pillow attempts to reproduce body heat, thereby giving the feeling of sleeping alongside another person. We aim to resolve sleep deprivation problems by improving emotional stability and increasing the time spent in deep sleep by providing the user with a sense of ease.
TSUMIKI CASTLE: interactive VR system using toy blocks BIBAFull-Text 26
  Junnosuke Nagai; Tsuyoshi Numano; Takafumi Higashi; Matthieu Tessier; Kazunori Miyata
This paper proposes a Virtual Reality application for playing with blocks. Players can create their own decorative castle in a virtual world, by only stacking simple physical blocks in the system.
   We designed a tangible interface such that a player can experience seamless interaction between the real world and a virtual world when playing with toy blocks. The system gives players an enjoyable experience in which blocks stacked in the real world are dynamically transformed into a castle in a virtual world. The system enables players to create a realistic castle that reflects the shape of the blocks. Moreover, the system smoothly connects the physical world to the virtual world by means of a tangible interface and real-time computer graphics. The system was exhibited at "Ishikawa Dream Festival" for two days and was evaluated through a questionnaire survey carried out at the event. The evaluation found that the system was easy to play and that most of the players enjoyed the system.
Manga generator: immersive posing role playing game in manga world BIBAFull-Text 27
  Yuto Nara; Genki Kunitomi; Yukua Koide; Wataru Fujimura; Akihiko Shirai
This paper reports a methodology of automatically generating an immersive posing role playing game that reflects the personality of the player who acts out the part of the hero, using pre-installed speech bubbles, backgrounds, effects, and all the other elements that comprise manga.

The progress and uncertainties of human-robot relationships

In the image of the image?: from "imago dei" to imaging the human in the robotic gaze BIBAFull-Text 28
  Scott A. Midson
In this paper, I wish to conceive of an alternative way that we can explore our technoscientific culture, one that more readily acknowledges our amalgamation within that culture, rather than placing ourselves above or beyond it. Taking the robot's gaze as both metaphor and more literally, I explore how seeing ourselves mediated by technology as an image may be a useful endeavour in highlighting our hybridity. Framing this discussion is a consideration of the image in theological terms, which is a significant undertaking, given that the human is made in the image of God, the robot in the image of the human, and yet, in the robot's gaze, we are imaged by the robot. Is it possible to reconcile these interpretations?
The metaphysical cyborg BIBAFull-Text 29
  Damien Patrick Williams
In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.
Pro and cons singularity: Kurzweil's theory and its critics BIBAFull-Text 30
  Bogdan Popoveniuc
The present paper reviews and summarizes the "panoply" of criticism triggered by Ray Kurzweil's worrying scenario of the forthcoming Technological Singularity. These debates on technological evolution are re-considered and re-evaluated from the perspective of philosophical anthropology. Genuine enquiries on this subject are taken up as well.
Terminator niches BIBAFull-Text 31
  Tommaso Bertolotti; Lorenzo Magnani
The aim of this paper is to connect studies of cognitive niches with the diffusion of high technologies, cyborgs and robots, so as to obtain a new framework for analyzing some dilemmas of future technological developments. Digital technologies have dramatically boosted niche-constructing dynamics by allowing the construction of new informational environments and by adding pseudo-minds that are able to carry on the niche-construction activity side by side with human beings. Cognitive niches, structured to ease environmental selective pressure, may progressively degenerate, causing an increase in selective pressure and hence a reduction in welfare for the individuals: yet, when the failure is caused by exactly what was meant to benefit the population, and when reversing the niche is (or seems to be) unfeasible, it is possible to identify a "terminator niche."