
Proceedings of the 2014 Virtual Reality International Conference

Fullname: Proceedings of the Virtual Reality International Conference: Laval Virtual
Location: Laval, France
Dates: 2014-Apr-09 to 2014-Apr-11
Publisher: ACM
Standard No: ISBN: 978-1-4503-2626-1; ACM DL: Table of Contents; hcibib: VRIC14
Papers: 38
Links: Conference Website
Spatializing experience: a framework for the geolocalization, visualization and exploration of historical data using VR/AR technologies BIBAFull-Text 1
  Daniel Pacheco; Sytse Wierenga; Pedro Omedas; Stefan Wilbricht; Habbo Knoch; Paul F. M. J. Verschure
In this study we present a novel ICT framework for the exploration and visualization of historical information using Augmented Reality (AR) and geolocalization. The framework facilitates the geolocalization of multimedia files, as well as their later retrieval and visualization through an AR paradigm in which a virtual reconstruction is matched to the user's position and viewing angle. The main objective of the architecture is to enhance human-data interaction with cultural heritage content in outdoor settings and generate more engaging and profound learning experiences by exploiting information spatialization and sequencing strategies.
Augmented historical scale model for museums: from curation to multi-modal promotion BIBAFull-Text 2
  Benjamin Hervy; Florent Laroche; Jean-Louis Kerouanton; Alain Bernard; Christophe Courtin; Laurence D'haene; Bertrand Guillet; Arnaud Waels
In this paper, we describe an interactive museum application dedicated to historical scale models. It results from joint work between multidisciplinary teams: industrial engineering researchers, historians, museum curators and interactive interface designers. We present here the results of the project, based on scientific methodology. Results include the system architecture, hardware and software, some use cases and user evaluation figures. This paper also underlines some methodological issues that point to future possibilities.
A framework to increase the video-mapping accuracy of an architectural heritage mock-up BIBAFull-Text 3
  Daniele Rossi; Enrica Petrucci; Simone Fazzini
This paper aims to describe a method to improve the mapping accuracy of a spatial-projection-based augmented mock-up in order to illustrate different surveys conducted of the church of Santa Maria Inter Vineas in Ascoli Piceno (Italy). These surveys are compared to the church's complex building history in order to better understand the changes that have occurred over the centuries and investigate different approaches to its restoration. A content-aware mesh-correction projection technique is illustrated, which can then project information concerning the cultural object of study onto a scaled mock-up, either for educational purposes and analytical knowledge, or to draft a restoration plan.
Virtual reality tools for the west digital conservatory of archaeological heritage BIBAFull-Text 4
  Jean-Baptiste Barreau; Ronan Gaugne; Yann Bernard; Gaétan Le Cloirec; Valérie Gouranton
Building on the 3D data production work of the WDCAH, virtual reality tools allow archaeologists to carry out analysis and research aimed at understanding their sites. In this paper, we focus on the virtual reality services offered to archaeologists in the WDCAH, through the example of two archaeological sites, the Temple de Mars in Corseul and the Cairn of Carn Island.
Immaterial art stock project: digital preservation in 3D virtual museum BIBAFull-Text 5
  Aurélie Herbet
Immaterial art stock is a research project to preserve artworks created inside 3D digital spaces (such as Second Life, OpenSim, etc.). The project was initiated by the Spatial Media research program of the Ecole nationale supérieure des Arts Décoratifs of Paris. The main objective is to establish a protocol and methodology to preserve this kind of art. Indeed, 3D digital artworks pose numerous challenges to the traditional art world. The fragility of these spaces, changes in media and program compatibility issues require new models of preservation. In this paper, we present the project and the issues it raises. First of all, we summarise digital conservation studies and describe the specific nature of 3D digital artworks. On this basis, we present the components of the project.
Designing history learning games for museums: an alternative approach for visitors' engagement BIBAFull-Text 6
  Pablo Aguirrezabal; Rosa Peral; Ainhoa Pérez; Sara Sillaurren
In this paper we describe the work carried out in the PLAYHIST experiment, in which we propose the use of gaming technologies to convert an interactive virtual reality film into a multiuser game with real 3D avatars, with the aim of improving visitors' engagement when visiting a cultural center and museum devoted to Hellenic history.
Virtual reality applications in forensic psychiatry BIBAFull-Text 7
  Massil Benbouriche; Kevin Nolet; Dominique Trottier; Patrice Renaud
Violent offending behaviours remain an important issue, in particular when associated with mental illness. To prevent recidivism and protect society, investments are required to develop new tools that would provide decision makers with a better understanding of violent behaviours and ultimately improve treatment options for violent offenders. Recently, Virtual Reality (VR) has been gaining recognition as a promising tool in forensic psychiatry. Amongst other things, VR allows a renewal from both methodological and theoretical points of view. The aim of this paper is to introduce VR applications in the context of forensic psychiatry. After a brief introduction to the purpose of forensic psychiatry, examples are given to illustrate how VR can help address some of the field's current issues.
Interactive theater-sized dome design for edutainment and immersive training BIBAFull-Text 8
  Sophia Li; Yazhou Huang; Vinh-Sang Tri; Johan Elvek; Samuel Wan; Jan Kjallstrom; Nils Andersson; Mats Johansson; Dan Lejerskar
In this work we present a novel design for a theater-sized interactive fulldome system, the EONVision Idome. For edutainment, it combines Hollywood-style storytelling with cutting-edge technologies, providing audiences with an immersive 4D theater experience. For industrial training, the fully immersive and interactive dome delivers real-time VR tutorials for an enhanced training experience. Compared with traditional VR training facilities such as the CAVE and EON Icube, it hosts larger groups of trainees at a lower training cost per capita, and provides the means to conduct collaborative training with multiple trainees.
Improving the human skin microanatomy understanding and skin aging observation with the SkinExplorer™ platform BIBAFull-Text 9
  Marie-Danielle Vazquez-Duchêne; Christophe Mion; Solène Mine; Christine Jeanmaire; Olga Freis; Gilles Pauly; Aurélie Courtois; Alain Denis
Background: The skin includes 3 major layers: epidermis, dermis and hypodermis. With aging, the human skin becomes thinner and flattens, with dermal papillae becoming less pronounced. The epidermis becomes finer and the dermal network deteriorates over the decades. The three-dimensional structure of the elastic network and the hyaluronic acid distribution are so complex, rich and dense that conventional observation methods can provide only a partial view of these components.
   Objective: The aim of this study was to demonstrate the morphological differences between two abdominal skin samples coming from young and aged donors based on elastic fibers organization and hyaluronic acid distribution.
   Methods: The SkinExplorer™ platform performs immersive and interactive exploration with virtual reality applications.
   Results: 2 mm³ of histological data were obtained from 40 serial histological sections. Two three-dimensional numerical models were reconstructed from young and aged skin biopsies, respectively. Observations of the epidermis and dermal elastic fibers were performed with the SkinExplorer™ platform and virtual reality (VR) applications. For hyaluronic acid distribution, two smaller models were reconstructed to assess its richness and density with the same platform and VR tools.
   Conclusion: These results suggest that Virtual Reality applications can be a good tool for observing, assessing and understanding the aging process. Moreover, this exploration tool offers new ways of conducting micro-anatomical research.
Adaptive augmented reality: plasticity of augmentations BIBAFull-Text 10
  Nehla Ghouaiel; Jean-Marc Cieutat; Jean-Pierre Jessel
An augmented reality system complements the real world with computer-generated virtual objects so that they seem to coexist in the same space as the real world. The concept of plasticity [4][5] was first introduced for Human Computer Interaction (HCI). It denotes the ability of an HCI interface to fit the context of use defined by the user, the environment and the platform. We believe that plasticity is a very important notion in the domain of augmented reality, and we rely on it to introduce the concept of adaptive augmented reality. This concept is based on the triplet (user, environment, platform) constituting the context of use. Adaptive augmented reality can foster the functional ability, ease of use and portability of new augmented reality applications. We therefore describe in this paper three applications showing the adaptation of augmentation based on three variables: the scene illumination, the distance to the target and the ambient noise.
Interactive 3D subdomaining using adaptive FEM based on solutions to the dual problem BIBAFull-Text 11
  Holger Graf; Mats Larson; André Stork
This paper presents a new technique for automatic, interactive 3D subdomaining coupled to mesh and simulation refinements in order to enhance local resolutions of CAE domains. Numerical simulations have become crucial during the product development process (PDP) for predicting different properties of new products as well as the simulation of various kinds of natural phenomena. "What-if-scenarios" and conceptual changes to either the boundary or the domain are time consuming and cost intensive. Most of the time, engineers are interested in a deeper understanding of local quantities rather than being exposed to an iterative re-simulation of the overall domain. New techniques for automatic and interactive processes are then challenged by the cardinality and structural complexity of the CAE domain. This paper introduces a new interactive technique that automatically reduces the analysis space, and allows engineers to enhance the resolution of local problems without a need for recalculating the global problem. The technique, integrated into a VR based front end, achieves faster reanalysis cycles compared with traditional COTS tool chains and engineering workflows.
Virtual slicer: interactive visualizer for tomographic medical images based on position and orientation of handheld device BIBAFull-Text 12
  Sho Shimamura; Motoko Kanegae; Jun Morita; Yuji Uema; Masahiko Inami; Tetsu Hayashida; Hideo Saito; Maki Sugimoto
This paper introduces an interface that helps understand the correspondence between the patient and medical images. Surgeons determine the extent of resection by using tomographic images such as MRI (Magnetic Resonance Imaging) data. However, understanding the relationship between the patient and tomographic images is difficult. This study aims to visualize the correspondence more intuitively. In this paper, we propose an interactive visualizer for medical images based on the relative position and orientation of the handheld device and the patient. We conducted an experiment to verify the performances of the proposed method and several other methods. In the experiment, the proposed method showed the minimum error.
SeCG: Serendipity enabled cyber games project BIBAFull-Text 13
  E. Gressier; I. Astic; S. Natkin; J. Murray; M. Kim; C. Talcott; P. Gautier
In this paper, we describe important elements and key features of the next generation of Alternate Reality Environments/Alternate Reality Games (AREs/ARGs). The aim of the project is to provide a framework that helps solve research problems through cyber-games.
Omnemotion: the propagation of emotions BIBAFull-Text 14
  Jérémie Bordas
This paper presents an interactive installation exploring social emotional contagion. Based on the major psychological studies and computational models of emotion used in the field of affective computing, algorithms are used to simulate the propagation of emotions through artificial characters in an artificial environment. The user interacts with the application on a tablet, on which it is possible to trigger events in a 3D artificial world. Those events cause the characters evolving in this world to experience emotions and share them through the process of emotional contagion. The propagation of emotions through the characters is instantly projected graphically on a wall in front of the user.
"Which brew are you going to choose?": an interactive 'tea-decider-er' in a teahouse shop window BIBAFull-Text 15
  Robyn Taylor; Tom Bartindale; Qasim Chaudhry; Phil Heslop; John Bowers; Peter Wright; Patrick Olivier
We describe the design of an interactive shop window created and installed for use in an independent teahouse. Using cameras to track the gestures of customers on the street front, the system allowed visitors to interact with an animatronic character who helped them choose a 'brew' from over 80 unusual tea varieties. In this paper we describe how we worked with the business owners, observing their practices to develop an understanding of how they helped customers choose one tea out of a large array of appealing possibilities. We describe the design process we undertook when creating the window, and examine the functional, aesthetic, technical and commercial factors that pose challenges when creating a bespoke piece of interactive art for a functioning real-world business.
Digital creation: a state of the arts BIBAFull-Text 16
  Pierre Berger
Using a list of some 2,000 artists and the available data in print and online documentation included in or linked to our website diccan.com, we attempt here to review the whole gamut of digital arts, as they are at the beginning of 2014, from Literature to Transmedia. In spite of the blurred boundaries of the field, some patterns emerge. They can help us to appreciate -- and enjoy -- each work, each artist, each form of art, as well as the general trend: all arts tend to be digital and to merge into transdisciplinarity and transmedia. But artists' talents as well as spectators' attitudes differ. This preserves diversity in art.
The touch of the avatar, artistic research and performance with synthetic doubles BIBAFull-Text 17
  Lucile Haute
The evolution of digital technology has created new relationships between bodies and their images. In performance, the circumscribed body opens itself to interactions with the environment and with other subjects, in different modes of participation or generativity. From 2010 to 2013, we conducted several experiments in this field, including performances and artistic research around the figure of the avatar. These were designed to study the avatar, this extension that acts and stands for oneself in digital networks and shared simulated platforms.
Technologie animiste et robotique: Nymphaea Alba Ballet BIBAFull-Text 18
  Pascale Weber; Jean Delsaux; Owen Appadoo
In this article, we describe an artistic project currently under development. The project brings together simulation, robotics and artificial perception technologies, as well as data exchange over networks and satellite links.
   The mechanical and automation work extends to the definition of "intelligent" materials.
   The project emphasizes the relationships between Art, Science and Technology in the course of developing a complex program, compares methodologies, and insists on transdisciplinarity: robotics, arts, performance, ergonomics, automation, autonomous multi-agent systems. It aims at a planet-scale application, networking 8 sites around the world, each able to host 2 to 3 people simultaneously during performances. The first phase, a study of the interactions, has been completed; the following phases are under way with the support of teams at the Institut Pascal (Clermont-Ferrand). Initiated by the artist duo Hantu, the Nymphaea Alba Ballet project lies at the confluence of the arts, technology, cognitive science and physics.
Recognition: combining human interaction and a digital performing agent BIBAFull-Text 19
  John McCormick; Adam Nash; Steph Hutchison; Kim Vincs; Saeid Nahavandi; Douglas Creighton
Virtual and augmented environments are often dependent on human intervention for change to occur. However there are times when it would be advantageous for appropriate human-like activity to still occur when there are no humans present. In this paper, we describe the installation art piece Recognition, which uses the movement of human participants to effect change, and the movement of a performing agent when there are no humans present. The agent's Artificial Neural Network has learnt appropriate movements from a dancer and is able to generate suitable movement for the main avatar in the absence of human participants.
3DCG art expression on a tablet device using integral photography BIBAFull-Text 20
  Nahomi Maki; Akihiko Shirai; Kazuhisa Yanaka
In conventional three-dimensional computer graphics (3DCG) technologies, a rendered image is two-dimensional. No information except that seen from a single viewpoint is included in the rendered image, although 3D models are used to construct a 3D scene. Even when rendering for binocular stereopsis is performed, there are only two viewpoints. This characteristic is a limitation of conventional 3DCG expression. In this study, we propose a new approach to 3DCG art expression. Our system uses integral photography (IP) and consists of a tablet device and a fly's eye lens. Stereoscopy is possible without the need for the user to wear special glasses, even if the device is placed in any orientation, because in IP parallax arises not only horizontally but in all directions. A ready-made fly's eye lens can be combined with various tablet devices that have different screen resolutions because the extended fractional view method is used. The device is so small and lightweight that users can appreciate 3D art at any time and place. Notably, IP can reproduce glittering effects because each minute convex lens of the IP display emits light in different directions. We produced a 3DCG artwork, called "Frozen Time," that fully employs the characteristics of our technology in motifs of "floating ice," "crystallized fossils," and an "opal flower."
"Scritter" to "1p2x3D": application development using multiplex hiding imaging technology BIBAFull-Text 21
  Yannick Littfass; Yukua Koide; Hisataka Suzuki; Akihiko Shirai
In this paper, we establish a roadmap for Scritter, a promising multiplex hidden imaging technology enabling multiple users to watch different contents on the same display at the same time. After explaining how we adapted Scritter to current home display technologies, we present the major applications developed to promote the multiplex hidden imaging technique to the public and to content creators. We then introduce a plug-in designed for the Unity3D game engine to help content creators and artists get more easily involved in the search for innovative content using multiplex hidden imaging. Finally, we review the potential applications we have explored so far, and suggest new fields of investigation where the Scritter series could add significant value for entertainment, social experiences, and utility.
Multi-sensor data fusion for hand tracking using Kinect and Leap Motion BIBAFull-Text 22
  Benoît Penelle; Olivier Debeir
Often presented as competing products on the market of low-cost 3D sensors, the Kinect™ and the Leap Motion™ (LM) can actually be complementary in some scenarios. We promote, in this paper, the fusion of data acquired by both LM and Kinect sensors to improve hand tracking performance. The sensor fusion is applied to an existing augmented reality system targeting the treatment of phantom limb pain (PLP) in upper limb amputees. With the Kinect we acquire 3D images of the patient in real time. These images are post-processed to apply a mirror effect along the sagittal plane of the body, before being displayed back to the patient in 3D, giving him the illusion that he has two arms. The patient uses the virtually reconstructed arm to perform given tasks involving interactions with virtual objects. Thanks to the plasticity of the brain, the restored visual feedback of the missing arm allows, in some cases, a reduction in pain intensity. The Leap Motion brings to the system the ability to perform accurate motion tracking of the hand, including the fingers. By registering the position and orientation of the LM in the frame of reference of the Kinect, we make our system able to accurately detect interactions of the hand and fingers with virtual objects, which will greatly improve the user experience. We also show that the sensor fusion nicely extends the tracking domain by supplying finger positions even when the Kinect sensor fails to acquire the depth values for the hand.
Understanding large network datasets through embodied interaction in virtual reality BIBAFull-Text 23
  Alberto Betella; Enrique Martínez Bueno; Wipawee Kongsantad; Riccardo Zucca; Xerxes D. Arsiwalla; Pedro Omedas; Paul F. M. J. Verschure
The intricate web of information we generate nowadays is more massive than ever in the history of mankind. The sheer enormity of big data makes the task of extracting semantic associations out of complex networks more complicated. Stemming this "data deluge" calls for novel unprecedented technologies. In this work, we engineered a system that enhances a user's understanding of large datasets through embodied navigation and natural gestures. This system constitutes an immersive virtual reality environment called the "eXperience Induction Machine" (XIM). One of the applications that we tested using our system is the exploration of the human connectome: the network of nodes and connections that underlie the anatomical architecture of the human brain. As a comparative validation of our technology, we then exposed participants to a connectome dataset using both our system and a state-of-the-art software for visualization and analysis of the same network. We systematically measured participants' understanding and visual memory of the connectomic structure. Our results showed that participants retained more information about the structure of the network when using our system. Overall, our system constitutes a novel approach in the exploration and understanding of large complex networks.
Wind and warmth in virtual reality: implementation and evaluation BIBAFull-Text 24
  Felix Hülsmann; Julia Fröhlich; Nikita Mattar; Ipke Wachsmuth
One possibility to make virtual worlds more immersive is to address as many human senses as possible. This paper presents a system for creating wind and warmth simulations in Virtual Reality (VR). To this end, suitable hardware and an implemented software model applied in a three-sided CAVE are described. Technical evaluations of the hardware and software components demonstrate the usability of the system in VR applications. A pilot user study underlines users' acceptance and indicates a positive influence of wind and warmth stimuli on perceived presence.
Integrating virtual agents in BCI neurofeedback systems BIBAFull-Text 25
  Marc Cavazza; Fred Charles; Stephen W. Gilroy; Julie Porteous; Gabor Aranyi; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler
The recent development of Brain-Computer Interfaces (BCI) for virtual worlds has resulted in a growing interest in realistic visual feedback. In this paper, we investigate the potential role of Virtual Agents in neurofeedback systems, which constitute an important paradigm for BCI. We discuss the potential impact of virtual agents on some important determinants of neurofeedback in the context of affective BCI. Throughout the paper, we illustrate our presentation with two fully implemented neurofeedback prototypes featuring virtual agents: the first is an interactive narrative in which the user empathises with the character through neurofeedback; the second recreates a natural environment in which crowd behaviour becomes a metaphor for arousal and the user engages in emotional regulation.
XIM-engine: a software framework to support the development of interactive applications that uses conscious and unconscious reactions in immersive mixed reality BIBAFull-Text 26
  Pedro Omedas; Alberto Betella; Riccardo Zucca; Xerxes D. Arsiwalla; Daniel Pacheco; Johannes Wagner; Florian Lingenfelser; Elisabeth Andre; Daniele Mazzei; Antonio Lanatá; Alessandro Tognetti; Danilo de Rossi; Antoni Grau; Alex Goldhoorn; Edmundo Guerra; Rene Alquezar; Alberto Sanfeliu; Paul F. M. J. Verschure
The development of systems that allow multimodal interpretation of human-machine interaction is crucial to advance our understanding and validation of theoretical models of user behavior. In particular, a system capable of collecting, perceiving and interpreting unconscious behavior can provide rich contextual information for an interactive system. One possible application for such a system is in the exploration of complex data through immersion, where massive amounts of data are generated every day both by humans and computer processes that digitize information at different scales and resolutions thus exceeding our processing capacity. We need tools that accelerate our understanding and generation of hypotheses over the datasets, guide our searches and prevent data overload. We describe XIM-engine, a bio-inspired software framework designed to capture and analyze multi-modal human behavior in an immersive environment. The framework allows performing studies that can advance our understanding on the use of conscious and unconscious reactions in interactive systems.
Evaluating RGB+D hand posture detection methods for mobile 3D interaction BIBAFull-Text 27
  Daniel Fritz; Annette Mossel; Hannes Kaufmann
In mobile applications it is crucial to provide intuitive means for 2D and 3D interaction. A large number of techniques exist to support a natural user interface (NUI) by detecting the user's hand posture in RGB+D (depth) data. Depending on a given interaction scenario, each technique has its advantages and disadvantages. To evaluate the performance of the various techniques on a mobile device, we conducted a systematic study comparing the accuracy of five common posture recognition approaches under varying illumination and backgrounds. To be able to perform this study, we developed a powerful software framework that is capable of processing and fusing RGB and depth data directly on a handheld device. Overall results reveal the best recognition rate for posture detection with combined RGB+D data, at the expense of the update rate. Finally, to support users in choosing the appropriate technique for their specific mobile interaction task, we derived guidelines based on our study.
Navigation and interaction in a real-scale digital mock-up using natural language and user gesture BIBAFull-Text 28
  M. A. Mirzaei; J.-R. Chardonnet; F. Mérienne; A. Genty
This paper presents a new real-scale 3D system and sums up firsthand, cutting-edge results concerning multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, 2) natural language (speech processing) and user gesture. The survey of this system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cyber-sickness than device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based ones.
Collaborating & being together: influence of screen size and viewing distance during video communication BIBAFull-Text 29
  Virginie Dagonneau; Elise Martin; Mathilde Cosquer
Through videoconferencing, people seek to interact and communicate with their remote friends or family as if they were together in the same place. The influence of form variables such as screen size has mainly been investigated for the sense of physical presence (presence as transportation) in virtual environments and in the television domain, but less attention has been paid to how these factors can influence the sense of co-presence in videoconferencing. In addition, preferred viewing distance is well known to be a key parameter for conveying a sense of presence and enjoyment when people watch television, but there is currently no data regarding preferred viewing distance in videoconferencing. This paper presents a user study which explores the influence of screen size on participants' sense of co-presence and on their preferred viewing distance. The main results revealed that users preferred to get closer to the screen when communicating by videoconference than when watching a TV program. Our study suggests that screen size has an effect on the preferred viewing distance and on participants' sense of co-presence, with higher co-presence scores for larger screens.
Manipulating complex network structures in virtual reality and 3D printing of the results BIBAFull-Text 30
  Alberto Betella; Alex Escuredo; Enrique Martínez; Pedro Omedas; Paul F. M. J. Verschure
We present an immersive VR system that allows users to manipulate network data structures by creating, removing or reconfiguring their elements, and to export the results at any time for direct 3D printing.
Investigating the main characteristics of 3D real time tele-immersive environments through the example of a computer augmented golf platform BIBAFull-Text 31
  Benjamin Poussard; Guillaume Loup; Olivier Christmann; Rémy Eynard; Marc Pallot; Simon Richir; Franck Hernoux; Emilie Loup-Escande
This paper aims to identify and define the characteristics of 3D Real Time Tele-Immersive Environments (RT-TIEs), which are central to the 3D-LIVE European Research Project. An RT-TIE provides a "twilight space", a space in which users can be both physically and virtually present. The main characteristics of these kinds of environments are: the use of real-time interactions and immersive technologies, high costs (in most cases), a design process oriented toward end-users, and a disruptive user experience. Finally, a list of guidelines based on the literature is suggested for the design of an augmented golf platform implemented in the context of the 3D-LIVE project.
From driving simulation to virtual reality BIBAFull-Text 32
  Andras Kemeny
From the very beginning of the advent of VR technology, driving simulation has used the same technology for visualization and similar technology for head-movement tracking and high-end 3D vision. The two fields also share the same or similar difficulties in rendering the observer's movements in virtual environments. The visual-vestibular conflict, due to the discrepancies perceived by the human visual and vestibular systems, induces so-called simulation sickness when driving or moving using a control device (e.g. a joystick). Another cause of simulation sickness is the transport delay, the delay between an action and the corresponding rendering cues.
   Another similarity between driving simulation and VR is the need for correct 1:1 scale perception. Correct perception of speed and acceleration is crucial in driving simulation for automotive experiments on Advanced Driver Assistance Systems (ADAS), as vehicle behavior has to be simulated correctly, and wherever correct mental workload is an issue, since real immersion and driver attention depend on it. Correct perception of distances and object sizes is crucial when using HMDs or CAVEs, especially as their use frequently involves digital mockup validation for design, architecture, or interior and exterior lighting.
   Today, the advent of high-resolution 4K digital display technology allows near-eye-resolution stereoscopic 3D walls and their integration into high-performance CAVEs. Such CAVEs can now be used for vehicle ergonomics, styling, interior lighting and perceived quality. The first CAVE in France, built in 2001 at Arts et Metiers ParisTech, is a 4-sided CAVE with modifiable geometry using what is now traditional display technology. The latest is Renault's 70-megapixel 3D, 5-sided CAVE with 4K x 4K walls and floor, driven by a cluster of 20 PCs. Another facility recently designed at Renault is the motion-based CARDS driving simulator, whose CAVE-like 4-sided display system provides full 3D immersion for the driver.
   The separation between driving simulation and digital mockup design review is now fading, though different uses will require different simulation configurations.
   New application domains, such as automotive AR design, will combine features of VR and driving simulation techniques, including driving simulators equipped with CAVE-like display systems.
Stereoscopic augmented reality system for supervised training on minimal invasive surgery robots BIBAFull-Text 33
  Florin Octavian Matu; Mikkel Thøgersen; Bo Galsgaard; Martin Møller Jensen; Martin Kraus
Training in the use of robot-assisted surgery systems is necessary before a surgeon is able to perform procedures using these systems because the setup is very different from manual procedures. In addition, surgery robots are highly expensive to both acquire and maintain -- thereby entailing the need for efficient training. When training with the robot, the communication between the trainer and the trainee is limited, since the trainee often cannot see the trainer.
   To overcome this issue, this paper proposes an Augmented Reality (AR) system in which the trainer controls two virtual robotic arms. These arms are virtually superimposed on the video feed to the trainee and can therefore be used to demonstrate and perform various tasks for the trainee. Furthermore, the trainer is presented with a 3D image through a stereoscopic display. The added depth perception enables the trainer to better guide and help the trainee.
   A prototype has been developed using low-cost materials, and the system has been evaluated by surgeons at Aalborg University Hospital. User feedback indicated that a 3D display for the trainer is very useful, as it enables the trainer to better monitor the procedure and thereby enhances the training experience. The virtual overlay was also found to be a good, illustrative means of enhancing communication. However, the prototype's delay made it difficult to use for actual training.
Sharing 3D object with multiple clients via networks using vision-based 3D object tracking BIBAFull-Text 34
  Yukiko Shinozuka; Hideo Saito
This paper proposes a new system for sharing a 3D object with multiple clients over a network. Our system aims to support group communication across devices around the same 3D object and automatically overlays annotations using augmented reality. To keep the annotations in place during interaction, we apply a vision-based 3D object tracking algorithm that estimates the 3D position and pose of the object simply by capturing it with a single RGB camera. By employing a tracking algorithm robust to viewpoint changes and occlusion, we developed a 3D object sharing system with a desktop as a server and a laptop and a smartphone as clients. We conducted experiments showing that the tracking algorithm is tolerant of interaction and can be applied to various textured objects.
Powder box: an interactive synthesizer with sensor based replaceable interface BIBAFull-Text 35
  Yoshihito Nakanishi; Seiichiro Matsumura; Chuichi Arakawa
In this paper, the authors introduce an interactive synthesizer, "POWDER BOX", for use by novices in musical sessions [1]. "POWDER BOX" is equipped with sensor-based replaceable interfaces, which enable participants to discover and select their preferred instrument playing styles during a musical session. In addition, it has a wireless communication function that synchronizes musical scale and BPM across multiple devices. "POWDER BOX" provides novice participants with the opportunity to experience a cooperative music performance. Here, the interaction design and the configuration of the device are presented.
Virtual rope slider BIBAFull-Text 36
  Tatsuya Kodera; Naoto Tani; Jun Morita; Naoya Maeda; Kazuna Tsuboi; Motoko Kanegae; Yukiko Shinozuka; Sho Shimamura; Kadoki Kubo; Yusuke Nakayama; Jaejun Lee; Maxime Pruneau; Hideo Saito; Maki Sugimoto
This paper proposes "Virtual Rope Slider", which expands the rope sliding experience by stimulating sight and hearing and by providing wind and vestibular sensation. A real-world rope slide has physical restrictions in scale and location, whereas our "Virtual Rope Slider" provides scale- and location-independent experiences in a virtual environment. The user perceives a different sense of scale in the virtualized scenes through multi-modal stimulation combined with physical simulation.
BrainX³: embodied exploration of neural data BIBAFull-Text 37
  Alberto Betella; Ryszard Cetnarski; Riccardo Zucca; Xerxes D. Arsiwalla; Enrique Martínez; Pedro Omedas; Anna Mura; Paul F. M. J. Verschure
We present BrainX3, a novel immersive and interactive technology for the exploration of large biological datasets, customized in this paper for brain networks. Unlike traditional machine-inference systems, BrainX3 posits a two-way coupling of human intuition to powerful machine computation to tackle the big data challenge.
   Furthermore, through unobtrusive wearable sensors, BrainX3 can infer the user's state in terms of arousal and cognitive workload, adapting the visualization and sonification parameters accordingly to boost the exploration process.
FamiLinkTV: expanding the social value of the living room with multiplex imaging technology BIBAFull-Text 38
  Hisataka Suzuki; Yannick Littfass; Rex Hsieh; Hiroki Taguchi; Fujimura Wataru; Yukua Koide; Akihiko Shirai
In this paper, we introduce a technological solution that brings a multiplex imaging technique to the living room television screen. FamiLinkTV allows all family members to enjoy together an experience that goes far beyond simple screen sharing: instead of having different family members watch their own content in their own rooms or in separate time slots, FamiLinkTV aims to get people to sit on the same couch at the same time. Sharing physical space allows family bonding to take place by acting as a catalyst for social interaction. In order to involve more developers and artists in the search for innovative interaction-enhancing content, we opened the field to content creators by proposing a Unity3D plug-in that enables multiplex imaging on 3D flat panels. The current implementation allows content such as games, movies and camera feeds to be displayed on the screen and seen independently in real time, either with the naked eye (main content) or through inexpensive polarizing glasses (hidden content). This implementation of image hiding is likely to have a great impact on current multiplex technologies.