| In quest of realism and interactivity for virtual environments | | BIBA | Full-Text | 15 | |
| Ming C. Lin | |||
| The realism of a computer simulated system for virtual environments often depends heavily on three main components: graphics, behavior, and sound. Thanks to four decades of research in modeling, rendering, and advances in VLSI technologies for graphics hardware, today's game systems are able to render near photorealistic images at interactive rates. To further increase the player's experience and immersion, the recent trend has been toward introducing physics-based simulation and behaviors. However, many computational challenges remain due to the simultaneous quest for sensory realism and the performance requirements of these systems. Some of the key research issues include interactive motion synthesis of physically-plausible behavior of soft and articulated bodies, fast simulation of large-scale heterogeneous crowds, and real-time multi-sensory interaction. In this talk, I will present a few highlights of our recent efforts on addressing these problems. I will also demonstrate the results on several interactive applications, including cloth simulation for feature animation, sound rendering for computer games, 3D virtual painting for training and education, a catheterization procedure for liver chemoembolization, and crowd simulation for virtual cityscapes. I will conclude by suggesting some research opportunities and applications. | |||
| Cognitive strategies for spatial memory of navigation: studies combining virtual reality and brain imaging | | BIBA | Full-Text | 16 | |
| Alain Berthoz | |||
| This talk will deal with the neural basis of spatial memory during navigation. When navigating or trying to remember a traveled path, the brain uses different cognitive strategies. It can use, among others, an egocentric (topo-kinesthetic) memory of the travel, involving kinaesthetic memories of the route and episodic memory, but it can also use allocentric (topo-graphic), map-like memories. Different brain systems are involved in these strategies and they develop during ontogeny. I will describe studies using virtual reality in normal subjects and patients with hippocampal lesions. In addition, I will describe results obtained with fMRI and intracranial recordings in epileptic patients which identify the brain areas involved in these strategies. Virtual reality allows us to selectively identify the brain areas involved in such tasks as perspective change, manipulation of reference frames, decision making and some aspects of social interaction such as empathy. The paradigms we have designed for these fundamental studies can also be used for diagnosis of these deficits in schizophrenia, autism, and other psychiatric or neurological diseases. They could also be used for remediation in these diseases or in others such as agoraphobia. A new field is now open in which neuroscientists, neurologists, psychiatrists, otolaryngologists and roboticists can cooperate to try to compensate for this category of deficits in patients, during development or aging. | |||
| Semantic modelling for virtual worlds: a novel paradigm for realtime interactive systems? | | BIBA | Full-Text | 17-20 | |
| Marc Erich Latoschik; Roland Blach | |||
| The engineering of systems plays a central role in the development of successful Virtual Reality (VR) and Augmented Reality (AR) applications. Increasing computational resources are used to build increasingly complex artificial environments and extensive Human-Computer Interaction (HCI) systems. These types of Realtime Interactive Systems (RIS) establish a closed HCI loop: they continuously analyze users' input while concurrently synthesizing appropriate output for several of the human senses in real-time. | |||
| Advantages of velocity-based scaling for distant 3D manipulation | | BIBAK | Full-Text | 23-29 | |
| Curtis Wilkes; Doug A. Bowman | |||
| Immersive virtual environments (VEs) have the potential to offer rich
three-dimensional interaction to users. In many instances, however, 3D
interaction tasks are difficult due to both the imprecision of tracking devices
and the inability of users to achieve and maintain precise hand positions in 3D
space. One way to improve upon existing interaction techniques is to
dynamically change the sensitivity of the interaction technique based on user
input. Previous research has applied this principle to virtual hand-based
manipulation techniques; when the user slows down the movement of her physical
hand, the virtual hand slows down even more to allow precise manipulation. In
this study we extend the prior research by applying the velocity-based scaling
principle to HOMER, an existing at-a-distance manipulation technique based on
ray-casting. The scaled HOMER technique offers the user the freedom to
accomplish both long- and short-distance manipulation tasks with higher levels
of precision without compromising speed. We present results from a user study
that shows that the addition of scaling to HOMER significantly improves user
performance on 3D manipulation tasks. Keywords: 3D interaction, usability, user studies | |||
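To make the velocity-based scaling principle concrete, here is a minimal Python sketch in the spirit of the scaled HOMER technique above; the gain law and the constant SC are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

SC = 0.2  # scaling constant in m/s; a hypothetical tuning value

def scaled_hand_offset(prev_hand, curr_hand, dt):
    """Offset to apply to the manipulated object for this frame."""
    delta = np.asarray(curr_hand, dtype=float) - np.asarray(prev_hand, dtype=float)
    speed = np.linalg.norm(delta) / dt
    gain = min(1.0, speed / SC)  # slow, deliberate motion -> gain < 1 for precision
    return gain * delta          # fast motion keeps an (up to) 1:1 mapping
```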
| Video agent: interactive autonomous agents generated from real-world creatures | | BIBAK | Full-Text | 30-38 | |
| Yoshifumi Kitamura; Rong Rong; Yoshinori Hirano; Kazuhiro Asai; Fumio Kishino | |||
| We present a novel approach for interactive multimedia content creation that
establishes an interactive environment in cyberspace in which users interact
with autonomous agents generated from video images of real-world creatures.
Each agent has autonomy, personality traits, and behaviors that reflect the
results of various interactions determined by an emotional model with fuzzy
logic. After an agent's behavior is determined, a sequence of video images that
best match the determined behavior is retrieved from the database in which a
variety of video image sequences of the real creature's behaviors are stored.
The retrieved images are successively displayed in the cyberspace to make it
responsive, so the autonomous agent behaves continuously. In addition, an
explicit sketch-based method directly initiates the reactive behavior of the
agent without involving the emotional process. This paper describes the
algorithm that establishes such an interactive system. First, an image
processing algorithm to generate a video database is described. Then the
process of behavior generation using emotional models and sketch-based
instruction is introduced. Finally, two application examples are demonstrated:
video agents with humans and goldfish. Keywords: characters, computer animation, fuzzy logic, image processing, interactive
multimedia content, video database | |||
| ACTIF: an interactor centric interaction framework | | BIBAK | Full-Text | 39-42 | |
| Nicolai Hess; Jan D. S. Wischweh; Kirsten Albrecht; Kristopher J. Blom; Steffi Beckhaus | |||
| The design and implementation of interactions in 3D environments remains a
challenge. This is especially true for novices. Mechanisms to support the
creation of interaction have been developed, but they lack a central metaphor
that fits the natural way in which developers conceptualize interaction
techniques. In this paper, we introduce a new framework whose design mirrors
the essence of interaction throughout the Virtual Reality spectrum, where the
user is literally in the center. It also reflects the way in which interactions
are actually understood and described, based on the interactor and her actions.
Based on the central metaphor of the interactor, an implementation composed of three phases is developed. Those phases are: input retrieval and shaping, interpretation of user intentions, and execution of changes to the environment. Through these divisions, software requirements such as composition and reusability of components are satisfied. The resultant system, ACTIF, an ACTor centric Interaction Framework, structures interaction development in a meaningful and understandable way and at the same time eases the design and creation of new and experimental interactions. Keywords: 3D user interfaces, user centered HCI, virtual reality | |||
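A minimal sketch of the three-phase structure described above (input retrieval and shaping, interpretation of user intentions, execution of changes); the class and method names are hypothetical, not ACTIF's actual API.

```python
class InteractionTechnique:
    """Hypothetical interactor-centric pipeline, loosely following ACTIF's phases."""

    def __init__(self, devices, interpreters, executors):
        self.devices = devices            # phase 1: raw input sources
        self.interpreters = interpreters  # phase 2: map shaped input to user intent
        self.executors = executors        # phase 3: apply changes to the environment

    def update(self, scene):
        samples = [d.poll() for d in self.devices]                   # retrieve and shape input
        intents = [i.interpret(samples) for i in self.interpreters]  # infer user intentions
        for intent in intents:
            for e in self.executors:
                e.execute(intent, scene)                             # change the environment
```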
| Overcoming eye-hand visibility mismatch in 3D pointing selection | | BIBAK | Full-Text | 43-46 | |
| Ferran Argelaguet; Carlos Andujar; Ramon Trueba | |||
| Most pointing techniques for 3D selection in virtual environments rely on a
ray originating at the user's hand whose direction is controlled by the hand
orientation. In this paper we study the potential mismatch between visible
objects (those which appear unoccluded from the user's eye position) and
selectable objects (those which appear unoccluded from the user's hand
position). We study the impact of such eye-hand visibility mismatch on
selection performance, and propose a new technique for ray control which
attempts to overcome this problem. We present an experiment to compare our ray
control technique with classic raycasting in selection tasks with complex 3D
scenes. Our user studies show promising results of our technique in terms of
speed and accuracy. Keywords: 3D selection, raycasting, virtual pointer | |||
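The abstract does not detail the proposed ray control, but one plausible reconstruction is to anchor the selection ray at the eye while steering it with the hand, so that everything the ray can hit is also visible from the eye; the sketch below is purely illustrative, not necessarily the authors' technique.

```python
import numpy as np

def eye_anchored_ray(eye_pos, hand_pos, hand_dir, probe_dist=1.0):
    """Cast from the eye through a hand-controlled probe point (assumed scheme)."""
    probe = np.asarray(hand_pos, dtype=float) + probe_dist * np.asarray(hand_dir, dtype=float)
    direction = probe - np.asarray(eye_pos, dtype=float)
    return np.asarray(eye_pos, dtype=float), direction / np.linalg.norm(direction)
```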
| Navidget for immersive virtual environments | | BIBAK | Full-Text | 47-50 | |
| Sebastian Knödel; Martin Hachet; Pascal Guitton | |||
| We present a novel interaction technique called Immersive Navidget for
navigation in immersive virtual environments (VEs); see Figure 1. This
technique, based on Navidget, allows fast and easy 3D camera positioning from
simple controls. In this paper, we focus on the technical issues that are
induced by the VR setups and we propose solutions to adapt Navidget to the
immersive context. We show that this new approach has many advantages for
navigation in immersive VEs. Keywords: interaction techniques, virtual reality | |||
| HandNavigator: hands-on interaction for desktop virtual reality | | BIBAK | Full-Text | 53-60 | |
| Paul G. Kry; Adeline Pihuit; Adrien Bernhardt; Marie-Paule Cani | |||
| This paper presents a novel interaction system, aimed at hands-on
manipulation of digital models through natural hand gestures. Our system is
composed of a new physical interaction device coupled with a simulated
compliant virtual hand model. The physical interface consists of a
SpaceNavigator, augmented with pressure sensors to detect directional forces
applied by the user's fingertips. This information controls the position,
orientation, and posture of the virtual hand in the same way that the
SpaceNavigator uses measured forces to animate a virtual frame. In this manner,
user control does not involve fatigue due to reaching gestures or holding a
desired hand shape. During contact, the user receives realistic visual feedback in
the form of plausible interactions between the virtual hand and its
environment. Our device is well suited to any situation where hand gestures,
contact, or manipulation tasks need to be performed in a virtual environment. We demonstrate
the device in several simple virtual worlds and evaluate it through a series of
user studies. Keywords: hands, interaction, virtual reality | |||
| Digital foam interaction techniques for 3D modeling | | BIBAK | Full-Text | 61-68 | |
| Ross T. Smith; Bruce H. Thomas; Wayne Piekarski | |||
| Digital Foam is a new input sensor developed to support clay-like sculpting
and modeling operations. We present techniques facilitating navigation and
manipulation operations performed using Spherical Digital Foam as a sole input
device. Our free-form sculpting technique allows manipulation of new and
existing 3D models using accumulated sculpting-like motions. Digital Foam's
multi-point pressure sensitive surface captures the separate locations of a
user's fingertips allowing controlled manipulation of multiple model vertices
simultaneously. Additionally, we developed a technique that allows the camera
view and zoom to be controlled by applying varying pressure to the Digital Foam
surface. Furthermore, we have designed a menu system tailored for operation
using Spherical Digital Foam as a sole input device using both the internal
orientation sensor and the pressure sensitive surface.
A new higher resolution Spherical Digital Foam input device with 162 unique pressure sensors is presented. This is a significant improvement in comparison to the previous Spherical Digital Foam version with only 21 sensors. We discuss the design issues and how an increased resolution affects the operation and design of the algorithms used. We propose a new dynamic button allocation technique made possible using the new high resolution Spherical Digital Foam. Finally, we performed a trial study using the new 162 sensor Spherical Digital Foam input device to evaluate elements of the menu system. Keywords: 3D input device, augmented reality, digital foam, interaction techniques,
interactive modeling, virtual reality | |||
| CubTile: a multi-touch cubic interface | | BIBAK | Full-Text | 69-72 | |
| Jean-Baptiste de la Rivière; Cédric Kervégant; Emmanuel Orvain; Nicolas Dittlo | |||
| On the one hand, multitouch tactile interfaces offer many advantages and are
more and more widespread. Their use as 3D application interfaces is rather
limited though, since they offer large horizontal flat projection surfaces that
are not suited to many kinds of 3D operations. On the other hand, despite the
many propositions that have been made over the past years, no single interface
has proven to tackle the numerous specificities related to the 3D interaction
constraints. Through the CubTile device proposal, our preliminary work tries to
bring the strengths related to multitouch tactile surfaces into a device aimed
at 3D interactions. Consisting of a medium-sized cube where 5 of its 6 sides
are multitouch, our prototype senses several fingers, offers interaction
redundancy and lets a user handle 3D manipulation thanks to single handed and
bimanual input. Keywords: input interface, interaction in 3D virtual environments, multi-touch,
tactile | |||
| A rigid-body target design methodology for optical pose-tracking systems | | BIBAK | Full-Text | 73-76 | |
| Thomas Pintaric; Hannes Kaufmann | |||
| The standard method for estimating the rigid-body motion of arbitrary
interaction devices with an infrared-optical tracking system involves attaching
pre-defined geometric constellations of retro-reflective or light-emitting
markers, commonly referred to as "targets", to all tracked objects. Optical
markers of the same type are typically indistinguishable from each other,
requiring the tracking system to establish their identities through known
spatial relationships. Consequently, the specific geometric arrangement of
markers across multiple targets has a considerable impact on the system's
overall performance and robustness.
In this paper, we propose a simple new methodology for constructing optically tracked rigid-body targets. Our practically-oriented approach employs an optimization heuristic to compute near-optimal marker arrangements. Using prefabricated mounting fixtures, the assembly step requires only basic hobbyist tools and skills. Keywords: 6-DOF pose-tracking, marker constellation, model-based object tracking,
optical tracking, rigid body, target design | |||
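As an illustration of the kind of optimization heuristic mentioned above, the sketch below searches for marker positions whose pairwise distances are maximally distinct, so each marker can be identified from spatial relationships alone; the scoring function and random-restart search are assumptions, not the paper's algorithm.

```python
import itertools
import numpy as np

def distinctness(points):
    """Smallest gap between any two sorted pairwise marker distances."""
    dists = sorted(np.linalg.norm(p - q) for p, q in itertools.combinations(points, 2))
    return min(b - a for a, b in zip(dists, dists[1:]))

def search_constellation(n_markers=4, trials=10000, extent=0.1, seed=0):
    """Random-restart search for a near-optimal marker arrangement (in metres)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(trials):
        pts = rng.uniform(-extent, extent, size=(n_markers, 3))
        score = distinctness(pts)
        if score > best_score:
            best, best_score = pts, score
    return best, best_score
```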
| GPU techniques for creating visually diverse crowds in real-time | | BIBAK | Full-Text | 79-86 | |
| R. Galvao; R. G. Laycock; A. M. Day | |||
| Real-time crowds significantly improve the realism of virtual environments;
as a result, their use has increased considerably over the last few years in a
variety of applications, including real-time games and virtual tourism.
However, due to current hardware limitations, crowd variety tends to be
sacrificed in order for the crowd simulation to execute in real-time, which
decreases the quality and realism of the crowd.
Currently, the little variety that is incorporated in real-time crowds tends to be applied by modulating each avatar with random colours, which has a detrimental effect on texture quality. Furthermore, the existing crowd variety is often hard to define and control. To overcome these problems, a set of techniques is presented that defines and controls crowd variety, further improving on the current variety and quality of crowds. These techniques permit variety to be introduced: by changing the body mass via the application of a displacement map onto the mesh; by scaling the skeleton of the avatar; by applying HSV colour shifts to different parts of the avatar; and by transferring textures between avatar models. The appearance of the avatars under animation is also improved via the use of muscle displacement within the mesh. With the new techniques, the visual quality of the crowd is improved due to the increase in diversity. Keywords: crowd, real-time, variety | |||
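A small CPU sketch of the per-part HSV colour-shift idea (the paper implements this on the GPU); the part masks and shift ranges here are illustrative assumptions.

```python
import colorsys
import numpy as np

def hsv_shift(rgb, dh, ds, dv):
    """Shift one RGB colour (components in [0, 1]) in HSV space."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + dh) % 1.0,
                               float(np.clip(s + ds, 0.0, 1.0)),
                               float(np.clip(v + dv, 0.0, 1.0)))

def vary_avatar(part_colours, rng):
    """Apply an independent random shift to each body part's base colour."""
    return {part: hsv_shift(c, rng.uniform(-0.05, 0.05),   # small hue shift
                            rng.uniform(-0.2, 0.2),        # saturation shift
                            rng.uniform(-0.2, 0.2))        # value shift
            for part, c in part_colours.items()}
```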
| Haptic simulations based on non-smooth dynamics for rigid-bodies | | BIBAK | Full-Text | 87-90 | |
| Loïc Tching; Georges Dumont | |||
| In the context of virtual reality, haptic interfaces are coupled with
simulations that handle interactions between objects. To simulate contacts or
impacts, we focus our attention on simulators based on rigid-body dynamics.
In this article, we first give a brief state of the art of
closed-loop haptic interaction. Then, we discuss the use of non-smooth dynamics
methods for interactive, haptic-based simulations. We finally present our
research software, which proposes a haptic interface coupled with non-smooth
dynamics algorithms. Keywords: haptics, non-smooth dynamics, rigid-body simulations | |||
| A VR framework for interacting with molecular simulations | | BIBAK | Full-Text | 91-94 | |
| Nicolas Férey; Olivier Delalande; Gilles Grasseau; Marc Baaden | |||
| Molecular Dynamics is nowadays routinely used to complement experimental
studies and overcome some of their limitations. In particular, current
experimental techniques do not allow direct observation of the full dynamics of a
macromolecule at atomic detail. Molecular simulation provides time-dependent
atomic positions, velocities and system energies according to biophysical
models. Many molecular simulation engines can now compute a molecular dynamics
trajectory of interesting biological systems in interactive time. This progress
has led to a new approach called interactive molecular dynamics, which makes it
possible to control and visualise a molecular simulation in progress. We have developed a
generic library, called MDDriver, to facilitate the implementation of such
interactive simulations. It allows a network to be easily created between a
molecular user interface and a physically-based simulation. We use this library
in order to study an interesting biomolecular system, simulated by various
interaction-enabled molecular engines and models. We use a classical molecular
visualisation tool and a haptic device to control the dynamic behavior of the
molecule. This approach provides encouraging results for interacting with a
biomolecule and understanding its dynamics. Starting from this initial success,
we decided to use VR functionalities more intensively, by designing a VR
framework dedicated to immersive and interactive molecular simulations. This
framework is based on MDDriver, on the visualisation toolkit VTK, and on the
vtkVRPN library, which encapsulates the VRPN library into VTK. Keywords: VRPN, VTK, haptic feedback, interactive molecular dynamics, scientific
visualisation | |||
| SAILOR: a 3-D medical simulator of loco-regional anaesthesia based on desktop virtual reality and pseudo-haptic feedback | | BIBAK | Full-Text | 97-100 | |
| Lazar Bibin; Anatole Lécuyer; Jean-Marie Burkhardt; Alain Delbos; Madeleine Bonnet | |||
| Anaesthesia is a medical act which eliminates the feeling of pain as well as
the motor reactions of a person, before performing a surgical operation.
Loco-Regional Anaesthesia (LRA) concerns only a part of the body such as the
front arm or leg. This practice is increasingly used today notably because the
patient can remain conscious and can recover more rapidly. However, LRA still
remains a risky procedure. In this paper, we introduce a novel medical
simulator called SAILOR for training in LRA with neurostimulation. SAILOR
is based on desktop virtual reality, realistic 3-D rendering and interactive
techniques with a classical mouse and keyboard. It simulates the various
biological phenomena which can occur during an anaesthesia procedure. We also
introduce a novel pseudo-haptic effect to enhance the palpation of the virtual
patient's body and feel the inner organs. The first feedback from users of the
commercialized DVD version of SAILOR as well as the results of pilot tests
suggest that this simulator is a very promising tool for education and training
in LRA. Keywords: desktop virtual reality, interactive technique, loco-regional anaesthesia,
medical simulator, pseudo-haptic | |||
| A VR simulator for training and prototyping of telemanipulation of nanotubes | | BIBAK | Full-Text | 101-104 | |
| Zhan Gao; Anatole Lécuyer | |||
| This paper describes a virtual reality (VR) simulator for the purpose of
education, training and prototyping of telemanipulation of carbon nanotubes.
Major challenges in interfacing a human operator with tasks of manipulating
nanotubes via a haptic VR interface are outlined. After a review of previous
efforts, we present the current state of our VR simulator for nanotube
manipulation. The collision detection, interaction force modeling, deformation
simulation and haptic rendering of nanotubes are then discussed. Results of
virtual manipulation of carbon nanotubes are presented within an immersive VR
set-up. Keywords: VR, nanotube, simulation, telemanipulation | |||
| A real-time simulator for interventional radiology | | BIBAK | Full-Text | 105-108 | |
| Lindo Duratti; Fei Wang; Evren Samur; Hannes Bleuler | |||
| Interventional radiologists manipulate guidewires and catheters and steer
stents through the patient's vascular system under X-ray imaging for treatment
of vascular diseases. The complexity of these procedures makes training
mandatory in order to master hand-eye coordination, instrument manipulation and
procedure protocols for each radiologist. In this paper we present a simulator
for interventional radiology, which deploys a model of guidewire/catheter based
on the Cosserat theory applied to one-dimensional structures. The model starts
from the energetic formulation of the filament, considering Hooke's laws of
continuum mechanics, and Lagrange formulations are used to describe the model's
deformation. The model takes (self-)collisions into account and
proves very efficient for interactive applications. The simulation
environment allows the most common procedures to be carried out: guidewire and
catheter navigation, contrast dye injection to visualize the vessels, balloon
angioplasty and stent placement. Moreover, heartbeat as well as breathing are
also simulated visually. Keywords: Cosserat rod theory, X-ray, interventional radiology, minimally invasive
surgery, real-time simulation | |||
| Scenario sharing in a collaborative virtual environment for training | | BIBAK | Full-Text | 109-112 | |
| Stéphanie Gerbaud; Bruno Arnaldi | |||
| In this paper, we describe a system used in the context of virtual training
on collaborative maintenance procedures where the focus is on the learning of
the industrial procedure rather than technical gestures. In existing
collaborative virtual environments for training, the distribution of scenario
actions among actors is fixed: only one role can be associated with a given
scenario action. In this paper, we propose to overcome this limitation and to
add a mechanism to deal with this new flexibility. This mechanism is able to
dynamically select the best actor for an action, based on various criteria, and
to propose a distribution of actions among actors. We also propose to add
collaborative profiles to virtual humans to guide them in selecting the
next action to perform, possibly following the distribution suggestion.
Trainees and virtual humans can then adapt their activities while respecting
the reference procedure. Keywords: collaborative scenario, training, virtual environment | |||
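A minimal sketch of dynamically selecting the best actor for a scenario action from weighted criteria, as the mechanism above suggests; the criteria, weights, and actor interface are hypothetical stand-ins.

```python
WEIGHTS = {"distance": -1.0, "skill": 2.0, "busy": -3.0}  # hypothetical criteria weights

def select_actor(action, actors, weights=WEIGHTS):
    """Score each candidate (human trainee or virtual human) and pick the best."""
    def score(actor):
        return (weights["distance"] * actor.distance_to(action.location)
                + weights["skill"] * actor.skill_for(action)
                + weights["busy"] * float(actor.is_busy()))
    return max(actors, key=score)
```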
| An image-warping VR-architecture: design, implementation and applications | | BIBAK | Full-Text | 115-122 | |
| F. A. Smit; R. van Liere; B. Fröhlich | |||
| We describe an architecture that provides a programmable display layer in
order to allow the execution of custom programs on consecutive display frames.
This replaces the default display behavior of repeating application frames
until an update is available. The architecture is implemented using a multi-GPU
system. We show three applications of the architecture that are typical of VR.
First, smooth motion is provided by generating intermediate display frames by
per-pixel depth-image warping using 3D motion fields. Smooth motion can be
beneficial for walk-throughs of large scenes. Second, we implement fine-grained
latency reduction at the display frame level using a synchronized prediction of
simulation objects and the viewpoint. This improves the average quality and
consistency of latency reduction. Third, a crosstalk reduction algorithm for
consecutive display frames is implemented, which improves the quality of
stereoscopic images. Keywords: VR, crosstalk, image-warping, judder, latency, motion, stereoscopic display | |||
| A simple method for estimating the latency of interactive, real-time graphics simulations | | BIBAK | Full-Text | 123-129 | |
| Anthony Steed | |||
| One of the critical determinants of the effectiveness and usability of
interactive graphics simulations is the latency with which visual updates can
be made based on input from interaction devices. High latency can diminish
performance and can lead to simulator sickness. We demonstrate a new method for
measuring latency using a standard video camera. The method is simple to
configure, sensitive and rapid to use. This is in contrast to previous methods
which required specialized equipment, were laborious or could only determine
gross changes in latency. We attach a tracker to a pendulum and move a
simulated image on the screen using the tracker positions. We video both the
pendulum and simulated image together, and fit two sine curves, one to the centre
of motion of the pendulum and one to the centre of motion of the simulated image.
From the phase difference between these two sine curves we can determine
latency changes significantly smaller than the frame interval of the camera. We
demonstrate the method by comparing the latency of two different systems for
a CAVE™-like display. Keywords: interactive systems, latency, performance, real-time graphics, system design | |||
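A sketch of the phase-difference computation the abstract describes: fit a sine of the pendulum's frequency to both tracked centres and convert the phase lag to latency. The shared-frequency assumption and function names are illustrative, not the author's code.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_phase(t, x, freq):
    """Least-squares fit of a sine with known frequency; returns its phase."""
    model = lambda t, amp, phase, offset: amp * np.sin(2 * np.pi * freq * t + phase) + offset
    popt, _ = curve_fit(model, t, x, p0=[np.ptp(x) / 2, 0.0, np.mean(x)])
    return popt[1]

def latency_seconds(t, pendulum_x, image_x, freq):
    """Phase lag of the on-screen image behind the physical pendulum, as time."""
    dphi = (fit_phase(t, pendulum_x, freq) - fit_phase(t, image_x, freq)) % (2 * np.pi)
    return dphi / (2 * np.pi * freq)
```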
| Radiometric compensation for a low-cost immersive projection system | | BIBAK | Full-Text | 130-133 | |
| Julien Dehos; Eric Zeghers; Christophe Renaud; François Rousselle; Laurent Sarry | |||
| Catopsys is a low-cost projection system aiming at making mixed reality
(virtual, augmented or diminished reality) affordable. It combines a
videoprojector, a camera and a convex mirror and works in a non-specific room.
This system displays an immersive environment by projecting an image onto the
different parts of the room. However, the presence of an uncalibrated
projector, heterogeneous materials and light inter-reflections influence the
colors of the environment displayed in the room. Radiometric compensation of
the projection process enables the system to reduce this problem.
In this paper, we present our low-cost immersive projection system and propose a radiometric model and a compensation method which handle the projector response, surface materials and inter-reflections between surfaces. Our method works in two stages. First, the radiometric response of the projection process is evaluated. Then, this radiometric response is used to compensate the projection process in the desired environments. Keywords: immersive projection, mixed reality, radiometric compensation, virtual
reality | |||
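An illustrative per-pixel compensation under a simple linear model, captured = gain * projected + ambient; the paper's actual radiometric model also accounts for inter-reflections between surfaces, which this sketch omits.

```python
import numpy as np

def calibrate(captured_black, captured_white):
    """Estimate per-pixel gain and ambient term from two calibration shots."""
    ambient = captured_black                # camera response with projector at black
    gain = captured_white - captured_black  # response to full-white projection
    return gain, ambient

def compensate(desired, gain, ambient):
    """Invert the linear model for the projector input that yields `desired`."""
    with np.errstate(divide="ignore", invalid="ignore"):
        p = (desired - ambient) / gain
    return np.clip(np.nan_to_num(p), 0.0, 1.0)  # clamp to the projector's gamut
```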
| Using laser projectors for augmented reality | | BIBAK | Full-Text | 134-137 | |
| Björn Schwerdtfeger; Daniel Pustka; Andreas Hofhauser; Gudrun Klinker | |||
| The paper explores the use of laser projectors as an alternative to
head-mounted displays for Augmented Reality. We describe the development of an
Augmented Reality Laser Projector and report on experiences setting up AR
systems that use laser projectors, reasoning about several design criteria. Keywords: augmented reality, industrial augmented reality, laser projector | |||
| User boresight calibration precision for large-format head-up displays | | BIBAK | Full-Text | 141-148 | |
| Magnus Axholt; Stephen Peterson; Stephen R. Ellis | |||
| The postural sway in 24 subjects performing a boresight calibration task on
a large format head-up display is studied to estimate the impact of human
limits on boresight calibration precision and ultimately on static registration
errors. The dependent variables, accumulated sway path and omni-directional
standard deviation, are analyzed for the calibration exercise and compared
against control cases where subjects are quietly standing with eyes open and
eyes closed. Findings show that postural stability significantly deteriorates
during boresight calibration compared to when the subject is not occupied with
a visual task. Analysis over time shows that the calibration error can be
reduced by 39% if calibration measurements are recorded in a three second
interval at approximately 15 seconds into the calibration session as opposed to
an initial reading. Furthermore, parameter optimization on the experiment data
suggests a Weibull distribution as a possible error description and estimation
for omni-directional calibration precision. This paper extends previously
published preliminary analyses and the conclusions are verified with experiment
data that has been corrected for subject inverted pendulum compensatory head
rotation by providing a better estimate of the position of the eye. With
correction the statistical findings are reinforced. Keywords: augmented reality, boresight, calibration, line of sight, postural sway | |||
| Analyses of human sensitivity to redirected walking | | BIBA | Full-Text | 149-156 | |
| Frank Steinicke; Gerd Bruder; Jason Jerald; Harald Frenz; Markus Lappe | |||
| Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) while physically remaining in a reasonably small workspace by intentionally injecting scene motion into the IVE. In a constant-stimuli experiment with a two-alternative forced-choice task we have quantified how much humans can unknowingly be redirected onto virtual paths which are different from the paths they actually walk. 18 subjects were tested in four different experiments: (E1a) discrimination between virtual and physical rotation, (E1b) discrimination between two successive rotations, (E2) discrimination between virtual and physical translation, and discrimination of walking direction (E3a) without and (E3b) with start-up. In experiment E1a subjects performed rotations to which different gains had been applied, and then had to choose whether or not the visually perceived rotation was greater than the physical rotation. In experiment E1b subjects discriminated between two successive rotations where different gains had been applied to the physical rotation. In experiment E2 subjects chose whether they thought that the physical walk was longer than the visually perceived scaled travel distance. In experiment E3a subjects walked a straight path in the IVE which was physically bent to the left or to the right, and they estimated the direction of the curvature. In experiment E3a the gain was applied immediately, whereas in experiment E3b the gain was applied after a start-up of two meters. Our results show that users can be turned physically about 68% more or 10% less than the perceived virtual rotation, distances can be up- or down-scaled by 22%, and users can be redirected on a circular arc with a radius greater than 24 meters while they believe they are walking straight. | |||
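A minimal sketch of applying gains inside the thresholds reported above (roughly +68%/-10% rotation, ±22% distance, and a curvature radius of at least 24 m); the per-frame update is a generic reconstruction, not the authors' experiment code.

```python
import numpy as np

ROTATION_GAIN = 1.3      # within the reported detectability range (assumed choice)
CURVATURE_RADIUS = 24.0  # metres; the reported imperceptible bend limit

def update_virtual_pose(yaw_v, pos_v, d_yaw_phys, step_len):
    """Advance the virtual camera pose for one frame of physical motion."""
    yaw_v += ROTATION_GAIN * d_yaw_phys   # scaled (redirected) rotation
    yaw_v += step_len / CURVATURE_RADIUS  # gentle curvature gain while walking
    pos_v = pos_v + step_len * np.array([np.cos(yaw_v), np.sin(yaw_v)])
    return yaw_v, pos_v
```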
| A psychophysically calibrated controller for navigating through large environments in a limited free-walking space | | BIBAK | Full-Text | 157-164 | |
| David Engel; Cristóbal Curio; Lili Tcheang; Betty Mohler; Heinrich H. Bülthoff | |||
| Experience indicates that the sense of presence in a virtual environment is
enhanced when the participants are able to actively move through it. When
exploring a virtual world by walking, the size of the model is usually limited
by the size of the available tracking space. A promising way to overcome these
limitations is motion compression techniques, which decouple the position in
the real and virtual world by introducing imperceptible visual-proprioceptive
conflicts. Such techniques usually precalculate the redirection factors,
greatly reducing their robustness. We propose a novel way to determine the
instantaneous rotational gains using a controller based on an optimization
problem. We present a psychophysical study that measures the sensitivity of
visual-proprioceptive conflicts during walking and use this to calibrate a
real-time controller. We show the validity of our approach by allowing users to
walk through virtual environments vastly larger than the tracking space. Keywords: motion-compression, rotational gains, virtual reality | |||
| The effect of self-embodiment on distance perception in immersive virtual environments | | BIBAK | Full-Text | 167-170 | |
| Brian Ries; Victoria Interrante; Michael Kaeding; Lee Anderson | |||
| Previous research has shown that egocentric distance estimation suffers from
compression in virtual environments when viewed through head mounted displays.
Though many possible variables and factors have been investigated, the source
of the compression is yet to be fully realized. Recent experiments have hinted
in the direction of an unsatisfied feeling of presence being the cause. This
paper investigates this presence hypothesis by exploring the benefit of
providing self-embodiment to the user through the form of a virtual avatar,
presenting an experiment comparing errors in egocentric distance perception
through direct-blind walking between subjects with a virtual avatar and
without. The experiment shows a significant improvement in
egocentric distance estimation for users equipped with a virtual avatar over
those without. Keywords: egocentric distance perception, immersive virtual environments, virtual
avatar | |||
| Use of auditory cues for wayfinding assistance in virtual environment: music aids route decision | | BIBAK | Full-Text | 171-174 | |
| Janki Dodiya; Vassil N. Alexandrov | |||
| This paper addresses the crucial problem of wayfinding assistance in the
Virtual Environments (VEs). A number of navigation aids such as maps, agents,
trails and acoustic landmarks are available to support the user in navigating
VEs; however, most of these aids are visually dominated.
This work-in-progress describes a sound based approach that intends to assist
the task of 'route decision' during navigation in a VE using music.
Furthermore, with the use of musical sounds it aims to reduce the cognitive load
associated with other visually as well as physically dominated tasks. To
achieve these goals, the approach exploits the benefits provided by music to
ease and enhance the task of wayfinding, whilst making the user experience in
the VE smooth and enjoyable. Keywords: auditory navigation, sound and music perception, virtual environments,
wayfinding aids | |||
| Use of virtual reality for spatial knowledge transfer: effects of passive/active exploration mode in simple and complex routes for three different recall tasks | | BIBAK | Full-Text | 175-178 | |
| Grégory Wallet; Hélène Sauzéon; Jérôme Rodrigues; Bernard N'Kaoua | |||
| The use of virtual reality in the area of spatial cognition raises the
question of the quality of learning transfer from a virtual to a real
environment. Among the challenges, one is to determine the best cognitive aids
to improve the quality of transfer and the conditions in which this is best
achieved. The purpose of this study was to investigate the impact of passive
and active exploration modes on the quality of transfer in three different spatial
recall tasks when the route was simple or complex.
Ninety subjects (45 men and 45 women) participated in the experiment. Spatial learning was evaluated by 3 tasks: Wayfinding (route reproduction in reality), Sketch-mapping (free-hand drawing) and Scene-classification (putting a series of pictures in chronological order), in the context of the district of Bordeaux. In the Wayfinding task, active learning in a Virtual Environment (VE) improved performance compared to the passive learning condition, irrespective of the route complexity factor. In the Sketch-mapping task, active learning in a VE yielded better performance than the passive condition, but only for complex routes. In the Scene-classification task, no benefit was observed from active learning for either simple or complex routes. These results are discussed in terms of the functional demands of the three tasks and the route complexity dimension. Keywords: exploration mode, knowledge transfer, recall tasks, route complexity,
spatial cognition, virtual reality | |||
| Virtual reality as a tool for assessing episodic memory | | BIBAK | Full-Text | 179-182 | |
| Gaën Plancher; Serge Nicolas; Pascale Piolino | |||
| The principal attraction of virtual reality is its potential to create
experiments close to daily life with perfect experimental control. We performed
an experiment in a virtual town in order to develop a better episodic memory
assessment. We tested all components of episodic memory. Young and elderly
adults participated in the virtual test: they were either in an active
exploration or in a passive exploration of the town. The results showed that
older persons recalled the spatiotemporal context and the details of the events
in a lower proportion than younger ones, regardless of the active or
passive condition. However, no difference was found between active and passive
exploration in measures of episodic memory. Finally, correlations mainly
appeared between memory complaint and virtual scores, but not with a classical
verbal episodic memory test. The virtual test seems to allow a better
assessment of episodic memory compared to classical studies, especially because
of its components of spatiotemporal memory assessment. In conclusion, virtual
reality appears to offer the possibility of developing neuropsychological tools
closer to the daily life of patients. Keywords: action, ageing, episodic memory, virtual reality | |||
| Image-based texture replacement using multiview images | | BIBAK | Full-Text | 185-192 | |
| Doron Tal; Ilan Shimshoni; Ayellet Tal | |||
| Augmented reality is concerned with combining real-world data, such as
images, with artificial data. Texture replacement is one such task. It is the
process of painting a new texture over an existing textured image patch, such
that depth cues are maintained. This paper proposes a general and automatic
approach for performing texture replacement, which is based on multiview stereo
techniques that produce depth information at every pixel. The use of several
images allows us to address the inherent limitation of previous studies, which
are constrained to specific texture classes, such as textureless or
near-regular textures. To be able to handle general textures, a modified dense
correspondence estimation algorithm is designed and presented. Keywords: multiview stereo, texture replacement | |||
| VirtualizeMe: interactive model reconstruction from stereo video streams | | BIBAK | Full-Text | 193-196 | |
| Daniel Knoblauch; Falko Kuester | |||
| Tele-Immersion depends on face-to-face, viewpoint-corrected, stereoscopic
virtual environments, allowing users to naturally interact with each other and
the digital environment surrounding them, via realistic avatars. This paper
presents an extraction, modeling and communication methodology for the creation
of the needed avatars, based on a scalable focused disparity map (FDM)
approach. The FDM enables variable distance reconstruction of dynamic target
objects despite restrictions in disparity range. Combined with a pixel-based
disparity cost interpolation, a sub-pixel disparity refinement is achieved,
providing a high depth-resolution and smooth reconstruction of the target
objects. This technique approaches real-time characteristics, extracting avatar
models and communicating them to remote render nodes at 15 frames per second. Keywords: 3D from video, background extraction, disparity maps, point cloud streaming,
tele-immersion | |||
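The sub-pixel refinement could look like the standard parabola fit over matching costs around the best integer disparity, sketched below; the paper's pixel-based disparity cost interpolation may differ in detail.

```python
def subpixel_disparity(costs, d_best):
    """costs: matching cost per integer disparity; d_best: index of the minimum."""
    if 0 < d_best < len(costs) - 1:
        c0, c1, c2 = costs[d_best - 1], costs[d_best], costs[d_best + 1]
        denom = c0 - 2 * c1 + c2
        if denom != 0:
            return d_best + 0.5 * (c0 - c2) / denom  # vertex of the fitted parabola
    return float(d_best)
```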
| Feature points based facial animation retargeting | | BIBAK | Full-Text | 197-200 | |
| Ludovic Dutreve; Alexandre Meyer; Saïda Bouakaz | |||
| We present a method for transferring facial animation in real-time. The
source animation may be an existing 3D animation or 2D data provided by a
video tracker or a motion capture system. Based on two sets of feature points
manually selected on the source and target faces (the only manual work
required), a RBF network is trained and provides a geometric transformation
between the two faces. At each frame, the RBF transformation is applied on the
new feature points positions of the source face, resulting in new positions for
target feature points matching the expression of the source face and the
morphology of the target face. According to their displacements over time, we
deform the target mesh on the GPU with the linear blend skinning (LBS) method.
In order to make our approach attractive to novice users, we propose a
procedural technique to automatically rig the target face by generating
vertices weights for the skinning deformation. To summarize, our method
provides interactive expression transfer with minimal human intervention
during setup and accepts various kinds of animation sources. Keywords: facial animation, performance-driven facial animation, retargeting,
skeleton-subspace deformation | |||
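A minimal sketch of the RBF stage: learn a space deformation from the source feature points to the target feature points, then transfer each new frame. SciPy's RBFInterpolator stands in here for the paper's trained RBF network, and the kernel choice is an assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def train_retarget(src_neutral, tgt_neutral):
    """src_neutral, tgt_neutral: (n, 3) corresponding feature points."""
    return RBFInterpolator(src_neutral, tgt_neutral, kernel="thin_plate_spline")

def retarget_frame(rbf, src_frame):
    """Map this frame's source feature points onto the target face."""
    return rbf(src_frame)  # new target feature positions; these then drive LBS skinning
```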
| A novel method based on color information for scanned data alignment | | BIBAK | Full-Text | 201-204 | |
| Shen Yang; Yue Qi; Fei Hou; Xukun Shen; Qinping Zhao | |||
| This paper presents a rapid and robust method to align large sets of range
scans captured by a 3D scanner automatically. The method incorporates the color
information from the range data into the pairwise registration. Firstly, it
detects the features using SIFT (Scale-Invariant Feature Transform) on
grayscale images generated from the two range scans to be aligned. Then a quasi-dense
matching algorithm, based on the match propagation principle, is applied to
specify the matching pixel pairs between two images. All matches obtained are
mapped to 3D space but in different world coordinates, and filtered by the 3D
geometry constraint discovered from the range data. The remaining set of point
correspondences is used to estimate the rigid transformation. Finally, a
modified ICP (Iterative Closest Point) algorithm is applied to refine the
result. The paper also describes a framework to use this alignment method for
object reconstruction. The reconstruction proceeds by acquiring several range
scans with color information from different directions, following which
pairs of range scans are aligned with the above method selectively and
iteratively. Then a model graph containing the correct pair-wise matches is
created and a spanning tree specifying a complete model is constructed. Finally a
global optimization is performed to refine the result. This reconstruction
technique achieves robust, high performance when automatically
rebuilding 3D models of cultural heritage for virtual museums. Keywords: 3D scanning, automatic registration, coarse registration, multi-view
registration | |||
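The abstract does not name the estimator for the rigid transformation; a common choice, sketched here, is the SVD-based (Kabsch) solution over the filtered 3D correspondences.

```python
import numpy as np

def rigid_transform(P, Q):
    """Find R, t with Q ~ R @ P + t for (n, 3) correspondence sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```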
| Segmented gesture recognition for controlling character animation | | BIBAK | Full-Text | 205-208 | |
| En-Wei Huang; Li-Chen Fu | |||
| In this paper, we propose a method which uses vision-based gesture
recognition to control character animation. Each animation sequence has a
corresponding gesture to be recognized, and we focus on upper-body motions and
use one camera to capture images. Human gestures are modeled by a learned graph
model whose nodes are key frames of these gestures. The animation sequences are
pre-processed to generate a motion graph, and the mapping between the gesture
model and the animation motion graph is created. At run time, the recognized
node sequence in the gesture model will guide the animation to traverse the
animation motion graph. Our method avoids the complex process of completely
reconstructing the human motion and still holds the advantages such as being
intuitive, quickly responsive and versatile. The proposed method can be applied
to control avatar actions in a large virtual environment. Our experiments show
that the segmented gesture recognition can robustly control the animation with
quick response even when there are ambiguities in the initial poses of some
gestures. Keywords: character animation, interactive control | |||
| Opportunistic controls: leveraging natural affordances as tangible user interfaces for augmented reality | | BIBAK | Full-Text | 211-218 | |
| Steven J. Henderson; Steven Feiner | |||
| We present Opportunistic Controls, a class of user interaction techniques
for augmented reality (AR) applications that support gesturing on, and
receiving feedback from, otherwise unused affordances already present in the
domain environment. Opportunistic Controls leverage characteristics of these
affordances to provide passive haptics that ease gesture input, simplify
gesture recognition, and provide tangible feedback to the user. 3D widgets are
tightly coupled with affordances to provide visual feedback and hints about the
functionality of the control. For example, a set of buttons is mapped to
existing tactile features on domain objects. We describe examples of
Opportunistic Controls that we have designed and implemented using optical
marker tracking, combined with appearance-based gesture recognition. We present
the results of a user study in which participants performed a simulated
maintenance inspection of an aircraft engine using a set of virtual buttons
implemented both as Opportunistic Controls and using simpler passive haptics.
Opportunistic Controls allowed participants to complete their tasks
significantly faster and were preferred over the baseline technique. Keywords: 3D interaction, augmented reality, selection metaphor, tangible user
interfaces | |||
| Detection of moving objects and cast shadows using a spherical vision camera for outdoor mixed reality | | BIBAK | Full-Text | 219-222 | |
| Tetsuya Kakuta; Lu Boun Vinh; Rei Kawakami; Takeshi Oishi; Katsushi Ikeuchi | |||
| This paper presents a method to detect moving objects and remove their
shadows for superimposing them in Mixed Reality (MR) systems. We cut out the
foreground from a real image using a probability-based segmentation method.
Using color, spatial, and temporal priors, we can improve the accuracy of the
segmentation. Energy minimization is executed by graph cuts. Then we remove the
shadow region from the foreground with F-value calculated from the pixel value
and the spectral sensitivity characteristic of the camera. Finally we
superimpose virtual objects using the stencil buffer, which is used to limit
the area of rendering for each pixel. Synthesized images of an outdoor scene
show the efficiency of the proposed method. Keywords: augmented reality, foreground extraction, mixed reality, shadow removal | |||
| Napkin sketch: handheld mixed reality 3D sketching | | BIBAK | Full-Text | 223-226 | |
| Min Xin; Ehud Sharlin; Mario Costa Sousa | |||
| This paper describes Napkin Sketch, a 3D sketching interface which attempts
to support sketch-based artistic expression in 3D, mimicking some of the
qualities of conventional sketching media and tools both in terms of physical
properties and interaction experience. A portable tablet PC is used as the
sketching platform, and handheld mixed reality techniques are employed to allow
3D sketches to be created on top of a physical napkin. Intuitive manipulation
and navigation within the 3D design space is achieved by visually tracking the
tablet PC with a camera and mixed reality markers. For artistic expression
using sketch input, we improve upon the projective 3D sketching approach with a
one-stroke sketch plane definition technique. This, coupled with the hardware
setup, produces a natural and fluid sketching experience. Keywords: 3D design, mixed reality, sketch-based design | |||
| Mutual occlusions on table-top displays in mixed reality applications | | BIBAK | Full-Text | 227-230 | |
| Daniel Kurz; Kiyoshi Kiyokawa; Haruo Takemura | |||
| This paper describes an approach to dealing with mutual occlusions between
virtual and real objects on a table-top display. Display tables use stereoscopy
to make virtual content appear to exist in 3 dimensions on or above a table
top. The actual image, however, lies on the physical plane of the display
table. Any real physical object introduced above this plane therefore obstructs
our view of the display surface and disrupts the illusion of the virtual scene.
Occlusions thus occur between real objects and the display surface, not between
real and virtual objects. For the same reason virtual objects cannot
occlude real ones. Our approach uses an additional projector located near the
user's head to project those parts of virtual objects that should occlude real
ones directly onto the real objects. We describe possible applications and
limitations of the approach and its current implementation. Despite its
limitations, we believe that the proposed approach can significantly improve
interaction quality and performance for mixed reality scenarios. Keywords: mixed reality, mutual occlusions, table-top displays | |||
| Watercolor inspired non-photorealistic rendering for augmented reality | | BIBAK | Full-Text | 231-234 | |
| Jiajian Chen; Greg Turk; Blair MacIntyre | |||
| Non-photorealistic rendering (NPR) is an attractive approach for seamlessly
blending virtual and physical content in Augmented Reality (AR) applications.
Simple NPR techniques that use information from a single rendered image have
been demonstrated in real-time AR systems. More complex NPR techniques require
visual coherence across multiple frames of video, and typical offline
algorithms are expensive and/or require global knowledge of the video sequence.
To use such techniques in real-time AR, fast algorithms must be developed that
do not require information past the currently rendered frame. This paper
presents a watercolor-like NPR style for AR applications with some degree of
visual coherence. Keywords: Voronoi diagrams, augmented reality (AR), non-photorealistic rendering (NPR) | |||
| A framework for scalable virtual worlds using spatially organized P2P networks | | BIBAK | Full-Text | 237-238 | |
| Romain Cavagna; Maha Abdallah; Christian Bouville | |||
| The general craze for virtual environments, the potential of augmented
reality applications and the announced revolution of the Internet world (Web
2.0, Web3D) are key points for the emergence of an 'ambient' Web which will
make it possible for users to communicate, collaborate, entertain, work and
exchange content. In this context, content storage, delivery, and reproduction
are among the essential points for the deployment of a highly scalable platform
of wide reality. In this paper, we propose a self-scalable peer-to-peer
architecture for navigation in network-based virtual worlds. To reach this
goal, we propose a fully distributed and adaptive streaming method that quickly
adapts the reproduced content according to user interaction. Our content
delivery strategy has been implemented and tested on a dedicated simulator with
a large 3D city model. Keywords: peer-to-peer, self-adaptation, self-distribution, self-repartition,
self-scalability, virtual environments | |||
| Gesture recognition in flow based on PCA and using multiagent system | | BIBAK | Full-Text | 239-240 | |
| Ronan Billon; Alexis Nédélec; Jacques Tisseau | |||
| In our context of Virtual Theater, a virtual actor performs with a real
actor. They communicate through movements and choreography. The system has to
interpret the real actor's gesture into a symbolic representation. Therefore,
we present a method for real-time recognition. We use properties from Principal
Component Analysis (PCA) to create a signature for each gesture and a multiagent
system to perform the recognition. Keywords: gesture recognition, motion-capture, synthetic actor, virtual theatre | |||
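A hedged sketch of a PCA signature: the principal axes of one gesture's pose samples form a compact template, and live motion is scored by how well it projects onto those axes; the scoring measure is an assumption, and the multiagent layer is omitted.

```python
import numpy as np

def pca_signature(poses, k=3):
    """poses: (frames, dofs) motion-capture samples for one gesture."""
    centred = poses - poses.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return Vt[:k]                             # top-k principal axes as the signature

def match_score(signature, window):
    """How well a window of live motion is explained by a gesture's axes."""
    centred = window - window.mean(axis=0)
    proj = centred @ signature.T @ signature  # reconstruction from the signature axes
    return -np.linalg.norm(centred - proj)    # higher (less residual) is better
```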
| Clutter-aware adaptive projection inside a dynamic environment | | BIBAK | Full-Text | 241-242 | |
| Thitirat Siriborvornratanakul; Masanori Sugimoto | |||
| This paper presents a framework for a computationally adaptive projection
metaphor using a handheld projector inside a dynamic cluttered environment. In
addition to conventional self-correcting projection features, the framework
uses multiple clutter tracking and adaptive target generation to define the
clutter-aware target area for projection in a reliable manner. Using a paired
projector-camera system, the framework first builds high spatial frequency
feature maps using a Laplacian pyramid approach. The feature maps are then
passed to a rejection step to eliminate spurious features caused by contents of
the projected image. After the resulting features representing clutter are
processed by the appropriately designed tracker, the target area for projection is
generated. Finally, the desired information for projection is rendered and sent
back to the projector. The framework can be used effectively for a
clutter-aware handheld projector-based system without the need for a complex
hardware setup or any prior need to clean up the environment. Keywords: adaptive projection, clutter-aware, handheld projector, laplacian pyramids,
multiple target tracking, particle filters | |||
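One Laplacian pyramid level, as used for the high-spatial-frequency feature maps above, is the residual between an image and its blurred down/up-sampled version; the threshold value below is an illustrative assumption.

```python
import cv2
import numpy as np

def laplacian_level(gray):
    """High-frequency residual of one pyramid level for a grayscale uint8 image."""
    down = cv2.pyrDown(gray)
    up = cv2.pyrUp(down, dstsize=(gray.shape[1], gray.shape[0]))
    return cv2.absdiff(gray, up)

def clutter_feature_map(gray, thresh=20):
    """Binary map of strong high-frequency features (candidate clutter)."""
    return (laplacian_level(gray) > thresh).astype(np.uint8)
```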
| A hybrid approach towards fully automatic 3D marker tracking | | BIBAK | Full-Text | 243-244 | |
| Matthias Weber | |||
| Motion Capture is a powerful approach to track 3D position, usually
utilizing markers. Passive markers in particular do not hinder natural motion.
Unfortunately, such systems do not provide any information about which
anatomical landmark their markers belong to. Multiple manual actions are often
required to guide the tracking process. This work presents a hybrid approach
for nearly fully automatic identification and tracking of such markers. It
encompasses three methods for identification, using PCA-based alignment or
tree-based optimization, and tracking, using a neural network with
self-organizing characteristics. Keywords: hybrid tracking, motion capture, neural networks | |||
| Development and evaluation of a virtual reality simulator for training of thyroid gland nodules needle biopsy | | BIBAK | Full-Text | 245-246 | |
| Ilana de Almeida Souza; Claudiney Sanches, Jr.; Marcia N. S. Kondo; Marcelo Knorich Zuffo | |||
| Fine needle biopsy is an important procedure for investigating
tumors; it is considered low-cost, minimally invasive and ideal for supplying an
accurate diagnosis in cases of thyroid gland nodules. This work investigated
the possibility of the development of a virtual reality simulator for the
training of the ultrasound guided needle biopsy of thyroid gland nodules, using
3D models and haptic devices. The developed system is also an educative tool,
because besides practicing the procedure, the user can visualize a thyroid 3D
model and touch it to feel its texture, as well as rotate a complete model of
the neck with transparency in order to study all its internal organs. Keywords: medical simulation, needle biopsy, thyroid gland, virtual reality | |||
| Software platform for real-time room acoustic visualization | | BIBAK | Full-Text | 247-248 | |
| Rami Ajaj; Lauri Savioja; Christian Jacquemin | |||
| This paper presents a novel platform for interactive virtual room acoustics
simulation and visualization. The platform is based on two autonomous modules:
EVERTims for acoustic simulation and Audio2graphical for graphical rendering
and interaction management. The system is a tool for acousticians and
architects to better understand the acoustic properties of a closed space.
Besides visualizing sound reflections, it allows navigation as well as
modification of the geometrical properties of a given sound source and listener. Keywords: 3D visualization, real-time audio beam-tracing, room acoustic | |||
| On the hybrid aid-localization for outdoor augmented reality applications | | BIBAK | Full-Text | 249-250 | |
| I. M. Zendjebil; F. Ababsa; J-Y. Didier; M. Mallem | |||
| In mobile outdoor augmented reality applications, accurate localization is
critical to register virtual augmentations over a real scene. Vision-based
approaches provide accurate localization estimates but are still too sensitive
to outdoor conditions (brightness changes, occlusions, etc.). This drawback can
be overcome by adding other types of sensors. In this work, we combine a GPS
and an inertial sensor with a camera to provide accurate localization. We will
present the calibration process and we will discuss how to quantify the 3D
localization accuracy. Experimental results on real data are presented. Keywords: 3D localization, calibration, error prediction, hybrid sensor, outdoor
augmented reality | |||
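The abstract above does not give the fusion rule, only that GPS and an inertial sensor complement the camera when outdoor conditions defeat vision. Purely as an illustration of that idea, a minimal fallback scheme is sketched below; the threshold, tuple layout, and function name are all hypothetical.

```python
# Hedged sketch: trust the vision-based pose while it is reliable,
# fall back on GPS position + inertial orientation otherwise.
def fused_pose(vision_pose, gps_position, imu_orientation, min_inliers=30):
    """vision_pose: (position, orientation, n_inliers), or None when the
    visual tracker is lost. All fields and thresholds are assumptions."""
    if vision_pose is not None and vision_pose[2] >= min_inliers:
        return vision_pose[0], vision_pose[1]   # accurate vision estimate
    return gps_position, imu_orientation        # coarse but robust fallback
```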
| FlowVR-VRPN: first experiments of a VRPN/FlowVR coupling | | BIBAK | Full-Text | 251-252 | |
| Sébastien Limet; Sophie Robert | |||
| This paper describes a generic coupling between FlowVR, a middleware designed to develop modular VR applications for distributed architectures such as PC clusters, and VRPN, a library that manages a large number of physical devices. This generic coupling takes advantage of the FlowVR programming model to provide all the services offered by VRPN in FlowVR applications. Keywords: VR system, interaction, middleware | |||
| MRPipeline: a module based architecture for self adaptive mixed reality applications | | BIBAK | Full-Text | 253-254 | |
| Christian Reimann; Florian Klompmaker; Holger Santelmann | |||
| This paper introduces MRPipeline -- a module-based, configurable and self-adaptive approach for developing Mixed Reality applications. This work describes how our system can be configured and used for application creation by defining replaceable modules via XML. Each module has parameters that may be manipulated at run time. Self-adaptation is applied by continuously measuring, evaluating and optimising the output of the whole system. We therefore created measurement methods, evaluation functions and optimisation algorithms that especially focus on the requirements of mobile Mixed Reality applications. Keywords: augmented reality, mixed reality, self adaptation | |||
| Hand's 3D movement detection with one handheld camera | | BIBAK | Full-Text | 255-256 | |
| Mingming Fan; Liang Zhang; Yuanchun Shi | |||
| This paper presents a real-time and reliable vision-based method for recognizing a hand's 3D movement and using the movement parameters to control 3D objects. The 3D movement detection algorithm is based entirely on analyzing feature points from a single camera held in the user's hand. Because the algorithm works on frames captured from one camera in an untrained environment, it is difficult to distinguish similar movements in optical flow images, especially shifting from rotating; a novel differentiation algorithm that votes among several weak classifiers is therefore used. The algorithm directly maps the user's hand movement to object control. We designed an application that controls a virtual 3D cube's movement and estimated the accuracy of the algorithm. The experimental results show that the 3D movement detection algorithm is efficient and robust enough for real-time interaction. Keywords: 3D movement, classifiers, feature points, handheld camera, virtual interaction | |||
| Wind turbines' landscape: using virtual reality for the assessment of multisensory perception in motion | | BIBAK | Full-Text | 257-258 | |
| Jihen Jallouli; Guillaume Moreau; Ronan Querrec | |||
| Wind turbines (WT) are socially controversial because of their visual and acoustic impacts on the landscape. Virtual reality (VR), with its potential for immersion and interaction, is proposed here as an immersive and multisensory approach to assessing WT impacts. To evaluate how well VR restitutes landscape impacts, a real wind park is compared with its virtual counterpart. The parks are evaluated using an urban path-based method (perception in motion): in the virtual condition, real walking is simulated with a Wiimote. The results, while very similar to those of the in situ study, show the limits imposed by the lack of free motion. Keywords: impacts, landscape, motion, perception, wind turbines | |||
| Multimodal prop-based interaction with virtual mock-up: CAD model integration and human performance evaluation | | BIBAK | Full-Text | 259-260 | |
| Damien Chamaret; Paul Richard | |||
| This paper presents a methodology for the efficient integration of CAD models in a physics-based virtual reality simulation that provides the user with multimodal feedback. The user interacts with the virtual mock-up using a string-based haptic interface. Hand tracking is realized using a motion capture system. Stereoscopic images are displayed on a 2 m x 2.5 m retro-projected screen and viewed through polarized glasses. The proposed methodology, implemented in a low-cost system, has been validated through an experimental study. Six participants were instructed to remove a car lamp from the virtual mock-up and replace it in the correct position. A prop was used to provide local haptic sensation related to the car lamp. Three experimental conditions for sensory feedback from collisions were tested: (1) no feedback (graphics only), (2) visual feedback and (3) haptic feedback. Results show that visual and haptic feedback increased performance over the open-loop case (no feedback) by 17.8% and 35.2%, respectively. Keywords: CAD model, human-scale haptics, multimodality, virtual mock-up, virtual reality | |||
| VRPN and Qwerk: fast MR device prototyping and testing | | BIBAK | Full-Text | 261-262 | |
| Camilo A. Perez; Pablo Figueroa | |||
| We present a platform that offers designers flexibility in device design, fast prototyping, and integration of new devices into a mixed reality infrastructure. Our solution is based on the integration of a commercial embedded system, the Qwerk, and the Virtual Reality Peripheral Network (VRPN), a network-transparent interface between applications and typical virtual reality (VR) devices. This solution creates a hardware and software layer between new devices and VR applications that facilitates development. We show here a design process for new MR devices. With our hardware and software layer, designers can concentrate on the interaction rather than on the way sensors are connected. To test our design process and our platform, we implemented three simple examples. Keywords: augmented reality, embedded system, prototyping, virtual reality | |||
| Multi-touch gestural interaction in X3D using hidden Markov models | | BIBAK | Full-Text | 263-264 | |
| Sabine Webel; Jens Keil; Michael Zoellner | |||
| Multi-touch interaction on tabletop displays is a very active field of today's HCI research. However, most publications still focus on tracking techniques or develop gesture configurations for a specific application setup using a small set of simple gestures. In this work we present a new approach to easily set up the recognition of even complex gestures for multi-touch applications. Our gesture recognition module is based on Hidden Markov Models, which offer robust recognition of multiple gestures in real-time. An X3D interface to the recognition module is provided to enable designers and other non-programmers to apply gesture recognition functionality to multi-touch applications in an easy and straightforward manner. Keywords: X3D, gesture recognition, multi-touch, multi-user, tabletop interaction | |||
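The recognition core of an HMM-based gesture module is typically a likelihood comparison across per-gesture models. The sketch below is a generic, discrete-observation forward algorithm, not the paper's implementation; how touch trajectories are quantized into symbols is left out, and all names are assumptions.

```python
# Sketch: score an observation sequence against each gesture's HMM and
# pick the best-scoring model.
import numpy as np

def log_forward(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of `obs` (a sequence of
    symbol indices) under an HMM with initial probabilities pi,
    transition matrix A, and emissions B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return log_p

def classify(obs, models):
    """models: {gesture_name: (pi, A, B)} -> name of best-scoring HMM."""
    return max(models, key=lambda g: log_forward(obs, *models[g]))
```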
| Identification judgment of self-viewpoint movie | | BIBAK | Full-Text | 265-266 | |
| T. Kayahara | |||
| Does a self-viewpoint "life-log" movie, taken by a CCD camera on a person's forehead, contain any cues by which one can identify one's own movie among others'? In particular, do dynamic and non-episodic aspects of the self-viewpoint movie (scene shake caused by walking) make it possible to distinguish "my" movie from others'? To examine this question, subjects were asked to distinguish a movie taken by a CCD camera placed at their own forehead and body from movies taken at the head and body of others. Before the experiment, all subjects walked through a gymnasium whose visual condition was kept constant between subjects, to record a movie from each subject's viewpoint as the experimental stimuli. Any episodic visual event was eliminated from the content of the movies. The rate of correct judgment in distinguishing the movie from "my" viewpoint from others' was measured with a 2IFC procedure and was significantly higher than in a control condition in which subjects judged still images, suggesting that self-viewpoint movies might contain some non-episodic identity information. Keywords: identification judgment, self-viewpoint movie, wearable computing | |||
| Size estimation in product visualization using augmented reality | | BIBAK | Full-Text | 267-268 | |
| Alberto Gómez; Pablo Figueroa | |||
| This paper describes an experiment on how AR-based product visualization impacts product size selection, compared to traditional sales techniques. Our study suggests that our AR system is preferred over using a tape measure and a printed catalog with size information. We also found that users had difficulty selecting adequately sized models when the space to be filled is large, both with the catalog and with our AR-based solution. Keywords: ARToolKit, augmented reality, user interfaces, user studies on size estimation | |||
| SqueezeOrb: a low-cost pressure-sensitive user input device | | BIBAK | Full-Text | 269-270 | |
| Thomas Pintaric; Thomas Kment; Wolfgang Spreicer | |||
| This paper introduces a new low-cost pressure-sensitive user input device
called "SqueezeOrb". The device is built from an assembly force-sensing
resistors embedded in an elastic hand exerciser. A USB-enabled microcontroller
continuously samples the sensors, applies a double-exponential noise-reduction
filter and streams the resulting "handgrip strength" measurement to an attached
host computer at a frequency of up to 1000 Hz. When combined with optical
motion-tracking, the SqueezeOrb becomes a pressure-sensing input device for
three-dimensional interaction. Keywords: force sensor, handgrip measurement, haptic monitoring, pressure-sensitive
input device | |||
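The abstract names a double-exponential noise-reduction filter. A plausible reading, sketched below as an assumption rather than the authors' firmware, is double exponential (trend-aware) smoothing of the sampled grip signal; the smoothing constants are illustrative.

```python
# Sketch: double exponential smoothing of raw force samples.
class DoubleExponentialFilter:
    def __init__(self, alpha=0.2, beta=0.1):
        self.alpha, self.beta = alpha, beta   # level / trend constants
        self.level = None
        self.trend = 0.0

    def update(self, raw):
        """Feed one raw sensor sample, get the smoothed value back."""
        if self.level is None:                # first sample initializes
            self.level = raw
            return raw
        prev = self.level
        self.level = self.alpha * raw + (1 - self.alpha) * (prev + self.trend)
        self.trend = self.beta * (self.level - prev) + (1 - self.beta) * self.trend
        return self.level
```

At a 1000 Hz sampling rate, such a filter adds negligible latency while suppressing sensor jitter, which fits the streaming design the abstract describes.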
| Compared distortion effects between real and virtual ophthalmic lenses with a simulator | | BIBAK | Full-Text | 271-272 | |
| Gildas Marin; Edith Terrenoire; Martha Hernandez | |||
| In this study we compared the subjective effect of the distortions of ophthalmic lenses simulated in a virtual lens simulator to that of the equivalent real ophthalmic test lenses, under static monocular as well as dynamic monocular and binocular conditions, taking care to match the virtual and real conditions as closely as possible. Though visual perception was found to be similar in the static condition, distortions were judged to be exaggerated by the virtual lenses in dynamic conditions. Keywords: distortion, ophthalmic simulator, subjective comparison, subjective
perception, virtual reality | |||
| Real-time rendering of solvent-accessible surfaces for molecular models | | BIBAK | Full-Text | 273-274 | |
| Jun Lee; Sungjun Park; Youngjin Choi; Hyung Seok Kim; Jee-In Kim | |||
| In molecular modeling, real-time rendering of solvent-accessible surfaces for three-dimensional models is quite useful. Real-time rendering helps researchers analyze the three-dimensional structures and behaviors of molecular simulations: they can determine whether critical parts of molecular models are correctly visualized and properly combined at the right locations. However, it is quite difficult to render solvent-accessible surfaces in real-time using conventional molecular modeling tools, because the surfaces are visualized as isosurfaces that express chemical information and three-dimensional positions. In this paper, we propose a method that facilitates real-time rendering of solvent-accessible surfaces for three-dimensional molecular models. We evaluated the real-time interactivity of our method with molecular models; researchers can thus observe and manipulate solvent-accessible surfaces of three-dimensional molecular models in real-time. Keywords: animation, metaballs, solvent-accessible surface | |||
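The keywords point to metaballs; a common way to cast a solvent-accessible surface as a metaball isosurface, sketched below as a generic illustration rather than the paper's method, is to inflate each atom's van der Waals radius by the solvent probe radius and threshold the summed radial field.

```python
# Sketch: metaball-style scalar field whose unit isosurface approximates
# the solvent-accessible surface. Kernel choice is illustrative.
import numpy as np

PROBE_RADIUS = 1.4  # Angstrom, a typical water probe radius

def sas_field(p, centers, vdw_radii):
    """Field at point p (centers: Nx3, vdw_radii: N). Each atom
    contributes r_i^2 / d^2, which equals 1 at distance r_i, so the
    isosurface {field == 1} approximates the solvent-accessible surface
    when r_i = vdW radius + probe radius."""
    r = vdw_radii + PROBE_RADIUS
    d2 = np.sum((centers - p) ** 2, axis=1)
    return np.sum(r ** 2 / np.maximum(d2, 1e-12))
```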
| A movable-screen immersive projection display | | BIBAK | Full-Text | 275-276 | |
| Yuichi Tamura; Hiroaki Nakamura; Atsushi Ito | |||
| We propose a new room-sized immersive projection display. The display
consists of a cylindrical screen that can be moved horizontally and vertically,
allowing the user to easily change his/her field of view by moving the screen
to any angle. The angle of the screen is measured by a motion sensor, and the
projected stereo images are changed in response to the measured angle.
With this cylindrical screen system, it is necessary to project a distorted image onto the screen to produce a correct image. We make the distorted image using a multi-pass rendering method. In addition, a magnetic sensor measures the position and angle of the user and the screen images are changed if the user moves. Keywords: 3D display, immersive projection display, virtual reality, visualization | |||
| An attempt of real-time CG control with multi-touch devices | | BIBAK | Full-Text | 277-278 | |
| Asako Soga; Masahito Shiba; Tetsuya Kawamoto | |||
| We have been developing real-time CG control systems with Lemurs, which are multi-touch devices. We have developed two prototype systems that control CG objects and animation, and a practical system that supports the creation of TV content. Our system, which provides an easy and intuitive way to control two or more parameters simultaneously, allows users to control complicated data such as real-time CG content. We verified that the system can be used in actual broadcasting. Keywords: CG control, multi-touch, user interface | |||
| AVACHAT: a new comic-based chat system for virtual avatars | | BIBAK | Full-Text | 279-280 | |
| Soo-Hyun Park; Seung-Hyun Ji; Dong-Sung Ryu; Hwan-Gue Cho | |||
| We propose AVACHAT, a new comic-stylized communication interface for avatar agents. We show that 3-D word balloons can successfully depict chat dialogues and the atmosphere of groups talking, such as cheerful laughing or loud quarrelling, without adding any multimedia functions. Finally, we propose a new data structure to manage the chat dialogues among virtual avatars; it can be used to reconstruct the social graph of chat agents in a virtual world. Keywords: chat communication, stylized comics, virtual avatar | |||
| Cross-modal information display to improve driving performance | | BIBAK | Full-Text | 281-282 | |
| Shin'ichi Onimaru; Taro Uraoka; Naoyuki Matsuzaki; Michiteru Kitazaki | |||
| We developed a driving simulator with a visual and/or auditory information display to enhance the real-time perception of the driving car's lateral position. The purpose of this study was to test the effects of this cross-modal assistance information on driving performance. We found that discrete visual assistance improved driving accuracy but increased the driving load, whereas for auditory and audio-visual assistance, continuous information improved accuracy without increasing the load. Thus, cross-modal information is useful for assisting and improving the driver's performance with less load. Keywords: auditory perception, cross-modal information, driving simulator, steering control, vision | |||
| Force model for CAD selection | | BIBAK | Full-Text | 283-284 | |
| F. Picon; M. Ammi; P. Bourdot | |||
| In this paper we investigate attraction techniques using the haptic modality for CAD applications. An important issue in CAD systems is the modification of the Boundary Representation (B-Rep) of 3D objects. However, this fundamental task is possible only if a geometric element (vertex, edge, face) has already been selected. This paper focuses on a generic force feedback model of haptic attraction for CAD applications. Our model is flexible and allows experimentation with several behaviours. Several evaluation studies were carried out to compare the impact of the model parameters on user performance and comfort of use. Keywords: CAD-VR integration, haptic attraction | |||
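The paper's generic, parameterized model is not reproduced here; as a minimal sketch of the usual shape of such attraction forces, the snippet below pulls the haptic probe toward a selectable B-Rep element with a spring force that activates inside a capture radius. The gain and radius are illustrative parameters of the kind the study compares.

```python
# Sketch: spring-like haptic attraction toward a selection target.
import numpy as np

def attraction_force(probe, target, k=200.0, radius=0.02):
    """Force (N) pulling the probe toward `target` once it enters the
    capture radius (metres); zero outside, so free motion is unaffected."""
    d = target - probe
    dist = np.linalg.norm(d)
    if dist > radius or dist == 0.0:
        return np.zeros(3)
    return k * d                        # F = k * (target - probe)
```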
| Comparing disparity based label segregation in augmented and virtual reality | | BIBAK | Full-Text | 285-286 | |
| Stephen D. Peterson; Magnus Axholt; Stephen R. Ellis | |||
| Recent work has shown that overlapping labels in far-field AR environments
can be successfully segregated by remapping them to predefined stereoscopic
depth layers. User performance was found to be optimal when setting the
interlayer disparity to 5-10 arcmin. The current paper investigates to what
extent this label segregation technique, label layering, is affected by
important perceptual defects in AR such as registration errors and mismatches
in accommodation, visual resolution and contrast. A virtual environment matched
to a corresponding AR condition but lacking these problems showed a 10% reduction in average response time. However, the performance pattern across different label layering parameters was not significantly different between the AR and VR environments, showing the robustness of this label segregation technique against
such perceptual issues. Keywords: label placement, mixed reality, stereoscopic displays, user interfaces,
visual clutter | |||
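The 5-10 arcmin interlayer disparity quoted above is an angular quantity; to place a label on a given depth layer, a renderer needs the corresponding on-screen parallax. A small-angle conversion is sketched below; the display parameters in the example are hypothetical.

```python
# Sketch: screen parallax (pixels) for a given angular disparity,
# using the small-angle relation parallax ~= disparity * viewing distance.
import math

def parallax_pixels(disparity_arcmin, viewing_distance_m, pixels_per_m):
    """Horizontal image separation producing the given angular disparity."""
    disparity_rad = math.radians(disparity_arcmin / 60.0)
    return disparity_rad * viewing_distance_m * pixels_per_m

# e.g. 5 arcmin viewed at 1 m on a ~4000 px-per-metre display ~ 5.8 px
```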
| A surround interface using the Wii controller with multiple sensor bars | | BIBAK | Full-Text | 287-288 | |
| Torben Schou; Henry J. Gardner | |||
| A previous paper [Schou and Gardner 2007] has described a project to port a
games engine into a two-walled Immersive Projection Theatre (IPT) and to
interact with that environment using the Nintendo "Wii" Remote[Nintendo 2008].
In the present work, we update this project to describe how Wii controllers
have now been demonstrated to work with a custom-built, multiple "Sensor Bar"
array to achieve a greater coverage of the IPT. Keywords: Wii remote, game engine, immersive projection theatre, sensor bar, virtual
reality | |||
| Using the Wii Balance Board as a low-cost VR interaction device | | BIBAK | Full-Text | 289-290 | |
| Gerwin de Haan; Eric J. Griffith; Frits H. Post | |||
| We demonstrate the use of the Wii Balance Board™ as a low-cost virtual
reality input device. We provide an overview of obtaining and working with the
sensor input. By processing the sensor values from the balance board, we are
able to use it for both discrete and continuous input, which can be used to
drive a variety of VR interaction metaphors. Using continuous input, the
balance board is well suited for interactions requiring two simultaneous
degrees of freedom and up to three total degrees of freedom, such as navigation
or rotation. The discrete input is suitable for control input, such as mode
switching or object selection. Keywords: balance board, input devices, virtual reality | |||
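Both the continuous and the discrete input described above derive from the board's four corner load sensors. The standard centre-of-pressure computation is sketched below; the sensor ordering and board dimensions are assumptions, not taken from the paper.

```python
# Sketch: centre of pressure from the Balance Board's four corner
# weights. Board size assumed ~43 x 24 cm.
def center_of_pressure(tl, tr, bl, br, width=0.433, depth=0.238):
    """Corner weights (top-left, top-right, bottom-left, bottom-right)
    -> (x, y) position in metres from the board centre."""
    total = tl + tr + bl + br
    if total <= 0.0:
        return 0.0, 0.0                 # nobody on the board
    x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
    y = (depth / 2.0) * ((tl + tr) - (bl + br)) / total
    return x, y
```

Thresholding x and y yields discrete lean events (mode switching, selection), while the raw values provide the two simultaneous continuous degrees of freedom used for navigation.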
| Alternative online extrinsic calibration techniques for minimally invasive surgery | | BIBAK | Full-Text | 291-292 | |
| Arun Kumar Raj Voruganti; Dirk Bartz | |||
| One of the main challenges in augmented reality (AR) based on an external optical tracking device is extrinsic calibration. One example is calibrating the endoscope camera with the tracking device to estimate the transformation between the endoscope-mounted marker and the endoscope sensor. In this paper, we describe two alternatives to the Hand-Eye method for online calibration. First, we describe a direct technique based on estimating a rigid transformation from corresponding point sets, together with our idea of improving calibration efficiency by collecting corresponding points that cover a broader area of the endoscope and tracker fields of view (FOV). Then, we describe a technique based on estimating the position and orientation of a planar object from a camera image. The main advantage of these techniques is that they are easily repeatable in applications where the relation between the camera sensor and the camera-mounted marker may change during the run-time of the AR application. Keywords: extrinsic calibration, hand-eye, pose estimation | |||
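The first technique rests on estimating a rigid transformation from corresponding point sets. The standard SVD-based (Kabsch) least-squares solution is sketched below; it is the textbook method, not necessarily the exact variant used in the paper.

```python
# Sketch: least-squares rigid transform between corresponding 3D point
# sets (e.g. tracker coordinates vs. camera coordinates).
import numpy as np

def rigid_transform(P, Q):
    """Return R, t minimizing ||Q - (R @ P + t)|| for Nx3 arrays P, Q
    of corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Collecting the correspondences over a broader area of both fields of view, as the abstract proposes, conditions this least-squares problem better than clustered samples would.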
| VR spray painting for training and design | | BIBAK | Full-Text | 293-294 | |
| Jonathan Konieczny; Gary Meyer; Clement Shimizu; John Heckman; Mark Manyen; Marty Rabens | |||
| A system is introduced for the simulation of spray painting. Head mounted
display goggles are combined with a tracking system to allow users to paint a
virtual surface with a spray gun. Ray tracing is used to simulate droplets
landing on the surface of the object, allowing arbitrary shapes and spray gun
patterns to be used. This system is combined with previous research on spray
gun characteristics to provide a realistic simulation of the spray paint
including the effects of viscosity, air pressure, and paint pressure. The
simulation provides two different output modes: a non-photorealistic display
that gives a visual representation of how much paint has landed on the surface,
and a photorealistic simulation of how the paint would actually look on the
object once it dried. Useful feedback values such as overspray are given.
Experiments were performed to validate the system. Keywords: VR applications, paint, user training, visualization | |||
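The deposition step described above casts droplet rays from the gun toward the surface. Purely as a sketch of that idea, and not the paper's simulation (which also models viscosity and air/paint pressure), the snippet below samples random rays inside a spray cone and accumulates paint per hit surface element; the intersection routine is abstracted and all parameters are illustrative.

```python
# Sketch: stochastic droplet rays within a spray cone.
import numpy as np

def random_cone_dir(axis, half_angle):
    """Random unit vector within a cone around the unit vector `axis`,
    roughly uniform over the cone cross-section (real gun patterns differ)."""
    a = np.array([1.0, 0, 0]) if abs(axis[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(axis, a); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    theta = half_angle * np.sqrt(np.random.rand())
    phi = 2.0 * np.pi * np.random.rand()
    return np.cos(theta) * axis + np.sin(theta) * (np.cos(phi) * u + np.sin(phi) * v)

def spray_step(gun_pos, gun_dir, intersect, accum, n=500, half_angle=0.3, dose=1e-3):
    """intersect(origin, direction) -> surface element id, or None on miss.
    Accumulates paint amounts per element into the dict `accum`."""
    for _ in range(n):
        hit = intersect(gun_pos, random_cone_dir(gun_dir, half_angle))
        if hit is not None:
            accum[hit] = accum.get(hit, 0.0) + dose
```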
| Localization system for large indoor environments using invisible markers | | BIBAK | Full-Text | 295-296 | |
| Yusuke Nakazato; Masayuki Kanbara; Naokazu Yokoya | |||
| We propose a user localization system that uses invisible markers for
wearable augmented reality (AR) in large indoor environments. Wearable AR
systems have received a great deal of attention as a new method for displaying
location-based information in the real world. To use wearable AR systems, it is necessary to measure the position and orientation of the user via a positioning infrastructure, without the undesirable visual effects that arise from merging real and virtual worlds. In addition, the infrastructure of the
localization environment must be constructed easily and cheaply. The proposed
system can estimate the position and orientation of a user precisely by
affixing wallpapers containing printed invisible markers on ceilings or walls.
The user's position and orientation are estimated by recognizing the markers
using an infrared camera with infrared LEDs. To construct environments for the
localization system, we developed an initialization tool that calibrates the alignment of the markers from photographs taken under flash illumination with a digital still camera. Keywords: augmented reality, invisible marker, localization | |||
| Grimage: 3D modeling for remote collaboration and telepresence | | BIBAK | Full-Text | 299-300 | |
| Benjamin Petit; Jean-Denis Lesage; Jean-Sébastien Franco; Edmond Boyer; Bruno Raffin | |||
| Real-time multi-camera 3D modeling provides full-body geometric and
photometric data on the objects present in the acquisition space. It can be
used as an input device for rendering textured 3D models, and for computing
interactions with virtual objects through a physical simulation engine. In this
paper we present a work in progress to build a collaborative environment where
two distant users, each one 3D modeled in real-time, interact in a shared
virtual world. Keywords: PC cluster, collaborative 3D interactions, marker-less 3D modeling,
multi-cameras, telepresence | |||
| Parallel LOD for static and dynamic generic geo-referenced data | | BIBAK | Full-Text | 301-302 | |
| Simon Arvaux; Joeffrey Legaux; Sébastien Limet; Emmanuel Melin; Sophie Robert | |||
| We illustrate a generic Level of Detail (LOD) technique, which is a key point in constructing distributed VR applications involving large static or dynamic datasets issued from measurements or large simulations. This demo focuses on a parallel LOD algorithm for GIS and presents a 3D textured section of the Loire river near Orléans. Since all visualized data are handled in RAM without complex pre-processing, the method is transposable to more dynamic contexts where texture data are generated from simulations. Keywords: GIS, VR applications, collaborative and distributed VR, scientific visualization | |||
| Collaborative exploration of 3D scientific data | | BIBAK | Full-Text | 303-304 | |
| Thierry Duval; Cédric Fleury; Bernard Nouailhas; Laurent Aguerreche | |||
| This demonstration introduces new ways for exploring Collaborative Virtual
Environments (CVE) that contain 3D scientific data sets obtained by simulation.
In order to make decisions according to their collective knowledge and
understanding of the simulation, the users must collaborate and share
experiences and comments. We provide tools to enable a good coordination
between the users, and to make each user aware of the activity of others. Each
user can navigate within the CVE: change her own position, orientation and
scale. Each user can also add annotations within the virtual universe. We
propose several 3D layouts for the presentation of the data, associated with
different 3D navigation tools. Consequently, the user can explore the data
according to various parameters such as time or temperature. Lastly, we propose a
new 3D interaction tool, called 2D Cursor / 3D Pointer, dedicated to selection
and manipulation of 3D objects, and application control. This 2D cursor is
associated with a 3D geometry in order to make people aware of the activity of
the users who are using this tool. Keywords: 3D interaction, 3D scientific visualization, collaborative virtual
environments | |||
| Wired gloves for every one | | BIBAK | Full-Text | 305-306 | |
| Hernando Ortega-Carrillo; Erika Martínez-Mirón | |||
| Wired gloves are one of the most useful tools in the field of Virtual Reality. By using them, users can interact with Virtual Environments more realistically than with a joystick, mouse, trackball or the like. Although some attempts to develop low-budget wired gloves have been made, these useful devices remain very expensive for the common user.
Motivated by this situation, we propose the design and implementation of wired gloves based on a novel low-budget technology. This technology uses indirect video analysis to detect joint movements, which are transferred via a set of wires attached to the joints. As a result, these gloves can be acquired at a very low price, and enthusiasts could even reproduce them. Keywords: flexible sensors, opto-mechanical, wired gloves | |||
| ArcheoTUI -- tangible interaction with foot pedal declutching for the virtual reassembly of fractured archaeological objects | | BIBAK | Full-Text | 307-308 | |
| Patrick Reuter; Guillaume Rivière; Nadine Couture; Stéphanie Mahut; Nicolas Sorraing; Loïc Espinasse | |||
| In this demonstration, we present ArcheoTUI, a new tangible user interface
for the efficient assembly of the 3D scanned fragments of fractured
archaeological objects. The key idea is to use tangible props for the
manipulation of the virtual fragments. In each hand, the user manipulates an
electromagnetically tracked prop, and the translations and rotations are
directly mapped to the corresponding virtual fragments on the display. For each
hand, a corresponding foot pedal is used to clutch the movements of the hands.
Hence, the user's hands can be repositioned, or another user can take over.
The software of ArcheoTUI is designed to easily change assembly hypotheses,
beyond classical undo/redo, by using a scene graph. Keywords: 3D interaction, tangible user interfaces | |||
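The per-hand foot-pedal clutching described above reduces to a simple rule: the tracked prop's incremental motion drives the virtual fragment only while the pedal is pressed. A minimal sketch of that mapping, with hypothetical names and poses as homogeneous matrices, follows.

```python
# Sketch: foot-pedal declutching of a tracked prop.
import numpy as np  # poses and deltas as 4x4 homogeneous matrices

def apply_prop_motion(fragment_pose, prop_delta, pedal_pressed):
    """Apply the prop's incremental motion to the virtual fragment only
    while the corresponding foot pedal is pressed."""
    if pedal_pressed:
        return prop_delta @ fragment_pose   # clutched: prop drives fragment
    return fragment_pose                    # declutched: hand repositions freely
```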