| Binocular eye tracking in VR for visual inspection training | | BIBA | Full-Text | 1-8 | |
| Andrew T. Duchowski; Eric Medlin; Anand Gramopadhye; Brian Melloy; Santosh Nair | |||
| This paper presents novel software techniques for binocular eye tracking within Virtual Reality and discusses their application to aircraft inspection training. The aesthetic appearance of the environment is driven by standard graphical techniques augmented by realistic texture maps of the physical environment. The user's gaze direction, as well as head position and orientation, are tracked to allow recording of the user's fixations within the environment. Methods are given for (1) integration of the eye tracker into a Virtual Reality framework, (2) stereo calculation of the user's 3D gaze vector, (3) a new 3D calibration technique developed to estimate the user's inter-pupillary distance post-facto, and (4) a new technique for eye movement analysis in 3-space. The 3D eye movement analysis technique is an improvement over traditional 2D approaches since it takes into account the 6 degrees of freedom of head movements and is resolution independent. Results indicate that although the current signal analysis approach is somewhat noisy and tends to underestimate the identified number of fixations, recorded eye movements provide valuable human factors process measures complementing performance statistics used to gauge training effectiveness. | |||
| Inertial and magnetic posture tracking for inserting humans into networked virtual environments | | BIBAK | Full-Text | 9-16 | |
| Eric R. Bachmann; Robert B. McGhee; Xiaoping Yun; Michael J. Zyda | |||
| Rigid body orientation can be determined without the aid of a generated
source using a nine-axis MARG (Magnetic field, Angular Rate, and Gravity)
sensor unit containing three orthogonally mounted angular rate sensors, three
orthogonal linear accelerometers and three orthogonal magnetometers. This paper
describes a quaternion-based complementary filter algorithm for processing the
output data from such a sensor. The filter forms the basis for a system
designed to determine the posture of an articulated body in real-time. In the
system the orientation relative to an Earth-fixed reference frame of each limb
segment is individually determined through the use of an attached MARG sensor.
The orientations are used to set the posture of an articulated body model.
Details of the fabrication of a prototype MARG sensor are presented.
Calibration algorithms for the sensors and the human body model are also
presented. Experimental results demonstrate the effectiveness of the tracking
system and verify the correctness of the underlying theory. Keywords: body tracking, complementary filtering, inertial/magnetic sensors,
quaternions, virtual environments | |||
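The quaternion-based complementary filter described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' algorithm: the angular-rate measurement is integrated through the quaternion kinematic equation, and the estimate is then blended a small fraction toward an orientation derived from the accelerometer and magnetometer readings (here passed in directly as `q_reference`, a stand-in for that computation) to cancel gyro drift.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One step of the quaternion kinematics q_dot = 0.5 * q * (0, omega)."""
    dq = 0.5 * quat_mul(q, np.array([0.0, *omega]))
    q = q + dq * dt
    return q / np.linalg.norm(q)

def complementary_update(q, omega, q_reference, dt, gain=0.02):
    """Propagate with the rate sensors, then pull the estimate a small
    fraction toward the accelerometer/magnetometer-derived orientation
    to cancel gyro drift (the complementary-filter idea)."""
    q = integrate_gyro(q, omega, dt)
    q = (1.0 - gain) * q + gain * q_reference
    return q / np.linalg.norm(q)
```

For example, integrating a constant rate of 1 rad/s about the z-axis for one second yields a quaternion close to a 1-radian rotation, while the blending step keeps the estimate anchored to the reference when the body is at rest.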
| Tracking based structure and motion recovery for augmented video productions | | BIBAK | Full-Text | 17-24 | |
| Kurt Cornelis; Marc Pollefeys; Luc Van Gool | |||
| Augmented Reality (AR) can hardly be called uncharted territory. Much
research in this area has revealed solutions to the three most prominent challenges
of AR: accurate camera state retrieval, resolving occlusions between real and
virtual objects and extraction of environment illumination distribution.
Solving these three challenges improves the illusion of virtual entities
belonging to our reality. This paper demonstrates an elaborated framework that
recovers accurate camera states from a video sequence based on feature
tracking. Without prior calibration knowledge, it is able to create AR Video
products with negligible/invisible jitter or drift of virtual entities starting
from general input video sequences. Together with the referenced papers, this
work describes a readily implementable and robust AR-System. Keywords: accurate registration, augmented reality, jitter reduction | |||
| SmartCU3D: a collaborative virtual environment system with behavior based interaction management | | BIBAK | Full-Text | 25-32 | |
| Weihua Wang; Qingping Lin; Jim Mee Ng; Chor Ping Low | |||
| To support real-time natural interaction in the Collaborative Virtual
Environment (CVE) with limited network bandwidth and computer processing power,
the development of an efficient interaction management mechanism is a key
issue. In this paper, we propose a behavior based interaction management
mechanism for managing the collaborative interactions among the distributed
users in our developed SmartCU3D, an Internet CVE system. With this mechanism,
message routing in the system becomes adaptive to the application and the
users' runtime interactions. This is achieved by providing an object-oriented
collaborative behavior description along with the 3D environment definition for
every individual CVE application. The motivation of this research is to develop
a framework to support a flexible and adaptive interaction management mechanism
for developing different CVE applications. Keywords: behavior, collaborative virtual environment, interaction management | |||
| An abstraction for awareness management in collaborative virtual environments | | BIBA | Full-Text | 33-39 | |
| Miguel Antunes; António Rito Silva; Jorge Martins | |||
| This paper describes an object-oriented abstraction for the problem of awareness management in Collaborative Virtual Environments (CVEs). The described abstraction allows for different types of awareness information and awareness management policies to be used. It is also described how the defined abstraction was used to support the awareness management policies of two demo CVE applications. | |||
| Audience interaction for virtual reality theater and its implementation | | BIBAK | Full-Text | 41-45 | |
| Sang Chul Ahn; Ig-Jae Kim; Hyoung-Gon Kim; Yong-Moo Kwon; Heedong Ko | |||
| Recently we have built a VR (Virtual Reality) theater in Kyongju, Korea. It
combines the advantages of VR and IMAX theater. The VR theater can be
characterized by a single shared screen and by multiple inputs from several
hundred people. In this case, multi-user interaction is different from that
of networked VR systems and must be reconsidered. This paper defines the
multi-user interaction in such a VR theater as Audience Interaction, and
discusses key issues for the implementation of the Audience Interaction. This
paper also presents a real implementation example in the Kyongju VR theater. Keywords: audience interaction, multi-user interaction, theater, virtual reality | |||
| An open software architecture for virtual reality interaction | | BIBAK | Full-Text | 47-54 | |
| Gerhard Reitmayr; Dieter Schmalstieg | |||
| This article describes OpenTracker, an open software architecture that
provides a framework for the different tasks involved in tracking input devices
and processing multi-modal input data in virtual environments and augmented
reality applications. The OpenTracker framework eases the development and
maintenance of hardware setups in a more flexible manner than what is typically
offered by virtual reality development packages. This goal is achieved by using
an object-oriented design based on XML, taking full advantage of this new
technology by allowing standard XML tools to be used for development, configuration
and documentation. The OpenTracker engine is based on a data flow concept for
multi-modal events. A multi-threaded execution model takes care of tunable
performance. Transparent network access allows easy development of decoupled
simulation models. Finally, the application developer's interface features both
a time-based and an event-based model, which can be used simultaneously, to
serve a large range of applications. OpenTracker is a first attempt towards a
"write once, input anywhere" approach to virtual reality application
development. To support these claims, integration into an existing augmented
reality system is demonstrated. We also show how a prototype tracking equipment
for mobile augmented reality can be assembled from consumer input devices with
the aid of OpenTracker. Once development is sufficiently mature, it is planned
to make OpenTracker available to the public under an open source software
license. Keywords: XML, mobile augmented reality, tracking, virtual reality | |||
| VRPN: a device-independent, network-transparent VR peripheral system | | BIBAK | Full-Text | 55-61 | |
| Russell M. Taylor, II; Thomas C. Hudson; Adam Seeger; Hans Weber; Jeffrey Juliano; Aron T. Helser | |||
| The Virtual-Reality Peripheral Network (VRPN) system provides a
device-independent and network-transparent interface to virtual-reality
peripherals. VRPN's application of factoring by function and of layering in the
context of devices produces an interface that is novel and powerful. VRPN also
integrates a wide range of known advanced techniques into a publicly-available
system. These techniques benefit both direct VRPN users and those who implement
other applications that make use of VR peripherals. Keywords: input devices, interactive graphics, library, peripherals, virtual
environments, virtual worlds | |||
| Exploring the past: a toolset for visualization of historical events in virtual environments | | BIBAK | Full-Text | 63-70 | |
| Stanislav L. Stoev; Matthias Feurer; Michael Ruckaberle | |||
| In this paper, we present a set of tools and techniques for visualization of
and interaction with historical data. First, we briefly describe the data
acquisition and preparation. Afterwards, we discuss in detail the interaction
approaches for exploration of historical data, containing a time dimension. In
particular, we propose (1) a continuous time increment approach for visualizing
3D slices continuously moving in the 4D space; (2) a fly-with mode for viewing
the scene from the perspective of the participants in the migration we are
visualizing; (3) a time lens, as an extension of the traditional magic lens, for
viewing arbitrary event times; (4) a time-space exploration tool for
simultaneously viewing more than one event time and location of interest; (5) a
guided exploration tool, which allows for viewing events on an interactively
defined or pre-defined path in space. Finally, we conclude the paper with a
discussion and comparison of the proposed tools. Keywords: four-dimensional data visualization, human-computer interface, interaction
techniques, virtual environment applications | |||
| A fast implicit integration method for solving dynamic equations of movement | | BIBAK | Full-Text | 71-76 | |
| Laurent Hilde; Philippe Meseure; Christophe Chaillou | |||
| The use of physics is a way to improve realism in Virtual Reality. In order
to calculate the movement of virtual objects in a real-time physically based
simulation, one has to solve ODEs (Ordinary Differential Equations) with
respect to time. The solution has to be fast enough to display the scene at an
interactive rate or to control a haptic device, and to be as reliable as
possible.
We show that the use of the implicit Euler method is possible in a real-time environment. It offers control over stability and thus behaves well with stiff ODEs, which are hard to solve. For this purpose, we propose to solve the non-linear system arising from Euler's integration at less cost than the usual Newton-like techniques by applying Broyden's method. Keywords: Broyden's method, ODE, control of haptic devices, implicit resolution,
physically based animation, real-time simulation | |||
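The abstract's core idea, solving the non-linear system of an implicit Euler step with Broyden's method instead of full Newton iterations, can be sketched as follows. This is a minimal illustration under assumed details (identity initial Jacobian, tolerances), not the paper's implementation:

```python
import numpy as np

def broyden_solve(g, x0, tol=1e-10, max_iter=50):
    """Find a root of g using Broyden's 'good' method: a rank-one secant
    update of the Jacobian approximation replaces the per-iteration
    Jacobian evaluations of plain Newton iteration."""
    x = x0.astype(float)
    J = np.eye(len(x))          # initial Jacobian approximation (assumed)
    fx = g(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)
        x_new = x + dx
        f_new = g(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - fx
        J += np.outer(df - J @ dx, dx) / (dx @ dx)   # secant update
        x, fx = x_new, f_new
    return x

def implicit_euler_step(f, x, h):
    """One implicit (backward) Euler step: solve y = x + h*f(y) for y."""
    g = lambda y: y - x - h * f(y)
    return broyden_solve(g, x.copy())
```

On the stiff test problem x' = -1000x with step h = 0.1, where explicit Euler would diverge, this implicit step contracts the state by a factor of 1/101 per step.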
| Temporal and spatial level of details for dynamic meshes | | BIBA | Full-Text | 77-84 | |
| Ariel Shamir; Valerio Pascucci | |||
| Multi-resolution techniques enhance the ability of graphics and visual systems to overcome limitations in time, space and transmission costs. Numerous techniques have been presented which concentrate on creating level-of-detail models for static meshes. Time-dependent deformable meshes impose even greater difficulties on such systems. In this paper we describe a solution for using levels of detail for time-dependent meshes. Our solution allows both temporal and spatial levels of detail to be combined in an efficient manner. By separating low and high frequency temporal information, we gain the ability to create very fast coarse updates in the temporal dimension, which can be adaptively refined for greater detail. | |||
| Real-time shadows for animated crowds in virtual cities | | BIBAK | Full-Text | 85-92 | |
| Céline Loscos; Franco Tecchia; Yiorgos Chrysanthou | |||
| In this paper, we address the problem of shadow computation for large
environments including thousands of dynamic objects. The method we propose is
based on the assumption that the environment is 2.5D, which is often the case
for virtual cities, thus avoiding complex visibility computation. We apply our
method for virtual cities populated by thousands of walking humans, which we
render with impostors, allowing real-time simulation.
In this paper, we treat the cases of shadows cast by buildings on humans, and by humans on the ground. To avoid 3D computation, we represent the shadows cast by buildings onto the environment with a 2.5D shadow map. When humans move, we quickly access the shadow information at the current location with a 2D grid. For each new position of a human, we compute its coverage by the shadow, and we render the shadow on top of the impostor with low cost using multi-texturing hardware. We also use the property of an impostor to display the shadow of humans on the ground plane, by projecting the impostor relative to the light source. The method is currently limited to sharp shadows and a single light source; however, approximations could be made to allow inexact soft shadows. We show in the results that the computation of the shadows, as well as the display, is done in real time, and that the method could easily be extended to real-time moving light sources. Keywords: dynamic shadows, multi-texturing, populated virtual cities, real time
rendering, shadow computation | |||
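The 2.5D shadow map the abstract describes can be sketched as a height-field precomputation plus a constant-time 2D grid lookup per human. The marching construction below and the coverage formula are illustrative assumptions (a directional, non-vertical light with sharp shadows); the paper's actual data structure may differ:

```python
import numpy as np

def build_shadow_map(height_field, light_dir, cell=1.0):
    """2.5D shadow map: for each grid cell, the elevation below which a
    point is in shadow. light_dir = (dx, dy, dz), dz < 0, points from the
    light toward the ground; we march from each cell toward the light."""
    dx, dy, dz = light_dir
    horiz = np.hypot(dx, dy)
    slope = -dz / horiz           # elevation lost per unit ground distance
    ny, nx = height_field.shape
    shadow = np.zeros_like(height_field)
    for j in range(ny):
        for i in range(nx):
            h_max, t = 0.0, cell
            while True:
                xi = int(round(i - dx / horiz * t))
                yi = int(round(j - dy / horiz * t))
                if not (0 <= xi < nx and 0 <= yi < ny):
                    break
                # an occluder of this height shadows everything below
                # its top minus the drop along the light ray
                h_max = max(h_max, height_field[yi, xi] - slope * t)
                t += cell
            shadow[j, i] = h_max
    return shadow

def shadow_coverage(shadow_map, x, y, human_height):
    """Fraction of a human (a vertical segment) below the local shadow height."""
    s = shadow_map[int(round(y)), int(round(x))]
    return float(np.clip(s / human_height, 0.0, 1.0))
```

A human walking out of a building's shadow then only costs one grid lookup per frame, matching the constant-time access the abstract claims for the 2D grid.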
| Life-sized projector-based dioramas | | BIBAK | Full-Text | 93-101 | |
| Kok-Lim Low; Greg Welch; Anselmo Lastra; Henry Fuchs | |||
| We introduce an idea and some preliminary results for a new projector-based
approach to re-creating real and imagined sites. Our goal is to achieve
re-creations that are both visually and spatially realistic, providing a small
number of relatively unencumbered users with a strong sense of immersion as
they jointly walk around the virtual site.
Rather than using head-mounted or general-purpose projector-based displays, our idea builds on previous projector-based work on spatially-augmented reality and shader lamps. Using simple white building blocks we construct a static physical model that approximates the size, shape, and spatial arrangement of the site. We then project dynamic imagery onto the blocks, transforming the lifeless physical model into a visually faithful reproduction of the actual site. Some advantages of this approach include wide field-of-view imagery, real walking around the site, reduced sensitivity to tracking errors, reduced sensitivity to system latency, auto-stereoscopic vision, the natural addition of augmented virtuality and the provision of haptics. In addition to describing the major challenges to (and limitations of) this vision, in this paper we describe some short-term solutions and practical methods, and we present some proof-of-concept results. Keywords: augmented virtuality, diorama, immersive visualization, multiprojector
display system, shader lamp, spatially-augmented reality, user interface,
virtual environment, virtual reality | |||
| Balance NAVE: a virtual reality facility for research and rehabilitation of balance disorders | | BIBA | Full-Text | 103-109 | |
| Jeffrey Jacobson; Mark S. Redfern; Joseph M. Furman; Susan L. Whitney; Patrick J. Sparto; Jeffrey B. Wilson; Larry F. Hodges | |||
| We are currently developing an immersive virtual environment display for research into the rehabilitation of balance disorders, called the Balance NAVE (BNAVE). Using this system, the therapist can create varying degrees of sensory conflict and congruence in persons with balance disorders. With the capability of changing visual scenes based on the needs of the therapist, the BNAVE is a promising tool for rehabilitation. The system uses four PCs, three stereoscopic projectors, and three rear-projected screens, which surround the patient's entire horizontal field of view. The BNAVE can accommodate many types of sensors and actuators for a wide range of experiments. | |||
| Stereoscopic video system with embedded high spatial resolution images using two channels for transmission | | BIBAK | Full-Text | 111-118 | |
| Takafumi Ienaga; Katsuya Matsunaga; Kazunori Shidoji; Kazuaki Goshi; Yuji Matsuki; Hiroki Nagata | |||
| Teleoperation requires both wide vision to recognize a whole workspace and
fine vision to recognize the precise structure of objects which an operator
wants to see. In order to achieve high operational efficiency in teleoperation,
we have developed the Q stereoscopic video system, which consists of four
sets of video cameras and monitors. It requires four video channels to transmit
video signals. However, four channels are not always available for a video
system because of the limitation of the number of radio channels when multiple
systems are used at the same time. Therefore we have tried to reduce the number
of channels on this system by sending images from the right and left cameras
alternately by field. In experiment 1, we compared the acuity of depth
perception under three kinds of stereoscopic video systems, the original Q
stereoscopic video system, the Q stereoscopic video system with two channel
transmission, and the conventional stereoscopic video system. The experiment
showed that the original Q stereoscopic video system enabled us to perceive
depth most precisely, the Q stereoscopic video system with two channel
transmission less so, and the conventional stereoscopic video system even less.
In experiment 2, we compared the Q stereoscopic video system with two channel
transmission to the original Q stereoscopic video system. The result showed
that the operators were able to work more efficiently under the original Q
stereoscopic video system than under the Q stereoscopic video system with two
channel transmission. In experiment 3, we compared the Q stereoscopic video
system with two channel transmission to the conventional stereoscopic video
system. It was found in this study that the new stereoscopic video system
we developed enabled operators to work more efficiently and to perceive depth
more precisely than the conventional stereoscopic video system, although the
number of channels for image transmission of this system was equal to that of
the conventional stereoscopic video system. Keywords: Q stereoscopic video system, compound image, operational efficiency,
teleoperation, temporal resolution | |||
| Surround aesthetics: VR as an art form | | BIBA | Full-Text | 119 | |
| Diane Gromala; Rebecca Allen | |||
| VR has been explored as an artistic medium since its early beginnings in the last century. It has drawn on various other forms, such as theatre and the panorama. What are the current aesthetics of VR? How has content matured, engaged, and created a new sense of awe, beauty, and social engagement? This is an opportunity for leading virtual reality artists from around the world to share their work. | |||
| Virtual reality and public spaces | | BIB | Full-Text | 119 | |
| Doug MacLeod; Pierre Boulanger | |||
| Scalable data management using user-based caching and prefetching in distributed virtual environments | | BIBAK | Full-Text | 121-126 | |
| Sungju Park; Dongman Lee; Mingyu Lim; Chansu Yu | |||
| For supporting real-time interaction in distributed virtual environments
(DVEs), it is common to replicate virtual world data at clients from the
server. For efficient replication, two schemes are used together in general --
prioritized transfer of objects and a caching and prefetching technique.
Existing caching and prefetching approaches for DVEs exploit spatial
relationship based on distances between a user and objects. However, spatial
relationship fails to determine which types of objects are more important to an
individual user, since it does not reflect the user's interests. We propose a scalable data
management scheme using user-based caching and prefetching exploiting the
object's access priority generated from spatial distance and individual user's
interest in objects in DVEs. We also further improve the cache hit rate by
incorporating user's navigation behavior into the spatial relationship between
a user and the objects in the cache. By combining the interest score and
popularity score of an object with the spatial relationship, we improve the
performance of caching and prefetching since the interaction locality between
the user and objects is reflected in addition to spatial locality. The
simulation results show that the proposed scheme outperforms the hit rate of
existing caching and prefetching by 10% on average when the cache size is set
to the basic cache size, i.e. the expected number of objects included in the
user's viewing range. Keywords: DVEs, caching and prefetching, distributed virtual environments, scalable
data management, user interest | |||
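The access-priority idea above, combining spatial distance with a per-user interest score and an object popularity score, might look like the following sketch. The weighting scheme and the score ranges are illustrative assumptions, not the paper's formula:

```python
import math

def access_priority(distance, interest, popularity,
                    w_spatial=0.5, w_interest=0.3, w_popularity=0.2):
    """Blend spatial locality with per-user interest and object popularity.
    The weights here are illustrative, not the paper's values."""
    spatial = 1.0 / (1.0 + distance)      # closer objects score higher
    return w_spatial * spatial + w_interest * interest + w_popularity * popularity

def prefetch_order(objects, user_pos):
    """Rank candidate objects for caching/prefetching by descending priority.
    Each object is a dict with 'pos', 'interest', 'popularity' in [0, 1]."""
    def prio(obj):
        d = math.dist(user_pos, obj["pos"])
        return access_priority(d, obj["interest"], obj["popularity"])
    return sorted(objects, key=prio, reverse=True)
```

Under such a blend, a distant but highly interesting object can outrank a nearby uninteresting one, which is exactly the behavior a purely distance-based scheme cannot produce.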
| Prediction-based concurrency control for a large scale networked virtual environment supporting various navigation speeds | | BIBAK | Full-Text | 127-134 | |
| Eunhee Lee; Dongman Lee; Seunghyun Han; Soon J. Hyun | |||
| Shared sense of a virtual world is often enhanced by replicating the
information at each user's site since replication provides acceptable
interactive performance, especially when users are geographically distributed
over large networks like the Internet. However, multiple concurrent updates may
lead to inconsistent views among replicas. Therefore concurrency control is a
key factor to maintaining a consistent state among replicas. We proposed a
scalable prediction-based scheme in which an ownership request is multicasted
to only the users surrounding a target entity. In our previous work, we assumed
that all the users navigate a virtual world with a single speed. It, however,
is quite common in a networked virtual environment like a network game that
users are allowed to change their navigation speed as they interact with a
virtual world for adding more realism. This paper proposes an enhancement to
support users with various speeds. The enhanced scheme allows as many Entity
Radii as the number of different speeds and allocates a separate queue for
users of each speed. Each queue is examined in parallel to predict the next
owner candidate, and the final candidate, the one with the minimum predicted
collision time, is chosen from among them. The scheme contributes to the timely
advanced transfer of ownership by using appropriate Entity Radius based on a
user's speed, fair granting of ownership by reducing the interference between
users with different speeds and latencies, and high prediction accuracy by
reducing the redundant ownership transfer. Keywords: advance ownership request and transfer, concurrency control, prediction,
entity radius, generality, scalability, various navigation speed | |||
| A hybrid motion prediction method for caching and prefetching in distributed virtual environments | | BIBAK | Full-Text | 135-142 | |
| Addison Chan; Rynson W. H. Lau; Beatrice Ng | |||
| Although there are a few methods proposed for predicting 3D motion, most of
these methods are primarily designed for predicting the motion of specific
objects, by assuming certain object motion behaviors. We notice that in desktop
distributed 3D applications, such as virtual walkthrough and computer games,
the 2D mouse is still the most popular device being used as navigation input.
Through studying the motion behavior of a mouse during 3D navigation, we
propose a hybrid motion model for predicting the mouse motion during a 3D
walkthrough. At low motion velocity, we use a linear model for prediction and
at high motion velocity, we use an elliptic model for prediction. We describe
how this prediction method can be integrated into our distributed virtual
environment for object model caching and prefetching. We also demonstrate the
effectiveness of the prediction method and the resulting caching and
prefetching mechanisms through extensive experiments. Keywords: 3D navigation, caching, distributed virtual environments, motion prediction,
prefetching, virtual walkthrough | |||
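The velocity-switched structure of the hybrid predictor can be sketched as below. The threshold value and the high-velocity branch (a damped extrapolation standing in for the paper's elliptic model, whose exact form the abstract does not give) are assumptions:

```python
def predict_linear(positions, dt, horizon):
    """Constant-velocity extrapolation from the last two mouse samples."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)

def predict_hybrid(positions, dt, horizon, v_threshold=200.0, damping=0.5):
    """Switch predictors on the current speed, mirroring the paper's
    low-velocity/high-velocity split. v_threshold (px/s) and the damped
    high-speed branch are illustrative placeholders."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = (vx * vx + vy * vy) ** 0.5
    if speed <= v_threshold:
        return (x1 + vx * horizon, y1 + vy * horizon)
    return (x1 + damping * vx * horizon, y1 + damping * vy * horizon)
```

A caching client would call the predictor each frame and prefetch the object models near the predicted viewpoint rather than the current one.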
| Situational visualization | | BIBAK | Full-Text | 143-150 | |
| David M. Krum; William Ribarsky; Christopher D. Shaw; Larry F. Hodges; Nickolas Faust | |||
| In this paper, we introduce a new style of visualization called Situational
Visualization, in which the user of a robust, mobile visualization system uses
mobile computing resources to enhance the experience and understanding of the
surrounding world. Additionally, a Situational Visualization system allows the
user to add to the visualization and any underlying simulation by inputting the
user's observations of the phenomena of interest, thus improving the quality of
visualization for the user and for any other users that may be connected to the
same database. Situational Visualization allows many users to collaborate on a
common set of data with real-time acquisition and insertion of data. In this
paper, we present a Situational Visualization system we are developing called
Mobile VGIS, and present two sample applications of Situational Visualization. Keywords: dynamic databases, location and time-specific user input, location-specific
services, mobile users and collaborators, real-time acquisition and insertion
of data, synchronized databases | |||
| Visualization of particle traces in virtual environments | | BIBAK | Full-Text | 151-157 | |
| Falko Kuester; Ralph Bruckschen; Bernd Hamann; Kenneth I. Joy | |||
| Real-time visualization of particle traces in virtual environments can aid
in the exploration and analysis of complex three-dimensional vector fields.
This paper introduces a scalable method suitable for the interactive
visualization of large time-varying vector fields on commodity hardware. A
real-time data streaming and visualization approach and its out-of-core scheme
for the pre-processing and rendering of data are described. The presented
approach yields low-latency application start-up times and small memory
footprints. A proof-of-concept system was implemented on a low-cost Linux
workstation equipped with spatial tracking hardware, data gloves and shutter
glasses. The system was used to implement a virtual wind tunnel in which a
volumetric particle injector can introduce up to 60000 particles into the flow
field while an interactive rendering performance of 60 frames per second is
maintained. Keywords: computational fluid dynamics, out-of-core visualization, particle tracing,
scientific visualization, simulation, stereoscopic rendering, virtual reality,
virtual wind tunnel | |||
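Independently of the paper's streaming machinery, the core of a particle tracer, advecting seed points through a time-varying vector field, can be sketched in a few lines. The explicit Euler integrator here is an illustrative choice; production tracers typically use higher-order schemes:

```python
import numpy as np

def advect(particle, field, t, dt):
    """Advance one particle position a single step through a (possibly
    time-varying) vector field using explicit Euler integration."""
    return particle + dt * field(particle, t)

def trace(seed, field, t0, dt, steps):
    """Compute a particle trace: the list of positions visited by a seed
    point carried along by the field."""
    path = [np.asarray(seed, dtype=float)]
    t = t0
    for _ in range(steps):
        path.append(advect(path[-1], field, t, dt))
        t += dt
    return path
```

For example, a rigid-rotation field carries a seed on the unit circle around the origin, which makes a convenient sanity check for any integrator.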
| Is semitransparency useful for navigating virtual environments? | | BIBAK | Full-Text | 159-166 | |
| Luca Chittaro; Ivan Scagnetto | |||
| A relevant issue for any Virtual Environment (VE) is the navigational
support provided to users who are exploring it. Semitransparency is sometimes
exploited as a means to see through occluding surfaces with the aim of
improving user navigation abilities and awareness of the VE structure.
Designers who make this choice assume that it is useful, especially in the case
of VEs with many levels of occluding surfaces, e.g. virtual buildings or
cities. This paper is devoted to investigating this assumption with a proper
experimental evaluation on users. First, we discuss possible ways for improving
navigation, and focus on implementation choices for semitransparency as a
navigation aid. Then, we present and discuss the experimental evaluation we
carried out. We compared subjects' performance in three conditions: local
exploitation of semitransparency inside the VE, a more global exploitation
provided by a bird's-eye-view, and a control condition where neither of the two
features was available. Keywords: evaluation, navigation aids, wayfinding | |||
| FreeDrawer: a free-form sketching system on the responsive workbench | | BIBAK | Full-Text | 167-174 | |
| Gerold Wesche; Hans-Peter Seidel | |||
| A sketching system for spline-based free-form surfaces on the Responsive
Workbench is presented. We propose 3D tools for curve drawing and deformation
techniques for curves and surfaces, adapted to the needs of designers. The user
directly draws curves in the virtual environment, using a tracked stylus as an
input device. A curve network can be formed, describing the skeleton of a
virtual model. The non-dominant hand positions and orients the model while the
dominant hand uses the editing tools. The curves and the resulting skinning
surfaces can be interactively deformed. Keywords: 3D drawing, 3D sketching, 3D user interfaces, computer aided conceptual
design, curve and surface deformations, immersive shape modeling, responsive
workbench, variational modeling, virtual environments | |||
| VRID: a design model and methodology for developing virtual reality interfaces | | BIBAK | Full-Text | 175-182 | |
| Vildan Tanriverdi; Robert J. K. Jacob | |||
| Compared to conventional interfaces, virtual reality (VR) interfaces contain
a richer variety and more complex types of objects, behaviors, interactions and
communications. Therefore, designers of VR interfaces face significant
conceptual and methodological challenges in: a) thinking comprehensively about
the overall design of the VR interface; b) decomposing the design task into
smaller, conceptually distinct, and easier tasks; and c) communicating the
structure of the design to software developers. To help designers to deal with
these challenges, we propose a Virtual Reality Interface Design (VRID) Model,
and an associated VRID methodology. Keywords: design methodology, design model, user interface software, virtual reality | |||
| Interactive content for presentations in virtual reality | | BIBAK | Full-Text | 183-189 | |
| A. L. Fuhrmann; Jan Prikryl; Robert F. Tobler; Werner Purgathofer | |||
| In this paper, we develop concepts for presenting interactive content in
the form of a slideshow in a virtual environment, similar to conventional desktop
presentation software. We demonstrate how traditional content like text and
images can be integrated into 3D models and embedded applications to form a
seamless presentation combining the advantages of traditional presentation
methods with 3D interaction techniques and different 3D output devices. We
demonstrate how different combinations of output devices can be used for
presenter and audience, and discuss their various advantages. Keywords: augmented reality, content representation, embedded applications,
presentation, virtual reality | |||