
Proceedings of the 2006 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'06 ACM Symposium on Virtual Reality Software and Technology
Editors: Mel Slater; Yoshifumi Kitamura; Ayellet Tal; Angelos Amditis; Yiorgos Chrysanthou
Location: Limassol, Cyprus
Dates: 2006-Nov-01 to 2006-Nov-03
Standard No: ISBN 1-59593-321-2; Order Number: 609060
  1. Human factors I
  2. Tracking
  3. Rendering I
  4. Haptics & interaction
  5. Cultural heritage, education, and entertainment
  6. Characters & cities
  7. Rendering II
  8. Interaction
  9. Navigation
  10. Hardware & systems
  11. Collaboration
  12. Haptics in VR (special session)
  13. Modeling
  14. Human factors II
  15. Tutorial I
  16. Tutorial II
Scientific challenges in game technology BIBAFull-Text 1
  Mark Overmars
Computer games play an increasingly important role, both in entertainment and in serious applications, like education, training, communication, decision support, and marketing. Games use the most modern technology (both in hardware and in software) and future games will greatly benefit from new developments in such technology.
   After an introduction to games and their future use, we will discuss the scientific challenges that future games pose to researchers. Typical areas to be treated are new modeling techniques, the design and behavior of virtual characters and avatars, simulation of virtual worlds, and new interaction and interface techniques.
Real images for virtual reality BIBAFull-Text 2
  Luc Robert
In most Virtual Reality applications, people try to achieve a high degree of realism. Numerous techniques and approaches have been developed to achieve this by using pictures and videos of the real world.
   At REALVIZ, we have developed software products allowing digital content creators to acquire shape and motion from real images, using computer vision techniques. In this presentation I will give a snapshot of the current status of these technologies, and how they are used in various industries for VR- or AR-related applications.

Human factors I

Perceptual sensitivity to visual/kinesthetic discrepancy in hand speed, and why we might care BIBAKFull-Text 3-8
  Eric Burns; Frederick P. Brooks
We investigated the ability of a user in a head-mounted display virtual environment to detect a virtual hand avatar moving at a speed different than that of the real hand. We measured discrepancy detection thresholds for each of the six cardinal directions of 3-space (left, right, up, down, toward, and away). For each of these six directions we measured two discrepancy detection thresholds: one for when the avatar hand moved more quickly than the real hand and one for when it moved more slowly. We found a trend that users are less sensitive to increases in hand avatar speed than they are to decreases. The amount the hand avatar speed can be increased without a user noticing is surprisingly large. This information is useful for techniques that require introducing hand-avatar motion discrepancy, such as a technique for recovering from the position discrepancy introduced by simulated surface constraints.
Keywords: intersensory discrepancy, perception, sensory conflict
Evaluating the effectiveness of occlusion reduction techniques for 3D virtual environments BIBAKFull-Text 9-18
  Niklas Elmqvist; M. Eduard Tudoreanu
We present an empirical usability experiment studying the relative strengths and weaknesses of three different occlusion reduction techniques for discovering and accessing objects in information-rich 3D virtual environments. More specifically, the study compares standard 3D navigation, generalized fisheye techniques using object scaling and transparency, and the BalloonProbe interactive 3D space distortion technique. Subjects are asked to complete a number of different tasks, including counting, pattern recognition, and object relation, in different kinds of environments with various properties. The environments include a free-space abstract 3D environment and a virtual 3D walkthrough application for a simple building floor. The study involved 16 subjects and was conducted in a three-sided CAVE environment. Our results confirm the general guideline that each task calls for a specialized interaction -- no single technique performed best across all tasks and worlds. The results also indicate a clear trade-off between speed and accuracy; simple navigation was the fastest but also most error-prone technique, whereas spherical BalloonProbe proved the most accurate but required longer completion time, making it suitable for applications where mistakes incur a high cost.
Keywords: 3D space distortion, evaluation, interaction techniques, occlusion management, occlusion reduction
Evaluating the effects of real world distraction on user performance in virtual environments BIBAKFull-Text 19-26
  Yi Wang; Kunmi Otitoju; Tong Liu; Sijung Kim; Doug A. Bowman
Although many virtual environment (VE) technologies such as the four-screen CAVE are described as immersive, users can still perceive distractions from the real world. This exposure to real-world distraction may reduce users' sense of presence, and if presence is correlated with performance as some have claimed, the real-world distractions may also hinder performance. Thus, VE designers may want to consider ways to reduce real-world distraction. This paper presents an experiment to investigate the effect of reduced visual stimulus in the peripheral area on user performance and the usability of an immersive VE. We carefully designed three tasks that cause different levels of awareness of the real-world distraction. Using these tasks, we evaluated users' performance and preference in two conditions. The low-stimulus condition was created by hanging a black cloth across the missing back wall of a CAVE. The high-stimulus condition was created by projected animations and real human motion outside the CAVE. The experiments show that reduced distraction may have a positive or negative effect on user performance, depending on the specific tasks and environments.
Keywords: distraction, immersive virtual environments, low-stimulus area, user performance
The benefits of third-person perspective in virtual and augmented reality? BIBAKFull-Text 27-30
  Patrick Salamin; Daniel Thalmann; Frédéric Vexo
Unlike in reality, where you can see your own limbs, in virtual reality simulations it is sometimes disturbing not to be able to see your own body. This seems to create an issue in the proprio-perception of users, who do not feel completely integrated in the environment. A third-person perspective should be beneficial for such users. We propose to give people the possibility to use both the first- and the third-person perspective, as in video games (e.g. GTA). As gamers prefer the third-person perspective for moving actions and the first-person view for fine operations, we verify whether this behavior extends to simulations in augmented and virtual reality.
Keywords: distance evaluation, exocentric perspective, immersion, presence, proprio-perception
Mixed reality: are two hands better than one? BIBAKFull-Text 31-34
  Aaron Kotranza; John Quarles; Benjamin Lok
For simulating hands-on tasks, the ease of enabling two-handed interaction with virtual objects gives Mixed Reality (MR) an expected advantage over Virtual Reality (VR). A user study examined whether two-handed interaction is critical for simulating hands-on tasks in MR. The study explored the effect of one- and two-handed interaction on task performance in a MR assembly task. When presented with a MR system, most users chose to interact with two hands. This choice was not affected by a user's past VR experience or the quantity and complexity of the real objects with which users interacted. Although two-handed interaction did not yield a significant performance improvement, two hands allowed subjects to perform the virtual assembly task similarly to the real-world task. Subjects using only one hand performed the task fundamentally differently, showing that affording two-handed interaction is critical for training systems.
Keywords: mixed reality, two-handed interaction, virtual reality

Tracking

Development of a tracking method for augmented reality applied to NPP maintenance work and its experimental evaluation BIBAKFull-Text 35-44
  Zhiqiang Bian; Hirotake Ishii; Masanori Izumi; Hiroshi Shimoda; Hidekazu Yoshikawa; Yoshitsugu Morishita; Yoshiki Kanehira
Nuclear power plants (NPP) must be maintained periodically. The maintenance efficiency must be improved and human error must be reduced simultaneously to improve NPPs' competitive capability in electricity markets. Although Augmented Reality (AR) offers great possibilities to support NPP maintenance work, some difficulties exist for application of AR to actual work support because current AR systems cannot be implemented in NPP environments without technical improvement. Problems such as recognition distance, tracking accuracy, and a complex working environment arise when applying AR to NPP field work support. Considerable extension of tracking distance and improvement of accuracy are particularly desired because NPPs are large-scale indoor environments. This study designed a linecode marker, a new type of paper-based marker, along with recognition and tracking algorithms for it to resolve these problems. In contrast to conventional paper-based markers, such as square markers and circle markers, the linecode marker is not merely easier to set up in complex industrial environments: it also enables the use of AR in industrial plants because of its considerable tracking-performance improvement. To evaluate the tracking accuracy, trackable distance, and tracking speed of the proposed tracking method, an evaluation experiment was conducted in a large room. The experiment results show that the tracking distance is extended greatly over that of the traditional marker-based tracking method: tracking accuracy improved to 20 cm at a distance of 10 m. The running speed can be as fast as 15 frames per second on a laptop.
Keywords: augmented reality, linecode marker, maintenance, nuclear power plant, tracking method
Sceptre: an infrared laser tracking system for virtual environments BIBAKFull-Text 45-50
  Christian Wienss; Igor Nikitin; Gernot Goebbels; Klaus Troche; Martin Göbel; Lialia Nikitina; Stefan Müller
In this paper a 3D tracking system for Virtual Environments is presented which utilizes infrared (IR) laser technology. Invisible laser patterns are projected from the user(s) to the screen via the Sceptre input device or the accompanying head-tracking device. IR-sensitive cameras placed near the projectors in a back-projection setup recognize the pattern. In this way the position and orientation of the input devices are reconstructed. The infrared laser is not visible to the human eye and therefore does not disturb the immersion.
Keywords: 3D-reconstruction, IR-laser, laser pattern, tracking
Spatial input device structure and bimanual object manipulation in virtual environments BIBAKFull-Text 51-60
  Arjen van Rhijn; Jurriaan D. Mulder
Complex 3D interaction tasks require the manipulation of a large number of input parameters. Spatial input devices can be constructed such that their structure reflects the task at hand. As such, the somatosensory cues that a user receives during device manipulation, as well as a user's expectations, are consistent with visual cues from the virtual environment. Intuitively, such a match between the device's spatial structure and the task at hand would seem to allow for more natural and direct interaction. However, the exact effects on aspects like task performance, intuitiveness, and user comfort are as yet unknown.
   The goal of this work is to study the effects of input device structure for complex interaction tasks on user performance. Two factors are investigated: the relation between the frame of reference of a user's actions and the frame of reference of the virtual object being manipulated, and the relation between the type of motion a user performs with the input device and the type of motion of the virtual object.
   These factors are addressed by performing a user study using different input device structures. Subjects are asked to perform a task that entails translating a virtual object over an axis, where the structure of the input device reflects this task to different degrees. First, the action subjects need to perform to translate the object is either a translation or a rotation. Second, the action is performed in the same frame of reference of the virtual object, or in a fixed, separately located, frame of reference.
   Results show that task completion times are lowest when the input device allows a user to make the same type of motion in the same coordinate system as the virtual object. In case either factor does not match, task completion times increase significantly. Therefore, it may be advantageous to structure an input device such that the relation between its frame of reference and the type of action matches the corresponding frame of reference and motion type of the virtual object being manipulated.
Keywords: configurable input device, direct manipulation, multi-dimensional control, virtual reality
Interactive modelling and tracking for mixed and augmented reality BIBAKFull-Text 61-64
  R. Freeman; A. Steed
Some tasks vital to many mixed and augmented reality systems are either too time consuming or complex to be carried out whilst the system is active. 3D scene modelling and labelling are two such tasks commonly performed by skilled operators in an off-line initialisation phase. Because this phase sometimes needs specialist software and/or expertise it can be a considerable limiting factor for new mixed reality system developers. If a mixed reality system is to operate in real-time, where artificial graphics are woven into real world live images, the way in which these off-line processes are tackled is critical. In this paper we propose a flexible new approach that reduces the time spent during the off-line initialisation phase by adopting an on-line interactive primitive modelling technique. Our solution combines two existing and freely available packages, the Augmented Reality Toolkit Plus (ARToolKitPlus) and the Mixed Reality Toolkit (MRT), to enable rapid interactive modelling over live video using a freely moving camera. As a demonstration we show how these can be used to rapidly seed an object appearance-based tracking algorithm.
Keywords: augmented reality, image based modelling, mixed reality, model based tracking

Rendering I

Traversal fields for ray tracing dynamic scenes BIBAKFull-Text 65-74
  Peijie Huang; Wencheng Wang; Gang Yang; Enhua Wu
This paper presents a novel scheme for accelerating ray traversal computation in ray tracing. In this scheme, a precomputation stage constructs what is called a traversal field for each rigid object, recording the destinations of all possible incoming rays. The field data, which can be efficiently compressed offline, is stored in a small number of big rectangles called ray-relays that enclose each approximately convex segment of an object. In the ray-tracing stage, the records on relays are retrieved in constant time, so that a ray traversal is implemented as a simple texture lookup on the GPU. Thus, the performance of our approach is related only to the number of relays rather than the scene size, and the number of relays is quite small. In addition, because the traversal fields depend only on the internal construction of each convex segment, they can be used to ray trace objects undergoing rigid motions at a negligible extra cost. Experimental results show that interactive rates can be achieved for dynamic scenes with specular reflection and refraction effects on an ordinary desktop PC with a GPU.
Keywords: dynamic scene, graphics processing units (GPU), pre-computation, ray tracing
Pyramidal displacement mapping: a GPU based artifacts-free ray tracing through an image pyramid BIBAKFull-Text 75-82
  Kyoungsu Oh; Hyunwoo Ki; Cheol-Hi Lee
Displacement mapping enables us to add details to polygonal meshes. We present a real-time, artifact-free inverse displacement mapping method using per-pixel ray tracing through an image pyramid on the GPU. In each pixel, we generate a ray and trace it through the displacement map to find an intersection. To skip empty regions safely, we traverse the quad-tree image pyramid of the displacement map in top-down order. For magnification we estimate the intersection between the ray and a bilinearly interpolated displacement. For minification we perform mipmap-like prefiltering to improve the quality of result images and rendering performance. Results show that our method produces correct images even at steep grazing angles. Rendering speeds for test scenes were in the hundreds of frames per second and were little affected by the resolution of the map. Our method is simple enough to add easily to existing virtual reality systems.
Keywords: GPU, displacement mapping, image-based rendering, quad-tree, real-time rendering
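The top-down pyramid traversal this abstract describes can be illustrated in one dimension: a max-pyramid over a heightfield lets the ray skip whole intervals it provably clears, refining a level only where a hit is possible. The sketch below is an illustrative CPU simplification, not the authors' GPU shader; the 1D heightfield, the descending-ray assumption, and the function names are all assumptions.

```python
def build_max_pyramid(heights):
    """Pyramid of max values: level 0 is the heightfield itself,
    each coarser level stores the max over pairs of cells below."""
    pyramid = [list(heights)]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        pyramid.append([max(prev[i], prev[min(i + 1, len(prev) - 1)])
                        for i in range(0, len(prev), 2)])
    return pyramid

def trace(pyramid, y0, dy):
    """March a descending ray y(x) = y0 + dy*x (dy < 0) across the
    heightfield, skipping intervals the pyramid proves empty.
    Returns the index of the first cell the ray enters, or None."""
    n = len(pyramid[0])
    top = len(pyramid) - 1
    x, level = 0, top
    while x < n:
        cell = x >> level
        end = min((cell + 1) << level, n)
        y_low = y0 + dy * end            # lowest ray height over [x, end)
        if y_low >= pyramid[level][cell]:
            x = end                      # proven empty: skip the cell
            level = min(level + 1, top)  # try a coarser level again
        elif level > 0:
            level -= 1                   # possible hit: refine
        else:
            return x                     # level 0: first occupied cell
    return None
```

The 2D GPU version in the paper works the same way, with the pyramid stored as mipmap levels of the displacement map and the lookup done as a texture fetch.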
Novel view generation for a real-time captured video object BIBAKFull-Text 83-86
  Hua Chen; Peter F. Elzer
This paper describes a novel method of real-time novel view synthesis for an object that is observed by two fixed video cameras in a natural environment. Without reconstructing the 3D model of the object, the view of the object corresponding to a virtual camera that moves between the two real cameras is generated by applying a view morphing process to the object region in the image pair captured in real-time. Using the captured live video frames, the proposed method can not only generate realistic novel views in real-time, but also ensure a natural and smooth image transition between the two cameras. It can be used in a variety of Mixed Reality (MR) applications to integrate live video objects into virtual environments. Experimental results verify the validity of the proposed approach.
Keywords: mixed reality, view synthesis
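View morphing of the kind used here rectifies the image pair to parallel views, linearly interpolates the rectified images, and post-warps the result; in the rectified (parallel-camera) case, linear interpolation of corresponding points is shape-preserving. A minimal sketch of the interpolation step only, assuming pre-rectified correspondences (the function names are illustrative):

```python
def morph_points(pts0, pts1, s):
    """Interpolate corresponding feature positions between rectified
    view 0 (s = 0) and rectified view 1 (s = 1)."""
    return [((1 - s) * x0 + s * x1, (1 - s) * y0 + s * y1)
            for (x0, y0), (x1, y1) in zip(pts0, pts1)]

def blend_color(c0, c1, s):
    """Cross-dissolve the two views' pixel colors with the same weight."""
    return tuple((1 - s) * a + s * b for a, b in zip(c0, c1))
```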
Hardware-accelerated jaggy-free visual hulls with silhouette maps BIBAKFull-Text 87-90
  Chulhan Lee; Junho Cho; Kyoungsu Oh
A visual hull is the intersection of the cones made by back-projections of reference images. We introduce a real-time, jaggy-free visual hull rendering method on programmable graphics hardware. Using a texture mapping approach, we render a visual hull quickly. At each silhouette pixel, we produce jaggy-free images using silhouette information. Our implementation demonstrates high-quality images in real time. The complexity of our algorithm is O(N), where N is the number of reference images. Thus the examples in this paper are rendered at over one hundred frames per second without jaggies.
Keywords: GPU, image-based rendering, silhouette map, visual hull
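The defining property above (intersection of silhouette cones) gives a simple membership test: a 3D point lies in the visual hull iff it projects inside the silhouette of every reference view. A CPU sketch of that underlying geometry, assuming 3x4 camera matrices and boolean silhouette masks (the paper's method itself runs on the GPU with silhouette maps):

```python
def project(P, X):
    """Apply a 3x4 camera matrix P to 3D point X; return image coords."""
    Xh = (X[0], X[1], X[2], 1.0)
    r = [sum(P[i][j] * Xh[j] for j in range(4)) for i in range(3)]
    return r[0] / r[2], r[1] / r[2]

def in_visual_hull(X, views):
    """views: list of (P, silhouette) pairs, silhouette a 2D bool grid.
    X is in the hull iff every view sees it inside its silhouette."""
    for P, sil in views:
        u, v = project(P, X)
        iu, iv = int(round(u)), int(round(v))
        if not (0 <= iv < len(sil) and 0 <= iu < len(sil[0]) and sil[iv][iu]):
            return False
    return True
```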

Haptics & interaction

A fluid resistance map method for real-time haptic interaction with fluids BIBAKFull-Text 91-99
  Yoshinori Dobashi; Makoto Sato; Shoichi Hasegawa; Tsuyoshi Yamamoto; Mitsuaki Kato; Tomoyuki Nishita
Haptic interfaces enable us to interact with a virtual world using our sense of touch. This paper presents a method for realizing haptic interaction with water. Our method displays forces acting on rigid objects due to water at a high frame rate (500 Hz). To achieve this, we present a fast method for simulating the dynamics of water. We decompose the dynamics into two parts. One is a linear flow expressed by a wave equation used to compute water waves. The other is a more complex, non-linear flow around the object. The fluid forces due to the non-linear flow are precomputed by solving the Navier-Stokes equations and stored in a database named the Fluid Resistance Map. The precomputed non-linear flow and the linear flow are combined to compute the forces due to water.
Keywords: computational fluid dynamics, fluid resistance, haptics, simulation, virtual reality
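The linear part of the model above (water waves obeying a wave equation) can be stepped cheaply with explicit finite differences, while the nonlinear forces around the object are precomputed offline and only looked up at haptic rates. A sketch of the linear update alone; grid units and the damping coefficient are illustrative assumptions:

```python
def wave_step(h, h_prev, c=0.5, damping=0.999):
    """One explicit finite-difference step of the 2D wave equation
    h_tt = c^2 (h_xx + h_yy); boundary cells are held fixed.
    h and h_prev are the current and previous height grids."""
    n, m = len(h), len(h[0])
    h_next = [row[:] for row in h]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1]
                   - 4.0 * h[i][j])
            h_next[i][j] = damping * (2.0 * h[i][j] - h_prev[i][j]
                                      + c * c * lap)
    return h_next
```

With c = 0.5 (grid cells per step) the explicit scheme stays stable, and the small damping factor keeps the simulated pool from ringing forever.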
Interaction techniques in large display environments using hand-held devices BIBAKFull-Text 100-103
  Seokhee Jeon; Jane Hwang; Gerard J. Kim; Mark Billinghurst
Hand-held devices, ubiquitous today, possess large potential as interaction devices and present us with an opportunity to devise new and unique ways of interaction as smart devices with multi-modal sensing and display capabilities. This paper introduces user interaction techniques (for selection, translation, scaling, and rotation of objects) using a camera-equipped hand-held device, such as a mobile phone or a PDA, for large shared environments. We propose three intuitive interaction techniques for 2D and 3D objects in such an environment. The first approach uses motion flow information to estimate the relative motion of the hand-held device and interact with the large display. The marker-object and marker-cursor approaches both use software markers on the interaction object or on the cursor for the various interactive tasks. The proposed interaction techniques can be further combined with many auxiliary functions and wireless services (of the hand-held devices) for seamless information sharing and exchange among multiple users. A formal usability analysis is currently ongoing.
Keywords: hand-held device, interaction techniques, large shared display
Simple user-generated motion cueing can enhance self-motion perception (Vection) in virtual reality BIBAKFull-Text 104-107
  Bernhard E. Riecke
Despite amazing advances in the visual quality of virtual environments, affordable-yet-effective self-motion simulation still poses a major challenge. Using a standard psychophysical paradigm, the effectiveness of different self-motion simulations was quantified in terms of the onset latency, intensity, and convincingness of the perceived illusory self-motion (vection). Participants were asked to actively follow different pre-defined trajectories through a naturalistic virtual scene presented on a panoramic projection screen using three different input devices: a computer mouse, a joystick, or a modified manual wheelchair. For the wheelchair, participants exerted their own minimal motion cueing using a simple force-feedback and velocity control paradigm: small translational or rotational motions of the wheelchair (limited to 8 cm and 10°, respectively) initiated a corresponding visual motion, with the visual velocity being proportional to the wheelchair deflection (similar to a joystick). All dependent measures showed a clear enhancement of the perceived self-motion when the wheelchair was used instead of the mouse or joystick. Compared to more traditional approaches to enhancing self-motion perception (e.g., motion platforms, free walking areas, or treadmills), the current approach of simple user-generated motion cueing has only minimal requirements in terms of overall cost, required space, safety features, and technical effort and expertise. Thus, the current approach might be promising for a wide range of low-cost applications.
Keywords: human factors, motion cueing, psychophysics, self-motion perception, self-motion simulation, vection, virtual reality, wheelchair
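The wheelchair's velocity-control paradigm maps a small physical deflection to a proportional visual velocity, like a joystick. A sketch of that mapping; the gain value and function name are illustrative assumptions (the abstract states only the 8 cm deflection limit):

```python
def visual_velocity(deflection_m, max_deflection=0.08, gain=2.0):
    """Velocity control: the wheelchair's translational deflection
    (clamped to the ~8 cm physical limit) drives a proportional
    visual velocity; gain is in (m/s) per metre of deflection."""
    d = max(-max_deflection, min(max_deflection, deflection_m))
    return gain * d
```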
Separating the effects of level of immersion and 3D interaction techniques BIBAKFull-Text 108-111
  Ryan P. McMahan; Doug Gorton; Joe Gresock; Will McConnell; Doug A. Bowman
Empirical evidence of the benefits of immersion is an important goal for the virtual environment (VE) community. Direct comparison of immersive systems and non-immersive systems is insufficient because differences between such systems may be due not only to the level of immersion, but also to other factors, such as the input devices and interaction techniques used. In this paper, a study is presented that separates the effects of level of immersion and 3D interaction technique for a six-degree-of-freedom manipulation task. In the study, two components of immersion -- stereoscopy and field of regard -- were varied and three 3D interaction techniques -- HOMER, Go-Go, and DO-IT (a new keyboard- and mouse-based technique) -- were tested. The results of the experiment show that the interaction technique had a significant effect on object manipulation time, while the two components of immersion did not. The implications of these results are discussed for VE application developers.
Keywords: 3D interaction, field of regard, immersive virtual environment
Limpid desk: see-through access to disorderly desktop in projection-based mixed reality BIBAKFull-Text 112-115
  Daisuke Iwai; Kosuke Sato
We propose the Limpid Desk, which supports document search on a real desktop by virtually transparentizing the upper layer of a document stack in projection-based mixed reality (MR) environments. In the system, users can visually access a lower-layer document without physically removing the upper documents. This is accomplished by projecting a special pattern of light, calculated to compensate the appearance of the upper-layer documents so that they seem transparent. In addition, we propose two types of visual effects that let users follow the progress of the transparentizing of real documents and recognize the layer number of the virtually exposed document, and we conduct psychological tests to confirm the intuitiveness of these effects. This paper also presents an intuitive document search interaction in the proposed system.
Keywords: color reflectance compensation, projection-based mixed reality, smart desk, transparentizing documents

Cultural heritage, education, and entertainment

Usability evaluation of the EPOCH multimodal user interface: designing 3D tangible interactions BIBAKFull-Text 116-122
  Panagiotis Petridis; Katerina Mania; Daniel Pletinckx; Martin White
This paper expands on the presentation of a methodology that provides a technology-enhanced exhibition of a cultural artefact through the use of a safe hybrid 2D/3D multimodal interface. Such tangible interactions are based on the integration of a 3DOF orientation tracker and information sensors with a 'Kromstaf' rapid prototype replica to provide tactile feedback. The multimodal interface allows the user to manipulate the object via physical gestures which, during evaluation, establish a profound level of virtual object presence and user satisfaction. If a user cannot manipulate the virtual object effectively many application specific tasks cannot be performed. This paper assesses the usability of the multimodal interface by comparing it with two input devices -- the Magellan SpaceMouse, and a 'black box', which contains the same electronics as the multimodal interface but without the tactile feedback offered by the 'Kromstaf' replica. A complete human-centred usability evaluation was conducted utilizing task based measures in the form of memory recall investigations after exposure to the interface in conjunction with perceived presence and user satisfaction assessments. Fifty-four participants across three conditions (Kromstaf, space mouse and black box) took part in the evaluation.
Keywords: evaluation, multimodal user interfaces, perception, presence, virtual environments
Entertainment virtual reality system for simulation of spaceflights over the surface of the planet Mars BIBAKFull-Text 123-132
  Ricardo Olanda; Manolo Pérez; Pedro Morillo; Marcos Fernández; Sergio Casas
In recent years Virtual Reality technologies have enabled astronomers to recreate and explore three-dimensional structures of the Universe for scientific purposes. Mars, due to its scientific interest, has been the focal point of numerous research projects using these technologies; however, none of these virtual reality tools have been developed specifically for entertainment purposes.
   The focus of this paper is to present MarsVR, an entertainment research project that educates people on the topography and orography of the planet Mars from the perspective of popular science. Some projects have been designed for entertainment purposes and include the latest advances in 3D real-time applications; however, these applications have underestimated the relevant data necessary for simulating the planet Mars as an interactive virtual environment.
Keywords: entertainment virtual reality, immersive visualization systems, terrain representation
A versatile large-scale multimodal VR system for cultural heritage visualization BIBAKFull-Text 133-140
  Chris Christou; Cameron Angus; Celine Loscos; Andrea Dettori; Maria Roussou
We describe the development and evaluation of a large-scale multimodal virtual reality simulation suitable for the visualization of cultural heritage sites and architectural planning. The system is demonstrated with a reconstruction of an ancient Greek temple in Messene that was created as part of an EU-funded cultural heritage project (CREATE). The system utilizes a CAVE-like theatre consisting of head-tracked user localization, a haptic interface with two arms, and 3D sound. The haptic interface was coupled with a realistic physics engine allowing users to experience and fully appreciate the effort involved in the construction of architectural components and their changes through the ages. Initial user-based studies were carried out to evaluate the usability and performance of the system. A simple task of stacking blocks was used to compare errors and timing in a haptics-enabled system with a haptics-disabled system. In addition, a qualitative study of the final system took place while it was installed in a museum.
Keywords: haptics, multimodal interfaces, virtual heritage
System and infrastructure considerations for the successful introduction of augmented reality guides in cultural heritage sites BIBAKFull-Text 141-144
  Athanasios M. Demiris; Vassilios Vlahakis; Nicolaos Ioannidis
Recent advances in augmented reality and portable systems bring us closer to the introduction of such advanced technologies into the everyday routine of cultural heritage sites, providing an extremely helpful means of disseminating and communicating information to the visitors of such sites. Portable multimedia guides are already a reality, while fixed-position AR has also found its way to various sites. So far the focus has always been on the terminal device's capabilities and functions, while the infrastructure necessary to support the daily routine has been somewhat neglected. In this paper we present a complete systemic approach, introducing a set of components we deem necessary for the successful introduction of AR in such sites.
Keywords: augmented reality in cultural heritage, content management, system architecture

Characters & cities

From motion capture to action capture: a review of imitation learning techniques and their application to VR-based character animation BIBAKFull-Text 145-154
  Bernhard Jung; Heni Ben Amor; Guido Heumer; Matthias Weber
We present a novel method for virtual character animation that we call action capture. In this approach, virtual characters learn to imitate the actions of Virtual Reality (VR) users by tracking not only the users' movements but also their interactions with scene objects.
   Action capture builds on conventional motion capture but differs from it in that higher-level action representations are transferred rather than low-level motion data. As an advantage, the learned actions can often be naturally applied to varying situations, thus avoiding the retargeting problems of motion capture. The idea of action capture is inspired by human imitation learning; related methods have been investigated for a long time in robotics. The paper reviews the relevant literature in these areas before framing the concept of action capture in the context of VR-based character animation. We also present an example in which the actions of a VR user are transferred to a virtual worker.
Keywords: action capture, character animation, imitation learning, motion capture, virtual reality
Real-time generation of populated virtual cities BIBAKFull-Text 155-164
  Luiz Gonzaga da Silveira; Soraia Raupp Musse
This paper presents a new approach for real-time generation of 3D virtual cities. The main goal is to provide a generic framework, called virtual urban life (VUL), that supports semi-automatic creation, management, and visualization of complex urban environments for virtual human simulation. It is intended to minimize the effort designers spend modeling complex and huge environments. A versatile multi-level data model has been developed to support data management and visualization in an efficient way. Moreover, a polygon partitioning algorithm addresses the city allotment problem automatically, according to input parameters and constraints. In addition, we discuss some results of populated virtual city simulations developed with the proposed framework.
Keywords: city modeling, crowd simulation, polygon partitioning and real-time visualization, terrain modeling, virtual life simulation
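The allotment algorithm itself is not detailed in the abstract. As a purely illustrative sketch (not da Silveira and Musse's method), recursive binary splitting of an axis-aligned lot is a common baseline for the city allotment problem; the jittered split position is an assumption to avoid uniform grids:

```python
import random

def subdivide(lot, max_area, rng):
    """Recursively split an axis-aligned lot (x, y, w, h) until every
    resulting piece has area at most max_area; returns the pieces."""
    x, y, w, h = lot
    if w * h <= max_area:
        return [lot]
    # Split the longer side at a jittered midpoint to avoid uniform grids.
    t = rng.uniform(0.4, 0.6)
    if w >= h:
        a = (x, y, w * t, h)
        b = (x + w * t, y, w * (1 - t), h)
    else:
        a = (x, y, w, h * t)
        b = (x, y + h * t, w, h * (1 - t))
    return subdivide(a, max_area, rng) + subdivide(b, max_area, rng)
```

The split preserves total area, so the union of lots always tiles the original parcel; real allotment schemes would additionally respect street networks and per-lot constraints.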
From motion capture data to character animation BIBAKFull-Text 165-168
  Gaojin Wen; Zhaoqi Wang; Shihong Xia; Dengming Zhu
In this paper, we propose a practical and systematic solution to the mapping problem: from 3D marker position data recorded by optical motion capture systems to joint trajectories together with a matching skeleton, based on least-squares fitting techniques. First, we preprocess the raw data and estimate the joint centers using related efficient techniques. Second, a fixed-length skeleton that precisely matches the joint centers is generated by an articulated skeleton fitting method. Finally, we calculate and rectify joint angles with a minimum angle modification technique. We present the results of our approach as applied to several motion-capture behaviors, which demonstrate the positional accuracy and usefulness of our method.
Keywords: articulated skeleton fitting, motion capture
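The fixed-length constraint at the heart of such skeleton fitting can be illustrated with a minimal sketch (not Wen et al.'s actual formulation): for one bone, the least-squares estimate of a fixed length from noisy per-frame joint centers is the mean of the observed lengths, and each frame's child joint is then projected back onto a sphere of that radius around the parent:

```python
def fit_fixed_length_bone(parent_pos, child_pos):
    """Estimate a single fixed bone length from noisy per-frame joint
    centers (least squares: the mean of observed lengths), then rectify
    each child position to lie exactly at that distance from the parent."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def norm(v):
        return sum(x * x for x in v) ** 0.5

    lengths = [norm(sub(c, p)) for p, c in zip(parent_pos, child_pos)]
    length = sum(lengths) / len(lengths)  # least-squares fixed length

    rectified = []
    for p, c, l in zip(parent_pos, child_pos, lengths):
        d = sub(c, p)
        s = length / l  # rescale the bone direction to the fitted length
        rectified.append(tuple(pi + s * di for pi, di in zip(p, d)))
    return length, rectified
```

A full articulated fit would solve all bones jointly and rectify angles rather than positions; this sketch only shows the per-bone constraint.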
Learning system for human motion characters of traditional arts BIBAKFull-Text 169-172
  Yoshinori Maekawa; Takuya Oda; Taku Komura; Yoshihisa Shinagawa
A learning system for the human motion of traditional arts, such as Mai (Japanese classical dance) and Kabuki (one of Japan's traditional stage arts), is being developed. In such arts, an effective system for passing the tradition down from a top artist to the next generations is required. Video is generally used to convey the motions of traditional arts to non-experts. However, video normally shows the motions only as viewed from a single direction. If the motions are presented from three orthogonal directions at the same time, the material becomes more useful. Our learning system therefore reconstructs three-dimensional human motion from the single-direction views in the video, so that motions can be presented from any direction simultaneously with the video.
   In addition, a learner can check the difference in motion between a top artist and him/herself by producing his/her own three-dimensional skeleton motion with our system and overlapping it with that of the top artist. For the comparison between two different physiques (a top artist and a learner), a simple adjustment method is suggested.
   Our study employs standard motion capture processes, but our goal is to develop an original, practical training system for performers' motion aimed at beginners in traditional arts. In this paper the system is demonstrated for Kyo-mai (Mai originating in Kyoto) as an example. The system can be useful not only in the performing arts but also in industry or in sports.
Keywords: computer graphics, human motion analysis, motion capture, traditional arts, training tools

Rendering II

Utilizing jump flooding in image-based soft shadows BIBAKFull-Text 173-180
  Guodong Rong; Tiow-Seng Tan
This paper studies the usage of the GPU as a collection of groups of related processing units, where each group communicates in some way to complete a computation efficiently and effectively. In particular, we use the GPU to perform jump flooding to pass information among groups of processing units in the design of two simple real-time soft shadow algorithms. These two algorithms are purely image-based in generating plausible soft shadows. Their computational costs depend mainly on the resolution of the shadow map or the screen. They run an order of magnitude faster than existing comparable methods.
Keywords: game programming, hard shadow, interactive application, parallel prefix, penumbra map, programmable graphics hardware
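The jump flooding algorithm (JFA) the paper builds on is not spelled out in the abstract; as an illustrative CPU sketch of the general technique (nearest-seed propagation with halving step sizes, here for a Voronoi-style seed map), not the authors' shadow algorithm:

```python
def jump_flood(w, h, seeds):
    """Jump flooding: every cell inspects neighbours at offsets of
    +/-k for k = max(w, h)/2, /4, ..., 1 and adopts the closest seed
    seen so far; after the passes each cell knows its nearest seed."""
    best = [[None] * w for _ in range(h)]
    for sx, sy in seeds:
        best[sy][sx] = (sx, sy)

    def d2(x, y, s):
        return (x - s[0]) ** 2 + (y - s[1]) ** 2

    k = max(w, h) // 2
    while k >= 1:
        nxt = [row[:] for row in best]
        for y in range(h):
            for x in range(w):
                for dy in (-k, 0, k):
                    for dx in (-k, 0, k):
                        nx_, ny_ = x + dx, y + dy
                        if not (0 <= nx_ < w and 0 <= ny_ < h):
                            continue
                        s = best[ny_][nx_]
                        if s is None:
                            continue
                        if nxt[y][x] is None or d2(x, y, s) < d2(x, y, nxt[y][x]):
                            nxt[y][x] = s
        best = nxt
        k //= 2
    return best
```

On the GPU each pass is one full-screen shader invocation, so the whole flood costs O(log n) passes regardless of how far information must travel.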
Layered probability maps: basic framework and prototype system BIBAKFull-Text 181-188
  Yutaka Kunita; Masahiro Ueno; Keiji Tanaka
We propose an image-based rendering method called "layered probability mapping." The algorithm requires modest computer power, but produces high-quality output images even from textureless or noisy input images. Thus, real-time applications such as immersive teleconferencing and live broadcasting are promising areas of use. The key idea is the layered probability map (LPM) representation using a set of two-dimensional probabilistic functions. This accommodates ambiguities regarding object depth. To test the feasibility of this method, we developed a prototype system with nine synchronized cameras and a thirteen-PC cluster. By calculating the algorithm in parallel, stable 5-15fps image generation has been achieved without any off-line procedures between image capture and display.
Keywords: image-based rendering, real-time graphics, three-dimensional video
Dynamic load-balanced rendering for a CAVE system BIBAKFull-Text 189-192
  Tetsuro Ogi; Takaya Uchino
Recently, PC clusters have been used to construct CAVE-like immersive projection displays. However, in order to improve the rendering performance of PC cluster-based CAVE systems, the number of node PCs should be increased in accordance with the number of screens that are included. In this research, a mechanism for dynamic load-balanced rendering in a PC cluster system incorporating an arbitrary number of nodes was developed. This system constructs the cluster system by using a chain connection-type compositor board, whereby load-balancing can be controlled dynamically in response to the movement of the virtual objects or of the user's viewpoint. This paper describes the implementation of this dynamic load-balanced rendering method and presents the results of an evaluation experiment.
Keywords: PC cluster, immersive projection display, load-balancing
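The paper's compositor-based balancing mechanism is hardware-specific; as a hypothetical sketch of the underlying idea (not Ogi and Uchino's implementation), dynamic load balancing can be reduced to repartitioning screen columns among render nodes in proportion to measured per-column cost:

```python
def balance_columns(costs, nodes):
    """Greedy dynamic load balancing: partition screen columns into one
    contiguous span per render node so that each span's summed cost is
    close to the per-node target total/nodes."""
    target = sum(costs) / nodes
    spans, start, acc = [], 0, 0.0
    for i, c in enumerate(costs):
        acc += c
        cols_left = len(costs) - i - 1
        nodes_left = nodes - len(spans) - 1
        # close the span once it reaches the target, keeping at least
        # one column for every remaining node
        if acc >= target and nodes_left > 0 and cols_left >= nodes_left:
            spans.append((start, i + 1))
            start, acc = i + 1, 0.0
    spans.append((start, len(costs)))
    return spans
```

Re-running this each frame with costs estimated from the previous frame's timings lets the split track moving objects or a moving viewpoint.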
Dynamic aspects of real-time face-rendering BIBAKFull-Text 193-196
  Yvonne Jung; Christian Knöpfle
Simulating the visual appearance and lighting of human skin is a difficult task that researchers have addressed for several years. Because of the availability of high-performance programmable graphics boards, it is now possible to use techniques formerly available only to offline rendering. In this paper we present solutions to improve the visual quality of skin by adapting such techniques to real-time applications. Furthermore, dynamic properties such as aging and emotional changes in the appearance of a face, such as blushing and weeping, are introduced to obtain convincing results at interactive frame rates for immersive virtual worlds.
Keywords: emotions, shader, skin rendering, virtual reality

Interaction

Interactive editing of segmented volumetric datasets in a hybrid 2D/3D virtual environment BIBAKFull-Text 197-206
  Alexander Bornik; Reinhard Beichel; Dieter Schmalstieg
In this paper we present a novel system for segmentation refinement, which allows for interactive correction of surface models generated from imperfect automatic segmentations of arbitrary volumetric data. The proposed approach is based on a deformable surface model allowing interactive manipulation with a hybrid user interface consisting of an immersive stereoscopic display and a Tablet PC. The user interface features visualization methods and manipulation tools specifically designed for quick inspection and correction of typical defects resulting from automated segmentation of medical datasets. A number of experiments show that typical segmentation problems can be fixed within a few minutes using the system, while maintaining real-time responsiveness of the system.
Keywords: 3D user interfaces, hybrid user interfaces, interactive segmentation, segmentation refinement, virtual reality
Robust line tracking using a particle filter for camera pose estimation BIBAKFull-Text 207-211
  Fakhreddine Ababsa; Malik Mallem
This paper presents a robust line tracking approach for camera pose estimation based on a particle filtering framework. Particle filters are sequential Monte Carlo methods based on point mass (or "particle") representations of probability densities, which can be applied to any state-space model. Their ability to deal with non-linearities and non-Gaussian statistics improves robustness compared to existing approaches, such as those based on the Kalman filter. We propose to use the particle filter to compute the posterior density of the camera's 3D motion parameters. The experimental results indicate the effectiveness of our approach and demonstrate its robustness even when dealing with severe occlusion.
Keywords: 3D pose estimation, augmented reality, line tracking, particle filter
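The predict-weight-resample cycle of a bootstrap particle filter can be sketched generically (here on a toy 1D random-walk state, not the paper's 6-DOF camera model with line-based likelihoods; all parameters are illustrative assumptions):

```python
import math
import random

def particle_filter(observations, n=500, motion_std=0.3, obs_std=0.5, seed=1):
    """Bootstrap particle filter for a 1D random-walk state observed
    with Gaussian noise: predict, weight by likelihood, resample."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: diffuse each particle with the motion model
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # weight: Gaussian likelihood of the observation given the particle
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        estimates.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # resample: multinomial draw proportional to the weights
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates
```

For camera tracking, the state would be the 6 pose parameters and the likelihood would score how well projected model lines match edges in the image.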
A reprocessing tool for quantitative data analysis in a virtual environment BIBAKFull-Text 212-215
  E. J. Griffith; M. Koutek; F. H. Post; T. Heus; H. J. J. Jonker
This paper presents an approach to help speed up and unify the exploration and analysis of time-dependent, volumetric data sets by easily incorporating new qualitative and quantitative information into an exploratory virtual environment (VE). The new information is incorporated through one or more expedited offline "reprocessing" steps, which compute properties of objects extracted from the data. These objects and their properties are displayed in the exploratory VE. A case study involving atmospheric data is presented to demonstrate the utility of the method.
Keywords: data visualization, virtual reality
The development of glove-based interfaces with the TRES-D methodology BIBAKFull-Text 216-219
  José P. Molina; Arturo S. García; Diego Martinez; Francisco J. Manjavacas; Victor Blasco; Victor López; Pascual González
The development of 3D user interfaces is mostly focused on technology and the ways of using it, and so the main concerns are the selection of hardware, software and interaction techniques. The process of development itself is as important as these issues, but it is usually ignored or poorly documented. This paper introduces the TRES-D methodology, and illustrates its application in the development of three different glove-based interfaces, not only to show the benefits of using these devices, but also the benefits of using such a methodological framework.
Keywords: 3D user interfaces, data gloves

Navigation

Supporting guided navigation in mobile virtual environments BIBAKFull-Text 220-226
  Rafael Garcia Barbosa; Maria Andréia Formico Rodrigues
Developing interactive 3D graphics for mobile Java applications is a reality. Recently, the Mobile 3D Graphics (M3G) API was proposed to provide an efficient 3D graphics environment suitable for the J2ME platform. However, new services and applications using interactive 3D graphics, which have already achieved reasonable standards on the desktop, do not yet exist for resource-constrained handheld devices. In this work, we developed a framework for supporting guided navigation in mobile virtual environments. To illustrate its main functionalities, a virtual rescue training was designed, implemented and tested on mobile phones. Users can load virtual environments from a remote PC server, navigate through them, find an optimal and collision-free path from one place to another, and obtain additional information on the objects.
Keywords: guided navigation, mobile device, virtual environment
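The abstract does not say which path-planning algorithm the framework uses; as an illustrative sketch only, breadth-first search on an occupancy grid is the simplest way to obtain an optimal (fewest-cells) collision-free path of the kind described:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid
    (grid[y][x] == 0 means free); returns the list of cells on a
    shortest collision-free path from start to goal, or None."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            path, cur = [], (x, y)
            while cur is not None:      # walk the predecessor chain back
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)
                q.append((nx, ny))
    return None
```

On a resource-constrained handset, A* with a Manhattan-distance heuristic would explore fewer cells for the same optimal result.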
Navigation aids for multi-floor virtual buildings: a comparative evaluation of two approaches BIBAKFull-Text 227-235
  Luca Chittaro; Subramanian Venkataraman
Virtual environments (VEs) very often contain buildings that have to be navigated by users. In the literature, several navigation aids based on maps have been proposed for VEs, but in virtual buildings they have been typically used for single-floor scenarios. In this paper, we propose and experimentally evaluate two different navigation aids for multi-floor virtual buildings, one based on 3D maps and the other based on 2D maps. We compared subjects' performance with two different types of tasks: search and direction estimation. While the 2D navigation aid outperformed the 3D one for the search task, there were no significant differences between the two aids for the direction estimation task.
Keywords: evaluation, multi-floor virtual buildings, navigation aids
Z-Goto for efficient navigation in 3D environments from discrete inputs BIBAKFull-Text 236-239
  Martin Hachet; Fabrice Decle; Pascal Guitton
3D interactive applications are now appearing on mobile devices such as phones and PDAs. Compared to classical desktop or immersive configurations, mobile devices induce several constraints that intrinsically limit the user performance in interactive tasks. Consequently, a special effort has to be made in order to adapt the classical 3D user interfaces to mobile settings. In this paper, we propose a new key-based technique that favors navigation in 3D environments. Compared to a classical "go to" approach, our technique, called Z-Goto, operates directly in the 3D space. This accelerates the user's displacements by reducing the number of required keystrokes. Moreover, the construction of cognitive maps is improved, as Z-Goto favors depth perception. An experiment shows that Z-Goto obtains better completion times than a standard "go to" technique for a primed search task on a mobile device. It also shows that user satisfaction with this new technique is good.
Keywords: interaction technique, mobile devices, navigation tasks, travel and wayfinding
3D visualization technologies for teleguided robots BIBAKFull-Text 240-243
  Salvatore Livatino; Filippo Privitera
The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in teleoperation when compared to 2D viewing. Works in the literature have demonstrated how stereo vision contributes to improved perception of some depth cues, often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot teleguide applications. This work investigates stereoscopic robot teleguide under different conditions, including typical navigation scenarios and the use of synthetic and real images. It also investigates how user performance may vary when employing different display technologies. Results from a set of test trials run on five virtual reality systems highlight a few aspects that represent a basis for further investigation, as well as a guide when designing specific systems for telepresence.
Keywords: 3D visualization, stereo vision, teleoperation, telerobotics, virtual reality

Hardware & systems

The illusionhole with polarization filters BIBAKFull-Text 244-251
  Yoshifumi Kitamura; Tomokazu Nakayama; Takashi Nakashima; Sumihiko Yamamoto
We have proposed the IllusionHole system, which allows three or more people to simultaneously observe stereoscopic images from dynamically changing individual viewpoints. With a simple configuration, this system provides intelligible 3D stereoscopic images free of flicker and distortion. Based on the IllusionHole concept, we have built a prototype system using two liquid crystal projectors and polarizing filters. This paper describes the details of this prototype. We investigate how the image formation position of the stereographic image is affected by sources of errors such as measurements of the user's interpupillary distance and observing position. We also measure and discuss the variation of brightness and hue with viewing direction.
Keywords: CSCW, collaborative work, education, illusionhole, interactive, multiple users, science museum, stereoscopic display
Extending the scene graph with a dataflow visualization system BIBAKFull-Text 252-260
  Michael Kalkusch; Dieter Schmalstieg
Dataflow graphs are a very successful paradigm in scientific visualization, while scene graphs are a leading approach in interactive graphics and virtual reality. Both approaches have their distinct advantages, and both build on a common set of basic techniques based on graph data structures. However, despite these similarities, no unified implementation of the two paradigms exists. This paper presents an in-depth analysis of the architectural components of dataflow visualization and scene graphs, and derives a design that integrates both these approaches.
   The implementation of this design builds on a common software infrastructure based on a scene graph, and extends it with virtualized dataflow, which allows the use of the scene graph structure and traversal mechanism for dynamically building and evaluating dataflow.
Keywords: dataflow visualization system, object hierarchies, scene graph, visualization
Media productions for a dome display system BIBAKFull-Text 261-264
  Athanasios Gaitatzes; Georgios Papaioannou; Dimitrios Christopoulos; Gjergji Zyba
As public interest in new forms of media grows, museums and theme parks select real-time Virtual Reality productions as their presentation medium. Based on three-dimensional graphics, interaction, sound, music and intense storytelling, they mesmerize their audiences. The Foundation of the Hellenic World (FHW), having so far opened three different Virtual Reality theaters to the public, is in the process of building a new dome-shaped Virtual Reality theater with a capacity of 130 people. This fully interactive theater will present new experiences in immersion to its visitors. In this paper we present the challenges encountered in developing productions for such a large spherical display system, as well as in building the underlying real-time display and support systems.
Keywords: computer clusters, spherical display systems, stereoscopic display
Analytical compensation of inter-reflection for pattern projection BIBAKFull-Text 265-268
  Yasuhiro Mukaigawa; Takayuki Kakinuma; Yuichi Ohta
If a pattern is projected onto a concave screen, the desired view cannot be correctly observed due to the influence of inter-reflections. This paper proposes a simple but effective technique for photometric compensation in consideration of inter-reflections. The compensation is accomplished by canceling inter-reflections estimated by the radiosity method. The significant advantage of our method is that iterative calculations are not necessary because it analytically solves the inverse problem of inter-reflections.
Keywords: inter-reflection, projector
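The non-iterative nature of such compensation can be illustrated with a small discrete radiosity sketch (a toy linear model with a single reflectance factor, not Mukaigawa et al.'s calibrated system): if the physical process satisfies B = P + rho*F*B for form-factor matrix F, then projecting P = D - rho*F*D makes the observed image equal the desired D exactly, with no inversion or iteration at projection time:

```python
def compensate(desired, F, rho):
    """Analytical compensation: project P = D - rho * F D so that after
    physical inter-reflection (B = P + rho * F B) the observed B equals D."""
    n = len(desired)
    return [desired[i] - rho * sum(F[i][j] * desired[j] for j in range(n))
            for i in range(n)]

def simulate(P, F, rho, iters=200):
    """Forward model of inter-reflection, solved by fixed-point iteration
    (converges when rho * ||F|| < 1)."""
    n = len(P)
    B = P[:]
    for _ in range(iters):
        B = [P[i] + rho * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B
```

Note that the compensated pattern can demand negative or out-of-gamut intensities for strongly reflective geometry, which is a physical limit of any such scheme.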

Collaboration

P2P Network for very large virtual environment BIBAKFull-Text 269-276
  Romain Cavagna; Christian Bouville; Jerome Royan
The ever-increasing speed of Internet connections has reached a point where it is actually possible for every end user to seamlessly share data on the Internet. Peer-to-Peer (P2P) networks are typical of this evolution. The goal of our paper is to show that server-less P2P networks with self-adaptive assignment techniques can efficiently deal with very large environments such as those met in the geovisualization domain. Our method allows adaptive view-dependent visualization thanks to a hierarchical and progressive data structure that describes the environment. In order to assess the global efficiency of this P2P technique, we have implemented a dedicated real-time simulator. Experimental results are presented using a hierarchical LOD model of a very large urban environment.
Keywords: peer-to-peer, self-adaptation, self-organization, self-scalability, simulation, virtual environment
Dynamic interactive VR network services for education BIBAKFull-Text 277-286
  Krzysztof Walczak; Rafal Wojciechowski; Wojciech Cellary
The global information society and knowledge-based economy will generate a need for life-long learning on a mass scale. To deal with this challenge, traditional education should be extended with new forms of learning and teaching by employing advanced technologies such as virtual and mixed reality. In order to provide learners with these forms of education, a wide range of VR-based interactive educational network services should be developed. Technologies for delivering such network services are already available. Currently, the challenge is to develop methods and tools for efficient creation of vast amounts of VR-based learning material.
   In this paper, a method of dynamic content creation is described, which enables flexible authoring and manipulation of VR-based educational contents in interactive network environments. Application of this method to several architectural variants of educational systems is discussed.
Keywords: X-VR, distance learning and teaching, interactive 3D, mixed reality, virtual reality
Are two heads better than one?: object-focused work in physical and in virtual environments BIBAKFull-Text 287-296
  Ilona Heldal; Maria Spante; Mike Connell
Under which conditions does collaboration add value over individual work? How does performance change when using different technologies? These are important questions for industry and for research. This paper addresses them for pairs versus individuals using physical objects and virtual representations for object-focused task-solving. Building upon previous research on pairs' performance and experiences of collaboration in a real setting and four different distributed virtual environments (VEs), single-user experimental studies were carried out. The results show that, in relation to performance, pairs working in networked CAVE technologies are superior to individuals, or to pairs working in other distributed settings. In general, social interaction works as a facilitator for this type of task-solving in networked VEs. However, the best performance was found in the real setting, with no major difference between individuals and pairs, and working in VEs was often rated more highly than working with physical objects.
Keywords: collaboration, immersive, performance, presence, social interaction, usability, virtual environments

Haptics in VR (special session)

A differential method for the haptic rendering of deformable objects BIBAKFull-Text 297-304
  Remis Balaniuk
This paper introduces a new method for the computation of contact forces during haptic interaction between a rigid probe and a soft virtual object. Traditional methods used to estimate forces inside a haptic loop are based on the penetration distance of the haptic probe inside the virtual objects. This unnatural approach creates some visual incoherence when simulating contact with rigid objects, but works fine on the force estimation side. For soft objects, however, the use of a penetration distance makes less sense and creates many problems both visually and haptically. We propose a method that treats the penetration of the probe inside the virtual object as an approximation error, and performs an iterative model adjustment estimating a local elasticity for the deformable object. Forces are computed incrementally. The proposed approach is independent of any particular implementation used for simulating the deformable object. The force estimation is based on the actual shape of the object, considering its deformations, allowing multiple users to interact with the same object while feeling the influence of each other. Experimental results are presented.
Keywords: haptic interfaces, soft-tissue modeling, virtual reality
A framework for bounded-time collision detection in haptic interactions BIBAKFull-Text 305-311
  Maurizio de Pascale; Domenico Prattichizzo
In this paper we present the V-GRAPH, a framework for bounded-time collision detection for point-like haptic interactions. This framework employs strategies similar to those used by the Lin-Canny and Dobkin-Kirkpatrick algorithms but, unlike them, it uses a partition of the space focused on vertices only, which allows both for an easier implementation and for usage with non-convex objects without the need to split the original mesh. In a preprocessing phase the mesh is analyzed to extract neighboring information based on Voronoi theory; this data is then used at run-time in a greedy visit exploiting motion coherence to achieve fast proximity queries. Finally, standard segment-triangle intersection tests are carried out to identify the exact point of collision. Moreover, the framework can be easily extended to multiple levels of detail. Computational analysis and experimental results show that execution times are independent of mesh complexity, achieving the same running times even on models composed of millions of polygons. These features make it particularly suited for virtual museum and digital sculpting applications. Implementation is straightforward, and freely available tools can be used for preprocessing.
Keywords: V-GRAPH, Voronoi, bounded time, collision detection, haptic
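The "greedy visit exploiting motion coherence" can be sketched in a simplified form (an illustrative reduction, not the V-GRAPH itself): starting from the previous frame's answer, walk the vertex-adjacency graph downhill in distance to the probe until no neighbour is closer:

```python
def closest_vertex_walk(verts, adj, probe, start):
    """Greedy descent over a vertex-adjacency graph: from the previous
    frame's closest vertex, repeatedly move to any neighbour that is
    closer to the probe; stops at a local minimum. Exact on convex
    shapes; on non-convex meshes extra structure (as in V-GRAPH) is
    needed to escape local minima."""
    def d2(i):
        return sum((a - b) ** 2 for a, b in zip(verts[i], probe))

    cur, improved = start, True
    while improved:
        improved = False
        for nb in adj[cur]:
            if d2(nb) < d2(cur):
                cur, improved = nb, True
                break
    return cur
```

Because the probe moves little between 1 kHz haptic frames, the walk usually terminates after a handful of steps, which is what makes the query time independent of mesh size.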
Tactylus, a pen-input device exploring audiotactile sensory binding BIBAKFull-Text 312-315
  Ernst Kruijff; Gerold Wesche; Kai Riege; Gernot Goebbels; Martijn Kunstman; Dieter Schmalstieg
Recent studies have shown that through a careful combination of multiple sensory channels, so-called multisensory binding effects can be achieved that can be beneficial for collision detection and texture recognition feedback. During the design of a new pen-input device called Tactylus, specific focus was put on exploring multisensory effects of audiotactile cues to create a new but effective way to interact in virtual environments, with the purpose of overcoming several of the problems noticed in current devices.
Keywords: 3D user interfaces, audiotactile feedback, sensory substitution
Using neuromuscular electrical stimulation for pseudo-haptic feedback BIBAKFull-Text 316-319
  Ernst Kruijff; Dieter Schmalstieg; Steffi Beckhaus
This paper focuses on the use of neuromuscular electrical stimulation (NMES) for achieving pseudo-haptic feedback. By stimulating the motor nerves, muscular contractions can be triggered and matched to a haptic event. Reflecting on an initial user test, we explain how this process can be realized by investigating the physiological processes involved. Relating the triggered feedback to general haptics, its potential in future interfaces is identified and laid out in a development roadmap.
Keywords: 3D user interfaces, biofeedback, haptic feedback, neuroelectrical stimulation
A haptic toolkit for the development of immersive and web-enabled games BIBAKFull-Text 320-323
  E. Ruffaldi; A. Frisoli; M. Bergamasco; C. Gottlieb; F. Tecchia
Creating Virtual Reality applications enabled with haptic interaction and dynamic simulation usually requires covering many implementation details, which increases development time and reduces the effectiveness of the application itself. This work presents a game application that has been developed using a haptic toolkit for rapid application development, which integrates 3D graphics, haptic feedback and dynamic simulation.
   The resulting application can be easily deployed on the Web directly to the final user.
Keywords: game, haptics, virtual environments, web

Modeling

PNORMS: platonic derived normals for error bound compression BIBAKFull-Text 324-333
  João Fradinho Oliveira; Bernard Francis Buxton
3D models of millions of triangles invariably reuse the same 12-byte unit normals many times over. Several bit-wise compression algorithms exist for efficient storage and progressive transmission and visualization of normal vectors. However, such methods often incur a reconstruction time penalty which, in the absence of dedicated hardware acceleration, makes real-time rendering with such compression/reconstruction methods prohibitive. In particular, several methods use a subdivided octahedron to create look-up normals, where the bit length of normal indices varies according to the number of subdivisions used. Not much attention has been given to the error in the normals under such schemes. We show that different Platonic solids create different numbers of normals for each subdivision level or bit length in bit-wise compression terms, with different distributions and associated errors. In particular, we show that subdividing the icosahedron gives a smaller maximum and mean error than its counterpart Platonic solids. This result has led us to create an alternative to bit-wise compression of normal ids for real-time rendering, where we use a x5 subdivided icosahedron to create 2.5 times more normals than a x5 subdivided octahedron, with less error, and exploit the advantages of absolute normal indices that do not require reconstruction at run-time, whilst still having memory savings of over 83% when using 2-byte indices.
   We present results using 2-byte indices for a target maximum error of 1.3 degrees and 4-byte indices for a maximum error of <0.1 degrees. We present two hierarchical encoding methods: a fast method which allows one to dynamically encode large sets of modified triangles, useful for this task, and a slower but more accurate method that caters for the symmetry present in the subdivision solid being used. Different levels of a database allow for different cartoon-like shading effects. The advantages of these databases are that they can be re-used for any object, and they have known bounds on the maximum errors of normals for yet-to-be-known geometry, such as new objects to be added to a scene. This error bound is also independent of the size and normal distribution of the object that we wish to add. In order to visualize the colour-coded distribution of the errors in the normals of large models, a simple 1-byte colour encoding algorithm was developed.
Keywords: colour compression, error bound, normal compression, run-time encoding
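The look-up-table idea can be reconstructed as a small sketch (an illustration of the general technique, not the authors' code; one subdivision level rather than their x5): subdivide the icosahedron's faces, project the vertices onto the unit sphere, and quantise any normal to the index of the nearest table entry:

```python
import itertools

def normalize(p):
    n = sum(x * x for x in p) ** 0.5
    return tuple(x / n for x in p)

def mid(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def icosahedron():
    """12 vertices and 20 faces; the faces are recovered as the vertex
    triples whose pairwise distances all equal the minimal edge length."""
    phi = (1 + 5 ** 0.5) / 2
    v = []
    for a in (-1.0, 1.0):
        for b in (-phi, phi):
            v += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]

    def d2(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))

    edge2 = min(d2(p, q) for p, q in itertools.combinations(v, 2))
    faces = [t for t in itertools.combinations(range(12), 3)
             if all(abs(d2(v[i], v[j]) - edge2) < 1e-9
                    for i, j in itertools.combinations(t, 2))]
    return v, faces

def normal_table(levels=1):
    """Subdivide each face, project onto the unit sphere, and collect
    the unique vertices as the look-up table of quantised normals
    (10 * 4**levels + 2 entries)."""
    v, faces = icosahedron()
    tris = [tuple(v[i] for i in f) for f in faces]
    for _ in range(levels):
        tris = [t for a, b, c in tris
                for t in ((a, mid(a, b), mid(c, a)),
                          (mid(a, b), b, mid(b, c)),
                          (mid(c, a), mid(b, c), c),
                          (mid(a, b), mid(b, c), mid(c, a)))]
    pts = {tuple(round(x, 9) for x in normalize(p)) for t in tris for p in t}
    return sorted(pts)

def encode(n, table):
    """Index of the table normal closest (maximum dot product) to n."""
    return max(range(len(table)),
               key=lambda i: sum(a * b for a, b in zip(table[i], n)))
```

Decoding is a plain table lookup, which is the run-time advantage over bit-wise schemes that must reconstruct the normal from its index.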
Intuitively specifying object dynamics in virtual environments using VR-WISE BIBAKFull-Text 334-337
  Bram Pellens; Frederic Kleinermann; Olga De Troyer
Designing and building Virtual Environments is not an easy task, especially when it comes to specifying object behavior where either knowledge about animation techniques or programming skills are required. With our approach, VR-WISE, we try to facilitate the design of VEs and make this more accessible to novice users. In this paper, we present how behavior is specified in VR-WISE, as well as the prototype developed for the approach.
Keywords: behavior, conceptual modeling, design phase, virtual reality
A robust method for analyzing the physical correctness of motion capture data BIBAKFull-Text 338-341
  Yi Wei; Shihong Xia; Dengming Zhu
The physical correctness of motion capture data is important for human motion analysis and athlete training. However, there has so far been little work that fully explores the problem of analyzing the physical correctness of motion capture data. In this paper, we discuss this problem carefully and solve two major issues in it. First, a new form of the Newton-Euler equations, encoded with quaternions and Euler angles and well suited to analyzing motion capture data, is proposed. Second, a robust optimization method is proposed to correct the motion capture data so that it satisfies the physical constraints. We demonstrate the advantage of our method with several experiments.
Keywords: equations of multi-rigid-body motion, motion capture data, physical correctness
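The paper's quaternion/Euler-angle encoding is not reproduced in the abstract; for reference, the standard Newton-Euler equations for body i that any such formulation re-expresses are, with quaternion attitude kinematics:

```latex
F_i = m_i \, \dot{v}_i, \qquad
N_i = I_i \, \dot{\omega}_i + \omega_i \times \left( I_i \, \omega_i \right), \qquad
\dot{q}_i = \tfrac{1}{2} \, q_i \otimes \begin{pmatrix} 0 \\ \omega_i \end{pmatrix}
```

where $F_i$ and $N_i$ are the net force and torque, $m_i$ and $I_i$ the mass and inertia tensor, $v_i$ and $\omega_i$ the linear and angular velocities, and $\otimes$ quaternion multiplication.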
Developing a 3D simulated bio-terror crises communication training module BIBAKFull-Text 342-345
  Edward Carpenter; Induk Kim; Laura Arns; Mohan J. Dutta-Berman; Krishna Madhavan
The anthrax attacks of 2001 brought professional and public attention to the significance of successful crisis communication in the context of bioterrorism. The project discussed in this paper is an initiative taken to respond to the calls for more effective bio-terror crises communication training. The goal of this project is to address various challenges in bio-terror crises communication by developing an innovative crisis communication training module for public relations students. This goal is achieved by attending to two main factors: (a) keen awareness of important theories in crises communication response, and (b) "hands-on" training in real-time bio-terror communication handling techniques. This paper introduces the process taken to develop 3D simulated crisis communication training material and presents future plans to assess its effectiveness.
Keywords: bioterrorism, communications, crisis control, virtual reality
Simplified animation circuit for metadata-based behavior model query and retrieval BIBAKFull-Text 346-349
  Tien-Lung Sun; Yu-Lun Chang
This paper describes an approach to simplifying the animation circuit so that useful metadata can be extracted to facilitate behavior model storage and retrieval. In the VR model of a product, the tasks performed by the behavior nodes can be classified into three categories: changing the attributes of a part, animating the behavior of a part, and detecting the user's interaction with a part. Based on this property, a 2-tuple metadata (P, T) is designed to represent each behavior node, where P is the part operated on by the behavior node and T is the set of tasks it performs. The parts and tasks contained in the metadata are hierarchically classified. The process of simplifying the animation circuit consists of two major steps. First, the behavior nodes in the animation circuit are converted to their metadata; the resulting graph is called a metadata graph. Second, the metadata graph is simplified by merging nodes that have the same classification. The structured metadata extracted from the metadata graphs allows behavior model query and retrieval to be performed at different levels of abstraction.
Keywords: VR model, VRML, behavior, query and retrieval
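The two-step simplification this abstract describes can be sketched compactly: behavior nodes are reduced to their (P, T) metadata, and nodes with the same classification are merged. The sketch below is a hypothetical illustration only (the data layout and function names are assumptions, not the authors' system):

```python
from collections import defaultdict

def to_metadata(animation_circuit):
    """Step 1: replace each behavior node by its (part, task) 2-tuple,
    where the part P is what the node operates on and the task T is one
    of: changing attributes, animating, or detecting interaction."""
    return [(node["part"], node["task"]) for node in animation_circuit]

def simplify(metadata_graph):
    """Step 2: merge nodes that share the same (P, T) classification,
    keeping a count of how many original nodes each merged node covers."""
    merged = defaultdict(int)
    for key in metadata_graph:
        merged[key] += 1
    return dict(merged)

# Toy animation circuit with two nodes animating the same part:
circuit = [
    {"part": "wheel", "task": "animate"},
    {"part": "wheel", "task": "animate"},
    {"part": "door", "task": "detect-interaction"},
]
# simplify(to_metadata(circuit))
# → {('wheel', 'animate'): 2, ('door', 'detect-interaction'): 1}
```

Querying the simplified graph by part or task then amounts to filtering on the first or second component of the merged keys, at whatever level of the part/task hierarchy is desired.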

Human factors II

Control of eye-movement to decrease VE-sickness BIBAKFull-Text 350-355
  Michiteru Kitazaki; Tomoaki Nakano; Naoyuki Matsuzaki; Hiroaki Shigemasu
One of the well-known theories of motion sickness and VE (Virtual Environment) sickness is the 'sensory conflict' theory. In this paper, we investigated whether the conflict between actual (extra-retinal) eye-movement and visually-simulated (retinal) eye-movement affects VE-sickness. We found that VE-sickness was significantly decreased by controlling the observer's eye-movement with a stationary/moving fixation point. When the extra-retinal and retinal eye-movements were incongruent while the observer's head was actively moving, VE-sickness increased for sickness-sensitive observers. These results suggest that we can decrease VE-sickness by controlling eye-movements with a stationary/moving fixation point to remove the conflict between extra-retinal and visual eye-movements. This is a new approach to decreasing VE-sickness.
Keywords: VE sickness, extra-retinal information, eye-movement, motion sickness, visual information
Hand-held virtual reality: a feasibility study BIBAKFull-Text 356-363
  Jane Hwang; Jaehoon Jung; Gerard Jounghyun Kim
Hand-held computing devices are ubiquitous and have become part of our lives. Moreover, hand-held devices are increasingly being equipped with special sensors and non-traditional displays. This raises the question of whether such a "small" and "reduced" device could serve as an effective virtual reality (VR) platform and provide sufficient immersion and presence, e.g. through multimodal interaction. In this paper, we address this question by comparing the perceived field of view (FOV) and the level of immersion and presence among users of VR platforms that vary in physical/software FOV size and style of interaction. In particular, we consider motion-based interaction, a style of interaction uniquely suited to hand-held devices. Our experimental study revealed that when motion-based interaction was used, the FOV perceived by the user of the small hand-held device was significantly greater (by around 50%) than the actual FOV. Displays using button or mouse/keyboard interfaces did not exhibit this phenomenon. In addition, the level of presence felt by users was higher than even that of a large projection-based VR platform. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even with devices with a small visual field of view and limited processing power.
Keywords: field of view, hand-held devices, immersion, motion based interface, presence, task performance, virtual reality
Presence in response to dynamic visual realism: a preliminary report of an experiment study BIBAKFull-Text 364-367
  Pankaj Khanna; Insu Yu; Jesper Mortensen; Mel Slater
This paper describes an experiment that examines the influence of visual realism on reported presence. 33 participants experienced two different renderings of a virtual environment that depicts a pit in the centre of a room, in a head-tracked head-mounted display. The environment was rendered using parallel ray tracing at 15fps, but in one condition ray casting (RC) was used, achieving a result equivalent to OpenGL-based per-pixel local illumination, and in the second, full recursive ray tracing (RT) was used. The participants were randomly allocated to two groups: one that experienced RC first followed by RT, and a second that experienced them in the opposite order. Reported presence was measured by questionnaires following each session. The results indicate that reported presence, in terms of the 'sense of being there', was significantly higher in the RT condition than in the RC condition.
Keywords: presence, virtual environments
The impact of immersive virtual reality displays on the understanding of data visualization BIBAKFull-Text 368-371
  Ahmed Bayyari; M. Eduard Tudoreanu
This paper presents evidence that situational awareness in a data visualization benefits from immersive virtual reality display technology, because such displays appear to support better understanding of the visual information. Our study was designed to de-emphasize perceptual and interaction characteristics of the displays, and found that the task of counting targets is strongly influenced by the type of system used to render the visualization. Immersive-type displays outperformed traditional monitors. The target objects in the study have distinguishing features that cannot be identified from a distance, to alleviate the effect of perceptual differences among displays. Counting was chosen because it entails a basic understanding of the relationships among the data values in order to recognize previously counted items. The display choices consisted of a traditional monitor and three configurations of an immersive projection environment, obtained by selectively turning off one or two projectors of a three-wall CAVE.
Keywords: evaluation, immersion, information visualization, virtual reality
Effects of physical display size and amplitude of oscillation on visually induced motion sickness BIBAKFull-Text 372-375
  Hiroaki Shigemasu; Toshiya Morita; Naoyuki Matsuzaki; Takao Sato; Masamitsu Harasawa; Kiyoharu Aizawa
The viewing environment is an important factor in understanding the mechanism of visually induced motion sickness (VIMS). In Experiment 1, we investigated whether the symptoms of VIMS change depending on viewing angle and physical display size. Our results showed that a larger viewing angle made the sickness symptoms more severe, and that nausea symptoms changed depending on physical display size even with identical viewing angles. In Experiment 2, we investigated the effects of viewing angle and amplitude of oscillation. The results showed that the effects of viewing angle were related not only to the amplitude of oscillation but also to other factors of the viewing angle.
Keywords: SSQ, amplitude of oscillation, visual angle, visually induced motion sickness
Variations in physiological responses of participants during different stages of an immersive virtual environment experiment BIBAKFull-Text 376-382
  Andrea Brogni; Vinoba Vinayagamoorthy; Anthony Steed; Mel Slater
This paper presents a study of the fine grain physiological responses of participants to an immersive virtual simulation of an urban environment. The analysis of differences in participant responses at various stages of the experiment (baseline recordings, training, first half and second half of the urban simulation) are examined in detail. It was found that participants typically show a stress response during the training phase and a stress response towards the end of the simulation of the urban experience.
   There is also some evidence that variation in the level of visual realism, based on the texture strategy used, was associated with changes in mental stress.
Keywords: evaluation, human response, immersive virtual environments, physiology

Tutorial I

Building a complete virtual reality application BIBAFull-Text 383
  Franco Tecchia
The development of a complete Virtual Reality application is a complex activity that requires good knowledge of several time-critical tasks: Computer Graphics, real-time Physics, Haptics and network programming are examples of the components needing to coexist in a modern Virtual Reality system.
   Each of these building blocks constitutes a research field in its own right, and a vast literature exists on techniques and algorithms for addressing specific problems; still, from a higher-level perspective, only through tight integration and balanced design can a complex framework achieve optimal performance.
   Having to address such a range of integration issues, the development of a Virtual Reality application can in practice turn out to be a very lengthy and difficult process, in which fundamental design choices and their implications should be carefully considered. The choice of the right tools is also very important, as common everyday practice shows how difficult it still is to put together a successful and robust system.
   This tutorial aims to give an overview of the main components involved in the task, how they should interact, and the inherent difficulties of putting everything together.
   The tutorial is divided into two parts: the second part will introduce a real-life, recent example of an integrated framework, to be used as a reference and basis of discussion for the design of the next generation of integrated development environments and their applications.

Tutorial II

Virtual reality systems and applications BIBAFull-Text 384
  A. Gaitatzes; G. Papaioannou; D. Christopoulos
In this tutorial we will present the infrastructure, both software and hardware, required to create the Virtual Reality (VR) illusion. The history and application areas of VR will be presented. From the software required for the image generator to the different display systems, both head-based and projection-based, all components of a VR system as well as practical deployment issues will be examined. The stereo depth principle will be explained, along with the different stereo methodologies available today. Aspects of a VR experience such as immersion and collaboration will be explored. Other topics include the physical interface and interaction devices, and methods of manipulating a Virtual Environment by tracking users and devices.
   An attempt will be made to differentiate VR from pre-rendered Computer Graphics by presenting the issues concerning real-time graphics. Finally, possible future developments in the areas of Virtual Reality technology and Virtual Environments will be presented.