
Virtual Reality 16

Editors: Daniel Ballin; Robert D. Macredie
Dates: 2012
Volume: 16
Publisher: Springer-Verlag
Standard No: ISSN 1359-4338 (print); EISSN 1434-9957 (online)
Papers: 26
Links: link.springer.com
  1. VR 2012-03 Volume 16 Issue 1
  2. VR 2012-06 Volume 16 Issue 2
  3. VR 2012-09 Volume 16 Issue 3
  4. VR 2012-11 Volume 16 Issue 4

VR 2012-03 Volume 16 Issue 1

Special Issue on Manufacturing and Construction

Editorial, pp. 1-2
  James Ritchie; Judy Vance; Satyandra Gupta
A dual-representation strategy for the virtual assembly of thin deformable objects, pp. 3-14
  Vikalp Mishra; Krishnan Suresh
The two main objectives of virtual assembly are: (1) to train assembly-operators through virtual assembly models, and (2) to simultaneously evaluate products for ease-of-assembly. The focus of this paper is on developing computational techniques for virtual assembly of thin deformable beam and plate-like objects. To meet the objectives of virtual assembly, the underlying computational technique must: (1) be carried out at a high frame-rate (>20 frames/second), (2) be accurate (<5% error in deformation and force estimation), (3) be conducive to collision detection, and (4) support rapid design evaluations. We argue in this paper that popular computational techniques such as 3-D finite element analysis, boundary element analysis and classic beam/plate/shell analysis fail to meet these requirements. We therefore propose a new class of dual representation techniques for virtual assembly of thin solids, where the geometry is retained in its full 3-D form, while the underlying physics is dimensionally reduced, delivering: (1) high computational efficiency and accuracy (over 20 frames per second with <1% deformation error), and (2) direct CAD model processing, i.e., the CAD model is not geometrically simplified, and a 3-D finite element mesh is not generated. In particular, a small-size stiffness matrix with about 300 degrees of freedom per deformable object is generated directly from a coarse surface triangulation, and its LU-decomposition is then exploited during real-time simulation. The accuracy and efficiency of the proposed method are established through numerical experiments and a case study.
Keywords: Virtual assembly; Deformation; Thin; Plates; Kirchhoff-Love; Dual-representation
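To make the real-time strategy in the abstract above concrete, the sketch below shows the precompute-once, solve-per-frame pattern it describes: a small reduced stiffness matrix is LU-factorized offline and only cheap triangular solves run during simulation. The matrix contents, load vector, and the 300-DOF size used here are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the precompute-once / solve-per-frame pattern described
# in the abstract: a small reduced stiffness matrix K (~300 DOF) is
# LU-factorized offline, and each frame only a triangular solve is done.
# The matrix contents and the load vector are placeholders, not the paper's model.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n_dof = 300                                  # reduced degrees of freedom per object
rng = np.random.default_rng(0)
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)          # stand-in SPD stiffness matrix

lu, piv = lu_factor(K)                       # offline: O(n^3) factorization

def solve_frame(f):
    """Per-frame displacement solve: O(n^2) forward/back substitution."""
    return lu_solve((lu, piv), f)

f = np.zeros(n_dof)
f[0] = 1.0                                   # toy contact load on one DOF
u = solve_frame(f)                           # displacement response this frame
print(u[:3])
```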
Coupling of interactive manufacturing operations simulation and immersive virtual reality, pp. 15-23
  Denis V. Dorozhkin; Judy M. Vance; Gordon D. Rehn; Marco Lemessi
This paper presents a novel general-purpose simulation analysis application that combines concurrent operations simulation with the advanced data interrogation and user interaction capabilities of immersive virtual reality systems. The application allows for interactive modification of the simulation parameters, while providing the users with the available simulation information by effectively placing the operator in the midst of the environment being simulated. The major contribution of this research is the total integration of the immersive virtual reality environment with the simulation, allowing users in the environment to interactively change the inputs to the simulation as it is running. Implementation and functionality details of the developed application are presented. The experience of using the application to analyze a manufacturing operation in a collaborative scenario is also discussed.
Keywords: Concurrent operations simulation; Virtual reality
An integrated head pose and eye gaze tracking approach to non-intrusive visual attention measurement for wide FOV simulators, pp. 25-32
  Hua Cai; Yingzi Lin
Eye gaze tracking is very useful for quantitatively measuring visual attention in virtual environments. However, most eye trackers have a limited tracking range, e.g., ±35° in the horizontal direction. This paper proposes a method that combines head pose tracking and eye gaze tracking to achieve a large tracking range in virtual driving simulation environments. Multiple parallel multilayer perceptrons were used to reconstruct the relationship between head images and head poses. Head images were represented with the coefficients extracted from Principal Component Analysis. Eye gaze tracking provides precise results on the front view, while head pose tracking is more suitable for tracking areas of interest than for tracking points of interest on the side view.
Keywords: Eye gaze tracking; Head pose tracking; Multilayer perceptron (MLP); Visual attention
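A minimal sketch of the pipeline the abstract outlines, PCA coefficients of head images fed to a multilayer perceptron that regresses head pose, is given below using scikit-learn; the image size, component count, network size, and synthetic data are illustrative assumptions only.

```python
# Sketch of the PCA-coefficients -> MLP head-pose regression pipeline.
# Synthetic data stands in for head images and (yaw, pitch) labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
images = rng.random((500, 32 * 32))        # 500 flattened grayscale head images
poses = rng.uniform(-60, 60, (500, 2))     # yaw and pitch in degrees (toy labels)

pca = PCA(n_components=20)                 # represent each image by 20 coefficients
coeffs = pca.fit_transform(images)

mlp = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=1)
mlp.fit(coeffs, poses)

new_image = rng.random((1, 32 * 32))
print(mlp.predict(pca.transform(new_image)))   # estimated (yaw, pitch)
```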
Prototyping flexible touch screen devices using collocated haptic-graphic elastic-object deformation on the GPU, pp. 33-43
  Arun Rakesh Yoganandan; P. Pat Banerjee; Cristian J. Luciano
Rapid advances in flexible display technologies and the benefits that they provide are promising enough to consider them for futuristic mobile devices. Current prototyping methods lack facilities to simulate such flexible touch screen displays and the interaction with them. In this paper, we present a technique that provides product developers a tool to interactively simulate products featuring flexible displays, using Augmented Reality and Haptics. The GPU-based algorithm is computationally inexpensive and efficient, deforming a polygonal mesh in real time while maintaining acceptable haptic feedback. The implementation of the algorithm has been found to be successful when applied to a variety of product simulations. This simulation tool can enhance or even replace traditional prototyping and facilitate testing of the prototype at various stages of the design cycle.
Keywords: Virtual reality prototyping; Flexible displays; Product simulation; Haptics
Real-time simulation for a virtual reality-based MIG welding training system, pp. 45-55
  Terrence L. Chambers; Amit Aglawe; Dirk Reiners; Steven White
This paper describes a real-time welding simulation method for use in a desktop virtual reality simulated Metal Inert Gas welding training system. The simulation defines the shape of the weld bead, the depth of penetration, and the temperature distribution in the workpiece, based on inputs from the motion-tracking system that tracks the position of the welding gun as a function of time. A finite difference method is used to calculate the temperature distribution, including the width of the weld bead and the depth of penetration. The shape of the weld bead is then calculated at each time step by assuming a semi-spherical volume, based on the width of the weld bead, the welding speed, and the wire feed rate. The real-time performance of the system is examined, and results from the real-time simulation are compared to physical tests and are found to have very good correlation for welding speeds up to 1,000 mm/min.
Keywords: Virtual reality; Welding; Finite difference; Simulation; Training
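The temperature-field computation described in the abstract is a transient heat-conduction problem; a minimal explicit finite difference sketch on a 2D plate with a moving heat source is shown below. Material constants, grid size, boundary handling, and the source model are placeholder assumptions, not the paper's formulation.

```python
# Explicit finite difference update of temperature on a 2D plate with a
# moving point heat source, illustrating the kind of transient conduction
# solve the abstract refers to. Constants and the source model are placeholders.
import numpy as np

nx, ny, dx = 200, 60, 1e-3          # grid cells and spacing (m)
alpha, dt = 5e-6, 0.01              # thermal diffusivity (m^2/s), time step (s)
speed = 1000 / 60 / 1000            # 1,000 mm/min expressed in m/s
T = np.full((ny, nx), 300.0)        # initial temperature (K)

for step in range(500):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T = T + alpha * dt * lap                       # conduction update
    j = int(min(nx - 1, speed * step * dt / dx))   # torch position along the seam
    T[ny // 2, j] += 50.0                          # crude moving heat input
print(T.max())
```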
VRMDS: an intuitive virtual environment for supporting the conceptual design of mechanisms, pp. 57-68
  Juan Camilo Alvarez; Hai-Jun Su
This paper presents the Virtual Reality Mechanism Design Studio (VRMDS), an intuitive virtual environment for supporting the interactive design and simulation of mechanisms. The studio allows users to build spatial or planar mechanisms through intuitive operations and subsequently simulate their dynamic motion. Written in the Python scripting language, VRMDS provides 3D stereoscopic immersive visualization, haptic-enabled interaction, head and hand tracking and a user-friendly graphical user interface. A data model for organizing the data structure of links and commonly used mechanical joints is designed and implemented on the basis of the Vizard Virtual Reality (VR) library. Within the virtual environment, the user can create links and assemble them into a mechanism by defining joints between links. Simultaneously, a corresponding MATLAB SimMechanics model is automatically created at run time. The dynamics simulation of mechanisms is enabled by interfacing with the dynamics solver built into SimMechanics. The user may choose to run the system in an immersive VR environment or a desktop environment. The result is a versatile mechanism design tool that is beneficial to the early stages of the design process. A case study of a spatial mechanism is provided to demonstrate the usefulness of this system in mechanism design.
Keywords: Virtual reality; Conceptual design; Computer-aided design; Mechanism design; Multi-body dynamics simulation; Haptic interfaces; SimMechanics
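The abstract mentions a data model that organizes links and mechanical joints and an assembly-by-joints workflow; the plain-Python sketch below shows one plausible shape for such a structure. Class names and fields are hypothetical and do not reflect the actual VRMDS code or the Vizard/SimMechanics APIs.

```python
# Illustrative data model for links and joints in a mechanism editor.
# This is a plain-Python sketch, not the VRMDS implementation or the Vizard API.
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    mass: float                       # kg
    pose: tuple = (0.0, 0.0, 0.0)     # position of the link frame (m)

@dataclass
class Joint:
    kind: str                         # e.g. "revolute", "prismatic", "spherical"
    parent: Link
    child: Link
    axis: tuple = (0.0, 0.0, 1.0)

@dataclass
class Mechanism:
    links: list = field(default_factory=list)
    joints: list = field(default_factory=list)

    def add_joint(self, kind, parent, child, axis=(0.0, 0.0, 1.0)):
        joint = Joint(kind, parent, child, axis)
        self.joints.append(joint)
        return joint

m = Mechanism()
ground, crank = Link("ground", 0.0), Link("crank", 1.2)
m.links += [ground, crank]
m.add_joint("revolute", ground, crank)      # assemble by defining joints
print(len(m.joints))
```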
Automated generation of engineering rationale, knowledge and intent representations during the product life cycle, pp. 69-85
  Raymond C. W. Sung; James M. Ritchie; Theodore Lim; Zoe Kosmadoudi
One of the biggest challenges in engineering design and manufacturing environments is the effective capture and decoding of tacit knowledge. Fundamental to Life Cycle Engineering is the capture of engineering information and knowledge created at all stages of the product development process, from conceptual design through to product support and disposal. Consider, for example, the amount of vital information and knowledge lost when key design personnel retire; hence the need to capture meta-cognitive, task-related strategies, particularly to support knowledge reuse and training. Many methods have been tried and tested, with the successful few found to be very time-consuming and expensive to implement and carry out; consequently, there is a need to investigate alternative paradigms for knowledge and information capture. This paper reports on a current industrial case study of knowledge capture methods employed by industrial partners in the design and manufacture of a variety of electro-mechanical products. The results suggest the need for new kinds and forms of knowledge capture methods and representations, particularly those associated with individual design engineering tasks. Following these findings, the paper presents a knowledge capture methodology for automatic real-time logging, capture and post-processing of design data from a virtual reality design system. Task-based design experiments were carried out with industrial partners to demonstrate the effective, unobtrusive and automatic capture and representation of various forms of design knowledge and information. Qualitative and quantitative evaluations of the knowledge representations were also performed to determine which method was most effective at conveying design knowledge and information to other engineers.
Keywords: Knowledge capture; Cable harness design; Knowledge representation; User logging; Design rationale; Design task analysis

VR 2012-06 Volume 16 Issue 2

Grasp programming by demonstration in virtual reality with automatic environment reconstruction, pp. 87-104
  Jacopo Aleotti; Stefano Caselli
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, as in the PbD approach robot-environment interactions are no longer explicitly programmed, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.
Keywords: Virtual reality; Environment modeling; Grasp programming; Glove interaction
The Virtual Trillium Trail and the empirical effects of Freedom and Fidelity on discovery-based learning, pp. 105-120
  Maria C. R. Harrington
The Virtual Trillium Trail is a new kind of desktop virtual reality application that crosses over into the area of geospatial, educational simulations. Visual fidelity significantly impacts intrinsic learning, activity in situ, and knowledge gained, independent of other factors. The main empirical contribution of this report is on the impact of the user interface design parameters of graphical fidelity and navigational freedom on learning outcomes. A planned orthogonal contrast, Two-way ANOVA with the factors of Visual Fidelity and Navigational Freedom -- both scaled, and set to high and low levels -- shows significant impacts on the variables of Salient Events, a proxy for discovery-based learning, and Knowledge Gained, as measured between a pre-test and a post-test. Thus, there is strong empirical evidence to support the use of desktop virtual environments, built with high-fidelity, photo-realistic, and free navigational game engine technology, as educational simulations for informal education. The high-level Visual Fidelity combined with the high-level Navigational Freedom condition showed a mean learning gain of 37.44% and is significantly superior to the low-level Visual Fidelity, low-level Navigational Freedom condition, ceteris paribus.
Keywords: Virtual reality; Serious games; Educational simulations; Child-computer-environment interface; Discovery-based learning; Ecology education; User interfaces; Three-dimensional graphics and realism
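For readers who want to see the shape of the analysis named in the abstract, a two-way ANOVA with Visual Fidelity and Navigational Freedom as factors, here is a minimal sketch on synthetic scores using statsmodels; the numbers are invented and do not reproduce the study's data.

```python
# Sketch of a 2x2 between-subjects ANOVA (fidelity x freedom) on synthetic
# knowledge-gain scores; values are invented and only illustrate the design.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
rows = []
for fidelity in ("low", "high"):
    for freedom in ("low", "high"):
        boost = 37.4 if (fidelity, freedom) == ("high", "high") else 15.0
        for _ in range(20):
            rows.append({"fidelity": fidelity, "freedom": freedom,
                         "gain": boost + rng.normal(0, 8)})
df = pd.DataFrame(rows)

model = ols("gain ~ C(fidelity) * C(freedom)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))      # main effects and interaction
```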
The Palenque project: evaluating interaction in an online virtual archaeology site, pp. 121-139
  Erik Champion; Ian Bishop; Bharat Dave
This case study evaluated the effect on cultural understanding of three different interaction modes, each teamed with a specific slice of the digitally reconstructed environment. The three interaction modes were derived from an initial descriptive theory of cultural learning as instruction, observation and action. A major aim was to ascertain whether task performance was similar to the development of understanding of the cultural context reached by participation in the virtual environment. A hypothesis was that if task performance is equivalent to understanding and engagement, we might be able to evaluate the success of virtual heritage environments (through engagement and education) without having to annoy the user with post-experience questionnaires. However, the results suggest that interaction in virtual heritage environments is so contextually embedded that subjective post-test questionnaires can still be more reliable than evaluating task performance.
Keywords: Palenque; Virtual heritage; Cultural learning; Mayan
Discriminability-based evaluation of transmission capability of tactile transmission systems, pp. 141-150
  Shogo Okamoto; Masashi Konyo; Satoshi Tadokoro
Tactile transmission systems deliver tactile information such as texture roughness to operators of robotic systems. Such systems are typically composed of tactile sensors that sense the physical characteristics of textures and tactile displays that present tactile stimuli to operators. One problem associated with tactile transmission systems is that when the system has a bottleneck, it is difficult to identify whether the tactile sensor, tactile display, or perceptual ability of the user is the cause because they have different performance criteria. To solve this problem, this study established an evaluation method that uses the discriminability index as an evaluation criterion. The method lets tactile sensors, displays, and human tactile perception be assessed in terms of the ability to transmit physical quantities; the same criterion is used for all three possible causes so that their abilities can be directly compared. The developed method was applied to a tactile-roughness transmission system (Okamoto et al. 2009), and its tactile sensor was identified as the bottleneck of the system.
Keywords: Assessment of man-machine system; Discriminability index; Performance measurement; Tactile display; Tactile sensor
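The common criterion named in the abstract is a discriminability index; in signal detection theory this is usually d' computed from hit and false-alarm rates, and a minimal sketch of that computation is below. Whether the authors use exactly this formulation, and the rates shown, are assumptions.

```python
# Sketch of the signal-detection discriminability index d' = z(hit) - z(FA),
# the usual form of the criterion named in the abstract (assumed here).
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g. comparing a tactile sensor, a display, and a human observer on the
# same roughness-discrimination task (rates below are made up):
print(d_prime(0.85, 0.20))   # sensor
print(d_prime(0.75, 0.30))   # display (lowest d' -> candidate bottleneck)
print(d_prime(0.90, 0.15))   # human perception
```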
RTIL-system: a Real-Time Interactive L-system for 3D interactions with virtual plants, pp. 151-160
  Ludovic Hamon; Emmanuelle Richard; Paul Richard; Rachid Boumaza
The L-system is a rewriting process based on a formal grammar and is used to generate 3D, dynamic structures such as virtual plants and fractal graphics. In previous works, we highlighted that existing L-system software applications and programs are limited, either in terms of human interaction or in terms of modelling. In particular, few of them allow the user to interact with virtual plants during their growth. We therefore developed our own L-system engine, called the real-time interactive L-system (RTIL-system). The RTIL-system covers the most important L-system extensions, such as parametric and context-sensitive features. Furthermore, real-time interactions with the user and the environment, within the L-system formalism, are available. This paper presents the RTIL-system with a focus on human interaction, the Partial Interactive Derivation (PID) concept and its further extension to context-sensitive rules. To illustrate the potential of the RTIL-system, the effect of various interactive tasks such as sub-axis additions, pruning and bending on the subsequent dynamic development of virtual plants is described.
Keywords: Virtual reality; L-system; Real-time interaction; Virtual plant; Fractal
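As background for the rewriting process the RTIL-system extends, the sketch below performs a plain L-system derivation in Python; the grammar is Lindenmayer's textbook algae example, not the RTIL-system's own rules or its interactive features.

```python
# Minimal L-system rewriting: repeatedly replace each symbol by its
# production. The rules are a textbook example, not the RTIL-system grammar.
rules = {"A": "AB", "B": "A"}          # Lindenmayer's algae system

def derive(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for n in range(6):
    print(n, derive("A", n))
# 0 A / 1 AB / 2 ABA / 3 ABAAB / ...
```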
A novel approach in rehabilitation of hand-eye coordination and finger dexterity, pp. 161-171
  Y. Shen; P. W. Gu; S. K. Ong; A. Y. C. Nee
Stroke patients or victims who have been involved in serious accidents often suffer from impaired hand-eye coordination and muscle dexterity. Products, such as nine-hole pegboards, have been designed to help rehabilitate various skills, e.g., perceptual accuracy and finger dexterity. Patients who do not have sufficient muscle strength would not be able to carry out such traditional exercises. This paper presents research that aims at providing a fresh and viable approach to physiotherapy for such patients while emulating the rehabilitation capabilities of traditional products. A novel approach, AR-Rehab, for the rehabilitation of hand-eye coordination and finger dexterity has been developed incorporating Augmented Reality (AR) technology. In this application, users can interact with virtual piano keys in a real-life scene by moving their real hands, wearing data-gloves that detect the flexing of the fingers and markers that detect the position of the hands.
Keywords: Augmented reality; Rehabilitation; Hand-eye coordination; Finger dexterity

VR 2012-09 Volume 16 Issue 3

Haptic interpersonal communication: improvement of actions coordination in collaborative virtual environments, pp. 173-186
  Jean Simard; Mehdi Ammi
This article explores the use of haptic feedback for interpersonal communication in collaborative virtual environments. After a detailed presentation of all communication mechanisms involved, we propose the investigation of a low-level communication approach through the feedthrough mechanism. This channel is used to communicate kinematic information about a partner's gestures during closely coupled collaboration. Several communication metaphors, with complementary behaviors, were investigated to improve the coordination between two partners during an assembly task. The results clearly show the role of communication strategies for the improvement of gesture coordination and highlight the correlation between applied force and the level of efficiency.
Keywords: Collaborative virtual environments; Haptics; Awareness; Sensorial communication; Communication metaphors
Performance improvement of Distributed Virtual Environments by exploiting objects' attributes, pp. 187-203
  Christos Bouras; Eri Giannaka; Thrasyvoulos Tsiatsos
Distributed virtual environments (DVEs) need to address issues related to the control of network traffic, resource management, and scalability. Given the distributed nature of these environments, the main problems they need to overcome are the efficient distribution of workload among the servers and the minimization of the communication cost. In this direction, a lot of work has been done and numerous relevant techniques and algorithms have been proposed. The majority of these approaches mainly focus on user entities and their interactions. However, most actual DVE systems include additional, non-dynamic elements, denoted as objects, whose presence can affect users' behavior. This paper introduces virtual objects' attributes and proposes two approaches that exploit these attributes in order to handle workload assignment and communication cost in DVE systems. Both approaches take into account scenario-specific aspects of DVE systems, such as the impact that entities' attributes have on each other and the way this impact can affect the system's state. These scenario-specific aspects are then combined with quantitative factors of the system, such as workload, communication cost, and utilization. The experiments conducted to validate the behavior of the proposed approaches show that the incorporation of objects' presence can improve the DVE system's performance. More specifically, objects' presence and their attributes can assist in a significant reduction in the communication cost along with effective workload distribution among the system's servers.
Keywords: Distributed virtual environments; Load balancing; VR techniques and systems
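One way to picture the workload-assignment side of the paper is a greedy partitioning of world regions across servers in which each region's weight also reflects the objects it contains; the weighting scheme and the greedy assignment below are illustrative assumptions, not the authors' algorithms.

```python
# Greedy assignment of virtual-world regions to servers by weighted load.
# Each region's weight combines user count and object "attractiveness",
# a stand-in for the object attributes discussed in the paper.
import heapq

regions = {                       # region: (users, object_attractiveness)
    "plaza": (12, 3.0), "shop": (5, 2.5), "park": (8, 0.5),
    "arena": (20, 4.0), "alley": (2, 0.2),
}

def weight(users, attractiveness):
    return users + 2.0 * attractiveness        # assumed weighting

servers = [(0.0, f"server{i}", []) for i in range(3)]   # (load, name, regions)
heapq.heapify(servers)
for name, (users, attr) in sorted(regions.items(),
                                  key=lambda kv: -weight(*kv[1])):
    load, srv, assigned = heapq.heappop(servers)         # least-loaded server
    assigned.append(name)
    heapq.heappush(servers, (load + weight(users, attr), srv, assigned))

for load, srv, assigned in sorted(servers):
    print(srv, round(load, 1), assigned)
```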
A methodology for optimal voxel size computation in collision detection algorithms for virtual reality, pp. 205-213
  G. Echegaray; D. Borro
Real-time Virtual Reality applications require accuracy but are also time dependent; therefore, in these environments, time consumption is particularly important. For that reason, when facing the problem of Collision Detection for a Virtual Reality application, we first focus our attention on optimizing time performance for collisions among objects. Spatial Partitioning algorithms have been broadly used in Collision Detection. In particular, voxel-based methods are simple and quick, but finding the optimum voxel size is not trivial. We propose a methodology to easily determine the optimal voxel size for Collision Detection algorithms. Using an algorithm that represents volumetric objects with tetrahedra as an example, a performance cost function is defined in order to analytically bound the voxel size that gives the best computation times. This is done by inferring and estimating all the parameters involved, so that the cost function depends only on geometric data. In this way, it is possible to determine the optimal voxelization for any algorithm and scenario. Several solutions have been researched and compared. Experimental results with theoretical and real 3D models have validated the methodology. The reliability of our approach has also been compared with traditional experimental solutions given by previous works.
Keywords: Collision detection; Voxel size; Uniform spatial partitioning; Optimization
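The central idea, choosing the voxel size that minimizes a cost function depending only on geometric data, can be pictured as a one-dimensional minimization; the generic traversal-versus-test cost model and the constants below are assumptions, not the paper's exact function.

```python
# Sketch: pick the voxel size minimizing an assumed collision-query cost of
# the form  cost(s) = traversal(s) + tests(s), where smaller voxels mean more
# cells to visit but fewer tetrahedra per cell. Constants are illustrative.
import numpy as np

scene_extent = 1.0        # m, edge of the scene bounding box
n_primitives = 50_000     # tetrahedra in the volumetric model

def cost(s):
    cells = (scene_extent / s) ** 3                 # voxels in the grid
    prims_per_cell = n_primitives / max(cells, 1.0) + 1.0
    return 1e-8 * cells + 1e-6 * n_primitives * prims_per_cell

sizes = np.linspace(0.002, 0.2, 400)
best = sizes[np.argmin([cost(s) for s in sizes])]
print(f"optimal voxel size ~ {best * 1000:.1f} mm")
```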
An integrated virtual environment for feasibility studies and implementation of aerial MonoSLAM, pp. 215-232
  M. A. Amiri Atashgah; S. M. B. Malaek
This work presents a complete framework for an integrated aerial virtual environment (IAVE), which can effectively help implement MonoSLAM (single-camera simultaneous localization and mapping) on an aerial vehicle. The developed system allows investigating different flight conditions without using any preloaded maps or predefined features. A 3D graphical engine integrated with a full 6-DOF aircraft dynamic simulator, together with its trajectory generator, completes the package. The 3D engine generates and accumulates real-time images from a camera installed on the aerial vehicle. We exploit C++ to develop the 3D graphics engine (3DGE) and all its associated visual effects, including different types of lighting, climate conditions, and moving objects. The 3DGE exploits so-called Frenet Adapted Frames (FAF) with constrained angular velocities, which are very effective in motion modeling of both ground and aerial moving objects. A user-friendly, in-house-developed MATLAB GUI puts the offline MonoSLAM system into service. The current version of IAVE employs the so-called Inverse Depth Parameterization for feature depth estimation in monocular SLAM, and different case studies show its dependable results for low-cost aerial navigation of a general aviation low-speed aircraft.
Keywords: 3D graphics engine; Virtual environment; MonoSLAM; General aviation; Aerial navigation
Virtual reality as a communication process, pp. 233-241
  Daniele Marini; Raffaella Folgieri; Davide Gadia; Alessandro Rizzi
In this work, we consider immersive Virtual Reality (VR) as a communication process between humans, mediated by computer systems, which uses interaction, visualization, and other sensory stimuli to convey information. From this viewpoint, it is relevant to understand how VR can solve a given communication problem, what the expressive power of a VR system therefore is, i.e., its ability to establish the communication, what the guidelines are for designing an effective system, and what the more relevant models of VR applications are. Firstly, we try to clarify the notion of reality in Virtual Reality systems and conclude that reality is not an intrinsic characteristic of VR, but rather the result of a conventional way of coding information. The purpose of coding is to lead the observer to the conclusion that the VR set is what is called in Italian verisimile (from the Latin veri similis), i.e., "similar-to-the-real-thing". The creation of an effective VR application is thus an artifice or an illusion. But in order to avoid an over-reliance on the creativity of the VR designer, we intend to identify a solid ground on which different kinds of VR solutions can be considered in terms of their ability to meet the desired communication objective. To this aim, we rely on methods ranging from rhetoric to semiotics.
Keywords: Virtual Reality; Semiotics; Communication; Realism
Immersive manipulation of virtual objects through glove-based hand gesture interaction, pp. 243-252
  Gan Lu; Lik-Kwan Shark; Geoff Hall; Ulrike Zeshan
Immersive visualisation is increasingly being used for comprehensive and rapid analysis of objects in 3D and object dynamic behaviour in 4D. Challenges are therefore presented in providing natural user interaction that enables effortless virtual object manipulation. This paper presents the development and evaluation of an immersive human-computer interaction system based on stereoscopic viewing and natural hand gestures. The development is based on the integration of a back-projection stereoscopic system for object and hand display, a hybrid inertial and ultrasonic tracking system that provides the absolute positions and orientations of the user's head and hands, and a pair of high degrees-of-freedom data gloves that provide the relative positions and orientations of digit joints and tips on both hands. The evaluation is based on a two-object scene, with a virtual cube and a CT (computed tomography) volume, created to demonstrate real-time immersive object manipulation. The system is shown to provide a correct user view of objects and hands in 3D with depth, and to enable a user to use a number of simple hand gestures to perform basic object manipulation tasks involving selection, release, translation, rotation and scaling. The evaluation also includes quantitative tests of the system performance in terms of speed and latency.
Keywords: Hand gesture tracking and recognition; Immersive stereoscopic visualisation; Virtual object manipulation

VR 2012-11 Volume 16 Issue 4

NAVIG: augmented reality guidance system for the visually impaired, pp. 253-269
  Brian F. G. Katz; Slim Kammoun; Gaëtan Parseihian; Olivier Gutierrez
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.
Keywords: Assisted navigation; Guidance; Spatial audio; Visually impaired assistive device; Need analysis
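Keeping guidance audio in a head-centered reference frame, as the abstract describes, amounts to transforming the target position by the inverse of the head pose before spatial rendering; the yaw-only 2D sketch below illustrates that transform and is not the NAVIG implementation.

```python
# Sketch: express a geolocated target in the user's head-centered frame
# (yaw-only, 2D) so a spatial audio renderer can place the sound there.
import math

def head_centered(target_xy, head_xy, head_yaw_deg):
    """Yaw is the heading of the forward direction, CCW from the +x axis."""
    dx = target_xy[0] - head_xy[0]
    dy = target_xy[1] - head_xy[1]
    yaw = math.radians(head_yaw_deg)
    # rotate the world-frame offset into the head frame
    x = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    y = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    azimuth = math.degrees(math.atan2(y, x))   # 0 deg = straight ahead, +left
    return math.hypot(x, y), azimuth

dist, az = head_centered((10.0, 5.0), (8.0, 5.0), head_yaw_deg=90.0)
print(round(dist, 2), round(az, 1))            # 2.0 m, -90 deg: directly to the right
```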
Toward the design of transitional interfaces: an exploratory study on a semi-immersive hybrid user interface, pp. 271-288
  Felipe G. Carvalho; Daniela G. Trevisan; Alberto Raposo
A task that can be decomposed into subtasks with different technological demands may be a challenge, since it requires multiple interactive environments as well as transitions between them. Some of these transitions may involve changes in hardware devices and interface paradigms at the same time. Some previous works have proposed various setups for hybrid user interfaces, but none of them focused on the design of transition interactions. Our work emphasizes the importance of interaction continuity as a guideline in the design and evaluation of transitional interfaces within a hybrid user interface (HUI). Finally, an exploratory study demonstrates how this design aspect is perceived by users during transitions in an HUI composed of three interactive environments.
Keywords: Transitional interfaces; Hybrid user interfaces; Continuity properties
Camera space shadow maps for large virtual environments, pp. 289-299
  Ivica Kolic; Zeljka Mihajlovic
This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. The technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented; this API simplifies the generation of fast and high-quality shadows.
Keywords: Shadow maps; Real-time shadows; Dynamic shadows; Virtual environments
Supporting cognitive processing with spatial information presentations in virtual environments, pp. 301-314
  Eric D. Ragan; Doug A. Bowman; Karl J. Huber
While it has been suggested that immersive virtual environments could provide benefits for educational applications, few studies have formally evaluated how the enhanced perceptual displays of such systems might improve learning. Using simplified memorization and problem-solving tasks as representative approximations of more advanced types of learning, we are investigating the effects of providing supplemental spatial information on the performance of learning-based activities within virtual environments. We performed two experiments to investigate whether users can take advantage of a spatial information presentation to improve performance on cognitive processing activities. In both experiments, information was presented either directly in front of the participant, at a single location, or wrapped around the participant along the walls of a surround display. In our first experiment, we measured memory scores and analyzed participant strategies for a memorization and recall task. In addition to comparing spatial and non-spatial presentations, we also varied field of view and background imagery. The results showed that the spatial presentation caused significantly better memory scores. Additionally, a significant interaction between background landmarks and presentation style showed that participants used more visualization strategies during the memorization task when background landmarks were shown with spatial presentations. To investigate whether the advantages of spatial information presentation extend beyond memorization to higher level cognitive activities, our second experiment employed a puzzle-like task that required critical thinking using the presented information. Focusing only on the effects of spatial presentations, this experiment measured task performance and mental workload. The results indicate that no performance improvements or mental workload reductions were gained from the spatial presentation method compared with a non-spatial layout for our problem-solving task. The results of these two experiments suggest that supplemental spatial information can affect mental strategies and support performance improvements for cognitive processing and learning-based activities. However, the effectiveness of spatial presentations is dependent on the nature of the task and a meaningful use of space and may require practice with spatial strategies.
Keywords: Virtual environments; Memory; Cognition; Learning; Space
Evaluation of an electronic video game for improvement of balance, pp. 315-323
  Kristiina M. Valter McConville; Sumandeep Virk
Virtual environments have been investigated for fitness and medical rehabilitation. In this study, the Sony EyeToy® and PlayStation 2® were used with the AntiGrav™ game to evaluate their potential for improving postural balance. The game required lateral head, body, and arm movements. The performance on balance tests of subjects who trained for 3 weeks with this game was compared to the performance of controls who were not trained. Training subjects showed improvement for two of the three tests (each testing a different facet of balance), suggesting specificity of training, while control subjects did not show significant improvement on any test. Simulator sickness questionnaire results showed a variety of mild symptoms, which decreased over the training sessions. Motor learning analysis of the game scores showed that mastery had been achieved on the easier level in the game, but not on the second level of difficulty. This reflects the potential for continued learning and training through advanced levels within a game. A model parameter using the time constants of game score improvement was developed, which could be used to quantify the difficulty for any video game design. The results suggest that this video game could be used for some aspects of balance training.
Keywords: Balance training; Difficulty model; Motor learning; Simulator sickness; Video game; Virtual environment
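The model parameter mentioned in the abstract is built from time constants of game score improvement, which suggests an exponential learning curve; the sketch below fits such a curve to per-session scores with SciPy. The functional form and the scores are assumptions for illustration.

```python
# Fit an assumed exponential learning curve  s(t) = a - b*exp(-t/tau)  to
# per-session game scores; tau then quantifies how quickly the game is
# mastered (a candidate difficulty measure). The data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, a, b, tau):
    return a - b * np.exp(-t / tau)

sessions = np.arange(1, 10)
scores = np.array([40, 55, 66, 72, 78, 80, 83, 84, 85], dtype=float)

params, _ = curve_fit(learning_curve, sessions, scores, p0=(90, 60, 3))
a, b, tau = params
print(f"asymptote {a:.1f}, time constant {tau:.2f} sessions")
```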
Overcoming the information overload problem in a multiform feedback-based virtual reality system for hand motion rehabilitation: healthy subject case study, pp. 325-334
  Sha Ma; Martin Varley; Lik-Kwan Shark; Jim Richards
The use of composite multiple feedback in a newly proposed virtual reality system enables the patient to perceive performance in the virtual world similar to that in the real world. However, it might cause information overload, which makes the patient feel confused and distracted during training. The aim of this study is to investigate the effectiveness of providing separate, function-specific feedback pre-training prior to the final multiform feedback task. In evaluation tests with thirty healthy subjects, it was found that an effective pre-training set could overcome the information overload problem in the main task. Minor modifications to the pre-training set could either overcome or aggravate the problem, which indicates the importance of choosing the correct pre-training parameters.
Keywords: Virtual world; User training for immersive environment; Multiform feedback; Hand motion; Function rehabilitation; EMG