| A pseudo-immersive virtual environment -- a framework for modelling sheet deformation | | BIBAK | Full-Text | 1-16 | |
| B. S. Mahal; D. E. R. Clark; J. E. L. Simmons | |||
| This paper presents a real-time, computationally inexpensive environment for
accurate simulations of sheet materials on a personal computer. The approach
described differs from other techniques through its novel use of multilayer
sheet structures. The ultimate aim is to incorporate into the environment the
capacity to simulate a range of temperatures. A pseudo-immersive "Window on
World" (WoW) environment is used to handle the implementation of the real-time,
aesthetically accurate deformation algorithm (MaSSE: Mass-Spring Simulation
Engine). The motion of the sheet is controlled by simulated gravity and through
its interaction with objects that have been inserted into a virtual room. In
addition, the WoW interface is used to dynamically adjust environmental
parameters and the scene viewing perspective. An obvious use of the
environment is centred on mechanical engineering-based real-time simulations of
heat-sensitive sheet materials. This would allow for a wide range of
applications in virtual manufacturing including the clothing industry and
hostile environments. Keywords: Mass-spring systems; Sheet deformation; Heat simulation; Pseudo-immersive
desktop environment; Ordinary differential equations | |||
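As a rough illustration of the mass-spring approach the abstract describes, the sketch below advances a sheet of point masses linked by springs by one explicit-Euler step of the governing ODEs. It is a minimal sketch, not the paper's MaSSE implementation; all names and constants are illustrative.

```python
# One integration step for a mass-spring sheet (illustrative sketch only).
# Each point mass obeys  m*a = spring forces - c*v + m*g,  integrated with
# explicit Euler; dt must stay small for this scheme to remain stable.
import numpy as np

def step_sheet(pos, vel, springs, rest, mass=0.01, k=40.0, c=0.05,
               g=np.array([0.0, -9.81, 0.0]), dt=1e-3):
    """pos, vel: (n,3) arrays; springs: (m,2) index pairs; rest: (m,) lengths."""
    force = np.tile(mass * g, (len(pos), 1))       # gravity on every mass
    d = pos[springs[:, 1]] - pos[springs[:, 0]]    # current spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)             # equal and opposite forces
    np.add.at(force, springs[:, 1], -f)
    force -= c * vel                               # simple viscous damping
    vel = vel + dt * force / mass
    return pos + dt * vel, vel
```

A temperature effect of the kind the abstract anticipates could, for instance, be modelled by making the stiffness k a function of simulated temperature.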
| Interaction with a desktop virtual environment: a 2D view into a 3D world | | BIBAK | Full-Text | 17-25 | |
| Eleanor Marshall; Sarah Nichols | |||
| With the development of computer software and hardware in the past few
years, it has been possible to produce effective training virtual environments
on everyday personal computers with little expert training required for users
or designers. However, the development of the enabling equipment has not been
matched by research on what features to include when designing these
environments. Despite these advances in PC capabilities for desktop virtual
environments (VEs), there are still limitations on the number of objects that
can be programmed to be interactive, usually due to restrictions on
programming time and cost. As a result, it is often left to the programmer to
decide which of the objects included to increase the realism of the
environment will be interactive and which purely aesthetic. The work presented in
this paper is an experiment that aims to establish a guide for environment
designers to aid effective environment interaction development by identifying
key elements in a VE design. Keywords: Desktop virtual environments; Interaction hotspots; Training; Virtual
environment development; Virtual environment design | |||
| Navigation in desktop virtual environments: an evaluation and recommendations for supporting usability | | BIBAK | Full-Text | 26-40 | |
| Angelia Sebok; Espen Nystad; Stein Helgar | |||
| Virtual reality (VR) can provide useful tools for a variety of applications.
However, for these tools to be effective, they must be easy to use. In virtual
environments (VEs), usability is impaired by poorly designed navigation
systems. Insufficient realism and missing physiological orientation and motion
cues impair spatial learning in desktop VEs. Capabilities for navigation in a
VE are far more varied than in reality, so much greater flexibility can be
offered, but designing VEs with too many options can overwhelm users. To assist
designers in building effective, usable navigation systems for VEs, navigation
techniques must be evaluated to identify which features actually support users
in accomplishing their tasks and which features create unnecessary problems.
This study evaluates navigation in two different VEs to develop
recommendations for the design of navigation systems in desktop VEs. The study
consists of an objective assessment of navigation control dynamics, a
guideline-based evaluation and a review of data collected during two
experimental studies. The findings indicate that real-world constraints,
specialised navigation techniques and feedback regarding location and
direction of travel are needed to support navigation in desktop VEs. Keywords: Controls; Guidelines; Navigation; Usability; Virtual environment | |||
| Design and display of enhancing information in desktop information-rich virtual environments: challenges and techniques | | BIBAK | Full-Text | 41-54 | |
| Nicholas F. Polys; Doug A. Bowman | |||
| Information-rich virtual environments (IRVEs) have been described as
environments in which perceptual information is enhanced with abstract (or
symbolic) information, such as text, numbers, images, audio, video, or
hyperlinked resources. Desktop virtual environment (VE) applications present
information design and layout challenges similar to those of immersive VEs,
but, in addition, they may also be integrated with external windows or frames
commonly used in desktop interfaces. This paper enumerates design approaches for the
display of enhancing information both internal and external to the virtual
world's render volume. Using standard Web-based software frameworks, we explore
a number of implicit and explicit spatial layout methods for the display and
linking of abstract information, especially text. Within the VE view, we
demonstrate both heads-up displays (HUDs) and encapsulated scenegraph behaviors
we call semantic objects. For desktop displays, which support information
display venues external to the scene, we demonstrate the linking and
integration of the scene with Web browsers and external visualization
applications. Finally, we describe the application of these techniques in the
PathSim visualizer, an IRVE interface for the biomedical domain. These design
techniques are relevant to instructional and informative interfaces for a wide
variety of VE applications. Keywords: Information-rich virtual environments; Visualization design; Information
psychophysics; Multiple view architectures; Desktop virtual environments | |||
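Of the layout techniques listed, the HUD is the easiest to make concrete: its content stays screen-fixed by re-deriving its world transform from the camera pose every frame. The sketch below assumes a generic matrix-based scenegraph; it is not the paper's Web-based framework.

```python
# Keep a HUD node screen-fixed (illustrative sketch, generic scenegraph).
import numpy as np

def hud_world_transform(camera_pose, offset=np.array([0.0, 0.0, -1.0])):
    """camera_pose: 4x4 camera-to-world matrix. Returns a world transform
    that places the HUD at a fixed offset in front of the camera, inheriting
    the camera's rotation so annotations always face the viewer."""
    hud = camera_pose.copy()
    hud[:3, 3] = camera_pose[:3, 3] + camera_pose[:3, :3] @ offset
    return hud   # apply to the HUD node each frame, before rendering
```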
| Evaluating design guidelines for reducing user disorientation in a desktop virtual environment | | BIBAK | Full-Text | 55-62 | |
| Shamus P. Smith; Tim Marsh | |||
| Navigation in virtual environments can be difficult. One contributing factor
is user disorientation. Two major causes of this are the lack of navigation
cues in the environment and problems with navigating too close to or through
virtual world objects. Previous work has developed guidelines, informed by
cinematography conventions, for the construction of virtual environments to aid
user comprehension of virtual space to reduce user disorientation. To validate
these guidelines, two user studies have been performed in which users of a
desktop virtual environment complete a navigation task in a virtual maze. In an
initial study [12], collision detection with the maze walls was not enabled and
the results indicated that the guidelines were effective for reducing
disorientation but not for developing the user's awareness of the environment
space. A second study has been performed where collision detection was enabled.
Results suggest that the use of the guidelines can help reduce the incidences
of user disorientation and aid navigation tasks. However, the guidelines have
little impact on users' ability to construct cognitive maps of the desktop
virtual environment. Keywords: Navigation; Virtual environment; User disorientation; Design guidelines;
Evaluation study | |||
| Fast continuous collision detection and handling for desktop virtual prototyping | | BIBAK | Full-Text | 63-70 | |
| Stephane Redon | |||
| This paper presents an overview of our recent work on continuous collision
detection methods and constraint handling for rigid polyhedral objects. We
demonstrate that continuous collision detection algorithms are practical in
interactive dynamics simulation of complex polyhedral rigid bodies and show how
continuous collision detection and efficient constraint-based dynamics
algorithms allow us to perform various virtual prototyping tasks intuitively,
precisely and robustly on commodity desktop computers. In particular, we present
two applications of our system to actual industrial cases. We note that both
tasks are performed with a simple 2D mouse on a high-end computer. Keywords: Continuous collision detection; Dynamics simulation; Virtual prototyping;
Virtual reality | |||
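The abstract's key distinction is *continuous* (swept) detection: testing the whole motion over a frame rather than sampled instants. The paper works with rigid polyhedra; the sketch below shows the idea on the simplest possible primitive, two linearly moving spheres, by solving for the earliest contact time analytically. It is illustrative only, not the paper's algorithm.

```python
# Earliest contact time of two linearly moving spheres within one frame.
# Solve |dp + t*dv| = r1 + r2 for t in [0,1]; a discrete test at the frame
# endpoints could miss this contact entirely (the "tunnelling" problem).
import numpy as np

def first_contact_time(p1, v1, r1, p2, v2, r2):
    dp, dv, r = p1 - p2, v1 - v2, r1 + r2
    a = np.dot(dv, dv)
    b = 2.0 * np.dot(dp, dv)
    c = np.dot(dp, dp) - r * r
    if c <= 0.0:
        return 0.0                    # already touching at frame start
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                   # no relative motion, or never touch
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if 0.0 <= t <= 1.0 else None
```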
| Taxonomy for visualizing location-based information | | BIBAK | Full-Text | 71-82 | |
| Riku Suomela; Juha Lehikoinen | |||
| Location-based data is digital information that has a real-world location.
Location-based data can be used for many purposes, such as providing additional
information on real-world objects or helping a user in a specific task. Access
to such data can be provided in many ways, for example, with augmented reality
(AR) systems. AR techniques can help users in various tasks, and the AR data
can be presented to the user in various ways, depending on the task at hand.
The different visualizations that can be used are heavily dependent on the
hardware platform and, thus, not all technologies are suitable for every
situation. This paper studies two factors that affect the visualization of
location-based data. The two factors are the environment model they use,
ranging from three dimensions (3D) to no dimensions (0D) at all; and the
viewpoint, whether it is a first-person or a third-person view. As a result, we
define a taxonomy for visualizing location-based data, where each model-view
(MV) combination is referred to using its MV number. We also present numerous
case studies with different MV values. Keywords: Location-based data; Virtual objects; Augmented reality; Visualization;
Taxonomy | |||
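A minimal data structure makes the taxonomy concrete: each visualization is classified by its model dimensionality and its viewpoint. The label encoding below is a readable guess, not the paper's exact MV numbering scheme.

```python
# Illustrative encoding of a model-view (MV) combination from the taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class MVClass:
    model_dim: int        # 0..3: dimensionality of the environment model
    first_person: bool    # True = first-person view, False = third-person

    @property
    def label(self) -> str:
        return f"{self.model_dim}{'F' if self.first_person else 'T'}"

# e.g. a 2D map browsed from a third-person view:
print(MVClass(model_dim=2, first_person=False).label)   # -> "2T"
```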
| Conceptualising mixed spaces of interaction for designing continuous interaction | | BIBAK | Full-Text | 83-95 | |
| Daniela Gorski Trevisan; Jean Vanderdonckt; Benoît Macq | |||
| Recent progress in the overlay and registration of digital information on
the user's workspace in a spatially meaningful way has allowed mixed reality
(MR) to become a more effective operational medium. However, research in
software structures, design methods and design support tools for MR systems is
still in its infancy. In this paper, we propose a conceptual classification of
the design space to support the development of MR systems. The proposed design
space (DeSMiR) is an abstract tool for systematically exploring several design
alternatives at an early stage of interaction design, without being biased
towards a particular modality or technology. Once the abstract design
possibilities have been identified and a concrete design decision has been
taken (i.e. a specific modality has been selected), a concrete MR application
can be considered in order to analyse the interaction techniques in terms of
continuous interaction properties. We suggest that our design space can be
applied to the design of several kinds of MR applications, especially those in
which very little user focus distraction can be tolerated, and where smooth
connections and interactions between real and virtual worlds are critical for
the system development. An image-guided surgery system (IGS) is used as a case
study. Keywords: Design space; Mixed reality; Continuous interaction; Image-guided surgery | |||
| Virtual museums for all: employing game technology for edutainment | | BIBAK | Full-Text | 96-106 | |
| George Lepouras; Costas Vassilakis | |||
| Museums have started to realise the potential of new technologies for the
development of edutainment content and services for their visitors. Virtual
reality technologies promise to offer a vivid, enjoyable experience to the
museums' guests, but the cost in time, effort and resources can prove to be
overwhelming. In this paper, we propose the use of 3D game technologies for the
purpose of developing affordable, easy-to-use and pleasing virtual
environments. To this end, we present a case study based on an already
developed version of a virtual museum and a newly implemented version that uses
game technologies. The informal assessment indicates that game technologies can
offer a prominent and viable solution to the need for affordable desktop
virtual reality systems. Keywords: Virtual museums; Desktop VR; 3D game technologies | |||
| Avatar gender and personal space invasion anxiety level in desktop collaborative virtual environments | | BIBAK | Full-Text | 107-117 | |
| Nasser Nassiri; Norman Powell; David Moore | |||
| We report an investigation exploring the effect of avatar gender on the
anxiety level caused by personal space (PS) invasion in desktop collaborative
virtual environments (DCVE). We outline an experiment in which participants, of
both genders, whose avatars' PS were "invaded" by other avatars of either
gender, reported their anxiety levels through the use of a post-experiment
questionnaire. The data from the questionnaire are analysed and discussed. The
results suggest that the combination of the gender of the invading avatar and
the avatar being invaded has an influence on the PS invasion anxiety level and
that the ranking of gender combination groups differs strikingly from that
observed for PS invasion in physical environments. Results also show that
the participants in general did not register high anxiety, contrary to what one
might expect from personal space invasion in the physical world. Keywords: Personal space; Collaborative virtual environment; Avatars | |||
| Beyond user experimentation: notational-based systematic evaluation of interaction techniques in virtual reality environments | | BIBAK | Full-Text | 118-128 | |
| Emmanuel Dubois; Luciana P. Nedel; Carla M. Dal Sasso Freitas | |||
| Despite the increasing number of interaction devices for virtual reality
(VR) applications (e.g. data-gloves, space balls, data-suits and so on),
surprisingly very little attention has been given to the evaluation of VR
interaction techniques or more generally to the usability of virtual reality
environments (VREs). The main reasons for these limited efforts are probably
that empirical user testing with VREs is difficult and time-consuming, and
that ergonomic rules or criteria and traditional HCI tools and methods are not
well suited to VREs. Alternatively, the specification of interaction based on a
formal method or notation provides a precise and unambiguous description that
can be used to reason about the user's actions while interacting with a VRE. In this
paper, we propose a new approach to design interaction techniques in VRE, based
on the use of a formal specification language: the ASUR notation. In the early
stages of system design, time and effort are reduced by assisting the designers
in considering alternative solutions and anticipating usability issues. To
better explain the proposed methodology, we report an evaluation of selection
and manipulation techniques in a virtual environment based on a chess game. The
evaluation has been carried out in two ways: predictively, with the help of the
ASUR notation, and empirically via user experiments. We present the outcomes of
the empirical studies and demonstrate that reasoning with the ASUR notation
leads to results that are similar, but also complementary, to those obtained
with the experiments. Keywords: Mixed reality; Virtual reality; 3D interaction; Interaction design notation;
User experimentation | |||
| Editorial | | BIB | Full-Text | 129-130 | |
| Robert J. Stone | |||
| REMOTE: desk-top Virtual Reality for future command and control? | | BIBAK | Full-Text | 131-146 | |
| R. S. Aylett; C. Delgado; J. H. Serna; R. Stockdale; H. Clarke | |||
| This paper discusses a study in supporting collaborative military planning
in which groupware, video-conferencing and a desktop Collaborative Virtual
Environment (CVE) were used. It discusses the design and implementation of the
CVE and the setup and execution of the study using questionnaires and
observation. The results of the study questionnaires showed that the CVE was
not seen by users as the best of the ways offered to support collaborative
planning; these results are discussed and their implications for the design of
such a CVE are assessed. Keywords: Virtual environment; Collaboration; Groupware; Avatar; Net-VE | |||
| Using wireless technology to develop a virtual reality command and control centre | | BIBAK | Full-Text | 147-155 | |
| Damian Green; Neville Stanton; Guy Walker; Paul Salmon | |||
| This paper investigates the applicability of wireless communication systems
for use in command and control environments. Human positional data is
transmitted over a wireless network. This data is then used to update a highly
accurate real-time 3D model of the surroundings, with avatars
positioned at the transmitted points. The data is displayed on a stereoscopic
3D screen enabling novel automatic tracking of human movement and allowing for
more rapid and informed tactical decision making. The system has applicability
in a variety of C4I environments, including the military and emergency
services. Keywords: Wireless technology; Applications; Command and control; C4I; Human factors | |||
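The data path the abstract describes is straightforward to sketch: positional fixes arrive over the wireless network and drive avatar placement in the 3D scene. The packet layout and the update_avatar callback below are hypothetical, not the system's actual protocol.

```python
# Receive positional fixes over UDP and move the matching avatar (sketch).
import socket
import struct

def receive_positions(update_avatar, port=5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(64)
        # hypothetical packet: uint32 person id + three float64 coordinates
        person_id, x, y, z = struct.unpack("!Iddd", data[:28])
        update_avatar(person_id, (x, y, z))
```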
| Virtual environment cultural training for operational readiness (VECTOR) | | BIB | Full-Text | 156-167 | |
| John E. Deaton; Charles Barba; Tom Santarelli; Larry Rosenzweig | |||
| The Euclid RTP 11.13 Synthetic Environment Development and Exploitation Process (SEDEP) | | BIBAK | Full-Text | 168-176 | |
| Keith Ford | |||
| Euclid RTP 11.13 was a major initiative to promote the use of synthetic
environments (SEs) in Europe. One of the main results from the programme was
the concept of the SE development environment (SEDE) for creating and utilising
SEs, which is analogous to an integrated development environment for developing
software applications. The purpose of the SEDE is to provide a facility that
will assist the different types of SE users, i.e. Problem Setters, Problem
Solvers, and SE Implementers, so that SEs can be delivered faster, better and
cheaper. The SEDE comprises five main components: the SE Development and
Exploitation process (SEDEP), repository, SE management tool (SEMT), SE tools
(both COTS and those being developed in Euclid 11.13) and a Knowledge Base. The
SEDEP was developed from FEDEP version 1.5 and its purpose is to provide
additional information to the SE community not covered by the terms of
reference of the FEDEP. In particular, it is a generic process that is not
dedicated to one kind of interoperability technology and covers the complete SE
lifecycle, from eliciting the user needs through to evaluating the results from
operating the SE. In order to capture the work done in RTP 11.13, the FEDEP and
SEDEP development teams worked together to pull through applicable information
into the IEEE 1516.3 version of the FEDEP. Following the conclusion of RTP
11.13, further development of the SEDEP has stopped whilst a new 'owner' is
found for it. However, the SEDEP version 2.0 is still publicly available
(http://www.euclid1113.com) and SE developers are encouraged to use it since it
complements the information provided by the FEDEP. Keywords: Euclid RTP 11.13; Synthetic Environments; Process; SEDEP | |||
| Wearable augmented virtual reality for enhancing information delivery in high precision defence assembly: an engineering case study | | BIBA | Full-Text | 177-184 | |
| Philip N. Day; Gus Ferguson; Patrik O'Brian Holt; Steven Hogg | |||
| Virtual reality (VR) technology has matured during the past few years to a
degree where real industrial applications have become feasible. The work
described in this paper involved collaboration between Heriot-Watt University
and BAE Systems and aimed to establish the feasibility of using augmented VR
to support complex information delivery in high precision defence assembly.
Laboratory and field studies were conducted which investigated performance
when using augmented VR as compared to conventional methods of information
delivery. The results show that augmented VR is comparable to conventional
methods of information delivery in terms of latencies and errors but allows
less disruption to work and greater mobility. There appear to be no adverse
effects on operators from using VR and generally operators are positive
towards using VR technology. The feasibility of supporting augmented VR with
wearable technology is also demonstrated. The overall results are discussed in
terms of further application of VR in industrial settings. | |||
| Intelligent virtual agents keeping watch in the battlefield | | BIBAK | Full-Text | 185-193 | |
| Pilar Herrero; Angélica de Antonio | |||
| One of the first areas where virtual reality found a practical application
was military training. Two fairly obvious reasons have driven the military to
explore and employ this kind of technique in their training: to reduce exposure
to hazards and to increase stealth. Many aspects of combat operations are very
hazardous, and they become even more dangerous if the combatant seeks to
improve his performance. Some smart weapons are autonomous, while others are
remotely controlled after they are launched. This allows the shooter and weapon
controller to launch the weapon and immediately seek cover, thus decreasing his
exposure to return fire. Before launching a weapon, the person who controls
that weapon must acquire/perceive as much information as he can, not only from
the environment, but also from the people who inhabit that environment.
Intelligent virtual agents (IVAs) are used in a wide variety of simulation
environments, especially in order to simulate realistic situations such as, for
example, high-fidelity virtual environments (VEs) for military training that
allow thousands of agents to interact in battlefield scenarios. In this paper,
we propose a perceptual model which seeks to introduce more coherence between
IVA perception and human perception, increasing the psychological
"coherence" between real life and the VE experience. Agents lacking this
perceptual model could react in a non-realistic way, hearing or seeing things
that are too far away or hidden behind other objects. The perceptual model we
propose in this paper introduces human limitations into the agent's
perceptual model with the aim of reflecting human perception. Keywords: Intelligent virtual agents (IVAs); Perception; Awareness; Focus; Nimbus;
Human factors | |||
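The Focus and Nimbus keywords point at a spatial model of awareness: an agent perceives an object only when the object lies inside the agent's sensory focus and the agent lies inside the object's nimbus. The sketch below shows that double test for a vision-like sense; the range and angle limits are illustrative human-like values, not the paper's.

```python
# Focus/nimbus awareness test (illustrative sketch, made-up limits).
import numpy as np

def in_vision_focus(agent_pos, agent_dir, target_pos,
                    max_range=50.0, half_angle_deg=100.0):
    """agent_dir must be a unit vector; models a limited vision cone."""
    to_target = target_pos - agent_pos
    dist = np.linalg.norm(to_target)
    if dist == 0.0 or dist > max_range:
        return False
    return np.dot(agent_dir, to_target / dist) >= np.cos(np.radians(half_angle_deg))

def aware(agent_pos, agent_dir, target_pos, nimbus_radius=60.0):
    """Perceive the target only if it is in focus AND we are in its nimbus."""
    return (in_vision_focus(agent_pos, agent_dir, target_pos)
            and np.linalg.norm(agent_pos - target_pos) <= nimbus_radius)
```

An agent without such limits would "see" targets behind it or kilometres away, which is exactly the non-realistic behaviour the abstract warns against.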
| ERT-VR: an immersive virtual reality system for emergency rescue training | | BIBAK | Full-Text | 194-197 | |
| Lei Li; Maojun Zhang; Fangjiang Xu; Shaohua Liu | |||
Virtual reality technology offers a cost-effective means of training emergency
rescuers, an urgent task given increasing terrorist activity. An immersive
virtual reality system called ERT-VR is introduced. In
ERT-VR, the display system based on stereoscopic projectors is used to train
the emergency rescue commanders. The members of the operational teams use
head-mounted displays as their display system. The 3D scenario creator is the most
important unit in ERT-VR. Instructors assign a specific training scenario to
the trainees by using the scenario creator. Trainees take on the role of the
characters in the training scenario and control their actions and ultimately
the scenario outcomes. All the actions are recorded into the database system
and can be replayed anytime. The potential of each trainee is evaluated by an
expert system. Keywords: Virtual reality; Emergency rescue training; Advance training technology;
Scenario manager | |||
| Editorial | | BIB | Full-Text | 199-200 | |
| Patrick Olivier; Steven K. Feiner | |||
| Mixed feelings: expression of non-basic emotions in a muscle-based talking head | | BIBAK | Full-Text | 201-212 | |
| Irene Albrecht; Marc Schröder; Jörg Haber; Hans-Peter Seidel | |||
| We present an algorithm for generating facial expressions for a continuum of
pure and mixed emotions of varying intensity. Based on the observation that in
natural interaction among humans, shades of emotion are much more frequently
encountered than expressions of basic emotions, a method to generate more than
Ekman's six basic emotions (joy, anger, fear, sadness, disgust and surprise) is
required. To this end, we have adapted the algorithm proposed by Tsapatsoulis
et al. [1] to be applicable to a physics-based facial animation system and a
single, integrated emotion model. A physics-based facial animation system was
combined with an equally flexible and expressive text-to-speech synthesis
system, based upon the same emotion model, to form a talking head capable of
expressing non-basic emotions of varying intensities. With a variety of
life-like intermediate facial expressions captured as snapshots from the
system, we demonstrate the appropriateness of our approach. Keywords: Continuous emotions; Emotional speech synthesis; Facial animation | |||
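The paper adapts Tsapatsoulis et al.'s scheme to a physics-based face; the sketch below shows only the simplest ingredient of any such approach: forming a mixed emotion as an intensity-weighted blend of per-emotion muscle activations. The activation basis here is a random placeholder, not the paper's model.

```python
# Blend basic-emotion muscle activations into a mixed expression (sketch).
import numpy as np

BASIC = ["joy", "anger", "fear", "sadness", "disgust", "surprise"]
# placeholder basis: one vector of 8 muscle activations per basic emotion
BASIS = {e: np.random.default_rng(i).uniform(0, 1, 8)
         for i, e in enumerate(BASIC)}

def mixed_expression(intensities):
    """intensities: e.g. {'joy': 0.4, 'surprise': 0.5}, weights in [0,1]."""
    act = sum(w * BASIS[e] for e, w in intensities.items())
    return np.clip(act, 0.0, 1.0)     # keep activations in a valid range

params = mixed_expression({"joy": 0.4, "surprise": 0.5})  # "pleasant surprise"
```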
| Model-based video tracking for gestural interaction | | BIBAK | Full-Text | 213-221 | |
| J.-B. de la. Rivière; P. Guitton | |||
| Among many techniques to interact with 3D environments, gesture-based input
appears promising. However, due to insufficient computing hardware
capabilities, such interfaces have had to be built either upon standard
tracking devices or using limited image-based video tracking algorithms. As
today's computing power continues to grow, more complex video analysis, such
as real-time model-based tracking, is within reach. Considering the use of a
model-based approach to allow unencumbered input gives us the advantage of
extracting a low-level hand description useful for building natural interfaces.
The algorithm we developed relies on a 3D polygonal hand model. Its pose
parametrization is iteratively refined so that its 2D projection matches the
input 2D image more closely. Relying on the graphics hardware to handle fast 2D
projection is critical, while adding more cameras is useful to cope with the
occlusion issue. Keywords: Full hand pose estimation; Real-time video analysis; Articulated tracking;
VR interaction techniques | |||
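The iterative refinement loop the abstract outlines can be sketched as greedy coordinate descent on the pose parameters: perturb each parameter, re-project the hand model, and keep whatever reduces the mismatch with the input image. Here render() stands in for the GPU projection pass and mismatch() for the image comparison; both are assumptions, not the paper's code.

```python
# Greedy refinement of a hand pose against one input image (sketch).
import numpy as np

def refine_pose(pose, image, render, mismatch, step=0.01, iters=20):
    """pose: 1-D np.ndarray of pose parameters; mismatch(a, b) -> scalar."""
    err = mismatch(render(pose), image)
    for _ in range(iters):
        for i in range(len(pose)):
            for delta in (step, -step):          # try both directions
                trial = pose.copy()
                trial[i] += delta
                e = mismatch(render(trial), image)
                if e < err:                      # keep improving moves only
                    pose, err = trial, e
                    break
    return pose
```

With several cameras, mismatch() would sum the error over all views, which is how extra cameras help with the occlusion issue the abstract mentions.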
| Untethered gesture acquisition and recognition for virtual world manipulation | | BIBA | Full-Text | 222-230 | |
| David Demirdjian; Teresa Ko; Trevor Darrell | |||
| Humans use a combination of gesture and speech to interact with objects and
usually do so more naturally without holding a device or pointer. We present a
system that incorporates user body-pose estimation, gesture recognition and
speech recognition for interaction in virtual reality environments. We
describe a vision-based method for tracking the pose of a user in real time
and introduce a technique that provides parameterized gesture recognition.
More precisely, we train a support vector classifier to model the boundary of
the space of possible gestures, and train Hidden Markov Models (HMMs) on
specific gestures. Given a sequence, we can find the start and end of various
gestures using the support vector classifier, and find gesture likelihoods and
parameters with an HMM. A multimodal recognition process is performed using
rank-order fusion to merge speech and vision hypotheses. Finally, we describe
the use of our multimodal framework in a virtual world application that allows
users to interact using gestures and speech. | |||
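The rank-order fusion step is the most self-contained part of the pipeline and easy to illustrate: each modality ranks its hypotheses, and the fused order sorts by summed rank. This is a generic sketch of rank-order fusion, not the authors' exact scoring.

```python
# Merge ranked hypothesis lists from speech and vision (sketch).
def rank_order_fuse(speech_ranked, vision_ranked):
    """Inputs: hypothesis labels, best first. Returns fused order, best first."""
    candidates = set(speech_ranked) | set(vision_ranked)
    worst = len(candidates)              # penalty rank when a modality
    def rank(ranked, h):                 # did not propose h at all
        return ranked.index(h) if h in ranked else worst
    return sorted(sorted(candidates),    # inner sort breaks ties alphabetically
                  key=lambda h: rank(speech_ranked, h) + rank(vision_ranked, h))

print(rank_order_fuse(["grab", "point", "wave"], ["point", "grab"]))
# -> ['grab', 'point', 'wave']
```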
| A two visual systems approach to understanding voice and gestural interaction | | BIBAK | Full-Text | 231-241 | |
| Barry A. Po; Brian D. Fisher; Kellogg S. Booth | |||
| It is important to consider the physiological and behavioral mechanisms that
allow users to physically interact with virtual environments. Inspired by a
neuroanatomical model of perception and action known as the two visual systems
hypothesis, we conducted a study with two controlled experiments to compare
four different kinds of spatial interaction: (1) voice-based input, (2)
pointing with a visual cursor, (3) pointing without a visual cursor, and (4)
pointing with a time-lagged visual cursor. Consistent with the two visual
systems hypothesis, we found that voice-based input and pointing with a cursor
were less robust to a display illusion known as the induced Roelofs effect than
pointing without a cursor or even pointing with a lagged cursor. The
implications of these findings are discussed, with an emphasis on how the two
visual systems model can be used to understand the basis for voice and gestural
interactions that support spatial target selection in large screen and
immersive environments. Keywords: Two visual systems; Pointing; Cursors; Visual feedback; Voice input; Visual
illusions | |||
| Analysis of composite gestures with a coherent probabilistic graphical model | | BIBAK | Full-Text | 242-252 | |
| Jason J. Corso; Guangqi Ye; Gregory D. Hager | |||
| Traditionally, gesture-based interaction in virtual environments is composed
of either static, posture-based gesture primitives or temporally analyzed
dynamic primitives. However, it would be ideal to incorporate both static and
dynamic gestures to fully utilize the potential of gesture-based interaction.
To that end, we propose a probabilistic framework that incorporates both static
and dynamic gesture primitives. We call these primitives Gesture Words
(GWords). Using a probabilistic graphical model (PGM), we integrate these
heterogeneous GWords and a high-level language model in a coherent fashion.
Composite gestures are represented as stochastic paths through the PGM. A
gesture is analyzed by finding the path that maximizes the likelihood on the
PGM with respect to the video sequence. To facilitate online computation, we
propose a greedy algorithm for performing inference on the PGM. The parameters
of the PGM can be learned via three different methods: supervised,
unsupervised, and hybrid. We have implemented the PGM model for a gesture set
of ten GWords with six composite gestures. The experimental results show that
the PGM can accurately recognize composite gestures. Keywords: Human computer interaction; Gesture recognition; Hand postures; Vision-based
interaction; Probabilistic graphical model | |||
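The greedy inference idea can be sketched independently of the vision details: instead of scoring every path through the gesture graph, repeatedly extend the current path with the single successor GWord that best explains the next video segment. The transition table and likelihood function below are illustrative inputs, not the paper's learned model.

```python
# Greedy decoding of a composite gesture as a path of GWords (sketch).
def greedy_gesture_path(segments, start, transitions, likelihood):
    """segments: observation segments; transitions: {gword: {next: P(next|gword)}};
    likelihood(gword, seg) -> P(seg | gword). Returns the decoded GWord path."""
    path, current = [start], start
    for seg in segments:
        successors = transitions.get(current, {})
        if not successors:
            break                        # reached a terminal GWord
        current = max(successors,
                      key=lambda g: successors[g] * likelihood(g, seg))
        path.append(current)
    return path
```

Greedy decoding trades the global optimum for online, per-segment decisions, which is what the abstract says makes inference feasible during interaction.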