
Virtual Reality 3

Dates: 1998
Volume: 3
Publisher: Springer-Verlag
Standard No: ISSN 1359-4338 (print); EISSN 1434-9957 (online)
Papers: 27
Links: link.springer.com
  1. VR 1998-03 Volume 3 Issue 1
  2. VR 1998-06 Volume 3 Issue 2
  3. VR 1998-09 Volume 3 Issue 3
  4. VR 1998-12 Volume 3 Issue 4

VR 1998-03 Volume 3 Issue 1

Editorial, pp. 1-2
  Dave Snowdon; Elizabeth Churchill
Collaborative virtual environments: An introductory review of issues and systems, pp. 3-15
  E. F. Churchill; D. Snowdon
A Collaborative Virtual Environment (CVE) is a distributed virtual reality designed to support collaborative activities. As such, CVEs provide a potentially infinite, graphically realised digital landscape within which multiple users can interact with each other and with simple or complex data representations. CVEs are increasingly being used to support collaborative work both between geographically separated collaborators and between collocated ones. CVEs vary in the sophistication of the data and embodiment representations employed and in the level of interactivity supported. It is clear that systems intended to support collaborative activities should be designed with explicit consideration of the tasks to be achieved and of the intended users' social and cognitive characteristics. In this paper, we detail a number of existing systems and applications, but first discuss the nature of collaborative and cooperative work activities and consider the place of virtual reality systems in supporting such collaborative work. Following this, we discuss some future research directions.
Keywords: Virtual environments; Virtual spaces; Collaboration and communication; Virtual embodiments; Computer supported co-operative work (CSCW)
On the linguistic nature of cyberspace and virtual communities, pp. 16-24
  A. Cicognani
This paper argues for a linguistic explanation of the nature of Virtual Communities. Virtual Communities develop and grow in electronic space, or 'cyberspace'. Authors such as Benedikt, Meyrowitz and Mitchell have theorised about the nature of electronic space, whilst Lefebvre, Popper, Hakim Bey (aka Lamborn Wilson) and Kuhn have theorised more generally about the nature of space. Extending this tradition and the works of these authors, this paper presents a language-based perspective on the nature of electronic spaces. Behaviour in cyberspace is based on and regulated by hardware, software tools and interfaces. A definition of electronic space cannot be given beyond its linguistic characteristics, which underlie and sustain it. The author believes that the more users and developers understand the relationship between language and cyberspace, the more they will be able to use specific metaphors for dwelling in and inhabiting it. In particular, MUDs/MOOs and the Web are interesting places for testing and observing social behaviours and dynamics.
Keywords: Virtual communities; Cyberspace; Speech acts; Linguistics
Shared space: An augmented reality approach for computer supported collaborative work, pp. 25-36
  M. Billinghurst; S. Weghorst; T. Furness, III
Virtual Reality (VR) appears to be a natural medium for three-dimensional computer supported collaborative work (CSCW). However, the current trend in CSCW is to adapt the computer interface to work with the user's traditional tools, rather than separating the user from the real world as immersive VR does. One solution is Augmented Reality, the overlaying of virtual objects on the real world. In this paper we describe the Shared Space concept -- the application of Augmented Reality to three-dimensional CSCW. This combines the advantages of Virtual Reality with those of current CSCW approaches. We describe a collaborative experiment based on this concept and present preliminary results which show that this approach may be better suited to some applications.
Keywords: Augmented Reality; Virtual Reality; Computer Supported Collaborative Work
"Studierstube": An environment for collaboration in augmented reality BIBAKFull-Text 37-48
  Z. Szalavári; D. Schmalstieg; A. Fuhrmann; M. Gervautz
We propose an architecture for multi-user augmented reality with applications in visualisation, presentation and education, which we call "Studierstube". Our system presents three-dimensional stereoscopic graphics simultaneously to a group of users wearing lightweight see-through head-mounted displays. The displays do not interfere with natural communication and interaction, making working together very effective. Users see the same spatially aligned model, but can independently control their viewpoint and the different layers of data to be displayed. The setup supports computer supported cooperative work and enhances cooperation among visualisation experts. This paper presents the client-server software architecture underlying this system and the details that must be addressed to create a high-quality augmented reality setup.
Keywords: Augmented reality; Multi-user applications; Collaboration; Distributed graphics
A collaborative environment for role-playing in object space, pp. 49-58
  C. Hand; S. Lingard; M. Skipper
We present some experiences from the development and early use of CRCMOO, a Collaborative Virtual Environment (CVE) which supports the CRC (Class, Responsibility, Collaboration) cards software design technique, implemented initially using a MOO. After briefly describing CRC, we discuss how CRCMOO differs from other collaborative environments for software engineering. The role-playing metaphor is discussed, followed by the results of an analysis of the CRC task and a description of how those results were incorporated into a second prototype system, this time using a graphical user interface.
Keywords: CRC cards; MOO; Role-playing; Spatial understanding; Views
Collaborative configuration in virtual environments, pp. 59-70
  T. Axling
Most collaborative work in virtual environments involves creating and changing objects according to some rules, which corresponds to what are known as configuration tasks in the field of knowledge-based systems. Tasks such as presenting information as 3D objects in a virtual environment or dynamically changing (reconfiguring) embodiments to adapt to an environment are also configuration tasks. These tasks can be supported by a generic tool, a configuration engine. However, collaborative configuration requires a high level of interactivity to be meaningful, and this interactivity must be supported by the engine. We are therefore drawing on our previous experience in developing configuration engines to develop one, 3dObelics, that is suited to the highly interactive tasks of collaborative configuration in virtual environments. The engine is built on the ideas that configuration can be viewed as a pure constraint satisfaction problem and that a well-defined modelling language can overcome the difficulties associated with constraint programming. 3dObelics uses DIVE, a toolkit for building collaborative virtual environments, and a system for speech control of agents in DIVE called 'Talking Agents'. 3dObelics is meant to act as a platform for building multi-user configuration applications with a minimum of programming. To our knowledge, 3dObelics is the first general tool for this purpose.
Keywords: Collaborative work; Configuration; Constraints; Virtual environments
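As a rough illustration of the abstract's framing of configuration as a pure constraint satisfaction problem (and not of 3dObelics itself), the following Python sketch enumerates consistent configurations for an invented virtual-room furnishing task; the variables and constraints are assumptions for demonstration only.

# Illustrative sketch only: treating a tiny configuration task as a pure
# constraint satisfaction problem, in the spirit of the abstract's framing.
# The domain (furnishing a shared virtual room) and constraints are invented.
from itertools import product
from typing import Callable, Dict, List

Assignment = Dict[str, str]

def solve(variables: Dict[str, List[str]],
          constraints: List[Callable[[Assignment], bool]]) -> List[Assignment]:
    """Brute-force CSP solver: enumerate assignments, keep consistent ones.
    (A real configuration engine would propagate constraints instead.)"""
    names = list(variables)
    solutions = []
    for values in product(*variables.values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

if __name__ == "__main__":
    variables = {"desk": ["small", "large"],
                 "screen": ["wall", "desktop"],
                 "room": ["office", "lab"]}
    constraints = [
        lambda a: not (a["desk"] == "small" and a["screen"] == "desktop"),
        lambda a: a["room"] == "lab" or a["screen"] == "wall",
    ]
    for s in solve(variables, constraints):
        print(s)

A real engine of the kind the abstract describes would also have to support the interactive, incremental reconfiguration that collaborative virtual environments require, rather than brute-force enumeration.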
A conversational agent to help navigation and collaboration in virtual worlds, pp. 71-82
  O. Bersot; P.-O. El Guedj; C. Godéreaux; P. Nugues
This paper describes the prototype of a conversational agent embedded within a collaborative virtual environment. This prototype -- Ulysse -- accepts spoken utterances from a user, enabling him or her to navigate within relatively complex virtual worlds. It also accepts and executes commands to manipulate objects in the virtual world. We are beginning to adapt our agent to parse certain written descriptions of simultaneous actions of world entities and to animate these entities according to the given description.
   The paper first describes what we can expect from a spoken interface to improve the interaction quality between a user and virtual worlds. Then it describes Ulysse's architecture, which includes a speech recognition device together with a speech synthesiser. Ulysse consists of a chart parser for spoken words, a semantic analyser, a reference resolution system, a geometric reasoner, a dialogue manager, and an animation manager, and has been integrated in the DIVE virtual environment. Ulysse can be 'personified' using a set of behavioural rules. A number of tests have demonstrated its usefulness for user navigation. We are currently developing simulations of written reports of car accidents within Ulysse; such simulations provide dynamic recreations of accident scenarios for individual and collaborative reviewing and assessment.
Keywords: Conversational agents; Spoken navigation; Simulation; Semantics of space; Planning

VR 1998-06 Volume 3 Issue 2

Virtual surfaces and the influence of cues to surface shape on grasp, pp. 85-101
  F. E. Pollick
This research compared grasps to real surfaces with grasps to virtual surfaces, and used virtual surfaces to examine the role of cues to surface shape in grasp. The first experiment investigated the kinematics of overhand grasps to real and virtual objects. The results showed that, compared with grasps to real surfaces, grasps to virtual objects differed in the deceleration phase of the grasp movement and were more variable in their endpoint position. The second experiment used several measures to examine the relationship between the visual perception of a surface and the decision to grasp the surface with either an over- or underhand grasp. It was found that visual perception of the surface was consistent with the grasping decision. The third experiment used virtual surfaces to examine how the removal of visual cues to shape affected the decision to switch from over- to underhand grasp. Results showed that the orientation at which the decision switched depended on the visual information content. Overall, the results showed that subtle differences exist between reach-to-grasp movements towards real and virtual surfaces and that the decision between grasp types depends on the visual information used to depict the virtual surface. These results are discussed in relation to the design and use of input devices that enable manipulation of three-dimensional objects in virtual worlds.
Keywords: Grasp; Surface; Shape perception; Virtual reality
A curved surface display for three fingers based on human visual and tactile sensory fusion, pp. 102-111
  J.-L. Wu; S. Kawamura
In general, it is difficult to present tactile information because arbitrary surface curvatures and many degrees of freedom need to be realised. On the other hand, psychophysical studies have suggested that human visual and tactile sensations have an illusory fusion characteristic. This means that we can recognise the curved surfaces of objects through visual and tactile sensations, even if exact tactile information is not presented. Hence, by utilising this human characteristic of sensory fusion, the realisation of a curved surface display can be simplified. Motivated by this, the human fusion characteristics of visual and tactile sensation are measured and quantitatively analysed. Based on the results of this analysis, a curved surface display for three fingers is developed. In this display, only four curved patterns are utilised instead of presenting many curved patterns. The performance of the developed tactile display is demonstrated through evaluation experiments.
Keywords: Human sensory fusion; Tactile sensation; Visual sensation; Curved surface display; Virtual reality
An efficient posture recognition method using fuzzy logic, pp. 112-119
  E. K. H. Tsang; H. Sun
Computer-human interaction plays an important role in virtual reality. Glove-based input devices have many desirable features which make direct interactions between the user and the virtual world possible. However, due to the complexity of the human hand, recognising hand functions precisely and efficiently is not an easy task. Existing algorithms are either imprecise or computationally expensive, making them impractical to integrate with VR applications, which are usually very CPU intensive.
   In the problem of posture and gesture recognition, both the sample patterns stored in the database and the ones to be recognised may be imprecise. This kind of imprecise knowledge can best be dealt with using fuzzy logic. A fast and simple posture recognition method using fuzzy logic is presented in this paper. Our model consists of three components: the posture database, the classifier and the identifier. The classifier roughly classifies the sample postures before they are put into the posture database. The identifier compares an input posture with the records in the identified class and finds the right match efficiently. Fuzzy logic is applied in both the classification and identification processes to cope with imprecise data. The main goal of this method is to recognise hand functions in an accurate and efficient manner. The accuracy, efficiency and noise tolerance of the model have been examined through a number of experiments.
Keywords: Human-computer interaction; Hand posture recognition; Fuzzy logic; Posture commands
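As a rough illustration of the classifier/identifier pipeline the abstract describes (not the authors' implementation), the following Python sketch buckets glove postures into coarse classes and then fuzzy-matches an input against the stored samples of its class; the triangular membership function, thresholds and posture encoding are all invented for demonstration.

# Illustrative sketch only: a toy fuzzy posture matcher in the spirit of the
# classifier/identifier pipeline described in the abstract. All names, values
# and the triangular membership function are assumptions, not the authors' code.
from typing import Dict, List, Tuple

Posture = List[float]  # e.g. normalised flex values for each finger

def membership(measured: float, stored: float, tolerance: float = 0.2) -> float:
    """Triangular fuzzy membership: 1.0 at an exact match, falling
    linearly to 0.0 once the difference reaches `tolerance`."""
    return max(0.0, 1.0 - abs(measured - stored) / tolerance)

def classify(posture: Posture) -> str:
    """Rough classifier: bucket postures by how many joints are flexed,
    so the identifier only searches one class of the database."""
    flexed = sum(1 for v in posture if v > 0.5)
    return "mostly_open" if flexed <= len(posture) // 2 else "mostly_closed"

def identify(posture: Posture,
             database: Dict[str, List[Tuple[str, Posture]]],
             threshold: float = 0.6) -> str:
    """Fuzzy identifier: compare the input against samples in its class and
    return the best match whose mean membership clears the threshold."""
    best_name, best_score = "unknown", threshold
    for name, sample in database.get(classify(posture), []):
        score = sum(map(membership, posture, sample)) / len(posture)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

if __name__ == "__main__":
    db = {"mostly_closed": [("fist", [0.9, 0.9, 0.9, 0.9, 0.9]),
                            ("point", [0.1, 0.9, 0.9, 0.9, 0.9])],
          "mostly_open": [("flat_hand", [0.1, 0.1, 0.1, 0.1, 0.1])]}
    print(identify([0.15, 0.85, 0.95, 0.9, 0.8], db))  # -> "point"

Restricting the fuzzy comparison to a single pre-classified bucket is what keeps such a method cheap, which matches the abstract's efficiency argument for CPU-intensive VR applications.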
A methodology for the evaluation of travel techniques for immersive virtual environments, pp. 120-131
  D. A. Bowman; D. Koller; L. F. Hodges
We present a framework for the analysis and evaluation of travel, or viewpoint motion control, techniques for use in immersive virtual environments (VEs). In previous work, we presented a taxonomy of travel techniques and a set of experiments mapping parts of the taxonomy to various performance metrics. Since these initial experiments, we have expanded the framework to allow evaluation of not only the effects of different travel techniques, but also the effects of many outside factors simultaneously. Combining this expanded framework with the measurement of multiple response variables epitomises the philosophy of testbed evaluation. This experimental philosophy leads to a deeper understanding of the interaction and the technique(s) in question, as well as to broadly generalisable results. We also present an example experiment within this expanded framework, which evaluates the user's ability to gather information while travelling through a virtual environment. Results indicate that, of the variables tested, the complexity of the environment is by far the most important factor.
Keywords: Virtual environments; Interaction techniques; Evaluation; Information gathering
Specification and evaluation of level of detail selection criteria, pp. 132-143
  M. Reddy
Level of detail (LOD) is a technique in which geometric objects are represented at a number of resolutions, allowing the workload of the system to be based upon an object's distance, size, velocity, or eccentricity. However, little is known about how to specify optimally when a particular LOD should be selected so that the user is not aware of any visual change, or about the extent to which any particular LOD scheme can improve an application's performance. In response, this paper produces a generic, orthogonal model for LOD based upon data from the field of human visual perception. The effect of this model on the system is evaluated to discover the contribution that each component makes towards any performance improvement. The results suggest that velocity and eccentricity LOD should be implemented together (if at all) because their individual contributions are likely to be negligible. Also, it is apparent that size (or distance) optimisations offer the greatest benefit, contributing around 95% of any performance increment.
Keywords: Computer graphics; Level of detail; Performance optimisation; Visual acuity; Visual perception
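As a rough illustration of the selection criteria the abstract evaluates (not Reddy's actual model), the following Python sketch picks a level of detail chiefly from an object's angular size, with velocity and eccentricity only nudging the choice coarser; every threshold here is an invented placeholder.

# Illustrative sketch only: selecting a level of detail from the perceptual
# factors named in the abstract (size, velocity, eccentricity). The threshold
# values and the blending of factors are assumptions for demonstration.
import math
from dataclasses import dataclass

@dataclass
class ObjectState:
    distance: float      # metres from the viewpoint
    diameter: float      # metres
    velocity: float      # angular velocity across the retina, deg/s
    eccentricity: float  # angle from the centre of gaze, deg

def angular_size(obj: ObjectState) -> float:
    """Visual angle subtended by the object, in degrees."""
    return math.degrees(2.0 * math.atan2(obj.diameter / 2.0, obj.distance))

def select_lod(obj: ObjectState, num_lods: int = 4) -> int:
    """Return an LOD index (0 = full detail). Size/distance dominates, per
    the abstract's finding that it contributes ~95% of the benefit; velocity
    and eccentricity only push the selection towards coarser levels."""
    size = angular_size(obj)
    lod = 0 if size > 10 else 1 if size > 3 else 2 if size > 1 else 3
    if obj.velocity > 60 or obj.eccentricity > 30:  # fast or peripheral:
        lod += 1                                    # detail is less visible
    return min(lod, num_lods - 1)

if __name__ == "__main__":
    near = ObjectState(distance=2, diameter=1, velocity=0, eccentricity=5)
    far = ObjectState(distance=200, diameter=1, velocity=80, eccentricity=40)
    print(select_lod(near), select_lod(far))  # -> 0 3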

VR 1998-09 Volume 3 Issue 3

Guest editorial -- Wearable computers, pp. 145-146
  W. Barfield
Wearable computers: Information tool for the twenty-first century, pp. 147-156
  K. L. Jackson; L. E. Polisky
The wearable computer is a portable computer that is actually worn on the user's body. Ergonomics is therefore a vital feature of its design. Since humans naturally communicate by voice, a wearable computer also responds to voice. Wearable computers and global wireless networks make it possible to bring exciting capabilities to the individual. Until recently, wearable computer development was restricted to academic and military laboratories. Now, technological advances and reduced costs have ignited investor excitement about wearable computers. Wearable system applications in manufacturing, logistics, medicine, training, quality control, communications and even entertainment are now becoming widespread. The earliest development of wearable computers occurred in the 1960s. All of the elements of the modern wearable computer were in place in the Eudaemons' system for predicting the outcome of a roulette wheel. Since then, wearable computer development has paralleled advances in microprocessor technology. After addressing the important distinction between wearable and mobile computers, this paper will look at wearable computers as an information tool for industry. A short history of wearable computers will trace development from the early single-application attempts to today's feature-rich systems. A discussion of current and anticipated applications is then followed by an overview of important related technologies. Finally, the paper will assess how wearable computers could impact twenty-first century industry and society.
Keywords: Wearable computer; Information tool; Mobile computer; Wireless networking; Voice recognition
Issues in the design and use of wearable computers, pp. 157-166
  W. Barfield; K. Baird
Wearable computers are fully functional, self-powered, self-contained computers that allow the user to access information anywhere and at any time. In this paper, design issues for wearable computers are discussed, including power considerations, use of input devices, image registration, and the use of wearable computers for the design of smart spaces. Application areas for wearable computers are presented, including medicine, manufacturing, maintenance, and as personal assistants. Finally, future research directions for wearable computers are indicated.
Keywords: Wearable computer; Augmented reality; Interface design; Smart spaces
Tracking for augmented reality on wearable computers, pp. 167-175
  U. Neumann; J. Park
Wearable computers afford a degree of mobility that makes tracking for augmented reality difficult. This paper presents a novel object-centric tracking architecture for presenting augmented reality media in spatial relationships to objects, regardless of the objects' positions or motions in the world. The advance this system provides is the ability to sense and integrate new features into its tracking database, thereby extending the tracking region automatically. A "lazy evaluation" of the structure-from-motion problem uses images obtained from a single calibrated moving camera and applies recursive filtering to identify and estimate the 3D positions of new features. We evaluate the performance of two filters: a classic Extended Kalman Filter (EKF) and a filter based on a Recursive-Average of Covariances (RAC). Some implementation issues and results are discussed in conclusion.
Keywords: Augmented reality; Image registration
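As a much-simplified stand-in for the recursive filters the abstract compares (the real system estimates feature positions from 2D observations made by a calibrated camera), the following Python sketch refines a static 3D feature estimate from noisy measurements with a per-axis scalar Kalman update; the noise figures are invented.

# Illustrative sketch only: recursively refining a static 3D feature position
# from noisy measurements, a much-simplified stand-in for the paper's EKF/RAC
# filters (which work from 2D image observations of a calibrated camera).
from typing import Tuple

Vec3 = Tuple[float, float, float]

class FeatureEstimate:
    """Per-axis scalar Kalman filter for a stationary 3D point."""

    def __init__(self, first_obs: Vec3, meas_var: float = 0.01) -> None:
        self.mean = list(first_obs)
        self.var = [1.0, 1.0, 1.0]  # large initial uncertainty per axis
        self.meas_var = meas_var

    def update(self, obs: Vec3) -> None:
        for i in range(3):
            k = self.var[i] / (self.var[i] + self.meas_var)  # Kalman gain
            self.mean[i] += k * (obs[i] - self.mean[i])
            self.var[i] *= 1.0 - k

if __name__ == "__main__":
    import random
    true_pos = (1.0, 2.0, 0.5)
    est = FeatureEstimate((1.1, 1.9, 0.6))
    for _ in range(50):  # noisy re-observations shrink the uncertainty
        obs = tuple(p + random.gauss(0, 0.1) for p in true_pos)
        est.update(obs)
    print([round(m, 2) for m in est.mean], [round(v, 4) for v in est.var])

Each new observation of an already-known feature tightens its position estimate, which is the mechanism that lets such a tracker "lazily" admit new features and extend its tracking region over time.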
A generic model for Immersive Documentation Environments, pp. 176-186
  S. Kakez; J. Figue; V. Conan
We propose a generic model for designing systems relying on augmented reality techniques in the context of an Immersive Documentation Environment (IDE). This model encompasses a user/system interaction paradigm, a system architecture and an exploitation scenario. We illustrate the use of this model with several virtually documented environment systems providing the user with enhanced interaction capabilities. These systems are dedicated to several applications where the operator needs natural (hands-free) access to information, to carry out measurements and/or to operate on devices (e.g. maintenance, instruction). These systems merge live images acquired by a video camera with synthetic data (multimedia documents including CAD models and text) and present the result properly registered in the real world. Vocal commands as well as multimodal interaction, associating speech and gesture, are used to improve interaction.
Keywords: Augmented reality; Human-computer interaction; Advanced maintenance systems; Tele-operation; Distributed teaching systems; Mine clearance systems
Evaluation of text input mechanisms for wearable computers, pp. 187-199
  B. Thomas; S. Tyerman; K. Grimmer
This paper reports on an experiment investigating the functionality and usability of novel input devices on a wearable computer for text entry tasks. Over a 3-week period, 12 subjects used three different input devices to create and save short textual messages. The virtual keyboard, forearm keyboard and chordic keypad input devices were assessed for their efficiency and usability in simple text-entry tasks. The results collected included the textual data created by the subjects, the duration of activities, survey data and observations made by supervisors. The results indicated that the forearm keyboard is the best performer for accurate and efficient text entry, while the other devices may benefit from more work on designing specialist graphical user interfaces (GUIs) for the wearable computer.
Keywords: Input devices; Wearable computers; Empirical study
Context-awareness in wearable and ubiquitous computing, pp. 200-211
  D. Abowd; A. K. Dey; R. Orr; J. Brotherton
A common focus shared by researchers in mobile, ubiquitous and wearable computing is the attempt to break away from the traditional desktop computing paradigm. Computational services need to become as mobile as their users. Whether that service mobility is achieved by equipping the user with computational power or by instrumenting the environment, all services need to be extended to take advantage of the constantly changing context in which they are accessed. This paper will report on work carried out by the Future Computing Environments Group at Georgia Tech to provide infrastructure for context-aware computing. We will describe some of the fundamental issues involved in context-aware computing, solutions we have generated to provide a flexible infrastructure and several applications that take advantage of context awareness to allow freedom from traditional desktop computing.
Keywords: Context-aware computing; Ubiquitous computing; Consumer applications; Personal information management; Tourism; Voice-only interaction; Positioning systems
Wearable computers as a virtual environment interface for people with visual impairment, pp. 212-221
  D. A. Ross
People who are totally blind or who have severe visual impairments (e.g. less than 20/200 acuity, central macular scotomas, or advanced diabetic retinopathy) 'see' the environment in a fashion that may be completely foreign to those who operate in a very visual fashion. For this population, the visual complexity of the environment is not a concern. What is of concern are the salient features within the environment that relate to their ability to navigate successfully in, and/or interact with, the environment as needed. To represent these salient features in comprehensible form, investigators at the Atlanta Veterans Affairs Research and Development Center are employing wearable computer technology to develop a virtual environment interface. The long-range goal is to create a simplified virtual representation of the environment that includes only features related to the person's current navigational task and/or interactive needs. In a completed study, the use of digital infrared transmitters as 'beacons' representing salient features of the environment was explored. The purpose of a current study now in progress is to evaluate and compare various user interface structures that were suggested by subjects during the preliminary study. The problem of interest in the current study is street-crossing; however, the results of this research should be applicable to many other problems, including identifying and locating building entrances, and identifying, locating and interacting with electronic devices such as information kiosks, automated teller machines, and self-serve point-of-sale terminals. The desired long-range result is a wearable computer with which one can easily identify and interact with a wide variety of devices in the environment via one familiar, easy-to-use interface.
Keywords: Wearable computer; Navigation; Infrared transmitters; Visual impairment

VR 1998-12 Volume 3 Issue 4

Human factors in virtual environments, pp. 223-225
  C. Chen; M. Czerwinski; R. Macredie
Physically touching and tasting virtual objects enhances the realism of virtual experiences, pp. 226-234
  H. G. Hoffman; A. Hollander; K. Schroder; S. Rousseau; T. Furness, III
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a 'no touch' group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. 'See and touch' subjects physically picked up a virtual plate possessing solidity and weight, using a technique called 'tactile augmentation'. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. 'See and touch' subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than the 'no touch' group. In Experiment 2 (a pilot study), subjects 'physically bit' a chocolate bar in one condition, and 'imagined biting' a chocolate bar in another condition. Subjects rated the event more fun and realistic when allowed to physically bite the chocolate bar. Results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique with large freedom of motion for adding physical texture, force feedback cues, smell and taste to virtual objects. Examples of practical applications are discussed.
Keywords: Virtual reality; Tactile feedback; Smell; Taste; Mixed reality
Virtual environments for engineering applications, pp. 235-244
  L. Sastry; D. R. S. Boyd
Virtual reality (VR) provides the user with an ego-centred human-computer interaction environment by presenting data as a computer-generated 3D virtual environment. This enables the user to be immersed in this world via user position tracking devices and to interact with the data objects in the world in intuitive ways. This paper describes a selection of VR simulations for engineering applications implemented at CLRC which demonstrate the potential of VR interaction techniques to offer quicker and possibly better understanding of the spatial relationships and temporal patterns inherent in large data sets. Two of the case studies were implemented to help engineers communicate and review their designs with scientists, managers and manufacturers and to plan their assembly and maintenance work in hazardous physical environments. The other two applications are visualisation case studies based on data sets generated by computational engineering simulations. The case studies are 'real world' applications, involving end-users of large or complex data sets. Insight into user interaction requirements, gained through the implementations and user comments, is guiding ongoing research and development activity, which is discussed briefly.
Keywords: Virtual reality; User interaction; Visualisation; Computational engineering; Visual simulation
Using VRML-based visualisations to facilitate information retrieval in the World Wide Web, pp. 245-258
  S. Mukherjea
With the explosive growth of information on the WWW, it is becoming increasingly difficult for the user to find information of interest. Visualisations may be helpful in assisting users in their information retrieval tasks. Effective visualisation of the structure of a WWW site is extremely useful for browsing through the site. Visualisation can also be used to augment a WWW search engine when too many or too few results are retrieved. In this paper, we discuss several visualisations we have developed to facilitate information retrieval on the WWW. With VRML becoming the standard for graphics on the Web and efficient VRML browsers becoming available, VRML was used to develop these visualisations. Unique visualisations, such as focus + context views of WWW nodes and semantic visualisation, are presented, and examples are given of scenarios where the visualisations are useful.
Keywords: Information visualisation; World Wide Web; Searching; Browsing; VRML
From toys to brain: Virtual reality applications in neuroscience, pp. 259-266
  G. Riva
While many virtual reality (VR) applications have emerged in the areas of entertainment, education, military training, physical rehabilitation, and medicine, only recently have some research projects begun to test the possibility of using virtual environments (VEs) for research in neuroscience and neurosurgery and for the study and rehabilitation of human cognitive and functional activities. Virtual reality technology could have a strong impact on neuroscience. The key characteristic of VEs is the high level of control over the interaction with the tool, without the constraints usually found in computer systems. VEs are highly flexible and programmable. They enable the therapist to present a wide variety of controlled stimuli and to measure and monitor a wide variety of responses made by the user. However, at this stage, a number of obstacles have impeded the development of active research. These obstacles include problems with acquiring funding for an almost untested new treatment modality, the lack of reference standards, the non-interoperability of VR systems and, last but not least, the relative lack of familiarity with the technology on the part of researchers in these fields.
Keywords: Virtual reality; Neuroscience; Neurosurgery; Assessment; Rehabilitation
A virtual environment-based system for the navigation of underwater robots, pp. 267-277
  Q. Lin; C. Kuo
Efficient teleoperation of an underwater robot requires clear 3D visual information about the robot's spatial location and its surrounding environment. However, the performance of existing telepresence systems is far from satisfactory. In this paper, we present our virtual telepresence system for assisting the teleoperation of an underwater robot. This virtual environment-based telepresence system transforms robot sensor data into 3D synthetic visual information of the workplace based on its geometrical model, providing operators with a full perception of the robot's spatial location. In addition, we propose a robot safety domain to overcome the robot's location offset in the virtual environment caused by sensor errors. The software design of the system, and how a safety domain can be used to overcome robot location offset in the virtual environment, are examined. Experimental tests and an analysis of their results are also presented.
Keywords: Virtual environment; Robot navigation; Virtual telepresence; Robot safety domain; Subsea intervention