
Virtual Reality 9

Dates: 2005/2006
Volume: 9
Publisher: Springer-Verlag
Standard No: ISSN 1359-4338 (print); EISSN 1434-9957 (online)
Papers: 24
Links: link.springer.com
  1. VR 2005-12 Volume 9 Issue 1
  2. VR 2006-03 Volume 9 Issue 2/3
  3. VR 2006-04 Volume 9 Issue 4

VR 2005-12 Volume 9 Issue 1

Performance evaluation of compromised synchronization control mechanism for distributed virtual environment (DVE) BIBAKFull-Text 1-16
  Olarn Wongwirat; Shigeyuki Ohara
Synchronization in a distributed virtual environment (DVE) involves mechanisms that ensure a consistent view of the virtual world for all participants. Most applications in a DVE involve collaborative activities that include both non-contention and contention cases. Transmitting update messages is sufficient to support synchronization only for non-contention activity; contention activity requires an additional mechanism to control access to a common object. In this paper, we present a compromised synchronization control mechanism that supports both non-contention and contention activities. The mechanism employs frequent update events and multiple-lock checking to control synchronization. Frequent update events support a dynamic virtual world for non-contention activity, while multiple-lock checking is embedded to ensure consistency when the common object must be accessed simultaneously in the contention case. Performance of the compromised synchronization is measured by simulation in terms of locking time, sampling events, number of logical processes, and traffic tolerance. A prototype application is also implemented to compare the results at a small scale. Based on the simulation and experimental results, the compromised synchronization control mechanism can support up to 100 participants for non-contention activity and gives good performance for contention activity at a small scale. The mechanism is therefore considered suitable for collaborative applications in which contention is a critical event.
Keywords: Performance evaluation; Compromised synchronization; Dynamic shared state; Frequent update event; Multiple-lock checking; Distributed virtual environment
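The following is a minimal, illustrative sketch (not the authors' implementation) of the two ideas named in the abstract: frequent update events for non-contention activity and multiple-lock checking before a participant may modify a contended common object. Class and method names are assumptions made for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class SharedObject:
        """A common object that contending participants must lock before updating."""
        owner: str = None
        state: dict = field(default_factory=dict)

        def try_lock(self, participant):
            # Multiple-lock checking: grant the lock only if nobody else holds it.
            if self.owner is None or self.owner == participant:
                self.owner = participant
                return True
            return False

        def release(self, participant):
            if self.owner == participant:
                self.owner = None

    class Participant:
        def __init__(self, name, update_hz=10.0):
            self.name = name
            self.period = 1.0 / update_hz    # frequent update event interval
            self.last_update = 0.0

        def maybe_send_update(self, now, send):
            # Non-contention case: broadcast own state at a fixed frequency.
            if now - self.last_update >= self.period:
                send({"from": self.name, "t": now})
                self.last_update = now

        def modify(self, obj, key, value):
            # Contention case: the update succeeds only while holding the lock.
            if not obj.try_lock(self.name):
                return False
            obj.state[key] = value
            obj.release(self.name)
            return True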
A hybrid reality environment and its application to the study of earthquake engineering BIBAKFull-Text 17-33
  Tara C. Hutchinson; Falko Kuester; Tung-Ju Hsieh; Rebecca Chadwick
Visualization can provide the much-needed computer-assisted design and analysis environment to foster problem-based learning, while Virtual Reality (VR) can provide the environment for hands-on manipulation, stimulating interactive learning in engineering and the sciences. In this paper, an interactive 2D and 3D (hybrid) environment is described, which facilitates collaborative learning and research and utilizes techniques in visualization and VR, thereby enhancing the interpretation of physical problems within these fields. The environment, termed VizClass, incorporates a specially designed lecture room and laboratory integrating both 2D and 3D spatial activities by coupling a series of interactive projection display boards (touch-sensitive whiteboards) with a semi-immersive 3D wall display. The environment is particularly appealing for studying critical, complex engineering problems, for example where time-varying feature modifications and coupling between multiple modes of movement occur. This paper describes the hardware architecture designed for this new hybrid environment as well as an initial application of the environment to the study of a real case-history building subjected to a variety of earthquakes. The example simulation uses field-measured seismic data sources and simple visual paradigms to provide an enhanced understanding of the physical model, the damage accumulated by the model, and the association between the measured and observed data. A detailed evaluation survey was also conducted to determine the merits of the presented environment and the techniques implemented. Results substantiate the plausibility of using these techniques for more general, everyday users, and over 70% of the survey participants believed that the techniques implemented were valuable for engineers.
Keywords: Virtual reality; Visualization; Multimodal interaction; Human-computer interaction; Display technologies
Tell Me a Story BIBAKFull-Text 34-48
  Isabel Machado; Paul Brna; Ana Paiva
For the past few years, the storytelling research domain has attracted the interest of researchers from several different areas (such as interactive drama, computer games, and intelligent learning environments), mainly because stories are an important vehicle for structuring human knowledge and past experience, and even for helping people behave in a social context. In this paper, we present a storytelling approach based on the premise that story and narrative may emerge from characters' interactions during the creation activity, where such characters may either act autonomously or be controlled by children. Since we position our research in an educational context, we argue that in such creation activities children's thoughts and needs must be respected and must contribute directly to the achievement of the story. Children should therefore be provided with some support and guidance directives to help them better understand and reflect on the role-taking situations they experience during the activity. To prove the validity of our approach we developed a generic architecture, the Support And Guidance Architecture (SAGA). SAGA aims not only to provide support and guidance to children during storytelling activities, by combining the roles of a dynamic script writer and a drama manager, but also to provide such services to existing story creation applications. To illustrate this, we have integrated SAGA with Teatrix, a collaborative virtual environment for story creation, and an evaluation study was conducted to assess the impact of SAGA on children's storytelling activities.
Keywords: Virtual storytelling; Intelligent agents; Narrative; Intelligent learning environments; Story generation
A multi-user desktop virtual environment for teaching shop-keeping to children BIBAKFull-Text 49-56
  Brian M. Slator; Harold Chaput; Robert Cosmano; Ben Dischinger
Virtual role-playing environments can be a powerful mechanism of instruction, provided they are constructed such that learning how to play and win the game contributes to a player's understanding of real-world concepts and procedures. North Dakota State University (NDSU) provides students with environments to enhance their understanding of geology (Planet Oit), cellular biology (Virtual Cell), programming languages (ProgrammingLand), retailing (Dollar Bay), and history (Blackwood). These systems present a number of opportunities and an equal number of challenges. Players are afforded a role-based, multi-user, 'learn-by-doing' experience, with software agents acting as both environmental effects and tutors, and with the possibility of multi-user cooperation and collaboration. However, technical issues and one important cultural issue present a range of difficulties. The Dollar Bay environment, its particular challenges, and the solutions to these are presented.
Keywords: Role-based learning systems; Multi-user learning systems; Software agents; Intelligent software tutoring agents; Agent-based economic simulation
Efficient virtual reality design of quiet underwater shells BIBAKFull-Text 57-69
  W. Akl; A. Baz
Efficient computational tools are developed to model, visualize, and feel the structural acoustics of shells in a virtual reality environment. These tools aim at building the structural-acoustic models of shells from an array of basic building blocks including beams, shells, and stiffeners. The concepts of finite element analysis, sub-structuring, model reduction, meta-modeling, and parallel computation form the main steps followed for building simplified computational models of complex shell systems. The resulting models are particularly suitable for the efficient application of multi-criteria optimization techniques to select the optimal design parameters of these complex shell systems. The developed integrated analysis tools enable engineers to design complex systems in a cost-effective and timely manner. Furthermore, engineers are immersed in an audio-visually coupled tele-operated environment whereby direct interaction with and control of the design process can be achieved. In this manner, the behavior of synthetic models of shells can be monitored by literally walking through the shell and adjusting its design parameters as needed to ensure optimal performance while satisfying design and operational requirements. For example, engineers can move electronic wands to vary the number, size, type, and location of stiffeners in the shell, monitor the resulting structural acoustics visually or by haptic feedback, and simultaneously listen to the radiated sound pressure field. Such manipulations of the virtual shells in the scene are carried out while the engineer navigates through and around the shell to ensure that the vibration and sound levels at any critical locations are within acceptable limits. The developed integrated approach also serves as a means for virtual training of students and engineers in designing and operating complex smart structures on site as well as through collaborative efforts with other virtual reality sites. Such a capability will enable engineers to design prototypes of expensive vehicles without building them. Examples of such vehicles include aircraft, submersibles, and torpedoes, all of which can share this virtual experience and be profoundly affected by the proposed approach. The presented optimal design approach is implemented in the Virtual Reality CAVE Laboratory at the University of Maryland, which is controlled by an eight-processor parallel Silicon Graphics InfiniteReality (Onyx2) computer.
Keywords: Finite element analysis; Sub-structural analysis; Reduced order models; Meta-models; Multi-criteria design optimization; Design in virtual reality environment
Neural network-based calibration of electromagnetic tracking systems BIBAKFull-Text 70-78
  Volodymyr V. Kindratenko; William R. Sherman
Electromagnetic tracking systems are a common component of many virtual reality installations. Their accuracy, however, suffers from the distortions of the electromagnetic field used in calculating the tracker sensor's position. We have developed a tracker calibration technique based on a neural network that effectively compensates for the errors in both tracked location and orientation. This case study discusses our implementation of the calibration algorithm and compares the results with traditional calibration methods.
Keywords: Tracker calibration; Electromagnetic tracker; Neural network; Virtual reality
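As a rough illustration of this kind of calibration (not the authors' network or data), the sketch below fits a small multilayer perceptron that maps distorted sensor readings to surveyed ground-truth positions and then corrects new readings; it assumes numpy and scikit-learn are available and uses synthetic placeholder data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Calibration table: measured (distorted) positions and the true positions
    # surveyed at the same points, in metres.  Synthetic placeholder data.
    rng = np.random.default_rng(0)
    measured = rng.random((500, 3)) * 2.0
    true_pos = measured - 0.05 * np.sin(3.0 * measured)   # synthetic field warp

    # Learn the inverse of the field distortion.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, tol=1e-6)
    net.fit(measured, true_pos)

    def calibrate(raw_xyz):
        """Correct a raw tracker position reading with the trained network."""
        return net.predict(np.asarray(raw_xyz).reshape(1, 3))[0]

    print(calibrate([0.5, 1.0, 1.5]))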
OpenTracker: A flexible software design for three-dimensional interaction BIBAFull-Text 79-92
  Gerhard Reitmayr; Dieter Schmalstieg
Tracking is an indispensable part of any virtual reality or augmented reality application. While the need for quality of tracking, in particular for high performance and fidelity, has led to a large body of past and current research, little attention is typically paid to software engineering aspects of tracking software. To address this issue, we describe a software design and implementation that applies the pipes-and-filters architectural pattern to provide a customizable and flexible way of dealing with tracking data and configurations. The contribution of this work culminates in the development of OpenTracker, a generic data flow network library that deals specifically with tracking data. The flexibility of the data flow network approach is demonstrated in a set of development scenarios and prototype applications in the area of mobile augmented reality.
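A toy analogue of the pipes-and-filters idea (this is not OpenTracker's actual C++/XML API): tracking events flow from a source node through filter nodes to sinks, and each node may transform events before passing them on. All names below are illustrative.

    class Node:
        """Base node in the data flow graph: forwards events to its children."""
        def __init__(self):
            self.children = []
        def push(self, event):
            for child in self.children:
                child.receive(event)
        def receive(self, event):
            self.push(event)                 # default: pass events through unchanged

    class Source(Node):
        def emit(self, position):
            self.push({"position": position})

    class SmoothingFilter(Node):
        """Averages the last n position events before passing them on."""
        def __init__(self, n=4):
            super().__init__()
            self.n, self.buf = n, []
        def receive(self, event):
            self.buf = (self.buf + [event["position"]])[-self.n:]
            avg = [sum(c) / len(self.buf) for c in zip(*self.buf)]
            self.push({"position": avg})

    class Sink(Node):
        def receive(self, event):
            print("tracked:", event["position"])

    # Wire the graph: tracker source -> smoothing filter -> application sink.
    src, flt, snk = Source(), SmoothingFilter(), Sink()
    src.children.append(flt)
    flt.children.append(snk)
    src.emit([0.0, 0.0, 1.0])
    src.emit([0.2, 0.0, 1.0])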
Analysis of composite gestures with a coherent probabilistic graphical model BIBFull-Text 93
  Jason J. Corso; Guangqi Ye; Gregory D. Hager

VR 2006-03 Volume 9 Issue 2/3

Editorial: design of haptic user-interfaces and applications BIBFull-Text 95-96
  Steven Wall; Stephen Brewster
Communication in a networked haptic virtual environment for temporal bone surgery training BIBAKFull-Text 97-107
  Matthew A. Hutchins; Duncan R. Stevenson; Chris Gunn
Networked virtual environments using haptic interfaces can be used for surgical training and support both a simulation component and a communication component. We present such an environment for training in surgery of the temporal bone, which emphasises communication between an instructor and a student. We give an overview of the learning requirements for surgeons in this area and present the details of our implementation with a focus on the way communication is supported. We describe a training trial that was undertaken with a group of surgical trainees and carry out a qualitative analysis of transcripts from the teaching sessions. We conclude that the virtual environment supports a rich dialogue between the instructor and student, allowing them to ground their conversation in the shared model. Haptic interfaces are an important enabling technology for the simulation and communication and are used in conjunction with other modes and media to support situated learning.
Keywords: Haptic workbench; Networked virtual environments; Haptic surgical training; Conversation analysis
Supporting visually impaired children with software agents in a multimodal learning environment BIBAKFull-Text 108-117
  Rami Saarinen; Janne Järvi; Roope Raisamo; Eva Tuominen
Visually impaired children are at a great disadvantage in modern society, since their ability to use modern computer technology is limited by inappropriate user interfaces. The aim of the work presented in this paper was to develop a multimodal software architecture and applications to support the learning of visually impaired children. The software architecture is based on software agents and has specific support for visual, auditory, and haptic interaction. It has been used successfully with different groups of 7-8-year-old and 12-year-old visually impaired children. In this paper we discuss the enabling software technology and interaction techniques developed to realize this goal, and our experiences from actual use of the system.
Keywords: Multimodal interaction; Software agent architecture; Visually impaired children; Learning environments; Haptics; Auditory feedback
Stroke-based modeling and haptic skill display for Chinese calligraphy simulation system BIBAFull-Text 118-132
  Daniel Wang; Yuru Zhang; Chong Yao
The goal of this paper is to study haptic skill representation and display in a Chinese calligraphy training system. The challenge is to model haptic skill during the writing of different strokes in Chinese characters and to achieve haptic rendering with high fidelity and stability. Planning of the writing process is organized at three levels (task, representation, and device) to describe the haptic handwriting skill. A state transition graph (STG) is proposed to describe switches between tasks during handwriting. Chinese characters are modeled using 39 typical strokes, which are further grouped into basic and compound strokes; a compound stroke is treated as a sequential combination of basic strokes. Straight and curved strokes are modeled using line segments and Bezier curves, respectively. Information from the STG is used for real-time collision detection and haptic rendering. Ambiguity in collision detection at stroke-corner points is prevented by using an active stroke combined with local nearest-point computation. A modified virtual fixture method is developed for haptic rendering. The approach is tested on a prototype training system using a Phantom Desktop device. Initial experiments suggest that the proposed modeling and rendering method is effective.
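As a small, assumed sketch of the curve-stroke representation mentioned above (not the authors' code), a curved stroke can be stored as a cubic Bezier curve and the local nearest point to the pen tip found by sampling, which is the kind of computation used in collision detection against the active stroke.

    import numpy as np

    def bezier(p0, p1, p2, p3, t):
        """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
        p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

    def nearest_point(ctrl, tip, samples=200):
        """Return (t, point, distance) of the curve point closest to the pen tip."""
        ts = np.linspace(0.0, 1.0, samples)
        pts = np.array([bezier(*ctrl, t) for t in ts])
        d = np.linalg.norm(pts - np.asarray(tip), axis=1)
        i = int(np.argmin(d))
        return ts[i], pts[i], d[i]

    stroke = ([0, 0], [1, 2], [3, 2], [4, 0])   # control points of one curved stroke
    t, p, dist = nearest_point(stroke, tip=[2.0, 1.5])
    print(t, p, dist)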
A novel multimodal interface for improving visually impaired people's web accessibility BIBAKFull-Text 133-148
  Wai Yu; Ravi Kuber; Emma Murphy; Philip Strain; Graham McAllister
This paper introduces a novel interface designed to help blind and visually impaired people to explore and navigate the Web. In contrast to traditionally used assistive tools, such as screen readers and magnifiers, the new interface employs a combination of audio and haptic features to provide spatial and navigational information to users. The haptic features are presented via a low-cost force feedback mouse, allowing blind people to interact with the Web in a similar fashion to their sighted counterparts. The audio provides navigational and textual information through the use of non-speech sounds and synthesised speech. Interacting with the multimodal interface offers a novel experience to target users, especially those with total blindness. A series of experiments have been conducted to ascertain the usability of the interface and compare its performance to that of a traditional screen reader. Results have shown the advantages that the new multimodal interface offers blind and visually impaired people. These include enhanced perception of the spatial layout of Web pages and navigation towards elements on a page. Certain issues regarding the design of the haptic and audio features raised in the evaluation are discussed and presented as recommendations for future work.
Keywords: Multimodal interface; Haptics; Audio; Assistive technology; Web accessibility; Web navigation
Mediated social touch: a review of current research and future directions BIBAKFull-Text 149-159
  Antal Haans; Wijnand IJsselsteijn
In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominantly on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or aiding recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been subjected to empirical scrutiny. Based on the social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.
Keywords: Physical contact; Social touch; Interpersonal interaction; Literature review; Computer mediated communication; Haptic feedback
A haptic interface for computer-integrated endoscopic surgery and training BIBAKFull-Text 160-176
  M. Tavakoli; R. V. Patel; M. Moallem
Haptic feedback has the potential to provide superior performance in computer-integrated surgery and training. This paper discusses the design of a user interface that is capable of providing force feedback in all the degrees of freedom (DOFs) available during endoscopic surgery. Using the Jacobian matrix of the haptic interface and its singular values, methods are proposed for analysis and optimization of the interface performance with regard to the accuracy of force feedback, the range of applicable forces, and the accuracy of control. The haptic user interface is used with a sensorized slave robot to form a master-slave test-bed for studying haptic interaction in a minimally invasive environment. Using the master-slave test-bed, teleoperation experiments involving a single degree of freedom surgical task (palpation) are conducted. Different bilateral control methods are compared based on the transparency of the master-slave system in terms of transmitting the critical task-related information to the user in the context of soft-tissue surgical applications.
Keywords: Endoscopic surgery; Robot-assisted surgery; Haptic interface; Force observer; Master-slave teleoperation; VRPN; Bilateral control; Transparency; Soft tissue
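A minimal sketch, under assumed numbers, of the kind of singular-value analysis mentioned in the abstract: the singular values of a haptic interface Jacobian give simple measures of how uniformly and how strongly joint torques map to end-effector forces. The Jacobian below is a placeholder, not the device described in the paper.

    import numpy as np

    # Example 3-DOF Jacobian of a haptic interface at one configuration (placeholder values).
    J = np.array([[0.20, 0.05, 0.00],
                  [0.00, 0.18, 0.03],
                  [0.01, 0.00, 0.22]])

    sigma = np.linalg.svd(J, compute_uv=False)
    condition_number = sigma.max() / sigma.min()   # isotropy of the torque-to-force mapping
    manipulability = np.prod(sigma)                # proportional to the manipulability ellipsoid volume

    print("singular values:", sigma)
    print("condition number:", condition_number)
    print("manipulability:", manipulability)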
Guidelines for haptic interpersonal communication applications: an exploration of foot interaction styles BIBAKFull-Text 177-191
  A. F. Rovers; H. A. van Essen
A new method for researching haptic interaction styles is presented, based on a layered interaction model and a classification of existing devices. The method is illustrated by designing a new foot interaction device, the aim of which is to enhance non-verbal communication over a computer network. A layered-protocols interaction model makes it possible to consider all aspects of the haptic communication process: the intention to perform an action, the limitations of the human body, and the specifications of the communication device and the network. We demonstrate how this model can be used to derive design guidelines by analyzing and classifying existing communication devices. By designing and evaluating a foot interaction device, we not only demonstrate that feet are suited to personal, concealed communication over a network, but also show the added value of the design guidelines. Results of user tests provide clues for designing stimuli for foot interaction and indicate applications of foot communication devices.
Keywords: Haptic interaction; Foot interaction; Layered protocols; Communication; Hapticons
Haptic modeling in the conceptual phases of product design BIBAKFull-Text 192-202
  M. Bordegoni; U. Cugini
The paper presents the results of a research project aimed at developing an innovative system for modeling industrial products based on haptic technology. The system consists of a Computer Aided Design (CAD) system enhanced with intuitive, designer-oriented interaction tools and modalities. It integrates innovative six-degree-of-freedom (DOF) haptic tools for modeling digital shapes, with sweep operators applied to class-A surfaces and force computation models based on chip formation models. The system aims at exploiting designers' existing skills in modeling products, improving the product design process by reducing the need to build several physical models for evaluating and testing product designs. The system requirements have been defined by observing designers during their daily work and translating the way they model shapes using their hands and craft tools into specifications for the modeling system and the haptic tool. The system prototype has been tested by designers, who found it intuitive and effective to use.
Keywords: Virtual prototyping; Product design; Haptics; Haptic modeling
Wearable vibrotactile systems for virtual contact and information display BIBAKFull-Text 203-213
  Robert W. Lindeman; Yasuyuki Yanagida; Haruo Noma; Kenichi Hosaka
This paper presents a development history of a wearable, scalable vibrotactile stimulus delivery system. This history has followed a path from desktop-based, fully wired systems, through hybrid approaches consisting of a wireless connection from the host computer to a body-worn control box and wires to each tactor, to a completely wireless system employing Bluetooth technology to connect directly from the host to each individual tactor unit. Applications for such a system include delivering vibrotactile contact cues to users of virtual environments, providing directional cues in order to increase situational awareness in both real and virtual environments, and for general information display in wearable contexts. Through empirical study, we show that even a simple configuration, such as eight tactors arrayed around the torso, can be effective in increasing situational awareness in a building-clearing task, compared to users who perform the same task without the added cues.
Keywords: Vibrotactile; Tactile; Wearable; Feedback; Human-computer interaction
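An illustrative sketch (not the authors' firmware or protocol) of the directional-cue idea described above: a bearing is mapped to the nearest of eight tactors worn around the torso, with tactor 0 assumed to sit at the front and the rest spaced every 45 degrees clockwise.

    NUM_TACTORS = 8

    def tactor_for_bearing(bearing_deg):
        """Return the index of the tactor closest to a bearing in degrees (0 = straight ahead)."""
        sector = 360.0 / NUM_TACTORS
        return int(((bearing_deg % 360.0) + sector / 2) // sector) % NUM_TACTORS

    def pulse(tactor, duration_ms=150):
        # Placeholder for the wireless command sent to a single tactor unit.
        print(f"vibrate tactor {tactor} for {duration_ms} ms")

    pulse(tactor_for_bearing(100.0))    # fires the tactor on the wearer's right side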

VR 2006-04 Volume 9 Issue 4

Virtual reality design techniques for web-based historical reconstructions BIBAKFull-Text 215-225
  Kevin Badni
Due to increases in personal computer power and available bandwidth, 3D worlds are becoming increasingly accessible over the Internet, allowing viewers to freely explore constructed virtual worlds. However, even with these advances, creating realistic 3D worlds that download quickly and are pleasurable to navigate requires a number of design techniques to be employed. The aim of this paper is to describe the design techniques used in a redesign of a web-based virtual reality reconstruction of an historical site, showing how different techniques can be used to optimise download times while retaining historical realism. The techniques and processes can be used as a guide by any designer to help create lightweight, realistic virtual models.
Keywords: VRML; Design methodologies; Visualisation techniques; On-line constraints
Subjective performance BIBAFull-Text 226-233
  Karsten Bormann
Much effort has gone into exploring the concept of presence in virtual environments. One of the reasons for this is the possible link between presence and performance, which has also received a fair amount of attention. However, the performance side of this equation has been largely ignored. That is, without much discussion, researchers tend to equate performance with measured performance on some specific task. But if presence is a measure of overall acceptance of the virtual experience, can we get away with anything less than assessing all aspects of user performance? This is a problem, as we can neither measure everything subjects do nor determine the weights or importance of all subtasks carried out during task completion by any given user. The way around this may be to ask subjects to assess their own performance; call it subjective performance. This notion is explored in the context of an experiment that investigated the presence-performance relationship by decoupling the variation in spatial audio fidelity (one independent variable) from its relevance to the search task at hand (the other independent variable). It was found that while presence showed no correlation with task time, it did exhibit a fairly strong correlation with subjective performance.
The contribution of virtual reality to research on sensory feedback in remote control BIBAKFull-Text 234-242
  Barry Richardson; Mark Symmons; Dianne Wuillemin
Here we consider research on the kinds of sensory information most effective as feedback during remote control of machines, and the role of virtual reality and telepresence in that research. We argue that full automation is a distant goal and that remote control deserves continued attention and improvement. Visual feedback to controllers has developed in various ways but autostereoscopic displays have yet to be proven. Haptic force feedback, in both real and virtual settings, has been demonstrated to offer much to the remote control environment and has led to a greater understanding of the kinesthetic and cutaneous components of haptics, and their role in multimodal processes, such as sensory capture and integration. We suggest that many displays using primarily visual feedback would benefit from the addition of haptic information but that much is yet to be learned about optimizing such displays.
Keywords: Haptic; Feedback; Active; Passive; Kinesthetic; Cutaneous; Virtual reality; Perception
Identification of real objects under conditions similar to those in haptic displays: providing spatially distributed information at the contact areas is more important than increasing the number of areas BIBAKFull-Text 243-249
  Gunnar Jansson; Linda Monaci
Present-day haptic displays have one or a few contact areas, with the information being similar over the whole area. The aim of this investigation was to study the relative importance of increasing the number of contact areas and of providing spatially distributed information at each contact area. Technical development was "simulated" in experiments with real objects in which the information was constrained in ways similar to those in haptic displays. The results clearly suggest that the largest improvement can be expected if spatially distributed information is made available within each contact area. Once that is done, an improvement in performance can also be expected with an increased number of contact areas; increasing only the number of contact areas will not give the same result.
Keywords: Haptic displays; Technical development; Contact areas; Spatially distributed cutaneous information
Freshly squeezed touch into bits: towards the development of a haptic design palette BIBAKFull-Text 250-259
  Simone Gumtau
Haptic interfaces have the potential to enhance communication and interaction via the computer, enabling affective, expressive interpersonal communication and enriching interaction through haptic feedback. Still, what exactly their potential is, and how we can design in order to fulfil it, remains a topic of contemporary debate. My contribution to this debate is to place some of the current developments into a philosophical and cultural context and to introduce social-science-based methodologies, which will help broaden the discussion and the scope of input. Through semiotic analysis, we can predict 'meaning making' in haptic communication that goes beyond linguistic description. Examples of haptic interfaces are positioned as case studies in this typology. I also describe the Haptic Box and PinKom as my way of investigating a semiotic system of touch. In conclusion, this paper hopes to inform and catalyse the development of a haptic design palette.
Keywords: Haptic; Communication; Semiotics; Affect; Social; Culture
Linking GIS with real-time visualisation for exploration of landscape changes in rural community workshops BIBAKFull-Text 260-270
  Christian Stock; Ian D. Bishop
To allow rural communities to evaluate possible future landscape scenarios, we have created a portable environment for landscape simulation (an envisioning system). The goal of this system is to give communities the opportunity to plan their desired futures. Our system is designed for workshop environments and allows workshop attendees to explore and interact with representations of virtual landscapes. We use virtual reality technology to visualise the landscape representations, a geographic information system (GIS) to allow participants to change the current landscape configuration, and mobile computing devices to allow attendees to navigate in the virtual landscape and to give feedback and opinions on the landscape changes. Here, we describe the technology that implements the interaction between geographic information systems and real-time rendering needed to achieve real-time visualisation of landscape changes. To achieve this functionality we have programmed two software clients (a renderer and an ESRI ArcMap extension) and a server that handles message flow. The landscape is divided into management units, each of which supports one land-use type. Using the GIS interface, users can change the land uses associated with the units, and the renderer updates the landscape correspondingly in real time.
Keywords: Envisioning systems; Virtual reality; Geographic information systems; Community values
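The message flow between the GIS client, the server, and the renderer might be organised roughly as in the sketch below; this is an assumed, in-process illustration only, since the real system connects an ESRI ArcMap extension and a renderer over a network, and all names and the message format here are hypothetical.

    class MessageServer:
        """Relays land-use change messages from the GIS client to subscribers."""
        def __init__(self):
            self.subscribers = []
        def subscribe(self, callback):
            self.subscribers.append(callback)
        def publish(self, message):
            for callback in self.subscribers:
                callback(message)

    class Renderer:
        """Keeps the land use of each management unit and redraws it on change."""
        def __init__(self):
            self.landuse = {}                   # management unit id -> land-use type
        def on_change(self, message):
            self.landuse[message["unit"]] = message["landuse"]
            print(f"re-texturing unit {message['unit']} as {message['landuse']}")

    server = MessageServer()
    renderer = Renderer()
    server.subscribe(renderer.on_change)

    # The GIS-side client would publish something like this when a user
    # reassigns a management unit in the map interface:
    server.publish({"unit": 17, "landuse": "forestry"})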