
Proceedings of the 2000 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'00 ACM Symposium on Virtual Reality Software and Technology
Editors: Ha-Jine Kimn; Kwang Yun
Location: Seoul, Korea
Dates: 2000-Oct-22 to 2000-Oct-25
Standard No: ISBN 1-58113-316-2
  1. Keynote
  2. Collaborative virtual environment
  3. Image-based modeling and rendering
  4. Augmented reality / 3D modeling
  5. Distributed virtual environment
  6. Time critical rendering
  7. Interaction
  8. Character/web
  9. Application system


Keynote

Interaction, imagination and immersion: some research needs BIBAKFull-Text 1-7
  Thomas B. Sheridan
This paper discusses four ways that humans interact with their environments, four variables that determine the experience of virtual reality, and which of the interactions support which of the VR-enhancing variables. Some philosophical issues about immersion, the experience of presence, and the meaning of reality are then considered. The engineering paradigm of estimation is reviewed as a way of bridging classical ontological differences of opinion about reality. Finally, some VR research needs are discussed: haptics, minimally invasive diagnosis and surgery, driving simulation, decision aids in system operation, education, computer-aided synthesis, measures of presence and whether presence enhances performance, social ills of VR, and the relation of VR to spirituality.
Keywords: Human, applications, definition, education, haptics, imagination, immersion, interaction, ontology, presence, reality, spirituality, surgery, vehicles, virtual

Collaborative virtual environment

CAVERNsoft G2: a toolkit for high performance tele-immersive collaboration BIBAKFull-Text 8-15
  Kyoung S. Park; Yong J. Cho; Naveen K. Krishnaprasad; Chris Scharver; Michael J. Lewis; Jason Leigh; Andrew E. Johnson
This paper describes the design and implementation of CAVERNsoft G2, a toolkit for building collaborative virtual reality applications. G2's special emphasis is on providing the tools to support high-performance computing and data-intensive systems that are coupled to collaborative, immersive environments.
   This paper describes G2's broad range of services, and demonstrates how they are currently being used in a collaborative volume visualization application.
Keywords: CVE, Tele-immersion, VR, data-mining, high-performance computing, networking library
Choosing and using a driving problem for CVE technology development BIBAKFull-Text 16-24
  William L. Mitchell; Daphne Economou; Steve R. Pettifer; Adrian J. West
The need for a real-world driving problem to guide technology development has long been recognised. However, this does not guarantee the identification of requirements for technology development. This paper argues that a more systematic approach is needed for choosing and making best use of a driving problem for CVE technology. The method consists of identifying the stakeholders in the technology development project. A series of issues must then be addressed: choice of problem area, choice of application, choice of research approach, design of the application, ensuring use by real users, choice of method of study, and identification of technology requirements. The method is illustrated by considering the development of the Deva CVE system with an art-based application and with an educational application.
Keywords: Design methodology, education, evaluation, human factors
Advanced real-time collaboration over the internet BIBAKFull-Text 25-32
  Chris Joslin; Tom Molet; Nadia Magnenat-Thalmann
In this paper we present our Networked Virtual Environment (NVE) System, called W-VLNET (Windows Virtual Life Network), which has been developed on the Windows NT Operating System (OS). This paper emphasizes the real-time aspect of this NVE system, the advanced interactivity that the system provides, and its ability to transfer data across the Internet so that geographically distant users can collaborate with each other. Techniques for communication, scene management, facial and body animation, and general user interaction modules are detailed in this paper. The use of VRML97 and MPEG-4 SNHC is overviewed to stress the compatibility of the system with other similar Virtual Reality systems. The software provides realistic virtual actors as well as sets of applicable high-level actions in real-time. Related issues on obtaining actor models and animating them in real-time are presented. We also introduce a case study to show an example of how the system can be used.
Keywords: Advanced Interaction, Distance Collaboration, Motion Tracking, Network Virtual Environment, Networks, Real-Time Interactions
DEVA3: architecture for a large-scale distributed virtual reality system BIBAKFull-Text 33-40
  Steve Pettifer; Jon Cook; James Marsh; Adrian West
In this paper we present work undertaken by the Advanced Interfaces Group at the University of Manchester on the design and development of a system to support large numbers of geographically distributed users in complex, large-scale virtual environments (VEs). We show how the problem of synchronisation in the face of network limitations is being addressed by the Deva system through the exploitation of subjectivity. Further, we present a model for flexibly describing object behaviours in the VEs.
   Applications of the system in use are described.
Keywords: Distribution, Object behavior, Programming model, Subjectivity, System architecture, Virtual Environments

Image-based modeling and rendering

Interactive reconstruction of virtual environments from photographs, with application to scene-of-crime analysis BIBAKFull-Text 41-48
  Simon Gibson; Toby Howard
There are many real-world applications of Virtual Reality that require the construction of complex and accurate three-dimensional models, suitably structured for interactive manipulation. In this paper, we present semi-automatic methods that allow such environments to be quickly and easily built from photographs taken with uncalibrated cameras, and illustrate the techniques by application to the real-world problem of scene-of-crime reconstruction.
Keywords: Computer Vision, Model Building, Photogrammetry, Scene of Crime Reconstruction, Texture, Virtual Reality
Interactive 3D modeling using only one image BIBAKFull-Text 49-54
  Sujin Liu; Zhiyong Huang
For virtual reality systems, the modeling of 3D objects and scenes is important and challenging. In this paper, we present an image-based interactive 3D modeling framework consisting of three major modules: photogrammetric modeling, human interaction, and texture mapping. These modules are not used sequentially; they are mixed throughout the modeling process. The main idea is to explore the use of images in interactive modeling systems to achieve automation; in particular, the use of only one image is addressed. On the one hand, unlike common fully interactive modeling frameworks, users are not required to interactively specify low-level details that can be derived automatically from the image. On the other hand, human interaction is still required for high-level tasks that are difficult for algorithms to perform automatically. We have implemented the framework, and the experimental results are promising.
Keywords: 2-D image, Modeling of 3-D shape, human computer interaction, texture mapping
A hybrid method of image synthesis in IBR for novel viewpoints BIBAKFull-Text 55-60
  Xuehui Liu; Hanqiu Sun; Enhua Wu
Due to visibility changes and surface enlargement when producing a novel view from a new viewpoint, 3D re-projection from a single reference image in IBMR inevitably produces holes in the destination image. Even worse, exposure errors occur when a background region that was occluded becomes visible in the desired image, because some background elements are absent from the reference image. The general solution to this kind of problem is to use multiple images from different viewpoints as the input source. By doing so, however, the rendering cost increases with the number of reference images, and the composition algorithm has to rely on z-buffer processing.
   In fact, plenty of redundant information exists among different reference images. Finding an effective way to extract the information needed for the novel view from the reference images is the key to solving the problem. In this paper, we propose a new method of image synthesis from multiple reference images. The method combines forward warping and backward warping to fulfil the image composition task for a novel viewpoint. The primary inspiration behind our image synthesis method comes from the fact that polygon edge geometry can indicate where an exposure, and possibly an exposure error, occurs in the destination image if object silhouettes are known a priori. The property that intersections between a scanline and polygons must occur in pairs is used to distinguish holes caused by enlargement of surfaces from those caused by visibility change. Different heuristic methods are used to choose one image as the primary reference image, the one that most resembles the destination image, and other reference images for filling different kinds of holes. Depth continuity along the scanline and the depth information already present in the destination image are used to accelerate the search for corresponding pixels when filling holes.
Keywords: Image-Based modeling and rendering, backward warping, depth-image map, epipolar geometry, forward warping
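
The forward-warping stage summarized above can be sketched in a few lines. This is a toy grayscale version under an assumed pinhole intrinsic matrix K and relative pose (R, t), not the paper's implementation; destination pixels that receive no source pixel are left as holes, which a backward-warping pass would then fill:

```python
import numpy as np

def forward_warp(color, depth, K, R, t):
    """Forward-warp a reference color+depth image into a novel view.

    Pixels are back-projected with the reference intrinsics K, moved by
    the relative pose (R, t), and re-projected; destination pixels that
    receive no source pixel stay holes (marked -1).
    """
    h, w = depth.shape
    out = -np.ones_like(color)
    zbuf = np.full((h, w), np.inf)
    Kinv = np.linalg.inv(K)
    for v in range(h):
        for u in range(w):
            p = depth[v, u] * (Kinv @ np.array([u, v, 1.0]))  # 3D point
            q = K @ (R @ p + t)                               # novel view
            if q[2] <= 0:
                continue
            x, y = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= x < w and 0 <= y < h and q[2] < zbuf[y, x]:
                zbuf[y, x] = q[2]                             # z-buffer splat
                out[y, x] = color[v, u]
    return out
```

With an identity pose the warp reproduces the reference image; moving the camera opens the holes at disoccluded and enlarged surfaces that the paper's scanline-pairing test then classifies.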

Augmented reality / 3D modeling

A framework for rapid evaluation of prototypes with augmented reality BIBAKFull-Text 61-66
  Selim Balcisoy; Marcelo Kallmann; Pascal Fua; Daniel Thalmann
In this paper we present a new framework, in an Augmented Reality context, for the rapid evaluation of prototypes before manufacture. The design of such prototypes is a time-consuming process, leading to the need for prior evaluation in realistic interactive environments. We have extended the definition of modelling object geometry with modelling object behaviour, making it possible to evaluate objects in a mixed environment. Such enhancements allow the development of tools and methods to test object behaviour, and to perform interactions between virtual humans and complex real and virtual objects.
   We propose a framework for testing the design of objects in an augmented reality context, where a virtual human is able to perform evaluation tests with an object composed of real and virtual components. In this paper our framework is described and a case study is presented.
Keywords: Augmented Reality, Human Factors, Object Behaviour, Prototyping, Virtual Humans
An immersive modeling system for 3D free-form design using implicit surfaces BIBAKFull-Text 67-74
  Masatoshi Matsumiya; Haruo Takemura; Naokazu Yokoya
We present a new free-form interactive modeling technique based on the metaphor of clay work. This paper discusses design issues and an immersive modeling system that enables a user to intuitively and interactively design 3D solid objects with curved surfaces using one's finger. Shape deformation is expressed by simple formulas without complex calculation, because skeletal implicit surfaces are employed to represent smooth free-form surfaces. A polygonization algorithm that generates a polygonal representation from the implicit surfaces is developed to reduce the time required for rendering curved surfaces, since conventional graphics hardware is optimized for displaying polygons. The prototype system has shown that a user can design 3D solid objects composed of curved surfaces in a short time by intuitively deforming objects with one's finger in real time.
Keywords: CAD, Head Mounted Displays, Implicit Surfaces, Solid Modeling, Virtual Reality
A framework for the structured design of VR/AR content BIBAKFull-Text 75-82
  C. Geiger; V. Paelke; C. Reimann; W. Rosenbach
We describe a framework that makes it easy to design and implement virtual and augmented reality worlds. Based on a structured design approach for interactive animated 3D content, we aim to supply designers and content experts of complex virtual environments with a component-based toolset for the structured design of the visual and abstract components of 3D applications.
Keywords: Design Framework, Interactive 3D Animation, Virtual and Augmented Reality
Conceptual free-form styling on the responsive workbench BIBAFull-Text 83-91
  Gerold Wesche; Marc Droske
A two-handed 3D styling system for free-form surfaces in a table-like Virtual Environment, the Responsive Workbench (RWB), is described. Intuitive curve and surface deformation tools based on variational modeling and interaction techniques adapted to 3D VR modeling applications are proposed. The user draws curves (cubic B-splines) directly in the Virtual Environment using a stylus as an input device. The curves are connected automatically, such that a curve network develops. A combination of automatic and user-controlled topology extraction modules creates the connectivity information. The underlying surface model is based on B-spline surfaces, or, alternatively, uses multisided patches [20] bounded by closed loops of curve pieces.

Distributed virtual environment

Multi-resolution spatial model for large-scale virtual environment BIBAKFull-Text 92-96
  ChangHun Park; Heedong Ko; TaiYun Kim
The goal of this paper is to optimize interest management for the scalability of networked virtual environments. To remove the needless load of co-presence, the interest manager restricts the consistency of the shared virtual space by means of relevance. We propose a multi-resolution spatial model (MRSM) for optimizing an interest manager that attends to all participants in order to determine the limits of consistency. MRSM enables an interest manager to control the granularity of relevance filtering without disturbing co-presence. When an interest manager evaluates relevance against the movements of players, MRSM supports the detection of an avatar's location and the estimation of relevance at different costs according to the level-of-detail. This paper also presents an algorithm that uses neighborhood size to adjust the level-of-detail while guaranteeing efficiency.
Keywords: Interest management, Networked virtual space, Scalability, filter update, level-of-detail (LOD), multi-resolution spatial model (MRSM), relevance realization, state update
Message caching for local and global resource optimization in shared virtual environments BIBAKFull-Text 97-102
  Helmuth Trefftz; Ivan Marsic
The use of Shared Virtual Environments is growing in areas such as multi-player video games, military and industrial training, and collaborative design and engineering. As a result, different mixes of computing power and graphics capabilities among the participating computers arise naturally as the variety of people and organizations sharing a virtual environment grows. This paper presents an adaptive mechanism to reduce bandwidth usage and to optimize the use of computing resources in the heterogeneous computer mixes found in a shared virtual environment. The mechanism is based on caching of both outgoing and incoming messages. We also report the results of implementing the proposed scheme in a simple shared virtual environment.
Keywords: Message Caching, Networking, Shared Virtual Environments, Virtual Reality
Scalable interest management using interest group based filtering for large networked virtual environments BIBAKFull-Text 103-108
  Seunghyun Han; Mingyu Lim; Dongman Lee
As a distributed virtual environment (DVE) scales in terms of users and network latency, a key aspect to consider is scalability of interactive performance, because a large number of objects imposes a heavy burden, especially on network and computational resources. To improve scalability, various relevance-filtering and aggregation mechanisms have been proposed. However, the existing filtering mechanisms do not scale well in terms of interactive performance as the number of users increases and users crowd into a specific place.
   In this paper, we propose a new scalable filtering scheme that reduces the number of messages by dynamically grouping users based on their interests and distance. Within a group, members communicate with each other with high fidelity. A representative sends up-to-date group information about members at a low transmission frequency when they are not of immediate interest but are still within the interest area. The representative is elected from the members of the group in a distributed manner. The proposed scheme enhances the interactive-performance scalability of large-scale DVE systems by as much as 18% compared with the existing approach.
Keywords: Interest management, Networked virtual environments, Representative user, User interest-based group
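
The interest-grouping idea can be illustrated with a toy sketch. The greedy grouping, the field names, and the lowest-id representative election below are illustrative assumptions, not the paper's actual protocol:

```python
import math

def group_users(users, interest, radius):
    """Greedily group users that share `interest` and lie within `radius`.

    Each group keeps the member with the lowest id as representative;
    intra-group members would exchange high-fidelity updates, while the
    representative forwards aggregated state to users outside the group.
    """
    groups = []
    unassigned = sorted(u for u in users if users[u]["interest"] == interest)
    while unassigned:
        rep = unassigned.pop(0)          # lowest remaining id becomes rep
        members = [rep]
        rest = []
        for u in unassigned:
            dx = users[u]["pos"][0] - users[rep]["pos"][0]
            dy = users[u]["pos"][1] - users[rep]["pos"][1]
            if math.hypot(dx, dy) <= radius:
                members.append(u)        # close enough: join the group
            else:
                rest.append(u)
        unassigned = rest
        groups.append({"representative": rep, "members": members})
    return groups
```

The payoff is message reduction: users outside a group receive one aggregated update per group instead of one per member.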
Scalable predictive concurrency control for large distributed virtual environments with densely populated objects BIBAKFull-Text 109-114
  Dongman Lee; Jeonghwa Yang; Soon J. Hyun
We propose an enhanced prediction-based concurrency control scheme that supports scalable concurrency control for large distributed virtual environments, especially where entities are densely populated and tend to gather closely. The prediction scheme is based on an entity-centric multicast group. Only the users surrounding a target entity multicast ownership requests via the entity multicast group and become owner candidates. The current owner predicts the next owner among the owner candidates and sends the ownership to the next owner in advance. However, if entities are each assigned their own multicast address when they are close to each other, users have to continuously issue join messages as they move past entities. To reduce the network and message-exchange overhead, we use the location proximity of entities in virtual environments. By grouping closely gathered entities into one entity group and sharing a multicast address among the group-member entities, we reduce the number of frequent join and leave operations and join messages, and therefore maintain interactive performance. The experimental results show that the proposed mechanism improves scalability, especially when entities are closely gathered.
Keywords: Large scale distributed virtual environments, concurrency control, entity group, entity-centric multicast group, prediction scheme, scalability

Time critical rendering

Conservative visibility preprocessing for walkthroughs of complex urban scenes BIBAKFull-Text 115-128
  JunHyeok Heo; Jaeho Kim; KwangYun Wohn
Visibility preprocessing is a useful method for reducing the complexity of scenes to be processed in real time, and so enhances the overall rendering performance for interactive visualization of virtual environments. In this paper, we propose an efficient visibility preprocessing method. The proposed method is able to handle more general environments, such as urban environments, and removes invisible polygons jointly blocked by multiple occluders. The proposed method requires O(nm) time and O(n+m) space; by selecting a suitable value for m, the user can choose a trade-off between preprocessing time and the quality of the result. In the proposed method, we assume that navigable areas in virtual environments are partitioned into rectangular parallelepiped cells, or sub-worlds. To preprocess the visibility of each polygon for a given partitioned cell, we must determine at least area-to-area visibility, which is inherently a four-dimensional problem. In the proposed method, we efficiently express four-dimensional visibility information in two-dimensional spaces and keep it within a ternary tree, which is conceptually similar to a BSP (Binary Space Partitioning) tree, by exploiting the characteristics of conservative visibility.
Keywords: Conservative Visibility, Occlusion Culling, Visibility Determination, Visibility Preprocessing
Fast perception-based depth of field rendering BIBAKFull-Text 129-133
  Jurriaan D. Mulder; Robert van Liere
Current algorithms for creating depth of field (DOF) effects are either too costly to apply in VR systems or produce inaccurate results. In this paper, we present a new algorithm to create DOF effects. The algorithm is based on two techniques: one of high accuracy and one of high speed but lower accuracy. The latter is used to create DOF effects in the peripheral viewing area, where accurate results are not necessary. The former is applied to the viewing volume the viewer focuses on. Both techniques make extensive use of rendering hardware, for texturing as well as image processing. The algorithm presented in this paper improves on other (fast) DOF algorithms in that it is faster and provides better-quality DOF effects where they matter most.
Keywords: depth of field rendering, virtual reality
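
The per-pixel blur size that DOF algorithms like this must estimate is commonly derived from the thin-lens circle of confusion. A minimal sketch of that standard formula (not necessarily the paper's exact model; names and units are illustrative):

```python
def circle_of_confusion(z, z_focus, focal_length, aperture):
    """Thin-lens circle-of-confusion diameter for a point at depth z,
    with the lens focused at depth z_focus (all lengths in metres)."""
    return abs(aperture * focal_length * (z - z_focus) /
               (z * (z_focus - focal_length)))
```

Points at the focus depth receive zero blur and blur grows away from the focal plane, which is why an accurate technique is only needed in the region the viewer is focusing on.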
A new BSP tree framework incorporating dynamic LoD models BIBAKFull-Text 134-141
  Zhigeng Pan; Zhiliang Tao; Chiyi Cheng; Jiaoying Shi
In this paper we present a new BSP (Binary Space Partitioning) tree framework. BSP trees are one of the most successful space partitioning techniques, since they allow both object modeling and classification in one single structure. The framework is designed for a multi-resolution modeling system that incorporates the BSP tree structure and dynamic level-of-detail models, and it has the advantages of both BSP and multi-resolution representations. The tree construction and traversal routines for the multi-resolution BSP tree are discussed in detail. Images and timings for our implementation are provided.
Keywords: BSP trees, Multi-resolution modeling, mesh simplification, real-time rendering


Interaction

Developing an efficient technique of selection and manipulation in immersive V.E. BIBAKFull-Text 142-146
  Chang Geun Song; No Jun Kwak; Dong Hyun Jeong
An interaction task in virtual reality is one in which a user can modify a computer-generated virtual world using various techniques. However, current interaction techniques are not applicable to most virtual environments because of their inefficiency and inconvenience. In this paper, we propose a selection and manipulation technique called the Finger-gesture. We evaluate its usefulness by conducting quantitative and qualitative experiments within a specific environment. The results indicate that our new technique is more efficient for selection and modification tasks than existing techniques, including Go-Go and Ray-casting, in terms of task completion time and accuracy.
Keywords: 3D interaction technique, Finger-Gesture, Go-Go, Ray-casting, Virtual Reality
Immersive graph navigation using direct manipulation and gestures BIBAKFull-Text 147-152
  Noritaka Osawa; Kikuo Asai; Yuji Y. Sugimoto
An immersive graph visualization and navigation system is proposed. Its visualization is based on a multiple-focus layout technique using heat models, in which virtual temperatures influence the graph layout. Navigation uses a combination of the layout technique and hand gestures. The system allows one to have multiple focus nodes and to move a focus dynamically by direct manipulation and hand gestures. Direct manipulation by hand can arrange nodes and choose focus nodes. Hand gestures can control a focus area using a spotlight-like heat radiation: the forefinger points in the direction of the spotlight, and the angle between the forefinger and the thumb controls the spread angle of the spotlight. This technique enables one to navigate a graph in an immersive virtual space.
Keywords: direct manipulation, graph navigation, graph visualization, hand gestures, heat models
VR user interface: closed world interaction BIBAKFull-Text 153-159
  Ching-Rong Lin; R. Bowen Loftin
In this paper, we describe a user interface technique that uses a bounding box as a metaphor to facilitate interaction in a Virtual Reality (VR) environment. Because this technique is based on the observation that some VR application fields are contained in a closed world, we call it Closed World Interaction (CWI). After the user defines a closed world, the necessary virtual buttons are shown around the closed world, which is represented by a frame. These virtual buttons are then used to interact with models. We also integrate some of the 2D Windows, Icons, Mouse and Pointer (WIMP) metaphors into the CWI technique, reflecting our belief that users will be able to adapt to this environment quickly. A series of user studies was conducted to investigate the effectiveness of this technique. The results indicate that users can define a closed world quickly. Experience appears to be an important factor, and users can be trained to become familiar with CWI in the VR environment. The constrained interactions can also enhance the accuracy of selection. Two-handed manipulation somewhat improves speed.
Keywords: 3D interaction, Virtual Reality and visualization
Virtual reality for education? BIBAKFull-Text 160-165
  Don Allison; Larry F. Hodges
It is still unclear what, if any, impact virtual reality will have on public education. The virtual reality gorilla system is being used as a testbed to study if and how virtual reality might be useful as an aid in educating middle school children, and to investigate the issues that arise when building virtual reality systems for knowledge acquisition and concept formation.
Keywords: Virtual reality, education, middle school


Character/web

Animated deformations with radial basis functions BIBAKFull-Text 166-174
  Jun-yong Noh; Douglas Fidaleo; Ulrich Neumann
We present a novel approach to creating deformations of polygonal models using Radial Basis Functions (RBFs) to produce localized real-time deformations. Radial Basis Functions assume surface smoothness as a minimal constraint and animations produce smooth displacements of affected vertices in a model. Animations are produced by controlling an arbitrary sparse set of control points defined on or near the surface of the model. The ability to directly manipulate a facial surface with a small number of point motions facilitates an intuitive method for creating facial expressions for virtual environment applications such as an immersive teleconferencing system or entertainment. Smooth deformations of the human face or other models are possible and illustrated with examples of a variety of expressions and mouth shapes.
Keywords: Facial Animation, Geometry Deformation, MPEG-4, Radial Basis Functions
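
The RBF interpolation underlying such deformations is compact enough to sketch. A minimal version with a Gaussian kernel (the kernel choice, the `sigma` parameter, and the function name are illustrative assumptions; the paper's basis and constraints may differ):

```python
import numpy as np

def rbf_deform(vertices, controls, displacements, sigma=0.5):
    """Deform mesh vertices so control points follow given displacements.

    Gaussian RBF weights are solved so the interpolant reproduces each
    control displacement exactly; all other vertices are displaced
    smoothly according to their distance from the controls.
    """
    def phi(r):
        return np.exp(-(r / sigma) ** 2)

    # Pairwise distances between control points -> RBF system matrix
    d = np.linalg.norm(controls[:, None, :] - controls[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(d), displacements)   # (n_ctrl, 3)

    # Evaluate the interpolant at every vertex
    dv = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=-1)
    return vertices + phi(dv) @ weights
```

Because the system is only n_ctrl x n_ctrl, a sparse set of facial control points can be solved and re-evaluated every frame, which is what makes localized real-time deformation feasible.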
A simplified deformation for real-time 3D character animation BIBAKFull-Text 175-182
  Sang-Won Ghyme; Ki-Hong Kim; Hyun-Bin Kim
The basic ideas for realistic, real-time animation of a 3D character are described in three steps. First, among all the deformations that occur in the human body, we choose just two, because they are the most globally noticeable and can be performed in real time. Second, we decide which body parts of a character are deformed, based on anthropometric data. Last, we propose simple and fast implementations of the two deformations. Using these ideas, we introduce an authoring tool that makes a deformable character from a rigid polygon model and tests its deformation, along with an animation player that animates deformable characters generated by the authoring tool.
Keywords: Computer Animation, Computer Graphics, Deformation, Modeling, Modeling Tool, Motion Player, Real-Time, Skeleton, Virtual Character, Virtual Human, Virtual Reality
Web-based 3D media information system BIBAKFull-Text 183-187
  Yong-Moo Kwon; Ig-Jae Kim; Sang Chul Ahn; Hyoung-Gon Kim
This paper introduces a web-based 3D media information system. We first address two promising 3D modeling techniques: image-based 3D modeling and laser-scanning-based 3D modeling. In particular, we present two approaches to image-based 3D modeling. One is an off-line approach using multiview images captured with a single camera and a robot arm. The other is an on-line approach that extends a commercial Triclops camera system. We also use a 3D modeling scheme based on a laser scanner and a 3D reverse modeler. Using our 3D modeling environments, we construct several kinds of 3D models. We also implement a web-based 3D media information management and retrieval system using an XML data server, which provides services for 3D models. Our web-based 3D media information system aims to serve various types of 3D models and content through the WWW, and is currently focused on the development and management of 3D models of Korean cultural heritage.
Keywords: 3D media information system, Image-based 3D modeling, laser scanning based 3D modeling

Application system

Direct haptic rendering of isosurface by intermediate representation BIBAKFull-Text 188-194
  Kwong-Wai Chen; Pheng-Ann Heng; Hanqiu Sun
With the development of volume visualization methods, we can easily extract meaningful information from volumetric data using interactive graphics and imaging. Haptic interaction with volumetric data adds a new modality to volume visualization that has an advantage in presenting complex attributes of a local region. However, the benefits of haptic rendering of volumetric data have only recently been recognized. Most traditional haptic rendering methods are developed to compute realistic interaction forces with geometric primitives. Direct volume haptic rendering allows haptic palpation of volumetric data, but lacks the ability to simulate the contact sensation of a stiff embedded implicit surface.
   In this paper, we propose a direct haptic rendering method for isosurfaces in volumetric data using a point-based haptic feedback device, without extracting the isosurface into geometric representations such as polygons. Our algorithm extends the intermediate-representation approach, introduced for dealing with complex virtual environments, to haptically render volumetric data. The algorithm uses a virtual plane as an intermediate representation of the implicit isosurface, and computes the point interaction force applied to the haptic interface based on this virtual plane. Using this approach, we achieve a higher haptic servo rate for volumetric data, which makes it easier to maintain the stability of the simulation and makes the method applicable to noisy data without preprocessing. We have developed our algorithm and tested it with synthetic data and medical data, using the PHANToM haptic interface.
Keywords: Force Feedback, Haptic Rendering, Virtual Reality, Volume Visualization
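
The core of the intermediate-representation idea, a local plane standing in for the isosurface during force computation, can be sketched as a simple penalty force. The spring model and names here are illustrative, not the paper's exact formulation:

```python
import numpy as np

def plane_force(probe, plane_point, plane_normal, stiffness):
    """Spring force pushing a point-based haptic probe out of the
    virtual plane that locally approximates the isosurface."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(plane_point - probe, n)   # penetration below the plane
    if depth <= 0:
        return np.zeros(3)                   # probe is on the free side
    return stiffness * depth * n
```

Because the plane is refitted at the (slower) visualization rate while the force loop only evaluates this cheap expression, the haptic servo rate stays high, which is what keeps the simulation stable.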
Dual projection-based VR system for the light weight motion-based driving simulator BIBAKFull-Text 195-198
  Sang-Hun Nam; Dong-Hoon Lee; Jang-Hwan Im; Young-Ho Chai
This paper proposes a projection-based VR system that uses the window and camera projection paradigms simultaneously. The technique can be applied to a driving simulator consisting of a 6-DOF motion platform and projection screens separated from the motion platform.
Keywords: Driving Simulator, Motion Platform, Projection System, Quaternions, Virtual Reality
Development of tension based haptic interface and possibility of its application to virtual reality BIBAKFull-Text 199-205
  Seahak Kim; Masahiro Ishii; Yasuharu Koike; Makoto Sato
Continuous advances in computer technology are making it possible to construct virtual environments with an ever-increasing sense of visual realism. What is lacking are interfaces that allow users to manipulate virtual objects in an intuitive manner. In this paper, we present a 7 DOF tension-based haptic interface that allows users to not only grip an object but also to sense an object's width. We have developed a system to utilize the physical action of gripping to display grasp manipulation in virtual environments. We also present a method to calculate the position and display force associated with this gripping mechanism. Finally, we show the validity of our proposed haptic interface through examples.
Keywords: 7DOF, SPIDAR-G, Tension based haptic interface
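In a tension-based device, each string pulls the grip toward its pulley, so the displayed force is the tension-weighted sum of unit vectors from the grip to the pulleys. The sketch below shows only this forward relation (the inverse problem, choosing non-negative tensions that realize a desired force, needs constrained least squares); names are assumptions, not the authors' API.

```python
import numpy as np

def net_force(grip_pos, pulley_pts, tensions):
    """Resultant force on the grip from string tensions.

    Each string i pulls the grip toward pulley i with tension tau_i >= 0,
    so the net displayed force is sum_i tau_i * u_i, where u_i is the
    unit vector from the grip to pulley i.
    """
    f = np.zeros(3)
    for p, tau in zip(pulley_pts, tensions):
        u = np.asarray(p, float) - np.asarray(grip_pos, float)
        f += tau * u / np.linalg.norm(u)
    return f
```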
Incorporating co-presence in distributed virtual music environment BIBAKFull-Text 206-211
  Byungdae Jung; Jaein Hwang; Sangyoon Lee; Gerard Jounghyun Kim; Hyunbin Kim
In this paper, we present "PODIUM (POstech Distributed virtual Music environment)", a distributed virtual environment that allows users to participate in a shared space and play music with other participants in a collaborative manner. In addition to playing virtual instruments, users can communicate and interact in various ways to enhance the collaboration and, thus, the quality of the music played together. Musical messages are generated note by note through interaction with the keyboard, mouse, and other devices, and transmitted among participants over an IP-multicasting network. In addition to such note-level information, additional messages for visualization and interaction are supported. A real-world-based visualization was chosen over, for instance, an abstract music-world-based visualization, to promote "co-presence" (e.g., recognizing and interacting with other players), which is deemed important for collaborative music production. Beyond entertainment, we hope the system will find great use in casual practice sessions, even for professional performers, orchestras, and bands.
   Since even a slight interruption in the flow of the music, or out-of-sync graphics and sound, would dramatically decrease the utility of the system, we employ various techniques to minimize network delay. An adapted server-client architecture and UDP are used to ensure fast packet delivery and reduce data bottlenecks. Time-critical messages such as MIDI messages are multicast among clients, while less time-critical and infrequently updated messages are sent through the server. Predefined avatar animations are invoked by interpreting the musical messages. By using the latest graphics and sound processing hardware, maintaining an appropriate scene complexity, and keeping the frame rate sufficiently higher than the rate of the fastest notes, the time constraint for graphics and sound synchronization can be met. However, we expect that network delay could cause considerable problems when the system is scaled up to many users and simultaneous notes (for harmony). To assess scalability, we carried out a performance analysis of our system model to derive the maximum number of simultaneous participants. According to our data, about 50 participants should be able to play together without significant disruption, each playing one track with five simultaneous notes at a speed of 16 ticks per second in a typical PC/LAN environment.
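The scalability figure above can be reproduced as a back-of-envelope estimate: with multicast, each client receives note messages from every other participant. The per-message size below is an assumed figure (payload plus headers), not one reported in the paper.

```python
def per_client_load(participants, notes_per_tick, ticks_per_sec,
                    bytes_per_msg=64):
    """Messages/sec and bytes/sec one client receives from all peers.

    With IP multicast, a client receives note messages from the other
    (participants - 1) players. `bytes_per_msg` (MIDI payload plus
    UDP/IP headers) is an assumption for illustration only.
    """
    msgs = (participants - 1) * notes_per_tick * ticks_per_sec
    return msgs, msgs * bytes_per_msg
```

At the paper's operating point (50 participants, five simultaneous notes, 16 ticks per second), this yields a few thousand small messages per second per client, well within typical LAN capacity.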
   To enhance the feeling of "co-presence" among participants, a simple sound localization technique computes panning and relative volumes from the positions and orientations of participants. This reduced localization model is also used to minimize computational cost and network traffic. Participants can send predefined messages by interacting with the keyboard, mouse, and other input devices. All predefined messages are mapped to simple avatar motions, such as playing various types of instruments (players), applauding (audience), and conducting gestures (conductors). We believe that for coordinated music performance, indirect interaction will be the main interaction method: exchanging particular gestures, signals, and voice commands to synchronize the music, confirming and signaling the expression of the upcoming portion of the music, or simply exchanging glances to share each other's emotions. In this view, there are mainly three groups of participants: a conductor, players, and the audience, playing different roles but creating co-presence together through mutual recognition. We ran a simple experiment comparing the musical performance of two groups of participants, one provided with co-presence cues and the other without, and found no performance edge for the group with the co-presence cues. This result can serve as one guideline for building music-related VR applications.
Keywords: Co-presence, Distributed Virtual Reality, Interaction, Networked Virtual Reality, Virtual Music
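The reduced localization model described above, panning and gain from relative position and facing direction, can be sketched as follows. This is a minimal 2D illustration under assumed conventions (heading measured like an atan2 angle, positive pan meaning the source is to the listener's left), not the paper's exact formulation.

```python
import math

def localize(listener_pos, listener_heading, source_pos, rolloff=1.0):
    """Reduced sound localization: stereo pan and distance gain.

    listener_heading is the facing angle in radians (atan2 convention).
    Returns (pan, gain): pan in [-1, 1], +1 = source fully to the
    listener's left; gain falls off with distance via a simple 1/(1+d)
    model (the rolloff constant is an assumption for illustration).
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    rel = math.atan2(dy, dx) - listener_heading  # source angle vs. facing
    pan = math.sin(rel)                          # lateral component only
    gain = 1.0 / (1.0 + rolloff * dist)          # simple distance attenuation
    return pan, gain
```

Because only a sine and a distance are computed per source, a model like this stays cheap enough to run for every participant each frame without adding network traffic.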