Proceedings of the 2002 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'02 ACM Symposium on Virtual Reality Software and Technology
Editors: Jiaoying Shi; Larry Hodges; Hanqiu Sun; Qunsheng Peng
Location: Hong Kong, China
Dates: 2002-Nov-11 to 2002-Nov-13
Publisher: ACM
Standard No: ISBN 1-58113-530-0
Papers: 26
Pages: 224
  1. Real-time rendering
  2. Modeling/simulation
  3. Applications
  4. Interaction
  5. Collision detection
  6. Graphics/image-based algorithms
  7. Distributed/collaborative virtual environments
  8. Hybrid VR

Real-time rendering

Time-critical rendering of discrete and continuous levels of detail (pp. 1-8)
  Christopher Zach; Stephan Mantler; Konrad Karner
We present a novel level-of-detail selection method for real-time rendering that works on hierarchies of discrete and continuous representations. We integrate point-rendered objects with polygonal geometry and demonstrate our approach in a terrain flyover application, where the digital elevation model is augmented with forests. The vegetation is rendered as a continuous sequence of splats, organized in a hierarchy. Further, we discuss enhancements to our basic method that improve its scalability.
Keywords: level of detail management, point rendering, real-time rendering, rendering of vegetation
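As context for the paper's approach, here is a minimal sketch of the budgeted hierarchy descent that time-critical LOD selection generally relies on; the node API is hypothetical and this is not the authors' implementation.
    # Descend a LOD hierarchy, stopping where the projected screen-space
    # error fits the budget or no finer representation exists; each chosen
    # node may hold a discrete mesh or a continuous splat sequence.
    def select_lod(root, camera, error_budget, time_budget):
        selected, stack = [], [root]
        while stack and time_budget > 0:
            node = stack.pop()
            if node.projected_error(camera) <= error_budget or not node.children:
                selected.append(node)               # render this representation
                time_budget -= node.estimated_cost  # charge its estimated cost
            else:
                stack.extend(node.children)         # refine further
        return selected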
Light field duality: concept and applications (pp. 9-16)
  George Chen; Li Hong; Kim Ng; Peter McGuinness; Christian Hofsetz; Yang Liu; Nelson Max
We propose to look at light fields from a dual-space point of view. The advantage, in addition to revealing some new insights, is a framework that combines the benefits of many existing works. Using the well-known two-plane parameterization, we derive the duality between the 4-D light field and the 3-D world space. In the dual light field, rays become hyperpoints. We introduce the concept of the hyperline. Cameras then appear as hyperlines -- camera hyperlines (CHLs) -- mostly heterogeneous in color; scene points also appear as hyperlines -- geometry hyperlines (GHLs) -- mostly homogeneous in color. CHLs and GHLs are independent: the existence of one does not require or replace the other. When both exist, they cross each other at the dual-ray hyperpoints. Both CHL- and GHL-based light field rendering results are presented.
Keywords: dual space, light field rendering, point sample rendering
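The duality the abstract invokes can be made concrete with the standard two-plane parameterization; the derivation below is textbook 2PP geometry, stated here only as background (the paper's hyperline machinery goes further).
    % A ray is the 4-vector (u, v, s, t): it meets the plane z = 0 at (u, v)
    % and the plane z = 1 at (s, t), so along the ray
    \[ \mathrm{ray}(z) = \bigl((1-z)u + zs,\;(1-z)v + zt,\;z\bigr). \]
    % All rays through a fixed scene point P = (x, y, z) obey two linear
    % constraints in (u, v, s, t),
    \[ (1-z)\,u + z\,s = x, \qquad (1-z)\,v + z\,t = y, \]
    % i.e. they form a 2-D affine subspace of the 4-D ray space -- the
    % "geometry hyperline" associated with P.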
Selective quality rendering by exploiting human inattentional blindness: looking but not seeing (pp. 17-24)
  Kirsten Cater; Alan Chalmers; Patrick Ledda
There are two major influences on human visual attention: bottom-up and top-down processing. Bottom-up processing is the automatic direction of gaze to lively or colourful objects as determined by low-level vision. In contrast, top-down processing is consciously directed attention in the pursuit of predetermined goals or tasks. Previous work in perception-based rendering has exploited bottom-up visual attention to control detail (and therefore time) spent on rendering parts of a scene. In this paper, we demonstrate the principle of Inattentional Blindness, a major side effect of top-down processing, where portions of the scene unrelated to the specific task go unnoticed. In our experiment, we showed a pair of animations rendered at different quality levels to 160 subjects, and then asked if they noticed a change. We instructed half the subjects to simply watch our animation, while the other half performed a specific task during the animation.
   When parts of the scene, outside the focus of this task, were rendered at lower quality, almost none of the task-directed subjects noticed, whereas the difference was clearly visible to the control group. Our results clearly show that top-down visual processing can be exploited to reduce rendering times substantially without compromising perceived visual quality in interactive tasks.
Keywords: human visual perception, image quality, inattentional blindness, interactive rendering of dynamic scenes, task related realistic rendering
Rendering of virtual environments based on polygonal & point-based models (pp. 25-32)
  Wenting Zheng; Hanqiu Sun; Hujun Bao; Qunsheng Peng
Real-time rendering of large-scale, complex dynamic virtual scenes is a challenging problem in computer graphics. In this paper, we propose a hybrid rendering algorithm for dynamic virtual environments that seamlessly fuses point-based and polygon-based schemes. In our algorithm, the scene is organized into a BSP tree. Objects in the leaf nodes of the BSP tree are further subdivided into a quad-tree hierarchy, which contains both sample points and polygon rendering information at each level. The accelerated rendering algorithm integrates the hierarchical occlusion map, image caching, and BSP techniques to rapidly render complex dynamic scenes. During navigation, our system adaptively determines the rendering mode and the level of detail of objects, and achieves smooth transitions between the two rendering modes by effectively controlling the rendering precision. Dynamic objects are processed uniformly in the system. Our experimental results demonstrate the satisfactory performance of the proposed hybrid rendering scheme for dynamic virtual environments.
Keywords: BSP tree, dynamic virtual environments, hierarchical occlusion map (HOM), point-based rendering
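A minimal sketch of the per-node mode decision such hybrid schemes make (hypothetical quad-tree node API; the paper's precision control is more elaborate):
    # Render a quad-tree node with splats when its screen footprint is
    # small, with polygons when it is a close-up leaf, and recurse otherwise.
    def render_node(node, camera, pixel_threshold):
        if node.projected_size(camera) < pixel_threshold:
            node.render_splats()        # small on screen: point samples suffice
        elif node.is_leaf():
            node.render_polygons()      # close up: full polygonal geometry
        else:
            for child in node.children:
                render_node(child, camera, pixel_threshold)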

Modeling/simulation

Template-based generation of road networks for virtual city modeling (pp. 33-40)
  Jing Sun; Xiaobo Yu; George Baciu; Mark Green
In modern urban areas, we often find a transportation network that follows a superimposed pattern. In this paper, we propose a novel method to generate a virtual traffic network based on (1) image-derived templates and (2) a rule-based generating system. Using 2D images as input maps, road maps with various patterns can be produced. The traffic network generating model adjusts itself intelligently to avoid restricted geographical areas or urban developments. The generative model closely follows elevation directions and connects road ends in ways that allow various types of breakpoints.
Keywords: 3D modeling, GIS, urban synthesis, virtual reality
Modeling virtual object behavior within virtual environment (pp. 41-48)
  Gun A. Lee; Gerard Jounghyun Kim; Chan-Mo Park
Development of virtual reality systems requires iterations of specification, implementation, and evaluation. Since correct evaluation of immersive VR systems requires the tedious process of wearing many devices, there exist both temporal and spatial gaps between the implementation and evaluation stages, which usually causes delay and inefficiency in the development process. To overcome these gaps, there have been several approaches to constructing or modeling the physical aspects of the virtual world (or objects) within the virtual environment itself. However, modeling their behaviors is still carried out in conventional (2D) programming environments.
   This paper proposes an interaction model and interfaces for specifying (and modifying) object behavior within the virtual environment, based on an underlying virtual object model. The interaction model follows the concept of programming by demonstration; building on it, we have built a system called PiP (Programming virtual object behavior in virtual reality Program), "in" which a user can create, modify, test, and save object behaviors. We illustrate examples of interactive virtual worlds constructed using PiP, and discuss its merits and shortcomings as a content development platform.
Keywords: 3D interaction, interactive behavior modeling, programming by demonstration, virtual environment, virtual object
Complex deformable objects in virtual reality (pp. 49-56)
  Young-Min Kang; Hwan-Gue Cho
In this paper, we present a real-time animation technique for deformable objects based on mass-spring models in virtual reality environments. Many researchers have proposed techniques for representing the motion and appearance of deformable objects, but animating them in virtual reality environments remains a hard problem. One of the most intensively studied deformable objects is virtual cloth. The difficulties in cloth animation lie mainly in the fact that cloth simulation easily becomes unstable. Although the implicit method can make the simulation stable [Baraff98], it is still impossible to generate interactive animation when the number of mass points is sufficiently large to represent a realistic appearance. There have been a few efficient solutions for real-time animation [Desbrun99, Kang01, Oshita01]; however, these previous techniques are not capable of generating plausible motion and appearance because they are based on excessive approximations. This paper proposes an efficient technique that generates realistic animation of complex cloth objects and makes it possible to integrate realistic deformable objects into virtual reality systems without violating the interactivity of the system. The proposed method is based on implicit integration in order to keep the system stable, and the solution of the linear system involved in the implicit integration is efficiently approximated in real time.
Keywords: cloth animation, real-time animation, virtual reality
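The stability claim rests on backward Euler integration; the linear system in question, in the standard form of [Baraff98], is the one the paper approximates in real time:
    % One implicit step of size h for positions x, velocities v, mass
    % matrix M, and forces f with Jacobians df/dx and df/dv:
    \[
      \Bigl(M - h\,\frac{\partial f}{\partial v} - h^2\,\frac{\partial f}{\partial x}\Bigr)\,\Delta v
      = h\Bigl(f_0 + h\,\frac{\partial f}{\partial x}\,v_0\Bigr),
      \qquad
      x_{t+h} = x_t + h\,(v_t + \Delta v).
    \]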

Applications

Application and taxonomy of through-the-lens techniques (pp. 57-64)
  Stanislav L. Stoev; Dieter Schmalstieg
In this work, we present a set of tools based on the through-the-lens metaphor. This metaphor enables simultaneous exploration of a virtual world from two different viewpoints: one displays the surrounding environment and represents the user, while the other is interactively manipulated, with the resulting images displayed in a dedicated window. We discuss in detail the various states of the two viewpoints and the two synthetic worlds, introducing a taxonomy for their relationship to each other. We also elaborate on navigation with the through-the-lens concept, extending the ideas behind known tools. Furthermore, we present a new remote object manipulation technique based on the through-the-lens concept.
Keywords: data manipulation, human-computer interface, interaction, interaction techniques, virtual environment interaction, virtual reality, visualization techniques
Spatialized audio rendering for immersive virtual environments (pp. 65-72)
  Martin Naef; Oliver Staadt; Markus Gross
We present a spatialized audio rendering system for use in immersive virtual environments. The system is optimized for rendering a sufficient number of dynamically moving sound sources in multi-speaker environments using off-the-shelf audio hardware. Based on simplified physics-based models, we achieve a good trade-off between audio quality, spatial precision, and performance. Convincing acoustic room simulation is accomplished by integrating standard hardware reverberation devices as used in the professional audio and broadcast community. We elaborate on important design principles for audio rendering as well as on practical implementation issues, and describe the integration of the audio rendering pipeline into a scene-graph-based virtual reality toolkit.
Keywords: 3D audio, spatially immersive display, virtual reality
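As a rough illustration of what a simplified physics-based model means for multi-speaker output, here is a sketch of per-speaker gain computation; the scheme is hypothetical, not the authors' pipeline.
    import math

    # Gain per speaker = inverse-distance attenuation times a directional
    # weight from the dot product of source direction and speaker direction.
    def speaker_gains(source_pos, listener_pos, speaker_dirs):
        delta = [s - l for s, l in zip(source_pos, listener_pos)]
        dist = math.sqrt(sum(d * d for d in delta)) or 1e-6
        direction = [d / dist for d in delta]
        attenuation = 1.0 / max(dist, 1.0)   # clamp so nearby sources don't blow up
        return [attenuation * max(sum(a * b for a, b in zip(direction, sd)), 0.0)
                for sd in speaker_dirs]      # speaker_dirs: unit vectors to speakers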
Tour into the video: image-based navigation scheme for video sequences of dynamic scenes (pp. 73-80)
  Hyung Woo Kang; Sung Yong Shin
Tour Into the Picture (TIP) is a method for generating a sequence of walk-through images from a single reference image. By navigating a 3D scene model constructed from the image, TIP provides convincing 3D effects. This paper presents a comprehensive scheme for creating walk-through images from a video sequence by generalizing the idea of TIP. The purpose of this work is to let users experience the feel of navigating into a video sequence with their own interpretation and imagination of a given scene. To generate images from new viewpoints, we first extract the background and foreground information from the video, and then exploit the notion of a vanishing circle to construct a 3D scene model. The proposed scheme covers various types of video footage of dynamic scenes, such as sports coverage, cartoon animation, and movies, in which objects continuously change their shapes and locations. It can also be used to produce a variety of synthetic video sequences by importing and merging dynamic foreign objects with the original video.
Keywords: animation, image-based rendering, video sequence

Interaction

Real-time haptic sculpting in virtual volume space (pp. 81-88)
  Hui Chen; Hanqiu Sun
Virtual sculpture is a modeling technique for computer graphics based on the notion of sculpting a solid material with tools. Currently, most interactive sculpting focuses mainly on the visual sensory channel, yet with visual feedback alone virtual sculpture cannot simulate realistic sculpting operations in the physical world. The sense of touch, in combination with our kinesthetic sense, adds a new modality to virtual sculpture, especially in presenting complex geometry and material properties. In this paper, we propose a virtual haptic sculpting (VHS) system in volume space, which supports real-time melting, burning, stamping, painting, constructing, and peeling interactions. Based on the constructive volume methodology, we have developed sculpting tools as volumes, each with its own properties, size, and element distribution, together with rules for the interaction between the volumetric data and the tools. The sculpting tools are controlled directly by 6-DOF haptic input to simulate realistic sculpting operations, applying the computed model and tool dynamics while interacting with the volume. Both synthetic volumetric data and medical scan volumes have been tested using the 6-DOF PHANToM Desktop haptic interface.
Keywords: haptic interaction, virtual reality, virtual sculpture, volume rendering
Implementing flexible rules of interaction for object manipulation in cluttered virtual environments (pp. 89-96)
  Roy A. Ruddle; Justin C. D. Savage; Dylan M. Jones
Object manipulation in cluttered virtual environments (VEs) brings additional challenges to the design of interaction algorithms, compared with open virtual spaces. As the complexity of the algorithms increases, so does the flexibility with which users can interact, but at the expense of much greater implementation difficulty for developers. Three rules that increase the realism and flexibility of interaction are outlined: collision response, order of control, and physical compatibility. The implementation of each is described, highlighting the substantial increase in algorithm complexity that arises. Data are reported from an experiment in which participants manipulated a bulky virtual object through parts of a virtual building (the piano movers' problem). These data illustrate the benefits to users that accrue from implementing flexible rules of interaction.
Keywords: object manipulation, rules of interaction, virtual environments
DEMIS: a dynamic event model for interactive systems (pp. 97-104)
  Hua Jiang; G. Drew Kessler; Jean Nonnemaker
Modern interaction systems are usually event-driven. New input devices often require new event types, and handling input from the user becomes increasingly complex. The WIMP (Windows, Icons, Menus, Pointer) paradigm widely used today is frequently unsuitable for interactive applications, such as virtual reality applications, that use more than the standard mouse and keyboard input devices.
   In this paper, we present the design and implementation of the Dynamic Event Model for Interactive Systems (DEMIS). DEMIS is middleware between the operating system and the application that supports various input device events while using generic event recognition to detect composite events.
Keywords: composite events, event recognition, human-computer interaction, input devices
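A toy sketch of composite-event detection in the spirit the abstract describes; the API and event names are hypothetical, not the DEMIS interface.
    import time

    # A composite event fires once every component event type has been
    # observed within a sliding time window.
    class CompositeEvent:
        def __init__(self, components, window=0.5):
            self.components = set(components)
            self.window = window
            self.last_seen = {}                  # event type -> timestamp

        def feed(self, event_type, timestamp=None):
            now = time.time() if timestamp is None else timestamp
            self.last_seen[event_type] = now
            return all(now - self.last_seen.get(c, float("-inf")) <= self.window
                       for c in self.components)

    # Example: a "pinch" composed of two button events arriving close together.
    pinch = CompositeEvent({"thumb_down", "index_down"}, window=0.2)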
Towards intuitive exploration tools for data visualization in VR (pp. 105-112)
  Gerwin de Haan; Michal Koutek; Frits H. Post
In this paper we present a basic set of intuitive exploration tools for data visualization in a Virtual Environment on the Responsive Workbench. First, we introduce the Plexipad, a transparent acrylic panel that allows two-handed interaction in combination with a stylus. After describing various interaction scenarios with these two devices, we present a set of interaction tools that support the user in exploring volumetric datasets. Besides tools for navigation and selection, we present tools that are closely coupled with interactive probing, used as input for complex visualization tools and for performing virtual measurements. We illustrate the use of our tools in two applications from different research areas that use volumetric and particle data.
Keywords: data exploration, two-handed interaction, user interface, virtual reality, visualization

Collision detection

LARGE: a collision detection framework for deformable objects (pp. 113-120)
  Rynson W. H. Lau; Oliver Chan; Mo Luk; Frederick W. B. Li
Many collision detection methods have been proposed, but most can only be applied to rigid objects. In general, these methods precompute some geometric information for each object, such as bounding boxes, to be used for run-time collision detection. However, if an object deforms, the precomputed information may no longer be valid and hence needs to be recomputed in every frame while the object is deforming. In this paper, we present an efficient collision detection framework for deformable objects, which considers both inter-collisions and self-collisions of deformable objects modeled by NURBS surfaces. Towards the end of the paper, we show experimental results to demonstrate the performance of the new method.
Keywords: NURBS surfaces, collision detection, deformable objects, interference detection
Minimal hierarchical collision detection (pp. 121-128)
  Gabriel Zachmann
We present a novel bounding volume hierarchy that allows for extremely small data structure sizes while still performing collision detection as fast as other classical hierarchical algorithms in most cases. The hierarchical data structure is a variation of axis-aligned bounding box trees. In addition to being very memory efficient, it can also be constructed quickly.
   We also propose a criterion, to be used during construction of the BV hierarchies, that is more formally established than previous heuristics. The idea of the argument is general and can be applied to other bounding volume hierarchies as well. Furthermore, we describe a general optimization technique that can be applied to most hierarchical collision detection algorithms.
   Finally, we describe several box overlap tests that exploit the special features of our new BV hierarchy. These are compared experimentally among each other and with the DOP tree using a benchmark suite of real-world CAD data.
Keywords: R-trees, hierarchical data structures, hierarchical partitioning, interference detection, physically-based modeling, virtual prototyping
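For orientation, the simultaneous traversal into which any AABB-tree variant (including this compact one) plugs, sketched with hypothetical accessors:
    # Recurse into the pair of hierarchies, descending the larger box,
    # and collect leaf-leaf candidate pairs for exact triangle tests.
    def collide(a, b, pairs):
        if not a.box.overlaps(b.box):
            return                              # prune disjoint subtrees
        if a.is_leaf() and b.is_leaf():
            pairs.append((a.triangle, b.triangle))
        elif a.is_leaf():
            for child in b.children:
                collide(a, child, pairs)
        elif b.is_leaf() or a.box.volume() > b.box.volume():
            for child in a.children:
                collide(child, b, pairs)
        else:
            for child in b.children:
                collide(a, child, pairs)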
Hardware-assisted self-collision for deformable surfaces (pp. 129-136)
  George Baciu; Wingo Sai-Keung Wong
The natural behavior of garments and textile materials in the presence of changing object states is potentially the most computationally demanding task in a dynamic 3D virtual environment. Cloth materials are highly deformable, inducing a very large number of contact points or regions with other objects. In a natural environment, cloth objects often interact with themselves, generating a large number of self-collision areas. The interactive requirements of 3D games and physically driven virtual environments make cloth collision and self-collision computation even more challenging. By exploiting mathematically well-defined smoothness conditions over small patches of deformable surfaces and resorting to image-based collision detection tests, we have developed an efficient collision detection method that achieves interactive rates while tracking self-interactions in highly deformable surfaces consisting of more than 50,000 elements. The method makes use of a novel technique for dynamically generating a hierarchy of cloth bounding boxes in order to perform object-level culling and image-based intersection tests using conventional graphics hardware.
Keywords: cloth simulation, collision detection, deformable surfaces, graphics hardware

Graphics/image-based algorithms

The randomized sample tree: a data structure for interactive walkthroughs in externally stored virtual environments (pp. 137-146)
  Jan Klein; Jens Krokowski; Matthias Fischer; Michael Wand; Rolf Wanka; Friedhelm Meyer auf der Heide
We present a new data structure for rendering highly complex virtual environments of arbitrary topology. The special feature of our approach is that it allows interactive navigation in very large scenes (30 GB/400 million polygons in our benchmark scenes) that cannot be stored in main memory, but only on a local or remote hard disk. Furthermore, it allows interactive rendering of substantially more complex scenes by instantiating objects.
   For the computation of an approximate image of the scene, a sampling technique is used. In the preprocessing, a so-called sample tree is built whose nodes contain randomly selected polygons from the scene. This tree only uses space that is linear in the number of polygons. In order to produce an image of the scene, the tree is traversed and polygons stored in the visited nodes are rendered. During the interactive walkthrough, parts of the sample tree are loaded from local or remote hard disk.
   We implemented our algorithm in a prototypical walkthrough system. Analysis and experiments show that the quality of our images is comparable to images computed by the conventional z-buffer algorithm regardless of the scene topology.
Keywords: Monte Carlo techniques, level of detail algorithms, out-of-core rendering, point sample rendering, rendering systems, spatial data structures
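A toy rendition of the sample-tree idea; the structure is hypothetical, and the real construction splits spatially and bounds space linearly in the polygon count.
    import random

    # Each node keeps a random subset of the polygons in its subtree.
    def build_sample_tree(polygons, capacity=256):
        node = {"samples": random.sample(polygons, min(capacity, len(polygons))),
                "children": []}
        if len(polygons) > capacity:
            half = len(polygons) // 2            # stand-in for a spatial split
            node["children"] = [build_sample_tree(polygons[:half], capacity),
                                build_sample_tree(polygons[half:], capacity)]
        return node

    # Traversal renders each visited node's samples and refines on demand,
    # e.g. while the node's projected size exceeds a pixel threshold.
    def collect_samples(node, needs_refinement):
        out = list(node["samples"])
        if node["children"] and needs_refinement(node):
            for child in node["children"]:
                out += collect_samples(child, needs_refinement)
        return out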
LAM: luminance attenuation map for photometric uniformity in projection based displays (pp. 147-154)
  Aditi Majumder; Rick Stevens
Large-area multi-projector displays show significant spatial variation in color, both within a single projector's field of view and across different projectors. Recent research in this area has shown that the color variation is primarily due to luminance variation. Luminance varies within a single projector's field of view, across different brands of projectors, and with the variation in projector parameters. Luminance variation is also introduced by overlap between adjacent projectors. On the other hand, chrominance remains constant throughout a projector's field of view and varies little with changes in projector parameters, especially for projectors of the same brand. Hence, matching the luminance response of all the pixels of a multi-projector display should help us to achieve photometric uniformity.
   In this paper, we present a method to do a per channel per pixel luminance matching. Our method consists of a one-time calibration procedure when a luminance attenuation map (LAM) is generated. This LAM is then used to correct any image to achieve photometric uniformity. In the one-time calibration step, we first use a camera to measure the per channel luminance response of a multi-projector display and find the pixel with the most "limited" luminance response. Then, for each projector, we generate a per channel LAM that assigns a weight to every pixel of the projector to scale the luminance response of that pixel to match with the most limited response. This LAM is then used to attenuate any image projected by the projector.
   This method can be extended to perform the image correction in real time on the traditional graphics pipeline by using alpha blending and color look-up tables. To the best of our knowledge, this is the first effort to match luminance across all the pixels of a multi-projector display. Our results show that luminance matching can indeed achieve photometric uniformity.
Keywords: color calibration, color uniformity, projection based displays, tiled displays
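The per-pixel correction the abstract describes reduces, for one channel, to a simple scale map; a minimal numpy sketch under that reading (camera measurement and registration are assumed already done):
    import numpy as np

    # Build the LAM: scale every pixel's luminance response down to the
    # most limited response found anywhere on the display.
    def build_lam(luminance):                    # 2-D array measured by camera
        most_limited = luminance.min()
        return most_limited / np.maximum(luminance, 1e-6)

    def apply_lam(channel_image, lam):           # attenuate one color channel
        return channel_image * lam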
The global occlusion map: a new occlusion culling approach (pp. 155-162)
  Wei Hua; Hujun Bao; Qunsheng Peng; A. R. Forrest
Occlusion culling is an important technique to speed up the rendering process for walkthroughs in a complex environment. In this paper, we present a new approach for occlusion culling with respect to a view cell. A compact representation, the Global Occlusion Map (GOM), is proposed for storing the global visibility information of general 3D models with respect to the view cell. The GOM provides a collection of Directional Visibility Barriers (DVBs), virtual occluding planes aligned with the main axes of the world coordinates that act as occluders to reject invisible objects lying behind them in every direction from a view cell. Since the GOM is a two-dimensional array, its size is bounded, depending only on the number of sampled viewing directions. Furthermore, it is easy to conservatively compress the GOM by treating it as a depth image. Due to the axial orientations of the DVBs, both the computational and storage costs for occlusion culling based on the GOM are minimized. Our implementation shows that the Global Occlusion Map is effective and efficient in urban walkthrough applications.
Keywords: global visibility, occlusion culling, potentially visible set, rendering system, visibility culling

Distributed/collaborative virtual environments

A multi-server architecture for distributed virtual walkthrough (pp. 163-170)
  Beatrice Ng; Antonio Si; Rynson W. H. Lau; Frederick W. B. Li
CyberWalk is a distributed virtual walkthrough system that we have developed. It allows users at different geographical locations to share information and interact within a common virtual environment (VE) via a local network or the Internet. In this paper, we show that as the number of users exploring the VE increases, a single server quickly becomes the bottleneck. To maintain good performance, CyberWalk utilizes multiple servers and employs an adaptive data partitioning technique to dynamically partition the whole VE into regions. All objects within each region are managed by one server. Under normal circumstances, when a viewer is exploring a region, the server of that region is responsible for serving all requests from the viewer. When a viewer crosses the boundary of two or more regions, the servers of all the regions involved serve requests from the viewer, since the viewer might be able to view objects within all those regions. We evaluate the performance of this multi-server architecture via a detailed simulation model.
Keywords: data partition and replication, distributed virtual environments, multi-server architecture
Cooperative object manipulation in immersive virtual environments: framework and techniques (pp. 171-178)
  Márcio S. Pinho; Doug A. Bowman; Carla M. D. S. Freitas
Cooperative manipulation refers to the simultaneous manipulation of a virtual object by multiple users in an immersive virtual environment. This paper describes a framework supporting the development of collaborative manipulation techniques, and example techniques we have tested within this framework. We describe the modeling of cooperative interaction techniques, methods of combining simultaneous user actions, and the awareness tools used to provide the necessary knowledge of partner activities during the cooperative interaction process. Our framework is based on a Collaborative Metaphor concept that defines rules to combine user interaction techniques. The combination is based on the separation of degrees of freedom between two users. Finally, we present novel combinations of two interaction techniques (Simple Virtual Hand and Ray-casting).
Keywords: cooperative interaction, interaction in virtual environments
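A minimal sketch of the degree-of-freedom separation mentioned above, with one partner supplying translation and the other rotation; the pose format (position plus quaternion as (w, x, y, z)) is assumed for illustration.
    # Compose the two users' per-frame contributions into one object pose.
    def quat_mul(q, r):
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def combine(pose, translation_delta, rotation_delta):
        position, orientation = pose
        new_position = tuple(p + d for p, d in zip(position, translation_delta))
        new_orientation = quat_mul(rotation_delta, orientation)
        return (new_position, new_orientation)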
Deployment issues for multi-user audio support in CVEs (pp. 179-185)
  Milena Radenkovic; Chris Greenhalgh; Steve Benford
We describe an audio service for CVEs, designed to support many people speaking simultaneously and to operate across the Internet. Our service exploits a technique called Distributed Partial Mixing (DPM) to adapt dynamically to varying numbers of speakers and to network congestion. Our DPM implementation dynamically manages the trade-off between congestion and audio quality, compared with pure peer-to-peer forwarding and total mixing, in a way that is fair to the TCP protocol and so operates as a "good Internet citizen". This paper focuses on the large-scale deployment of DPM over wide area networks. In particular, we raise and examine the issues that arise when deploying DPM in large dynamic environments, and argue that the DPM paradigm remains feasible and desirable in such environments.
Keywords: CVEs, real-time audio, simultaneous speakers
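The core DPM trade-off can be caricatured in a few lines; the frame model (equal-length lists of samples per active stream) and the priority policy are assumptions for illustration only.
    # Forward as many streams as congestion allows; mix the remainder into
    # one stream so total bandwidth stays within budget.
    def partial_mix(streams, max_streams):
        if len(streams) <= max_streams:
            return streams                       # no congestion: forward all
        forwarded = streams[:max_streams - 1]    # e.g. highest-priority voices
        mixed = [sum(samples) for samples in zip(*streams[max_streams - 1:])]
        return forwarded + [mixed]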

Hybrid VR

Placing three-dimensional models in an uncalibrated single image of an architectural scene (pp. 186-193)
  Sara Keren; Ilan Shimshoni; Ayellet Tal
This paper discusses the problem of inserting three-dimensional models into a single image. The main focus is on accurate recovery of the camera's parameters, so that 3D models can be inserted at the "correct" position and orientation. An important aspect of the paper is a theoretical and experimental analysis of the errors. We also implemented a system that "plants" virtual 3D objects in the image, and tested it on many indoor augmented reality scenes. Our analysis and experiments show that errors in the placement of the objects are unnoticeable.
Keywords: architectural scenes, augmented reality, camera calibration
Real space-based virtual studio: seamless synthesis of a real set image with a virtual set image (pp. 194-200)
  Yuko Yamanouchi; Hideki Mitsumine; Takashi Fukaya; Masahiro Kawakita; Nobuyuki Yagi; Seiki Inoue
When making a TV program in a studio, care must be taken that the camera does not shoot beyond the boundary of the studio set. In addition, limitations in cost and space for the set must be taken into account. In a virtual studio, on the other hand, we can solve the cost and space problem, but in turn actors must perform in front of a blue background screen, which is not always an easy task for them. To solve these problems associated with real and virtual studios, we have developed a new type of virtual studio called the Real Space-based Virtual Studio, in which a real space image and a virtual space image are combined naturally, with no boundary seam. There are two major advantages to this virtual-real hybrid system: actors can concentrate on their roles in the real studio sets, and camera work can be done without worrying about off-screen areas of the set. In the present study, we constructed an omnidirectional image with ultra-high-definition features and combined it, as a virtual studio image, with a real studio image. We developed an integration system, and experiments show that the omnidirectional images and the real studio images combine smoothly and naturally.
A manipulation environment of virtual and real objects using a magnetic metaphor (pp. 201-207)
  Yoshifumi Kitamura; Susumu Ogata; Fumio Kishino
This paper describes a method for the consolidated manipulation of virtual and real objects using a "magnetic metaphor". The method reduces the behavioral differences between virtual and real objects: a limited number of physical laws are selected and simulated for the virtual objects, while at the same time limitations are placed on the physical laws for the real objects. Accordingly, a compromise can be found between the physical laws that operate on virtual objects and those that operate on real objects. A system with this method thus enables a user to manipulate virtual and real objects in a similar manner, expecting the same responses and behaviors according to the same physical laws. Experimental results show that the proposed method improves task performance when manipulating virtual and real objects existing simultaneously in the same environment.
Keywords: augmented reality, haptics, magnetic metaphor, mixed reality, object manipulation, virtual reality, visual simulation