
# Proceedings of the 1999 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'99 ACM Symposium on Virtual Reality Software and Technology
Editors: Don Brutzman; Heedong Ko; Mel Slater
Location: London, United Kingdom
Dates: 1999-Dec-20 to 1999-Dec-22
ACM ISBN: 1-58113-141-0; ACM DL: Table of Contents; hcibib: VRST99 (34 entries, 194 pages)
 The HiBall Tracker: high-performance wide-area tracking for virtual and augmented environments | BIBAK | Full-Text 1-ff Greg Welch; Gary Bishop; Leandra Vicci; Stephen Brumback; Kurtis Keller; D'nardo Colucci Our HiBall Tracking System generates over 2000 head-pose estimates per second with less than one millisecond of latency, and less than 0.5 millimeters and 0.02 degrees of position and orientation noise, everywhere in a 4.5 by 8.5 meter room. The system is remarkably responsive and robust, enabling VR applications and experiments that previously would have been difficult or even impossible.    Previously we published descriptions of only the Kalman filter-based software approach that we call Single-Constraint-at-a-Time tracking. In this paper we describe the complete tracking system, including the novel optical, mechanical, electrical, and algorithmic aspects that enable the unparalleled performance.Keywords: Kalman filter, autocalibration, calibration, delay, latency, optical sensor, sensor fusion, tracking, virtual environments
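The single-constraint-at-a-time (SCAAT) idea named in the HiBall entry above, fusing one scalar measurement per Kalman update instead of waiting for a complete pose observation, can be sketched in one dimension. This is an illustrative toy only; the function names and noise values are invented, not from the paper.

```python
# Illustrative 1-D sketch of single-constraint-at-a-time (SCAAT) filtering:
# each scalar sensor reading triggers its own Kalman update rather than
# waiting for a full pose measurement. All values are invented.

def scaat_update(x, p, z, r):
    """One Kalman update from a single scalar constraint z (variance r)."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # state corrected toward the measurement
    p = (1.0 - k) * p        # state uncertainty shrinks after the update
    return x, p

def track(readings, x0=0.0, p0=1.0, q=0.01, r=0.1):
    """Fold a stream of individual readings into a running state estimate."""
    x, p = x0, p0
    for z in readings:
        p += q               # process noise: uncertainty grows between updates
        x, p = scaat_update(x, p, z, r)
    return x, p

x, p = track([1.0, 1.1, 0.9, 1.05])
```

Each reading tightens the estimate a little, which is what lets a SCAAT tracker produce thousands of incremental pose updates per second.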
 Reality portals | BIBAK | Full-Text 11-18 Karl-Petter Akesson; Kristian Simsarian Through interactive augmented virtuality we provide the ability to interactively explore a remote space inside a virtual environment. This paper presents a tool and technique that can be used to create such virtual worlds that are augmented by video textures taken of real world objects. The system constructs and updates, in near real-time, a representation of the user-defined salient and relevant features of the real world. This technique has the advantage of constructing a virtual world that contains the relevant video-data of the real world, while maintaining the flexibility of a virtual world. The virtual-real world representation is not dependent on physical location and can be manipulated in a way not subject to the temporal, spatial, and physical constraints found in the real world. Another advantage is that spatializing the video-data may afford more intuitive examination.Keywords: augmented virtuality, collaborative virtual environments, environment visualization, teleoperation, video textures
 A video-based virtual reality system | BIBA | Full-Text 19-25 Haruo Takeda; Masami Yamasaki; Toshio Moriya; Tsuyoshi Minakawa; Fumiko Beniyama; Takafumi Koike We introduce a new environment for making and playing interactive content of more than video-game quality. The system consists of a projector array, a viewer and an editor for the special contents. The projector array projects multiple digital images seamlessly in both time and space, yielding a very high quality video projection system. The viewer features a function to composite passive video and interactive CG in real time. The editor is a high-end non-linear editing system combined with new plug-in software to pre-compute the information necessary for real-time compositing. A new method of digital image recognition assisted by human operators is used. Unlike general-purpose computer vision algorithms, it minimizes the error of 3D estimation at the compositing position. We call this approach V2R, or video-based virtual reality. It allows the operator to experience interactive communication with objects in very high quality video.
 Testbed evaluation of virtual environment interaction techniques | BIBA | Full-Text 26-33 Doug A. Bowman; Donald B. Johnson; Larry F. Hodges As immersive virtual environment (VE) applications become more complex, it is clear that we need a firm understanding of the principles of VE interaction. In particular, designers need guidance in choosing three-dimensional interaction techniques. In this paper, we present a systematic approach, testbed evaluation, for the assessment of interaction techniques for VEs. Testbed evaluation uses formal frameworks and formal experiments with multiple independent and dependent variables in order to obtain a wide range of performance data for VE interaction techniques. We present two testbed experiments, covering techniques for the common VE tasks of travel and object selection/manipulation. The results of these experiments allow us to form general guidelines for VE interaction, and to provide an empirical basis for choosing interaction techniques in VE applications. This has been shown to produce measurable usability gains in a real-world VE application.
 Patterns of network and user activity in an inhabited television event | BIBAK | Full-Text 34-41 Chris Greenhalgh; Steve Benford; Mike Craven Inhabited Television takes traditional broadcast television and combines it with multiuser virtual reality, to give new possibilities for interaction and participation in and around shows or channels. "Out Of This World" was an experimental inhabited TV show, staged in Manchester, in September 1998, using the MASSIVE-2 system. During this event we captured comprehensive records of network traffic, and additional logs of user activity (in particular movement and speaking). In this paper we present the results of our analyses of network and user activity in these shows. We contrast our results with those obtained from previous analyses of teleconferencing-style scenarios. We find that the inhabited television scenario results in much higher levels of user activity, and significant bursts of coordinated activity. We show how these characteristics must be taken into account when designing a system and infrastructure for applications of this kind. In particular, it is clear that any notion of strict turn-taking (and associated assumptions about resource sharing) is completely unfounded in this domain. We also show that the concept of "levels of participation" is a powerful tool for understanding and managing the bandwidth-requirements of an inhabited television event.Keywords: CVE, VR, inhabited television, network analysis, user behaviour
 Coping with inconsistency due to network delays in collaborative virtual environments | BIBAK | Full-Text 42-49 Ivan Vaghi; Chris Greenhalgh; Steve Benford Collaborative Virtual Environments (CVEs) are shared virtual spaces designed to enhance collaboration between the -- usually remote -- participants. The deployment of Collaborative Virtual Environments over wide area networks increases typical network delays, potentially breaking the consistency between the replicated versions of an environment at the participants' sites. This paper presents our qualitative observations of an experiment involving two players engaged in a virtual ball game in the presence of increasing network delays. It also describes how network delay affected the participants' behaviour and produced collaboration breakdowns. We observed that, as the network delay increases, the users modify their playing strategies in an attempt to cope with the situation, presenting several types of adaptation strategy. Knowledge of the presence and effect of delays is a major factor in allowing users to adopt strategies for coping with inconsistencies. We propose that if the participants were made more aware of the behaviour of the system, e.g. the presence of delays, then they might be able to improve their performance. Consequently, we propose a number of techniques to increase the user's knowledge of infrastructural characteristics such as delay.Keywords: CVEs, collaborative virtual environments, consistency, distributed systems, network delay, perception of delay, transparency, user interfaces
 The London Travel Demonstrator | BIBAK | Full-Text 50-57 Anthony Steed; Emmanuel Frécon; Anneli Avatare; Duncan Pemberton; Gareth Smith Travel can be a stressful experience and it is an activity that is difficult to prepare for in advance. Although maps, routes and landmarks can be memorised, travellers do not get much sense of the spatial layout of the destination and can easily get confused when they arrive. There is little doubt that virtual environment techniques can assist in such situations, by, for example, providing walkthroughs of virtual cityscapes to effect route learning.    The London Travel Demonstrator supports travellers by providing an environment where they can explore London, utilise group collaboration facilities, rehearse particular journeys and access tourist information data. These services are built on the Distributed Interactive Virtual Environment (DIVE) software from SICS. In this paper we describe how the application was built, how it exploits the underlying collaboration services, and how the platform provides for scalability both in terms of the large extent and detail of this application and in the number of participants it can support.Keywords: collaborative virtual environments, large-model support, real-time rendering, travel applications
 The DiveBone -- an application-level network architecture for Internet-based CVEs | BIBAK | Full-Text 58-65 Emmanuel Frécon; Chris Greenhalgh; Mårten Stenius To allow the number of simultaneous participants and applications to grow, many Collaborative Virtual Environment (CVE) platforms are combining ideas such as loose consistency, absence of central servers and world sub-partitioning with IP multicasting. For long distance connections, most of these systems rely on the existence of the Internet multicast backbone -- the MBone. However, its generality and complexity are often an obstacle to the establishment and testing of large-scale CVEs. This paper presents the DIVEBONE, an application-level network architecture built as a stand-alone part of the DIVE toolkit [5]. The DIVEBONE is an application-level backbone that can interconnect sub-islands with multicast connectivity and/or single local networks. Furthermore, the DIVEBONE allows for visual analysis of the connection architecture and network traffic and for remote maintenance operations. The DIVEBONE capabilities have been demonstrated and successfully used in a series of large-scale pan-European tests over the Internet, as well as in various experiments involving IP over ISDN and ATM. All trials have proven the qualitative and quantitative adequacy of the DIVEBONE in heterogeneous settings where multicast connectivity is otherwise limited.Keywords: CVE, Dive, MBone, VR, multi-user, multicast, network architecture
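The application-level backbone named in the DiveBone entry above boils down to a relay pattern: a bridge node tracks which "islands" belong to which group and forwards each message to every member except its sender. A toy sketch of that pattern follows; this is not the DIVEBONE code, and all names are invented for illustration.

```python
# Toy sketch of application-level multicast bridging: a bridge relays each
# group message to every attached "island" except the one it arrived from.
# Invented names; not the DIVEBONE implementation.

class Bridge:
    def __init__(self):
        self.members = {}            # group -> set of island ids

    def join(self, group, island):
        self.members.setdefault(group, set()).add(island)

    def relay(self, group, source, payload):
        """Return (island, payload) pairs to forward, excluding the sender."""
        return [(i, payload) for i in sorted(self.members.get(group, ()))
                if i != source]

b = Bridge()
b.join("world-1", "stockholm")
b.join("world-1", "nottingham")
b.join("world-1", "manchester")
out = b.relay("world-1", "stockholm", b"update")
```

Chaining such bridges over unicast tunnels is what lets islands without native multicast connectivity take part in a shared world.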
 GNU/MAVERIK: a micro-kernel for large-scale virtual environments | BIBAK | Full-Text 66-73 Roger Hubbold; Jon Cook; Martin Keates; Simon Gibson; Toby Howard; Alan Murta; Adrian West; Steve Pettifer This paper describes a publicly available virtual reality (VR) system, GNU/MAVERIK, which forms one component of a complete 'VR operating system'. We give an overview of the architecture of MAVERIK, and show how it is designed to use application data in an intelligent way, via a simple, yet powerful, callback mechanism which supports an object-oriented framework of classes, objects and methods. Examples are given which illustrate different uses of the system, and typical performance levels.
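The callback mechanism described in the GNU/MAVERIK entry above can be illustrated with a minimal kernel that stores application objects untouched and dispatches per-class callbacks over them. The class and method names here are invented for illustration; this is not the GNU/MAVERIK API.

```python
# Minimal sketch of a callback-driven micro-kernel: the kernel keeps the
# application's own data structures and invokes per-class callbacks
# (e.g. "draw") registered for each object type. Invented names.

class Kernel:
    def __init__(self):
        self.callbacks = {}          # (class, method name) -> function
        self.objects = []

    def set_callback(self, cls, name, fn):
        self.callbacks[(cls, name)] = fn

    def add(self, obj):
        self.objects.append(obj)

    def execute(self, name):
        """Dispatch the named operation over all objects via their callbacks."""
        return [self.callbacks[(type(o), name)](o) for o in self.objects]

class Sphere:
    def __init__(self, r):
        self.r = r

k = Kernel()
k.set_callback(Sphere, "draw", lambda s: f"sphere r={s.r}")
k.add(Sphere(2))
frames = k.execute("draw")
```

Because the kernel never imposes its own scene representation, applications can keep domain-specific data structures and still plug into common services such as rendering or culling.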
 Distributed Open Inventor: a practical approach to distributed 3D graphics | BIBAK | Full-Text 74-81 Gerd Hesina; Dieter Schmalstieg; Anton Fuhrmann; Werner Purgathofer Distributed Open Inventor is an extension to the popular Open Inventor toolkit for interactive 3D graphics. The toolkit is extended with the concept of a distributed shared scene graph, similar to distributed shared memory. From the application programmer's perspective, multiple workstations share a common scene graph. The proposed system introduces a convenient mechanism for writing distributed graphical applications based on a popular tool in an almost transparent manner. Local variations in the scene graph allow for a wide range of possible applications, and local low latency interaction mechanisms called input streams enable high performance while saving the programmer from network peculiarities.Keywords: computer supported cooperative work, concurrent programming, distributed graphics, distributed virtual environment, scene graph, virtual reality
 Navigating through sparse views | BIBA | Full-Text 82-87 Shachar Fleishman; Baoquan Chen; Arie Kaufman; Daniel Cohen-Or This paper presents an image-based walkthrough technique where reference images are sparsely sampled along a path. The technique relies on a simple user interface for rapid modeling. Simple meshes are drawn to model and represent the underlying scene in each of the reference images. The meshes, consisting of only a few polygons for each image, are then registered by drawing a single line on each image, called a model registration line, to form an aligned 3D model. To synthesize a novel view, two nearby reference images are mapped back onto their models by projective texture-mapping. Since the simple meshes are a crude approximation to the real model in the scene, image feature lines are drawn and used as aligning anchors to further register and blend two views together and form a final novel view. The simplicity of the technique yields rapid, "home-made" image-based walkthroughs. We have produced walkthroughs from a set of photographs to show the effectiveness of the technique.
 A method for progressive and selective transmission of multi-resolution models | BIBAK | Full-Text 88-95 Danny S. P. To; Rynson W. H. Lau; Mark Green Although many adaptive (or view-dependent) multi-resolution methods have been developed, support for progressive transmission and reconstruction has not been addressed. A major reason for this is that most of these methods require a large portion of the hierarchical data structure to be available at the client before rendering starts, due to the neighboring dependency constraints. In this paper, we present an efficient multi-resolution method that allows progressive and selective transmission of multi-resolution models. This is achieved by reducing the neighboring dependency to a minimum. The new method allows visually important parts of an object to be transmitted to the client at higher priority than the less important parts and progressively reconstructed there for display. We present the new method and discuss how it works in a client-server environment. We also show the data structure of the transmission record and some performance results of the method.
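The selective-transmission idea behind the entry above, sending refinement records for visually important regions before less important ones, reduces to ordering a stream by priority. A generic sketch follows; the record format is invented and is not the paper's data structure.

```python
# Sketch of priority-ordered progressive transmission: refinement records are
# sent most-important-first, so the client can show a coarse model early and
# refine it incrementally. Invented record format.

import heapq

def transmit(records):
    """records: (importance, record_id) pairs; yield ids, most important first."""
    heap = [(-imp, rid) for imp, rid in records]   # max-heap via negation
    heapq.heapify(heap)
    while heap:
        _, rid = heapq.heappop(heap)
        yield rid

order = list(transmit([(0.2, "base-b"), (0.9, "silhouette"), (0.5, "face-12")]))
```

The hard part the paper addresses is making each record decodable with minimal dependency on its neighbors, so that this ordering is actually free to follow visual importance.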
 A market model for level of detail control | BIBAK | Full-Text 96-103 J. Howell; A. Steed; M. Slater In virtual reality simulations the speed of rendering is vitally important. One of the techniques for controlling the frame rate is the assignment of different levels of detail for each object within a scene. The most well-known level of detail assignment algorithms are the Funkhouser [1] algorithm and the algorithm where the level of detail is assigned with respect to the distance of the object from the viewer.    We propose an algorithm based on an analogy to a market system, where each object does not have an assigned level of detail but instead owns a certain amount of time which it can use for rendering. The optimization of the levels of detail then becomes a simple trading process, in which objects holding more time than they need trade it away to objects that need extra time.    The new algorithm has been implemented to run on the DIVE [2] virtual environment system. This system was then used to perform experiments comparing the performance of the algorithm against the two other methods mentioned above.Keywords: DIVE, framerate, level of detail, rendering
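A toy version of the market-style level-of-detail control named in the entry above: each object owns a rendering-time budget, and surplus time flows from objects that have more than they need to objects that need extra. The budgets and needs below are invented example numbers, not from the paper.

```python
# Toy market-style LOD time allocation: surplus render time is traded from
# rich objects to needy ones. Invented example values.

def trade(budgets, needs):
    """Move surplus time (ms) from rich objects to needy ones; dicts: id -> ms."""
    surplus = {k: budgets[k] - needs[k] for k in budgets}
    rich = [k for k, s in surplus.items() if s > 0]
    poor = [k for k, s in surplus.items() if s < 0]
    for p in poor:
        for r in rich:
            give = min(surplus[r], -surplus[p])   # trade as much as both allow
            budgets[r] -= give
            budgets[p] += give
            surplus[r] -= give
            surplus[p] += give
            if surplus[p] == 0:
                break
    return budgets

b = trade({"car": 6, "tree": 2}, {"car": 4, "tree": 4})
```

After trading, each object's budget moves toward its need, which is the market analogy's replacement for a central LOD optimizer.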
 Levels of detail (LOD) engineering of VR objects | BIBAK | Full-Text 104-110 Jinseok Seo; Gerard Jounghyun Kim; Kyo Chul Kang For real-time performance, virtual reality systems often employ various performance optimization techniques. One of the most popular methods is using geometric models with different "Levels of Detail (LOD)". In a previous paper [1], we proposed to use software engineering principles such as the concept of hierarchical and incremental modeling, and simultaneous consideration of form, function and behavior for modeling VR objects. Each refinement stage driven by such a modeling philosophy produces step-by-step form, function and behavior specifications of a VR object. We can make good use of these by-products as LODs for adaptive display and simulation by additionally specifying conditions for LOD switching. These specifications can be simulated and analyzed in advance to estimate performance for a given VR execution environment. Such an engineering process deals with behavior and geometry together; different geometric LODs may possess different behaviors and vice versa. A certain function or behavior might dictate the inclusion of a particular geometric feature that may not be possible to preserve if the geometric LODs were to be created in a bottom-up fashion (e.g. using mesh simplification algorithms). We demonstrate our approach by modeling an automobile object with three levels of geometric and behavior detail in a top-down manner, simulating their instances in a small virtual town, and, based on the simulation results, predicting the maximum allowable number of vehicles that will maintain an acceptable frame rate if executed in a faster simulation environment. We believe that our approach combines the idea of hierarchical refinement of virtual objects and the use of LODs in a very natural and intuitive manner.Keywords: levels of detail (LOD), real-time rendering, simulation, software engineering, specification, top-down design
 Visual speech analysis and synthesis with application to Mandarin speech training | BIBAK | Full-Text 111-115 Xiaodong Jiang; Yunlai Wang; Feiye Zhang This paper presents a novel vision-based speech analysis system, STODE, which is used in spoken Chinese training of oral deaf children. Its design goal is to help oral deaf children overcome two major difficulties in speech learning: the confusion of intonations for spoken Chinese characters and timing errors within different words and characters. It integrates such capabilities as real-time lip tracking and feature extraction, multi-state lip modeling, and a Time-Delay Neural Network (TDNN) for visual speech analysis. A desk-mounted camera tracks users in real-time. At each frame, a region of interest is identified and key information is extracted. The preprocessed acoustic and visual information are then fed into a modular TDNN and combined for visual speech analysis. Confusion of intonations for spoken Chinese characters can be easily identified, and timing errors within words and characters can also be detected using a DTW (Dynamic Time Warping) algorithm. For visual feedback we have created an artificial talking head directly cloned from the user's own images to generate correct outputs showing both correct and wrong ways of pronunciation. This system has been successfully used for spoken Chinese training of oral deaf children in cooperation with Nanjing Oral School under grants from the National Natural Science Foundation of China.Keywords: DTW, TDNN, visual speech analysis
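Dynamic Time Warping (DTW), the standard tool for comparing speech timing used by training systems such as the one in the entry above, can be sketched generically. This is the textbook algorithm with an absolute-difference cost, not the system's own code.

```python
# Classic O(len(a) * len(b)) Dynamic Time Warping distance between two
# feature sequences, with absolute-difference local cost.

def dtw(a, b):
    """DTW distance: minimal total cost over monotone alignments of a and b."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

dist = dtw([0, 1, 2, 2], [0, 1, 1, 2])
```

Because DTW stretches and compresses the time axis, two utterances with the same shape but different pacing score a low distance, which is what makes it suitable for flagging timing errors.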
 A method for sharing interactive deformations in collaborative 3D modeling | BIBAK | Full-Text 116-123 Hiroaki Nishino; Kouichi Utsumiya; Kazuyoshi Korida; Atsunori Sakamoto; Kazuyuki Yoshida This paper proposes a new approach to collaboratively designing original products and crafted objects in a distributed virtual environment. Special attention is paid to concept formulation and image substantiation in the early design stage. A data management strategy and its implementation method are shown to effectively share and visualize a series of shape-forming and modeling operations performed by experts on a network. A 3D object representation technique is devised to manage frequently updated geometrical information by exchanging only a small amount of data among participating systems. Additionally, we contrive a method for offloading some expensive functions usually performed on a server, such as multi-resolution data management and adaptive data transmission control. Client systems are delegated to execute these functions and achieve "interactivity vs. image quality" tradeoffs based on available resources and operations in a flexible and parallel fashion.Keywords: 3D object modeling, collaborative design, computer graphics, distributed virtual environment
 Direct 3D interaction with smart objects | BIBAK | Full-Text 124-130 Marcelo Kallmann; Daniel Thalmann Performing 3D interactions with virtual objects easily becomes a complex task, limiting the implementation of larger applications. In order to overcome some of these limitations, this paper describes a framework where the virtual object aids the user to accomplish a pre-programmed possible interaction. Such objects are called Smart Objects, in the sense that they know how the user can interact with them, giving clues to aid the interaction. We show how such objects are constructed, and exemplify the framework with an application where the user, wearing a data glove, can easily open and close drawers of some furniture.Keywords: data glove, interaction, manipulation, virtual environments, virtual objects, virtual reality
 Real-time rendering of deformable parametric free-form surfaces | BIBA | Full-Text 131-138 Frederick W. B. Li; Rynson W. H. Lau Deformable objects are required to improve the realism of virtual reality applications. They are particularly useful in modeling clothes, facial expressions, and human and animal characters. A common method to render these objects is by tessellation. However, the tessellation process is computationally very expensive. If the object deforms, we need to retessellate the surface every frame, as its shape changes from one frame to the next. This computational burden poses a significant challenge to the real-time rendering of deformable objects. Consequently, deformable objects are seldom incorporated in existing virtual reality systems. In this paper, we present an incremental method for rendering deformable objects modeled by parametric free-form surfaces. We also introduce two new frame coherence techniques for crack prevention and parameter caching. Finally, we present a single hierarchical data structure which provides a multi-resolution representation of the object model.
 Modeling and animation of botanical trees for interactive virtual environments | BIBAK | Full-Text 139-146 Tatsumi Sakaguchi; Jun Ohya This paper proposes a new modeling and animation method for botanical trees in interactive virtual environments. Some studies of botanical tree modeling have been based on the Growth Model, which can construct a very natural tree structure. However, this model makes it difficult to predict the final form of the tree from given parameters; that is, if an objective form of a tree is given and it is to be reconstructed into a three-dimensional model, we have to change the parameters to reflect the structure by a trial-and-error technique. Thus, we propose a new top-down approach in which a tree's form is defined by volume data made from a captured real image set, and the branch structure is realized by simple branching rules. The tree model is described as a set of connected branch segments, and leaf models that consist of leaves and twigs attached to the branch segments. To animate the botanical trees, dynamics simulation is performed on the branch segments in two phases. In the first phase, each segment is assumed to be a rigid stick with a fixed end on one side, and rotational movements from the influence of external forces are calculated in each segment independently. The forces propagated from the tip of a branch to the root are calculated from the restoration force and thickness of the branch. Finally, the rotational movements of segments are executed in order from the base segment, and the fixed end of each segment is moved to the free end of the segment to be connected so as to maintain the relative angles between the segments. The proposed model is applied to many kinds of botanical trees, and can successfully animate tree movements caused by external forces such as winds and human interaction with the branches.Keywords: botanical tree modeling, image based modeling, natural phenomena, physically based animation (dynamics), virtual reality
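The segment-chain animation in the botanical trees entry above can be sketched in 2-D: a branch is a chain of rigid segments, each with its own rotation, and positions are recomposed from the base outward so relative angles between segments are preserved. The lengths and angles below are invented example values, not from the paper.

```python
# 2-D sketch of a rigid-segment branch chain: per-segment rotations
# accumulate from the base outward, preserving relative angles.
# Invented example values.

from math import cos, sin, pi

def branch_tip(lengths, bend_angles):
    """Recompose a segment chain from the base; return the tip position."""
    x = y = 0.0
    heading = pi / 2                     # branch initially points straight up
    for seg_len, bend in zip(lengths, bend_angles):
        heading += bend                  # each segment's rotation accumulates
        x += seg_len * cos(heading)
        y += seg_len * sin(heading)
    return x, y

tip_x, tip_y = branch_tip([1.0, 1.0], [0.0, 0.0])   # unbent: straight up
```

With per-segment bend angles driven by a force model, the same recomposition pass animates the whole branch while keeping every joint attached.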
 Software architecture for a constraint-based virtual environment | BIBAK | Full-Text 147-154 Terrence Fernando; Norman Murray; Kevin Tan; Prasad Wimalaratne Virtual environment technology is now beginning to be recognised as a powerful design tool in industrial sectors such as the manufacturing, process engineering, construction, automotive and aerospace industries. It offers the ability to visualise a design from different viewpoints by engineers from different design perspectives, providing a powerful design analysis tool for supporting a concurrent engineering philosophy. A common weakness of current commercial virtual environments is the lack of efficient geometric constraint management facilities, such as run-time constraint detection and the maintenance of constraint consistencies, for supporting accurate part positioning and constrained 3D manipulations. The environments also need to be designed to support users as they complete their tasks. This paper describes the software architecture of a constraint-based virtual environment that supports interactive assembly of component parts, embedded within a task-based environment that supports contextual help and allows the structure of tasks to be easily altered for rapid prototyping.Keywords: component assembly, constraints, tasks, virtual environments
 Sketching a virtual environment: modeling using line-drawing interpretation | BIBAK | Full-Text 155-161 Alasdair Turner; Dave Chapman; Alan Penn Here we demonstrate the direct input to a computer of a hand-drawn perspective sketch to create a virtual environment. We either start with a photograph of a real environment or an existing VRML model, and then use a mouse or pen pad to sketch line drawings onto the scene. Visual clues and constraints from the existing background and line drawing, as well as heuristics for form recognition, are used to build a 3D optimization problem. We use a multiple-objective genetic algorithm to find a viable solution to the problem, and VRML output is generated, either for re-entry to the system or use in another system. Our software is currently available compiled for either a PC running Windows 98/NT or an SGI machine running IRIX 6.x.Keywords: 3D modeling, line-drawing interpretation
 Formations: explicit group support in collaborative virtual environments | BIBAK | Full-Text 162-163 Dave Lloyd; Steve Benford; Chris Greenhalgh In this paper, we describe formations, a means for the explicit support of groups and group effects in Collaborative Virtual Environments (CVEs). Being part of a formation affects a participant's experience of a virtual environment. We introduce a framework that defines the components of a formation, including an object of interest, members, non-members and leaders. We describe how formations may be applied, using the concept of Inhabited TV as a basis.Keywords: collaborative virtual environments, formations, interest management, navigation
 Meetings for real -- experiences from a series of VR-based project meetings | BIBAK | Full-Text 164-165 Olov Ståhl Digital Meeting Environments (DiME) is an on-going project that aims to develop and study the use of Collaborative Virtual Environments (CVEs) as the basis for computer supported meetings between geographically separated persons. This paper presents some problems experienced by the project members when using a VR-based conferencing application for a series of meetings, and some examples of how these problems have been addressed.Keywords: assurance cues, tele-meetings, virtual environments
 Fast calibration for augmented reality | BIBAK | Full-Text 166-167 Anton Fuhrmann; Dieter Schmalstieg; Werner Purgathofer Augmented Reality overlays computer generated images over the real world. These images have to be generated using transformations which correctly project a point in virtual space onto its corresponding point in the real world.    We present a simple and fast calibration scheme for head-mounted displays (HMDs), which does not require additional instrumentation or complicated procedures. The user is interactively guided through the calibration process, allowing even inexperienced users to calibrate the display to their eye distance and head geometry.    The calibration is stable -- meaning that slight errors made by the user do not result in gross miscalibrations -- and easily applicable for see-through and video-based HMDs.Keywords: augmented reality, calibration, distortion, registration
 The Kahun project: CVE technology development based on real world application and user needs | BIB | Full-Text 168-169 Daphne Economou; Steve R. Pettifer; William L. Mitchell; Adrian J. West
 Subjectivity and the relaxing of synchronization in networked virtual environments | BIBAK | Full-Text 170-171 Steve Pettifer; Adrian West Lag in network technology prevents absolute synchronization of distributed Virtual Environments (VEs); our experience of them is in this sense inherently subjective. We describe how subjectivity of this kind provides a means of enabling coherent shared experience in a VE, and present an architecture based on this.Keywords: perception, shared environments, subjectivity, virtual reality
 Interactive virtual studio and immersive televiewer environment | BIBAK | Full-Text 172-173 Laehyun Kim; Heedong Ko; Mooho Park; Hyeran Byun In this paper, we propose a novel virtual studio system in which an anchor in the virtual set interacts with televiewers as if they were sharing the same environment. A televiewer participates in the virtual studio environment by sensing and controlling a dummy head equipped with a camera, speaker and microphones. The dummy head acts as a surrogate televiewer, providing the viewpoint experienced by the televiewer via a video camera and the sound experienced by the televiewer via the microphones in its head. The anchor can not only interact with the virtual set elements but also share the physical studio with the surrogate televiewers. A televiewer with a head-mounted display (HMD) may feel immersed in the virtual studio environment, seamlessly combining the virtual set elements with the real studio elements, and interact with the anchor and vice versa. The proposed system consists of an Interactive Virtual Studio (IVS) environment and an Immersive Televiewer Environment (ITE), in which all the physical elements are collected and managed through the IVS and the seamlessly mixed virtual and real elements are experienced via the ITE. The essential idea is a dual universe in which whatever is most naturally interfaced physically remains physical, whatever is easier to represent virtually remains virtual, and these two parallel universes are coordinated seamlessly to provide the proper mixed reality experience. In practice, this new interactive virtual studio for an immersive tele-meeting environment may be applied to the production of interactive TV programs, tele-conferencing, tele-education and others.Keywords: avatar, interactive TV, interactive virtual studio, mixed reality
 Interactive task planning in virtual assembly | BIBA | Full-Text 174-175 Hanqiu Sun; Bao Hujun; Tong Ngai Man; Wu Lam Fai We propose a task planner that incorporates a virtual reality interface for 3D immersive interaction with CAD models and high-level task planning of mating processes. The planner is constructed with the following objectives: a 3D immersive VR interface, assembly analysis, feature/constraint update, and assembly path planning.    A virtual-assembly system has been developed based on the modeling features and mating constraints proposed in our approach. Our system supports a VR interface for performing task-oriented virtual assembly, constraint analysis, feature/constraint update during assembly, and collision-free assembly planning. An assembly task is described by the relationship of parts or subassemblies, the translation of constraints, and operation restrictions. Our task planner provides a virtual reality interface which allows users to freely navigate in the assembly environment, select one of the parts, and move it around in all directions. To accomplish the goal of a two-part assembly, the free motion of a 3D input device (e.g. a 3D mouse) is restricted by both a collision-free path and the allowable motion derived from mating constraints between the parts. The allowable motion with reduced degrees of freedom guides the user in assembling the parts in a constrained direction or around a specified rotation axis. Any illegal motion that would cause a collision or a disallowed movement is prevented, prompting a warning sound or error message to alert the user.    When two components are assembled, the two attribute graphs are updated by joining the common attributes, removing those that overlap or are duplicated after mating, and creating the new attributes or features that are produced, so that a new attribute graph is generated for the new object.
 Components for distributed virtual environments | BIBAK | Full-Text 176-177 Manuel Oliveira; Jon Crowcroft; Don Brutzman; Mel Slater
 A real-time generation algorithm for progressive meshes in dynamic environments | BIB | Full-Text 178-179 Guangzheng Fei; Enhua Wu
 Calculation of contact forces | BIBA | Full-Text 180-181 G. Hotz; A. Kerzmann; C. Lennerz; R. Schmid; E. Schömer; T. Warken
 Rudiments for a 3D freehand sketch based human-computer interface for immersive virtual environments | BIBAK | Full-Text 182-183 Oliver Bimber
 Using virtual reality for network management: automated construction of dynamic 3D metaphoric worlds | BIBAK | Full-Text 184-185 C. Russo Dos Santos; P. Gros; P. Abel; D. Loisel; J.-P. Paris
 Visualising logic programs in virtual worlds | BIBA | Full-Text 186-187 S. Kousidou; L. Balafa