| Art and technology in interface devices | | BIBAK | Full-Text | 1-7 | |
| Hiroo Iwata | |||
| This paper presents work carried out in a project to develop interface
devices for haptics, including finger/hand manipulation and locomotion. It is
well known that the sense of touch is essential for understanding the real
world. The last decade has seen significant advances in the development of
haptic interfaces. However, haptic devices are still implemented largely by
trial and error, and compared to visual and auditory displays, haptic displays
have not entered everyday life. In order to overcome this limitation, we have
been exhibiting our interface devices as artwork. This paper introduces issues and
solutions in haptic device design through the 18-year history of our research activities. Keywords: embodied sensation, haptics, interface device | |||
| AMMP-Vis: a collaborative virtual environment for molecular modeling | | BIBAK | Full-Text | 8-15 | |
| Jeffrey W. Chastine; Jeremy C. Brooks; Ying Zhu; G. Scott Owen; Robert W. Harrison; Irene T. Weber | |||
| Molecular modeling is an important research area, helping scientists develop
new drugs against diseases such as AIDS and cancer. Prior studies have
demonstrated that immersive virtual environments have unique advantages over
desktop systems in visualizing molecular models. However, exploration and
interaction in existing molecular modeling virtual environments are often
limited to a single user, lacking strong support for collaboration. In
addition, scientists are often reluctant to adopt these systems because of
their lack of availability and high cost. We propose an affordable immersive
system that allows biologists and chemists to manipulate molecular models via
natural gestures, receive and visualize real-time feedback from a molecular
dynamics simulator, allow the sharing of customized views, and provide support
for both local and remote collaborative research. Keywords: augmented reality, collaboration, interaction techniques, molecular
modeling, shaders, virtual environments | |||
| A 2D-3D integrated environment for cooperative work | | BIBAK | Full-Text | 16-22 | |
| Kousuke Nakashima; Takashi Machida; Kiyoshi Kiyokawa; Haruo Takemura | |||
| This paper proposes a novel tabletop display system for natural
communication and flexible information sharing. The proposed system is
specifically designed for integration of 2D and 3D user interfaces, using a
multi-user stereoscopic display, IllusionHole. The proposed system takes
awareness into consideration and provides both 2D and 3D information and user
interfaces. On the display, a number of standard Windows desktop environments
are provided as personal workspaces, as well as a shared workspace with a
dedicated graphical user interface. In personal workspaces, users can
simultaneously access existing applications and data, and exchange information
between personal and shared workspaces. In this way, the proposed system can
seamlessly integrate personal, shared, 2D and 3D workspaces with conventional
user interfaces and effectively support communication and information sharing.
To demonstrate the capabilities of the proposed display system, a modeling
application has been implemented. A preliminary experiment confirmed the
effectiveness of the system. Keywords: 2D - 3D integrated user interface, IllusionHole, VNC | |||
| First-person experience and usability of co-located interaction in a projection-based virtual environment | | BIBAK | Full-Text | 23-30 | |
| Andreas Simon | |||
| Large screen projection-based display systems are very often not used by a
single user alone, but shared by a small group of people. We have developed an
interaction paradigm allowing multiple users to share a virtual environment in
a conventional single-view stereoscopic projection-based display system, with
each of the users handling the same interface and having a full first-person
experience of the environment.
Multi-viewpoint images allow the use of spatial interaction techniques for multiple users in a conventional projection-based display. We evaluate the effectiveness of multi-viewpoint images for ray selection and direct object manipulation in a qualitative usability study and show that interaction with multi-viewpoint images is comparable to fully head-tracked (single-user) interaction. Based on ray casting and direct object manipulation, using tracked PDAs as a common interaction device, we develop a technique for co-located multi-user interaction in conventional projection-based virtual environments. Evaluation of the VRGEO Demonstrator, an application for the review of complex 3D geo-seismic data sets in the oil-and-gas industry, shows that this paradigm allows multiple users to each have a full first-person experience of a complex, interactive virtual environment. Keywords: PDA interaction, co-located collaboration, projection-based virtual
environment, single display groupware | |||
| Human performance in space telerobotic manipulation | | BIBAK | Full-Text | 31-37 | |
| Philip Lamb; Dean Owen | |||
| This paper considers the utility of VR in the design of the interface to a
space-based telerobotic manipulator. An experiment was conducted to evaluate
the potential for improved operator performance in a telemanipulation task when
the operator's control interface was varied between egocentric and exocentric
frames of reference (FOR). Participants performed three tasks of increasing
difficulty using a VR-based simulation of the Space Shuttle Remote Manipulator
System (SRMS) under four different control interface conditions, which varied
with respect to two factors: virtual viewpoint FOR (fixed versus attached to the
arm) and hand controller FOR (end-effector-referenced versus world-referenced).
Results indicated a high degree of interaction between spatial properties of
the task and the optimal interface condition. Across all tasks, the conditions
under end-effector-referenced control were associated with higher performance,
as measured by rate of task completion. The mobile viewpoint conditions were
generally associated with lower performance on task completion rate but
improved performance with respect to number of collisions between the arm and
objects in the environment. We conclude with a discussion of implications for
telemanipulation applications and an approach to varying the degree of
viewpoint egocentricity in order to improve performance under the mobile
viewpoint. Keywords: SRMS, frame of reference, telerobotics, user studies | |||
| Mixed-dimension interaction in virtual environments | | BIBAK | Full-Text | 38-45 | |
| Rudolph P. Darken; Richard Durost | |||
| In this paper, we present a study to show that matching the dimensionality
of interaction techniques with the dimensional demands of the task results in
an interface that facilitates superior performance on interaction tasks without
sacrificing performance on 2D tasks in favor of 3D tasks and vice versa. We
describe the concept of dimensional congruence and how to identify the
dimensional characteristics of a task so that appropriate interaction
techniques can be applied. We constructed a prototypical application in a
Virtual Environment Enclosure (VEE) using a hand-held device to show how this
might be done in this type of apparatus. We then describe a study that
evaluates both 2D and 3D tasks as performed using typical 2D and 3D interaction
techniques. Results show that an appropriate mix of 2D and 3D interaction
techniques is preferred over exclusive use of one or the other. The challenge
lies not in selecting independent interaction techniques for specific tasks,
but rather in constructing an overall interface that mixes 2D and 3D
interactions appropriately. Keywords: human factors, interaction technique, virtual environments | |||
| Effects of information layout, screen size, and field of view on user performance in information-rich virtual environments | | BIBAK | Full-Text | 46-55 | |
| Nicholas F. Polys; Seonho Kim; Doug A. Bowman | |||
| This paper describes our recent experimental evaluation of Information-Rich
Virtual Environment (IRVE) interfaces. To explore the depth cue/visibility
tradeoff between annotation schemes, we designed and evaluated two information
layout techniques to support search and comparison tasks. The techniques
provide different depth and association cues between objects and their labels:
labels were displayed either in the virtual world relative to their referent
(Object Space) or on an image plane workspace (Viewport Space). The Software
Field of View (SFOV) was set to either 60 or 100 degrees of vertical angle, and
two groups were tested: one using a single monitor and one using a tiled
nine-panel display. Users were timed, scored for correctness, and gave ratings
for both difficulty and satisfaction on each task. Significant advantages were
found for the Viewport interface, and for high SFOV. The interactions between
these variables suggest special design considerations to effectively support
search and comparison performance across monitor configurations and projection
distortions. Keywords: 3D interaction, information-rich virtual environments, usability testing and
evaluation, visual design | |||
| A novel framework for athlete training based on interactive motion editing and silhouette analysis | | BIBAK | Full-Text | 56-58 | |
| Shihong Xia; Xianjie Qiu; Zhaoqi Wang | |||
| There are two main high-tech approaches to athlete training. One is
based on virtual reality, where the athlete learns and improves performance
mainly by using virtual equipment to interact with the virtual
environment. The other is based on video analysis, where improvements can
be made by comparing videos of trainees with those of expert athletes.
In this paper, we present a novel framework for athlete training
that circumvents the difficulties current methods face in practical
applications. To retarget the example motion to a personalized virtual
athlete, the coach interactively sets motion constraints based on his
experience, using motion warping and motion verification techniques. The
display of the simulated motion is adjusted semi-automatically to create a
reference virtual video with the same viewpoint as the real one. The moment
invariants of both the virtual and the real athlete's silhouettes are computed,
and the motion analysis result is presented subsequently. This method is well
suited to gymnastics training because it requires no virtual equipment, and it
is more instructive because the virtual and real videos share the same
viewpoint. Finally, an application of the proposed
techniques to trampoline training is implemented. Keywords: moment invariants, motion editing, motion training | |||
| An integrated system: virtual reality, haptics and modern sensing technique (VHS) for post-stroke rehabilitation | | BIBAK | Full-Text | 59-62 | |
| Shih-Ching Yeh; Albert Rizzo; Weirong Zhu; Jill Stewart; Margaret McLaughlin; Isaac Cohen; Younbo Jung; Wei Peng | |||
| In this paper, we introduce an interdisciplinary project, involving
researchers from the fields of Physical Therapy, Computer Science, Psychology,
Communication and Cell Neurobiology, to develop an integrated virtual reality,
haptics and modern sensing technique system for post-stroke rehabilitation. The
methodology for developing the system includes identification of movement
patterns, development of simulated tasks, and diagnostics. Each part of the
methodology is achieved through several sub-steps that are described in detail
in this paper. The system is designed from a physical therapy perspective so
that it can address the motor rehabilitation needs of stroke patients. It is
implemented with stereoscopic displays, force feedback devices and modern
sensing techniques that have game-like features and can capture accurate data
for further analysis. Diagnostics and evaluation are performed through an
artificial-intelligence-based model using the collected data, and clinical
tests have been conducted. Keywords: haptics, physical therapy, stroke rehabilitation, virtual reality, visual
sensing | |||
| Telepresence support for synchronous distance education | | BIBAK | Full-Text | 63-67 | |
| Juliana Restrepo; Helmuth Trefftz | |||
| This paper describes a telepresence application that combines bi-directional
video, audio and a shared virtual environment as means to support synchronous
distance education sessions.
In order to validate the effectiveness of the tool, it was used in several courses at our university. Students were divided into an experimental group, which used the tool in a simulated distance-learning environment, and a control group, which received traditional face-to-face lectures. The results show that the use of the telepresence application, hand in hand with an appropriate pedagogical framework, led the students in the distance education sessions to reach levels of understanding at least equal to, and sometimes better than, those attained by students in the face-to-face sessions. We are offering the telepresence application to other educational institutions in our country, with the hope that it will allow students in regions isolated by war or by lack of infrastructure to have access to better education, at both school and university levels. Keywords: collaborative virtual reality, computer graphics, networked virtual
environments | |||
| Myriad: scalable VR via peer-to-peer connectivity, PC clustering, and transient inconsistency | | BIBAK | Full-Text | 68-77 | |
| Benjamin Schaeffer; Peter Brinkmann; George Francis; Camille Goudeseune; Jim Crowell; Hank Kaczmarski | |||
| Distributed scene graphs are important in virtual reality, both in
collaborative virtual environments and in cluster rendering. In Myriad,
individual scene graphs form a peer-to-peer network whose connections filter
scene graph updates and create flexible relationships between scene graph nodes
in the various peers. Modern scalable visualization systems often feature high
intracluster throughput, but collaborative virtual environments (VEs) over a
WAN share data at much lower rates, complicating the use of one scene graph
system across the whole application. To avoid these difficulties, Myriad uses
fine-grained sharing, whereby sharing properties of individual scene graph
nodes can be dynamically changed from C++ and Python, and transient
inconsistency, which relaxes resource requirements in collaborative VEs. A test
application, WorldWideCrowd, implements these methods to demonstrate
collaborative prototyping of a 300-avatar crowd animation viewed on two
PC-cluster displays and edited on low-powered laptops, desktops, and even over
a WAN. Keywords: PC cluster, peer-to-peer, virtual environments | |||
| Quantifying the benefits of immersion for collaboration in virtual environments | | BIBAK | Full-Text | 78-81 | |
| Michael Narayan; Leo Waugh; Xiaoyu Zhang; Pradyut Bafna; Doug Bowman | |||
| Collaborative Virtual Environments allow multiple users to interact
collaboratively while taking advantage of the perceptual richness that Virtual
Environments (VEs) provide. In this paper, we demonstrate empirically that
increasing the level of immersion in a VE can have a beneficial effect on the
usability of that environment in a collaborative context. We present the
results of a study in which we varied two immersive factors, stereo and head
tracking, within the context of a two person collaborative task. Our results
indicate that stereo can have a positive effect on task performance; that
different levels of immersion have effects that vary with gender; and that
varying the level of immersion has a pronounced effect on communication between
users. These results show that the level of immersion can play an important
role in determining user performance on some collaborative tasks. Keywords: collaborative virtual environments, head tracking, immersion, stereo | |||
| Scalable interest management for multidimensional routing space | | BIBAK | Full-Text | 82-85 | |
| Elvis S. Liu; Milo K. Yip; Gino Yu | |||
| Interest management is essential for scalable collaborative virtual
environments (CVEs), as it seeks to reduce bandwidth consumption on the
network. Most interest management systems, such as the Data Distribution
Management (DDM) service of the High Level Architecture (HLA), concentrate on
providing precise message filtering mechanisms. However, in doing so a second
problem is introduced: the CPU overhead of the filtering process. If the
computational cost of interest management itself is too high, it is unsuitable
for real-time applications such as multiplayer
online games (MOGs), for which runtime performance is important. In this paper
we present a scalable interest management algorithm which is suitable for HLA
DDM. Our approach employs the collision detection method of I-COLLIDE for fast
interest matching. Furthermore, the algorithm has been implemented in our
commercialized MOG middleware -- Lucid Platform. Experimental evidence
demonstrates that it works well in practice. Keywords: collaborative virtual environments, collision detection, computer games,
data distribution management, high level architecture, interest management | |||
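The fast interest matching borrowed from I-COLLIDE comes down to sorting region endpoints along each dimension of the routing space and sweeping once, instead of testing all publisher/subscriber pairs. A minimal single-dimension sketch of that idea follows; the names and data layout are illustrative assumptions, and the full algorithm would intersect the per-dimension results and exploit frame-to-frame coherence:

```python
# Hedged sketch: one-dimension sort-and-sweep in the spirit of I-COLLIDE,
# applied to matching update regions (publishers) against subscription
# regions (subscribers). Names are illustrative, not the paper's API.

def sweep_matches(update_regions, subscription_regions):
    """Each argument maps a region name to a (lo, hi) interval on one
    routing-space dimension. Returns overlapping (update, subscription)
    pairs, found in a single sorted sweep instead of an all-pairs test."""
    events = []  # (coordinate, is_end, side, name)
    for name, (lo, hi) in update_regions.items():
        events += [(lo, 0, 'U', name), (hi, 1, 'U', name)]
    for name, (lo, hi) in subscription_regions.items():
        events += [(lo, 0, 'S', name), (hi, 1, 'S', name)]
    events.sort()  # at equal coordinates, starts (0) precede ends (1)

    active = {'U': set(), 'S': set()}
    matches = set()
    for _, is_end, side, name in events:
        if is_end:
            active[side].discard(name)
        else:
            other = 'S' if side == 'U' else 'U'
            for partner in active[other]:
                matches.add((name, partner) if side == 'U' else (partner, name))
            active[side].add(name)
    return matches

# Usage: intervals touching at a boundary count as overlapping.
print(sweep_matches({'u1': (0, 4), 'u2': (6, 9)}, {'s1': (3, 7)}))
```

Sorting keeps the sweep near O(n log n) when overlaps are sparse, and re-sorting a nearly sorted event list between frames is cheap, which is the kind of CPU behavior the abstract argues MOG workloads need.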
| Towards keyboard independent touch typing in VR | | BIBAK | Full-Text | 86-95 | |
| Falko Kuester; Michelle Chen; Mark E. Phair; Carsten Mehring | |||
| A new hand- or finger-mounted data input device, called KITTY, is presented.
KITTY is designed for keyboard independent touch typing and supports
traditional touch-typing skills as a method for alphanumeric data input. This
glove-type device provides an ultra-portable solution for "quiet" data input
into portable computer systems and full freedom of movement in mobile VR and AR
environments. The KITTY design follows the concept of the column and row layout
found on traditional keyboards, allowing users to draw from existing touch
typing skills and reducing the required training time. Keywords: glove input, interaction techniques, keyboard independent touch-typing,
wearable devices for augmented reality | |||
| A layout framework for 3D user interfaces | | BIBAK | Full-Text | 96-105 | |
| Wai Leng Lee; Mark Green | |||
| Two of the main problems facing the developers of 3D user interfaces are the
wide range of device configurations that must be supported and the lack of
software tools for constructing 3D user interfaces. The Grappl project aims to
solve these problems by producing user interfaces that adapt to the device
configurations that they encounter at runtime. Since the user interface is
constructed at runtime one of the problems confronted by Grappl is laying out
the different user interface components and possibly some of the application
objects. This paper presents a framework for automating the layout of 3D user
interfaces, including the types of information provided by the user interface
designer, the high level architecture of the layout system and the algorithms
used for empty space management. Keywords: 3D user interface, layout techniques | |||
| A practical system for laser pointer interaction on large displays | | BIBAK | Full-Text | 106-109 | |
| Benjamin A. Ahlborn; David Thompson; Oliver Kreylos; Bernd Hamann; Oliver G. Staadt | |||
| Much work has been done on the development of laser pointers as interaction
devices. Typically a camera captures images of a display surface and extracts a
laser pointer dot location. This location is processed and used as a cursor
position. While the current literature explains such systems well, we feel
that some important practical concerns have gone unaddressed. We discuss the
design of such a tracking system, focusing on key practical implementation
details. In particular we present a robust and efficient dot detection
algorithm that allows us to use our system under a variety of lighting
conditions, and to reduce the amount of image parsing required to find a laser
position by an order of magnitude. Keywords: interaction, laser pointer, tiled displays | |||
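An order-of-magnitude reduction in image parsing suggests the common two-phase strategy: search a small window around the dot's last known position first, and fall back to a full-frame scan only on a miss. A hedged sketch of that strategy, where the threshold, window size, and grayscale-frame interface are assumptions rather than the paper's exact algorithm:

```python
# Hedged sketch of two-phase laser-dot detection (illustrative parameters).
import numpy as np

def find_dot(gray, last_pos=None, win=40, thresh=230):
    """Find the laser dot in a grayscale frame (2D uint8 array).
    Searching a small window around the last known position first is what
    cuts the parsed pixel count by roughly an order of magnitude."""
    if last_pos is not None:
        x, y = last_pos
        x0, y0 = max(0, x - win), max(0, y - win)
        sub = gray[y0:y0 + 2 * win, x0:x0 + 2 * win]
        ys, xs = np.nonzero(sub >= thresh)
        if len(xs):  # centroid of bright pixels in the local window
            return int(xs.mean()) + x0, int(ys.mean()) + y0
    ys, xs = np.nonzero(gray >= thresh)  # fall back to a full-frame scan
    if len(xs):
        return int(xs.mean()), int(ys.mean())
    return None
```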
| A 6-DOF user interface for grasping in VR-based computer aided styling and design | | BIBAK | Full-Text | 110-112 | |
| Frank-Lothar Krause; Johann Habakuk Israel; Jens Neumann; Boris Beckmann-Dobrev | |||
| We describe a 6-DoF (degrees of freedom) force-feedback enabled user
interface that supports grasp interaction in mixed or virtual reality
environments. The graspable part of the device is interchangeable. It can hold
physical grips, tubes, work pieces and tools that provide passive haptic
feedback, and even tangible user interfaces. The device is designed to be
integrated into holobench environments for unifying input and output space so
that users can apply their sensorimotor skills for efficient interaction and
task solving. Current applications are in the field of VR-based Computer Aided
Styling and Design. Keywords: direct object manipulation, haptic interaction, kinesthetic augmentation,
multimodal interfaces, passive haptic devices, sensorimotor coordination,
tangible user interfaces, tool based interaction, virtual reality | |||
| A swarm algorithm for wayfinding in dynamic virtual worlds | | BIBAK | Full-Text | 113-116 | |
| Ji Soo Yoon; Mary Lou Maher | |||
| Wayfinding is a cognitive element of navigation that allows people to plan
and form strategies prior to executing them. Wayfinding in large scale virtual
environments is a complex task and even more so in dynamic virtual worlds. In
these dynamic worlds everything, including the objects, the paths, and the
landmarks, may be created, deleted, and moved at will. We propose a wayfinding
tool using swarm creatures to aid users in such dynamic environments. The tool
produces dynamic trails leading to desired destinations and generates
teleport/warp gates. These are created as a consequence of swarm creatures
exploring dynamic worlds. In this paper, we describe the swarm algorithms
developed to create such a tool to generate wayfinding aids in dynamic virtual
worlds. Keywords: navigation, navigation/wayfinding aids, swarm intelligence, virtual worlds,
wayfinding | |||
| Cognitive comparison of 3D interaction in front of large vs. small displays | | BIBAK | Full-Text | 117-123 | |
| F. Tyndiuk; G. Thomas; V. Lespinet-Najib; C. Schlick | |||
| This paper presents some experimental results comparing user
performance on different kinds of 3D interaction tasks (travel, manipulation)
when using either a standard desktop display or a large immersive display. The
main results of our experiments are the following: first, not all users
benefit similarly from the use of large displays, and second, the gains of
performance strongly depend on the nature of the interaction task. To explain
these results, we borrow some tools from cognitive science in order to identify
one cognitive factor (visual attention) that is involved in the difference of
performance that can be observed. Keywords: cognitive aids, display size, interaction, virtual reality, visual attention | |||
| A low cost and accurate guidance system for laparoscopic surgery: validation on an abdominal phantom | | BIBAK | Full-Text | 124-133 | |
| S. A. Nicolau; L. Goffin; L. Soler | |||
| In this paper we present a guiding system for laparoscopic surgery which
tracks in real time the laparoscopic tools and registers at 10 Hz a
preoperative CT/MRI model of the patient. To help surgeons, this system
provides simultaneously three kinds of supplementary information. It displays
the relative positions of the laparoscopic tools and the 3D patient model. It
provides two external views of the patient on which the 3D model and tool
positions are superimposed. Finally, it displays an endoscopic view augmented
with the 3D patient model. Contrary to most systems for laparoscopic surgery,
this one is low cost (one PC, two cameras and a printer are necessary) and we
have rigorously validated its tracking accuracy in simulated clinical
conditions (tracking accuracy of an instrument tip close to 1.5 mm, and
endoscopic overlay error under 1.0 mm). Moreover, our system's accuracy is
comparable to that of commercial systems. Finally, experiments on an
abdominal phantom with surgeons demonstrated its usefulness. We plan to carry
out our first experiments on patients in a few months. Keywords: accuracy evaluation, computer-aided surgery, endoscopic tool calibration and
tracking, registration | |||
| BalloonProbe: reducing occlusion in 3D using interactive space distortion | | BIBAK | Full-Text | 134-137 | |
| Niklas Elmqvist | |||
| Using a 3D virtual environment for information visualization is a promising
approach, but can in many cases be plagued by a phenomenon of literally not
being able to see the forest for the trees. Some parts of the 3D visualization
will inevitably occlude other parts, leading both to loss of efficiency and,
more seriously, correctness; users may have to change their viewpoint in a
non-trivial way to be able to access hidden objects, and, worse, they may not
even discover some of the objects in the visualization due to this inter-object
occlusion. In this paper, we present a space distortion interaction technique
called the BalloonProbe which, on the user's command, inflates a spherical
force field that repels objects around the 3D cursor to the surface of the
sphere, separating occluding objects from each other. Inflating and deflating
the sphere is performed through smooth animation, with ghosted traces showing the
displacement of each repelled object. Our prototype implementation uses a 3D
cursor for positioning as well as for inflating and deflating the force field
"balloon". Informal testing suggests that the BalloonProbe is a powerful way of
giving users interactive control over occlusion in 3D visualizations. Keywords: 3D space distortion, interaction technique, occlusion reduction | |||
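The repulsion itself is a radial projection: every object inside the probe sphere is pushed out to the surface along the ray from the cursor. A minimal sketch under that reading, where the smoothstep easing and array layout are illustrative assumptions:

```python
# Hedged sketch of the BalloonProbe displacement idea (the easing function
# and data layout are illustrative, not the author's exact formulation).
import numpy as np

def balloon_displace(positions, cursor, radius):
    """Push every object inside the probe sphere out to its surface.
    positions: (N, 3) object centers; cursor: (3,) probe center."""
    out = positions.copy()
    offsets = positions - cursor
    dist = np.linalg.norm(offsets, axis=1)
    inside = (dist < radius) & (dist > 1e-9)
    # Radially project occluding objects onto the sphere surface.
    out[inside] = cursor + offsets[inside] / dist[inside, None] * radius
    return out

def inflate(positions, cursor, radius, t):
    """Smoothly animate inflation; t in [0, 1] eases the radius."""
    eased = radius * (3 * t**2 - 2 * t**3)  # smoothstep
    return balloon_displace(positions, cursor, eased)
```

Keeping the original positions around, as `balloon_displace` does, is what makes deflation and the ghosted displacement traces straightforward.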
| Projective slice for a dynamic steering task | | BIBAK | Full-Text | 138-141 | |
| Pierre Salom; Javier Becerra; Marc Donias; Rémi Megret | |||
| Volume visualization using 2-D slices is a technique exploited in several
scientific fields such as medicine, geology or any other industrial application
using 3D data. The expert can explore the data by displaying
a set of slices extracted from the volume. The objective is to detect a
specific three-dimensional structure and point to it in 2D, directly on the
slices, with an input device. This type of interaction, performed during the
animation of slices, is named dynamic pointing. To formalize this task we
define in this article a new paradigm: the dynamic steering task (DST). We
first relate it to other tasks studied in human-computer interaction. This
study helps us better understand why this task is difficult and why users
produce so many positioning errors. We formulate the hypothesis that the errors
generated in a DST come from the experts' inability to anticipate the
variations of the structures to be pointed at. In order
to solve this problem we propose a new visualization technique, which
facilitates anticipation, called the projective slice. The effectiveness of
this tool and the validity of our assertions are verified experimentally. Keywords: dynamic pointing, dynamic steering task, preview, projective slice | |||
| Graphics: from a 100x data expansion to a 100x compression function | | BIBA | Full-Text | 142 | |
| Eng Lim Goh | |||
| In the mid 1980s, a typical large data visualization involved the input of
geometry on the order of 10,000 triangles, for rendering into about a million
display pixels. Today, however, we see input data sets growing to a billion
triangles, while the generated display has only grown to about 10 million
pixels. Consequently, large data visualization has, over the years, changed
from a data expansion to a data compression function.
This four-orders-of-magnitude change has evoked renewed interest in ray tracing rendering techniques, whose performance can be made relatively insensitive to input geometry quantity. This is as opposed to scanline rendering techniques, such as OpenGL, which can be made relatively insensitive to output pixel quantity. For those staying with scanline techniques, the enormous growth in geometry count has made the development of scalable parallel rendering necessary. This is where tens of Graphics Processing Units (GPUs) are coordinated to render one large geometric data set. After the pixels are produced by each of these GPUs, ways of compositing their individual output into a single display, with feedback loops for dynamic load balancing, may be necessary. As the crossover "from expansion to compression" continues, it will become increasingly practical, in certain remote visualization sessions, to transmit only the generated pixels, instead of the traditional method of transmitting the entire geometric data set for pixel generation at the remote user's station. Combined with advances in display, lighting and input technologies for mobile handheld devices, interesting new applications may evolve for scientific, engineering and creative users. | |||
| Rapid part-based 3D modeling | | BIBAK | Full-Text | 143-146 | |
| Ismail Oner Sebe; Suya You; Ulrich Neumann | |||
| An intuitive and easy-to-use 3D modeling system has become more crucial with
the rapid growth of computer graphics in our daily lives. Image-based modeling
(IBM) has been a popular alternative to pure 3D modelers (e.g. 3D Studio Max,
Maya) since its introduction in the late 1990s. However, IBM techniques are
inherently slow and rarely user friendly; most require
either extensive manual input or multiple images, or both. In this paper, we
present an IBM technique that achieves a high level of detail with 1-2 minutes
of manipulation by a novice user, using only a single, uncalibrated image. Our
system modifies a generic part-based model of the object under investigation.
User inputs are entered via a simple interface and converted into modifications
to the whole 3D model. We demonstrate the effectiveness of our modeler by
modeling several vehicles, such as SUVs, sedan/hatchback/coupe cars, minivans,
trucks and more. Keywords: image based modeling and rendering, part-based modeling, rapid 3D modeling | |||
| Rapid scene modelling, registration and specification for mixed reality systems | | BIBAK | Full-Text | 147-150 | |
| Russell Freeman; Anthony Steed; Bin Zhou | |||
| Many mixed-reality systems require real-time composition of virtual objects
with real video. Such composition requires some description of the virtual and
real scene geometries and calibration information for the real camera. Once
these descriptions are available, they can be used to perform many types of
visual simulation including virtual object placement, occlusion culling,
texture extraction, collision detection and reverse and re-illumination
methods.
In this paper we present a demonstration where we rapidly register prefabricated virtual models to a videoed scene. Using this registration information we are able to augment the scene with animated virtual avatars to create a novel mixed reality system. Rather than build a single monolithic system, we briefly introduce our lightweight modelling tool, the Mixed-Reality Toolkit (MRT), which enables rapid reconfiguration of scene objects without performing a full reconstruction. We also generalise our approach to outline some initial requirements for a Mixed Reality Modelling Language (MRML). Keywords: camera calibration, mixed reality, modelling from images | |||
| Modeling landscapes with ridges and rivers | | BIBAK | Full-Text | 151-154 | |
| Farès Belhadj; Pierre Audibert | |||
| Generating realistic models of landscapes with drainage networks is a major
topic in computer graphics. In this paper, we present a fractal-based method
that generates natural terrains using ridge and river information. As
opposed to methods that compute water erosion for a given terrain model, our
terrain mesh is generated under the constraints of a predefined set of ridge
lines and a river network. A new extension of the midpoint displacement model
is used in this context. The resulting landscape meshes lead to realistic
renderings, and our method appears very promising. Keywords: fractals, midpoint displacement, terrain erosion, terrain models | |||
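Classic midpoint displacement perturbs each midpoint of a height array; the constrained variant the abstract describes can be approximated by pinning ridge and river samples so the fractal detail grows around them. A one-dimensional hedged sketch, where the constraint handling is illustrative rather than the authors' extension:

```python
# Hedged sketch of 1D midpoint displacement with pinned (constrained) points,
# in the spirit of, but not identical to, the paper's constrained extension.
import random

def midpoint_displace(heights, fixed, lo, hi, roughness=0.5, amp=1.0):
    """Recursively displace midpoints of heights[lo..hi] in place.
    fixed: indices (e.g. ridge or river samples) that are never perturbed."""
    mid = (lo + hi) // 2
    if mid == lo or mid == hi:
        return
    if mid not in fixed:  # constrained samples keep their predefined height
        heights[mid] = (heights[lo] + heights[hi]) / 2 + random.uniform(-amp, amp)
    midpoint_displace(heights, fixed, lo, mid, roughness, amp * roughness)
    midpoint_displace(heights, fixed, mid, hi, roughness, amp * roughness)

# Usage: endpoints and a ridge crest at index 8 are held fixed.
n = 17
h = [0.0] * n
h[8] = 5.0  # predefined ridge height
midpoint_displace(h, fixed={0, 8, n - 1}, lo=0, hi=n - 1)
```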
| Artistic reality: fast brush stroke stylization for augmented reality | | BIBAK | Full-Text | 155-158 | |
| Jan Fischer; Dirk Bartz; Wolfgang Straßer | |||
| The goal of augmented reality is to provide the user with a view of the
surroundings enriched by virtual objects. Practically all augmented reality
systems rely on standard real-time rendering methods for displaying graphical
objects. Although such conventional computer graphics algorithms are fast, they
often fail to produce sufficiently realistic renderings. Therefore, virtual
models can easily be distinguished from the real environment. We have recently
proposed a novel approach for generating augmented reality images [4]. Our
method is based on the idea of applying stylization techniques for adapting the
visual realism of both the camera image and the virtual graphical objects.
Since both the camera image and the virtual objects are stylized in a
corresponding way, they appear very similar. Here, we present a new method for
the stylization of augmented reality images. This approach generates a
painterly brush stroke rendering. The resulting stylized augmented reality
video frames look similar to paintings created in the pointillism style. We
describe the implementation of the camera image filter and the
non-photorealistic renderer for virtual objects. These components have been
newly designed or adapted for this purpose. They are fast enough for generating
augmented reality images in near real-time (more than 14 frames per second). Keywords: augmented reality, brush stroke style, non-photorealistic rendering,
real-time stylization | |||
| DiReC: distributing the render cache to PC-clusters for interactive environments | | BIBAK | Full-Text | 159-162 | |
| Nils Beck; André Hinkenjann | |||
| The Render Cache [1,2] allows the interactive display of very large scenes,
rendered with complex global illumination models, by decoupling camera movement
from the costly scene sampling process. In this paper, the distributed
execution of the individual components of the Render Cache on a PC cluster is
shown to be a viable alternative to the shared memory implementation.
As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework. Keywords: clusters, render cache | |||
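For context, the Render Cache that DiReC distributes keeps a cloud of shaded 3D samples and reprojects it every frame, which is what decouples camera motion from scene sampling. A simplified reprojection pass under common conventions; the 4x4 view-projection convention and the nearest-sample depth test here are assumptions, not DiReC's implementation:

```python
# Hedged sketch of the Render Cache core idea: splat cached world-space
# samples into each new frame so display rate never waits on the renderer.
import numpy as np

def reproject(points, colors, view_proj, width, height):
    """points: (N, 3) cached sample positions; colors: (N, 3)."""
    frame = np.zeros((height, width, 3))
    depth = np.full((height, width), np.inf)
    hom = np.hstack([points, np.ones((len(points), 1))]) @ view_proj.T
    valid = hom[:, 3] > 1e-6
    ndc = hom[valid, :3] / hom[valid, 3:4]
    xs = ((ndc[:, 0] + 1) / 2 * (width - 1)).astype(int)
    ys = ((1 - ndc[:, 1]) / 2 * (height - 1)).astype(int)
    ok = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    for x, y, z, c in zip(xs[ok], ys[ok], ndc[:, 2][ok], colors[valid][ok]):
        if z < depth[y, x]:  # nearest cached sample wins
            depth[y, x], frame[y, x] = z, c
    return frame
```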
| Computing inverse kinematics with linear programming | | BIBAK | Full-Text | 163-166 | |
| Edmond S. L. Ho; Taku Komura; Rynson W. H. Lau | |||
| Inverse Kinematics (IK) is a popular technique for synthesizing motions of
virtual characters. In this paper, we propose a Linear Programming based IK
solver (LPIK) for interactive control of arbitrary multibody structures. There
are several advantages of using LPIK. First, inequality constraints can be
handled, and therefore the ranges of the DOFs and collisions of the body with
other obstacles can be handled easily. Second, the performance of LPIK is
comparable to, and sometimes better than, that of the IK method based on
Lagrange multipliers, which is regarded as the best IK solver today. The
computation time of LPIK increases only linearly with the number of constraints or
DOFs. Hence, LPIK is a suitable approach for controlling articulated systems
with large DOFs and constraints for real-time applications. Keywords: inverse kinematics, linear programming, real-time motion synthesis | |||
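The abstract does not spell out the formulation, but a standard way to pose one IK step as a linear program is to minimize the L1 norm of the end-effector error subject to joint-limit bounds, splitting the residual into nonnegative parts. A hedged sketch of that textbook construction, not necessarily the authors' exact LPIK:

```python
# Hedged sketch of one damped LP step for IK: min ||J dq - dx||_1 subject to
# joint limits, with the residual split as e = e+ - e-, e+, e- >= 0.
import numpy as np
from scipy.optimize import linprog

def lp_ik_step(J, dx, q, q_min, q_max, step=0.1):
    """J: (m, n) Jacobian; dx: (m,) desired end-effector displacement;
    q, q_min, q_max: (n,) joint angles and their limits."""
    m, n = J.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])  # minimize sum(e+ + e-)
    A_eq = np.hstack([J, -np.eye(m), np.eye(m)])       # J dq - e+ + e- = dx
    bounds = [(max(-step, q_min[i] - q[i]), min(step, q_max[i] - q[i]))
              for i in range(n)] + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=dx, bounds=bounds, method="highs")
    return q + res.x[:n]
```

Joint ranges enter directly as variable bounds; collision constraints of the kind the abstract mentions would add further inequality rows, which is exactly what an LP handles and a plain Lagrange-multiplier solve does not.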
| Dynamic creation of interactive mixed reality presentations | | BIBAK | Full-Text | 167-176 | |
| Krzysztof Walczak; Rafal Wojciechowski | |||
| In this paper, we describe a method of dynamic creation of interactive
presentations for Mixed Reality environments. The presentations are created
automatically for collections of multimedia objects arbitrarily arranged in
virtual presentation spaces stored in a database. Users can navigate between
the spaces using Web and 3D multimedia content. The objects in each space are
presented through a presentation template, which determines both the
visualization and the interaction aspects of the presentation. The possible
user interactions are described using a novel language called MR-ISL. The
presentation templates are coded in X-VRML, a high-level modeling language. The
method can be used in various application domains. Examples discussed in the
paper relate to the presentation of cultural heritage and educational resources. Keywords: VRML, X-VRML, interaction scenarios, mixed reality, virtual reality | |||
| Integrated levels of detail | | BIBAK | Full-Text | 177-183 | |
| Georgios Stylianou; Yiorgos Chrysanthou | |||
| We introduce a new mesh representation for arbitrary surfaces that
integrates different levels of detail into the final representation. It is
produced after remeshing an existing model and omits storing connectivity
information. Switching between resolutions can be instantly accomplished
without extra computation. This representation is generated by chartifying
the initial mesh, then parametrizing and re-meshing each chart using a regular grid
of control points in a multilevel approach. Finally, the model becomes
watertight by hierarchically stitching each chart's boundary points and
normals. Keywords: multiresolution, remeshing, surface parametrization | |||
| Generating enhanced natural environments and terrain for interactive combat simulations (GENETICS) | | BIBAK | Full-Text | 184-191 | |
| William D. Wells | |||
| Virtual battlefields devoid of vegetation deprive soldiers of valuable
training in the critical aspects of terrain tactics and terrain-based
situational awareness. Creating believable landscapes by hand is notoriously
expensive, requiring both proprietary tools and trained artists, which hampers
rapid scenario generation and limits reuse. Our approach constructs large-scale
natural environments at run-time using a procedural image-based algorithm
without the need for artists or proprietary tools.
This paper discusses the current state of the open source project GENETICS (Generating Enhanced Natural Environments and Terrain for Interactive Combat Simulations) and how simulationists can use GENETICS to quickly and cheaply build large-scale natural environments to improve training effectiveness. It will also briefly touch upon level-of-detail techniques and ecotype modeling. Keywords: automated vegetation placement, landscape visualization, run-time terrain
database generation | |||
| Real time tracking of high speed movements in the context of a table tennis application | | BIBAK | Full-Text | 192-200 | |
| Stephan Rusdorf; Guido Brunnett | |||
| In this paper we summarize our experiences with the implementation
of a table tennis application. After describing the hardware requirements of our
system we give insight into different aspects of the simulation. These include
collision detection, physical simulation and some aspects of the design of the
virtual opponent.
Since table tennis is one of the fastest sports, synchronizing the player's movements with the visual output on the projection wall is the most challenging problem to solve. Therefore we analysed the latencies of all subcomponents of our system and designed a prediction method that allows high-speed interaction with our application. Keywords: collision detection, latency, prediction, table tennis, tracking, virtual
reality | |||
| A general method for comparing the expected performance of tracking and motion capture systems | | BIBAK | Full-Text | 201-210 | |
| B. Danette Allen; Greg Welch | |||
| We introduce a general method for evaluating and comparing the expected
performance of sensing systems for interactive computer graphics. Example
applications include head tracking systems for virtual environments, motion
capture systems for movies, and even multi-camera 3D vision systems for
image-based visual hulls.
Our approach is to estimate the asymptotic position and/or orientation uncertainty at many points throughout the desired working volume, and to visualize the results graphically. This global performance estimation can provide both a quantitative assessment of the expected performance, and intuition about the type and arrangement of sources and sensors, in the context of the desired working volume and expected scene dynamics. Keywords: computer vision, covariance analysis, information visualization, motion
capture, sensor fusion, tracking, virtual environments | |||
| Object deformation and force feedback for virtual chopsticks | | BIBAK | Full-Text | 211-219 | |
| Yoshifumi Kitamura; Ken'ichi Douko; Makoto Kitayama; Fumio Kishino | |||
| This paper proposes a virtual chopsticks system using force feedback and
object deformation with FEM (finite element model). The force feedback model is
established using a lever model based on the correct chopsticks handling
manner, and the force is applied to the index and middle fingers. The object
deformation is obtained in real time by calculating the inverse stiffness matrix
beforehand. We performed experiments to compare the hardness of virtual
objects. As a result, we found that a recognition rate of almost 100% can be
achieved between virtual objects where the logarithmic difference in hardness
is 0.4 or more, while lower recognition rates are obtained when the difference
in hardness is smaller than this. Keywords: FEM, deformation, force feedback, object manipulation, virtual chopsticks,
virtual environment | |||
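The real-time deformation trick is standard for linear FEM: with small displacements, the stiffness matrix relates forces to displacements as K u = f, so inverting (or factorizing) K once offline reduces each frame to a matrix-vector product. A toy sketch with a placeholder K; the chopstick contact model itself is the paper's and is not reproduced here:

```python
# Hedged sketch of precomputed-inverse linear FEM deformation.
import numpy as np

# Offline: assemble the stiffness matrix K for the mesh, then invert once.
n_dof = 300                       # e.g. 100 nodes x 3 coordinates (illustrative)
K = np.eye(n_dof) * 50.0          # placeholder for an assembled FEM stiffness matrix
K_inv = np.linalg.inv(K)          # expensive, but done once before simulation

def deform(contact_dofs, contact_forces):
    """Per-frame: map chopstick contact forces to node displacements u = K^-1 f."""
    f = np.zeros(n_dof)
    f[contact_dofs] = contact_forces
    return K_inv @ f              # one matrix-vector product instead of a fresh solve
```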
| Search and transitioning for motion captured sequences | | BIBAK | Full-Text | 220-223 | |
| Suddha Basu; Shrinath Shanbhag; Sharat Chandran | |||
| Animators today have started using motion captured (mocap) sequences to
drive characters. Mocap allows rapid acquisition of highly realistic animation
data. Consequently animators have at their disposal an enormous amount of mocap
sequences which ironically has created a new retrieval problem. Thus, while
working with mocap databases, an animator often needs to work with a subset of
"useful" clips. Once the animator selects a candidate working set of motion
clips, she then needs to identify appropriate transition points amongst these
clips for maximal reuse.
In this paper, we describe methods for querying mocap databases and identifying transitions for a given set of clips. We preprocess clips (and clip subsequences), and precompute frame locations to allow interactive stitching. In contrast with existing methods that view each individual clip as a node, we reduce the granularity for optimal reuse. Keywords: motion capture, motion synthesis, query by example, transition | |||
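Transition points between clips are typically found by thresholding a frame-to-frame distance matrix. A minimal sketch of that generic motion-graph style approach; the plain joint-angle metric and the threshold are assumptions, since the paper's actual measure is not given in the abstract:

```python
# Hedged sketch of frame-distance transition detection between two clips.
import numpy as np

def transition_points(clip_a, clip_b, threshold):
    """clip_*: (frames, dofs) arrays of joint angles.
    Returns (i, j) index pairs where frame i of A can stitch to frame j of B."""
    # Pairwise Euclidean distances between all frames of the two clips.
    diff = clip_a[:, None, :] - clip_b[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    return np.argwhere(dist < threshold)
```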
| Laser scanning for the interactive walk-through fogScreen | | BIBAK | Full-Text | 224-226 | |
| Ismo Rakkolainen; Karri Palovuori | |||
| FogScreen is a free-space 2D projection screen that makes it possible to
touch and walk through an immaterial image. FogScreen consists of flowing air
with a little visible humidity in the center of the flow; it enables
high-quality projected images in thin air, and many new applications.
The FogScreen can be made interactive. Robust tracking of the user's pointing is a key element of the interactive system. In this short paper we present robust tracking employing laser scanning. Keywords: fogScreen, touch screen, tracking, walk-through screen | |||
| POLAR: portable, optical see-through, low-cost augmented reality | | BIBAK | Full-Text | 227-230 | |
| Alex Olwal; Tobias Höllerer | |||
| We describe POLAR, a portable, optical see-through, low-cost augmented
reality system, which allows a user to see annotated views of small to
medium-sized physical objects in an unencumbered way. No display or tracking
equipment needs to be worn. We describe the system design, including a hybrid
IR/vision head-tracking solution, and present examples of simple augmented
scenes. POLAR's compactness could allow it to be used as a lightweight and
portable PC peripheral for providing mobile users with on-demand AR access in
field work. Keywords: augmented reality, compact, low-cost, optical see-through, portable,
projection | |||
| Experiences in driving a cave with IBM scalable graphics engine-3 (SGE-3) prototypes | | BIBAK | Full-Text | 231-234 | |
| Prabhat; Samuel G. Fulcomer | |||
| The IBM Scalable Graphics Engine-3 (SGE-3) prototype is a network attached
framebuffer. In its application at Brown, it is a pixel compositor for
distributed rendering and a video source for frame-sequential stereo display in
a four-wall CAVE-like display, a TAN VR-Cube. The configuration uses 4 SGE-3
prototype units (one per display wall) and 48 rendering nodes (12 per display
wall; 6 per stereo field). With favorable rendering distribution, achieved
performance per 12-node wall has been up to five times that of a single
graphics card. This report provides details of the cluster systems architecture
and the performance characteristics of the SGE-3 and sample test applications. Keywords: cave, compositors, distributed graphics, virtual reality | |||
| Efficient compression and delivery of stored motion data for avatar animation in resource constrained devices | | BIBAK | Full-Text | 235-243 | |
| Siddhartha Chattopadhyay; Suchendra M. Bhandarkar; Kang Li | |||
| Animation of Virtual Humans (avatars) is done typically using motion data
files that are stored on a client or streaming motion data from a server.
Several modern applications require avatar animation in mobile networked
virtual environments comprising power-constrained clients such as PDAs,
Pocket PCs and notebook PCs operating in battery mode. These applications call
for efficient compression of the motion animation data in order to conserve
network bandwidth, and save power at the client side during data reception and
motion data reconstruction from the compressed file. In this paper, we have
proposed and implemented a novel file format, termed the Quantized Motion Data
(QMD) format, which enables significant, though lossy, compression of the
motion data. The motion distortion resulting from the reconstructed motion from
the QMD file is minimized by intelligent use of the hierarchical structure of
the skeletal avatar model. The compression gained by using the QMD files for
the motion data is more than twice that achieved via standard MPEG-4
compression, which uses a pipeline of quantization, predictive encoding and arithmetic
coding. In addition, considerably fewer CPU cycles are needed to reconstruct
the motion data from the QMD files compared to motion data compressed using the
MPEG-4 standard. Keywords: avatar animation, distributed virtual reality, human motion | |||
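The quantization-plus-prediction idea can be illustrated with a toy codec: quantize joint angles, store frame-to-frame deltas (small integers that compress well downstream), and reconstruct with a cumulative sum, which is cheap on power-constrained clients. The bit width, scaling, and layout below are illustrative assumptions, not the QMD format:

```python
# Hedged sketch of quantized, delta-coded motion storage in the spirit of QMD.
import numpy as np

def encode(angles, bits=8):
    """angles: (frames, dofs) joint angles in degrees, assumed within +/-180.
    Store a quantized first frame plus quantized frame-to-frame deltas."""
    scale = (2**(bits - 1) - 1) / 180.0
    q = np.round(angles * scale).astype(np.int16)
    deltas = np.diff(q, axis=0)              # small integers -> compress well
    return q[0], deltas, scale

def decode(first, deltas, scale):
    """Reconstruction is a cumulative sum: few CPU cycles per frame."""
    q = np.vstack([first, deltas]).cumsum(axis=0)
    return q / scale
```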
| Simulating virtual crowds in emergency situations | | BIBAK | Full-Text | 244-252 | |
| Adriana Braun; Bardo E. J. Bodmann; Soraia R. Musse | |||
| This paper presents a novel approach to simulate virtual human crowds in
emergency situations. Our model is based on two previous works, on a physical
model proposed by Helbing, where individuals are represented by a particle
system affected by "social forces" that impels them to go to a point-objective,
while avoiding collisions with obstacles and other agents. As a new property,
the virtual agents are endowed with different attributes and individualities as
proposed by Braun et al. The main contributions of this paper are the treatment
of complex environments and their implications on agents' movement, the
management of alarms distributed in space, the virtual agents endowed with
perception of emergency events and their consequent reaction as well as changes
in their individualities. The prototype reads an XML file where different
scenarios can be simulated, such as the characteristics of population, the
virtual scene description, the alarm configuration and the properties of
hazardous events. As output, the prototype generates information in order to
measure the impact of parameters on saved, injured and dead agents. In
addition, some results and validation are discussed. Keywords: behavioral animation, crowd simulation, physically based animation | |||
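For readers unfamiliar with Helbing's model, the underlying particle update combines a goal-directed driving force with pairwise repulsion between agents. The exponential form and coefficients below follow a common textbook variant and are illustrative; the individual attributes and alarm handling the paper adds are omitted:

```python
# Hedged sketch of a Helbing-style social-force update (illustrative constants).
import numpy as np

def social_force_step(pos, vel, goals, dt=0.05, desired_speed=1.3,
                      tau=0.5, repel=2.0, sigma=0.3):
    """pos, vel, goals: (N, 2) arrays; one explicit Euler step of the model."""
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    drive = (desired_speed * to_goal / dist - vel) / tau   # relax toward goal

    offset = pos[:, None, :] - pos[None, :, :]             # pairwise offsets
    d = np.linalg.norm(offset, axis=2) + 1e-9
    np.fill_diagonal(d, np.inf)                            # no self-repulsion
    weight = repel * np.exp(-d / sigma) / d                # exponential falloff
    force = drive + (weight[:, :, None] * offset).sum(axis=1)

    vel = vel + force * dt
    return pos + vel * dt, vel
```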
| Motion normalization: the preprocess of motion data | | BIBAK | Full-Text | 253-256 | |
| Yan Gao; Lizhuang Ma; Zhihua Chen; Xiaomao Wu | |||
| In this paper, we present an online algorithm to normalize all motion data
in a database to a common skeleton length. Our algorithm is simple and
efficient. The input motion stream is processed sequentially, and the
computation for a single frame at each step requires only the results from the
previous step over a neighborhood of nearby backward frames. In contrast to
previous motion retargeting approaches, we simplify the constraint conditions
of the retargeting problem, which leads to simpler solutions. Moreover, we
improve Shin et al.'s algorithm [10], which is adopted by Kovar's widely used
footskate cleanup algorithm [6], by adding one case that it misses. Keywords: motion capture, motion normalization, motion retargeting | |||
| Automatic generation of personalized human avatars from multi-view video | | BIBAK | Full-Text | 257-260 | |
| Naveed Ahmed; Edilson de Aguiar; Christian Theobalt; Marcus Magnor; Hans-Peter Seidel | |||
| In multi-user virtual environments real-world people interact via digital
avatars. In order to make the step from the real world onto the virtual stage
convincing, the digital equivalent of the user has to be personalized. It should
reflect the shape and proportions, the kinematic properties, as well as the
textural appearance of its real-world equivalent. In this paper, we present a
novel spatio-temporal approach to create a personalized avatar from multi-view
video data of a moving person. The avatar's geometry is generated by
shape-adapting a template human body model. A consistent surface texture for
the model is assembled from multi-view video frames taken from different
camera views and showing arbitrarily different body poses. With our proposed method
photo-realistic human avatars can be robustly generated. Keywords: avatar creation, shape deformation, texturing, virtual reality | |||