| Light control and 3D video: building blocks for telepresence of the future | | BIBA | Full-Text | 8 | |
| Markus Gross | |||
| The understanding and the conception of systems for telepresence has been a
long-standing problem in research and development. In order to convey a true
sense of presence, a variety of technical and perceptual factors have to be
considered including visual, auditory, and tactile cues. While there has been
significant progress in the design of telepresence systems in recent years, we
are still far from our ultimate goal of the "holodeck experience." Yet, there
are two distinct technological building blocks for enabling telepresence in a
controlled environment: the first one relates to the ability to control light
on surfaces, either through intelligent projection or through active surface
imaging. The second one comprises holographic video, that is, a fully
3-dimensional representation of a remote scene in real time. The proliferation
of increasingly low-cost, small, and high quality digital video cameras,
time-of-flight sensors, and projectors constitutes an important infrastructure
to implement these building blocks.
In this talk I will discuss the variety of technical challenges underlying simultaneous display and acquisition, such as synchronization, calibration, 3D reconstruction, recognition, modeling, and rendering. I will summarize our own experiences in system design, from monolithic projection theatres, such as blue-c, towards modular, lightweight setups. I will illustrate how projector-camera modules can be utilized for 3D video acquisition, displays on demand, spatial light control, tabletop surface computing, interactive visual tracking, and recalibration. I will present my vision for telepresence of the future and introduce the ETH program for Research and Understanding of Telepresence. | |||
| Dive into the movie: an instant casting and immersive experience in the story | | BIBA | Full-Text | 9 | |
| Shigeo Morishima | |||
Our research project, Dive into the Movie (DIM), aims to build a new genre of interactive entertainment that enables anyone to easily participate in a movie by assuming a role and enjoying an embodied, first-hand theater experience. This is accomplished by replacing the original roles of a pre-created traditional movie with user-created, high-realism 3-D CG characters. A DIM movie is in some sense a hybrid entertainment form, somewhere between a game and storytelling. We hope that DIM movies might enhance interaction and offer more dramatic presence, engagement, and fun for the audience. In a DIM movie, audiences can experience high-realism 3-D CG character action with individualized facial characteristics, expressions, gait, and voice. The DIM system has two key features. First, it can fully automatically create a CG character in a few minutes, from capturing the face, body, gait, and voice features of a user and generating his/her corresponding CG animation, to inserting the individualized CG character into the movie in real time, without causing any discomfort to the participant. Second, the DIM system makes it possible for multiple participants, such as a family or a circle of friends, to take part in a movie at the same time in different roles. The DIM project also proposes a panoramic image capture/projection system and a 3D sound field capture/playback system from the performer's viewpoint and standing position. | |||
| A taxonomy of (real and virtual world) display and control interactions | | BIBA | Full-Text | 10 | |
| Paul Milgram | |||
| Interactions between a human operator and objects in his/her world afford a
wide range of possible viewing and control options, which can vary with respect
to time, space, proximity, and frame of reference. Whereas it is conventionally
recognised that essentially any kind of interaction metaphor can in principle
be simulated in a virtual environment, modern image processing technology now
permits greatly increased flexibility also for real world interactions with
indirect viewing, as the common video camera has gone beyond
being a simple eye on the (remote) real world to being an instrument that is
able to integrate and interpolate real world images spatially and temporally,
in real time.
In the talk, I shall propose a framework for classifying viewing and manipulation interactions for any task where visual feedback is provided. The framework involves identifying key components of the environment that are central to the interaction, in terms of the multidimensional couplings among the components, where the configuration and characteristics of the couplings determine the nature of the visual and manual control interactions experienced by the operator. The framework is domain independent, and is intended to be used both to classify current display-control interactions and to identify future areas of research. | |||
| Wearable imaging system for capturing omnidirectional movies from a first-person perspective | | BIBAK | Full-Text | 11-18 | |
| Kazuaki Kondo; Yasuhiro Mukaigawa; Yasushi Yagi | |||
| We propose a novel wearable imaging system that can capture omnidirectional
movies from the viewpoint of the camera wearer. The imaging system solves the
problems of resolution uniformity and gaze matching that conventional
approaches do not address. We combine cameras with curved mirrors that control
the projection of the imaging system to produce uniform resolution. Use of the
mirrors also enables the viewpoint to be moved closer to the eyes of the camera
wearer, thus reducing gaze mismatching. The optics, including the curved
mirror, have been designed to form an objective projection. The capability of
the designed optics is evaluated with respect to resolution, aberration, and
gaze matching. We have developed a prototype based on the designed optics for
practical use. The capability of the prototype and the effectiveness of first-person
perspective omnidirectional movies were demonstrated through quantitative
evaluations and presentation experiments with ordinary people, respectively. Keywords: first-person perspective, omnidirectional imaging system, wearable camera | |||
| A wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror | | BIBAK | Full-Text | 19-22 | |
| Erika Sumiya; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura | |||
| In this paper, we propose a wide-view parallax-free eye-mark recorder with a
hyperboloidal half-silvered mirror. Our eye-mark recorder provides a wide
field-of-view (FOV) video recording of the user's exact view by positioning the
focal point of the mirror at the user's viewpoint. The vertical view angle of
the prototype is 122 [deg] (elevation and depression angles are 38 and 84
[deg], respectively) and its horizontal view angle is 116 [deg] (nasal and
temporal view angles are 38 and 78 [deg], respectively). We have implemented
and evaluated a gaze estimation method for our eye-mark recorder. Experimental
results have verified that our eye-mark recorder successfully captures a wide
FOV of a user and estimates a rough gaze direction. Keywords: eye-mark recorder, gaze estimation, half-silvered hyperboloidal mirror,
head-mounted camera | |||
| Study on design of controllable particle display using water drops suitable for light environment | | BIBAK | Full-Text | 23-26 | |
| Shin-ichiro Eitoku; Kunihiro Nishimura; Tomohiro Tanikawa; Michitaka Hirose | |||
A controllable particle display has been proposed that controls the positions
and blinking patterns of particles to yield a visual representation that users
can touch. Additionally, as an example of its implementation, a
controllable particle display using water drops as the particles was proposed.
In this system, objects are represented by projecting images upward onto
falling water drops designed to form a plane surface, depending on the
positions of the water drops. However, this method has a problem in terms of
the brightness of the object. In this paper, we propose a method by which
images are projected onto falling water drops at an angle, and users observe
the images from in front of the projector. Keywords: public space, volumetric display, water drop | |||
| Tearable: haptic display that presents a sense of tearing real paper | | BIBAK | Full-Text | 27-30 | |
| Takuya Maekawa; Yuichi Itoh; Keisuke Takamoto; Kiyotaka Tamada; Takashi Maeda; Yoshifumi Kitamura; Fumio Kishino | |||
| We propose a novel interface called Tearable that allows users to
continuously experience the real sense of tearing paper. To provide such a real
sense, we measured and analyzed the actual vibration data produced by tearing
a piece of real paper. Based on this data, we utilized hook-and-loop fasteners
and a DC motor to represent the sense of tearing. We compared the force
produced by Tearable with that of a piece of real paper and confirmed its
reproducibility and usability. In addition, we evaluated Tearable with
questionnaires after user experiences. Keywords: haptic display | |||
| Camera-based OBDP locomotion system | | BIBAK | Full-Text | 31-34 | |
| Minghadi Suryajaya; Tim Lambert; Chris Fowler | |||
| In virtual reality, locomotion is a key factor in making a simulation
immersive. Real walking is the most intuitive way for people to move about,
providing a better sense of presence than walking-in-place or flying [Usoh et
al. 1999]. We have built a locomotion system with a ball-bearing platform that
allows the user to walk in a natural fashion in any direction. The user's leg
motion is tracked with two cameras and turned into locomotion in the
simulation. We also track upper body motion and use this to animate the user's
avatar.
Our approach is less expensive than systems that involve complex mechanical arrangements, such as an omnidirectional treadmill [Darken et al. 1997], and more immersive than simple switch mechanisms such as the Walking-Pad [Bouguila et al. 2004]. Our system delivers real-time performance on a mid-tier computer with webcams. Keywords: locomotion interface, meanshift algorithm, stereo camera, virtual reality,
walking | |||
| Judgment of natural perspective projections in head-mounted display environments | | BIBAK | Full-Text | 35-42 | |
| Frank Steinicke; Gerd Bruder; Klaus Hinrichs; Scott Kuhl; Markus Lappe; Pete Willemsen | |||
The display units integrated in today's head-mounted displays (HMDs) provide
only a limited field of view (FOV) to the virtual world. In order to present an
undistorted view to the virtual environment (VE), the perspective projection
used to render the VE has to be adjusted to the limitations caused by the HMD
characteristics. In particular, the geometric field of view (GFOV), which
defines the virtual aperture angle used for rendering of the 3D scene, is set
up according to the display's field of view. A discrepancy between the
geometric and physical FOV distorts the geometry of the VE in a way that
either minifies or magnifies the imagery displayed to the user. This
distortion has the potential to negatively or positively affect a user's
perception of the virtual space, sense of presence, and performance on visual
search tasks.
In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which are identified as natural by subjects -- even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD -- in some cases up to 50%. Keywords: field of view, head-mounted displays, virtual reality | |||
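The abstract does not give the minification relation explicitly; a common way to quantify the mismatch between the two fields of view (a hedged reconstruction, not taken from the paper) is the ratio of the tangents of the two half-angles:

```latex
% Minification/magnification factor m relating the displayed angular size
% of a centered object to its intended size, for a physical display FOV
% \phi_d and a geometric FOV \phi_g used for rendering (symmetric frusta
% assumed):
m = \frac{\tan(\phi_d / 2)}{\tan(\phi_g / 2)}
% m < 1 (minification) when \phi_g > \phi_d;
% m > 1 (magnification) when \phi_g < \phi_d.
```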
| Gaze behavior and visual attention model when turning in virtual environments | | BIBAK | Full-Text | 43-50 | |
| Sébastien Hillaire; Anatole Lécuyer; Gaspard Breton; Tony Regia Corte | |||
| In this paper we analyze and try to predict the gaze behavior of users
navigating in virtual environments. We focus on first-person navigation in
virtual environments which involves forward and backward motions on a
ground-surface with turns toward the left or right. We found that gaze behavior
in virtual reality, with input devices like mice and keyboards, is similar to
the one observed in real life. Participants anticipated turns as in real life
conditions, i.e. when they can actually move their body and head. We also found
influences of visual occlusions and optic flow similar to the ones reported in
existing literature on real navigation. We then propose three simple gaze
prediction models taking as input: (1) the motion of the user as given by the
rotation velocity of the camera on the yaw axis (considered here as the virtual
heading direction), and/or (2) the optic flow on screen. These models were
tested with data collected in various virtual environments. Results show that
these models can significantly improve the prediction of gaze position on
screen, especially when turning, in the virtual environment. The model based on
rotation velocity of the camera seems to be the best trade-off between
simplicity and efficiency. We suggest that these models could be used in
several interactive applications using gaze point as input. They could also be
used as a new top-down component in any existing visual attention model. Keywords: first-person navigation, gaze behavior, gaze tracking, perception model,
visual attention | |||
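The abstract only names the model inputs; as an illustration, here is a minimal sketch (our assumption, not the authors' code) of the simplest of the three models, driven by camera yaw velocity, where the gain `k` is a hypothetical parameter that would have to be fitted to recorded gaze data:

```python
def predict_gaze_x(yaw_velocity, screen_width, k=0.35):
    """Predict the horizontal gaze position on screen from the camera's
    yaw rotation velocity (rad/s), anticipating turns: gaze shifts toward
    the inside of the turn, as the paper reports for real-life walking.

    k is a hypothetical gain mapping angular velocity to a normalized
    screen offset.
    """
    center = screen_width / 2.0
    offset = k * yaw_velocity * screen_width      # anticipate the turn
    x = center + offset
    return min(max(x, 0.0), screen_width)         # clamp to the screen

# Example: turning left at 0.5 rad/s on a 1920-px-wide display
print(predict_gaze_x(-0.5, 1920))  # 624.0 -- gaze predicted left of center
```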
| Influence of degrees of freedom's manipulation on performances during orientation tasks in virtual reality environments | | BIBAK | Full-Text | 51-58 | |
| Manuel Veit; Antonio Capobianco; Dominique Bechmann | |||
In this paper we investigate the influence of integrating versus separating
degrees of freedom (DOF) on user performance during 3-D orientation tasks. For
this purpose, we compare the performance and the level of DOF coordination
users achieved using two interaction techniques, one integrating and the other
separating the task's DOF. To evaluate the degree of coordination, we propose
a new behavioural measurement, called Magnitude of DOF Separation, which
captures the number of DOF manipulated simultaneously during an orientation
task. The results of our study suggest that users are unable to integrate the
manipulation of all the DOF throughout the task, even with a direct
manipulation technique. Moreover, if the interaction technique eases the
decomposition of the task, its use can lead to significant improvements in
completion times. This result suggests that simultaneous manipulation of all
the DOF does not necessarily lead to the best performance. Keywords: degrees of freedom, human-computer interaction, interaction technique,
measurements, rotation, virtual reality | |||
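The abstract defines the Magnitude of DOF Separation only informally; the sketch below is our guess at the underlying bookkeeping (the velocity threshold is a hypothetical parameter), counting how many rotational DOF are actively manipulated at each sample:

```python
import numpy as np

def dof_simultaneity(angular_velocities, threshold=0.05):
    """Count, per sample, how many rotational DOF are actively manipulated.

    angular_velocities: (T, 3) array of per-axis angular speeds (rad/s).
    threshold: hypothetical minimum speed for an axis to count as 'moving'.
    Returns an array of length T with values in 0..3.
    """
    active = np.abs(angular_velocities) > threshold
    return active.sum(axis=1)

# Example: one axis, then two, then all three axes moving at once
w = np.array([[0.2, 0.0, 0.01],
              [0.3, 0.1, 0.0],
              [0.2, 0.2, 0.2]])
print(dof_simultaneity(w))  # [1 2 3]
```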
| Analyzing the effect of a virtual avatar's geometric and motion fidelity on ego-centric spatial perception in immersive virtual environments | | BIBAK | Full-Text | 59-66 | |
| Brian Ries; Victoria Interrante; Michael Kaeding; Lane Phillips | |||
| Previous work has shown that giving a user a first-person virtual avatar can
increase the accuracy of their egocentric distance judgments in an immersive
virtual environment (IVE). This result provides one of the rare examples of a
manipulation that can enable improved spatial task performance in a virtual
environment without potentially compromising the ability for accurate
information transfer to the real world. However, many open questions about the
scope and limitations of the effectiveness of IVE avatar self-embodiment
remain. In this paper, we report the results of a series of four experiments,
involving a total of 40 participants, that explore how important a high level
of geometric and motion fidelity in the avatar representation is to the
desired outcome of enhanced spatial perception accuracy. In
these studies, we assess participants' abilities to estimate egocentric
distances in a novel virtual environment under four different conditions of
avatar self-embodiment: a) no avatar; b) a fully tracked, custom-fitted, high
fidelity avatar, represented using a textured triangle mesh; c) the same avatar
as in b) but implemented with single point rather than full body tracking; and
d) a fully tracked but simplified avatar, represented by a collection of small
spheres at the raw tracking marker locations. The goal of these investigations
is to attain insight into what specific characteristics of a virtual avatar
representation are most important to facilitating accurate spatial perception,
and what cost-saving measures in the avatar implementation might be possible.
Our results indicate that each of the simplified avatar implementations we
tested is significantly less effective than the full avatar in facilitating
accurate distance estimation; in fact, the participants who were given the
simplified avatar representations performed only marginally (but not
significantly) more accurately than the participants who were given no avatar
at all. These findings suggest that the beneficial impact of providing users
with a high fidelity avatar self-representation may stem less directly from the
low-level size and motion cues that the avatar embodiment makes available to
them than from the cognitive sense of presence that the self-embodiment
supports. Keywords: head mounted displays, immersive virtual environments, presence, spatial
perception, virtual avatars | |||
| Multi-modal exploration of small artifacts: an exhibition at the Gold Museum in Bogota | | BIBAK | Full-Text | 67-74 | |
| Pablo Figueroa; Mauricio Coral; Pierre Boulanger; Juan Borda; Eduardo Londoño; Felipe Vega; Flavio Prieto; Diego Restrepo | |||
| We present the iterative development and initial evaluation of a multi-modal
platform for interacting with precious small artifacts from the Gold Museum in
Bogota. By using a commercial haptic interface, loudspeakers, and stereo
displays, one can allow visitors to touch, hear, and observe in stereo those
precious artifacts. We use this multi-modal interface in a novel way and in a
novel context in order to provide virtual replicas that can be weighed,
cleaned, and explored as if they were close to a visitor's hand. This platform
is currently open to the public, and some of the lessons learned are reported
in terms of usability in a real-world museum application. Keywords: Museo del Oro Colombia, haptics, multimodal booth, virtual heritage | |||
| Haptic interaction with one-dimensional structures | | BIBAK | Full-Text | 75-78 | |
| Ugo Bonanni; Petr Kmoch; Nadia Magnenat-Thalmann | |||
| One-dimensional structures are very important for simulating a variety of
slender objects such as ropes, hair, wires, cables or tubes. Because of their
practical relevance, an increasing number of dynamic simulation methods have
been proposed in recent years. However, interaction techniques have not
followed this evolution. Hence, we propose to combine the recent advances in
the computation of physically plausible rod dynamics with dedicated force
rendering methods. We present a novel approach for computing the haptic
interaction with slender objects in a virtual environment. Our interaction
framework allows for an enhanced control over the rod by taking into account
user-induced torques in the dynamics equations. Interaction forces are computed
according to the rod's bending stiffness and frictional properties. Our force
rendering method can thus be applied to a variety of simulation models based on
the Cosserat Theory of Elastic Rods. The results of this paper are relevant for
including haptic feedback within applications involving 1D-rods, such as
virtual hair modeling systems for the digital effects industry, or assembly
simulation environments in the automotive industry using flexible parts such as
wire harnesses and hoses. Keywords: force feedback, haptics, multimodal interaction | |||
| Profiling the behaviour of 3D selection tasks on movement time when using natural haptic pointing gestures | | BIBA | Full-Text | 79-82 | |
| Vijay M. Pawar; Anthony Steed | |||
In this paper we profiled the performance of two types of 3D selection tasks: selection of one target and selection of two targets. We designed an Immersive Virtual Environment (IVE) to evaluate any differences that may exist and to understand the underlying human behaviour by recording the hand movements participants made when asked to select a series of 3D objects. To do this, we implemented a natural virtual-hand interaction technique that participants could control using a large-scale force-feedback device placed in a CAVE™-like IVE system. We also investigated the effects on user performance of no, soft, and hard haptic force-feedback responses, in addition to three target sizes. From the results obtained, we show distinct differences in the movement time taken when participants used their right hand to select one target in comparison to the selection of two targets. | |||
| Haptic augmented reality interface using the real force response of an object | | BIBAK | Full-Text | 83-86 | |
| Yuichi Kurita; Atsutoshi Ikeda; Takeshi Tamaki; Tsukasa Ogasawara; Kazuyuki Nagata | |||
This paper presents a haptic interface system that consists of a base object
and a haptic device. The desired force response is achieved by the combination
of the real force response of the base object and the virtual force exerted by
the haptic device. The proposed haptic augmented reality (AR) system can
easily generate the force response of a visco-elastic object with an
inexpensive haptic device and a base object whose visco-elastic properties are
similar to those of the target object. In the demonstration, the force
response of the target object was generated using a haptic device only (VR)
and using both a haptic device and a base object (AR). Evaluation experiments
with participants show that the AR method performs better than the VR method.
This result indicates the potential of the proposed haptic AR interface. Keywords: augmented reality, haptic interface | |||
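Reading the abstract, the device appears to render the residual between the desired response and what the base object already provides; a hedged formulation in our own notation, not the paper's:

```latex
% For tool displacement x, let F_target(x) be the desired visco-elastic
% response and F_base(x) the real response of the base object. The haptic
% device then only has to supply the residual force:
F_{\mathrm{device}}(x) = F_{\mathrm{target}}(x) - F_{\mathrm{base}}(x),
\qquad
F_{\mathrm{base}}(x) + F_{\mathrm{device}}(x) = F_{\mathrm{target}}(x)
```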
| Improving perceived hardness of haptic rendering via stiffness shifting: an initial study | | BIBAK | Full-Text | 87-90 | |
| Gabjong Han; Seokhee Jeon; Seungmoon Choi | |||
| Rendering a stiff virtual surface using a force-feedback haptic interface
has been one of the most classic and important research issues in haptics. In
this paper, we present an initial study for a novel haptic rendering technique,
named stiffness shifting, which greatly increases the perceived hardness of a
virtual surface. The key idea of stiffness shifting is to use a stiffness
profile that includes an instantaneous increment shortly after a contact. The
algorithm is very simple, and can be easily integrated into existing haptic
rendering algorithms for 3D objects. Furthermore, the perceptual performance of
the algorithm is impressive: a virtual wall rendered using stiffness shifting
is perceived to be as hard as one rendered using the common linear spring
model with 2.5 times higher stiffness. This result demonstrates the great
potential of stiffness shifting as a general means for improving the perceptual quality
of haptic rendering. Keywords: haptic rendering, hardness perception, stiffness | |||
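The abstract describes the stiffness profile only qualitatively; below is a minimal sketch under our own assumptions (the base stiffness, the shifted stiffness, and the 20 ms delay are hypothetical placeholders):

```python
def stiffness_shifting_force(penetration, t_since_contact,
                             k_base=500.0, k_shift=1500.0, t_shift=0.02):
    """Spring force for a virtual wall with a shifted stiffness profile.

    penetration: penetration depth into the wall (m), >= 0.
    t_since_contact: time elapsed since first contact (s).
    Shortly after contact (t_shift, a hypothetical 20 ms here), the
    stiffness jumps from k_base to k_shift -- the instantaneous increment
    that the paper reports increases the perceived hardness of the wall.
    """
    k = k_base if t_since_contact < t_shift else k_shift
    return k * penetration  # restoring force along the surface normal (N)

# Example: same penetration depth, before and after the shift
print(stiffness_shifting_force(0.002, 0.01))  # 1.0 N (base stiffness)
print(stiffness_shifting_force(0.002, 0.05))  # 3.0 N (shifted stiffness)
```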
| Virtual bone drilling for dental implant surgery training | | BIBAK | Full-Text | 91-94 | |
| Kimin Kim; Jinah Park | |||
Mechanical removal of bone material is the most critical procedure during
dental implant surgery because it can jeopardize patient safety in several
ways, such as damage to the mandibular canal or piercing of the maxillary
sinus. Recognizing the effectiveness of virtual training, many simulators with
haptic feedback have been proposed. Although drill bits come in many
varieties, most previously developed simulators consider only a spherically
shaped tool because of the simplicity of its tool-bone interaction. In this
paper, we propose a new simulation method that can handle arbitrarily shaped
tools with multiple contacts between the tool and the bone. The tool is
represented by a signed-distance field, and the bone is represented as voxels
surrounded by a point shell. As bone elements are chipped away, the point
shell is updated to reflect the deformation of the bone in real time, while
the collision detection and the reflected force are efficiently and accurately
computed from the distance field encoded in the tool. We also present
experimental results with 12 dental implantologists to evaluate the realism of
the proposed simulator. Keywords: dental implant surgery, haptics, virtual drilling | |||
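A minimal sketch of the force computation as we understand it from the abstract (the SDF callables and the penalty stiffness are assumptions): each point of the bone's point shell queries the tool's signed-distance field, and penetrating points contribute penalty forces along the field gradient.

```python
import numpy as np

def contact_force(points, sdf, sdf_grad, k=800.0):
    """Accumulate a penalty force on the tool from a voxel bone's point shell.

    points:   (N, 3) point-shell samples on the bone surface, in tool space.
    sdf:      callable, signed distance to the tool surface (< 0 inside).
    sdf_grad: callable, gradient of the SDF (points out of the tool).
    k:        hypothetical penalty stiffness (N/m).
    """
    total = np.zeros(3)
    for p in points:
        d = sdf(p)
        if d < 0.0:                     # point penetrates the tool
            n = sdf_grad(p)
            n /= np.linalg.norm(n)      # unit direction out of the tool
            total += k * (-d) * (-n)    # reaction pushes the tool away
    return total

# Example: spherical tool of radius 5 mm centered at the origin
r = 0.005
sphere_sdf  = lambda p: np.linalg.norm(p) - r
sphere_grad = lambda p: p / np.linalg.norm(p)
shell = np.array([[0.004, 0.0, 0.0]])   # one point 1 mm inside the tool
print(contact_force(shell, sphere_sdf, sphere_grad))  # [-0.8  0.  0.]
```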
| A generic real-time rendering approach for structural colors | | BIBAK | Full-Text | 95-102 | |
| Masataka Imura; Osamu Oshiro; Masahiko Saeki; Yoshitsugu Manabe; Kunihiro Chihara; Yoshihiro Yasumuro | |||
| Colors in nature can be classified into two categories: pigment colors and
structural colors. Structural colors, which are caused by optical path
differences of reflected rays in microstructures, change depending on viewing
angle and lighting conditions. In the present paper, we propose a generic
approach for rendering structural colors in real-time. The proposed method uses
optical path differences as common parameters to allow unified treatment of
various types of microstructures, such as thin films, multilayer films, and
diffraction gratings. To achieve real-time rendering, we store pre-computed
information related to specific microstructures and lighting conditions in
several kinds of textures. The textures are used as look-up tables in the
rendering process. The proposed method can be applied to objects of
arbitrary shape and enables rendering from any viewing direction and under any
lighting conditions. Keywords: optical path difference, real-time rendering, structural color, texture
representation | |||
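As a concrete instance of the common parameter the abstract mentions, the classic thin-film case (standard optics, not taken from the paper) gives the optical path difference and the resulting interference phase:

```latex
% Optical path difference for a thin film of refractive index n and
% thickness d, with refraction angle \theta_t inside the film:
\Delta = 2 n d \cos\theta_t
% Phase difference between the two reflected rays at wavelength \lambda
% (an extra half-wave shift may apply at one interface):
\delta = \frac{2\pi \Delta}{\lambda}
% Reflection is constructive where \delta is a multiple of 2\pi, so the
% reflected color varies with viewing angle and wavelength.
```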
| GPU acceleration of stereoscopic and multi-view rendering for virtual reality applications | | BIBAK | Full-Text | 103-110 | |
| Jonathan Marbach | |||
| Stereo and Multi-View rendering of three-dimensional virtual environments
can be accelerated using modern GPU features such as geometry shaders and
layered rendering, allowing multiple images to be generated in a single
geometry pass. These same capabilities can be used to generate the multiple
views necessary for co-present multi-user projection environments. Previous
work has demonstrated the feasibility of applying such techniques, but has not
shown under what circumstances these techniques provide increased or decreased
rendering performance. This paper provides a detailed analysis of the
performance of single-pass stereo and multi-view generation techniques and
provides guidelines for when their application is beneficial to rendering
performance. Keywords: geometry shader, layered rendering, multi-viewer images, stereoscopic
rendering, virtual reality | |||
| A particle-based method for viscoelastic fluids animation | | BIBAK | Full-Text | 111-117 | |
| Yuanzhang Chang; Kai Bao; Youquan Liu; Jian Zhu; Enhua Wu | |||
| We present a particle-based method for viscoelastic fluids simulation. In
the method, based on the traditional Navier-Stokes equation, an additional
elastic stress term is introduced to achieve viscoelastic flow behaviors, which
have both fluid and solid features. Benefiting from the Lagrangian nature of
Smoothed Particle Hydrodynamics, large flow deformation can be handled more
easily and naturally. Moreover, by changing the viscosity and elastic stress
coefficients of the particles according to temperature variation, melting and
flowing phenomena, such as lava flow and wax melting, are achieved. The
temperature evolution is determined by the heat diffusion equation. The method
is effective and efficient, and offers good controllability. Different kinds
of viscoelastic fluid behaviors can be obtained easily by adjusting a small
number of experimental parameters. Keywords: heat diffusion, melting, smoothed particle hydrodynamics, viscoelastic
fluids | |||
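The abstract names the extra elastic stress term without writing it out; in standard SPH notation the modified momentum equation would plausibly read as follows (our reconstruction, not copied from the paper):

```latex
% Navier-Stokes momentum equation augmented with an elastic stress
% tensor S (the additional term the abstract introduces):
\rho \frac{D\mathbf{v}}{Dt}
  = -\nabla p + \mu \nabla^2 \mathbf{v} + \nabla \cdot \mathbf{S}
    + \rho \mathbf{g}
% \rho: density, p: pressure, \mu: viscosity, \mathbf{g}: gravity.
% Making \mu and the coefficient of \mathbf{S} temperature-dependent
% yields the melting and flowing behaviors described in the abstract.
```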
| The virtual magic lantern: an interaction metaphor for enhanced medical data inspection | | BIBAK | Full-Text | 119-122 | |
| Eva Monclús; José Díaz; Isabel Navazo; Pere-Pau Vázquez | |||
| In Volume Rendering, it is difficult to simultaneously visualize interior
and exterior structures. Several approaches have been developed to solve this
problem, such as cut-away or exploded views. Nevertheless, in most cases,
these algorithms require either preprocessing of the data or an accurate
determination of the region of interest prior to data inspection.
In this paper we present the Virtual Magic Lantern (VML), an interaction tool tailored to facilitate volumetric data inspection. It behaves like a lantern whose virtual illumination cone provides a focal region that is visualized using a secondary transfer function or a different rendering style. This may be used for simple visual inspection, surgery planning, or injury diagnosis. The VML is a particularly friendly and intuitive interaction tool suited to an immersive Virtual Reality setup with a large screen, where the user moves a Wanda device like a lantern pointed at the model. We show that this inspection metaphor can be efficiently and easily adapted to a GPU ray-casting volume visualization algorithm. We also present the Virtual Magic Window (VMW) metaphor as an efficient derivative of the VML: a restricted case in which the lantern illuminates along the viewing direction, through a virtual window created as the intersection of the virtual lantern (guided by the Wanda device) and the bounding box of the volume. Keywords: interaction, medical models, virtual reality | |||
| Sizing avatars from skin weights | | BIBAK | Full-Text | 123-126 | |
| Mustafa Kasap; Nadia Magnenat-Thalmann | |||
In current computer games and simulation environments, the individuality of
virtual character bodies is mainly conveyed using different textures and
accessories. However, this type of modeling generates anthropometrically
similar shapes because it relies on a single body model or a few of them.
Alternatively, using a large variety of body-size models requires more storage
resources and design effort. We present an efficient method for generating and
storing a variety of body-size models derived from a skinned template. Our
method doesn't require additional design effort and uses the existing skinning
data already attached to the template model. The algorithm used for sizing the
model is based on anthropometric body measurement standards used in ergonomic
design applications. The resulting body-size models reuse the same skinning
information for animation by adapting the underlying skeleton according to the
anthropometric parameters. Our system is useful in CAD applications ranging
from the ergonomic design of clothes to parametrically resizing
avatars. Keywords: anthropometry, deformation, multi-scale models, virtual human | |||
| Relocalization using virtual keyframes for online environment map construction | | BIBAK | Full-Text | 127-134 | |
| Sehwan Kim; Christopher Coffin; Tobias Höllerer | |||
| The acquisition of surround-view panoramas using a single hand-held or
head-worn camera relies on robust real-time camera orientation tracking. In
the absence of robust tracking recovery methods, the complete acquisition
process has to be restarted when tracking fails. This paper presents a
methodology for camera orientation relocalization, using virtual keyframes for online
environment map construction. Instead of relying on real keyframes from
incoming video, the proposed approach enables camera orientation relocalization
by employing virtual keyframes which are distributed strategically within an
environment map. We discuss our insights about a suitable number and
distribution of virtual keyframes, as suggested by our experiments on virtual
keyframe generation and orientation relocalization. After a shading correction
step, we relocalize camera orientation in real-time by comparing the current
camera frame to virtual keyframes. While expanding the captured environment
map, we continue to simultaneously generate virtual keyframes within the
completed portion of the map, as descriptors to estimate camera orientation. We
implemented our camera orientation relocalizer with the help of a GPU fragment
shader for real-time application, and evaluated the speed and accuracy of the
proposed approach. Keywords: camera pose relocalization, environment map, virtual key frame, vision-based
tracking | |||
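A minimal sketch of the relocalization loop as we read the abstract (the rendering and similarity callables are assumed placeholders): virtual keyframes are rendered from the partially completed environment map at strategically sampled orientations, and the live frame is matched against them to recover the camera orientation.

```python
def build_virtual_keyframes(render_from_env_map, orientations):
    """Render one virtual keyframe per sampled orientation of the
    (partially completed) environment map."""
    return [(q, render_from_env_map(q)) for q in orientations]

def relocalize(frame, keyframes, similarity):
    """Return the orientation of the best-matching virtual keyframe.

    similarity: a placeholder scoring function, e.g. normalized
    cross-correlation applied after the shading-correction step the
    paper mentions.
    """
    best_q, best_score = None, float("-inf")
    for q, key_img in keyframes:
        score = similarity(frame, key_img)
        if score > best_score:
            best_q, best_score = q, score
    return best_q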
| SparseSPOT: using a priori 3-D tracking for real-time multi-person voxel reconstruction | | BIBA | Full-Text | 135-138 | |
| Anuraag Sridhar; Arcot Sowmya | |||
Voxel reconstruction has received increasing interest in recent times, driven by the need for efficient reconstructions of real-world scenes from video images. The voxel model has proven useful for activity recognition and motion capture technologies. However, most current voxel reconstruction algorithms operate on a fairly small 3-D real-world volume and only allow a single person to be reconstructed. In this paper we present SparseSPOT, an extension of the SPOT voxel reconstruction algorithm that enables real-time reconstruction of multiple humans within a large environment. We compare SparseSPOT to SPOT and show, through extensive experimental evaluation, that the former achieves superior real-time performance. | |||
| Standalone edge-based markerless tracking of fully 3-dimensional objects for handheld augmented reality | | BIBAK | Full-Text | 139-142 | |
| João P. Lima; Veronica Teichrieb; Judith Kelner; Robert W. Lindeman | |||
| This paper presents a markerless tracking technique targeted to the Windows
Mobile Pocket PC platform. The primary aim of this work is to allow the
development of standalone augmented reality applications for handheld devices
based on natural feature tracking of fully 3-Dimensional objects. In order to
achieve this goal, a model-based tracking approach that relies on edge
information was adopted. Since it does not require high processing power, it is
suitable for constrained devices such as handhelds. The OpenGL ES graphics
library was used to detect the visible edges in a given frame, taking advantage
of graphics hardware acceleration when available. In addition, a subset of two
computer vision libraries was ported to the Pocket PC platform in order to
provide some required algorithms to the markerless mobile solution. They were
also adapted to use fixed-point math, with the purpose of improving the overall
performance of the routines. The port of these libraries opens up the
possibility of having other computer-vision tasks being executed on mobile
platforms. An augmented reality application was created using the implemented
technique and evaluations were done regarding tracking performance, accuracy
and robustness. In most of the tests, the frame rates obtained were suitable
for handheld augmented reality and a reasonable estimation of the object pose was
provided. Keywords: augmented reality, computer vision, handheld, markerless tracking, mobile | |||
| Freeze-Set-Go interaction method for handheld mobile augmented reality environments | | BIBAK | Full-Text | 143-146 | |
| Gun A. Lee; Ungyeon Yang; Yongwan Kim; Dongsik Jo; Ki-Hong Kim; Jae Ha Kim; Jin Sung Choi | |||
Mobile computing devices are becoming popular as a platform for augmented
reality (AR) applications, and efficient interaction methods for mobile AR
environments are considered necessary. Touch interfaces are gaining popularity
and drawing attention as a future standard interface on mobile computing
devices. However, accurate touch interactions are not easy in mobile AR
environments, where users tend to move and viewpoints easily get shaky. In
this paper, the authors suggest a new interaction method for handheld mobile
AR environments, named 'Freeze-Set-Go'. The proposed interaction method lets
users 'freeze' the real-world view temporarily and continue to manipulate
virtual entities within the AR scene. According to the user experiment, the
proposed method helps users interact with mobile AR environments through touch
interfaces in a more accurate and comfortable way. Keywords: augmented reality, handheld interface, touch interaction | |||
| Robustness enhancement of a localization system using interior decoration with coded pattern | | BIBAK | Full-Text | 147-150 | |
| Shinya Nishizaka; Atsushi Hiyama; Tomohiro Tanikawa; Michitaka Hirose | |||
| Several indoor positioning systems have been studied to offer a public
service based on one's location information. We have studied an indoor
localization system using original fiducial markers. The original markers can
be designed freely by users; therefore, they can be used for interior
decoration. Using this system, users can obtain the three-dimensional position
and pose of their camera by capturing images of the markers arranged on the
floor. However, there are some problems associated with the use of this system
in a public space, such as a decline in the marker recognition rate caused by
changes in the surrounding lighting conditions, and instability in the
recognition rate depending on the type of marker used and the camera angle. In this study,
we enhanced the robustness of an indoor localization system used in a public
space and increased the number of recognizable markers. Keywords: AR, fiducial marker, indoor position tracking, neural network, p-type
Fourier descriptor | |||
| Evaluating the effects of tracker reliability and field of view on a target following task in augmented reality | | BIBAK | Full-Text | 151-154 | |
| Jonathan Ventura; Marcus Jang; Tyler Crain; Tobias Höllerer; Doug Bowman | |||
| We examine the effect of varying levels of immersion on the performance of a
target following task in augmented reality (AR) X-ray vision. We do this using
virtual reality (VR) based simulation. We analyze participant performance while
varying the field of view of the AR display, as well as the reliability of the
head tracking sensor as our components of immersion. In low reliability
conditions, we simulate sensor dropouts by disabling the augmented view of the
scene for brief time periods. Our study gives insight into the effect of
tracking sensor reliability, as well as the relationship between sensor
reliability and field of view on user performance in a target following task in
a simulated AR system. Keywords: augmented reality, immersion, simulation, user study | |||
| The magic barrier tape: a novel metaphor for infinite navigation in virtual worlds with a restricted walking workspace | | BIBAK | Full-Text | 155-162 | |
| Gabriel Cirio; Maud Marchal; Tony Regia-Corte; Anatole Lécuyer | |||
| In most virtual reality simulations the virtual world is larger than the
real walking workspace. The workspace is often bounded by the tracking area or
the display devices. This paper describes a novel interaction metaphor called
the Magic Barrier Tape, which allows a user to navigate in a potentially
infinite virtual scene while confined to a restricted walking workspace. The
technique relies on the barrier tape metaphor and its "do not cross" implicit
message by surrounding the walking workspace with a virtual barrier tape in the
scene. Therefore, the technique informs the user about the boundaries of his
walking workspace, providing an environment safe from collisions and tracking
problems. It uses a hybrid position/rate control mechanism to enable real
walking inside the workspace and rate control navigation to move beyond the
boundaries by "pushing" on the virtual barrier tape. It provides an easy,
intuitive and safe way of navigating in a virtual scene, without break of
immersion. Two experiments were conducted in order to evaluate the Magic
Barrier Tape by comparing it to two state-of-the-art navigation techniques.
Results showed that the Magic Barrier Tape was faster and more appreciated than
the compared techniques, while being more natural and less tiring. Considering
it can be used in many different virtual reality systems, it is an interaction
metaphor suitable for many different applications, from the entertainment
field to training simulation scenarios. Keywords: 3D interaction, hybrid position/rate control, interaction metaphor,
navigation, walking workspace | |||
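A minimal sketch of the hybrid position/rate control described in the abstract (the circular workspace geometry and the gain are our assumptions): inside the workspace the virtual viewpoint follows the tracked position one-to-one; beyond the boundary, penetration into the barrier tape is converted into a scene-offset velocity.

```python
import numpy as np

def update_viewpoint(offset, real_pos, radius, dt, gain=2.0):
    """Hybrid position/rate control for a circular walking workspace.

    offset:   accumulated virtual-scene offset from rate control (2D).
    real_pos: tracked user position, workspace centered at the origin (2D).
    radius:   radius of the safe walking workspace (m).
    gain:     hypothetical gain (1/s) turning tape penetration into speed.
    Returns (new_offset, virtual_position).
    """
    dist = np.linalg.norm(real_pos)
    if dist > radius:                       # user pushes on the barrier tape
        direction = real_pos / dist
        offset = offset + gain * (dist - radius) * direction * dt
    # Inside the workspace, real walking maps 1:1 onto the virtual scene.
    return offset, offset + real_pos
```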
| Visual feedback techniques for virtual pointing on stereoscopic displays | | BIBAK | Full-Text | 163-170 | |
| Ferran Argelaguet; Carlos Andujar | |||
| The act of pointing to graphical elements is one of the fundamental tasks in
Human-Computer Interaction. In this paper we analyze visual feedback techniques
for accurate pointing on stereoscopic displays. Visual feedback techniques
must provide precise information about the pointing tool and its spatial
relationship with potential targets. We show both analytically and empirically
that current approaches provide poor feedback on stereoscopic displays,
resulting in low user performance when accurate pointing is required. We
propose a new feedback technique following a camera viewfinder metaphor. The
key idea is to locally flatten the scene objects around the pointing direction
to facilitate their selection. We present the results of a user study comparing
cursor-based and ray-based visual feedback techniques with our approach. Our
user studies indicate that our viewfinder metaphor clearly outperforms
competing techniques in terms of user performance and binocular fusion. Keywords: image-plane selection, raycasting, virtual pointer | |||
| 3D object arrangement for novice users: the effectiveness of combining a first-person and a map view | | BIBAK | Full-Text | 171-178 | |
| Luca Chittaro; Roberto Ranon; Lucio Ieronutti | |||
Arranging 3D objects in Virtual Environments can be a complex, error-prone
and time-consuming task, especially for users who are not familiar with
interfaces for 3D navigation and object manipulation. In this paper, we analyze
and compare novice users' performance on 3D object arrangement tasks using
three interfaces that differ in the views of the 3D environment they provide:
the first one is based only on a first-person view; the second one combines the
first-person view and a map view in which the zoom level is manually controlled
by the user; the third one extends the second with automated assistance in
controlling the map zoom level during object manipulation. Our study shows that
users without prior experience in 3D object arrangement prefer and actually
benefit from having a map view in addition to a first person view in object
arrangement tasks. Keywords: 3D manipulation, experimental evaluation, user study, virtual environments | |||
| D3: an immersive aided design deformation method | | BIBAK | Full-Text | 179-182 | |
| Vincent Meyrueis; Alexis Paljic; Philippe Fuchs | |||
| In this paper, we introduce a new deformation method adapted to immersive
design. The use of Virtual Reality (VR) in the design process implies a
physical displacement of project actors and data between the virtual reality
facilities and the design office. The decisions taken in the immersive
environment are manually transferred to the Computer-Aided Design (CAD)
system. This increases the design time and breaks the continuity of the data
workflow. On this basis, there is a clear demand in industry for tools adapted
to immersive design, but few methods exist that address CAD concerns in VR.
For this purpose, we propose a new method, called D3 for "Draw, Deform
and Design", based on a two-step manipulation paradigm, consisting of 1) area
selection and 2) path drawing, followed by a final refining and fitting phase.
Our method is discussed on the basis of a set of CAD deformation scenarios. Keywords: computer-aided design (CAD), immersive environment, real-time 3D object
deformation, virtual reality | |||
| Facilitating system control in ray-based interaction tasks | | BIBAK | Full-Text | 183-186 | |
| André Kunert; Alexander Kulik; Christopher Lux; Bernd Fröhlich | |||
| This paper investigates the usability of tracked wands equipped with
additional input sensors for system control tasks in 3D interaction scenarios.
We integrated a thumb-operated circular touchpad into a hand-held wand and
compared the performance of our input device to common ray-based interaction in
a menu selection and parameter adjustment task. The results show that both
interfaces can be highly efficient, but ray-based interaction is only
competitive if large-sized graphical interface representations are provided. In
contrast, touchpad input performs well independently of the size of the graphical
elements due to proprioceptive and tactile feedback. Keywords: 3D input device, 3D pointing, system control | |||
| Crafting memorable VR experiences using experiential fidelity | | BIBAK | Full-Text | 187-190 | |
| Robert W. Lindeman; Steffi Beckhaus | |||
| Much of Virtual Reality (VR) is about creating virtual worlds that are
believable. Although the visual and audio experiences we provide today
technically approach the limits of human sensory systems, there is still
something lacking; something beyond sensory fidelity hinders us from fully
buying into the worlds we experience through VR technology.
We introduce the notion of Experiential Fidelity, which is an attempt to create a deeper sense of presence by carefully designing the user experience. We suggest guiding the user's frame of mind so that their expectations, attitude, and attention are aligned with the actual VR experience, and the user's own imagination is stimulated to complete the experience. We propose to do this by structuring the time prior to exposure to increase anticipation, expectation, and the like. Keywords: experience design, presence, user experience, virtual reality | |||
| A semantic environment model for crowd simulation in multilayered complex environment | | BIBAK | Full-Text | 191-198 | |
| Hao Jiang; Wenbin Xu; Tianlu Mao; Chunpeng Li; Shihong Xia; Zhaoqi Wang | |||
Simulating crowds in complex environments is fascinating and challenging;
however, modeling of the environment has often been neglected in the past,
even though it is one of the essential problems in crowd simulation,
especially for multilayered complex environments. This paper presents a
semantic model for representing complex environments, where the semantic
information is described with a three-tier framework: a geometric level, a
semantic level and an application level. Each level contains different maps
for different purposes, and our approach greatly facilitates the interactions
between individuals and the virtual environment. A modified continuum crowd
method is then designed to fit the proposed virtual environment model, so that
realistic behaviors of large dense crowds can be simulated in multilayered
complex environments such as buildings and subway stations. Finally, we
implement this method and test it in two complex synthetic urban spaces. The
experimental results demonstrate that the semantic environment model can
provide sufficient and accurate information for crowd simulation in
multilayered complex environments. Keywords: continuum crowd, crowd simulation, environment representation, semantic
environment model | |||
| A saliency-based method of simulating visual attention in virtual scenes | | BIBAK | Full-Text | 199-206 | |
| Oyewole Oyekoya; William Steptoe; Anthony Steed | |||
| Complex interactions occur in virtual reality systems, requiring the
modelling of next-generation attention models to obtain believable virtual
human animations. This paper presents a saliency model that is neither domain
nor task specific, which is used to animate the gaze of virtual characters. A
critical question is addressed: What types of saliency attract attention in
virtual environments and how can they be weighted to drive an avatar's gaze?
Saliency effects were measured as a function of their total frequency. Scores
were then generated for each object in the field of view within each frame to
determine the most salient object within the virtual environment. This paper
compares the resulting saliency gaze model to tracked gaze, in which avatars'
eyes are controlled by head-mounted mobile eye-trackers worn by human
subjects; a random gaze model informed by head orientation for saccade
generation; and static gaze featuring non-moving, centered eyes. Results from the evaluation
experiment and graphical analysis demonstrate a promising saliency gaze model
that is not just believable and realistic but also target-relevant and
adaptable to varying tasks. Furthermore, the saliency model does not use any
prior knowledge of the content or description of the virtual scene. Keywords: behavioural realism, character animation, facial animation, gaze modeling,
target saliency, visual attention | |||
| Indexing and retrieval of human motion data by a hierarchical tree | | BIBAK | Full-Text | 207-214 | |
| Shuangyuan Wu; Zhaoqi Wang; Shihong Xia | |||
| For the convenient reuse of large-scale 3D motion capture data, browsing and
searching methods for the data should be explored. In this paper, an efficient
indexing and retrieval approach for human motion data is presented based on a
novel similarity metric. We divide the human character model into three
partitions to reduce the spatial complexity and measure the temporal similarity
of each partition by self-organizing map and Smith-Waterman algorithm. The
overall similarity between two motion clips can be achieved by integrating the
similarities of the separate body partitions. Then the hierarchical clustering
method is implemented, which can not only cluster the motion data accurately,
but also discover the relationships between different motion types by a binary
tree structure. With our typical cluster locating algorithm and motion motif
mining method, fast and accurate retrieval can be performed. The experimental
results show the effectiveness of our approach. Keywords: Smith-Waterman algorithm, hierarchical clustering, indexing, motion capture,
retrieval, self-organizing map | |||
| A location aware P2P voice communication protocol for networked virtual environments | | BIBAK | Full-Text | 215-222 | |
| Gabor Papp; Chris GauthierDickey | |||
| Multiparty voice communication, where multiple people can communicate in a
group, is an important component of networked virtual environments (NVEs),
especially in many types of online games. In this paper, we present a new
peer-to-peer protocol that uses Gabriel graphs, a subgraph of Delaunay
triangulations, to provide scalable multiparty voice communication. In
addition, our protocol uses positional information so that voice data can be
accurately modeled to listeners to increase the immersiveness of their
experience. Our simulations show that the algorithms scale well even in
densely populated areas, while prioritizing the sending of voice packets to
the closest listeners of a speaker, thus behaving as users expect. Keywords: Delaunay triangulation, Gabriel graph, P2P, location awareness, virtual
environment, voice communication | |||
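For reference, the Gabriel graph the protocol builds on has a simple membership test (standard computational geometry, not code from the paper): an edge (u, v) exists iff no other peer lies in the closed disk whose diameter is the segment uv.

```python
def gabriel_edge(u, v, peers):
    """Return True if (u, v) is a Gabriel-graph edge among 2D peers.

    (u, v) is an edge iff no other point w lies in the closed disk with
    diameter uv, i.e. the angle u-w-v is always acute:
    |uw|^2 + |wv|^2 > |uv|^2 for every other peer w.
    """
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    uv = d2(u, v)
    return all(d2(u, w) + d2(w, v) > uv
               for w in peers if w != u and w != v)

# Example: (0,0)-(2,0) is blocked by (1,0), which sits inside the disk
print(gabriel_edge((0, 0), (2, 0), [(0, 0), (2, 0), (1, 0)]))  # False
print(gabriel_edge((0, 0), (2, 0), [(0, 0), (2, 0), (1, 2)]))  # True
```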
| Searching for the metaverse | | BIBAK | Full-Text | 223-226 | |
| Joshua Eno; Susan Gauch; Craig Thompson | |||
| We present a system for collecting content from 3D multi-user virtual worlds
for use in a cross-world search engine, an enabling technology for linking
virtual worlds to the wider web. We use an intelligent agent crawler designed
to collect user-generated content without relying on access to private internal
server databases. The agents navigate autonomously through the world and
interact with content to discover regions, parcels of land within regions,
user-created objects, other avatars, and user associations. The experiments we
performed are the first to focus on the content within a large virtual
world. Our results show that virtual worlds can be effectively crawled using
autonomous agent crawlers that emulate normal user behavior. Additionally, we
find that the collection of interactive content enhances our ability to
identify dynamic, immersive environments within the world. Keywords: 3D web, search engine, virtual worlds | |||
| 3D positioning techniques for multi-touch displays | | BIBAK | Full-Text | 227-228 | |
| Anthony Martinet; Géry Casiez; Laurent Grisoni | |||
| Multi-touch displays represent a promising technology for the display and
manipulation of 3D data. To fully exploit their capabilities, appropriate
interaction techniques must be designed. In this paper, we explore the design
of free 3D positioning techniques for multi-touch displays to exploit the
additional degrees of freedom provided by this technology. We present a first
interaction technique to extend the standard four viewports technique found in
commercial CAD applications and a second technique designed to allow free 3D
positioning with a single view of the scene. Keywords: 3D positioning task, direct manipulation, multi-touch displays | |||
| Compression of massive models by efficiently exploiting repeated patterns | | BIBAK | Full-Text | 229-230 | |
| Kangying Cai; Yu Jin; Wencheng Wang; QuQing Chen; Zhibo Chen; Jun Teng | |||
We propose a new compression algorithm for massive models, which consist of
a large number of small- to medium-sized connected components. The algorithm
works by efficiently exploiting repetitive patterns in the input model.
Compared with similar work based on finding repetitive patterns, our new
algorithm is more effective at detecting repeated components because it
recognizes instances repeated at various scales. We also propose an efficient
compression scheme for the transformation data. As a result, it achieves a
considerably higher compression ratio. Keywords: automatic discovery, compression, repeated pattern | |||
| Crime scene robot and sensor simulation | | BIBAK | Full-Text | 231-232 | |
| Robert Codd-Downey; Michael Jenkin | |||
| Virtual reality has been proposed as a training regime for a large number of
tasks, from surgery rehearsal (cf. [Robb et al. 1996]), to combat simulation (cf.
[U. S. Congress, Office of Technology Assessment 1994]) to assisting in basic
design (cf. [Fa et al. 1992]). Virtual reality provides a novel and effective
training medium for applications in which training "in the real world" is
dangerous or expensive. Here we describe the C2SM simulator system -- a virtual
reality-based training system that provides an accurate simulation of the CBRNE
Crime Scene Modeller System (see [Topol et al. 2008]). The training system
provides a simulation of both the underlying robotic platform and the C2SM
sensor suite, and allows training on the system to take place without
physically deploying the robot or the chemical and radiological
agents that might be present. This paper describes the basic structure of the
C2SM simulator and the software components that were used to construct it. Keywords: bomb disposal simulation, crime scene simulation, virtual reality | |||
| Doppler effects without equations | | BIBAK | Full-Text | 233-234 | |
| Peter Brinkmann; Michael Gogins | |||
| We present a fast and robust method for approximating sound propagation in
situations where audio and video frame rates may differ significantly and
positions of sound sources and listeners are only known at discrete times, so
that numerically stable velocities are not available. Typical applications
include 3D scenes in virtual environments where positions of sources and
listeners are determined in real time by user interaction. Our method employs a
computationally inexpensive heuristic that converges to the exact solution for
constant speeds and achieves convincing Doppler shifts in general. Keywords: Doppler effects, VR audio, sound propagation | |||
| Dynamic light amplification for head mounted displays | | BIBAK | Full-Text | 235-236 | |
| Andrei Sherstyuk; Anton Treskunov | |||
Two common limitations of modern head-mounted displays (HMDs), the narrow
field of view and the limited dynamic range, call for rendering techniques that can
circumvent or even take advantage of these factors. We describe a simple
practical method of enhancing visual response from HMDs by using view-dependent
control over lighting. One example is provided for simulating blinding lights
in dark environments. Keywords: real-time light control, simulated lighting effects | |||
| Estimation of thinking states in cyberspace using multiple physiological parameters | | BIBAK | Full-Text | 237-238 | |
| Hiromu Miyashita; Ryo Segawa; Ken-ichi Okada | |||
| Thinking states are very important factors in the evaluation of and
interaction with virtual spaces. Physiological information is often used to
estimate thinking states objectively. However, it is difficult to estimate
complex feelings numerically from limited physiological information. In this
paper, we propose a method for the evaluation of thinking states in cyberspace
using multiple physiological parameters. We developed a mapping matrix that
converted physiological data into a composite of thinking states. In
experiments, we found that a single mapping matrix could derive the thinking
states of different subjects for the same task. In addition, we used the
mapping matrix to investigate the similarities and differences among six
estimated thinking states. Keywords: feelings, presence, psychology | |||
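The abstract leaves the construction of the mapping matrix open; one plausible stand-in is a least-squares linear map from physiological features to thinking-state scores, fitted on calibration data. Everything below (shapes, feature choices, data) is hypothetical:

```python
import numpy as np

def fit_mapping_matrix(physio, states):
    """Least-squares fit of a linear map M with states ~= physio @ M.
    A stand-in for the paper's mapping matrix, whose construction the
    abstract does not specify."""
    M, *_ = np.linalg.lstsq(physio, states, rcond=None)
    return M

# Hypothetical calibration data: 200 samples of 4 physiological
# parameters (e.g., heart rate, skin conductance) and subjective scores
# for 6 thinking states collected during a calibration task.
rng = np.random.default_rng(0)
physio = rng.normal(size=(200, 4))
states = physio @ rng.normal(size=(4, 6)) + 0.1 * rng.normal(size=(200, 6))

M = fit_mapping_matrix(physio, states)
estimated = physio[:1] @ M   # estimate thinking states for a new sample
```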
| Event-related de-synchronization and synchronization (ERD/ERS) of EEG for controlling a brain-computer-interface driving simulator | | BIBAK | Full-Text | 239-240 | |
| Junichi Toyama; Jun Ando; Michiteru Kitazaki | |||
| The purpose of this study was to investigate ERD/ERS during hand movements
and their imagery, in order to develop ERD/ERS-based speed control for a
brain-computer-interface driving simulator. We found clear ERD in the
contra-lateral cortex after nine days of motor imagery training. We developed a
driving simulator equipped with ERD-based speed control and showed that a
driver could control the speed of the car. Keywords: brain computer interface, driving simulator, speed control, vision | |||
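For reference, the classic ERD/ERS quantification (Pfurtscheller-style percentage band-power change relative to a resting baseline) can be sketched as follows, together with a hypothetical law mapping ERD depth to simulator speed. Band limits and gains are illustrative:

```python
import numpy as np

def band_power(segment, fs, lo=8.0, hi=13.0):
    """Mean mu-band (8-13 Hz) power of an EEG segment via the FFT."""
    spec = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].mean()

def erd_percent(task_seg, rest_seg, fs):
    """Percentage change of band power during motor imagery relative to a
    resting reference. Negative values indicate desynchronization (ERD),
    positive values synchronization (ERS)."""
    R = band_power(rest_seg, fs)
    A = band_power(task_seg, fs)
    return (A - R) / R * 100.0

def erd_to_speed(erd, max_speed=30.0):
    """Hypothetical control law: deeper ERD -> higher simulator speed."""
    return float(np.clip(-erd / 100.0, 0.0, 1.0)) * max_speed
```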
| Experiments for developing touchable online shopping system | | BIBAK | Full-Text | 241-242 | |
| Kenji Funahashi; Masahisa Ichino; Mototoshi Teshigahara | |||
| The touchable online shopping system we propose enables users to touch a
virtual commodity with their own hands. One purpose of our proposal is to
evaluate whether users can intuitively judge the size and weight of a commodity
using only small-scale, low-priced components. Another is to build a
general-purpose system from these components alone, suitable for general
applications. For this virtual reality setup, we used a data-glove with
vibrators and a simple force feedback device to mask distracting impressions,
such as interference from the interface itself, during use. We evaluated
whether users can judge the size and weight of a virtual commodity. Using this
system, we found that users could judge them almost as well as with real
commodities. Keywords: haptic and force feedback, online shopping, virtual reality | |||
| HardBorders: a new haptic approach for selection tasks in 3D menus | | BIBAK | Full-Text | 243-244 | |
| Caroline Essert-Villard; Antonio Capobianco | |||
| In this paper, we introduce a 3D menu for virtual environments with a new
technique of haptic guidance. The 3D menu consists of a thin polyhedral
shape, with the items at the corners. The HardBorders technique haptically
simulates the collisions of the pointer with the borders of the polyhedron,
making it glide towards the items of the menu. A comparison with two reference
modalities was performed, showing a clear advantage for our HardBorders
technique. Keywords: 3D interaction, computer-human interfaces, force feedback devices, haptic
interfaces, menus, virtual reality | |||
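A minimal penalty-force sketch of the border idea, assuming the convex menu polyhedron is given as inward-facing planes; the stiffness value and representation are illustrative, not the paper's implementation:

```python
import numpy as np

def hardborder_force(pointer, faces, stiffness=800.0):
    """Keep the haptic pointer inside a convex polyhedron given as
    (inward_normal, point_on_face) pairs. Crossing a face produces a
    restoring force along the inward normal, so the pointer glides along
    the borders toward the corner items rather than passing through."""
    pointer = np.asarray(pointer, float)
    force = np.zeros(3)
    for normal, point in faces:
        n = np.asarray(normal, float)
        n = n / np.linalg.norm(n)
        depth = float(np.dot(pointer - np.asarray(point, float), n))
        if depth < 0.0:                       # pointer is outside this face
            force += -depth * stiffness * n   # push back along inward normal
    return force
```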
| High resolution insets for stereoscopic immersive displays: a method for 3D content dependent inset boundaries | | BIBAK | Full-Text | 245-246 | |
| Alexis Paljic; Caroline de Bossoreille; Philippe Fuchs; Gabriel Soubies | |||
| This poster presents a novel approach for integrating, as insets, high
resolution stereoscopic areas within a wider low resolution stereoscopic image.
The objective is to propose ways to increase local resolution in virtual
reality systems with a small number of additional projectors. Our novel
approach consists of inserting high resolution areas whose shape depends on the
geometry of the 3D objects. For comparison purposes, an existing method that
uses a rectangular high resolution inset was also implemented. Both methods
were implemented in a large screen VR system with active stereoscopy and head
tracking. A user study was performed to check for shape perception
discrepancies between the two methods. Results show that the novel technique
does not introduce any distortion in perceived shape size. Keywords: immersive display, occlusion, seamless high resolution inset, stereoscopy | |||
| Importance masks for revealing occluded objects in augmented reality | | BIBAK | Full-Text | 247-248 | |
| Erick Mendez; Dieter Schmalstieg | |||
| When simulating "X-ray vision" in Augmented Reality, a critical aspect is
ensuring correct perception of the occluded objects' positions. Naïve
overlay rendering of occluded objects on top of real-world occluders can lead
to a misunderstanding of the visual scene and poor depth perception.
We present a simple technique to enhance the perception of the spatial
arrangements in the scene. An importance mask associated with occluders informs
the rendering about what information can be overlaid and what should be
preserved. This technique is independent of scene properties such as
illumination and surface properties, which may be unknown. The proposed
solution is computed efficiently in a single-pass fragment shader on the GPU. Keywords: augmented reality, focus and context, importance masks, x-ray vision | |||
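The per-pixel logic behind such a mask reduces to a simple blend. The paper implements it as a single fragment-shader pass; the numpy version below is only a CPU stand-in for that shader:

```python
import numpy as np

def xray_composite(occluder_rgb, hidden_rgb, importance):
    """Blend per pixel: where the occluder's importance mask is high
    (edges, strong texture -- cues worth preserving), keep the real-world
    occluder; where it is low, reveal the hidden object behind it."""
    a = np.clip(importance, 0.0, 1.0)[..., None]   # HxW mask -> HxWx1 weight
    return a * occluder_rgb + (1.0 - a) * hidden_rgb
```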
| Influence of event-based haptic on the manipulation of rigid objects in degraded virtual environments | | BIBAK | Full-Text | 249-250 | |
| Mathieu Gautier; Jean Sreng; Claude Andriot | |||
| In this paper, we propose to evaluate the benefits of event-based haptic
rendering in virtual environments subject to high damping or low contact
stiffness. In such contexts, the degraded perception of contact impairs
manipulation performance and the overall user experience. Haptic rendering
techniques known as event-based (or open-loop) rendering have been proposed to
improve the realism of contacts in such contexts.
We conducted a preliminary experiment to investigate the performance effects of the event-based haptic technique in virtual environments with different damping and stiffness parameters. The first results suggest that event-based haptics can improve the perception of contact in such environments (particularly with high damping and/or low stiffness). Keywords: contact, event-based haptic, haptic feedback, open loop haptic | |||
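A common realization of event-based rendering, offered here as a hedged sketch rather than the paper's own implementation, superimposes a short decaying sinusoidal force burst at the moment of contact, scaled by impact speed, on top of the usual penalty force. All constants are illustrative:

```python
import math

def contact_transient(t, impact_speed, amp=2.0, freq=150.0, decay=90.0):
    """Open-loop force burst added at a contact event: a decaying sinusoid
    scaled by the impact speed. t is seconds since contact; the result is
    added to the closed-loop penalty force for the first few milliseconds,
    restoring the crisp 'tap' that high damping or low stiffness smears out."""
    if t < 0.0:
        return 0.0
    return amp * impact_speed * math.exp(-decay * t) * math.sin(2.0 * math.pi * freq * t)
```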
| Interactive high dynamic range rendering for virtual reality applications | | BIBAK | Full-Text | 251-252 | |
| Josselin Petit; Roland Brémond | |||
| Realistic images can be computed at interactive frame rates for Virtual
Reality applications. Meanwhile, High Dynamic Range (HDR) rendering is
increasingly successful in video games and virtual reality, as it improves
image quality and the player's feeling of immersion. We propose a new method,
based on a simplified physical model of light propagation, to compute HDR
illumination in a Virtual Environment (VE) in real time. Our method allows the
re-use of existing Low Dynamic Range (LDR) virtual databases as input for
computing the HDR images. Then, from these HDR images, displayable 8-bit images
are rendered with a Tone Mapping Operator (TMO) and displayed on a standard
display device. The HDR computation and the TMO are implemented in
OpenSceneGraph (OSG), running in real time with pixel shaders. The method is
illustrated with a practical application where the dynamic range is a key
rendering issue: driving at night. The VE includes light sources such as road
lighting and car headlights. Keywords: high dynamic range, pixel shader, virtual reality | |||
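The abstract does not name the TMO used; as a stand-in, the well-known global Reinhard operator illustrates the HDR-to-8-bit step that such a pipeline performs each frame:

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale HDR luminance by key / log-average
    luminance, compress with L / (1 + L), then quantize to 8 bits. The
    paper runs its TMO in a pixel shader; numpy stands in here."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    L = (key / log_avg) * luminance
    return np.clip(L / (1.0 + L) * 255.0, 0, 255).astype(np.uint8)
```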
| Physically based grasping and manipulation method using pre-contact grasping quality measure | | BIBAK | Full-Text | 253-254 | |
| Yongwan Kim; Jinseong Choi; Jinah Park | |||
| While the human hand is the most natural tool for interacting with objects,
hand interaction with virtual objects is still a challenging research field. We
recognize that common hand interaction involves two separate stages with
different aspects: one is the robustness of grasping, and the other is the
dexterity of manipulation after grasping a virtual object. In this paper, we
address virtual hand interaction by decoupling these two aspects and propose
strategic simulation algorithms for the two stages. For the initial grasping, a
quality measure-based grasping algorithm is applied for robustness, and the
manipulation is simulated by physically based methods to meet the requirements
of dexterity. We conducted experiments to evaluate the effectiveness of our
proposed method under different display environments -- monoscopic and
stereoscopic. From a 2-way ANOVA test, we were able to show that the proposed
scheme, which separates a pre-contact grasping phase from a post-contact
manipulation phase, provides both robust grasping and dexterous tool operation.
Furthermore, we demonstrate various assembly manipulations of relatively
complex models using our interaction scheme. Keywords: grasping, hand interaction, manipulation | |||
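The paper's pre-contact quality measure is not detailed in the abstract; a deliberately crude proxy -- how well the candidate finger-contact normals oppose one another -- gives the flavor of scoring a grasp before contact is resolved:

```python
import numpy as np

def grasp_quality(contact_normals):
    """Crude pre-contact grasp score: unit inward contact normals that
    cancel out (residual near zero) indicate opposing forces are
    available, a cheap proxy for a robust grasp. A stand-in only; not
    the paper's measure."""
    n = np.asarray(contact_normals, float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    residual = np.linalg.norm(n.sum(axis=0)) / len(n)
    return 1.0 - residual        # 1.0 = perfectly opposing contacts
```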
| Realization of a vibro-tactile glove type mouse | | BIBAK | Full-Text | 255-256 | |
| Jun-Hyung Park; Hyun Suk Lee; Ju Seok Jeong; Tae-Jeong Jang | |||
| In this paper, we present a glove-type mouse using a gyroscope sensor, an
acceleration sensor, and six pin-type vibro-tactile modules. It was designed as
a USB HID (human interface device) so that it is automatically recognized and
installed as a general mouse when plugged into a PC's USB socket. The mouse
recognizes the user's wrist movement through the gyroscope and acceleration
sensors in the glove and transmits coordinate values to the PC via Bluetooth.
This vibro-tactile glove-type mouse accommodates all circuits and devices in
the glove, implementing a wearable system. A user can use it as a general
spatial mouse without any driver or application program. However, since tactile
devices are not covered by the USB HID specification, we made an application
program for vibro-tactile display so that a PC program can transmit specific
vibro-tactile information to the user, representing the gray scale of pictures,
braille codes, directions to go, and so on. Keywords: HID (human interface device), glove type mouse, haptics, vibro-tactile
display | |||
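The pointing path of such a device plausibly reduces to integrating wrist angular rates into HID-style cursor deltas; a hedged sketch follows (the sensitivity is illustrative, and accelerometer fusion for drift suppression is omitted):

```python
def wrist_rates_to_cursor(yaw_rate, pitch_rate, dt, sensitivity=600.0):
    """Convert wrist angular rates (rad/s, from the gyroscope) over the
    report interval dt into cursor deltas for a basic HID mouse report.
    A real device would also fuse accelerometer data to cancel gyro drift."""
    def clamp(v):
        # basic HID mouse reports carry signed 8-bit deltas
        return max(-127, min(127, int(v)))
    return (clamp(yaw_rate * dt * sensitivity),
            clamp(-pitch_rate * dt * sensitivity))
```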
| Reconstructing chat history for avatar agents using spatio-temporal features of virtual space | | BIBAK | Full-Text | 257-258 | |
| Seung-Hyun Ji; Tae-Jin Yoon; Hwan-Gue Cho | |||
| One of the crucial issues in Internet chat is how to manage the
corresponding pairs of questions and answers in a sequence of conversations.
This paper addresses the problems of ambiguous dialogue logs, the lack of a
social interaction network of chat agents, and the rupture of the turn sequence
in a plain chat room. By exploiting spatio-temporal features of the virtual
space, we can resolve the ambiguity of ruptured connections between turns and
replies. Our system also supports a graphical visualization interface for
tracking the chat dialogue using a Chat Flow Graph (CFG) [Park et al. 2008a].
In this paper, we improve our previous work and construct a social network
between avatar agents. Our experiment shows that our system is highly effective
in a virtual chat environment. Keywords: chat program, communication, virtual reality | |||
| A study of multimodal feedback to support collaborative manipulation tasks in virtual worlds | | BIBAK | Full-Text | 259-260 | |
| Arturo S. García; José P. Molina; Pascual González; Diego Martínez; Jonatan Martínez | |||
| In the research community, developers of Collaborative Virtual Environments
(CVEs) usually refer to the terms awareness and feedback as something necessary
to maintain a fluent collaboration when highly interactive tasks have to be
performed. However, it is remarkable that few studies address the effect that
including special kinds of feedback has on user awareness and task performance.
This work follows a preliminary experiment in which we studied awareness
in CVEs, evaluating the effect of visual cues on the performance of
collaborative tasks and showing that users tend to make more mistakes when such
feedback is not provided, that is, they are less aware. These early results
were promising and encouraged us to continue investigating the benefit of
improving awareness in tasks that require close collaboration between users,
but this time analyzing more types of awareness and experimenting with visual,
audio and vibrotactile feedback cues. Keywords: CSCW, awareness, collaborative virtual environments, feedback | |||
| Thermal display for scientific haptization of numerical simulations | | BIBAK | Full-Text | 261-262 | |
| Yuichi Tamura; Hiroaki Nakamura | |||
| A thermal display is a useful device for conveying the haptic sense;
however, most such devices are used in a sitting position. We have used an
immersive projection technology (IPT) display for scientific visualization,
which is very useful for comprehending complex phenomena. In an IPT display,
however, the observer generally watches and controls 3D objects in a standing
position. It is therefore necessary to develop a thermal display that can be
used while standing. In addition, in scientific visualization, response time is
very important for observing physical phenomena, especially in dynamic
numerical simulation results. One solution is to provide two types of thermal
information: information about the rate of thermal change, and information
about the actual temperature. Using these two types of information, the
observer can immediately recognize both the change and the actual thermal
value. Keywords: immersive projection display, numerical simulation, thermal display, virtual
reality | |||
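The two-channel idea can be sketched as a display setpoint that adds an exaggerated rate-of-change term to the simulated temperature, so the observer feels a change immediately while the baseline still settles at the real simulated value. The gain is illustrative, not from the paper:

```python
def thermal_setpoint(T_sim, T_prev, dt, rate_gain=4.0):
    """Combine the two kinds of thermal information: the simulated
    temperature plus an emphasized rate-of-change term. rate_gain (in
    seconds) sets how strongly transients are emphasized; the result is
    the command sent to the thermal (e.g., Peltier) controller."""
    rate = (T_sim - T_prev) / dt      # degrees per second
    return T_sim + rate_gain * rate
```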
| Toolkit-independent interaction specification for VR-based visualization | | BIBAK | Full-Text | 263-264 | |
| Irene Tedjo-Palczynski; Bernd Hentschel; Marc Wolter; Thomas Beer; Torsten Kuhlen | |||
| The application of virtual environments to scientific visualization provides
the user with an intuitive interface for interacting with the data to be
analyzed. We present an approach that leverages 3D interaction techniques to
directly interact with scientific data. First, we propose an abstraction of
VR-based interaction that decouples the low-level interaction groundwork from
high-level declarations of interaction behaviors. Second, we describe how
recurring interaction patterns in scientific visualizations can be distilled
into 3D widgets, offering reusability for the diverse participants in the
development of interactive visualization. The benefits that follow from this
approach are the exchangeability of VR toolkits and devices, faster development
cycles, and better code maintainability. Keywords: 3D interaction, scientific visualization, virtual reality | |||
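One way to read "decoupling the low-level groundwork from high-level behavior declarations" is an abstract backend interface plus toolkit-independent widgets. The sketch below is hypothetical, not the authors' API:

```python
from abc import ABC, abstractmethod

class InteractionBackend(ABC):
    """Low-level groundwork: each VR toolkit implements this interface once."""

    @abstractmethod
    def pointer_pose(self):
        """Return (position, orientation) of the tracked pointer."""

    @abstractmethod
    def button_pressed(self, name):
        """Return True while the named button is held."""

class CuttingPlaneWidget:
    """High-level, toolkit-independent behavior: a hypothetical widget that
    only talks to the abstract backend, so the same widget code runs on any
    toolkit for which an InteractionBackend adapter exists."""

    def __init__(self, backend):
        self.backend = backend

    def update(self):
        # While 'grab' is held, the cutting plane follows the pointer;
        # the widget never touches toolkit-specific APIs.
        if self.backend.button_pressed("grab"):
            pos, ori = self.backend.pointer_pose()
            return ("move_plane", pos, ori)
        return None
```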
| User-centered design of a maxillo-facial surgery training platform | | BIBAK | Full-Text | 265-266 | |
| Christine Mégard; Florian Gosselin; Sylvain Bouchigny; Fabien Ferlay; Farid Taha | |||
| This paper describes the requirements specification process involved in the
design of a Virtual Reality trainer for Epker maxillo-facial surgery. This
surgery is considered very delicate and difficult to teach: vision-guided
movements are very limited, and the haptic sense is used extensively. The
user-centered methodology and the first experiments designed to develop the
training platform are presented. Finally, the virtual training platform is
sketched out. Keywords: multimodal, skills, surgery, task analysis, training | |||
| A virtual performance system and its application on a Noh stage | | BIBAK | Full-Text | 267-268 | |
| Masahito Shiba; Asako Soga; Jonah Salz | |||
| We have developed a virtual performance system that easily projects visual
images and controls them in a theater. The system manages videos and 3D models,
and users can control the projected images in real time. We used the system on
a Noh stage, displaying virtual actors and having them act with real actors.
Audience evaluations verified that virtual actors performing alongside real
actors can provide effective stage presentations. Keywords: Noh, live performance, virtual actor | |||
| Visualization of virtual weld beads | | BIBAK | Full-Text | 269-270 | |
| Dongsik Jo; Yongwan Kim; Ungyeon Yang; Gun A. Lee; Jin Sung Choi | |||
| In this paper, we present a visualization method for weld beads for welding
training in virtual environments. To represent virtual beads, a bead shape is
defined according to datasets consisting of bead width, height, angle, and
penetration acquired from real welding operations. A curve equation of the
bead's sectional shape is mathematically modeled, and a height map is generated
from this equation; the height map is then used to generate the bead's mesh
data. Finally, virtual weld beads are visualized in real time according to
results accurately simulated from the user's input motion. Keywords: bead, training, virtual reality, visualization, welding | |||
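The paper's fitted curve equation is not given in the abstract; a placeholder parabolic cross-section swept into a height map illustrates the pipeline step from sectional curve to displaceable geometry:

```python
import numpy as np

def bead_heightmap(width, height, samples=64):
    """Placeholder bead cross-section h(x) = height * (1 - (2x/width)^2),
    swept along the bead to form a height map that can displace a mesh.
    The paper fits its curve equation to measured width/height/angle/
    penetration data; the parabola here is illustrative only."""
    x = np.linspace(-width / 2.0, width / 2.0, samples)
    profile = np.maximum(0.0, height * (1.0 - (2.0 * x / width) ** 2))
    return np.tile(profile, (samples, 1))   # rows = sweep along bead length

hm = bead_heightmap(width=8.0, height=2.5)  # e.g., millimetres
```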
| Weathering fur simulation | | BIBAK | Full-Text | 271-272 | |
| Shaohui Jiao; Gang Yang; Enhua Wu | |||
| This paper presents a novel approach for simulating weathered fur. Dust
effects on fur are generated by a volumetric γ-ton tracing method, and the
geometry deformation is modeled through a dynamic PBS. The proposed approach
can efficiently simulate the weathering effects of fur. Keywords: γ-ton, PBS, fur texel, weathering simulation | |||