| Editorial | | BIB | Full-Text | 1-3 | |
| James Ritchie; Judy Vance; Satyandra Gupta | |||
| Virtual reality for assembly methods prototyping: a review | | BIBAK | Full-Text | 5-20 | |
| Abhishek Seth; Judy M. Vance; James H. Oliver | |||
| Assembly planning and evaluation is an important component of the product
design process in which details about how parts of a new product will be put
together are formalized. A well-designed assembly process should take into
account various factors such as optimum assembly time and sequence, tooling and
fixture requirements, ergonomics, operator safety, and accessibility, among
others. Existing computer-based tools to support virtual assembly either
concentrate solely on representation of the geometry of parts and fixtures and
evaluation of clearances and tolerances or use simulated human mannequins to
approximate human interaction in the assembly process. Virtual reality
technology has the potential to support integration of natural human motions
into the computer aided assembly planning environment (Ritchie et al. in Proc I
MECH E Part B J Eng 213(5):461-474, 1999). This would allow evaluations of an
assembler's ability to manipulate and assemble parts and result in reduced time
and cost for product design. This paper provides a review of the research in
virtual assembly and categorizes the different approaches. Finally, critical
requirements and directions for future research are presented. Keywords: Virtual assembly; Collision detection; Physics-based modeling;
Constraint-based modeling; Virtual reality; Haptics; Human-computer interaction | |||
| A self-configurable large-scale virtual manufacturing environment for collaborative designers | | BIBAK | Full-Text | 21-40 | |
| Hyunsoo Lee; Amarnath Banerjee | |||
As manufacturing environments become more distributed and grow in size, the
related virtual environments are becoming larger and more closely networked.
This trend has led to a new paradigm, the large-scale virtual manufacturing
environment (LSVME), which supports networked and distributed virtual
manufacturing to meet manufacturing system requirements. Since an LSVME
contains a large number of virtual components, an effective data structure and
a collaborative construction methodology are needed. A metaearth architecture
is proposed as the data structure for representing an LSVME. This architecture
consists of a virtual space layer, a mapping layer, a library layer and an
ontology layer; it describes interactions among virtual components and supports
analysis of the characteristics of the virtual environment. In addition, it
increases reusability of virtual components and supports self-reconfiguration
for manufacturing simulation. A heuristic construction method based on graph
theory is proposed using this architecture. It prevents redundant design of
virtual components and contributes to an effective construction scheduling
technique for collaborative designers. Keywords: Large-scale virtual manufacturing environment (LSVME); Metaearth
architecture; Self-reconfiguration; Collaborative construction | |||
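The graph-based construction heuristic is only sketched in the abstract above; as a rough illustration of the general idea (illustrative names and data, not the authors' method), the snippet below orders virtual components so that shared components are built once and every dependency is built before the components that use it:

```python
from collections import defaultdict, deque

def construction_schedule(components, depends_on):
    """Order virtual components so every dependency is built before the
    components that use it and shared components are built only once.

    components : iterable of component identifiers
    depends_on : dict mapping a component to the components it requires
    """
    nodes = set(components)
    nodes.update(d for deps in depends_on.values() for d in deps)
    indegree = {c: 0 for c in nodes}
    users = defaultdict(list)
    for c, deps in depends_on.items():
        for dep in deps:
            users[dep].append(c)
            indegree[c] += 1

    # Kahn's algorithm: components whose dependencies are all built go first.
    ready = deque(c for c in nodes if indegree[c] == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for nxt in users[c]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(nodes):
        raise ValueError("cyclic dependency among virtual components")
    return order

# Two cells reference the same conveyor model; it appears only once in the plan.
print(construction_schedule(
    ["cell_A", "cell_B"],
    {"cell_A": ["conveyor", "robot"], "cell_B": ["conveyor"]}))
```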
| Physics-based virtual reality for task learning and intelligent disassembly planning | | BIBAK | Full-Text | 41-54 | |
| Jacopo Aleotti; Stefano Caselli | |||
| Physics-based simulation is increasingly important in virtual manufacturing
for product assembly and disassembly operations. This work explores potential
benefits of physics-based modeling for automatic learning of assembly tasks and
for intelligent disassembly planning in desktop virtual reality. The paper
shows how realistic physical animation of manipulation tasks can be exploited
for learning sequential constraints from user demonstrations. In particular, a
method is proposed where information about physical interaction is used to
discover task precedences and to reason about task similarities. A second
contribution of the paper is the application of physics-based modeling to the
problem of disassembly sequence planning. A novel approach is described to find
all physically admissible subassemblies into which a set of rigid objects can be
disassembled. Moreover, efficient strategies are presented aimed at reducing
the computational time required for automatic disassembly planning. The
proposed strategies take into account precedence relations arising from user
assembly demonstrations as well as geometrical clustering. A motion planning
technique has also been developed to generate non-destructive disassembly paths
in a query-based approach. Experiments have been performed in an interactive
virtual environment with a dataglove and a motion tracker that allow
realistic object manipulation and grasping. Keywords: Virtual reality; Disassembly planning; Precedence graphs; Physics-based
animation; Programming by demonstration | |||
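As a rough illustration of how sequential constraints can be mined from demonstrations, the sketch below intersects the pairwise orderings observed across demonstrated disassembly sequences and uses the resulting precedence pairs to test which parts may be removed next. This is a simplified reading of the idea, not the authors' implementation:

```python
def learn_precedences(demonstrations):
    """Collect ordered pairs (a, b) meaning 'a was removed before b'
    in every demonstrated disassembly sequence."""
    candidate = None
    for seq in demonstrations:
        pairs = {(a, b) for i, a in enumerate(seq) for b in seq[i + 1:]}
        candidate = pairs if candidate is None else candidate & pairs
    return candidate or set()

def admissible_next(remaining, removed, precedences):
    """Parts removable now: all their predecessors are already gone."""
    return [p for p in remaining
            if all(a in removed for (a, b) in precedences if b == p)]

demos = [["cover", "screw", "gear"], ["cover", "gear", "screw"]]
prec = learn_precedences(demos)   # {('cover', 'screw'), ('cover', 'gear')}
print(admissible_next({"cover", "screw", "gear"}, set(), prec))  # ['cover']
```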
| Fatigue evaluation in maintenance and assembly operations by digital human simulation in virtual environment | | BIBAK | Full-Text | 55-68 | |
| Liang Ma; Damien Chablat; Fouad Bennis; Wei Zhang; Bo Hu | |||
| Virtual human techniques have been widely used in industrial design to
consider human factors and ergonomics as early as possible, and they have been
integrated into VR applications for ergonomic evaluation tasks. In order to
generalize such evaluation in virtual environments, especially for physical
fatigue evaluation, we integrated a new fatigue model into a virtual
environment platform. Virtual Human Status is proposed in this paper to assess
the difficulty of manual handling operations, especially from the physical
perspective. The decrease in physical capacity from before to after an
operation is used as an index of the difficulty level. The reduction of
physical strength is simulated theoretically on the basis of a fatigue model in
which the fatigue resistances of different muscle groups were regressed from 24
existing maximum endurance time models. A framework based on digital human
modeling is established to compare physical status. An airplane assembly case
is simulated and analyzed under this framework on the VRHIT experimental
platform. The endurance time
and the decrease of the joint moment strengths are simulated. The experimental
result in simulated operations under laboratory conditions confirms the
feasibility of the theoretical approach: integration of virtual human
simulation into virtual reality for physical fatigue evaluation. Keywords: Virtual human simulation; Muscle fatigue model; Fatigue resistance; Physical
fatigue evaluation; Human status | |||
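For readers unfamiliar with maximum endurance time (MET) based fatigue modeling, the sketch below shows one common form: the maximum exertable joint strength decays exponentially with the relative load. The equation and the fatigability constant k are illustrative assumptions, not necessarily the exact model used in the paper:

```python
import math

def remaining_strength(mvc, load, k, t):
    """Maximum exertable joint strength after holding `load` for time t,
    assuming capacity decays in proportion to the relative load:
        dF/dt = -k * (load / mvc) * F,   F(0) = mvc
    mvc  : maximum voluntary contraction (e.g. joint moment, N*m)
    load : external joint load actually exerted (same unit as mvc)
    k    : fatigability constant of the muscle group (1/min, illustrative)
    t    : holding time in minutes
    """
    return mvc * math.exp(-k * (load / mvc) * t)

def endurance_time(mvc, load, k):
    """Time at which the decayed capacity just equals the required load."""
    return -mvc / (k * load) * math.log(load / mvc)

mvc, load, k = 60.0, 24.0, 1.0               # 40% MVC load, made-up numbers
print(endurance_time(mvc, load, k))          # ~2.3 (minutes) until exhaustion
print(remaining_strength(mvc, load, k, 1.0)) # capacity left after one minute
```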
| Low-cost simulated MIG welding for advancement in technical training | | BIBAK | Full-Text | 69-81 | |
| Steven A. White; Mores Prachyabrued; Terrence L. Chambers | |||
| The simulated MIG lab (sMIG) is a training simulator for Metal Inert Gas
(MIG) welding. It is based on commercial off-the-shelf (COTS) components and
targeted at familiarizing beginning students with the MIG equipment and best
practices to follow to become competent and effective MIG welders. To do this,
it simulates the welding process as realistically as possible using standard
welding hardware components (helmet, gun) for input and by using head-tracking
and a 3D-capable low-cost monitor and standard speakers for output. We
developed a simulation to generate realistic audio and visuals based on
numerical heat transfer methods and verified the accuracy against real welds.
sMIG runs in real time producing a realistic, interactive, and immersive
welding experience while maintaining a low installation cost. In addition to
being realistic, the system provides instant feedback beyond what is possible
in a traditional lab. This helps students avoid learning, and later having to
unlearn, incorrect movement patterns. Keywords: Virtual reality; Welding; Finite difference; Simulation; Acoustics | |||
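The abstract mentions numerical heat-transfer methods for generating the weld visuals; an explicit finite-difference step for 2D heat conduction (a generic illustration with placeholder material constants, not the authors' weld-pool model) looks like this:

```python
import numpy as np

def heat_step(T, alpha, dx, dt, source=None):
    """One explicit finite-difference step of the 2D heat equation
        dT/dt = alpha * (d2T/dx2 + d2T/dy2) + source
    T is a 2D array of temperatures; boundary rows/cols stay fixed.
    Stability requires dt <= dx**2 / (4 * alpha).
    """
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                       - 4.0 * T[1:-1, 1:-1]) / dx**2
    return T + dt * (alpha * lap + (source if source is not None else 0.0))

# Toy plate with a moving "arc" heat source along one row.
T = np.full((50, 50), 300.0)            # ambient temperature in K
alpha, dx, dt = 4e-6, 1e-3, 0.05        # illustrative steel-like diffusivity
for step in range(100):
    q = np.zeros_like(T)
    q[25, 5 + step // 4] = 2.0e4        # arbitrary heat input (K/s)
    T = heat_step(T, alpha, dx, dt, q)
print(T.max())
```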
| Modeling and real-time simulation architectures for virtual prototyping of off-road vehicles | | BIBAK | Full-Text | 83-96 | |
| Manoj Karkee; Brian L. Steward; Atul G. Kelkar; Zachary T. Kemp II | |||
| Virtual reality-based simulation technology has evolved into a useful
early-stage design and analysis tool for evaluating the performance of
human-operated agricultural and construction machinery. Detecting design
anomalies before physical prototypes are built and expensive testing begins
leads to significant cost savings. The efficacy of such simulation technology
depends on how realistically the simulation mimics the real-life operation of
the machinery. It is therefore necessary to achieve 'real-time' dynamic
simulation of such machines with operator-in-the-loop functionality, which
often imposes an intensive computational burden. A distributed architecture was
developed for off-road vehicle dynamic models and 3D graphics visualization to
distribute the overall computational load of the system across multiple
computational platforms. Multi-rate model simulation was also used to simulate
various system dynamics with different integration time steps, so that the
computational power can be distributed more intelligently. This architecture
consisted of three major components: a dynamic model simulator, a virtual
reality simulator for 3D graphics, and an interface to the controller and input
hardware devices. Several off-road vehicle dynamics models were developed with
varying degrees of fidelity, as well as automatic guidance controller models
and a controller area network interface to embedded controllers and user input
devices. The simulation architecture reduced the computational load on any
individual machine and increased the real-time simulation capability with
complex off-road vehicle system models and controllers. This architecture
provides an environment for testing virtual prototypes of vehicle systems in
real time and the opportunity to test the functionality of newly developed
controller software and hardware. Keywords: Real-time simulation; Distributed architecture; Virtual reality; Vehicle
dynamics models; Multi-rate simulation | |||
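The multi-rate simulation idea, fast dynamics stepped with a small time step while slower subsystems and the graphics node are updated less often, can be sketched as follows; the subsystems and rates here are illustrative assumptions, not the paper's models:

```python
def run_multirate(sim_time, dt_fast=0.001, dt_slow=0.010, dt_graphics=0.033):
    """Fixed-step multi-rate loop: fast dynamics every dt_fast, slower
    subsystems every dt_slow, graphics/streaming every dt_graphics."""
    t = 0.0
    next_slow, next_graphics = 0.0, 0.0
    state = {"chassis": 0.0, "hydraulics": 0.0, "frames": 0}
    while t < sim_time:
        # Fast rate: e.g. tire/chassis dynamics.
        state["chassis"] += dt_fast * 1.0          # placeholder integrator
        if t >= next_slow:
            # Slow rate: e.g. hydraulics, holding the latest fast-state inputs.
            state["hydraulics"] += dt_slow * 0.5   # placeholder integrator
            next_slow += dt_slow
        if t >= next_graphics:
            # Lowest rate: push the latest state to the VR visualization node.
            state["frames"] += 1
            next_graphics += dt_graphics
        t += dt_fast
    return state

print(run_multirate(1.0))
```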
| Editorial: special issue on augmented reality | | BIB | Full-Text | 97-98 | |
| Mark Billinghurst; Dieter Schmalstieg | |||
| Benchmarking template-based tracking algorithms | | BIBAK | Full-Text | 99-108 | |
| Sebastian Lieberknecht; Selim Benhimane; Peter Meier; Nassir Navab | |||
| For natural interaction with augmented reality (AR) applications, good
tracking technology is key. But unlike dense stereo, optical flow or multi-view
stereo, template-based tracking, which is most commonly used for AR
applications, lacks benchmark datasets that allow a fair comparison between
state-of-the-art algorithms. Until now, template-based tracking algorithms have
mainly been evaluated objectively and quantitatively on synthetically generated
image sequences, and the evaluation is therefore often intrinsically biased. In
this paper, we describe the process we carried
out to perform the acquisition of real-scene image sequences with very precise
and accurate ground truth poses using an industrial camera rigidly mounted on
the end effector of a high-precision robotic measurement arm. For the
acquisition, we considered most of the critical parameters that influence the
tracking results, such as the texture richness and the texture repeatability of
the objects to be tracked, the camera motion and speed, and the changes of the
object scale in the images and variations of the lighting conditions over time.
We designed an evaluation scheme for object detection and interframe tracking
algorithms suited for AR and other computer vision applications and used the
image sequences to apply this scheme to several state-of-the-art algorithms.
The image sequences are freely available for testing, submitting and evaluating
new template-based tracking algorithms, i.e. algorithms that detect or track a
planar object in an image sequence given only one image of the object (called
the template). Keywords: Augmented reality; Optical tracking; Template-based tracking; Benchmark;
Evaluation | |||
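One way to score a template tracker against ground-truth poses is to compare where the estimated and ground-truth transforms map the template corners; the RMS corner distance below is a common choice for planar targets, though the exact metric used in the paper may differ:

```python
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography to Nx2 points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def corner_rms_error(H_estimated, H_ground_truth, template_corners):
    """RMS pixel distance between template corners mapped by the estimated
    and the ground-truth homographies."""
    d = project(H_estimated, template_corners) - project(H_ground_truth, template_corners)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
H_gt  = np.array([[1.02, 0.01, 5.0], [0.00, 0.98, -3.0], [0.0, 0.0, 1.0]])
H_est = np.array([[1.00, 0.00, 7.0], [0.01, 1.00, -1.0], [0.0, 0.0, 1.0]])
print(corner_rms_error(H_est, H_gt, corners))   # RMS corner error in pixels
```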
| Camera tracking by online learning of keypoint arrangements using LLAH in augmented reality applications | | BIBAK | Full-Text | 109-117 | |
| Hideaki Uchiyama; Hideo Saito; Myriam Servières; Guillaume Moreau | |||
| We propose a camera-tracking method based on online learning of keypoint
arrangements for augmented reality applications. As target objects, we deal
with intersection maps from GIS and with text documents, which are not handled
well by the popular SIFT and SURF descriptors. For keypoint matching by
keypoint arrangement, we use locally likely arrangement hashing (LLAH), in
which the arrangement descriptors computed at one viewpoint are not invariant
over a wide range of viewpoints because the arrangement changes as the
viewpoint changes. To solve this problem, we propose online learning of
descriptors using the new configurations of keypoints observed at new
viewpoints. The proposed method allows keypoint matching to proceed under new
viewpoints. We evaluate the performance and robustness of our tracking method
under view changes. Keywords: LLAH; Feature descriptor; Camera tracking; Augmented reality | |||
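To make the arrangement-hashing idea concrete, the much-simplified sketch below hashes each keypoint by quantized area ratios of triangles formed with its nearest neighbours and simply adds the descriptors seen at a new viewpoint to the same table. The invariant, quantization, and table layout are illustrative, not the exact LLAH formulation:

```python
from collections import defaultdict
import math

def area(a, b, c):
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def descriptor(p, points, n=5, levels=8):
    """Hashable descriptor of keypoint p from its n nearest neighbours:
    quantized ratios of areas of consecutive neighbour triangles (area
    ratios are affine invariants, the core idea behind LLAH)."""
    nbrs = sorted(points, key=lambda q: math.dist(p, q))[1:n + 1]
    ratios = []
    for i in range(len(nbrs) - 2):
        a1, a2 = area(p, nbrs[i], nbrs[i+1]), area(p, nbrs[i+1], nbrs[i+2])
        ratios.append(min(int(a1 / a2 * levels), 4 * levels) if a2 > 0 else 0)
    return tuple(ratios)

class KeypointIndex:
    def __init__(self):
        self.table = defaultdict(set)          # descriptor -> keypoint ids

    def learn(self, points, ids):
        """Used both offline and online: at a new viewpoint the arrangement
        (hence the descriptor) changes, so new entries are simply added."""
        for p, kp_id in zip(points, ids):
            self.table[descriptor(p, points)].add(kp_id)

    def match(self, points):
        votes = defaultdict(int)
        for p in points:
            for kp_id in self.table.get(descriptor(p, points), ()):
                votes[kp_id] += 1
        return votes

index = KeypointIndex()
pts = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3), (6, 5)]
index.learn(pts, ids=list(range(6)))
print(index.match(pts))
```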
| Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment | | BIBAK | Full-Text | 119-132 | |
| Momoyo Nagase; Daisuke Iwai; Kosuke Sato | |||
| This paper presents a novel model-based approach to dynamic defocus and
occlusion compensation in a multi-projection environment. Conventional defocus
compensation research applies appearance-based methods, which need a point
spread function (PSF) calibration whenever the position or orientation of the
projection object changes, and thus cannot be applied to interactive
applications in which the object moves dynamically. On the other hand, we
propose a model-based method in which the PSF and geometric calibrations are
required only once in advance, and the projector's PSF is computed online from
the geometric relationship between the projector and the object without any
additional calibration. We propose to distinguish the oblique blur (loss of
high-spatial-frequency components according to the incidence angle of the
projection light) from the defocus blur and to introduce it into the PSF
computation. For each part of the object surfaces, we select an optimal
projector that preserves the largest amount of high-spatial-frequency
components of the original image to realize defocus-free projection. The
geometric relationship can also be used to eliminate the cast shadows of the
projection images in a multi-projection environment. Our method is particularly
useful in interactive systems because the movement of the object
(consequently geometric relationship between each projector and the object) is
usually measured by an attached tracking sensor. This paper describes details
about the proposed approach and a prototype implementation. We performed two
proof-of-concept experiments to show the feasibility of our approach. Keywords: Projection-based mixed reality; Multi-projection environment; Defocus
compensation; Shadow removal; PSF computation | |||
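The projector-selection step can be pictured as scoring each projector per surface patch with a defocus term (distance from the focal plane) plus an oblique term (grazing incidence spreads the pixels), then assigning the patch to the lowest-scoring projector. The scoring model below is an illustrative stand-in, not the paper's calibrated PSF:

```python
import math

def blur_score(patch_pos, patch_normal, projector):
    """Lower is better: defocus term plus an oblique-incidence term.
    `projector` needs: position, focus_distance, oblique_weight."""
    to_patch = [p - q for p, q in zip(patch_pos, projector["position"])]
    dist = math.sqrt(sum(c * c for c in to_patch))
    view_dir = [c / dist for c in to_patch]
    defocus = abs(dist - projector["focus_distance"])
    cos_incidence = max(1e-3, -sum(v * n for v, n in zip(view_dir, patch_normal)))
    oblique = 1.0 / cos_incidence - 1.0
    return defocus + projector["oblique_weight"] * oblique

def select_projector(patch_pos, patch_normal, projectors):
    return min(range(len(projectors)),
               key=lambda i: blur_score(patch_pos, patch_normal, projectors[i]))

projectors = [
    {"position": (0.0, 0.0, 0.0), "focus_distance": 2.0, "oblique_weight": 0.5},
    {"position": (2.0, 0.0, 0.0), "focus_distance": 2.0, "oblique_weight": 0.5},
]
patch = (1.8, 0.0, 1.0)                  # near projector 1, oblique for 0
normal = (0.0, 0.0, -1.0)                # facing back toward the projectors
print(select_projector(patch, normal, projectors))
# -> 0 here: projector 0 is nearly in focus despite the more oblique angle
```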
| Two-handed tangible interaction techniques for composing augmented blocks | | BIBAK | Full-Text | 133-146 | |
| Hyeongmook Lee; Mark Billinghurst; Woontack Woo | |||
| Modeling tools typically have their own interaction methods for combining
virtual objects. For realistic composition in 3D space, many researchers from
the fields of virtual and augmented reality have been trying to develop
intuitive interactive techniques using novel interfaces. However, many modeling
applications require a long learning time for novice users because of
unmanageable interfaces. In this paper, we propose two-handed tangible
augmented reality interaction techniques that provide an easy-to-learn and
natural combination method using simple augmented blocks. We have designed a
novel interface called the cubical user interface, which has two tangible cubes
that are tracked using markers. Using this interface, we suggest two types
of interactions based on familiar metaphors from real object assembly. The
first, the screw-driving method, recognizes the user's rotation gestures and
allows them to screw virtual objects together. The second, the block-assembly
method, adds objects based on their direction and position relative to
predefined structures. We evaluate the proposed methods in detail with a user
experiment comparing them. Keywords: Two-handed interaction; Tangible interaction; Augmented reality; 3D model
assembly; Multi-modal feedback | |||
| Document search support by making physical documents transparent in projection-based mixed reality | | BIBAK | Full-Text | 147-160 | |
| Daisuke Iwai; Kosuke Sato | |||
| This paper presents Limpid Desk, a system that supports document search on a physical
desktop by making the upper layer of a document stack transparent in a
projection-based mixed reality environment. A user can visually access a
lower-layer document without physically removing the upper documents. This is
accomplished by superimposition of cover textures of lower-layer documents on
the upper documents by projected imagery. This paper introduces a method of
generating projection images that make physical documents transparent.
Furthermore, a touch sensing method based on thermal image processing is
proposed for the system's input interface. Areas touched by a user on physical
documents can be detected without any user-worn or handheld devices. This
interface allows a user to select a stack to be made transparent by a simple
touch gesture. Three document search support techniques are realized using the
system. User studies are conducted, and the results show the effectiveness of
the proposed techniques. Keywords: Projection-based mixed reality; Document search support; Making documents
transparent; Thermal image processing; Thermal trace; Touch sensing | |||
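Thermal-trace touch sensing relies on the warm footprint a fingertip leaves on paper; a simplified detection pass over one thermal frame (threshold and region-size values are illustrative, not the paper's calibrated parameters) might look like this:

```python
import numpy as np
from scipy import ndimage

def detect_touches(thermal_frame, background, delta=1.5, min_area=30, max_area=400):
    """Return centroids (row, col) of warm traces left by fingertips.
    thermal_frame, background : 2D arrays of temperatures in deg C
    delta    : how much warmer than the background a trace must be
    min_area, max_area : plausible fingertip-trace sizes in pixels
    """
    warm = (thermal_frame - background) > delta
    labels, n = ndimage.label(warm)
    touches = []
    for region in range(1, n + 1):
        mask = labels == region
        if min_area <= int(mask.sum()) <= max_area:
            touches.append(ndimage.center_of_mass(mask))
    return touches

background = np.full((120, 160), 24.0)
frame = background.copy()
frame[60:68, 80:88] += 3.0                # synthetic fingertip trace
print(detect_touches(frame, background))  # ~[(63.5, 83.5)]
```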
| An augmented reality interface to contextual information | | BIBAK | Full-Text | 161-173 | |
| Antti Ajanki; Mark Billinghurst; Hannes Gamper; Toni Järvenpää | |||
| In this paper, we report on a prototype augmented reality (AR) platform for
accessing abstract information in real-world pervasive computing environments.
Using this platform, objects, people, and the environment serve as contextual
channels to more information. The user's interest with respect to the
environment is inferred from eye movement patterns, speech, and other implicit
feedback signals, and these data are used for information filtering. The
results of proactive context-sensitive information retrieval are augmented onto
the view of a handheld or head-mounted display or uttered as synthetic speech.
The augmented information becomes part of the user's context, and if the user
shows interest in the AR content, the system detects this and provides
progressively more information. In this paper, we describe the first use of the
platform to develop a pilot application, Virtual Laboratory Guide, and early
evaluation results of this application. Keywords: Augmented reality; Gaze tracking; Information retrieval; Machine learning;
Pattern recognition | |||
| User interface design for military AR applications | | BIBAK | Full-Text | 175-184 | |
| Mark A. Livingston; Zhuming Ai; Kevin Karsch; Gregory O. Gibson | |||
| Designing a user interface for military situation awareness presents
challenges for managing information in a useful and usable manner. We present
an integrated set of functions for the presentation of and interaction with
information for a mobile augmented reality application for military use. Our
research has concentrated on four areas. We filter
information based on relevance to the user (in turn based on location),
evaluate methods for presenting information that represents entities occluded
from the user's view, enable interaction through a top-down map view metaphor
akin to current techniques used in the military, and facilitate collaboration
with other mobile users and/or a command center. In addition, we refined the
user interface architecture to conform to requirements from subject matter
experts. We discuss the lessons learned in our work and directions for future
research. Keywords: Augmented reality; Mobile systems; User interface; Interaction; Evaluation | |||
| Augmenting aerial earth maps with dynamic information from videos | | BIBAK | Full-Text | 185-200 | |
| Kihwan Kim; Sangmin Oh; Jeonggyu Lee; Irfan Essa | |||
| We introduce methods for augmenting aerial visualizations of Earth (from
tools such as Google Earth or Microsoft Virtual Earth) with dynamic information
obtained from videos. Our goal is to make Augmented Earth Maps that visualize
plausible live views of dynamic scenes in a city. We propose different
approaches to analyze videos of pedestrians and cars in real situations, under
differing conditions to extract dynamic information. Then, we augment Aerial
Earth Maps (AEMs) with the extracted live and dynamic content. We also analyze
natural phenomena (skies, clouds) and project information from these onto the
AEMs to add to the visual reality. Our primary contributions are: (1) Analyzing
videos with different viewpoints, coverage, and overlaps to extract relevant
information about view geometry and movements, with limited user input. (2)
Projecting this information appropriately to the viewpoint of the AEMs and
modeling the dynamics in the scene from observations to allow inference (in
case of missing data) and synthesis. We demonstrate this over a variety of
camera configurations and conditions. (3) The modeled information from videos
is registered to the AEMs to render appropriate movements and related dynamics.
We demonstrate this with traffic flow, people movements, and cloud motions. All
of these approaches are brought together as a prototype system for a real-time
visualization of a city that is alive and engaging. Keywords: Augmented reality; Augmented virtual reality; Video analysis; Computer
vision; Computer graphics; Tracking; View synthesis; Procedural rendering | |||
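Registering tracked positions from a ground-level video onto the aerial map can be done, in the simplest planar case, with a homography from the image ground plane to map coordinates; the sketch below (with made-up correspondences) illustrates that projection step:

```python
import numpy as np
import cv2

# Four ground-plane correspondences: pixel (x, y) in the video frame mapped
# to (easting, northing) on the aerial map. The values are made up.
video_pts = np.array([[100, 700], [1180, 710], [880, 420], [330, 415]], np.float32)
map_pts   = np.array([[10.0, 5.0], [60.0, 5.0], [58.0, 45.0], [12.0, 44.0]], np.float32)
H, _ = cv2.findHomography(video_pts, map_pts)

def to_map(track_xy):
    """Project a tracked object's image position onto the aerial map."""
    p = np.array([[track_xy]], dtype=np.float32)       # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

# A pedestrian track in image pixels becomes a path on the aerial map.
for pt in [(200, 690), (400, 650), (600, 600)]:
    print(to_map(pt))
```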
| In-Place Augmented Reality | | BIBAK | Full-Text | 201-212 | |
| Oriel Bergig; Nate Hagbi; Jihad El-Sana; Klara Kedem; Mark Billinghurst | |||
| In this paper, we present a vision-based approach for transmitting virtual
models for Augmented Reality, which we name In-Place Augmented Reality (IPAR).
A two-dimensional representation of the virtual models is embedded in a printed
image. We apply computer vision techniques to interpret the printed image and
extract the virtual models, which are then overlaid on the printed image. The
main advantages of our approach are: (1) the embedded virtual models and their
behaviors are understandable to a human without using an AR system, and (2) no
database or network communication is required to retrieve the
models. To demonstrate the technology and test its usability, we implemented
several applications and performed a user evaluation. We discuss how the
proposed technique can be used for the development of applications in different
domains such as education, advertisement, and gaming. Keywords: Augmented Reality content; Content transmission; Model embedding; Dual
perception encoding; In-Place Augmented Reality | |||
| Pick-by-vision: there is something to pick at the end of the augmented tunnel | | BIBAK | Full-Text | 213-223 | |
| Björn Schwerdtfeger; Rupert Reif; Willibald A. Günthner; Gudrun Klinker | |||
| We report on the long process of exploring, evaluating and refining
augmented reality-based methods to support the order picking process of
logistics applications. Order picking means that workers have to pick items out
of numbered boxes in a warehouse, according to a work order. To support those
workers, we have evaluated different HMD-based visualizations in six user
studies, starting in a laboratory setup and continuing later in an industrial
environment. This was a challenging task, as we had to conquer different kinds
of navigation problems from very coarse to very fine granularity and accuracy.
The resulting setup consists of a combined and adaptive visualization to
precisely and efficiently guide the user even if the actual picking target is
not always in the field of view of the HMD. Keywords: Augmented reality; User studies; Order picking; Logistics; Tracked
head-mounted display; Guidance | |||
| Animatronic shader lamps avatars | | BIBAK | Full-Text | 225-238 | |
| Peter Lincoln; Greg Welch; Andrew Nashel; Andrei State; Adrian Ilie | |||
| Applications such as telepresence and training involve the display of real
or synthetic humans to multiple viewers. When attempting to render the humans
with conventional displays, non-verbal cues such as head pose, gaze direction,
body posture, and facial expression are difficult to convey correctly to all
viewers. In addition, a framed image of a human conveys only a limited physical
sense of presence -- primarily through the display's location. While progress
continues on articulated robots that mimic humans, the focus has been on the
motion and behavior of the robots rather than on their appearance. We introduce
a new approach for robotic avatars of real people: the use of cameras and
projectors to capture and map both the dynamic motion and the appearance of a
real person onto a humanoid animatronic model. We call these devices
animatronic Shader Lamps Avatars (SLA). We present a proof-of-concept prototype
composed of a camera, a tracking system, a digital projector, and a life-sized
Styrofoam head mounted on a pan-tilt unit. The system captures imagery of a
moving, talking user and maps the appearance and motion onto the animatronic
SLA, delivering a dynamic, real-time representation of the user to multiple
viewers. Keywords: Telepresence; Avatar; Shader lamps; Teleconferencing; Conferencing;
Animatronic | |||
| Modeling literary culture through interactive digital media | | BIBAK | Full-Text | 239-247 | |
| Chamari Edirisinghe; Kening Zhu; Nimesha Ranasinghe; Eng Tat Khoo | |||
| In the rapidly transforming landscape of the modern world, people
unconsciously refrain from interacting in public spaces, confining their
otherwise extensive and universal communications to the home and to relatively
individual settings. Mass connectivity and technological advancement have
created new cultural values, altering human perception of the surrounding
world. This state of affairs is jeopardizing cultural identities that have
endured for centuries, shaping the values and associated customs of numerous
generations. Furthermore, computer technology has become deeply integrated
with modern culture, which prompted us to explore avenues of cultural computing
on the familiar ground of modern society. With the intention of promoting the
values of distinct cultures, and thereby enhancing social relationships, we
have developed a framework for communicating literature through digital media,
which provides the platform for Poetry Mix-up. Keywords: Cultural computing; Social interaction; Literature; SMS; Mobile culture;
Digital media; Interactive computing system | |||
| Confucius Computer: bridging intergenerational communication through illogical and cultural computing | | BIBAK | Full-Text | 249-265 | |
| Eng Tat Khoo; Adrian David Cheok; Wei Liu; Xiaoming Hu; Peter Marini | |||
| Confucius Computer is a new form of illogical cultural computing based on
the Eastern paradigms of balance and harmony. The system uses new media to
revive and model ancient Eastern and Confucius philosophies and teachings,
presenting them in new contexts, such as online social chat, music and food.
Based on the model of Eastern mind and teaching, the system enables users to
have meaningful social network communication with a virtual Confucius. The
Confucius Computer system offers a new artistic playground for interactive
music-painting creation based on our Confucius music filters and the ancient
model of Cycles of Balance. Confucius Computer also allows users to explore the
traditional Chinese medicine concept of Yin-Yang through interactive recipe
creation. Detailed descriptions of the systems are presented in this paper. Our
user studies showed that users gave positive feedback on their experience of
interacting with Confucius Computer. They believed that this medium could
improve intergenerational interaction and promote a sense of calmness. Keywords: Cultural computing; Intergenerational communication; Illogical computing;
Confucius | |||
| Immersive interactive reality: Internet-based on-demand VR for cultural presentation | | BIBAK | Full-Text | 267-278 | |
| Barnabás Takács | |||
| This paper presents an Internet-based virtual reality technology called
panoramic broadcasting (PanoCAST), in which multiple viewers share an
experience while each has full control of what they see, independently of the
other viewers.
Our solution was developed for telepresence-based cultural presentation and
entertainment services. The core architecture involves a compact spherical
vision system that compresses and transmits data from multiple digital video
sources to a central host computer, which in turn distributes the recorded
information among multiple rendering and streaming servers for personalized
viewing over the Internet or mobile devices. In addition, using advanced
computer vision, tracking and animation features, the PanoCAST architecture
introduces the notion of Clickable Content Management (CCM), where each visual
element in the image becomes a source for providing further information,
educational content and cultural detail. Key contributions of our application
to advance the state-of-the-art include bringing streaming panoramic video onto
mobile platforms, an advanced tracking interface to turn visual elements into
sources of interaction, physical simulation to combine the benefits of
panoramic video with that of 3D models and animated, photo-realistic faces to
help users express their emotions in shared online virtual cultural experiences
as well as a feedback mechanism in such environments. Therefore, we argue that
the PanoCAST system offers a low-cost solution for personalized
content distribution and as such it can serve as a unified basis for novel
applications many of which are demonstrated in this paper. Keywords: Immersive interactive reality; Cultural presentation; Virtual reality
on-demand; Virtual human interface; Panoramic broadcasting (PanoCAST) | |||
| Virtual reality for cultural landscape visualization | | BIBAK | Full-Text | 279-294 | |
| Sébastien Griffon; Amélie Nespoulous; Jean-Paul Cheylan; Pascal Marty | |||
| Although land managers and policy-makers generally have a good sense of what
results can be expected from their decisions, they are often faced with
difficulty when trying to communicate the visual impact of a management option
to stakeholders, particularly when the landscape exhibits a high cultural
value. Three-dimensional visualization of the landscape is often used for
communicating with the stakeholders. A challenge in participatory methods for
integrated assessment and policy planning is to visualize future changes in
land use according to scenarios. A 3-D landscape visualization component, SLE
("Seamless Landscape Explorer"), has been developed, which is launched after a
scenario simulation to allow for exploration of landscape changes. Pressures
causing such changes are translated into changes in the spatial configuration
of the landscape. The different types of land use are visualized using a
library of detailed textures, and vegetation can be added. This has been
applied to a study of four scenarios in the French Mediterranean region, which
were set up as part of a participatory process for discussing the planning of
the regional peri-urban and agricultural policy, in an area dominated by the
typical culturally sensitive Mediterranean matorral ("garrigue" shrubland)
surrounding the Pic Saint-Loup mountain. Examples of visualization are shown
and discussed here. Keywords: Landscape; Visualization; Computer imagery; 3D modeling; Virtual reality | |||
| Digilog book for temple bell tolling experience based on interactive augmented reality | | BIBAK | Full-Text | 295-309 | |
| Taejin Ha; Youngho Lee; Woontack Woo | |||
| We first present the concept of the Digilog Book, an augmented paper book
that provides additional multimedia content stimulating readers' five senses
using augmented reality (AR) technologies. We also develop a prototype to show
the usefulness and effectiveness of the book. The Digilog book has the
following characteristics: AR content descriptions for updatable multisensory
AR contents; enhanced experience with multisensory feedback; and interactive
experience with computer vision-based manual input methods. As an example
of an entertaining and interactive Digilog Book, this paper presents a "temple
bell experience" book and its implementation details. Informal user observation
and interviews were conducted to verify the feasibility of the prototype book.
As a result, this case study of the Digilog book can be useful in guiding the
design and implementation of other Digilog applications, including posters,
pictures, newspapers, and sign boards. Keywords: Digilog book; Culture technology; User interaction; Multisensory experience;
Augmented reality | |||
| Virtual and augmented reality for cultural computing and heritage: a case study of virtual exploration of underwater archaeological sites (preprint) | | BIBAK | Full-Text | 311-327 | |
| Mahmoud Haydar; David Roussel; Madjid Maïdi; Samir Otmane; Malik Mallem | |||
| The paper presents different issues dealing with the preservation of
cultural heritage using virtual reality (VR) and augmented reality (AR)
technologies in a cultural context. While the VR/AR technologies are outlined,
attention is paid to 3D visualization and 3D interaction modalities,
illustrated through three different demonstrators: the VR demonstrators
(immersive and semi-immersive) and the AR demonstrator including tangible user
interfaces. To show the benefits of VR and AR technologies for studying and
preserving cultural heritage, we investigated visualisation of, and interaction
with, reconstructed underwater archaeological sites. The basic idea behind
using VR and AR techniques is to offer archaeologists and the general public
new insights into the reconstructed archaeological sites, allowing
archaeologists to study directly from within the virtual site and the general
public to immersively explore a realistic reconstruction of the sites. Both activities
are based on the same VR engine, but drastically differ in the way they present
information and exploit interaction modalities. The visualisation and
interaction techniques developed through these demonstrators are the results of
the ongoing dialogue between the archaeological requirements and the
technological solutions developed. Keywords: Underwater archaeology; Mixed reality; Virtual reality; Augmented reality;
Cultural heritage; Cultural computing | |||