| The 'H' in HCI: Enhancing Perception of Interaction through the Performative | | BIBAK | Full-Text | 3-12 | |
| Simon Biggs; Mariza Dima; Henrik Ekeus; Sue Hawksley; Wendy Timmons; Mark Wright | |||
| Motion sensing technologies are well developed at the bio-mechanical (motion
capture) and geo-locative (GPS) scales. However, there are many degrees of
scale between these extremes and there have been few attempts to seek the
integration of systems that were designed for distinct contexts and tasks. The
proposition that motivated the Scale project team was that through such systems
integration it would be possible to create an enhanced perception of
interaction between human participants who might be co-located or remotely
engaged, separated in either (or both) time or space. A further aim was to
examine how the use of these technologies might inform current discourse on
the performative. Keywords: multi-modal; scaleable; interactive environments; interdisciplinary
research; perception | |||
| Advanced Interaction Techniques for Augmented Reality Applications | | BIBAK | Full-Text | 13-22 | |
| Mark Billinghurst; Hirokazu Kato; Seiko Myojin | |||
| Augmented Reality (AR) research has been conducted for several decades,
although until recently most AR applications had simple interaction methods
using traditional input devices. AR tracking, display technology and software
have progressed to the point where commercial applications can be developed.
However, there are opportunities to provide new advanced interaction techniques
for AR applications. In this paper we describe several interaction methods that
can be used to provide a better user experience, including tangible user
interaction, multimodal input and mobile interaction. Keywords: Augmented Reality; Interaction Techniques; Tangible User Interfaces;
Multimodal Input | |||
| Methods for Quantifying Emotion-Related Gait Kinematics | | BIBAK | Full-Text | 23-31 | |
| Elizabeth A. Crane; M. Melissa Gross; Ed Rothman | |||
| Quantitative models of whole body expressive movement can be developed by
combining methods from biomechanics, psychology, and statistics. The purpose of
this paper was to use motion capture data to assess emotion-related gait
kinematics of hip and shoulder sagittal plane movement to evaluate the
feasibility of using functional data analysis (FDA) for developing quantitative
models. Overall, FDA was an effective method for comparing gait waveforms and
emotion-related kinematics were associated with emotion arousal level. Keywords: Whole Body Interaction; Motion Capture; Functional Data Analysis; Affective
Computing | |||
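A core preprocessing step in functional data analysis of gait is time-normalizing each gait cycle to a common 0-100% domain so that waveforms can be compared pointwise. The following is a minimal sketch of that step with hypothetical data, not the authors' code:

```python
import numpy as np
from scipy.interpolate import interp1d

def normalize_gait_cycles(cycles, n_points=101):
    """Resample variable-length gait-cycle waveforms to a common domain."""
    t_common = np.linspace(0.0, 1.0, n_points)
    resampled = []
    for angles in cycles:                      # one joint-angle waveform per cycle
        t = np.linspace(0.0, 1.0, len(angles))
        resampled.append(interp1d(t, angles, kind='cubic')(t_common))
    return np.vstack(resampled)               # shape: (n_cycles, n_points)

# Pointwise mean curve and 95% band for one emotion condition (illustrative)
curves = normalize_gait_cycles([np.sin(np.linspace(0, 2 * np.pi, n))
                                for n in (98, 105, 110)])
mean_curve = curves.mean(axis=0)
band = 1.96 * curves.std(axis=0, ddof=1) / np.sqrt(curves.shape[0])
```

Condition means and bands computed this way can then be compared across emotions, which is the kind of waveform comparison the abstract describes.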
| Towards an Advanced Framework for Whole Body Interaction | | BIBAK | Full-Text | 32-40 | |
| David England; Martin Randles; Paul Fergus; A. Taleb-Bendiab | |||
| Whole Body Interaction has emerged in recent years as a discipline that
integrates the physical, physiological, cognitive and emotional aspects of a
person's complete interaction with a digital environment. In this paper we
present a framework to handle the integration of the complex set of input signals
and the feedback required to support such interaction. The framework is based
on the principles of Autonomic Computing and aims to provide adaptation and
robustness in the management of whole body interaction. Finally, we present some
example case studies of how such a framework could be used. Keywords: Whole Body Interaction; Motion Capture; Autonomic Computing | |||
| Evaluation of Body Sway and the Relevant Dynamics While Viewing a Three-Dimensional Movie on a Head-Mounted Display by Using Stabilograms | | BIBAK | Full-Text | 41-50 | |
| Kazuhiro Fujikake; Masaru Miyao; Tomoki Watanabe; Satoshi Hasegawa; Masako Omori; Hiroki Takada | |||
| The viewers of three-dimensional (3D) movies often complain of blurring and
bleeding. They sometimes experience visually induced motion sickness (VIMS). In
this study, the effect of VIMS on body sway was examined using stabilograms. We
measured the sway in the center of gravity before and during the exposure to
images projected on a head-mounted display (HMD). While viewing, the subjects
were instructed to remain in the Romberg posture for the first 60 seconds and
maintain a wide stance (midline of the heels, 20 cm apart) for the next 60
seconds. Employing the Double-Wayland algorithm, we measured the degree of
determinism in the dynamics of the sway in the center of gravity while subjects
viewed 3D movies on an HMD. As a result, the dynamics of the sway during and
before the exposure were considered to be stochastic. Thus, exposure to 3D
movies would not change the dynamics to a deterministic one. Keywords: Three-dimensional (3D) movie; Visually induced motion sickness; Stabilogram;
Degree of determinism; Double-Wayland algorithm | |||
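The Wayland translation error, on which the Double-Wayland algorithm builds (the "double" variant applies the same statistic to the series and to its temporal differences), quantifies determinism: values near 1 indicate a stochastic process, values much below 1 a deterministic flow. A hedged sketch under the standard definition, not the authors' implementation:

```python
import numpy as np

def wayland_etrans(x, dim=4, tau=1, n_ref=50, k=4, seed=0):
    """Median Wayland translation error of a scalar time series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau - 1          # points that have a one-step image
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    rng = np.random.default_rng(seed)
    errors = []
    for i in rng.choice(n - 1, size=n_ref, replace=False):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf                       # exclude the reference point itself
        nbrs = np.argsort(dist)[:k]
        idx = np.append(nbrs[nbrs < n - 1], i) # keep indices that have an image
        v = emb[idx + 1] - emb[idx]            # one-step translation vectors
        v_bar = v.mean(axis=0)
        errors.append(np.mean(np.linalg.norm(v - v_bar, axis=1))
                      / (np.linalg.norm(v_bar) + 1e-12))
    return float(np.median(errors))

# E_trans near 1 suggests stochastic dynamics, << 1 deterministic dynamics;
# the Double-Wayland variant evaluates np.diff(x) with the same statistic.
x = np.random.default_rng(1).standard_normal(2000)
print(wayland_etrans(x), wayland_etrans(np.diff(x)))
```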
| Estimation of User Interest from Face Approaches Captured by Webcam | | BIBAK | Full-Text | 51-59 | |
| Kumiko Fujisawa; Kenro Aihara | |||
| We propose a methodology for estimating a user's interest in documents
displayed on a computer screen from his or her physical actions. Some studies
show that physical actions captured by a device can be indicators of a user's
interest. We introduce the ongoing pilot study's results, which show the
possible relationship between a user's face approaching the screen, as captured
by a webcam, and their interest in the document on the screen. Our system uses
a common user-friendly device. We evaluate our prototype system in terms of
estimating interest from such face approaches and of its practicality, and
discuss the future possibilities of our
research. Keywords: Interface design; knowledge acquisition; user interest; motion capture | |||
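The proximity cue can be approximated with a commodity webcam by tracking the width of the detected face box: when the face grows markedly larger than its running baseline, the user is leaning toward the screen. A sketch using OpenCV's stock Haar cascade; the detector choice and thresholds are assumptions, as the abstract does not specify them:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
baseline = None
ALPHA, APPROACH_RATIO = 0.05, 1.25     # smoothing factor, approach threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        w = max(faces, key=lambda f: f[2] * f[3])[2]   # widest face, in pixels
        baseline = w if baseline is None else (1 - ALPHA) * baseline + ALPHA * w
        if w > APPROACH_RATIO * baseline:
            print('face approach detected -> possible interest in the document')
    if cv2.waitKey(1) == 27:           # Esc quits
        break
cap.release()
```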
| Spatial Navigation in a Virtual Multilevel Building: The Role of Exocentric View in Acquiring Survey Knowledge | | BIBA | Full-Text | 60-69 | |
| Zhiqiang Luo; Henry Been-Lirn Duh; I-Ming Chen; Wenshu Luo | |||
| The present study aimed to test the role of the exocentric view in the acquisition of survey knowledge during spatial navigation in a virtual multilevel building. Subjects navigated a virtual three-level building under three conditions. In the first condition, subjects navigated the building without any aid. In the second condition, subjects navigated the building with the aid of a three-dimensional (3D) floor map which illustrated the spatial layout of each level from one exocentric perspective. In the third condition, subjects could watch the spatial layout of each level from the exocentric perspective when traveling to another level by elevator. After navigation, all subjects made judgments of relative spatial direction. The analyses of the accuracy of spatial judgments showed that the accuracy of judgment of horizontal spatial direction was significantly improved when subjects observed the exocentric views of levels in the last two conditions; the judgment of vertical spatial direction was significantly worse in the 3D floor map condition than in the other two conditions. Furthermore, the accuracy of judgment of both horizontal and vertical spatial directions was best in the direction faced by subjects when they first entered each level. The results suggest that the content of the exocentric view should be carefully designed to improve the acquisition of survey knowledge. Applications of the findings include the design of 3D maps for navigation in virtual multilevel buildings. | |||
| A Real-World Pointing Device Based on an Optical Communication System | | BIBA | Full-Text | 70-79 | |
| Yuichi Mitsudo | |||
| In the present paper, a new augmented reality environment based on an optical communication system is described. Optical communication devices have been used in several studies on ubiquitous computing. A novel physical structure for an optical communication system is developed that enables the user to select an optical signal by simply pointing at its transmitter with his/her finger. In such an environment, the optical transmitter can be treated as a visual tag, referred to as a GhostTag, that includes continuous data such as audio files. In addition, the PointSpeech application, which provides the user with audio assistant data via GhostTags, is presented herein. | |||
| VR Based Movie Watching Method by Reproduction of Spatial Sensation | | BIBAK | Full-Text | 80-89 | |
| Kunihiro Nishimura; Aoi Ito; Tomohiro Tanikawa; Michitaka Hirose | |||
| The conventional way to watch a movie is to view it on a large fixed
screen, as in a theater. Presenting images in a fixed position has the
problem that it is easy for audiences to lose the spatial sensation captured
in existing movies. In this paper, we propose a novel movie watching method
that improves presence in existing media contents using virtual reality
technology. We assumed that when frames are presented at the original shooting
angle, relative to the audience's viewing position, presence will be much
higher. To recover the camera-shooting angle, we used an optical flow method.
We propose a movie viewing method based on the reconstructed camera shooting
angle, presented with a moving projector or a wall screen. Our method makes it
possible to reconstruct the lost spatial sensation in movies. Keywords: Presence; Camera Work; Roaming Images; a Moving Projector; Spatial Sensation | |||
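One way to recover a dominant camera motion from an existing movie, in the spirit of the optical-flow approach the abstract mentions, is to average dense flow over each frame pair; the mean horizontal component approximates the pan rate. A hedged sketch with OpenCV's Farnebäck flow (the paper's exact method is not given):

```python
import cv2
import numpy as np

def pan_trajectory(video_path):
    """Integrated mean horizontal flow as a proxy for the camera pan angle."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError('cannot read ' + video_path)
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rates = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        rates.append(float(flow[..., 0].mean()))   # mean x-flow = pan rate
        prev = gray
    cap.release()
    return np.cumsum(rates)   # cumulative pan, in pixels, frame by frame
```

The cumulative pan curve could then drive where a moving projector points each frame, which is the presentation idea the abstract sketches.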
| Comparison of Measurement of Accommodation between LCD and CRT at the Stereoscopic Vision Gaze | | BIBAK | Full-Text | 90-96 | |
| Masako Omori; Satoshi Hasegawa; Tomoyuki Watanabe; Kazuhiro Fujikake; Masaru Miyao | |||
| In the present study, we examined the visual accommodation of subjects who
were gazing fixedly at 3D images from two different displays: a cathode ray
tube (CRT) while wearing special glasses and a liquid crystal display (LCD)
while not wearing special glasses. The subjects in this experiment were two
healthy people aged 22 and 39 years, both with normal vision. The instrument
objectively measured visual accommodative changes of the right eye in both
binocular and natural viewing conditions. The results suggested that it was
easy and comfortable to focus on both the LCD and CRT. When the subjects viewed
the progressively receding target, their accommodation was about 0.8 D at the
presumed furthest points, a level at which the ciliary muscle is relaxed. The
accommodative power differed by about 1.5 D from the near to far point. Thus,
the ciliary muscle is repeatedly strained and relaxed while the subject views
the moving target. Keywords: Accommodation; binocular and natural viewing; stereoscopic image; display;
LCD and CRT | |||
| Is Embodied Interaction Beneficial When Learning Programming? | | BIBA | Full-Text | 97-105 | |
| Pablo Romero; Benedict du Boulay; Judy Robertson; Judith Good; Katherine Howland | |||
| Embodied interaction has been claimed to offer important advantages for learning programming. However, claims have frequently been based on intuition, and work in the area has focused largely on system-building rather than on evaluation of and reflection on those claims. Taking into account research in the area as well as in areas such as tangibles, psychology of programming and the learning and teaching of programming, this paper identifies a set of important factors to take into account when analysing the potential of learning environments for programming employing embodied interaction. These factors are formulated as a set of questions that could be asked either when designing or analysing this type of learning environment. | |||
| Mobile Interfaces Using Body Worn Projector and Camera | | BIBAK | Full-Text | 106-113 | |
| Nobuchika Sakata; Teppei Konishi; Shogo Nishida | |||
| Unlike most desktop and laptop computers, mobile interfaces are designed to
let users access information easily in various situations: standing, walking,
and moving. However, most mobile devices, such as cell phones, have a small
keypad and a small display, because such devices must remain compact and
lightweight enough to carry in a pocket. Therefore, they impose a considerable
burden on users in terms of watching a small display and typing on a small
keyboard, and they are not designed to provide implicit and awareness
information. In this paper, we describe how a body-worn projector, which can
project information into the user's peripheral vision, and a body-worn camera,
which can recognize the user's posture and estimate the user's behavior, form
a suitable interface for providing awareness, implicit, and even explicit
information. Finally, we propose two mobile interfaces:
"Palm top display for glance information" and "Floor projection from Lumbar
mounted projector". Keywords: Mobile AR; Wearable Computer; Mobile Interface; Mobile Projector; Procams | |||
| Relationship between Physiological Indices and a Subjective Score in Evaluating Visually Induced Motion Sickness | | BIBAK | Full-Text | 114-119 | |
| Norihiro Sugita; Makoto Yoshizawa; Akira Tanaka; Makoto Abe; Shigeru Chiba; Tomoyuki Yambe; Shin-ichi Nitta | |||
| Visual environments are evolving rapidly along with the popularization of
high resolution and wide field-of-view displays. However, there is a concern
that these environments may have negative effects on viewers' health, such as
visually-induced motion sickness (VIMS). Previous studies reported that some
physiological indices were useful to assess the effect of visual stimulation.
However, we have little knowledge about the temporal relationship between the
severity of sickness and the change in the physiological indices. In this
study, the average mutual information has been employed to investigate this
relationship. The analysis of experimental data suggests that it may be
possible to detect signs of VIMS from the physiological indices before
symptoms develop. Keywords: visually-induced motion sickness; physiological index; subjective score;
averaged mutual information | |||
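Average mutual information between a physiological index x and a subjective score y, evaluated over a range of lags, shows how far in advance the index carries information about the symptoms. A histogram-based sketch; the bin count and lag range are assumptions, not the paper's parameters:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

def lagged_ami(index, score, max_lag=60):
    """I(index(t); score(t+lag)); a peak at lag > 0 hints the index leads VIMS."""
    return [(lag, mutual_information(index[:len(index) - lag or None],
                                     score[lag:]))
            for lag in range(max_lag + 1)]
```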
| Effect of a Stereoscopic Movie on the Correlation between Head Acceleration and Body Sway | | BIBAK | Full-Text | 120-127 | |
| Hiroki Takada; Tetsuya Yamamoto; Masaru Miyao; Tatehiko Aoyama; Masashi Furuta; Tomoki Shiozawa | |||
| Visually induced motion sickness (VIMS) is caused by sensory conflict, the
disagreement between vergence and visual accommodation while observing
stereoscopic images. VIMS can be measured by psychological and physiological
methods. We quantitatively measured the head acceleration and body sway before
and during exposure to a conventional 3D movie. The subjects wore a head-mounted
display and maintained the Romberg posture for the first 60 s and a wide stance
(midlines of the heels 20 cm apart) for the next 60 s. Head acceleration was
measured using an Active Tracer with 50 Hz sampling. The Simulator Sickness
Questionnaire (SSQ) was completed immediately afterward. For the SSQ sub-scores
and each index for stabilograms, we employed two-way ANOVA with leg postures
and presence/absence of stereoscopic images as factors. Moreover, we assumed
the head acceleration to be the input signal of the transfer system that
controls body sway, and estimated the transfer function. Keywords: visually induced motion sickness; stabilometry; sparse density; head
acceleration; transfer function analysis | |||
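Treating head acceleration as the input and body sway as the output, the transfer function can be estimated nonparametrically as H(f) = S_xy(f)/S_xx(f) from cross- and auto-spectra. A sketch with SciPy's Welch estimators; the segment length is an assumption, while the 50 Hz sampling follows the abstract:

```python
import numpy as np
from scipy.signal import csd, welch

FS = 50.0                       # Hz, sampling rate stated in the abstract

def transfer_function(head_acc, sway, nperseg=256):
    """H(f) = S_xy / S_xx with head acceleration as the input signal."""
    f, s_xx = welch(head_acc, fs=FS, nperseg=nperseg)
    _, s_xy = csd(head_acc, sway, fs=FS, nperseg=nperseg)
    h = s_xy / s_xx
    return f, np.abs(h), np.angle(h)   # gain and phase per frequency
```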
| AR City Representation System Based on Map Recognition Using Topological Information | | BIBAK | Full-Text | 128-135 | |
| Hideaki Uchiyama; Hideo Saito; Myriam Servieres; Guillaume Moreau | |||
| This paper presents a system for overlaying 3D GIS information, such as
3D buildings, onto a 2D physical urban map. We propose a map recognition
framework based on analyzing the distribution of local intersections in order
to recognize the area covered by the physical map within the whole map. The
retrieval of the geographical area described by the physical map is based on a
hashing scheme called LLAH. Finally, we show some applications that augment
additional information on the map. Keywords: GIS; Augmented Reality; LLAH | |||
| Estimation of Visually Induced Motion Sickness from Velocity Component of Moving Image | | BIBA | Full-Text | 136-142 | |
| Hiroyasu Ujike | |||
| The purpose of this study is to examine whether the effects of global motion (GM) on visually induced motion sickness (VIMS), found with visual stimuli consisting of simple global motion, also apply to moving images containing combinations of global motion. We previously found, in experiments presenting simple GM, that the velocity, but not the temporal frequency component, of GM dominates subjective scores related to VIMS. To achieve this purpose, I made a model to estimate the discomfort level of a standard observer while watching a moving image. The model first analyses the GM included in the movie; then, the time series of velocity data in each element of the analysed GM is compared with the characteristics of simple GM on VIMS to estimate the discomfort level. The validity of the model was examined by comparing the estimated discomfort level with the actually measured average discomfort level for an identical video movie, one that induces VIMS rather easily. As a result, the model estimates well the values of the subjective score actually measured while observers watched the video movies. | |||
| Supporting Reusability of VR and AR Interface Elements and Interaction Techniques | | BIBAK | Full-Text | 145-153 | |
| Wolfgang Broll; Jan Herling | |||
| In contrast to 2D environments, which apply well-established user interface
elements and generally accepted interaction techniques, VR and AR applications
typically provide rather individual and specific realizations. This often leads
to inconsistent user interfaces and a long and cumbersome development process.
In this paper we show how we extended our approach to modeling VR and AR
interface elements and interaction techniques, represented by interaction and
behavior objects, with some simple yet powerful mechanisms: modules, templates,
and inheritance. We will also show how specific examples could benefit from
this approach. Keywords: Virtual Reality; Augmented Reality; Mixed Reality; 3D User Interfaces;
Multi-modal User Interfaces; Interaction Techniques | |||
| Development of 3D Avatars for Professional Education | | BIBA | Full-Text | 154-158 | |
| Miglena Dontschewa; Andreas Künz; Sabahat Kovanci | |||
| This article covers avatars as anthropomorphic tutors in learning processes within teaching and learning settings. Starting points and objectives are presented, as well as the requirements for the creation, the environment and the tools. The article focuses on the latter, the media-synergetic problems the different tools pose, and possible solutions to those problems. | |||
| Rapidly Prototyping Marker Based Tangible User Interfaces | | BIBAK | Full-Text | 159-168 | |
| Maribeth Gandy; Brian Jones; Scott Robertson; Tiffany O'Quinn; Amos Y. Johnson | |||
| Tangible user interfaces (TUIs) can create engaging and useful interactive
systems. However, along with the power of these interfaces come challenges;
they are often so specialized and novel that building a TUI system involves
working at a low level with custom hardware and software. As a result the
community of people that are capable of creating TUIs is limited. With this
project we aim to make a particular class of TUIs accessible to a broader range
of designers and HCI researchers by exposing TUI specific tools in a
mixed-reality rapid prototyping environment known as DART (The Designer's
Augmented Reality Toolkit). In this paper we discuss the creation of a system
for rapidly prototyping marker based tangible user interfaces. These
prototyping tools were then used to create a set of TUI-based applications with
the goal of raising students' interest in science via an exploration of fine
art concepts. Keywords: Tangible User Interfaces; rapid prototyping; mixed reality; toolkits | |||
| Evaluation of Non-photorealistic 3D Urban Models for Mobile Device Navigation | | BIBAK | Full-Text | 169-178 | |
| Christos Gatzidis; Vesna Brujic-Okretic; Maria Mastroyanni | |||
| This research presents a user evaluation study examining the effect that
different rendering styles of 3D virtual city models, as intended for
navigational purposes, could potentially have on users, with emphasis on
non-photorealistically rendered (NPR) stylizations. The purpose of this
experiment is to establish whether, particularly for the application area
mentioned above, non-photorealistic, expressive rendering could provide
alternative, more effective visual styles than the photorealistic
representations of urban areas usually opted for by developers today. 50
participants were exposed to a predominantly questionnaire-based study assessing
various parameters by observation of the models on a UMPC (Ultra Mobile PC).
The results of this research could have significant implications for how
pedestrian navigation software is visualized in the future. Keywords: non-photorealistic rendering; mobile navigation; urban modeling; user
studies | |||
| Integrating and Delivering Sound Using Motion Capture and Multi-tiered Speaker Placement | | BIBAK | Full-Text | 179-185 | |
| Darin E. Hughes | |||
| Creating effective and compelling soundscapes for simulations is a
challenging process that requires non-traditional tools and techniques outside
the scope of standard production methods. In an immersive simulation, sound is
at least as important as graphics; auditory cues can be heard from behind
walls, around corners, and out of the line of sight. This paper describes a
novel approach to interactive 3D sound design utilizing vision-based motion
capture and multi-tiered, configurable loudspeaker delivery. Keywords: Sound Design; Immersive Simulation; Motion Capture; 3D Sound; XACT | |||
| The Design of a Virtual Trailblazing Tool | | BIBAK | Full-Text | 186-195 | |
| Daniel Iaboni; Carolyn MacGregor | |||
| Trails are a proven means of improving performance in virtual environments
(VEs), but there is very little understanding or support for the role of the
trailblazer. The Use-IT Lab is currently designing a tool, the VTrail System,
to support trailblazing in VEs. The objective of this document is to introduce
the concept of trailblazing, present the initial prototype for a tool designed
specifically to support trailblazing and discuss results from an initial
usability study. Keywords: trailblazing; virtual environments; wayfinding | |||
| User-Centered Evaluation of a Virtual Environment Training System: Utility of User Perception Measures | | BIBAK | Full-Text | 196-205 | |
| Dawei Jia; Asim Bhatti; Chris Mawson; Saeid Nahavandi | |||
| This study assessed the utility of measures of Self-efficacy (SelfEfficacy)
and Perceived VE efficacy (PVEefficacy) for quantifying how effective VEs are
in procedural task training. SelfEfficacy and PVEefficacy have been identified
as affective constructs potentially underlying VE efficacy that are not evident
from user task performance. The motivation for this study is to establish
subjective measures of VE efficacy and investigate the relationship between
PVEefficacy, SelfEfficacy, and user task performance. Results demonstrated the
effects of different levels of prior experience in manipulating 3D objects in
gaming or computer environments (LOE3D) on task performance and user perception
of VE efficacy. Regression analysis revealed that LOE3D, SelfEfficacy, and
PVEefficacy explain significant portions of the variance in VE efficacy. Results
of the study provide further evidence that task performance may share
relationships with PVEefficacy and SelfEfficacy, and that affective constructs
such as PVEefficacy and SelfEfficacy may serve as alternative, subjective
measures of task performance that account for VE efficacy. Keywords: User-based evaluation; Virtual Environment; Evaluation methodology | |||
| Emergent Design: Serendipity in Digital Educational Games | | BIBAK | Full-Text | 206-215 | |
| Michael D. Kickmeier-Rust; Dietrich Albert | |||
| Using computer games for educational purposes is a fascinating idea that is
getting increasingly popular amongst educators, researchers, and developers.
From a technical as well as psycho-pedagogical viewpoint, today's educational
games are at an early stage. Most products can compete neither with
non-educational commercial games nor with conventional educational software.
Research must address fundamental challenges such as methods for convincing
learning-game design and the individualization of gaming experiences. A key
factor is development cost. Entering the market successfully requires reducing
development costs significantly without, however, reducing gaming or learning
quality. In this paper we introduce an approach that uses existing methods for
educational adaptation and personalization together with ideas of emergent game
design. Keywords: Digital educational games; game-based learning; adaptation; personalization;
interactive storytelling; emergent game design | |||
| Intuitive Change of 3D Wand Function in Surface Design | | BIBAK | Full-Text | 216-224 | |
| Sang-Hun Nam; Harksu Kim; Young-Ho Chai | |||
| Depending on the target model a designer wants to sketch, the effective style
or shape of the input device can be defined differently. A spatial sketch system
that supports various types of wands can help designers sketch efficiently. We
suggest the idea of changing the wand style by altering the posture of a 3D
wand. This method allows a designer to work in intuitive ways without being
interrupted by complicated menus. We implement the surface drawing and merging
technique with a grid-based data structure that handles multiple strokes from
various types of wands. Keywords: Virtual Reality; Virtual Conceptual Sketch; Surface Modeling; Interaction
Technique | |||
| Software-Agents for On-Demand Authoring of Mobile Augmented Reality Applications | | BIBA | Full-Text | 225-234 | |
| Rafael Radkowski | |||
| The paper presents a concept for the automatic authoring of augmented reality (AR) applications. The approach is based on software agents that provide different functions and content on demand for an AR application. Autonomous software agents encapsulate the specific functions of an AR application. Two kinds of software agents are distinguished: so-called provider-agents and user-agents. The user-agent is configured by a human user, while the provider-agent provides the functionality of an AR application. Through communication and cooperation, provider- and user-agents form an AR application. The concept has been tested with the agent platform JADE. | |||
| Multiuser Collaborative Exploration of Immersive Photorealistic Virtual Environments in Public Spaces | | BIBA | Full-Text | 235-243 | |
| Scott Robertson; Brian Jones; Tiffany O'Quinn; Peter Presti; Jeff Wilson; Maribeth Gandy | |||
| We have developed and deployed a multimedia museum installation that enables
one or several users to interact with and collaboratively explore a 3D virtual
environment while simultaneously providing an engaging and educational,
theater-like experience for a larger crowd of passive viewers. This interactive
theater experience consists of a large, immersive projection display, a touch
screen display for gross navigation and three wireless, motion-sensing,
hand-held controllers which allow multiple users to collaboratively explore a
photorealistic virtual environment of Atlanta, Georgia and learn about
Atlanta's history and the philanthropic legacy of many of Atlanta's prominent
citizens. Note: Best Paper Award | |||
| A Design Method for Next Generation User Interfaces Inspired by the Mixed Reality Continuum | | BIBAK | Full-Text | 244-253 | |
| Jörg Stöcklein; Christian Geiger; Volker Paelke; Patrick Pogscheba | |||
| In this paper we present a new approach to the systematic user centric
development of next generation user interfaces (NGUI). Central elements of the
approach are a conceptual model that extends the well-established
model-view-controller paradigm with an environment component, an iterative
development methodology that guides development along the mixed reality
continuum, and tools to support the implementation. The approach is
demonstrated with a concrete
example of NGUI development. Keywords: Mixed Reality; Next Generation User Interface Development | |||
| On a Qualitative Method to Evaluate Motion Sickness Induced by Stereoscopic Images on Liquid Crystal Displays | | BIBAK | Full-Text | 254-262 | |
| Hiroki Takada; Kazuhiro Fujikake; Masaru Miyao | |||
| Visually induced motion sickness (VIMS) is known to be caused by sensory
conflict, which is the disagreement between vergence and visual accommodation
while observing stereoscopic images. The simulator sickness questionnaire (SSQ)
is a well-known method that is used herein for verifying the occurrence of
VIMS. We quantitatively measure the sway of the centre of gravity of the human
body before and during exposure to several images. During the measurement,
subjects are instructed to maintain the Romberg posture for the first 60 s and
a wide stance (midlines of the heels 20 cm apart) for the next 60 s. The
stereoscopic images decrease the gradient of the potential function involved in
the stochastic differential equations as a mathematical model of the body sway.
We have succeeded in estimating the decrease in the gradient by using an index
called sparse density. Keywords: stabilometry; Simulator Sickness Questionnaire; sparse density; stochastic
differential equation; potential | |||
| Balancing Design Freedom and Constraints in Wall Posters Masquerading as AR Tracking Markers | | BIBAK | Full-Text | 263-272 | |
| Ryuhei Tenmoku; Akito Nishigami; Fumihisa Shibata; Asako Kimura; Hideyuki Tamura | |||
| This paper describes how to construct a mixed reality (MR) environment by
adopting a geometric registration method using visually unobtrusive flat
posters on the wall. The proposed method is one of several approaches of
the semi-fiducial invisibly coded symbols (SFINCS) research project, the
purpose of which is to achieve a good balance between elegance with regard to
the environment and robust registration. In this method, posters tentatively
used for geometric registration are designed to blend with the environment.
However, they are recognized as markers based on certain design rules. Posters
in a real scene can be found in real time using these design rules. This paper
introduces procedures for developing poster design rules using toolkits we
developed. Keywords: mixed reality; geometric registration; poster; semi-fiducial; authoring tool | |||
| Development of RFID Textile and Human Activity Detection Applications | | BIBAK | Full-Text | 273-281 | |
| Ryoko Ueoka; Atsuji Masuda; Tetsuhiko Murakami; Hideyuki Miyayama; Hidenori Takeuchi; Kazuyuki Hashimoto; Michitaka Hirose | |||
| We developed an RFID woven textile and a customized textile inspection
machine with an automatic map-making function. These developments have the
potential to extend location-aware systems to real use. In this paper, we
present the development process of the RFID textile, the map-making system,
the accuracy of the automatically generated map, and the resulting time
savings. We also outline two prototypes of human activity detection using the
RFID textile. One is a pilot application of a tracking system using 19 meters
of coated RFID textile as a carpet; a person's movements are tracked by shoes
with embedded RFID readers. The other is a pilot application of a
human-activity tracking system using RFID textile wear. When it is worn,
predefined behaviour is detected by RFID readers embedded in the environment.
Conclusions and future work are discussed in Section 6. Keywords: RFID textile; Map making system; Human activity detection | |||
| A Study on the Design of Augmented Reality User Interfaces for Mobile Learning Systems in Heritage Temples | | BIBAK | Full-Text | 282-290 | |
| Kuo-Hsiung Wang; Li-Chieh Chen; Po-Ying Chu; Yun-Maw Cheng | |||
| In order to reduce switching attention and increase the performance and
pleasure of mobile learning in heritage temples, the objective of this research
was to employ the technology of Augmented Reality (AR) on the user interfaces
of mobile devices. Based on a field study and literature review, three user
interface prototypes were constructed. They all offered two service modes but
differed in the location of navigation bars and text display approaches. The
results of the experiment showed that users preferred animated and interactive
virtual objects or characters with sound effects. In addition, transparent
backgrounds for images and text message boxes were better. The superimposed
information should not cover more than thirty percent of the screen, so that
users could still see the background clearly. Keywords: Mobile Learning; User Interface Design; Augmented Reality | |||
| Haptic Interaction and Interactive Simulation in an AR Environment for Aesthetic Product Design | | BIBAK | Full-Text | 293-302 | |
| Monica Bordegoni; Francesco Ferrise; Marco Ambrogio | |||
| Market rules show that most of the time the aesthetic impact of a product
is an important aspect that makes the difference in terms of success among
different products. The product shape is generally created and represented
during the conceptual phase of the product, and recent trends show that the
use of haptic devices allows users to interact more naturally and effectively
with 3D models. Nevertheless, the shape needs to satisfy some engineering
requirements, and its aesthetic and functional analysis requires the
collaboration and synchronization of activities performed by various experts
having different competences and roles. This paper presents the description of
an environment named PUODARSI that allows designers to modify the shape of a
product and engineers to evaluate in real-time the impact of these changes on
the structural and fluid dynamic properties of the product, describing the
choice of the software tools, the implementation and some usability tests. Keywords: Mixed reality; haptic interaction; interactive simulation | |||
| Evaluation of a Haptic-Based Interaction System for Virtual Manual Assembly | | BIBAK | Full-Text | 303-312 | |
| Monica Bordegoni; Umberto Cugini; Paolo Belluco; Marcello Aliverti | |||
| This paper describes a mixed reality application for the assessment of
manual assembly of mechanical systems. The application aims to use low-cost
technologies while at the same time offering an effective environment for the
assessment of a typical task consisting of assembling two components of a
mechanical system. The application is based on the use of a 6-DOF interaction
device that is used for positioning an object in space, and a haptic interface
that is closer to reality and is used for simulating the insertion of a second
component into the first one while feeling a force feedback. The application
has been validated by an expert user in order to identify the main usability
and performance problems and improve its design. Keywords: Virtual Manual Assembly; haptic-based interaction; VR system evaluation | |||
| Transmission of Information through Haptic Interaction | | BIBAK | Full-Text | 313-317 | |
| Koichi Hirota; Yuichiro Sekiguchi | |||
| This paper describes a novel approach to haptic interfaces that transmit
information through dynamic interaction. The approach is based on the idea of
emulating an object that causes dynamic reaction, such as a box with content
inside, using a mechanical device. Hence, the device should be designed in an
object-oriented, self-contained form that can be handled similarly to the
real object. The implementation of a prototype device that materializes this
idea is introduced, and the possibility of expanding the idea to various scales
of interaction and different modalities is also discussed. Keywords: Haptic device; haptic interaction; inertial force; virtual reality | |||
| Development of Realistic Haptic Presentation Media | | BIBAK | Full-Text | 318-325 | |
| Yasushi Ikei | |||
| This paper describes the development of realistic haptic presentation
media -- haptic displays for surface textures. The display utilizes
vibratory stimulation, which is efficient for cutaneous sensation. First, the
characteristics of frequency-mixture stimulation are demonstrated in terms of
the amplitude modulation and the additive synthesis of 250 Hz and 50 Hz, where
the sensitivity of human skin peaks owing to the inherent mechanoreceptors. As
part of this elucidation, the perception of 50 Hz under 250 Hz stimulation and
the associated hardness sensation were measured. Amplitude modulation was more
suitable owing to its small absolute limen, while additive synthesis was more
suitable for softer sensations. In addition, the tactile/proprioceptive hybrid
haptic display was investigated in terms of 3D texture perception. Spatial
textures on the surfaces of an icosahedron were matched and identified at
about three levels of perception difficulty. Textures were discriminated
moderately well despite the limited number of stimulators, which suggests
appropriate improvements. Keywords: Haptic texture display; Frequency mixture; Sensation scaling; Texture
discrimination/identification | |||
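The two frequency-mixture stimuli compared in the paper can be written directly: amplitude modulation of a 250 Hz carrier by a 50 Hz envelope versus additive synthesis of the two tones. A sketch of the waveform generation; the amplitudes, modulation depth, and synthesis rate are illustrative, not the paper's values:

```python
import numpy as np

FS = 20000                        # Hz, synthesis rate (assumption)
t = np.arange(0, 1.0, 1.0 / FS)   # one second of stimulus

def am_stimulus(depth=0.8, carrier=250.0, envelope=50.0):
    """250 Hz carrier amplitude-modulated at 50 Hz."""
    return (1.0 + depth * np.sin(2 * np.pi * envelope * t)) \
           * np.sin(2 * np.pi * carrier * t)

def additive_stimulus(a250=1.0, a50=0.8):
    """Additive synthesis of the same two frequency components."""
    return a250 * np.sin(2 * np.pi * 250.0 * t) \
         + a50 * np.sin(2 * np.pi * 50.0 * t)
```

Both signals concentrate energy at the two skin-sensitivity peaks the abstract names; they differ in whether the 50 Hz component appears as a sideband envelope or as an independent tone.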
| Analysis of Tactual Impression by Audio and Visual Stimulation for User Interface Design in Mixed Reality Environment | | BIBAK | Full-Text | 326-335 | |
| Mami Kagimoto; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura | |||
| In a mixed-reality (MR) environment, a touchable object can be made to
change its appearance when a computer-generated image (MR visual stimulation)
is superimposed onto it. In this research, we conduct experiments to study the
effects of MR visual and audio stimuli on the tactual impression of the
"roughness" of an object. We show that MR visual stimulation alters a subject's
tactual impression of the roughness of an object and that the addition of MR
audio stimulation intensifies that effect. Keywords: Mixed Reality; Tactual Impression; Psychophysical Influence and Visual and
Audio Stimulation | |||
| Fundamental Research on Tactile Perception for Development of a Tactile Feel Display | | BIBA | Full-Text | 336-345 | |
| Iyo Kunimoto; Naoki Saiwaki; Osamu Katayama; Yasuji Inobe | |||
| In our daily life we use a large number of electronic devices incorporating a touch interface, e.g., mobile phones and the iPod Touch. This function is, however, in its infancy, permitting only input, with output being limited to vibration confirming the input. Meanwhile, if we could create touch sensations with "qualitative information," such as the delicate sensation of materials or the feeling of touching an object, it would bring not only an improvement in the quality of touch sensations, but also the possibility of developing new human interfaces such as more realistic VR systems and user-friendly universal communication tools for people with disability. Such human interfaces would be most effective if they did not require the development of special vibratory devices. On this basis, the authors have developed, based on knowledge gained from previous research, a prototype of a unique vibratory device employing a micro-motor, and employed it in evaluation experiments in which various differing tactile sensations were presented to study subjects. | |||
| Enhanced Industrial Maintenance Work Task Planning by Using Virtual Engineering Tools and Haptic User Interfaces | | BIBAK | Full-Text | 346-354 | |
| Simo-Pekka Leino; Salla Lind; Matthieu Poyade; Sauli Kiviranta; Petteri Multanen; Arcadio Reyes-Lecuona; Ari Mäkiranta; Ali Muhammad | |||
| Good maintainability is an essential feature for machines and processes in
industry. It promotes, among other things, maintenance safety, post-maintenance
reliability and cost-effective maintenance by ensuring quick and easy operation
and short downtime. Virtual engineering tools provide an effective way for
maintainability design already during the design phase. Machine designers may
not consider maintenance tasks systematically, which can leave important task
details open. Missing detail planning can contribute significantly to the
probability of safety or reliability risks. So far, generic tools or facilities
for planning demanding maintenance tasks in detail have not been available for
companies' independent use. Another challenge is to develop and apply better
user interfaces for design processes. Virtual engineering tools, such as
virtual reality (VR) and haptics, provide a potential solution for improving
maintenance planning and maintainability design. This paper introduces
development and benefits of a new haptic interface for planning and training
industrial maintenance tasks. The paper introduces a test with haptics tools in
virtual maintenance case examples. In conclusion, we sum up whether the
use of a haptic user interface would enhance task planning and maintainability
design. In addition, we propose a set of recommendations regarding the use of
haptics in maintenance planning and maintainability design. Keywords: Haptics; Virtual Environments; Maintenance | |||
| Characterizing the Space by Thermal Feedback through a Wearable Device | | BIBAK | Full-Text | 355-364 | |
| Takuji Narumi; Tomohiro Akagawa; Young Ah Seong; Michitaka Hirose | |||
| Thermal sensation is a kind of haptic sensation and a very familiar
feeling. However, it is difficult to realize a thermal display that gives
realistic thermal feedback, because thermal perception is rather ambiguous
and slow to respond. Alternatively, thermal feedback can be used
as a new channel for the transmission of imaginary characteristics. We
aim to add characteristics to existing space by providing people with
location-dependent thermal information. By manipulating the thermal information
presented to people, we can change the implicit partitioning of a space without
physically reconstructing it. "Thermotaxis" is a system that gives sensations
of cool and warm to users by controlling thermoelectric devices wirelessly. In
this system, the space is characterized as being cool or warm. Users experience
the difference in temperatures while they walk in the space. Preliminary
analysis shows that people stay close together in areas of comfortable
temperature. Keywords: Ambient Controlling; Characterizing the Space; Thermal Sensation; Thermal
Feedback; Wearable Computing | |||
| A High-Level Haptic Interface for Enhanced Interaction within Virtools™ | | BIBAK | Full-Text | 365-374 | |
| Matthieu Poyade; Arcadio Reyes-Lecuona; Simo-Pekka Leino; Sauli Kiviranta; Raquel Viciana-Abad; Salla Lind | |||
| Haptics is an outstanding technology for providing three-dimensional
interaction within Virtual Environments (VEs). Nevertheless, many software
solutions are not fully prepared to support haptics. This paper presents a
user-friendly implementation of Sensable Phantom haptic interfaces on the
interactive VE authoring platform Virtools 4.0. Haptics implementation was
realized using the Haptic Library API (HLAPI) from OpenHaptics toolkit 2.0,
which provides highly satisfactory custom force effects. The integration of
Phantom interaction at the end-user development level enables logical
interactive VE authoring under Virtools. The haptics implementation was
qualitatively assessed in a manual maintenance case, a welding task, as part
of the national Finnish project VIRVO. The manipulation enhancements provided
by the integration of Phantom interaction in Virtools suggest many further
improvements for more complicated industrial pilot experiments as part of the
European Commission funded project ManuVAR. Keywords: Virtual Reality; Haptics; OpenHaptics; Virtools™; Force Feedback | |||
| A Study of the Attenuation in the Properties of Haptic Devices at the Limit of the Workspace | | BIBAK | Full-Text | 375-384 | |
| Jose San Martin | |||
| In the context of the optimization of virtual reality systems involving a
haptic device, this paper introduces a correction to the formula that defines
the performance of the device near the boundary of its workspace. We also
introduce corrections to an index based on manipulability that takes into
account the frequency with which each zone of the application workspace is
visited during the simulation process, in order to help the designer obtain
the best positioning of the device with respect to the virtual environment. We
demonstrate the new formula by studying three different tasks to be
accomplished. Finally, we search for this best positioning by analyzing not
only the displacement but also the different orientations we can introduce in
the virtual environment, in order to take advantage of the best zones of the
workspace in terms of manipulability. Keywords: Virtual reality; Haptic interface; Manipulability; Mechanical Performance;
Optimal designing | |||
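The underlying quantity is Yoshikawa's manipulability, w(q) = sqrt(det(J(q) J(q)^T)); the paper's contribution is a boundary correction and a visitation-frequency weighting whose exact formulas are not given in the abstract. A hedged sketch of a simple frequency-weighted score under those assumptions:

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa measure w = sqrt(det(J J^T)) for one device pose."""
    jjt = jacobian @ jacobian.T
    return float(np.sqrt(max(np.linalg.det(jjt), 0.0)))

def visitation_weighted_index(jacobians, visit_counts):
    """Average manipulability over workspace zones, weighted by how often the
    simulation actually visits each zone (illustrative weighting only)."""
    w = np.array([manipulability(j) for j in jacobians])
    p = np.asarray(visit_counts, dtype=float)
    p /= p.sum()                  # normalize visit counts to frequencies
    return float(np.dot(p, w))
```

Comparing this score across candidate placements and orientations of the device, as the abstract proposes, favors positions whose frequently visited zones lie in high-manipulability regions of the workspace.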
| A Virtual Button with Tactile Feedback Using Ultrasonic Vibration | | BIBA | Full-Text | 385-393 | |
| Kaoru Tashiro; Yuta Shiokawa; Tomotake Aono; Takashi Maeno | |||
| A virtual button with tactile feedback is realized by using ultrasonic vibration with an amplitude of a few micrometers. A button-like click feeling is displayed by recreating the rapid change in reaction force arising from the buckling of a mechanical push button, utilizing the squeeze film effect. First, a click-feeling display system was constructed based on the principle by which click feeling is perceived when pushing a mechanical button. In the system, stimulation is applied to the operator at both the buckling and the restitution point. Then, through several sensory evaluation experiments, the optimum parameters of the ultrasonic vibration were determined to display a button-like click feeling. Finally, a usability test verified that the usability of the virtual button was equivalent to that of a mechanical button. | |||
| Enhancing Haptic Rendering through Predictive Collision Detection | | BIBAK | Full-Text | 394-402 | |
| Athanasios Vogiannou; Konstantinos Moustakas; Dimitrios Tzovaras; Michael G. Strintzis | |||
| This paper presents an efficient collision detection method for interactive
haptic simulations of virtual environments that consist of both static and
moving objects. The proposed method is based on a novel algorithm for
predicting the time of proximity between a pair of objects and the appropriate
employment of the calculated prediction in a complex virtual scene with
multiple objects. The user is able to interact with the virtual objects and
receive real-time haptic feedback using the PHANToM Desktop haptic device,
while the visual results are shown in the screen display. Experimental results
demonstrate the efficiency and the reliability of the presented approach
compared to state-of-the-art spatial subdivision methods, especially for
haptic rendering, where collision detection and response is a procedure of
critical importance. Keywords: collision detection and prediction; haptic interaction | |||
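For bounding spheres moving at constant velocity over a short horizon, the time of proximity can be predicted in closed form: the relative distance ||p + t v|| reaches the contact radius where a quadratic in t has its smallest non-negative root. A minimal sketch of that prediction; the paper's algorithm handles more general cases, and this shows only the sphere pair:

```python
import numpy as np

def predicted_contact_time(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 at which two constant-velocity spheres touch, else None."""
    p = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    r = r1 + r2
    a, b, c = v @ v, 2.0 * (p @ v), p @ p - r * r
    if c <= 0.0:
        return 0.0                       # already in contact
    if a == 0.0:
        return None                      # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # paths never come that close
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None       # negative root means receding
```

Scheduling exact narrow-phase tests only around such predicted times, rather than every frame for every pair, is the kind of saving that matters at haptic update rates.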
| Shape Disparity Inspection of the Textured Object and Its Notification by Overlay Projection | | BIBA | Full-Text | 405-412 | |
| Toshiyuki Amano; Hirokazu Kato | |||
| In this paper we describe the use of a projector-camera feedback system for checking the shape disparity of textured objects. Using negative feedback in the proposed system, we realized real-time shape disparity inspection and its visualization at the same time. In the experiments, we confirmed that the system can distinguish a shape disparity of 2 mm with a response time of 0.2 s. | |||
| Complemental Use of Multiple Cameras for Stable Tracking of Multiple Markers | | BIBAK | Full-Text | 413-420 | |
| Yuki Arai; Hideo Saito | |||
| In many applications of Augmented Reality (AR), rectangular markers are
tracked in real time by capturing with cameras. In this paper, we consider the
AR application in which virtual objects are displayed onto markers while the
markers and the cameras are freely moving. In this situation, the marker cannot
be tracked when the marker is occluded by other objects. We therefore
propose a method for tracking the projection matrix between the image and the
marker even when the marker is occluded, by using multiple cameras. In this method, we
transfer the projection matrix for the marker that is detected by the cameras
in order to estimate the relative projection matrix for the occluded marker.
After computing the relative projection matrices using multiple cameras, we
compute a more accurate projection matrix by using particle filter. As a
result, we can continuously track the markers even when the marker is occluded. Keywords: augmented reality; marker tracking; particle filter; multiple cameras | |||
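The transfer step can be illustrated with rigid 4x4 marker poses: a camera that sees both markers fixes their relative transform, which another camera then uses to localize the occluded one. A simplified numpy sketch using pose matrices instead of the paper's projection matrices, and omitting the particle-filter fusion:

```python
import numpy as np

def relative_marker_pose(T_camA_m1, T_camA_m2):
    """Relative transform m1 -> m2, fixed by a camera that sees both markers."""
    return np.linalg.inv(T_camA_m1) @ T_camA_m2

def occluded_marker_pose(T_camB_m1, T_m1_m2):
    """Pose of occluded marker m2 in camera B, which sees only marker m1."""
    return T_camB_m1 @ T_m1_m2

# Estimates of the occluded marker obtained from several camera pairs can
# then be fused, e.g. by the particle filter the abstract mentions, to yield
# a more accurate matrix than any single transfer.
```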
| AR Display for Observing Sports Events Based on Camera Tracking Using Pattern of Ground | | BIBAK | Full-Text | 421-430 | |
| Akihito Enomoto; Hideo Saito | |||
| We present an AR display system for observing sports events in a desktop
stadium, based on camera tracking using the pattern of the desktop ground, by
overlaying players from real sports events captured with multiple cameras. In
this paper, we take soccer as the sports event. In the proposed system, the
pose and the position of an observing camera are estimated in real time by
using a soccer field pattern on the desktop and an AR marker. The soccer field
pattern in the desktop stadium, on which the target soccer game is observed
via the AR display, is registered in advance with the real soccer stadium in
which the real soccer game is captured with multiple cameras. In this
preliminary procedure, we also estimate the camera parameters (projection
matrices) of the multiple cameras capturing the real soccer game using planar
structures in the soccer field. The positions of the soccer players and ball
are also estimated in advance based on the camera parameters. In the on-line
procedure for AR display, the textures of the players captured in the multiple
soccer videos are simply overlaid onto the AR camera video, together with CG
models generated to give additional visual information. Keywords: Augmented Reality; Free viewpoint videos; Multiple cameras | |||
| Interactive Fluid Simulation Using Augmented Reality Interface | | BIBAK | Full-Text | 431-438 | |
| Makoto Fujisawa; Hirokazu Kato | |||
| This paper presents an interactive fluid simulation system using an
augmented reality interface. The presented system uses Smoothed Particle
Hydrodynamics to simulate the behavior of liquid and adopts a particle-particle
interaction approach to calculate the surface tension, which becomes important
for small-scale liquids. Fluid-solid interaction can be calculated effectively
by representing a solid as a distance function. Therefore, the shape of the
solid can be represented precisely without increasing the number of particles.
Moreover, the system allows the solid to be manipulated directly through the
augmented reality interface. Keywords: real-time fluid simulation; surface tension; augmented reality interface | |||
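Two ingredients of the described system are standard enough to sketch: an SPH kernel-weighted density sum, and a solid boundary handled through a signed distance function that pushes penetrating particles back along the distance gradient. A minimal illustration; the kernel choice and all parameters are assumptions, not the paper's values:

```python
import numpy as np

H = 0.04                        # SPH smoothing radius (assumed)

def poly6(r2):
    """Poly6 kernel evaluated on squared distances r2 (3D normalization)."""
    return (315.0 / (64.0 * np.pi * H**9)) * np.maximum(H * H - r2, 0.0)**3

def densities(positions, mass=0.02):
    """Kernel-weighted density at every particle (O(n^2) reference version)."""
    d2 = ((positions[:, None, :] - positions[None, :, :])**2).sum(-1)
    return mass * poly6(d2).sum(axis=1)

def resolve_solid(pos, vel, sdf, sdf_grad, restitution=0.3):
    """Project a particle out of a solid given its signed distance function."""
    d = sdf(pos)
    if d < 0.0:                               # negative distance = inside solid
        n = sdf_grad(pos)
        n /= np.linalg.norm(n)                # outward surface normal
        pos = pos - d * n                     # push back onto the surface
        if np.dot(vel, n) < 0.0:              # moving into the solid
            vel = vel - (1.0 + restitution) * np.dot(vel, n) * n
    return pos, vel
```

Because the solid enters only through sdf and its gradient, its shape can be arbitrarily detailed without adding boundary particles, which is the advantage the abstract highlights.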
| Lens Accommodation to the Stereoscopic Vision on HMD | | BIBAK | Full-Text | 439-444 | |
| Satoshi Hasegawa; Masako Omori; Tomoyuki Watanabe; Kazuhiro Fujikake; Masaru Miyao | |||
| The purpose of this study was to clarify the effect on visual function of
gazing at stereoscopic images on a head mounted display (HMD). We measured
visual accommodation during stereoscopic viewing while using a HMD by using our
original instrument of measurement. The presented image was shown
3-dimensionally on an HMD set up at a visual distance of 3 cm. A spherical
object moved back and forth toward and away from the observer in a 10 sec
cycle. While the subjects were gazing at the 3D image with both eyes, the lens
accommodation in the right eye was measured and recorded. Accommodation to the
virtual objects was shown during the viewing of stereoscopic images of 3D
computer graphics, but was not shown when the images were displayed without
appropriate binocular parallax. It is suggested that stereoscopic moving images
on the HMD induced visual accommodation through the expansion and contraction
of the ciliary muscle, which synchronizes with convergence. Keywords: Binocular HMD; Stereoscopic image; 3-dimension; Visual function | |||
| Acquiring a Physical World and Serving Its Mirror World Simultaneously | | BIBAK | Full-Text | 445-453 | |
| Sengpyo Hong; Jong-gil Ahn; Heedong Ko; Jinwook Kim | |||
| A mirror world, a virtual space modeling a physical space, has recently
attracted enormous interest from the VR community. Various applications such
as Second Life, Google Earth and Virtual Earth have proven their usefulness
and potential. We introduce a novel method to build a mirror world by
acquiring environment data represented as a point cloud. Since our system
provides a streaming service of the mirror world while gathering the
environment information simultaneously, users located in an immersive display
system can navigate and interact in a mirror world reflecting the present
state of the physical world. A mobile agent, a mobile robot carrying two laser
rangefinders, is responsible for exploring the physical world and creating an
environment model. Environment modeling involves a position-tracking method to
merge scattered geometric data. An optimization method is also needed to
reduce the space complexity of the environment model. Keywords: Laser Rangefinder; Laser Scan; Environment Modeling; Mirror World | |||
| In-Situ 3D Indoor Modeler with a Camera and Self-contained Sensors | | BIBAK | Full-Text | 454-464 | |
| Tomoya Ishikawa; Kalaivani Thangamani; Masakatsu Kourogi; Andrew P. Gee; Walterio W. Mayol-Cuevas; Keechul Jung; Takeshi Kurata | |||
| We propose a 3D modeler for supporting in-situ indoor modeling effectively.
The modeler allows a user to easily create models from a single photo through
interaction techniques that take advantage of the features of indoor spaces,
together with visualization techniques. To integrate the models, the modeler
provides automatic integration functions using Visual SLAM and pedestrian
dead-reckoning (PDR), as well as interactive tools to modify the result.
Moreover, to prevent a shortage of texture images for the models, our modeler
automatically searches the 3D models created by the user for un-textured
regions and intuitively visualizes the shooting positions from which to take
photos of those regions. These functions make it possible for the user to
easily create photorealistic indoor 3D models with sufficient textures on the
fly. Keywords: 3D indoor modeling; Mixed reality; Virtualized object; Visual SLAM;
Pedestrian dead-reckoning; Self-contained sensor | |||
| Evaluation of Visually-Controlled Task Performance in Three Dimension Virtual Reality Environment | | BIBAK | Full-Text | 465-471 | |
| Chiuhsiang Joe Lin; Tien-Lung Sun; Hung-Jen Chen; Ping-Yun Cheng | |||
| The present study aims to evaluate three commercial VR display devices on
the market via a 3D Fitts' task. In addition, the Simulator Sickness
Questionnaire (SSQ) was used to assess participants' simulator sickness. Ten
participants performed repetitive pointing tasks over different conditions of
varying display devices, movement directions and indices of difficulty. Based
on the results, it seems that the 3D TV technology may not provide enough
perceptual depth to enhance movement performance in a 3D VE. The projection
display obtained the best performance and preference among the three display
devices. The HMD gave the worst result in both the experimental task and the
SSQ assessment due to the accompanying discomfort and fatigue. Keywords: Fitts' law; virtual reality; projection display; HMD; 3D TV | |||
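For reference, pointing tasks of this kind are scored with Fitts' law: the index of difficulty grows with target distance D and shrinks with target width W, and movement time is modeled as linear in it. A sketch using the Shannon formulation; the paper's exact formulation and numbers are not stated, so the values below are illustrative:

```python
import numpy as np

def index_of_difficulty(d, w):
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return np.log2(np.asarray(d, float) / np.asarray(w, float) + 1.0)

def fit_fitts(ids, movement_times):
    """Least-squares fit of MT = a + b * ID; returns intercept a and slope b."""
    b, a = np.polyfit(ids, movement_times, 1)
    return a, b

# Pooled trials for one display device (hypothetical distances/widths in m,
# movement times in s)
ids = index_of_difficulty([0.20, 0.40, 0.40], [0.05, 0.05, 0.025])
a, b = fit_fitts(ids, [0.61, 0.78, 0.95])
```

Comparing the fitted slopes b across the projection display, HMD, and 3D TV conditions quantifies the movement-performance differences the abstract reports.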
| Visual Data Mining in Immersive Virtual Environment Based on 4K Stereo Images | | BIBAK | Full-Text | 472-481 | |
| Tetsuro Ogi; Yoshisuke Tateyama; So Sato | |||
| In this study, a super high-definition immersive visual data mining
environment using a 4K stereo projector was developed. In this system, data
can be represented with high accuracy in three-dimensional space using super
high-definition stereo images, and the user can recognize the relations among
several kinds of data by integrating them in the immersive environment using
the plug-in function. The system was applied to seismic data visualization and
its effectiveness was evaluated. Keywords: Visual Data Mining; Immersive Virtual Environment; 4K Stereo Image; Seismic
Data Analysis | |||
| MR-Mirror: A Complex of Real and Virtual Mirrors | | BIBAK | Full-Text | 482-491 | |
| Hideaki Sato; Itaru Kitahara; Yuichi Ohta | |||
| MR-mirror is a novel Mixed Reality (MR) display system created by using real
and virtual mirrors. It merges real visual information reflected by a real
mirror with virtual information displayed on an electronic monitor. The user's
body is presented by its reflection in the real mirror, and virtual objects are
presented on a monitor that is visible through the real mirror. Users can
observe an MR scene without wearing devices such as a head-mounted display and
can interact with the virtual objects around them using their body motion in MR
space. We implemented a prototype MR-mirror and a demonstration system. Keywords: Mixed Reality; Virtual Mirror; Interaction | |||
| A Novel Approach to On-Site Camera Calibration and Tracking for MR Pre-visualization Procedure | | BIBAK | Full-Text | 492-502 | |
| Wataru Toishita; Yutaka Momoda; Ryuhei Tenmoku; Fumihisa Shibata; Hideyuki Tamura; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya | |||
| This paper presents a camera calibration and tracking method for a mixed
reality based pre-visualization system for filmmaking. The proposed calibration
method efficiently collects the environmental information required for
tracking, since the rough camera path and target environment are known before
actual shooting. Previous camera tracking methods using natural features are
suitable for outdoor environments; however, constructing their databases incurs
a large human cost. Our proposed method reduces the cost of the calibration
process by using fiducial markers: the markers serve as reference points, and a
feature landmark database is constructed automatically. In the shooting phase,
moreover, the speed and robustness of tracking are improved by using SIFT descriptors. Keywords: Mixed Reality; Pre-visualization; Tracking; Natural Feature | |||
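The shooting-phase tracking relies on SIFT descriptors matched against the automatically built landmark database. As a minimal illustration (not the authors' pipeline; the file names are placeholders, and an OpenCV build with SIFT support is assumed), descriptor matching with Lowe's ratio test looks like this:

```python
import cv2

sift = cv2.SIFT_create()
img_db = cv2.imread("landmark_view.png", cv2.IMREAD_GRAYSCALE)
img_live = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-D SIFT descriptors for both images.
kp_db, des_db = sift.detectAndCompute(img_db, None)
kp_live, des_live = sift.detectAndCompute(img_live, None)

# Lowe's ratio test keeps only distinctive matches; the surviving
# correspondences would then feed a camera pose estimator.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_db, des_live, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable correspondences")
```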
| Robust Hybrid Tracking with Life-Size Avatar in Mixed Reality Environment | | BIBA | Full-Text | 503-510 | |
| Tran Cong Thien Qui; Shang Ping Lee; William Russell Pensyl; Daniel Keith Jernigan | |||
| We have developed a system which enables us to track participant-observers accurately in a large area for the purpose of immersing them in a mixed reality environment. The system is robust even under challenging lighting conditions. Accurate tracking of the observer's spatial position and orientation is achieved by combining hybrid inertial sensors with computer vision techniques. We demonstrate our results by presenting life-size, animated human avatars sitting in real chairs, in a stable and low-jitter manner. The installation allows observers to walk around freely and navigate the environment while still being able to see the avatars from various angles. The project provides an exciting way for cultural and historical narratives to be presented vividly in the real, present world. | |||
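The paper combines inertial sensing with computer vision but gives no filter details here. A minimal sketch of one standard way to fuse the two (the function and the constant alpha are illustrative assumptions, not the authors' design) is a complementary filter that trusts the gyro at high frequency and the drift-free but jittery vision estimate at low frequency:

```python
def complementary_update(angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """One fusion step for a single orientation angle (radians).

    angle: previous fused estimate; gyro_rate: angular rate from the
    inertial sensor; vision_angle: absolute angle recovered by the
    vision system; dt: time step. alpha near 1 favors the gyro."""
    predicted = angle + gyro_rate * dt  # inertial propagation (drifts)
    return alpha * predicted + (1.0 - alpha) * vision_angle  # drift correction
```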
| Collaboration Design System Using Internet and Virtual Reality Technology | | BIBAK | Full-Text | 513-521 | |
| Hideki Aoyama; Rie Iida | |||
| The globalization of the manufacturing industry has spread production bases
across multiple regions and countries. Moreover, in order to offer products
that respond to consumer needs in a timely manner, it has become important to
shorten the lead time of product development. Opportunities for collaborative
work among designers and engineers located in different places are thus
increasing, and a support system that enables collaborative design work without
physically relocating designers and engineers is in strong demand to cut time
and cost. This research aims at proposing an intuitive 3-dimensional geometric
model construction method and developing a system that supports collaborative
design work at the idea-discussion stage, upstream in the design process. This
paper describes a system that can intuitively build a 3D model and support
collaborative design work for designers and engineers in different places by
mutually sharing a design object through the Internet. Keywords: Design Collaboration; Internet; Virtual Reality; 3D Modeling; Industry
Product Design; Basic Design; CAD | |||
| Evaluating the Potential of Cognitive Rehabilitation with Mixed Reality | | BIBAK | Full-Text | 522-531 | |
| Nicholas Beato; Daniel P. Mapes; Charles E. Hughes; Cali M. Fidopiastis; Eileen M. Smith | |||
| We describe the development and use of a mixed reality (MR) testbed to
evaluate potential scenarios that may alleviate performance deficits in
subjects who may be experiencing cognitive deficiencies, such as posttraumatic
stress disorder (PTSD). The system blends real world sensory data with
synthetic enhancements in the visual and aural domains. It captures user
actions (movement, view direction, environment interaction, and task
performance) and psychophysical states (engagement, workload, and skin
conductivity) during an MR-enabled experience in order to determine task
performance in the context of a variety of stimuli (visual and aural
distracters in time-constrained activities). The goal is to discover triggers
that affect stress levels and task performance in order to develop
individualized plans for personal improvement. Keywords: Mixed reality; post traumatic stress disorder; psychophysical sensing;
medical rehabilitation; cognitive rehabilitation | |||
| Augmented Reality Video See-through HMD Oriented to Product Design Assessment | | BIBAK | Full-Text | 532-541 | |
| Giandomenico Caruso; Umberto Cugini | |||
| Current state-of-the-art technology offers various solutions for developing
virtual prototyping applications that also allow interaction with the real
environment. In particular, Augmented Reality (AR) technologies include
tracking systems, stereoscopic visualization systems, photorealistic rendering
tools, and high-resolution video overlay systems that allow us to create
various types of applications in which the virtual prototype is contextualized
within the real world. One application domain is product design: AR
technologies allow designers to perform evaluation tests on the virtual
prototype of an industrial product without the need to produce a physical
prototype. This paper describes the development of a new high-performing Video
See-Through Head Mounted Display (VST-HMD) based on stereoscopic
visualization. The developed display system overcomes some issues concerning
the correct visualization of virtual objects that are close to the user's point
of view. The paper also presents the results of tests of an AR application
developed for product design assessment. Keywords: Augmented Reality; Head Mounted Display; Video See-Through HMD; Design
Assessment | |||
| Mixed Reality Neurosurgical Microscope for Training and Intra-operative Purposes | | BIBAK | Full-Text | 542-549 | |
| Alessandro De Mauro; Jörg Raczkowsky; Marc Eric Halatsch; Heinz Wörn | |||
| In recent years, neurosurgery has been deeply influenced by new
technologies. It requires fine techniques aimed at treatments that are
minimally invasive, though often still traumatic. The precision of the surgical
gesture is related both to the experience of the surgeon and to the accuracy of
the available technological instruments. Computer Aided Surgery (CAS) can offer
several benefits for patient safety. From a technological point of view, we
observe the use of Virtual Reality (VR) for surgeon training and Augmented
Reality (AR) as an intra-operative aid for treatments. This paper presents a
prototype mixed reality system for neurosurgical interventions, embedded in a
real surgical microscope, for pre- and intra-operative purposes. Its main
purposes are the realistic simulation (visual and haptic) of spatula palpation
of low-grade glioma and the stereoscopic AR visualization of relevant 3D data
for safe surgical movements in image-guided interventions. Keywords: virtual and augmented reality; physical modeling; haptic feedback; training
systems; neurosurgery | |||
| A Real-Virtual Mapping Method for Mechanical Product Assembly Process Planning in Virtual Assembly Environment | | BIBAK | Full-Text | 550-559 | |
| Xiumin Fan; Feng Gao; Hongmin Zhu; Dianliang Wu; Qi Yin | |||
| In order to realize assembly process planning in a virtual reality
environment, an assembly process plan generation method based on real-virtual
mapping of basic motion sequences is proposed. Based on an analysis of current
assembly process content from enterprises, assembly process information is
modeled, and standard assembly operations and basic assembly motions are
defined. A mapping matrix between standard assembly operations and basic
assembly motions is set up. A method to capture the basic motion sequence
performed by a real user during the virtual assembly process is put forward. A
prototype system is developed based on these research results, and its
functions are demonstrated through the assembly process of automobile engine
components. The results show that this assembly process generation method based
on real-virtual mapping of motion sequences is feasible, and it provides a new
approach to applying virtual assembly techniques to the product manufacturing
process. Keywords: Virtual reality; Virtual assembly; Assembly process planning; Standard
assembly operation; Assembly basic motion; Real-virtual mapping | |||
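The core data structure here is the mapping matrix between standard assembly operations and basic assembly motions. The abstract does not list the actual operations or motions, so the catalog below is entirely hypothetical; it only illustrates how such a binary matrix could map an observed motion sequence back to candidate operations:

```python
import numpy as np

# Hypothetical catalog: rows = standard operations, cols = basic motions.
operations = ["insert_bolt", "tighten_nut", "place_cover"]
motions = ["reach", "grasp", "move", "position", "release"]
M = np.array([
    [1, 1, 1, 1, 1],   # insert_bolt
    [1, 1, 0, 1, 1],   # tighten_nut
    [1, 1, 1, 1, 1],   # place_cover
])

def candidate_operations(motion_sequence):
    """Return operations whose required motions all appear in the
    observed sequence captured from the user in the virtual environment."""
    observed = np.array([int(m in motion_sequence) for m in motions])
    return [op for op, row in zip(operations, M) if np.all(row <= observed)]

print(candidate_operations(["reach", "grasp", "position", "release"]))
# -> ['tighten_nut']: the only operation not requiring a "move" motion
```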
| Rebalancing the Visual System of People with Amblyopia "Lazy Eye" by Using HMD and Image Enhancement | | BIBAK | Full-Text | 560-565 | |
| Sina Fateh; Claude Speeg | |||
| Amblyopia, or "lazy eye," occurs when, during early childhood, visual
information from one eye is absent or poorly transmitted to the brain. This
visual deprivation causes poor vision, and the affected eye gradually becomes
weaker (amblyopic) relative to the other eye, which becomes stronger. The
visual imbalance is caused by the brain's preference for the strong eye. To
restore vision, conventional treatments use occlusion and vision penalization
of the strong eye to force the brain to use the amblyopic eye. Conventional
treatments are regarded as effective in young children but impractical in older
subjects, and patient compliance remains the main cause of treatment failure.
This presentation describes our preliminary efforts to develop a convenient and
viable binocular head mounted display (HMD) interface. The goal is to rebalance
vision by using simultaneous enhancement/attenuation image adjustment: the
image presented to the normal eye is attenuated while the image presented to
the amblyopic eye is enhanced. During this operation the user is engaged in
recreational activities such as watching movies, using the Internet, or playing
video games. Keywords: Binocular HMD; amblyopia; vision restoration; enhancement/attenuation;
visual rebalance; compliance | |||
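The proposed rebalancing is a simultaneous per-eye image adjustment: attenuate what the strong eye sees and enhance what the amblyopic eye sees. The abstract gives no concrete transform, so the gains and the plain brightness scaling below are assumptions purely for illustration:

```python
import numpy as np

def rebalance(frame, gain_strong=0.6, gain_weak=1.4):
    """Toy per-eye adjustment for a binocular HMD. frame is an 8-bit
    RGB array; returns (image for the strong eye, image for the
    amblyopic eye). Real enhancement might adjust contrast instead."""
    img = frame.astype(np.float32)
    strong_view = np.clip(img * gain_strong, 0, 255).astype(np.uint8)
    weak_view = np.clip(img * gain_weak, 0, 255).astype(np.uint8)
    return strong_view, weak_view
```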
| A Two-User Framework for Rapid Immersive Full Cycle Product Customization | | BIBAK | Full-Text | 566-575 | |
| Maxim Foursa; David d'Angelo; Gerold Wesche; Manfred Bogen | |||
| In this paper we present an approach to full-cycle product customization
in Virtual Environments (VE). The main goal of our work is to develop an
integrated immersive framework that allows configuring products from a great
number of parts. Our framework supports the collaborative work of two users and
operates both on desktop computers and in immersive environments. The framework
is integrated into a manufacturing environment, thus making the immediate
production of customized products possible. The integrated modules of the
framework allow importing CAD files directly into the VE, creating new objects
on the basis of constructive solid geometry principles, attaching virtual
connectivity-describing attributes to parts, guiding the assembly of parts, and
comprehensively analyzing products. In order to identify the influence of
immersion and collaboration on performance in assembly and manipulation tasks
in VE, we performed a quantitative assessment of user performance, which we
also describe in the paper. Keywords: Virtual Environment; Mass Customization; Product Development | |||
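One of the framework's modules creates new objects "on the basis of constructive solid geometry principles." The paper does not describe its modeler internals; as a compact, hypothetical illustration of CSG composition, signed distance functions make the boolean operators one-liners (min for union, max for intersection, max with negation for difference):

```python
import numpy as np

def sphere(center, radius):
    return lambda p: np.linalg.norm(p - center) - radius

def box(center, half_size):
    def sdf(p):
        q = np.abs(p - center) - half_size
        return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)
    return sdf

# CSG booleans on signed distances (negative = inside the solid).
union     = lambda a, b: lambda p: min(a(p), b(p))
intersect = lambda a, b: lambda p: max(a(p), b(p))
subtract  = lambda a, b: lambda p: max(a(p), -b(p))

part = subtract(box(np.zeros(3), np.ones(3)), sphere(np.zeros(3), 1.1))
print(part(np.array([1.0, 0.0, 0.0])))  # positive: carved away by the sphere
```

Production frameworks like the one described typically operate on CAD boundary representations instead, but the boolean semantics are the same.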
| A Mixed Reality-Based Assembly Verification and Training Platform | | BIBAK | Full-Text | 576-585 | |
| Shiqi Li; Tao Peng; Chi Xu; Yan Fu; Yang Liu | |||
| Mixed reality (MR) based human-machine interaction provides a seamless
interface between the user and the application environment, combining the
convenient interaction of virtual reality with the strong realism of augmented
reality. In this paper, MR is applied in the context of the industrial assembly
process, and an MR-based assembly verification and training platform is
proposed. In the MR-based assembly environment, virtual models, real images,
and augmented information are jointly displayed in the assembly scene,
complemented by multiple video display windows that show the real assembly
scene from different viewing angles. Additionally, constraint proxies
figuratively reconstruct the parts' constraint relationships in the MR
environment and avoid the complex computation of constraint matching. By using
a constraint-guided virtual hand for assembly, an effective and realistic
assembly process experience is provided to the user. Keywords: Mixed reality; Virtual assembly; Assembly verification; Assembly training | |||
| Trial of Formulating Affordance Features for Product Design | | BIBAK | Full-Text | 586-595 | |
| Tamotsu Murakami; Mariko Higuchi; Hideyoshi Yanagisawa | |||
| The aim of this research is to formulate relationships between the
geometrical attributes of objects and affordance for operations as affordance
features. If affordance features are well formulated, then they will allow
designers to strengthen intended affordances for higher usability of products
or to systematically examine and achieve product or interface shapes with both
high usability and aesthetic appeal or novelty. In this paper we show some
affordance features and their relationships with quantitative conditions
obtained from an analysis of user tests involving sample objects of various
shapes. Keywords: Affordance; feature; product design; usability; emotional design | |||
| An Empirical Study of Assembly Error Detection Using an Augmented Vision System | | BIBAK | Full-Text | 596-604 | |
| Barbara Odenthal; Marcel Ph. Mayer; Wolfgang Kabuß; Bernhard Kausch; Christopher M. Schlick | |||
| Within the Cluster of Excellence "Integrative Production Technology for
High-Wage Countries" at RWTH Aachen University, a numerical control unit and
its ergonomic human-machine interface are being developed for a robotized
production unit. In order to cope with such novel systems, the human operator
will have to meet new challenges regarding the work requirements. Therefore, a
first prototype of an augmented vision system has been developed to assist the
human operator with the task of detecting and identifying errors in an assembly
object. Laboratory tests have been performed to find a preferable way to
display the information. Keywords: Augmented Reality; Assembly | |||
| Design and Implementation of Augmented Reality Environment for Complex Anatomy Training: Inguinal Canal Case Study | | BIBAK | Full-Text | 605-614 | |
| Sophia Sakellariou; Ben M. Ward; Vassilis Charissis; David Chanock; Paul Anderson | |||
| Adhering to contemporary requirements for reducing the cadaveric training of
medical trainees, we have developed a prototype augmented reality environment
for investigating complex anatomical sections. A human 3D model has been
implemented in order to facilitate educational tactics presented in a Virtual
Reality (VR) environment. Opting for a sophisticated approach to interaction,
the interface elements are based on simplified visual representations of real
anatomical elements and can be operated through haptic devices and surround
auditory cues. This paper discusses the challenges involved in developing the
augmented reality environment and its HCI design, introduces the visual
components of the interface, and presents the outcome of a preliminary
evaluation of the proposed VR training method with a group of twelve medical
doctors. The paper concludes with a tentative plan of future work that aims to
expand the context and interactivity of the system so as to enable trainees to
rehearse surgical methods in a simulated VR environment. Keywords: Virtual Reality; Haptics; HCI; Inguinal Canal; Medical Training | |||
| The Use of Virtual Reality in the Treatment of Posttraumatic Stress Disorder (PTSD) | | BIBAK | Full-Text | 615-624 | |
| Deanne C. Simms; Susan O'Donnell; Heather Molyneaux | |||
| Background. Interest in the treatment of PTSD is increasing with concerns
about the psychological effects of war on troops. Objective. We performed a
comprehensive literature review on virtual reality (VR) for treating
combat-related PTSD. Methods. Canada's primary institute for scientific and
technical information (NRC-CISTI) performed the initial literature search in
2008. Of 296 items that met the inclusion criteria, 20 pertained to VR in the
treatment of mental health. An additional 20 more recent items were added in
2009, for a total of 40 items reviewed. Of those, 6 empirical studies involved
patients with PTSD [1, 2, 3, 4, 5, 6]. Results. VR exposure therapy (VRET) has
been successfully used to treat anxiety and phobia disorders, including PTSD
[7, 8]. VRET may be particularly suitable for clients with combat-related PTSD,
as it aids exposure treatment for clients who are often unable to engage in
traditional therapy [9, 10]. Future research should include randomized,
controlled studies employing large samples. Keywords: Virtual reality; Posttraumatic stress disorder; treatment | |||
| Effect of an Eyesight Recovering Stereoscopic Movie System on Visual Acuity and Asthenopia | | BIBAK | Full-Text | 625-632 | |
| Akihiro Sugiura; Tetsuya Yamamoto; Hiroki Takada; Masaru Miyao | |||
| Relaxing the contracted muscles involved in focus-adjustment around the
eyeball, such as the ciliary body and extraocular muscles, is expected to
improve pseudomyopia. This hypothesis has led to the development of Dr.REX --
an apparatus for recovering eyesight by using a stereoscopic video. In this
study, we verified the effects of this apparatus on visual acuity and
asthenopia in the short and medium terms. Thirty-two myopic Japanese students
participated in this study. We compared the severity of asthenopia in subjects
who used Dr.REX and in those who performed close work on video display
terminals (VDTs). We determined that the use of the apparatus improved visual
acuity in both the short and medium terms. In addition, asthenopia seemed to be
less severe in subjects who used Dr.REX than in those who performed close work
on VDTs. Keywords: Pseudomyopia; visual acuity; asthenopia; stereoscopic video; visually
induced motion sickness (VIMS) | |||
| Augmented Reality System for Dental Implant Surgery | | BIBAK | Full-Text | 633-638 | |
| Satoshi Yamaguchi; Takafumi Ohtani; Hirofumi Yatani; Taiji Sohmura | |||
| Recently, computer-assisted navigation systems have been developed to
realize safe and precise surgery. In conventional systems, surgeons feel
anxious intra-operatively because they have to watch a surgical monitor while
operating instruments in the oral cavity. The objective of this study is to
develop a novel dental implant navigation system by combining a retinal
projection head mounted display (RPHMD) with augmented reality techniques, so
that pre-operative simulation images can be directly overlaid onto the
surgeon's real view. In this paper, we propose an image overlay procedure based
on the RPHMD and verify its accuracy. Keywords: Dental Implant Surgery; Augmented Reality; Surgical Navigation | |||
| A Feasible Tracking Method of Augmented Reality for Supporting Fieldwork of Nuclear Power Plant | | BIBAK | Full-Text | 639-646 | |
| Weida Yan; Hirotake Ishii; Hiroshi Shimoda; Masanori Izumi | |||
| For the application of augmented reality to plant maintenance work,
real-time tracking with high accuracy is necessary. This study focuses on the
tracking method in vision-based SLAM. In nuclear power plants (NPPs), line
features are abundant, and they are detected more easily and reliably than
point features. Line features offer more information than points, but their
tracking is more complex. In this study, line features are used as landmarks
for tracking. The 3D lines are represented in Plücker coordinates. A Gaussian
sum approximates each feature's initial state and is updated as new
observations are gathered by the camera. An extended Kalman filter is then
adopted for the SLAM approach. Keywords: Augmented Reality; Tracking; Line Feature; Plücker coordinates | |||
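For reference, since the abstract leans on it: a 3D line landmark in Plücker coordinates is the pair of direction and moment vectors, which for any two points a, b on the line are

```latex
% Pl\"ucker representation of the 3D line through points a and b:
\begin{align}
  \mathbf{d} &= \mathbf{b} - \mathbf{a} &&\text{(direction)}\\
  \mathbf{m} &= \mathbf{a} \times \mathbf{b} &&\text{(moment; equals } \mathbf{a} \times \mathbf{d}\text{)}\\
  \mathcal{L} &= (\mathbf{d} : \mathbf{m}), \qquad \mathbf{d} \cdot \mathbf{m} = 0
\end{align}
```

The bilinear constraint d · m = 0 is why the six numbers encode only the four degrees of freedom of a line; filter updates typically have to re-project the state onto this constraint surface.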